2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 * Modifications made to the original version for mono:
16 * - added PROT_EXEC to MMAP_PROT
17 * - added PAGE_EXECUTE_READWRITE to the win32mmap and win32direct_mmap
18 * - a large portion of the functions is #ifdef'ed out to make the native code smaller
22 #define USE_DL_PREFIX 1
24 /* Use mmap for allocating memory */
25 #define HAVE_MORECORE 0
31 This library is all in one file to simplify the most common usage:
32 ftp it, compile it (-O3), and link it into another program. All of
33 the compile-time options default to reasonable values for use on
34 most platforms. You might later want to step through various
35 compile-time and dynamic tuning options.
37 For convenience, an include file for code using this malloc is at:
38 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
39 You don't really need this .h file unless you call functions not
40 defined in your system include files. The .h file contains only the
41 excerpts from this file needed for using this malloc on ANSI C/C++
42 systems, so long as you haven't changed compile-time options about
43 naming and tuning parameters. If you do, then you can create your
44 own malloc.h that does include all settings by cutting at the point
45 indicated below. Note that you may already by default be using a C
46 library containing a malloc that is based on some version of this
47 malloc (for example in linux). You might still want to use the one
48 in this file to customize settings or to avoid overheads associated
49 with library versions.
53 Supported pointer/size_t representation: 4 or 8 bytes
54 size_t MUST be an unsigned type of the same width as
55 pointers. (If you are using an ancient system that declares
56 size_t as a signed type, or need it to be a different width
57 than pointers, you can use a previous release of this malloc
58 (e.g. 2.7.2) supporting these.)
60 Alignment: 8 bytes (default)
61 This suffices for nearly all current machines and C compilers.
62 However, you can define MALLOC_ALIGNMENT to be wider than this
63 if necessary (up to 128 bytes), at the expense of using more space.
65 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
66 8 or 16 bytes (if 8byte sizes)
67 Each malloced chunk has a hidden word of overhead holding size
68 and status information, and an additional cross-check word
69 if FOOTERS is defined.
71 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
72 8-byte ptrs: 32 bytes (including overhead)
74 Even a request for zero bytes (i.e., malloc(0)) returns a
75 pointer to something of the minimum allocatable size.
76 The maximum overhead wastage (i.e., the number of extra bytes
77 allocated beyond those requested in malloc) is less than or equal
78 to the minimum size, except for requests >= mmap_threshold that
79 are serviced via mmap(), where the worst case wastage is about
80 32 bytes plus the remainder from a system page (the minimal
81 mmap unit); typically 4096 or 8192 bytes.
83 Security: static-safe; optionally more or less
84 The "security" of malloc refers to the ability of malicious
85 code to accentuate the effects of errors (for example, freeing
86 space that is not currently malloc'ed or overwriting past the
87 ends of chunks) in code that calls malloc. This malloc
88 guarantees not to modify any memory locations below the base of
89 the heap, i.e., static variables, even in the presence of usage
90 errors. The routines additionally detect most improper frees
91 and reallocs. All this holds as long as the static bookkeeping
92 for malloc itself is not corrupted by some other means. This
93 is only one aspect of security -- these checks do not, and
94 cannot, detect all possible programming errors.
96 If FOOTERS is defined nonzero, then each allocated chunk
97 carries an additional check word to verify that it was malloced
98 from its space. These check words are the same within each
99 execution of a program using malloc, but differ across
100 executions, so externally crafted fake chunks cannot be
101 freed. This improves security by rejecting frees/reallocs that
102 could corrupt heap memory, in addition to the checks preventing
103 writes to statics that are always on. This may further improve
104 security at the expense of time and space overhead. (Note that
105 FOOTERS may also be worth using with MSPACES.)
107 By default detected errors cause the program to abort (calling
108 "abort()"). You can override this to instead proceed past
109 errors by defining PROCEED_ON_ERROR. In this case, a bad free
110 has no effect, and a malloc that encounters a bad address
111 caused by user overwrites will ignore the bad address by
112 dropping pointers and indices to all known memory. This may
113 be appropriate for programs that should continue if at all
114 possible in the face of programming errors, although they may
115 run out of memory because dropped memory is never reclaimed.
117 If you don't like either of these options, you can define
118 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
119 else. And if you are sure that your program using malloc has
120 no errors or vulnerabilities, you can define INSECURE to 1,
121 which might (or might not) provide a small performance improvement.
123 Thread-safety: NOT thread-safe unless USE_LOCKS defined
124 When USE_LOCKS is defined, each public call to malloc, free,
125 etc is surrounded with either a pthread mutex or a win32
126 spinlock (depending on WIN32). This is not especially fast, and
127 can be a major bottleneck. It is designed only to provide
128 minimal protection in concurrent environments, and to provide a
129 basis for extensions. If you are using malloc in a concurrent
130 program, consider instead using ptmalloc, which is derived from
131 a version of this malloc. (See http://www.malloc.de).
133 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
134 This malloc can use unix sbrk or any emulation (invoked using
135 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
136 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
137 memory. On most unix systems, it tends to work best if both
138 MORECORE and MMAP are enabled. On Win32, it uses emulations
139 based on VirtualAlloc. It also uses common C library functions
140 like memset.
142 Compliance: I believe it is compliant with the Single Unix Specification
143 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
144 others as well.
146 * Overview of algorithms
148 This is not the fastest, most space-conserving, most portable, or
149 most tunable malloc ever written. However it is among the fastest
150 while also being among the most space-conserving, portable and
151 tunable. Consistent balance across these factors results in a good
152 general-purpose allocator for malloc-intensive programs.
154 In most ways, this malloc is a best-fit allocator. Generally, it
155 chooses the best-fitting existing chunk for a request, with ties
156 broken in approximately least-recently-used order. (This strategy
157 normally maintains low fragmentation.) However, for requests less
158 than 256 bytes, it deviates from best-fit when there is not an
159 exactly fitting available chunk by preferring to use space adjacent
160 to that used for the previous small request, as well as by breaking
161 ties in approximately most-recently-used order. (These enhance
162 locality of series of small allocations.) And for very large requests
163 (>= 256Kb by default), it relies on system memory mapping
164 facilities, if supported. (This helps avoid carrying around and
165 possibly fragmenting memory used only for large chunks.)
167 All operations (except malloc_stats and mallinfo) have execution
168 times that are bounded by a constant factor of the number of bits in
169 a size_t, not counting any clearing in calloc or copying in realloc,
170 or actions surrounding MORECORE and MMAP that have times
171 proportional to the number of non-contiguous regions returned by
172 system allocation routines, which is often just 1.
174 The implementation is not very modular and seriously overuses
175 macros. Perhaps someday all C compilers will do as good a job
176 inlining modular code as can now be done by brute-force expansion,
177 but now, enough of them seem not to.
179 Some compilers issue a lot of warnings about code that is
180 dead/unreachable only on some platforms, and also about intentional
181 uses of negation on unsigned types. All known cases of each can be
182 silenced.
184 For a longer but out of date high-level description, see
185 http://gee.cs.oswego.edu/dl/html/malloc.html
188 If MSPACES is defined, then in addition to malloc, free, etc.,
189 this file also defines mspace_malloc, mspace_free, etc. These
190 are versions of malloc routines that take an "mspace" argument
191 obtained using create_mspace, to control all internal bookkeeping.
192 If ONLY_MSPACES is defined, only these versions are compiled.
193 So if you would like to use this allocator for only some allocations,
194 and your system malloc for others, you can compile with
195 ONLY_MSPACES and then do something like...
196 static mspace mymspace = create_mspace(0,0); // for example
197 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
199 (Note: If you only need one instance of an mspace, you can instead
200 use "USE_DL_PREFIX" to relabel the global malloc.)
202 You can similarly create thread-local allocators by storing
203 mspaces as thread-locals. For example:
204 static __thread mspace tlms = 0;
205 void* tlmalloc(size_t bytes) {
206 if (tlms == 0) tlms = create_mspace(0, 0);
207 return mspace_malloc(tlms, bytes);
208 }
209 void tlfree(void* mem) { mspace_free(tlms, mem); }
211 Unless FOOTERS is defined, each mspace is completely independent.
212 You cannot allocate from one and free to another (although
213 conformance is only weakly checked, so usage errors are not always
214 caught). If FOOTERS is defined, then each chunk carries around a tag
215 indicating its originating mspace, and frees are directed to their
216 originating spaces.
218 ------------------------- Compile-time options ---------------------------
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
222 to size_t precision unless they are explicitly cast.
224 WIN32 default: defined if _WIN32 defined
225 Defining WIN32 sets up defaults for MS environment and compilers.
226 Otherwise defaults are for unix.
228 MALLOC_ALIGNMENT default: (size_t)8
229 Controls the minimum alignment for malloc'ed chunks. It must be a
230 power of two and at least 8, even on machines for which smaller
231 alignments would suffice. It may be defined as larger than this
232 though. Note however that code and data structures are optimized for
233 the case of 8-byte alignment.
235 MSPACES default: 0 (false)
236 If true, compile in support for independent allocation spaces.
237 This is only supported if HAVE_MMAP is true.
239 ONLY_MSPACES default: 0 (false)
240 If true, only compile in mspace versions, not regular versions.
242 USE_LOCKS default: 0 (false)
243 Causes each call to each public routine to be surrounded with
244 pthread or WIN32 mutex lock/unlock. (If set true, this can be
245 overridden on a per-mspace basis for mspace versions.)
247 FOOTERS default: 0
248 If true, provide extra checking and dispatching by placing
249 information in the footers of allocated chunks. This adds
250 space and time overhead.
252 INSECURE default: 0
253 If true, omit checks for usage errors and heap space overwrites.
255 USE_DL_PREFIX default: NOT defined
256 Causes compiler to prefix all public routines with the string 'dl'.
257 This can be useful when you only want to use this malloc in one part
258 of a program, using your regular system malloc elsewhere.
260 ABORT default: defined as abort()
261 Defines how to abort on failed checks. On most systems, a failed
262 check cannot die with an "assert" or even print an informative
263 message, because the underlying print routines in turn call malloc,
264 which will fail again. Generally, the best policy is to simply call
265 abort(). It's not very useful to do more than this because many
266 errors due to overwriting will show up as address faults (null, odd
267 addresses etc) rather than malloc-triggered checks, so will also
268 abort. Also, most compilers know that abort() does not return, so
269 can better optimize code conditionally calling it.
271 PROCEED_ON_ERROR default: defined as 0 (false)
272 Controls whether detected bad addresses cause them to be bypassed
273 rather than aborting. If set, detected bad arguments to free and
274 realloc are ignored. And all bookkeeping information is zeroed out
275 upon a detected overwrite of freed heap space, thus losing the
276 ability to ever return it from malloc again, but enabling the
277 application to proceed. If PROCEED_ON_ERROR is defined, the
278 static variable malloc_corruption_error_count is compiled in
279 and can be examined to see if errors have occurred. This option
280 generates slower code than the default abort policy.
282 DEBUG default: NOT defined
283 The DEBUG setting is mainly intended for people trying to modify
284 this code or diagnose problems when porting to new platforms.
285 However, it may also be able to better isolate user errors than just
286 using runtime checks. The assertions in the check routines spell
287 out in more detail the assumptions and invariants underlying the
288 algorithms. The checking is fairly extensive, and will slow down
289 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
290 set will attempt to check every non-mmapped allocated and free chunk
291 in the course of computing the summaries.
293 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
294 Debugging assertion failures can be nearly impossible if your
295 version of the assert macro causes malloc to be called, which will
296 lead to a cascade of further failures, blowing the runtime stack.
297 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
298 which will usually make debugging easier.
300 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
301 The action to take before "return 0" when malloc fails to be able to
302 return memory because there is none available.
304 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
305 True if this system supports sbrk or an emulation of it.
307 MORECORE default: sbrk
308 The name of the sbrk-style system routine to call to obtain more
309 memory. See below for guidance on writing custom MORECORE
310 functions. The type of the argument to sbrk/MORECORE varies across
311 systems. It cannot be size_t, because it supports negative
312 arguments, so it is normally the signed type of the same width as
313 size_t (sometimes declared as "intptr_t"). It doesn't much matter
314 though. Internally, we only call it with arguments less than half
315 the max value of a size_t, which should work across all reasonable
316 possibilities, although sometimes generating compiler warnings. See
317 near the end of this file for guidelines for creating a custom
318 MORECORE function.
320 MORECORE_CONTIGUOUS default: 1 (true)
321 If true, take advantage of fact that consecutive calls to MORECORE
322 with positive arguments always return contiguous increasing
323 addresses. This is true of unix sbrk. It does not hurt too much to
324 set it true anyway, since malloc copes with non-contiguities.
325 Setting it false when MORECORE is definitely non-contiguous, though,
326 saves the time and possibly wasted space needed to discover this.
328 MORECORE_CANNOT_TRIM default: NOT defined
329 True if MORECORE cannot release space back to the system when given
330 negative arguments. This is generally necessary only if you are
331 using a hand-crafted MORECORE function that cannot handle negative
332 arguments.
334 HAVE_MMAP default: 1 (true)
335 True if this system supports mmap or an emulation of it. If so, and
336 HAVE_MORECORE is not true, MMAP is used for all system
337 allocation. If set and HAVE_MORECORE is true as well, MMAP is
338 primarily used to directly allocate very large blocks. It is also
339 used as a backup strategy in cases where MORECORE fails to provide
340 space from system. Note: A single call to MUNMAP is assumed to be
341 able to unmap memory that may have been allocated using multiple calls
342 to MMAP, so long as they are adjacent.
344 HAVE_MREMAP default: 1 on linux, else 0
345 If true realloc() uses mremap() to re-allocate large blocks and
346 extend or shrink allocation spaces.
348 MMAP_CLEARS default: 1 on unix
349 True if mmap clears memory so calloc doesn't need to. This is true
350 for standard unix mmap using /dev/zero.
352 USE_BUILTIN_FFS default: 0 (i.e., not used)
353 Causes malloc to use the builtin ffs() function to compute indices.
354 Some compilers may recognize and intrinsify ffs to be faster than the
355 supplied C version. Also, the case of x86 using gcc is special-cased
356 to an asm instruction, so is already as fast as it can be, and so
357 this setting has no effect. (On most x86s, the asm version is only
358 slightly faster than the C version.)
360 malloc_getpagesize default: derive from system includes, or 4096.
361 The system page size. To the extent possible, this malloc manages
362 memory from the system in page-size units. This may be (and
363 usually is) a function rather than a constant. This is ignored
364 if WIN32, where page size is determined using getSystemInfo during
365 initialization.
367 USE_DEV_RANDOM default: 0 (i.e., not used)
368 Causes malloc to use /dev/random to initialize secure magic seed for
369 stamping footers. Otherwise, the current time is used.
371 NO_MALLINFO default: 0
372 If defined, don't compile "mallinfo". This can be a simple way
373 of dealing with mismatches between system declarations and
374 your own.
376 MALLINFO_FIELD_TYPE default: size_t
377 The type of the fields in the mallinfo struct. This was originally
378 defined as "int" in SVID etc, but is more usefully defined as
379 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
381 REALLOC_ZERO_BYTES_FREES default: not defined
382 This should be set if a call to realloc with zero bytes should
383 be the same as a call to free. Some people think it should. Otherwise,
384 since this malloc returns a unique pointer for malloc(0), so does
385 realloc(p, 0).
387 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
388 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
389 LACKS_STDLIB_H default: NOT defined unless on WIN32
390 Define these if your system does not have these header files.
391 You might need to manually insert some of the declarations they provide.
393 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
394 system_info.dwAllocationGranularity in WIN32,
395 otherwise 64K.
396 Also settable using mallopt(M_GRANULARITY, x)
397 The unit for allocating and deallocating memory from the system. On
398 most systems with contiguous MORECORE, there is no reason to
399 make this more than a page. However, systems with MMAP tend to
400 either require or encourage larger granularities. You can increase
401 this value to prevent system allocation functions from being called so
402 often, especially if they are slow. The value must be at least one
403 page and must be a power of two. Setting to 0 causes initialization
404 to either page size or win32 region size. (Note: In previous
405 versions of malloc, the equivalent of this option was called
406 TOP_PAD.)
408 DEFAULT_TRIM_THRESHOLD default: 2MB
409 Also settable using mallopt(M_TRIM_THRESHOLD, x)
410 The maximum amount of unused top-most memory to keep before
411 releasing via malloc_trim in free(). Automatic trimming is mainly
412 useful in long-lived programs using contiguous MORECORE. Because
413 trimming via sbrk can be slow on some systems, and can sometimes be
414 wasteful (in cases where programs immediately afterward allocate
415 more large chunks) the value should be high enough so that your
416 overall system performance would improve by releasing this much
417 memory. As a rough guide, you might set to a value close to the
418 average size of a process (program) running on your system.
419 Releasing this much memory would allow such a process to run in
420 memory. Generally, it is worth tuning trim thresholds when a
421 program undergoes phases where several large chunks are allocated
422 and released in ways that can reuse each other's storage, perhaps
423 mixed with phases where there are no such chunks at all. The trim
424 value must be greater than page size to have any useful effect. To
425 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
426 some people use of mallocing a huge space and then freeing it at
427 program startup, in an attempt to reserve system memory, doesn't
428 have the intended effect under automatic trimming, since that memory
429 will immediately be returned to the system.
431 DEFAULT_MMAP_THRESHOLD default: 256K
432 Also settable using mallopt(M_MMAP_THRESHOLD, x)
433 The request size threshold for using MMAP to directly service a
434 request. Requests of at least this size that cannot be allocated
435 using already-existing space will be serviced via mmap. (If enough
436 normal freed space already exists it is used instead.) Using mmap
437 segregates relatively large chunks of memory so that they can be
438 individually obtained and released from the host system. A request
439 serviced through mmap is never reused by any other request (at least
440 not directly; the system may just so happen to remap successive
441 requests to the same locations). Segregating space in this way has
442 the benefits that: Mmapped space can always be individually released
443 back to the system, which helps keep the system level memory demands
444 of a long-lived program low. Also, mapped memory doesn't become
445 `locked' between other chunks, as can happen with normally allocated
446 chunks, which means that even trimming via malloc_trim would not
447 release them. However, it has the disadvantage that the space
448 cannot be reclaimed, consolidated, and then used to service later
449 requests, as happens with normal chunks. The advantages of mmap
450 nearly always outweigh disadvantages for "large" chunks, but the
451 value of "large" may vary across systems. The default is an
452 empirically derived value that works well in most systems. You can
453 disable mmap by setting to MAX_SIZE_T.
463 #define WIN32_LEAN_AND_MEAN
466 #define HAVE_MORECORE 0
467 #define LACKS_UNISTD_H
468 #define LACKS_SYS_PARAM_H
469 #define LACKS_SYS_MMAN_H
470 #define LACKS_STRING_H
471 #define LACKS_STRINGS_H
472 #define LACKS_SYS_TYPES_H
473 #define LACKS_ERRNO_H
474 #define MALLOC_FAILURE_ACTION
475 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
478 #if defined(DARWIN) || defined(_DARWIN)
479 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
480 #ifndef HAVE_MORECORE
481 #define HAVE_MORECORE 0
483 #endif /* HAVE_MORECORE */
486 #ifndef LACKS_SYS_TYPES_H
487 #include <sys/types.h> /* For size_t */
488 #endif /* LACKS_SYS_TYPES_H */
490 /* The maximum possible size_t value has all bits set */
491 #define MAX_SIZE_T (~(size_t)0)
494 #define ONLY_MSPACES 0
495 #endif /* ONLY_MSPACES */
499 #else /* ONLY_MSPACES */
501 #endif /* ONLY_MSPACES */
503 #ifndef MALLOC_ALIGNMENT
504 #define MALLOC_ALIGNMENT ((size_t)8U)
505 #endif /* MALLOC_ALIGNMENT */
510 #define ABORT abort()
512 #ifndef ABORT_ON_ASSERT_FAILURE
513 #define ABORT_ON_ASSERT_FAILURE 1
514 #endif /* ABORT_ON_ASSERT_FAILURE */
515 #ifndef PROCEED_ON_ERROR
516 #define PROCEED_ON_ERROR 0
517 #endif /* PROCEED_ON_ERROR */
520 #endif /* USE_LOCKS */
523 #endif /* INSECURE */
526 #endif /* HAVE_MMAP */
528 #define MMAP_CLEARS 1
529 #endif /* MMAP_CLEARS */
532 #define HAVE_MREMAP 1
534 #define HAVE_MREMAP 0
536 #endif /* HAVE_MREMAP */
537 #ifndef MALLOC_FAILURE_ACTION
538 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
539 #endif /* MALLOC_FAILURE_ACTION */
540 #ifndef HAVE_MORECORE
542 #define HAVE_MORECORE 0
543 #else /* ONLY_MSPACES */
544 #define HAVE_MORECORE 1
545 #endif /* ONLY_MSPACES */
546 #endif /* HAVE_MORECORE */
548 #define MORECORE_CONTIGUOUS 0
549 #else /* !HAVE_MORECORE */
551 #define MORECORE sbrk
552 #endif /* MORECORE */
553 #ifndef MORECORE_CONTIGUOUS
554 #define MORECORE_CONTIGUOUS 1
555 #endif /* MORECORE_CONTIGUOUS */
556 #endif /* HAVE_MORECORE */
557 #ifndef DEFAULT_GRANULARITY
558 #if MORECORE_CONTIGUOUS
559 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
560 #else /* MORECORE_CONTIGUOUS */
561 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
562 #endif /* MORECORE_CONTIGUOUS */
563 #endif /* DEFAULT_GRANULARITY */
564 #ifndef DEFAULT_TRIM_THRESHOLD
565 #ifndef MORECORE_CANNOT_TRIM
566 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
567 #else /* MORECORE_CANNOT_TRIM */
568 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
569 #endif /* MORECORE_CANNOT_TRIM */
570 #endif /* DEFAULT_TRIM_THRESHOLD */
571 #ifndef DEFAULT_MMAP_THRESHOLD
573 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
574 #else /* HAVE_MMAP */
575 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
576 #endif /* HAVE_MMAP */
577 #endif /* DEFAULT_MMAP_THRESHOLD */
578 #ifndef USE_BUILTIN_FFS
579 #define USE_BUILTIN_FFS 0
580 #endif /* USE_BUILTIN_FFS */
581 #ifndef USE_DEV_RANDOM
582 #define USE_DEV_RANDOM 0
583 #endif /* USE_DEV_RANDOM */
585 #define NO_MALLINFO 0
586 #endif /* NO_MALLINFO */
587 #ifndef MALLINFO_FIELD_TYPE
588 #define MALLINFO_FIELD_TYPE size_t
589 #endif /* MALLINFO_FIELD_TYPE */
592 mallopt tuning options. SVID/XPG defines four standard parameter
593 numbers for mallopt, normally defined in malloc.h. None of these
594 are used in this malloc, so setting them has no effect. But this
595 malloc does support the following options.
598 #define M_TRIM_THRESHOLD (-1)
599 #define M_GRANULARITY (-2)
600 #define M_MMAP_THRESHOLD (-3)
602 /* ------------------------ Mallinfo declarations ------------------------ */
606 This version of malloc supports the standard SVID/XPG mallinfo
607 routine that returns a struct containing usage properties and
608 statistics. It should work on any system that has a
609 /usr/include/malloc.h defining struct mallinfo. The main
610 declaration needed is the mallinfo struct that is returned (by-copy)
611 by mallinfo(). The mallinfo struct contains a bunch of fields that
612 are not even meaningful in this version of malloc. These fields are
613 instead filled by mallinfo() with other numbers that might be of
614 interest.
616 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
617 /usr/include/malloc.h file that includes a declaration of struct
618 mallinfo. If so, it is included; else a compliant version is
619 declared below. These must be precisely the same for mallinfo() to
620 work. The original SVID version of this struct, defined on most
621 systems with mallinfo, declares all fields as ints. But some others
622 define as unsigned long. If your system defines the fields using a
623 type of different width than listed here, you MUST #include your
624 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
627 /* #define HAVE_USR_INCLUDE_MALLOC_H */
629 #ifdef HAVE_USR_INCLUDE_MALLOC_H
630 #include "/usr/include/malloc.h"
631 #else /* HAVE_USR_INCLUDE_MALLOC_H */
633 struct mallinfo {
634   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
635   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
636   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
637   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
638   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
639   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
640   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
641   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
642   MALLINFO_FIELD_TYPE fordblks; /* total free space */
643   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
644 };
646 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
647 #endif /* NO_MALLINFO */
651 #endif /* __cplusplus */
655 /* ------------------- Declarations of public routines ------------------- */
657 #ifndef USE_DL_PREFIX
658 #define dlcalloc calloc
659 #define dlfree free
660 #define dlmalloc malloc
661 #define dlmemalign memalign
662 #define dlrealloc realloc
663 #define dlvalloc valloc
664 #define dlpvalloc pvalloc
665 #define dlmallinfo mallinfo
666 #define dlmallopt mallopt
667 #define dlmalloc_trim malloc_trim
668 #define dlmalloc_stats malloc_stats
669 #define dlmalloc_usable_size malloc_usable_size
670 #define dlmalloc_footprint malloc_footprint
671 #define dlmalloc_max_footprint malloc_max_footprint
672 #define dlindependent_calloc independent_calloc
673 #define dlindependent_comalloc independent_comalloc
674 #endif /* USE_DL_PREFIX */
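/*
  For example, when built with USE_DL_PREFIX, this allocator can serve
  one subsystem while the rest of the program keeps the system malloc
  (a minimal sketch; make_node is a hypothetical caller):

    struct node* make_node(void) {
      return (struct node*)dlmalloc(sizeof(struct node));  // this allocator
    }
    ...
    char* s = strdup("hello");  // elsewhere, still the system malloc
*/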
678 malloc(size_t n)
679 Returns a pointer to a newly allocated chunk of at least n bytes, or
680 null if no space is available, in which case errno is set to ENOMEM
681 on ANSI C systems.
683 If n is zero, malloc returns a minimum-sized chunk. (The minimum
684 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
685 systems.) Note that size_t is an unsigned type, so calls with
686 arguments that would be negative if signed are interpreted as
687 requests for huge amounts of space, which will often fail. The
688 maximum supported value of n differs across systems, but is in all
689 cases less than the maximum representable value of a size_t.
691 void* dlmalloc(size_t);
694 free(void* p)
695 Releases the chunk of memory pointed to by p, that had been previously
696 allocated using malloc or a related routine such as realloc.
697 It has no effect if p is null. If p was not malloced or already
698 freed, free(p) will by default cause the current program to abort.
700 void dlfree(void*);
703 calloc(size_t n_elements, size_t element_size);
704 Returns a pointer to n_elements * element_size bytes, with all locations
705 set to zero.
707 void* dlcalloc(size_t, size_t);
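/*
  One caller-side point worth illustrating: the two calloc arguments
  are multiplied internally, and a caller computing the product itself
  should guard against size_t overflow. A minimal sketch (make_array
  is a hypothetical wrapper):

    void* make_array(size_t n_elements, size_t element_size) {
      if (n_elements != 0 && element_size > MAX_SIZE_T / n_elements)
        return 0;  // n_elements * element_size would wrap around
      return dlcalloc(n_elements, element_size);
    }
*/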
710 realloc(void* p, size_t n)
711 Returns a pointer to a chunk of size n that contains the same data
712 as does chunk p up to the minimum of (n, p's size) bytes, or null
713 if no space is available.
715 The returned pointer may or may not be the same as p. The algorithm
716 prefers extending p in most cases when possible, otherwise it
717 employs the equivalent of a malloc-copy-free sequence.
719 If p is null, realloc is equivalent to malloc.
721 If space is not available, realloc returns null, errno is set (if on
722 ANSI) and p is NOT freed.
724 If n is for fewer bytes than already held by p, the newly unused
725 space is lopped off and freed if possible. realloc with a size
726 argument of zero (re)allocates a minimum-sized chunk.
728 The old unix realloc convention of allowing the last-free'd chunk
729 to be used as an argument to realloc is not supported.
732 void* dlrealloc(void*, size_t);
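/*
  Because a failed realloc leaves p valid and still allocated, the
  usual pattern is to keep the old pointer until the new one is known
  to be good. A minimal sketch:

    void* q = dlrealloc(p, newsize);
    if (q == 0) {
      // p is unchanged; either keep using it at its old size,
      // or dlfree(p) and report the failure
    }
    else
      p = q;
*/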
735 memalign(size_t alignment, size_t n);
736 Returns a pointer to a newly allocated chunk of n bytes, aligned
737 in accord with the alignment argument.
739 The alignment argument should be a power of two. If the argument is
740 not a power of two, the nearest greater power is used.
741 8-byte alignment is guaranteed by normal malloc calls, so don't
742 bother calling memalign with an argument of 8 or less.
744 Overreliance on memalign is a sure way to fragment space.
746 void* dlmemalign(size_t, size_t);
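/*
  For example, to obtain a 64-byte-aligned buffer (a minimal sketch;
  remember that alignments of 8 or less gain nothing over malloc):

    void* buf = dlmemalign((size_t)64, n);
    // if buf != 0, ((size_t)buf & 63) == 0 holds
*/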
750 Equivalent to memalign(pagesize, n), where pagesize is the page
751 size of the system. If the pagesize is unknown, 4096 is used.
753 void* dlvalloc(size_t);
756 mallopt(int parameter_number, int parameter_value)
757 Sets tunable parameters. The format is to provide a
758 (parameter-number, parameter-value) pair. mallopt then sets the
759 corresponding parameter to the argument value if it can (i.e., so
760 long as the value is meaningful), and returns 1 if successful else
761 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
762 normally defined in malloc.h. None of these are used in this malloc,
763 so setting them has no effect. But this malloc also supports other
764 options in mallopt. See below for details. Briefly, supported
765 parameters are as follows (listed defaults are for "typical"
766 configurations).
768 Symbol param # default allowed param values
769 M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
770 M_GRANULARITY -2 page size any power of 2 >= page size
771 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
773 int dlmallopt(int, int);
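/*
  For example, a program that allocates many blocks just over 256K
  might raise the mmap threshold, and one that never wants frequent
  system calls might raise the granularity (a minimal sketch; each
  call returns 0 if the value is rejected):

    if (dlmallopt(M_MMAP_THRESHOLD, 1024*1024) == 0)
      ;  // value rejected; threshold unchanged
    dlmallopt(M_GRANULARITY, 128*1024);  // must be a power of 2 >= page size
*/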
776 malloc_footprint();
777 Returns the number of bytes obtained from the system. The total
778 number of bytes allocated by malloc, realloc etc., is less than this
779 value. Unlike mallinfo, this function returns only a precomputed
780 result, so can be called frequently to monitor memory consumption.
781 Even if locks are otherwise defined, this function does not use them,
782 so results might not be up to date.
784 size_t dlmalloc_footprint(void);
787 malloc_max_footprint();
788 Returns the maximum number of bytes obtained from the system. This
789 value will be greater than current footprint if deallocated space
790 has been reclaimed by the system. The peak number of bytes allocated
791 by malloc, realloc etc., is less than this value. Unlike mallinfo,
792 this function returns only a precomputed result, so can be called
793 frequently to monitor memory consumption. Even if locks are
794 otherwise defined, this function does not use them, so results might
795 not be up to date.
797 size_t dlmalloc_max_footprint(void);
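/*
  A minimal monitoring sketch using both footprint calls (cheap enough
  to poll, though as noted above the results may lag under concurrent
  use since no locks are taken):

    fprintf(stderr, "heap: %lu bytes now, %lu at peak\n",
            (unsigned long)dlmalloc_footprint(),
            (unsigned long)dlmalloc_max_footprint());
*/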
801 mallinfo()
802 Returns (by copy) a struct containing various summary statistics:
804 arena: current total non-mmapped bytes allocated from system
805 ordblks: the number of free chunks
806 smblks: always zero.
807 hblks: current number of mmapped regions
808 hblkhd: total bytes held in mmapped regions
809 usmblks: the maximum total allocated space. This will be greater
810 than current total if trimming has occurred.
811 fsmblks: always zero.
812 uordblks: current total allocated space (normal or mmapped)
813 fordblks: total free space
814 keepcost: the maximum number of bytes that could ideally be released
815 back to system via malloc_trim. ("ideally" means that
816 it ignores page restrictions etc.)
818 Because these fields are ints, but internal bookkeeping may
819 be kept as longs, the reported values may wrap around zero and
820 thus be inaccurate.
822 struct mallinfo dlmallinfo(void);
823 #endif /* NO_MALLINFO */
826 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
828 independent_calloc is similar to calloc, but instead of returning a
829 single cleared space, it returns an array of pointers to n_elements
830 independent elements that can hold contents of size elem_size, each
831 of which starts out cleared, and can be independently freed,
832 realloc'ed etc. The elements are guaranteed to be adjacently
833 allocated (this is not guaranteed to occur with multiple callocs or
834 mallocs), which may also improve cache locality in some
835 applications.
837 The "chunks" argument is optional (i.e., may be null, which is
838 probably the most typical usage). If it is null, the returned array
839 is itself dynamically allocated and should also be freed when it is
840 no longer needed. Otherwise, the chunks array must be of at least
841 n_elements in length. It is filled in with the pointers to the
842 chunks.
844 In either case, independent_calloc returns this pointer array, or
845 null if the allocation failed. If n_elements is zero and "chunks"
846 is null, it returns a chunk representing an array with zero elements
847 (which should be freed if not wanted).
849 Each element must be individually freed when it is no longer
850 needed. If you'd like to instead be able to free all at once, you
851 should instead use regular calloc and assign pointers into this
852 space to represent elements. (In this case though, you cannot
853 independently free elements.)
855 independent_calloc simplifies and speeds up implementations of many
856 kinds of pools. It may also be useful when constructing large data
857 structures that initially have a fixed number of fixed-sized nodes,
858 but the number is not known at compile time, and some of the nodes
859 may later need to be freed. For example:
861 struct Node { int item; struct Node* next; };
863 struct Node* build_list() {
864 struct Node** pool; int i;
865 int n = read_number_of_nodes_needed();
866 if (n <= 0) return 0;
867 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
868 if (pool == 0) die();
869 // organize into a linked list...
870 struct Node* first = pool[0];
871 for (i = 0; i < n-1; ++i)
872 pool[i]->next = pool[i+1];
873 free(pool); // Can now free the array (or not, if it is needed later)
874 return first;
875 }
877 void** dlindependent_calloc(size_t, size_t, void**);
880 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
882 independent_comalloc allocates, all at once, a set of n_elements
883 chunks with sizes indicated in the "sizes" array. It returns
884 an array of pointers to these elements, each of which can be
885 independently freed, realloc'ed etc. The elements are guaranteed to
886 be adjacently allocated (this is not guaranteed to occur with
887 multiple callocs or mallocs), which may also improve cache locality
888 in some applications.
890 The "chunks" argument is optional (i.e., may be null). If it is null
891 the returned array is itself dynamically allocated and should also
892 be freed when it is no longer needed. Otherwise, the chunks array
893 must be of at least n_elements in length. It is filled in with the
894 pointers to the chunks.
896 In either case, independent_comalloc returns this pointer array, or
897 null if the allocation failed. If n_elements is zero and chunks is
898 null, it returns a chunk representing an array with zero elements
899 (which should be freed if not wanted).
901 Each element must be individually freed when it is no longer
902 needed. If you'd like to instead be able to free all at once, you
903 should instead use a single regular malloc, and assign pointers at
904 particular offsets in the aggregate space. (In this case though, you
905 cannot independently free elements.)
907 independent_comalloc differs from independent_calloc in that each
908 element may have a different size, and also that it does not
909 automatically clear elements.
911 independent_comalloc can be used to speed up allocation in cases
912 where several structs or objects must always be allocated at the
913 same time. For example:
915 struct Head { ... };
916 struct Foot { ... };
918 void send_message(char* msg) {
919 int msglen = strlen(msg);
920 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
921 void* chunks[3];
922 if (independent_comalloc(3, sizes, chunks) == 0)
923 die();
924 struct Head* head = (struct Head*)(chunks[0]);
925 char* body = (char*)(chunks[1]);
926 struct Foot* foot = (struct Foot*)(chunks[2]);
927 // ... format and send the message ...
928 }
930 In general though, independent_comalloc is worth using only for
931 larger values of n_elements. For small values, you probably won't
932 detect enough difference from series of malloc calls to bother.
934 Overuse of independent_comalloc can increase overall memory usage,
935 since it cannot reuse existing noncontiguous small chunks that
936 might be available for some of the elements.
938 void** dlindependent_comalloc(size_t, size_t*, void**);
943 Equivalent to valloc(minimum-page-that-holds(n)), that is,
944 round up n to nearest pagesize.
946 void* dlpvalloc(size_t);
949 malloc_trim(size_t pad);
951 If possible, gives memory back to the system (via negative arguments
952 to sbrk) if there is unused memory at the `high' end of the malloc
953 pool or in unused MMAP segments. You can call this after freeing
954 large blocks of memory to potentially reduce the system-level memory
955 requirements of a program. However, it cannot guarantee to reduce
956 memory. Under some allocation patterns, some large free blocks of
957 memory will be locked between two used chunks, so they cannot be
958 given back to the system.
960 The `pad' argument to malloc_trim represents the amount of free
961 trailing space to leave untrimmed. If this argument is zero, only
962 the minimum amount of memory to maintain internal data structures
963 will be left. Non-zero arguments can be supplied to maintain enough
964 trailing space to service future expected allocations without having
965 to re-obtain memory from the system.
967 Malloc_trim returns 1 if it actually released any memory, else 0.
969 int dlmalloc_trim(size_t);
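/*
  For example, after releasing a large working set, a program might
  hand memory back while keeping 1MB of slack for expected reuse
  (a minimal sketch; release_caches is a hypothetical helper):

    release_caches();
    if (dlmalloc_trim((size_t)(1024*1024)))
      ;  // some trailing memory was returned to the system
*/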
972 malloc_usable_size(void* p);
974 Returns the number of bytes you can actually use in
975 an allocated chunk, which may be more than you requested (although
976 often not) due to alignment and minimum size constraints.
977 You can use this many bytes without worrying about
978 overwriting other allocated objects. This is not a particularly great
979 programming practice. malloc_usable_size can be more useful in
980 debugging and assertions, for example:
982 p = malloc(n);
983 assert(malloc_usable_size(p) >= 256);
985 size_t dlmalloc_usable_size(void*);
989 Prints on stderr the amount of space obtained from the system (both
990 via sbrk and mmap), the maximum amount (which may be more than
991 current if malloc_trim and/or munmap got called), and the current
992 number of bytes allocated via malloc (or realloc, etc) but not yet
993 freed. Note that this is the number of bytes allocated, not the
994 number requested. It will be larger than the number requested
995 because of alignment and bookkeeping overhead. Because it includes
996 alignment wastage as being in use, this figure may be greater than
997 zero even when no user-level chunks are allocated.
999 The reported current and maximum system memory can be inaccurate if
1000 a program makes other calls to system memory allocation functions
1001 (normally sbrk) outside of malloc.
1003 malloc_stats prints only the most commonly interesting statistics.
1004 More information can be obtained by calling mallinfo.
1006 void dlmalloc_stats(void);
1008 #endif /* ONLY_MSPACES */
1013 mspace is an opaque type representing an independent
1014 region of space that supports mspace_malloc, etc.
1016 typedef void* mspace;
1019 create_mspace creates and returns a new independent space with the
1020 given initial capacity, or, if 0, the default granularity size. It
1021 returns null if there is no system memory available to create the
1022 space. If argument locked is non-zero, the space uses a separate
1023 lock to control access. The capacity of the space will grow
1024 dynamically as needed to service mspace_malloc requests. You can
1025 control the sizes of incremental increases of this space by
1026 compiling with a different DEFAULT_GRANULARITY or dynamically
1027 setting with mallopt(M_GRANULARITY, value).
1029 mspace create_mspace(size_t capacity, int locked);
1032 destroy_mspace destroys the given space, and attempts to return all
1033 of its memory back to the system, returning the total number of
1034 bytes freed. After destruction, the results of access to all memory
1035 used by the space become undefined.
1037 size_t destroy_mspace(mspace msp);
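/*
  A minimal end-to-end sketch: create a private space, allocate from
  it, then discard everything at once by destroying the space (which
  also releases any chunks never individually freed):

    mspace arena = create_mspace(0, 0);  // default capacity, no locking
    if (arena != 0) {
      void* a = mspace_malloc(arena, 100);
      void* b = mspace_malloc(arena, 200);
      // ... use a and b ...
      destroy_mspace(arena);  // returns a, b, and the space itself
    }
*/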
1040 create_mspace_with_base uses the memory supplied as the initial base
1041 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1042 space is used for bookkeeping, so the capacity must be at least this
1043 large. (Otherwise 0 is returned.) When this initial space is
1044 exhausted, additional memory will be obtained from the system.
1045 Destroying this space will deallocate all additionally allocated
1046 space (if possible) but not the initial base.
1048 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1051 mspace_malloc behaves as malloc, but operates within
1052 the given space.
1054 void* mspace_malloc(mspace msp, size_t bytes);
1057 mspace_free behaves as free, but operates within
1058 the given space.
1060 If compiled with FOOTERS==1, mspace_free is not actually needed.
1061 free may be called instead of mspace_free because freed chunks from
1062 any space are handled by their originating spaces.
1064 void mspace_free(mspace msp, void* mem);
1067 mspace_realloc behaves as realloc, but operates within
1068 the given space.
1070 If compiled with FOOTERS==1, mspace_realloc is not actually
1071 needed. realloc may be called instead of mspace_realloc because
1072 realloced chunks from any space are handled by their originating
1073 spaces.
1075 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1078 mspace_calloc behaves as calloc, but operates within
1079 the given space.
1081 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1084 mspace_memalign behaves as memalign, but operates within
1085 the given space.
1087 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1090 mspace_independent_calloc behaves as independent_calloc, but
1091 operates within the given space.
1093 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1094                                  size_t elem_size, void* chunks[]);
1097 mspace_independent_comalloc behaves as independent_comalloc, but
1098 operates within the given space.
1100 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1101                                    size_t sizes[], void* chunks[]);
1104 mspace_footprint() returns the number of bytes obtained from the
1105 system for this space.
1107 size_t mspace_footprint(mspace msp);
1110 mspace_max_footprint() returns the peak number of bytes obtained from the
1111 system for this space.
1113 size_t mspace_max_footprint(mspace msp);
1118 mspace_mallinfo behaves as mallinfo, but reports properties of
1119 the given space.
1121 struct mallinfo mspace_mallinfo(mspace msp);
1122 #endif /* NO_MALLINFO */
1125 mspace_malloc_stats behaves as malloc_stats, but reports
1126 properties of the given space.
1128 void mspace_malloc_stats(mspace msp);
1131 mspace_trim behaves as malloc_trim, but
1132 operates within the given space.
1134 int mspace_trim(mspace msp, size_t pad);
1137 An alias for mallopt.
1139 int mspace_mallopt(int, int);
1141 #endif /* MSPACES */
1144 }; /* end of extern "C" */
1145 #endif /* __cplusplus */
1148 ========================================================================
1149 To make a fully customizable malloc.h header file, cut everything
1150 above this line, put into file malloc.h, edit to suit, and #include it
1151 on the next line, as well as in programs that use this malloc.
1152 ========================================================================
1155 /* #include "malloc.h" */
1157 /*------------------------------ internal #includes ---------------------- */
1160 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1163 #include <stdio.h> /* for printing in malloc_stats */
1165 #ifndef LACKS_ERRNO_H
1166 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1167 #endif /* LACKS_ERRNO_H */
1169 #include <time.h> /* for magic initialization */
1170 #endif /* FOOTERS */
1171 #ifndef LACKS_STDLIB_H
1172 #include <stdlib.h> /* for abort() */
1173 #endif /* LACKS_STDLIB_H */
1175 #if ABORT_ON_ASSERT_FAILURE
1176 #define assert(x) if(!(x)) ABORT
1177 #else /* ABORT_ON_ASSERT_FAILURE */
1179 #endif /* ABORT_ON_ASSERT_FAILURE */
1183 #ifndef LACKS_STRING_H
1184 #include <string.h> /* for memset etc */
1185 #endif /* LACKS_STRING_H */
1187 #ifndef LACKS_STRINGS_H
1188 #include <strings.h> /* for ffs */
1189 #endif /* LACKS_STRINGS_H */
1190 #endif /* USE_BUILTIN_FFS */
1192 #ifndef LACKS_SYS_MMAN_H
1193 #include <sys/mman.h> /* for mmap */
1194 #endif /* LACKS_SYS_MMAN_H */
1195 #ifndef LACKS_FCNTL_H
1197 #endif /* LACKS_FCNTL_H */
1198 #endif /* HAVE_MMAP */
1200 #ifndef LACKS_UNISTD_H
1201 #include <unistd.h> /* for sbrk */
1202 #else /* LACKS_UNISTD_H */
1203 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1204 extern void* sbrk(ptrdiff_t);
1205 #endif /* FreeBSD etc */
1206 #endif /* LACKS_UNISTD_H */
1207 #endif /* HAVE_MMAP */
1210 #ifndef malloc_getpagesize
1211 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1212 # ifndef _SC_PAGE_SIZE
1213 # define _SC_PAGE_SIZE _SC_PAGESIZE
1216 # ifdef _SC_PAGE_SIZE
1217 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1219 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1220 extern size_t getpagesize();
1221 # define malloc_getpagesize getpagesize()
1223 # ifdef WIN32 /* use supplied emulation of getpagesize */
1224 # define malloc_getpagesize getpagesize()
1226 # ifndef LACKS_SYS_PARAM_H
1227 # include <sys/param.h>
1229 # ifdef EXEC_PAGESIZE
1230 # define malloc_getpagesize EXEC_PAGESIZE
1234 # define malloc_getpagesize NBPG
1236 # define malloc_getpagesize (NBPG * CLSIZE)
1240 # define malloc_getpagesize NBPC
1243 # define malloc_getpagesize PAGESIZE
1244 # else /* just guess */
1245 # define malloc_getpagesize ((size_t)4096U)
1256 /* ------------------- size_t and alignment properties -------------------- */
1258 /* The byte and bit size of a size_t */
1259 #define SIZE_T_SIZE (sizeof(size_t))
1260 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1262 /* Some constants coerced to size_t */
1263 /* Annoying but necessary to avoid errors on some platforms */
1264 #define SIZE_T_ZERO ((size_t)0)
1265 #define SIZE_T_ONE ((size_t)1)
1266 #define SIZE_T_TWO ((size_t)2)
1267 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1268 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1269 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1270 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1272 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1273 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1275 /* True if address a has acceptable alignment */
1276 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1278 /* the number of bytes to offset an address to align it */
1279 #define align_offset(A)\
1280 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1281 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
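/*
  Worked example: with the default MALLOC_ALIGNMENT of 8, an address
  ending in 0x5 gets align_offset (8 - (0x5 & 7)) & 7 == 3, and adding
  3 bytes yields an 8-byte-aligned address; an already-aligned address
  gets offset 0.
*/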
1283 /* -------------------------- MMAP preliminaries ------------------------- */
1286 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1287 checks to fail so compiler optimizer can delete code rather than
1288 using so many "#if"s.
1292 /* MORECORE and MMAP must return MFAIL on failure */
1293 #define MFAIL ((void*)(MAX_SIZE_T))
1294 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1297 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1298 #define USE_MMAP_BIT (SIZE_T_ZERO)
1299 #define CALL_MMAP(s) MFAIL
1300 #define CALL_MUNMAP(a, s) (-1)
1301 #define DIRECT_MMAP(s) MFAIL
1303 #else /* HAVE_MMAP */
1304 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1305 #define USE_MMAP_BIT (SIZE_T_ONE)
1308 #define CALL_MUNMAP(a, s) munmap((a), (s))
1309 #define MMAP_PROT (PROT_READ|PROT_WRITE|PROT_EXEC)
1310 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1311 #define MAP_ANONYMOUS MAP_ANON
1312 #endif /* MAP_ANON */
1313 #ifdef MAP_ANONYMOUS
1314 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1315 #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1316 #else /* MAP_ANONYMOUS */
1318 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1319 is unlikely to be needed, but is supplied just in case.
1321 #define MMAP_FLAGS (MAP_PRIVATE)
1322 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1323 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1324 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1325 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1326 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1327 #endif /* MAP_ANONYMOUS */
1329 #define DIRECT_MMAP(s) CALL_MMAP(s)
1332 /* Win32 MMAP via VirtualAlloc */
1333 static void* win32mmap(size_t size) {
1334   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE);
1335   return (ptr != 0)? ptr : MFAIL;
1336 }
1338 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1339 static void* win32direct_mmap(size_t size) {
1340   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1341                            PAGE_EXECUTE_READWRITE);
1342   return (ptr != 0)? ptr : MFAIL;
1343 }
1345 /* This function supports releasing coalesced segments */
1346 static int win32munmap(void* ptr, size_t size) {
1347   MEMORY_BASIC_INFORMATION minfo;
1348   char* cptr = (char*)ptr;
1349   while (size) {
1350     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1351       return -1;
1352     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1353         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1354       return -1;
1355     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1356       return -1;
1357     cptr += minfo.RegionSize;
1358     size -= minfo.RegionSize;
1359   }
1360   return 0;
1361 }
1363 #define CALL_MMAP(s) win32mmap(s)
1364 #define CALL_MUNMAP(a, s) win32munmap((a), (s))
1365 #define DIRECT_MMAP(s) win32direct_mmap(s)
1367 #endif /* HAVE_MMAP */
1369 #if HAVE_MMAP && HAVE_MREMAP
1370 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1371 #else /* HAVE_MMAP && HAVE_MREMAP */
1372 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1373 #endif /* HAVE_MMAP && HAVE_MREMAP */
1376 #define CALL_MORECORE(S) MORECORE(S)
1377 #else /* HAVE_MORECORE */
1378 #define CALL_MORECORE(S) MFAIL
1379 #endif /* HAVE_MORECORE */
1381 /* mstate bit set if contiguous morecore disabled or failed */
1382 #define USE_NONCONTIGUOUS_BIT (4U)
1384 /* segment bit set in create_mspace_with_base */
1385 #define EXTERN_BIT (8U)
1388 /* --------------------------- Lock preliminaries ------------------------ */
1393 When locks are defined, there are up to two global locks:
1395 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1396 MORECORE. In many cases sys_alloc requires two calls, that should
1397 not be interleaved with calls by other threads. This does not
1398 protect against direct calls to MORECORE by other threads not
1399 using this lock, so there is still code to cope the best we can on
1400 failure.
1402 * magic_init_mutex ensures that mparams.magic and other
1403 unique mparams values are initialized only once.
1407 /* By default use posix locks */
1408 #include <pthread.h>
1409 #define MLOCK_T pthread_mutex_t
1410 #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1411 #define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1412 #define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1415 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1416 #endif /* HAVE_MORECORE */
1418 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1422 Because lock-protected regions have bounded times, and there
1423 are no recursive lock calls, we can use simple spinlocks.
1426 #define MLOCK_T long
1427 static int win32_acquire_lock (MLOCK_T *sl) {
1428   for (;;) {
1429 #ifdef InterlockedCompareExchangePointer
1430     if (!InterlockedCompareExchange(sl, 1, 0))
1431       return 0;
1432 #else /* Use older void* version */
1433     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1434       return 0;
1435 #endif /* InterlockedCompareExchangePointer */
1436     Sleep (0);
1437   }
1438 }
1440 static void win32_release_lock (MLOCK_T *sl) {
1441   InterlockedExchange (sl, 0);
1442 }
1444 #define INITIAL_LOCK(l) *(l)=0
1445 #define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1446 #define RELEASE_LOCK(l) win32_release_lock(l)
1448 static MLOCK_T morecore_mutex;
1449 #endif /* HAVE_MORECORE */
1450 static MLOCK_T magic_init_mutex;
1453 #define USE_LOCK_BIT (2U)
1454 #else /* USE_LOCKS */
1455 #define USE_LOCK_BIT (0U)
1456 #define INITIAL_LOCK(l)
1457 #endif /* USE_LOCKS */
1459 #if USE_LOCKS && HAVE_MORECORE
1460 #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1461 #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1462 #else /* USE_LOCKS && HAVE_MORECORE */
1463 #define ACQUIRE_MORECORE_LOCK()
1464 #define RELEASE_MORECORE_LOCK()
1465 #endif /* USE_LOCKS && HAVE_MORECORE */
1468 #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1469 #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1470 #else /* USE_LOCKS */
1471 #define ACQUIRE_MAGIC_INIT_LOCK()
1472 #define RELEASE_MAGIC_INIT_LOCK()
1473 #endif /* USE_LOCKS */
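/*
  A minimal sketch of the intended bracketing: sys_alloc's paired
  MORECORE calls (grow, then probe the new break) are wrapped so that
  another thread cannot slip its own MORECORE request in between
  (asize here is a hypothetical request size):

    ACQUIRE_MORECORE_LOCK();
    char* brk = (char*)CALL_MORECORE(asize);
    char* end = (char*)CALL_MORECORE(0);
    RELEASE_MORECORE_LOCK();
*/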
/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 1)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +- size - sizeof(size_t) available payload bytes               -+
         |                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +- size - sizeof(struct chunk) unused bytes                    -+
         |                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to access it.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size, redundantly records whether the current chunk is
  inuse. This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have the lowest-order bit
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
        PINUSE_BIT in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.
*/
struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */
/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
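
/*
  Worked example (illustrative, assuming 32-bit size_t, 8-byte
  alignment, and FOOTERS off, so CHUNK_OVERHEAD == 4): a request for
  13 bytes pads to (13 + 4 + 7) & ~7 == 24 chunk bytes, while a
  request for 1 byte falls below MIN_REQUEST (11) and is bumped up to
  MIN_CHUNK_SIZE (16).
*/
#if 0 /* example only */
static void example_request2size(void) {
  assert(request2size(13) == 24);             /* padded and aligned */
  assert(request2size(1)  == MIN_CHUNK_SIZE); /* below MIN_REQUEST  */
}
#endif /* example only */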
/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when previous
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
  use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
  mmapped region to the base of the chunk.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

#define is_mmapped(p)\
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */
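
/*
  Illustrative sketch (not part of dlmalloc): lay out a free chunk
  followed by a fake in-use successor in a local buffer, then read the
  state back through the macros above.  The buffer and the sizes 32
  and 16 are invented for the example.
*/
#if 0 /* example only */
static void example_boundary_tags(void) {
  static char buf[64];
  mchunkptr p = (mchunkptr)buf;
  mchunkptr n;
  set_size_and_pinuse_of_free_chunk(p, 32); /* free chunk, footer at +32 */
  n = next_chunk(p);                        /* lands at buf + 32 */
  n->head = 16 | CINUSE_BIT;                /* fake in-use successor */
  assert(chunksize(p) == 32);
  assert(!cinuse(p) && pinuse(p));          /* free; previous in use */
  assert(prev_chunk(n) == p);               /* recovered via footer  */
}
#endif /* example only */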
/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small"  chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins. Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  is of course much better.
*/
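
/*
  Illustrative sketch (not part of dlmalloc; the helper below is
  invented): the direction taken at each level of the tree covering
  0x100 <= x < 0x180 mirrors the shift-based test that
  insert_large_chunk uses further below -- bit 6 of the size decides
  at the root, bit 5 at the next level, and so on.
*/
#if 0 /* example only */
static int example_tree_direction(size_t size, int level) {
  return (int)((size >> (6 - level)) & 1);  /* 0 = left, 1 = right */
}
#endif /* example only */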
struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

typedef struct malloc_tree_chunk  tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
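
/*
  Illustrative sketch (not part of dlmalloc): the "leftmost leaf" walk
  described above, expressed with the leftmost_child helper.  The
  smallest chunk in tree t lies somewhere along this path.
*/
#if 0 /* example only */
static tchunkptr example_leftmost_leaf(tchunkptr t) {
  while (leftmost_child(t) != 0)  /* follow left child, else right */
    t = leftmost_child(t);
  return t;                       /* leaf with both children null */
}
#endif /* example only */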
/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    separate segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If IS_MMAPPED_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/
struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;
/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.  These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart.  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
    larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even if on 64bit system.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold same value as mparams.magic.

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.
*/
/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
};

typedef struct malloc_state*    mstate;
/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)
#define is_initialized(M)  ((M)->top != 0)
/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
/* True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}
#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s)  ((s) > (M)->trim_check)
#else  /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s)  (0)
#endif /* MORECORE_CANNOT_TRIM */

/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
/* -------------------------------  Hooks  -------------------------------- */

/*
  PREACTION should be defined to return 0 on success, and nonzero on
  failure. If you are not using locking, you can redefine these to do
  anything you like.
*/

#if USE_LOCKS

/* Ensure locks are initialized */
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())

#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */
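
/*
  Illustrative sketch (not part of dlmalloc): every public entry point
  brackets its work in the PREACTION/POSTACTION pattern shown here;
  the operation in the middle is a placeholder.
*/
#if 0 /* example only */
static void example_public_call(mstate m) {
  if (!PREACTION(m)) {   /* init-once + lock; nonzero means failure */
    /* ... operate on m ... */
    POSTACTION(m);       /* release the lock if one is held */
  }
}
#endif /* example only */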
/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */
/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
#define check_malloc_state(M)       do_check_malloc_state(M)

static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */
/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))
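
/*
  Worked example (illustrative): chunk size 40 maps to small_index 5,
  and small_index2size(5) recovers 40.  smallbin_at exploits the
  repositioning trick described for malloc_state: bin i is addressed
  at &smallbins[i << 1], so the two allocated words land where a real
  malloc_chunk's fd/bk fields would be.
*/
#if 0 /* example only */
static void example_small_bins(mstate m) {
  assert(is_small(40));
  assert(small_index(40) == 5);
  assert(small_index2size(5) == 40);
  assert(smallbin_at(m, 5) == (sbinptr)((char*)&m->smallbins[10]));
}
#endif /* example only */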
/* assign tree index for size S to variable I */
#if defined(__GNUC__) && defined(i386)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}
#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */

/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
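
/*
  Worked example (illustrative): treebin 0 holds sizes
  0x100 <= x < 0x180, treebin 1 holds 0x180 <= x < 0x200, so
  minsize_for_tree_index yields 0x100 and 0x180 respectively, and
  compute_tree_index maps 0x100 back to bin 0.
*/
#if 0 /* example only */
static void example_treebin_ranges(void) {
  bindex_t i;
  assert(minsize_for_tree_index(0) == 0x100);
  assert(minsize_for_tree_index(1) == 0x180);
  compute_tree_index(0x100, i);
  assert(i == 0);
}
#endif /* example only */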
/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* index corresponding to given bit */

#if defined(__GNUC__) && defined(i386)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
  I = (bindex_t)J;\
}

#else /* GNUC */
#if USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else /* USE_BUILTIN_FFS */
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* USE_BUILTIN_FFS */
#endif /* GNUC */

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))
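
/*
  Illustrative sketch (not part of dlmalloc; the helper is invented):
  finding the nearest non-empty smallbin at or above idx by masking
  the bin map and isolating its least set bit, the same idiom the
  allocation paths use further below.
*/
#if 0 /* example only */
static int example_next_nonempty_smallbin(mstate m, bindex_t idx) {
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(idx));
  bindex_t i;
  if (candidates == 0)
    return -1;                    /* nothing at or above idx */
  compute_bit2idx(least_bit(candidates), i);
  return (int)i;
}
#endif /* example only */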
/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offsetted from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
  that the mstate controlling malloc/free is intact.  This is a
  streamlined version of the approach described by William Robertson
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  of an inuse chunk holds the xor of its mstate and a random seed,
  that is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
  computed by any code successfully malloc'ing any chunk, so does not
  itself provide protection against code that has already broken
  security through some other means.  Unlike Robertson et al, we
  always dynamically check addresses of all offset chunks (previous,
  next, etc). This turns out to be cheaper than relying on hashes.
*/

#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has its cinuse bit on */
#define ok_cinuse(p)     cinuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_cinuse(p)     (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */
/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */
/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */
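
/*
  Illustrative note for the FOOTERS build (the helper below is
  invented): mark_inuse_foot stores ((size_t)m ^ mparams.magic) in the
  next chunk's prev_foot, and get_mstate_for xors with the same magic,
  so the pair round-trips because xor is its own inverse.
*/
#if 0 /* example only */
static void example_footer_roundtrip(mstate m, mchunkptr p, size_t s) {
  p->head = s | CINUSE_BIT | PINUSE_BIT;  /* so chunksize(p) == s */
  mark_inuse_foot(m, p, s);
  assert(get_mstate_for(p) == m);         /* xor round-trips */
}
#endif /* example only */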
/* ---------------------------- setting mparams -------------------------- */

/* Initialize mparams */
static int init_mparams(void) {
  if (mparams.page_size == 0) {
    size_t s;

    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
#else  /* MORECORE_CONTIGUOUS */
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
#endif /* MORECORE_CONTIGUOUS */

#if (FOOTERS && !INSECURE)
    {
#if USE_DEV_RANDOM
      int fd;
      unsigned char buf[sizeof(size_t)];
      /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
        s = *((size_t *) buf);
        close(fd);
      }
      else
#endif /* USE_DEV_RANDOM */
        s = (size_t)(time(0) ^ (size_t)0x55555555U);

      s |= (size_t)8U;    /* ensure nonzero */
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
    }
#else /* (FOOTERS && !INSECURE) */
    s = (size_t)0x58585858U;
#endif /* (FOOTERS && !INSECURE) */
    ACQUIRE_MAGIC_INIT_LOCK();
    if (mparams.magic == 0) {
      mparams.magic = s;
      /* Set up lock for main malloc area */
      INITIAL_LOCK(&gm->mutex);
      gm->mflags = mparams.default_mflags;
    }
    RELEASE_MAGIC_INIT_LOCK();

#ifndef WIN32
    mparams.page_size = malloc_getpagesize;
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
                           DEFAULT_GRANULARITY : mparams.page_size);
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      mparams.page_size = system_info.dwPageSize;
      mparams.granularity = system_info.dwAllocationGranularity;
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
      ABORT;
  }
  return 0;
}
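
/*
  Note on the power-of-two tests above (illustrative): (x & (x-1)) == 0
  holds exactly when x has at most one bit set, so a single mask and
  compare validates alignment, chunk size, granularity and page size.
*/
#if 0 /* example only */
static void example_power_of_two_check(void) {
  assert((4096 & (4096-1)) == 0);  /* power of two: passes */
  assert((4097 & (4097-1)) != 0);  /* not a power of two   */
}
#endif /* example only */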
/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val = (size_t)value;
  init_mparams();
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}
/* ------------------------- Debugging Support --------------------------- */

#if DEBUG

/* Check properties of any chunk, whether free, inuse, mmapped etc  */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}
/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t  sz = chunksize(p);
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!next_pinuse(p));
}
/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t  sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}
/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}
/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert (!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert (next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}
/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}
/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert (u->parent->child[0] == u ||
              u->parent->child[1] == u ||
              *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}
/*  Check all the chunks in a treebin.  */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}
/*  Check all the chunks in a smallbin.  */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}
/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}
/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}
/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    assert(m->topsize == chunksize(m->top));
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */
/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */
static void internal_malloc_stats(mstate m) {
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}
/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}

/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (F == B)\
    clear_smallmap(M, I);\
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F)\
    clear_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, F))) {\
    B->fd = F;\
    F->bk = B;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    assert(is_small(DVS));\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}
/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}
/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendents
     correspond properly to bit masks.  We use the rightmost descendent
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links) relink
     x's parent and children to x's replacement (or null if none).
*/

#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F))) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0)) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}
/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */
/* -----------------------  Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).  There is also enough space
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
  the PINUSE bit so frees can be checked.
*/
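
/*
  Worked example (illustrative; the helper is invented): recovering
  the arguments that free() must hand back to MUNMAP for a
  direct-mmapped chunk p, using the offset stashed in prev_foot.
*/
#if 0 /* example only */
static void example_mmap_args(mchunkptr p, char** base, size_t* len) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT; /* distance into region */
  *base = (char*)p - offset;                      /* region start */
  *len  = chunksize(p) + offset + MMAP_FOOT_PAD;  /* total mapped length */
}
#endif /* example only */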
/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset | IS_MMAPPED_BIT;
      (p)->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}
/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
  size_t oldsize = chunksize(oldp);
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
                                         CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, 1);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}
/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}
/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}
#if PROCEED_ON_ERROR

/* default corruption action */
static void reset_on_error(mstate m) {
  int i;
  ++malloc_corruption_error_count;
  /* Reinitialize fields to forget about all memory */
  m->smallmap = m->treemap = 0;
  m->dvsize = m->topsize = 0;
  m->seg.base = 0;
  m->seg.size = 0;
  m->seg.next = 0;
  m->top = m->dv = 0;
  for (i = 0; i < NTREEBINS; ++i)
    *treebin_at(m, i) = 0;
  init_bins(m);
}
#endif /* PROCEED_ON_ERROR */
/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!cinuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}
/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}
/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;

  init_mparams();

  /* Directly map large chunks */
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)
  */

  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    size_t asize = 0;
    ACQUIRE_MORECORE_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          asize += (page_align((size_t)base) - (size_t)base);
        /* Can't call MORECORE if size is negative when treated as signed */
        if (asize < HALF_MAX_SIZE_T &&
            (br = (char*)(CALL_MORECORE(asize))) == base) {
          tbase = base;
          tsize = asize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
      /* Use mem here only if it did continuously extend old space */
      if (asize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
        tbase = br;
        tsize = asize;
      }
    }

    if (tbase == CMFAIL) {   /* Cope with partial failure */
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
        if (asize < HALF_MAX_SIZE_T &&
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              asize += esize;
            else {            /* Can't use; try to release */
              CALL_MORECORE(-asize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = asize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MORECORE_LOCK();
  }

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
    size_t rsize = granularity_align(req);
    if (rsize > nb) { /* Fail if wraps around zero */
      char* mp = (char*)(CALL_MMAP(rsize));
      if (mp != CMFAIL) {
        tbase = mp;
        tsize = rsize;
        mmap_flag = IS_MMAPPED_BIT;
      }
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MORECORE_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MORECORE_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      m->seg.base = m->least_addr = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      init_bins(m);
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

  MALLOC_FAILURE_ACTION;
  return 0;
}
/* ----------------------- system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    pred = sp;
    sp = next;
  }
  return released;
}
static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MORECORE_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MORECORE_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}
/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);

  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }

  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /* If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}
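/*
  A sketch of the traversal above (informal note, not part of the
  API): sizebits holds the request size shifted so that the bit
  controlling the current tree level sits at the most significant
  position, so

      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];

  descends left when that bit is 0 and right when it is 1, while rst
  remembers the deepest right subtree not taken. If the walk bottoms
  out without an exact fit, rst is by construction the smallest
  subtree holding only sizes greater than nb, so resuming from it
  yields the best fit within this bin.
*/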
/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);

  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}
/* --------------------------- realloc support --------------------------- */

static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
  if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }
  if (!PREACTION(m)) {
    mchunkptr oldp = mem2chunk(oldmem);
    size_t oldsize = chunksize(oldp);
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
    mchunkptr newp = 0;
    void* extra = 0;

    /* Try to either shrink or extend into top. Else malloc-copy-free */

    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
                ok_next(oldp, next) && ok_pinuse(next))) {
      size_t nb = request2size(bytes);
      if (is_mmapped(oldp))
        newp = mmap_resize(m, oldp, nb);
      else if (oldsize >= nb) { /* already big enough */
        size_t rsize = oldsize - nb;
        newp = oldp;
        if (rsize >= MIN_CHUNK_SIZE) {
          mchunkptr remainder = chunk_plus_offset(newp, nb);
          set_inuse(m, newp, nb);
          set_inuse(m, remainder, rsize);
          extra = chunk2mem(remainder);
        }
      }
      else if (next == m->top && oldsize + m->topsize > nb) {
        /* Expand into top */
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
        set_inuse(m, oldp, nb);
        newtop->head = newtopsize|PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = oldp;
      }
    }
    else {
      USAGE_ERROR_ACTION(m, oldmem);
      POSTACTION(m);
      return 0;
    }

    POSTACTION(m);

    if (newp != 0) {
      if (extra != 0) {
        internal_free(m, extra);
      }
      check_inuse_chunk(m, newp);
      return chunk2mem(newp);
    }
    else {
      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
        internal_free(m, oldmem);
      }
      return newmem;
    }
  }
  return 0;
}
/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0) { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      if (PREACTION(m)) return 0;
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk. Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
                                                       alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = (newsize|CINUSE_BIT);
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert(chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}
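/*
  Worked example (values follow directly from the code above,
  assuming MALLOC_ALIGNMENT is 8): a request for alignment 48 is
  rounded up by the doubling loop a = 16, 32, 64, so the effective
  alignment becomes 64, the next power of two. The oversized request
  nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD guarantees that an
  aligned spot exists with either no leader at all or a leader of at
  least MIN_CHUNK_SIZE that can be handed back as an ordinary chunk.
*/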
/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
    Allocate the aggregate chunk. First disable direct-mmapping so
    malloc won't use it, since we would not be able to later
    free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) { /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));
#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}
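/*
  Usage sketch for the independent_X entry points built on ialloc
  (hypothetical element counts; the opts encoding above is bit 0 =
  same-size, bit 1 = zero, so the calloc variant passes 3 and the
  comalloc variant passes 0):

      void* pool[8];
      size_t sizes[3] = { 16, 40, 120 };

      // 8 zeroed elements of 24 bytes each; pointers stored in pool
      if (dlindependent_calloc(8, 24, pool) != 0) {
        // each element may later be freed individually
        dlfree(pool[0]);
      }

      // 3 differently sized elements; pointer array allocated by malloc
      void** objs = dlindependent_comalloc(3, sizes, 0);
      if (objs != 0)
        dlfree(objs); // the returned pointer array itself is freeable
*/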
/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}
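/*
  Minimal usage sketch (assuming this build defines USE_DL_PREFIX, so
  the public entry points carry the dl prefix):

      #include <string.h>

      void demo(void) {
        char* s = (char*)dlmalloc(32);
        if (s != 0) {
          strcpy(s, "hello");
          s = (char*)dlrealloc(s, 64);
          dlfree(s);
        }
      }
*/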
void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin. Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}
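/*
  Consolidation summary (restating the branches above): a freed chunk
  first absorbs a free predecessor, using prev_foot to step backward,
  then a free successor. If the successor is top or dv, the merged
  chunk simply becomes the new top or dv; otherwise the neighbor is
  unlinked from its bin and the merged chunk is reinserted. Chunks
  carrying IS_MMAPPED_BIT bypass the bins entirely and are returned
  to the system with munmap.
*/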
void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */
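/*
  Worked example for the overflow check in the calloc-style entry
  points (assuming 32-bit size_t): the cheap filter
  ((n_elements | elem_size) & ~(size_t)0xffff) skips the division
  whenever both factors are below 2^16 and thus cannot overflow.
  Otherwise req / n_elements != elem_size detects wraparound: for
  n_elements = 0x10000 and elem_size = 0x10001, the product wraps to
  0x10000, and 0x10000 / 0x10000 == 1 != 0x10001, so req is forced to
  MAX_SIZE_T and the allocation fails downstream.
*/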
/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = IS_MMAPPED_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}
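/*
  Usage sketch for the mspace API (hypothetical sizes; a capacity of
  0 requests the default granularity):

      mspace msp = create_mspace(0, 0);   // private heap, no locking
      if (msp != 0) {
        void* q = mspace_malloc(msp, 128);
        mspace_free(msp, q);
        destroy_mspace(msp);              // releases all of its memory
      }
*/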
/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/

void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}
void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */
/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by
      defining MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines:

    #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }
*/
/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Add mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loops
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/