2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version pre-2.8.4 Mon Nov 27 11:22:37 2006 (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 This library is all in one file to simplify the most common usage:
16 ftp it, compile it (-O3), and link it into another program. All of
17 the compile-time options default to reasonable values for use on
18 most platforms. You might later want to step through various
19 compile-time and dynamic tuning options.
21 For convenience, an include file for code using this malloc is at:
22 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
23 You don't really need this .h file unless you call functions not
24 defined in your system include files. The .h file contains only the
25 excerpts from this file needed for using this malloc on ANSI C/C++
26 systems, so long as you haven't changed compile-time options about
27 naming and tuning parameters. If you do, then you can create your
28 own malloc.h that does include all settings by cutting at the point
29 indicated below. Note that you may already by default be using a C
30 library containing a malloc that is based on some version of this
31 malloc (for example in linux). You might still want to use the one
32 in this file to customize settings or to avoid overheads associated
33 with library versions.
37 Supported pointer/size_t representation: 4 or 8 bytes
38 size_t MUST be an unsigned type of the same width as
39 pointers. (If you are using an ancient system that declares
40 size_t as a signed type, or need it to be a different width
41 than pointers, you can use a previous release of this malloc
42 (e.g. 2.7.2) supporting these.)
44 Alignment: 8 bytes (default)
45 This suffices for nearly all current machines and C compilers.
46 However, you can define MALLOC_ALIGNMENT to be wider than this
47 if necessary (up to 128 bytes), at the expense of using more space.
49 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
50 8 or 16 bytes (if 8-byte sizes)
51 Each malloced chunk has a hidden word of overhead holding size
52 and status information, and additional cross-check word
53 if FOOTERS is defined.
55 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
56 8-byte ptrs: 32 bytes (including overhead)
58 Even a request for zero bytes (i.e., malloc(0)) returns a
59 pointer to something of the minimum allocatable size.
60 The maximum overhead wastage (i.e., number of extra bytes
61 allocated beyond those requested in malloc) is less than or equal
62 to the minimum size, except for requests >= mmap_threshold that
63 are serviced via mmap(), where the worst case wastage is about
64 32 bytes plus the remainder from a system page (the minimal
65 mmap unit); typically 4096 or 8192 bytes.
67 Security: static-safe; optionally more or less
68 The "security" of malloc refers to the ability of malicious
69 code to accentuate the effects of errors (for example, freeing
70 space that is not currently malloc'ed or overwriting past the
71 ends of chunks) in code that calls malloc. This malloc
72 guarantees not to modify any memory locations below the base of
73 heap, i.e., static variables, even in the presence of usage
74 errors. The routines additionally detect most improper frees
75 and reallocs. All this holds as long as the static bookkeeping
76 for malloc itself is not corrupted by some other means. This
77 is only one aspect of security -- these checks do not, and
78 cannot, detect all possible programming errors.
80 If FOOTERS is defined nonzero, then each allocated chunk
81 carries an additional check word to verify that it was malloced
82 from its space. These check words are the same within each
83 execution of a program using malloc, but differ across
84 executions, so externally crafted fake chunks cannot be
85 freed. This improves security by rejecting frees/reallocs that
86 could corrupt heap memory, in addition to the checks preventing
87 writes to statics that are always on. This may further improve
88 security at the expense of time and space overhead. (Note that
89 FOOTERS may also be worth using with MSPACES.)
91 By default detected errors cause the program to abort (calling
92 "abort()"). You can override this to instead proceed past
93 errors by defining PROCEED_ON_ERROR. In this case, a bad free
94 has no effect, and a malloc that encounters a bad address
95 caused by user overwrites will ignore the bad address by
96 dropping pointers and indices to all known memory. This may
97 be appropriate for programs that should continue if at all
98 possible in the face of programming errors, although they may
99 run out of memory because dropped memory is never reclaimed.
101 If you don't like either of these options, you can define
102 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103 else. And if you are sure that your program using malloc has
104 no errors or vulnerabilities, you can define INSECURE to 1,
105 which might (or might not) provide a small performance improvement.
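As an illustrative sketch only (assuming the function-like forms these two
macros take internally, where m is the internal malloc state and p the
offending address, and where log_heap_error is a hypothetical handler you
supply):

    extern void log_heap_error(const char* kind, void* p); /* hypothetical */
    #define CORRUPTION_ERROR_ACTION(m)  log_heap_error("heap corruption", 0)
    #define USAGE_ERROR_ACTION(m, p)    log_heap_error("bad free/realloc", (p))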
107 Thread-safety: NOT thread-safe unless USE_LOCKS defined
108 When USE_LOCKS is defined, each public call to malloc, free,
109 etc is surrounded with either a pthread mutex or a win32
110 spinlock (depending on WIN32). This is not especially fast, and
111 can be a major bottleneck. It is designed only to provide
112 minimal protection in concurrent environments, and to provide a
113 basis for extensions. If you are using malloc in a concurrent
114 program, consider instead using nedmalloc
115 (http://www.nedprod.com/programs/portable/nedmalloc/) or
116 ptmalloc (See http://www.malloc.de), which are derived
117 from versions of this malloc.
119 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
120 This malloc can use unix sbrk or any emulation (invoked using
121 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
122 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
123 memory. On most unix systems, it tends to work best if both
124 MORECORE and MMAP are enabled. On Win32, it uses emulations
125 based on VirtualAlloc. It also uses common C library functions
126 like memset.
128 Compliance: I believe it is compliant with the Single Unix Specification
129 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
130 others as well.
132 * Overview of algorithms
134 This is not the fastest, most space-conserving, most portable, or
135 most tunable malloc ever written. However it is among the fastest
136 while also being among the most space-conserving, portable and
137 tunable. Consistent balance across these factors results in a good
138 general-purpose allocator for malloc-intensive programs.
140 In most ways, this malloc is a best-fit allocator. Generally, it
141 chooses the best-fitting existing chunk for a request, with ties
142 broken in approximately least-recently-used order. (This strategy
143 normally maintains low fragmentation.) However, for requests less
144 than 256 bytes, it deviates from best-fit when there is not an
145 exactly fitting available chunk by preferring to use space adjacent
146 to that used for the previous small request, as well as by breaking
147 ties in approximately most-recently-used order. (These enhance
148 locality of series of small allocations.) And for very large requests
149 (>= 256Kb by default), it relies on system memory mapping
150 facilities, if supported. (This helps avoid carrying around and
151 possibly fragmenting memory used only for large chunks.)
153 All operations (except malloc_stats and mallinfo) have execution
154 times that are bounded by a constant factor of the number of bits in
155 a size_t, not counting any clearing in calloc or copying in realloc,
156 or actions surrounding MORECORE and MMAP that have times
157 proportional to the number of non-contiguous regions returned by
158 system allocation routines, which is often just 1. In real-time
159 applications, you can optionally suppress segment traversals using
160 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
161 system allocators return non-contiguous spaces, at the typical
162 expense of carrying around more memory and increased fragmentation.
164 The implementation is not very modular and seriously overuses
165 macros. Perhaps someday all C compilers will do as good a job
166 inlining modular code as can now be done by brute-force expansion,
167 but now, enough of them seem not to.
169 Some compilers issue a lot of warnings about code that is
170 dead/unreachable only on some platforms, and also about intentional
171 uses of negation on unsigned types. All known cases of each can be
172 ignored.
174 For a longer but out of date high-level description, see
175 http://gee.cs.oswego.edu/dl/html/malloc.html
178 If MSPACES is defined, then in addition to malloc, free, etc.,
179 this file also defines mspace_malloc, mspace_free, etc. These
180 are versions of malloc routines that take an "mspace" argument
181 obtained using create_mspace, to control all internal bookkeeping.
182 If ONLY_MSPACES is defined, only these versions are compiled.
183 So if you would like to use this allocator for only some allocations,
184 and your system malloc for others, you can compile with
185 ONLY_MSPACES and then do something like...
186 static mspace mymspace = create_mspace(0,0); // for example
187 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
189 (Note: If you only need one instance of an mspace, you can instead
190 use "USE_DL_PREFIX" to relabel the global malloc.)
192 You can similarly create thread-local allocators by storing
193 mspaces as thread-locals. For example:
194 static __thread mspace tlms = 0;
195 void* tlmalloc(size_t bytes) {
196 if (tlms == 0) tlms = create_mspace(0, 0);
197 return mspace_malloc(tlms, bytes);
198 }
199 void tlfree(void* mem) { mspace_free(tlms, mem); }
201 Unless FOOTERS is defined, each mspace is completely independent.
202 You cannot allocate from one and free to another (although
203 conformance is only weakly checked, so usage errors are not always
204 caught). If FOOTERS is defined, then each chunk carries around a tag
205 indicating its originating mspace, and frees are directed to their
206 originating spaces.
208 ------------------------- Compile-time options ---------------------------
210 Be careful in setting #define values for numerical constants of type
211 size_t. On some systems, literal values are not automatically extended
212 to size_t precision unless they are explicitly casted. You can also
213 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
215 WIN32 default: defined if _WIN32 defined
216 Defining WIN32 sets up defaults for MS environment and compilers.
217 Otherwise defaults are for unix. Beware that there seem to be some
218 cases where this malloc might not be a pure drop-in replacement for
219 Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
220 SetDIBits()) may be due to bugs in some video driver implementations
221 when pixel buffers are malloc()ed, and the region spans more than
222 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
223 default granularity, pixel buffers may straddle virtual allocation
224 regions more often than when using the Microsoft allocator. You can
225 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
226 buffers rather than using malloc(). If this is not possible,
227 recompile this malloc with a larger DEFAULT_GRANULARITY.
229 MALLOC_ALIGNMENT default: (size_t)8
230 Controls the minimum alignment for malloc'ed chunks. It must be a
231 power of two and at least 8, even on machines for which smaller
232 alignments would suffice. It may be defined as larger than this
233 though. Note however that code and data structures are optimized for
234 the case of 8-byte alignment.
236 MSPACES default: 0 (false)
237 If true, compile in support for independent allocation spaces.
238 This is only supported if HAVE_MMAP is true.
240 ONLY_MSPACES default: 0 (false)
241 If true, only compile in mspace versions, not regular versions.
243 USE_LOCKS default: 0 (false)
244 Causes each call to each public routine to be surrounded with
245 pthread or WIN32 mutex lock/unlock. (If set true, this can be
246 overridden on a per-mspace basis for mspace versions.) If set to a
247 non-zero value other than 1, locks are used, but their
248 implementation is left out, so lock functions must be supplied manually.
250 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and on x86 using gcc or MSC
251 If true, uses custom spin locks for locking. This is currently
252 supported only for x86 platforms using gcc or recent MS compilers.
253 Otherwise, posix locks or win32 critical sections are used.
255 FOOTERS default: 0
256 If true, provide extra checking and dispatching by placing
257 information in the footers of allocated chunks. This adds
258 space and time overhead.
260 INSECURE default: 0
261 If true, omit checks for usage errors and heap space overwrites.
263 USE_DL_PREFIX default: NOT defined
264 Causes compiler to prefix all public routines with the string 'dl'.
265 This can be useful when you only want to use this malloc in one part
266 of a program, using your regular system malloc elsewhere.
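For example (a sketch; compile this file with -DUSE_DL_PREFIX, after which
the two allocators can coexist in client code):

    extern void* dlmalloc(size_t);   /* exported by this file */
    extern void  dlfree(void*);

    void example(void) {
      void* a = dlmalloc(1000);      /* from this allocator            */
      void* b = malloc(1000);        /* from the regular system malloc */
      dlfree(a);
      free(b);
    }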
268 ABORT default: defined as abort()
269 Defines how to abort on failed checks. On most systems, a failed
270 check cannot die with an "assert" or even print an informative
271 message, because the underlying print routines in turn call malloc,
272 which will fail again. Generally, the best policy is to simply call
273 abort(). It's not very useful to do more than this because many
274 errors due to overwriting will show up as address faults (null, odd
275 addresses etc) rather than malloc-triggered checks, so will also
276 abort. Also, most compilers know that abort() does not return, so
277 can better optimize code conditionally calling it.
279 PROCEED_ON_ERROR default: defined as 0 (false)
280 Controls whether detected bad addresses cause them to be bypassed
281 rather than aborting. If set, detected bad arguments to free and
282 realloc are ignored. And all bookkeeping information is zeroed out
283 upon a detected overwrite of freed heap space, thus losing the
284 ability to ever return it from malloc again, but enabling the
285 application to proceed. If PROCEED_ON_ERROR is defined, the
286 static variable malloc_corruption_error_count is compiled in
287 and can be examined to see if errors have occurred. This option
288 generates slower code than the default abort policy.
290 DEBUG default: NOT defined
291 The DEBUG setting is mainly intended for people trying to modify
292 this code or diagnose problems when porting to new platforms.
293 However, it may also be able to better isolate user errors than just
294 using runtime checks. The assertions in the check routines spell
295 out in more detail the assumptions and invariants underlying the
296 algorithms. The checking is fairly extensive, and will slow down
297 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
298 set will attempt to check every non-mmapped allocated and free chunk
299 in the course of computing the summaries.
301 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
302 Debugging assertion failures can be nearly impossible if your
303 version of the assert macro causes malloc to be called, which will
304 lead to a cascade of further failures, blowing the runtime stack.
305 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
306 which will usually make debugging easier.
308 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
309 The action to take before "return 0" when malloc fails because there
310 is no memory available.
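For example, a sketch that keeps the default errno behavior but also
notifies the application (my_oom_hook is a hypothetical callback):

    extern void my_oom_hook(void);   /* hypothetical */
    #define MALLOC_FAILURE_ACTION  do { errno = ENOMEM; my_oom_hook(); } while (0)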
312 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
313 True if this system supports sbrk or an emulation of it.
315 MORECORE default: sbrk
316 The name of the sbrk-style system routine to call to obtain more
317 memory. See below for guidance on writing custom MORECORE
318 functions. The type of the argument to sbrk/MORECORE varies across
319 systems. It cannot be size_t, because it supports negative
320 arguments, so it is normally the signed type of the same width as
321 size_t (sometimes declared as "intptr_t"). It doesn't much matter
322 though. Internally, we only call it with arguments less than half
323 the max value of a size_t, which should work across all reasonable
324 possibilities, although sometimes generating compiler warnings.
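As a sketch of the kind of custom MORECORE this refers to (the names
my_morecore and ARENA_SIZE are illustrative; compile with
-DMORECORE=my_morecore and, because this version never shrinks,
-DMORECORE_CANNOT_TRIM):

    #include <stddef.h>                     /* ptrdiff_t, size_t */
    #define ARENA_SIZE ((size_t)8*1024*1024)
    static char arena[ARENA_SIZE];
    static size_t arena_used = 0;

    void* my_morecore(ptrdiff_t increment) {
      if (increment <= 0)
        return arena + arena_used;          /* current "break"; never shrinks */
      if ((size_t)increment > ARENA_SIZE - arena_used)
        return (void*)-1;                   /* failure: same bit pattern as MFAIL */
      {
        void* prev = arena + arena_used;    /* sbrk-style: return the old break */
        arena_used += (size_t)increment;
        return prev;
      }
    }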
326 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
327 If true, take advantage of fact that consecutive calls to MORECORE
328 with positive arguments always return contiguous increasing
329 addresses. This is true of unix sbrk. It does not hurt too much to
330 set it true anyway, since malloc copes with non-contiguities.
331 Setting it false when definitely non-contiguous saves time
332 and possibly wasted space it would take to discover this though.
334 MORECORE_CANNOT_TRIM default: NOT defined
335 True if MORECORE cannot release space back to the system when given
336 negative arguments. This is generally necessary only if you are
337 using a hand-crafted MORECORE function that cannot handle negative
338 arguments.
340 NO_SEGMENT_TRAVERSAL default: 0
341 If non-zero, suppresses traversals of memory segments
342 returned by either MORECORE or CALL_MMAP. This disables
343 merging of segments that are contiguous, and selectively
344 releasing them to the OS if unused, but bounds execution times.
346 HAVE_MMAP default: 1 (true)
347 True if this system supports mmap or an emulation of it. If so, and
348 HAVE_MORECORE is not true, MMAP is used for all system
349 allocation. If set and HAVE_MORECORE is true as well, MMAP is
350 primarily used to directly allocate very large blocks. It is also
351 used as a backup strategy in cases where MORECORE fails to provide
352 space from system. Note: A single call to MUNMAP is assumed to be
353 able to unmap memory that may have been allocated using multiple calls
354 to MMAP, so long as they are adjacent.
356 HAVE_MREMAP default: 1 on linux, else 0
357 If true realloc() uses mremap() to re-allocate large blocks and
358 extend or shrink allocation spaces.
360 MMAP_CLEARS default: 1 except on WINCE.
361 True if mmap clears memory so calloc doesn't need to. This is true
362 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
364 USE_BUILTIN_FFS default: 0 (i.e., not used)
365 Causes malloc to use the builtin ffs() function to compute indices.
366 Some compilers may recognize and intrinsify ffs to be faster than the
367 supplied C version. Also, the case of x86 using gcc is special-cased
368 to an asm instruction, so is already as fast as it can be, and so
369 this setting has no effect. Similarly for Win32 under recent MS compilers.
370 (On most x86s, the asm version is only slightly faster than the C version.)
372 malloc_getpagesize default: derive from system includes, or 4096.
373 The system page size. To the extent possible, this malloc manages
374 memory from the system in page-size units. This may be (and
375 usually is) a function rather than a constant. This is ignored
376 if WIN32, where page size is determined using GetSystemInfo during
377 initialization.
379 USE_DEV_RANDOM default: 0 (i.e., not used)
380 Causes malloc to use /dev/random to initialize secure magic seed for
381 stamping footers. Otherwise, the current time is used.
383 NO_MALLINFO default: 0
384 If defined, don't compile "mallinfo". This can be a simple way
385 of dealing with mismatches between system declarations and
386 those in this file.
388 MALLINFO_FIELD_TYPE default: size_t
389 The type of the fields in the mallinfo struct. This was originally
390 defined as "int" in SVID etc, but is more usefully defined as
391 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
393 REALLOC_ZERO_BYTES_FREES default: not defined
394 This should be set if a call to realloc with zero bytes should
395 be the same as a call to free. Some people think it should. Otherwise,
396 since this malloc returns a unique pointer for malloc(0), so does
397 realloc(p, 0).
399 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
400 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
401 LACKS_STDLIB_H default: NOT defined unless on WIN32
402 Define these if your system does not have these header files.
403 You might need to manually insert some of the declarations they provide.
405 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
406 system_info.dwAllocationGranularity in WIN32,
407 otherwise 64K.
408 Also settable using mallopt(M_GRANULARITY, x)
409 The unit for allocating and deallocating memory from the system. On
410 most systems with contiguous MORECORE, there is no reason to
411 make this more than a page. However, systems with MMAP tend to
412 either require or encourage larger granularities. You can increase
413 this value to prevent system allocation functions from being called so
414 often, especially if they are slow. The value must be at least one
415 page and must be a power of two. Setting to 0 causes initialization
416 to either page size or win32 region size. (Note: In previous
417 versions of malloc, the equivalent of this option was called
418 "TOP_PAD".)
420 DEFAULT_TRIM_THRESHOLD default: 2MB
421 Also settable using mallopt(M_TRIM_THRESHOLD, x)
422 The maximum amount of unused top-most memory to keep before
423 releasing via malloc_trim in free(). Automatic trimming is mainly
424 useful in long-lived programs using contiguous MORECORE. Because
425 trimming via sbrk can be slow on some systems, and can sometimes be
426 wasteful (in cases where programs immediately afterward allocate
427 more large chunks) the value should be high enough so that your
428 overall system performance would improve by releasing this much
429 memory. As a rough guide, you might set to a value close to the
430 average size of a process (program) running on your system.
431 Releasing this much memory would allow such a process to run in
432 memory. Generally, it is worth tuning trim thresholds when a
433 program undergoes phases where several large chunks are allocated
434 and released in ways that can reuse each other's storage, perhaps
435 mixed with phases where there are no such chunks at all. The trim
436 value must be greater than page size to have any useful effect. To
437 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
438 some people use of mallocing a huge space and then freeing it at
439 program startup, in an attempt to reserve system memory, doesn't
440 have the intended effect under automatic trimming, since that memory
441 will immediately be returned to the system.
443 DEFAULT_MMAP_THRESHOLD default: 256K
444 Also settable using mallopt(M_MMAP_THRESHOLD, x)
445 The request size threshold for using MMAP to directly service a
446 request. Requests of at least this size that cannot be allocated
447 using already-existing space will be serviced via mmap. (If enough
448 normal freed space already exists it is used instead.) Using mmap
449 segregates relatively large chunks of memory so that they can be
450 individually obtained and released from the host system. A request
451 serviced through mmap is never reused by any other request (at least
452 not directly; the system may just so happen to remap successive
453 requests to the same locations). Segregating space in this way has
454 the benefits that: Mmapped space can always be individually released
455 back to the system, which helps keep the system level memory demands
456 of a long-lived program low. Also, mapped memory doesn't become
457 `locked' between other chunks, as can happen with normally allocated
458 chunks, which means that even trimming via malloc_trim would not
459 release them. However, it has the disadvantage that the space
460 cannot be reclaimed, consolidated, and then used to service later
461 requests, as happens with normal chunks. The advantages of mmap
462 nearly always outweigh disadvantages for "large" chunks, but the
463 value of "large" may vary across systems. The default is an
464 empirically derived value that works well in most systems. You can
465 disable mmap by setting to MAX_SIZE_T.
467 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
468 The number of consolidated frees between checks to release
469 unused segments when freeing. When using non-contiguous segments,
470 especially with multiple mspaces, checking only for topmost space
471 doesn't always suffice to trigger trimming. To compensate for this,
472 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
473 current number of segments, if greater) try to release unused
474 segments to the OS when freeing chunks that result in
475 consolidation. The best value for this parameter is a compromise
476 between slowing down frees with relatively costly checks that
477 rarely trigger versus holding on to unused memory. To effectively
478 disable, set to MAX_SIZE_T. This may lead to a very slight speed
479 improvement at the expense of carrying around more memory.
482 /* Version identifier to allow people to support multiple versions */
483 #ifndef DLMALLOC_VERSION
484 #define DLMALLOC_VERSION 20804
485 #endif /* DLMALLOC_VERSION */
488 #define _GNU_SOURCE 1
496 #define LACKS_FCNTL_H
498 #endif /* _WIN32_WCE */
501 #define WIN32_LEAN_AND_MEAN
503 #define _WIN32_WINNT 0x403
507 #define HAVE_MORECORE 0
508 #define LACKS_UNISTD_H
509 #define LACKS_SYS_PARAM_H
510 #define LACKS_SYS_MMAN_H
511 #define LACKS_STRING_H
512 #define LACKS_STRINGS_H
513 #define LACKS_SYS_TYPES_H
514 #define LACKS_ERRNO_H
515 #ifndef MALLOC_FAILURE_ACTION
516 #define MALLOC_FAILURE_ACTION
517 #endif /* MALLOC_FAILURE_ACTION */
518 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
519 #define MMAP_CLEARS 0
521 #define MMAP_CLEARS 1
522 #endif /* _WIN32_WCE */
525 #if defined(DARWIN) || defined(_DARWIN)
526 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
527 #ifndef HAVE_MORECORE
528 #define HAVE_MORECORE 0
530 /* OSX allocators provide 16 byte alignment */
531 #ifndef MALLOC_ALIGNMENT
532 #define MALLOC_ALIGNMENT ((size_t)16U)
534 #endif /* HAVE_MORECORE */
537 #ifndef LACKS_SYS_TYPES_H
538 #include <sys/types.h> /* For size_t */
539 #endif /* LACKS_SYS_TYPES_H */
541 /* The maximum possible size_t value has all bits set */
542 #define MAX_SIZE_T (~(size_t)0)
545 #define ONLY_MSPACES 0 /* define to a value */
547 #define ONLY_MSPACES 1
548 #endif /* ONLY_MSPACES */
552 #else /* ONLY_MSPACES */
554 #endif /* ONLY_MSPACES */
556 #ifndef MALLOC_ALIGNMENT
557 #define MALLOC_ALIGNMENT ((size_t)8U)
558 #endif /* MALLOC_ALIGNMENT */
563 #define ABORT abort()
565 #ifndef ABORT_ON_ASSERT_FAILURE
566 #define ABORT_ON_ASSERT_FAILURE 1
567 #endif /* ABORT_ON_ASSERT_FAILURE */
568 #ifndef PROCEED_ON_ERROR
569 #define PROCEED_ON_ERROR 0
570 #endif /* PROCEED_ON_ERROR */
573 #endif /* USE_LOCKS */
574 #ifndef USE_SPIN_LOCKS
575 #if USE_LOCKS && (defined(__GNUC__) && ((defined(__i386__) || defined(__x86_64__)))) || (defined(_MSC_VER) && _MSC_VER>=1310)
576 #define USE_SPIN_LOCKS 1
578 #define USE_SPIN_LOCKS 0
579 #endif /* USE_LOCKS && ... */
580 #endif /* USE_SPIN_LOCKS */
583 #endif /* INSECURE */
586 #endif /* HAVE_MMAP */
588 #define MMAP_CLEARS 1
589 #endif /* MMAP_CLEARS */
592 #define HAVE_MREMAP 1
594 #define HAVE_MREMAP 0
596 #endif /* HAVE_MREMAP */
597 #ifndef MALLOC_FAILURE_ACTION
598 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
599 #endif /* MALLOC_FAILURE_ACTION */
600 #ifndef HAVE_MORECORE
602 #define HAVE_MORECORE 0
603 #else /* ONLY_MSPACES */
604 #define HAVE_MORECORE 1
605 #endif /* ONLY_MSPACES */
606 #endif /* HAVE_MORECORE */
608 #define MORECORE_CONTIGUOUS 0
609 #else /* !HAVE_MORECORE */
610 #define MORECORE_DEFAULT sbrk
611 #ifndef MORECORE_CONTIGUOUS
612 #define MORECORE_CONTIGUOUS 1
613 #endif /* MORECORE_CONTIGUOUS */
614 #endif /* HAVE_MORECORE */
615 #ifndef DEFAULT_GRANULARITY
616 #if (MORECORE_CONTIGUOUS || defined(WIN32))
617 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
618 #else /* MORECORE_CONTIGUOUS */
619 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
620 #endif /* MORECORE_CONTIGUOUS */
621 #endif /* DEFAULT_GRANULARITY */
622 #ifndef DEFAULT_TRIM_THRESHOLD
623 #ifndef MORECORE_CANNOT_TRIM
624 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
625 #else /* MORECORE_CANNOT_TRIM */
626 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
627 #endif /* MORECORE_CANNOT_TRIM */
628 #endif /* DEFAULT_TRIM_THRESHOLD */
629 #ifndef DEFAULT_MMAP_THRESHOLD
631 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
632 #else /* HAVE_MMAP */
633 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
634 #endif /* HAVE_MMAP */
635 #endif /* DEFAULT_MMAP_THRESHOLD */
636 #ifndef MAX_RELEASE_CHECK_RATE
638 #define MAX_RELEASE_CHECK_RATE 4095
640 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
641 #endif /* HAVE_MMAP */
642 #endif /* MAX_RELEASE_CHECK_RATE */
643 #ifndef USE_BUILTIN_FFS
644 #define USE_BUILTIN_FFS 0
645 #endif /* USE_BUILTIN_FFS */
646 #ifndef USE_DEV_RANDOM
647 #define USE_DEV_RANDOM 0
648 #endif /* USE_DEV_RANDOM */
650 #define NO_MALLINFO 0
651 #endif /* NO_MALLINFO */
652 #ifndef MALLINFO_FIELD_TYPE
653 #define MALLINFO_FIELD_TYPE size_t
654 #endif /* MALLINFO_FIELD_TYPE */
655 #ifndef NO_SEGMENT_TRAVERSAL
656 #define NO_SEGMENT_TRAVERSAL 0
657 #endif /* NO_SEGMENT_TRAVERSAL */
660 mallopt tuning options. SVID/XPG defines four standard parameter
661 numbers for mallopt, normally defined in malloc.h. None of these
662 are used in this malloc, so setting them has no effect. But this
663 malloc does support the following options.
666 #define M_TRIM_THRESHOLD (-1)
667 #define M_GRANULARITY (-2)
668 #define M_MMAP_THRESHOLD (-3)
670 /* ------------------------ Mallinfo declarations ------------------------ */
674 This version of malloc supports the standard SVID/XPG mallinfo
675 routine that returns a struct containing usage properties and
676 statistics. It should work on any system that has a
677 /usr/include/malloc.h defining struct mallinfo. The main
678 declaration needed is the mallinfo struct that is returned (by-copy)
679 by mallinfo(). The mallinfo struct contains a bunch of fields that
680 are not even meaningful in this version of malloc. These fields
681 are instead filled by mallinfo() with other numbers that might be of
682 interest.
684 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
685 /usr/include/malloc.h file that includes a declaration of struct
686 mallinfo. If so, it is included; else a compliant version is
687 declared below. These must be precisely the same for mallinfo() to
688 work. The original SVID version of this struct, defined on most
689 systems with mallinfo, declares all fields as ints. But some others
690 define as unsigned long. If your system defines the fields using a
691 type of different width than listed here, you MUST #include your
692 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
695 /* #define HAVE_USR_INCLUDE_MALLOC_H */
697 #ifdef HAVE_USR_INCLUDE_MALLOC_H
698 #include "/usr/include/malloc.h"
699 #else /* HAVE_USR_INCLUDE_MALLOC_H */
700 #ifndef STRUCT_MALLINFO_DECLARED
701 #define STRUCT_MALLINFO_DECLARED 1
702 struct mallinfo {
703 MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
704 MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
705 MALLINFO_FIELD_TYPE smblks;   /* always 0 */
706 MALLINFO_FIELD_TYPE hblks;    /* always 0 */
707 MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
708 MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
709 MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
710 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
711 MALLINFO_FIELD_TYPE fordblks; /* total free space */
712 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
713 };
714 #endif /* STRUCT_MALLINFO_DECLARED */
715 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
716 #endif /* NO_MALLINFO */
719 Try to persuade compilers to inline. The most critical functions for
720 inlining are defined as macros, so these aren't used for them.
724 #if defined(__GNUC__)
725 #define FORCEINLINE __inline __attribute__ ((always_inline))
726 #elif defined(_MSC_VER)
727 #define FORCEINLINE __forceinline
731 #if defined(__GNUC__)
732 #define NOINLINE __attribute__ ((noinline))
733 #elif defined(_MSC_VER)
734 #define NOINLINE __declspec(noinline)
743 #define FORCEINLINE inline
745 #endif /* __cplusplus */
752 /* ------------------- Declarations of public routines ------------------- */
754 #ifndef USE_DL_PREFIX
755 #define dlcalloc calloc
756 #define dlfree free
757 #define dlmalloc malloc
758 #define dlmemalign memalign
759 #define dlrealloc realloc
760 #define dlvalloc valloc
761 #define dlpvalloc pvalloc
762 #define dlmallinfo mallinfo
763 #define dlmallopt mallopt
764 #define dlmalloc_trim malloc_trim
765 #define dlmalloc_stats malloc_stats
766 #define dlmalloc_usable_size malloc_usable_size
767 #define dlmalloc_footprint malloc_footprint
768 #define dlmalloc_max_footprint malloc_max_footprint
769 #define dlindependent_calloc independent_calloc
770 #define dlindependent_comalloc independent_comalloc
771 #endif /* USE_DL_PREFIX */
775 malloc(size_t n)
776 Returns a pointer to a newly allocated chunk of at least n bytes, or
777 null if no space is available, in which case errno is set to ENOMEM
778 on ANSI C systems.
780 If n is zero, malloc returns a minimum-sized chunk. (The minimum
781 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
782 systems.) Note that size_t is an unsigned type, so calls with
783 arguments that would be negative if signed are interpreted as
784 requests for huge amounts of space, which will often fail. The
785 maximum supported value of n differs across systems, but is in all
786 cases less than the maximum representable value of a size_t.
788 void* dlmalloc(size_t);
791 free(void* p)
792 Releases the chunk of memory pointed to by p, which had been previously
793 allocated using malloc or a related routine such as realloc.
794 It has no effect if p is null. If p was not malloced or already
795 freed, free(p) will by default cause the current program to abort.
797 void dlfree(void*);
800 calloc(size_t n_elements, size_t element_size);
801 Returns a pointer to n_elements * element_size bytes, with all locations
802 set to zero.
804 void* dlcalloc(size_t, size_t);
807 realloc(void* p, size_t n)
808 Returns a pointer to a chunk of size n that contains the same data
809 as does chunk p up to the minimum of (n, p's size) bytes, or null
810 if no space is available.
812 The returned pointer may or may not be the same as p. The algorithm
813 prefers extending p in most cases when possible, otherwise it
814 employs the equivalent of a malloc-copy-free sequence.
816 If p is null, realloc is equivalent to malloc.
818 If space is not available, realloc returns null, errno is set (if on
819 ANSI) and p is NOT freed.
821 If n is for fewer bytes than already held by p, the newly unused
822 space is lopped off and freed if possible. realloc with a size
823 argument of zero (re)allocates a minimum-sized chunk.
825 The old unix realloc convention of allowing the last-free'd chunk
826 to be used as an argument to realloc is not supported.
829 void* dlrealloc(void*, size_t);
832 memalign(size_t alignment, size_t n);
833 Returns a pointer to a newly allocated chunk of n bytes, aligned
834 in accord with the alignment argument.
836 The alignment argument should be a power of two. If the argument is
837 not a power of two, the nearest greater power is used.
838 8-byte alignment is guaranteed by normal malloc calls, so don't
839 bother calling memalign with an argument of 8 or less.
841 Overreliance on memalign is a sure way to fragment space.
843 void* dlmemalign(size_t, size_t);
847 Equivalent to memalign(pagesize, n), where pagesize is the page
848 size of the system. If the pagesize is unknown, 4096 is used.
850 void* dlvalloc(size_t);
853 mallopt(int parameter_number, int parameter_value)
854 Sets tunable parameters. The format is to provide a
855 (parameter-number, parameter-value) pair. mallopt then sets the
856 corresponding parameter to the argument value if it can (i.e., so
857 long as the value is meaningful), and returns 1 if successful else
858 0. To work around the fact that mallopt is specified to use int,
859 not size_t parameters, the value -1 is specially treated as the
860 maximum unsigned size_t value.
862 SVID/XPG/ANSI defines four standard param numbers for mallopt,
863 normally defined in malloc.h. None of these are used in this malloc,
864 so setting them has no effect. But this malloc also supports other
865 options in mallopt. See below for details. Briefly, supported
866 parameters are as follows (listed defaults are for "typical"
867 configurations).
869 Symbol            param #  default      allowed param values
870 M_TRIM_THRESHOLD     -1    2*1024*1024  any   (-1 disables)
871 M_GRANULARITY        -2    page size    any power of 2 >= page size
872 M_MMAP_THRESHOLD     -3    256*1024     any   (or 0 if no MMAP support)
874 int dlmallopt(int, int);
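Usage sketch (each call returns 1 on success, 0 if the value is rejected):

    mallopt(M_MMAP_THRESHOLD, 1024*1024); /* mmap chunks of 1MB and larger */
    mallopt(M_GRANULARITY, 128*1024);     /* grow the heap 128K at a time  */
    mallopt(M_TRIM_THRESHOLD, -1);        /* -1 is treated as MAX_SIZE_T,
                                             effectively disabling trimming */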
877 malloc_footprint();
878 Returns the number of bytes obtained from the system. The total
879 number of bytes allocated by malloc, realloc etc., is less than this
880 value. Unlike mallinfo, this function returns only a precomputed
881 result, so can be called frequently to monitor memory consumption.
882 Even if locks are otherwise defined, this function does not use them,
883 so results might not be up to date.
885 size_t dlmalloc_footprint(void);
888 malloc_max_footprint();
889 Returns the maximum number of bytes obtained from the system. This
890 value will be greater than current footprint if deallocated space
891 has been reclaimed by the system. The peak number of bytes allocated
892 by malloc, realloc etc., is less than this value. Unlike mallinfo,
893 this function returns only a precomputed result, so can be called
894 frequently to monitor memory consumption. Even if locks are
895 otherwise defined, this function does not use them, so results might
898 size_t dlmalloc_max_footprint(void);
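A sketch of lightweight monitoring built on these two calls
(report_memory is an illustrative name):

    #include <stdio.h>
    void report_memory(void) {
      size_t cur  = malloc_footprint();
      size_t peak = malloc_max_footprint();
      fprintf(stderr, "heap: %lu bytes from system (peak %lu)\n",
              (unsigned long)cur, (unsigned long)peak);
    }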
903 Returns (by copy) a struct containing various summary statistics:
905 arena: current total non-mmapped bytes allocated from system
906 ordblks: the number of free chunks
907 smblks: always 0
908 hblks: current number of mmapped regions
909 hblkhd: total bytes held in mmapped regions
910 usmblks: the maximum total allocated space. This will be greater
911 than current total if trimming has occurred.
912 fsmblks: always 0
913 uordblks: current total allocated space (normal or mmapped)
914 fordblks: total free space
915 keepcost: the maximum number of bytes that could ideally be released
916 back to system via malloc_trim. ("ideally" means that
917 it ignores page restrictions etc.)
919 Because these fields are ints, but internal bookkeeping may
920 be kept as longs, the reported values may wrap around zero and
921 thus be inaccurate.
923 struct mallinfo dlmallinfo(void);
924 #endif /* NO_MALLINFO */
927 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
929 independent_calloc is similar to calloc, but instead of returning a
930 single cleared space, it returns an array of pointers to n_elements
931 independent elements that can hold contents of size elem_size, each
932 of which starts out cleared, and can be independently freed,
933 realloc'ed etc. The elements are guaranteed to be adjacently
934 allocated (this is not guaranteed to occur with multiple callocs or
935 mallocs), which may also improve cache locality in some
936 applications.
938 The "chunks" argument is optional (i.e., may be null, which is
939 probably the most typical usage). If it is null, the returned array
940 is itself dynamically allocated and should also be freed when it is
941 no longer needed. Otherwise, the chunks array must be of at least
942 n_elements in length. It is filled in with the pointers to the
943 chunks.
945 In either case, independent_calloc returns this pointer array, or
946 null if the allocation failed. If n_elements is zero and "chunks"
947 is null, it returns a chunk representing an array with zero elements
948 (which should be freed if not wanted).
950 Each element must be individually freed when it is no longer
951 needed. If you'd like to instead be able to free all at once, you
952 should instead use regular calloc and assign pointers into this
953 space to represent elements. (In this case though, you cannot
954 independently free elements.)
956 independent_calloc simplifies and speeds up implementations of many
957 kinds of pools. It may also be useful when constructing large data
958 structures that initially have a fixed number of fixed-sized nodes,
959 but the number is not known at compile time, and some of the nodes
960 may later need to be freed. For example:
962 struct Node { int item; struct Node* next; };
964 struct Node* build_list() {
965 struct Node** pool;
966 int n = read_number_of_nodes_needed();
967 if (n <= 0) return 0;
968 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
969 if (pool == 0) die();
970 // organize into a linked list...
971 struct Node* first = pool[0];
972 for (int i = 0; i < n-1; ++i)
973 pool[i]->next = pool[i+1];
974 free(pool); // Can now free the array (or not, if it is needed later)
975 return first;
976 }
978 void** dlindependent_calloc(size_t, size_t, void**);
981 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
983 independent_comalloc allocates, all at once, a set of n_elements
984 chunks with sizes indicated in the "sizes" array. It returns
985 an array of pointers to these elements, each of which can be
986 independently freed, realloc'ed etc. The elements are guaranteed to
987 be adjacently allocated (this is not guaranteed to occur with
988 multiple callocs or mallocs), which may also improve cache locality
989 in some applications.
991 The "chunks" argument is optional (i.e., may be null). If it is null
992 the returned array is itself dynamically allocated and should also
993 be freed when it is no longer needed. Otherwise, the chunks array
994 must be of at least n_elements in length. It is filled in with the
995 pointers to the chunks.
997 In either case, independent_comalloc returns this pointer array, or
998 null if the allocation failed. If n_elements is zero and chunks is
999 null, it returns a chunk representing an array with zero elements
1000 (which should be freed if not wanted).
1002 Each element must be individually freed when it is no longer
1003 needed. If you'd like to instead be able to free all at once, you
1004 should instead use a single regular malloc, and assign pointers at
1005 particular offsets in the aggregate space. (In this case though, you
1006 cannot independently free elements.)
1008 independent_comalloc differs from independent_calloc in that each
1009 element may have a different size, and also that it does not
1010 automatically clear elements.
1012 independent_comalloc can be used to speed up allocation in cases
1013 where several structs or objects must always be allocated at the
1014 same time. For example:
1016 struct Head { ... };
1017 struct Foot { ... };
1019 void send_message(char* msg) {
1020 int msglen = strlen(msg);
1021 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1022 void* chunks[3];
1023 if (independent_comalloc(3, sizes, chunks) == 0)
1024 die();
1025 struct Head* head = (struct Head*)(chunks[0]);
1026 char* body = (char*)(chunks[1]);
1027 struct Foot* foot = (struct Foot*)(chunks[2]);
1028 // ...
1029 }
1031 In general though, independent_comalloc is worth using only for
1032 larger values of n_elements. For small values, you probably won't
1033 detect enough difference from a series of malloc calls to bother.
1035 Overuse of independent_comalloc can increase overall memory usage,
1036 since it cannot reuse existing noncontiguous small chunks that
1037 might be available for some of the elements.
1039 void** dlindependent_comalloc(size_t, size_t*, void**);
1044 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1045 round up n to nearest pagesize.
1047 void* dlpvalloc(size_t);
1050 malloc_trim(size_t pad);
1052 If possible, gives memory back to the system (via negative arguments
1053 to sbrk) if there is unused memory at the `high' end of the malloc
1054 pool or in unused MMAP segments. You can call this after freeing
1055 large blocks of memory to potentially reduce the system-level memory
1056 requirements of a program. However, it cannot guarantee to reduce
1057 memory. Under some allocation patterns, some large free blocks of
1058 memory will be locked between two used chunks, so they cannot be
1059 given back to the system.
1061 The `pad' argument to malloc_trim represents the amount of free
1062 trailing space to leave untrimmed. If this argument is zero, only
1063 the minimum amount of memory to maintain internal data structures
1064 will be left. Non-zero arguments can be supplied to maintain enough
1065 trailing space to service future expected allocations without having
1066 to re-obtain memory from the system.
1068 Malloc_trim returns 1 if it actually released any memory, else 0.
1070 int dlmalloc_trim(size_t);
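Typical usage sketch (big_block stands for some previously malloc'ed
working set):

    free(big_block);
    (void)malloc_trim(64 * 1024);   /* keep ~64K of slack; returns 1 if any
                                       memory was given back to the system */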
1074 Prints on stderr the amount of space obtained from the system (both
1075 via sbrk and mmap), the maximum amount (which may be more than
1076 current if malloc_trim and/or munmap got called), and the current
1077 number of bytes allocated via malloc (or realloc, etc) but not yet
1078 freed. Note that this is the number of bytes allocated, not the
1079 number requested. It will be larger than the number requested
1080 because of alignment and bookkeeping overhead. Because it includes
1081 alignment wastage as being in use, this figure may be greater than
1082 zero even when no user-level chunks are allocated.
1084 The reported current and maximum system memory can be inaccurate if
1085 a program makes other calls to system memory allocation functions
1086 (normally sbrk) outside of malloc.
1088 malloc_stats prints only the most commonly interesting statistics.
1089 More information can be obtained by calling mallinfo.
1091 void dlmalloc_stats(void);
1093 #endif /* ONLY_MSPACES */
1096 malloc_usable_size(void* p);
1098 Returns the number of bytes you can actually use in
1099 an allocated chunk, which may be more than you requested (although
1100 often not) due to alignment and minimum size constraints.
1101 You can use this many bytes without worrying about
1102 overwriting other allocated objects. This is not a particularly great
1103 programming practice. malloc_usable_size can be more useful in
1104 debugging and assertions, for example:
1106 p = malloc(...);
1107 assert(malloc_usable_size(p) >= 256);
1109 size_t dlmalloc_usable_size(void*);
1115 mspace is an opaque type representing an independent
1116 region of space that supports mspace_malloc, etc.
1118 typedef void* mspace;
1121 create_mspace creates and returns a new independent space with the
1122 given initial capacity, or, if 0, the default granularity size. It
1123 returns null if there is no system memory available to create the
1124 space. If argument locked is non-zero, the space uses a separate
1125 lock to control access. The capacity of the space will grow
1126 dynamically as needed to service mspace_malloc requests. You can
1127 control the sizes of incremental increases of this space by
1128 compiling with a different DEFAULT_GRANULARITY or dynamically
1129 setting with mallopt(M_GRANULARITY, value).
1131 mspace create_mspace(size_t capacity, int locked);
1134 destroy_mspace destroys the given space, and attempts to return all
1135 of its memory back to the system, returning the total number of
1136 bytes freed. After destruction, the results of access to all memory
1137 used by the space become undefined.
1139 size_t destroy_mspace(mspace msp);
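A sketch of the arena-per-phase pattern these two calls enable:

    mspace arena = create_mspace(0, 0);     /* default capacity, no locking */
    if (arena != 0) {
      void* a = mspace_malloc(arena, 1000);
      void* b = mspace_malloc(arena, 2000);
      /* ... use a and b ... */
      destroy_mspace(arena);  /* releases a, b, and all other arena memory */
    }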
1142 create_mspace_with_base uses the memory supplied as the initial base
1143 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1144 space is used for bookkeeping, so the capacity must be at least this
1145 large. (Otherwise 0 is returned.) When this initial space is
1146 exhausted, additional memory will be obtained from the system.
1147 Destroying this space will deallocate all additionally allocated
1148 space (if possible) but not the initial base.
1150 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
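For example (a sketch using a static buffer as the initial base):

    static char buffer[256 * 1024];
    mspace ms = create_mspace_with_base(buffer, sizeof(buffer), 0);
    if (ms != 0) {
      void* p = mspace_malloc(ms, 4096);
      mspace_free(ms, p);
      destroy_mspace(ms);   /* returns extra system memory, but not buffer */
    }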
1153 mspace_mmap_large_chunks controls whether requests for large chunks
1154 are allocated in their own mmapped regions, separate from others in
1155 this mspace. By default this is enabled, which reduces
1156 fragmentation. However, such chunks are not necessarily released to
1157 the system upon destroy_mspace. Disabling by setting to false may
1158 increase fragmentation, but avoids leakage when relying on
1159 destroy_mspace to release all memory allocated using this space.
1161 int mspace_mmap_large_chunks(mspace msp, int enable);
1165 mspace_malloc behaves as malloc, but operates within
1166 the given space.
1168 void* mspace_malloc(mspace msp, size_t bytes);
1171 mspace_free behaves as free, but operates within
1172 the given space.
1174 If compiled with FOOTERS==1, mspace_free is not actually needed.
1175 free may be called instead of mspace_free because freed chunks from
1176 any space are handled by their originating spaces.
1178 void mspace_free(mspace msp, void* mem);
1181 mspace_realloc behaves as realloc, but operates within
1182 the given space.
1184 If compiled with FOOTERS==1, mspace_realloc is not actually
1185 needed. realloc may be called instead of mspace_realloc because
1186 realloced chunks from any space are handled by their originating
1187 spaces.
1189 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1192 mspace_calloc behaves as calloc, but operates within
1193 the given space.
1195 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1198 mspace_memalign behaves as memalign, but operates within
1199 the given space.
1201 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1204 mspace_independent_calloc behaves as independent_calloc, but
1205 operates within the given space.
1207 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1208 size_t elem_size, void* chunks[]);
1211 mspace_independent_comalloc behaves as independent_comalloc, but
1212 operates within the given space.
1214 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1215 size_t sizes[], void* chunks[]);
1218 mspace_footprint() returns the number of bytes obtained from the
1219 system for this space.
1221 size_t mspace_footprint(mspace msp);
1224 mspace_max_footprint() returns the peak number of bytes obtained from the
1225 system for this space.
1227 size_t mspace_max_footprint(mspace msp);
1232 mspace_mallinfo behaves as mallinfo, but reports properties of
1233 the given space.
1235 struct mallinfo mspace_mallinfo(mspace msp);
1236 #endif /* NO_MALLINFO */
1239 mspace_usable_size(void* p) behaves the same as malloc_usable_size;
1241 size_t mspace_usable_size(void* mem);
1244 mspace_malloc_stats behaves as malloc_stats, but reports
1245 properties of the given space.
1247 void mspace_malloc_stats(mspace msp);
1250 mspace_trim behaves as malloc_trim, but
1251 operates within the given space.
1253 int mspace_trim(mspace msp, size_t pad);
1256 An alias for mallopt.
1258 int mspace_mallopt(int, int);
1260 #endif /* MSPACES */
1263 }; /* end of extern "C" */
1264 #endif /* __cplusplus */
1267 ========================================================================
1268 To make a fully customizable malloc.h header file, cut everything
1269 above this line, put into file malloc.h, edit to suit, and #include it
1270 on the next line, as well as in programs that use this malloc.
1271 ========================================================================
1274 /* #include "malloc.h" */
1276 /*------------------------------ internal #includes ---------------------- */
1280 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1284 #include <stdio.h> /* for printing in malloc_stats */
1286 #ifndef LACKS_ERRNO_H
1287 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1288 #endif /* LACKS_ERRNO_H */
1290 #include <time.h> /* for magic initialization */
1291 #endif /* FOOTERS */
1292 #ifndef LACKS_STDLIB_H
1293 #include <stdlib.h> /* for abort() */
1294 #endif /* LACKS_STDLIB_H */
1296 #if ABORT_ON_ASSERT_FAILURE
1297 #define assert(x) if(!(x)) ABORT
1298 #else /* ABORT_ON_ASSERT_FAILURE */
1300 #endif /* ABORT_ON_ASSERT_FAILURE */
1307 #ifndef LACKS_STRING_H
1308 #include <string.h> /* for memset etc */
1309 #endif /* LACKS_STRING_H */
1311 #ifndef LACKS_STRINGS_H
1312 #include <strings.h> /* for ffs */
1313 #endif /* LACKS_STRINGS_H */
1314 #endif /* USE_BUILTIN_FFS */
1316 #ifndef LACKS_SYS_MMAN_H
1317 #include <sys/mman.h> /* for mmap */
1318 #endif /* LACKS_SYS_MMAN_H */
1319 #ifndef LACKS_FCNTL_H
1321 #endif /* LACKS_FCNTL_H */
1322 #endif /* HAVE_MMAP */
1323 #ifndef LACKS_UNISTD_H
1324 #include <unistd.h> /* for sbrk, sysconf */
1325 #else /* LACKS_UNISTD_H */
1326 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1327 extern void* sbrk(ptrdiff_t);
1328 #endif /* FreeBSD etc */
1329 #endif /* LACKS_UNISTD_H */
1331 /* Declarations for locking */
1334 #include <pthread.h>
1335 #if defined (__SVR4) && defined (__sun) /* solaris */
1337 #endif /* solaris */
1340 /* These are already defined on AMD64 builds */
1343 #endif /* __cplusplus */
1345 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1346 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1350 #endif /* __cplusplus */
1351 #endif /* _M_AMD64 */
1353 #pragma intrinsic (_InterlockedCompareExchange)
1354 #pragma intrinsic (_InterlockedExchange)
1356 /* --[ start GCC compatibility ]----------------------------------------------
1357 * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic
1358 * Microsoft Visual C++ functions. Originally developed for the ReactOS
1359 * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>)
1362 * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com>
1364 * Permission is hereby granted, free of charge, to any person obtaining a
1365 * copy of this software and associated documentation files (the "Software"),
1366 * to deal in the Software without restriction, including without limitation
1367 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
1368 * and/or sell copies of the Software, and to permit persons to whom the
1369 * Software is furnished to do so, subject to the following conditions:
1371 * The above copyright notice and this permission notice shall be included in
1372 * all copies or substantial portions of the Software.
1374 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
1375 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
1376 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
1377 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
1378 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
1379 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
1380 * DEALINGS IN THE SOFTWARE.
1383 /*** Atomic operations ***/
1384 #if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
1385 #define _ReadWriteBarrier() __sync_synchronize()
1387 static __inline__ __attribute__((always_inline)) long __sync_lock_test_and_set(volatile long * const Target, const long Value)
1388 {
1389 long res;
1390 __asm__ __volatile__("xchg%z0 %2, %0" : "=g" (*(Target)), "=r" (res) : "1" (Value));
1391 return res;
1392 }
1393 static void __inline__ __attribute__((always_inline)) _MemoryBarrier(void)
1394 {
1395 __asm__ __volatile__("" : : : "memory");
1396 }
1397 #define _ReadWriteBarrier() _MemoryBarrier()
1399 /* BUGBUG: GCC only supports full barriers */
1400 static __inline__ __attribute__((always_inline)) long _InterlockedExchange(volatile long * const Target, const long Value)
1401 {
1402 /* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */
1403 _ReadWriteBarrier();
1404 return __sync_lock_test_and_set(Target, Value);
1405 }
1406 /* --[ end GCC compatibility ]---------------------------------------------- */
1408 #define interlockedcompareexchange _InterlockedCompareExchange
1409 #define interlockedexchange _InterlockedExchange
1411 #endif /* USE_LOCKS */
1413 /* Declarations for bit scanning on win32 */
1414 #if defined(_MSC_VER) && _MSC_VER>=1300
1415 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1418 #endif /* __cplusplus */
1419 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1420 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1423 #endif /* __cplusplus */
1425 #define BitScanForward _BitScanForward
1426 #define BitScanReverse _BitScanReverse
1427 #pragma intrinsic(_BitScanForward)
1428 #pragma intrinsic(_BitScanReverse)
1429 #endif /* BitScanForward */
1430 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1433 #ifndef malloc_getpagesize
1434 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1435 # ifndef _SC_PAGE_SIZE
1436 # define _SC_PAGE_SIZE _SC_PAGESIZE
1439 # ifdef _SC_PAGE_SIZE
1440 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1442 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1443 extern size_t getpagesize();
1444 # define malloc_getpagesize getpagesize()
1446 # ifdef WIN32 /* use supplied emulation of getpagesize */
1447 # define malloc_getpagesize getpagesize()
1449 # ifndef LACKS_SYS_PARAM_H
1450 # include <sys/param.h>
1452 # ifdef EXEC_PAGESIZE
1453 # define malloc_getpagesize EXEC_PAGESIZE
1457 # define malloc_getpagesize NBPG
1459 # define malloc_getpagesize (NBPG * CLSIZE)
1463 # define malloc_getpagesize NBPC
1466 # define malloc_getpagesize PAGESIZE
1467 # else /* just guess */
1468 # define malloc_getpagesize ((size_t)4096U)
1481 /* ------------------- size_t and alignment properties -------------------- */
1483 /* The byte and bit size of a size_t */
1484 #define SIZE_T_SIZE (sizeof(size_t))
1485 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1487 /* Some constants coerced to size_t */
1488 /* Annoying but necessary to avoid errors on some platforms */
1489 #define SIZE_T_ZERO ((size_t)0)
1490 #define SIZE_T_ONE ((size_t)1)
1491 #define SIZE_T_TWO ((size_t)2)
1492 #define SIZE_T_FOUR ((size_t)4)
1493 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1494 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1495 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1496 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1498 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1499 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1501 /* True if address a has acceptable alignment */
1502 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1504 /* the number of bytes to offset an address to align it */
1505 #define align_offset(A)\
1506 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1507 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
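/*
  Worked example (illustrative only, assuming the default MALLOC_ALIGNMENT of 8,
  so CHUNK_ALIGN_MASK == 7): an address such as 0x1003 needs 5 more bytes to
  reach the next aligned address 0x1008, while 0x1008 itself needs none.

    size_t off1 = align_offset((void*)0x1003);  // (8 - (0x1003 & 7)) & 7 == 5
    size_t off2 = align_offset((void*)0x1008);  // already aligned, so 0
*/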
1509 /* -------------------------- MMAP preliminaries ------------------------- */
1512 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1513 checks to fail so compiler optimizer can delete code rather than
1514 using so many "#if"s.
1518 /* MORECORE and MMAP must return MFAIL on failure */
1519 #define MFAIL ((void*)(MAX_SIZE_T))
1520 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1525 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1526 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1527 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1528 #define MAP_ANONYMOUS MAP_ANON
1529 #endif /* MAP_ANON */
1530 #ifdef MAP_ANONYMOUS
1531 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1532 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1533 #else /* MAP_ANONYMOUS */
1535 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1536 is unlikely to be needed, but is supplied just in case.
1538 #define MMAP_FLAGS (MAP_PRIVATE)
1539 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1540 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1541 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1542 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1543 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1544 #endif /* MAP_ANONYMOUS */
1546 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1550 /* Win32 MMAP via VirtualAlloc */
1551 static FORCEINLINE void* win32mmap(size_t size) {
1552   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1553   return (ptr != 0)? ptr: MFAIL;
}
1556 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1557 static FORCEINLINE void* win32direct_mmap(size_t size) {
1558   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                               PAGE_READWRITE);
1560   return (ptr != 0)? ptr: MFAIL;
}
1563 /* This function supports releasing coalesced segments */
1564 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1565   MEMORY_BASIC_INFORMATION minfo;
1566   char* cptr = (char*)ptr;
  while (size) {
1568     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
1570     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1571         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
1573     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
1575     cptr += minfo.RegionSize;
1576     size -= minfo.RegionSize;
  }
  return 0;
}
1581 #define MMAP_DEFAULT(s) win32mmap(s)
1582 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1583 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1585 #endif /* HAVE_MMAP */
1589 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1591 #endif /* HAVE_MREMAP */
1595 * Define CALL_MORECORE
1599 #define CALL_MORECORE(S) MORECORE(S)
1600 #else /* MORECORE */
1601 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1602 #endif /* MORECORE */
1603 #else /* HAVE_MORECORE */
1604 #define CALL_MORECORE(S) MFAIL
1605 #endif /* HAVE_MORECORE */
1608 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1611 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1612 #define USE_MMAP_BIT (SIZE_T_ONE)
1615 #define CALL_MMAP(s) MMAP(s)
1617 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1620 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1622 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1625 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1626 #else /* DIRECT_MMAP */
1627 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1628 #endif /* DIRECT_MMAP */
1629 #else /* HAVE_MMAP */
1630 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1631 #define USE_MMAP_BIT (SIZE_T_ZERO)
1633 #define MMAP(s) MFAIL
1634 #define MUNMAP(a, s) (-1)
1635 #define DIRECT_MMAP(s) MFAIL
1636 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1637 #define CALL_MMAP(s) MMAP(s)
1638 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1639 #endif /* HAVE_MMAP */
1642 * Define CALL_MREMAP
1644 #if HAVE_MMAP && HAVE_MREMAP
1646 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1648 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1650 #else /* HAVE_MMAP && HAVE_MREMAP */
1651 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1652 #endif /* HAVE_MMAP && HAVE_MREMAP */
1654 /* mstate bit set if contiguous morecore disabled or failed */
1655 #define USE_NONCONTIGUOUS_BIT (4U)
1657 /* segment bit set in create_mspace_with_base */
1658 #define EXTERN_BIT (8U)
1661 /* --------------------------- Lock preliminaries ------------------------ */
1664 When locks are defined, there is one global lock, plus
1665 one per-mspace lock.
1667 The global lock ensures that mparams.magic and other unique
1668 mparams values are initialized only once. It also protects
1669 sequences of calls to MORECORE. In many cases sys_alloc requires
1670 two calls that should not be interleaved with calls by other
1671 threads. This does not protect against direct calls to MORECORE
1672 by other threads not using this lock, so there is still code to
1673 cope as best we can with such interference.
1675 Per-mspace locks surround calls to malloc, free, etc. To enable use
1676 in layered extensions, per-mspace locks are reentrant.
1678 Because lock-protected regions generally have bounded times, it is
1679 OK to use the supplied simple spinlocks in the custom versions for
1682 If USE_LOCKS is > 1, the definitions of lock routines here are
1683 bypassed, in which case you will need to define at least
1684 INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK
1685 (which is not used in this malloc, but commonly needed in
1694 /* Custom pthread-style spin locks on x86 and x64 for gcc */
1695 struct pthread_mlock_t {
1696   volatile unsigned int l;
1697   volatile unsigned int c;
1698   volatile pthread_t threadid;
};
1700 #define MLOCK_T struct pthread_mlock_t
1701 #define CURRENT_THREAD pthread_self()
1702 #define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0)
1703 #define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl)
1704 #define RELEASE_LOCK(sl) pthread_release_lock(sl)
1705 #define TRY_LOCK(sl) pthread_try_lock(sl)
1706 #define SPINS_PER_YIELD 63
1708 static MLOCK_T malloc_global_mutex
= { 0, 0, 0};
1710 static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
  int spins = 0;
1712   volatile unsigned int* lp = &sl->l;
  for (;;) {
    if (*lp != 0) {
1715       if (sl->threadid == CURRENT_THREAD) {
        ++sl->c;
        return 0;
      }
    }
    else {
1721       /* place args to cmpxchgl in locals to evade oddities in some gccs */
      int cmp = 0;
      int val = 1;
      int ret;
1725       __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
                            : "=a" (ret)
1727                             : "r" (val), "m" (*(lp)), "0"(cmp)
                            : "memory", "cc");
      if (!ret) {
1730         assert(!sl->threadid);
        sl->c = 1;
1732         sl->threadid = CURRENT_THREAD;
        return 0;
      }
1735       if ((++spins & SPINS_PER_YIELD) == 0) {
1736 #if defined (__SVR4) && defined (__sun) /* solaris */
        thr_yield();
#else
1739 #if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
        sched_yield();
1741 #else /* no-op yield on unknown systems */
        ;
1743 #endif /* __linux__ || __FreeBSD__ || __APPLE__ */
1744 #endif /* solaris */
      }
    }
  }
}
1750 static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
1752   assert(sl->threadid == CURRENT_THREAD);
  if (--sl->c == 0) {
    sl->threadid = 0;
1755     volatile unsigned int* lp = &sl->l;
    int prev = 0;
    int ret;
1758     __asm__ __volatile__ ("lock; xchgl %0, %1"
                          : "=r" (ret)
1760                           : "m" (*(lp)), "0"(prev)
                          : "memory");
  }
}
1765 static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
1766   volatile unsigned int* lp = &sl->l;
  if (*lp != 0) {
1768     if (sl->threadid == CURRENT_THREAD) {
      ++sl->c;
      return 1;
    }
  }
  else {
    int cmp = 0;
    int val = 1;
    int ret;
1777     __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
                          : "=a" (ret)
1779                           : "r" (val), "m" (*(lp)), "0"(cmp)
                          : "memory", "cc");
    if (!ret) {
1782       assert(!sl->threadid);
      sl->c = 1;
1784       sl->threadid = CURRENT_THREAD;
      return 1;
    }
  }
  return 0;
}
1793 /* Custom win32-style spin locks on x86 and x64 for MSC */
1794 struct win32_mlock_t {
  volatile long l;
1797   volatile unsigned int c;
1798   volatile long threadid;
};
1801 #define MLOCK_T struct win32_mlock_t
1802 #define CURRENT_THREAD win32_getcurrentthreadid()
1803 #define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0)
1804 #define ACQUIRE_LOCK(sl) win32_acquire_lock(sl)
1805 #define RELEASE_LOCK(sl) win32_release_lock(sl)
1806 #define TRY_LOCK(sl) win32_try_lock(sl)
1807 #define SPINS_PER_YIELD 63
1809 static MLOCK_T malloc_global_mutex
= { 0, 0, 0};
1811 static FORCEINLINE long win32_getcurrentthreadid(void) {
#ifdef _MSC_VER
1813 #if defined(_M_IX86)
1814   long *threadstruct = (long *)__readfsdword(0x18);
1815   long threadid = threadstruct[0x24/sizeof(long)];
  return threadid;
1817 #elif defined(_M_X64)
1819   return GetCurrentThreadId();
#else
1821   return GetCurrentThreadId();
#endif
#else
1824   return GetCurrentThreadId();
#endif
}
1828 static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
  int spins = 0;
  for (;;) {
    if (sl->l != 0) {
1832       if (sl->threadid == CURRENT_THREAD) {
        ++sl->c;
        return 0;
      }
    }
    else {
1838       if (!interlockedexchange(&sl->l, 1)) {
1839         assert(!sl->threadid);
1840         sl->c = CURRENT_THREAD;
1841         sl->threadid = CURRENT_THREAD;
        return 0;
      }
    }
1846     if ((++spins & SPINS_PER_YIELD) == 0)
      SleepEx(0, FALSE);
  }
}
1851 static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
1852   assert(sl->threadid == CURRENT_THREAD);
  if (--sl->c == 0) {
    sl->threadid = 0;
1856     interlockedexchange (&sl->l, 0);
  }
}
1860 static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
  if (sl->l != 0) {
1862     if (sl->threadid == CURRENT_THREAD) {
      ++sl->c;
      return 1;
    }
  }
  else {
1868     if (!interlockedexchange(&sl->l, 1)){
1869       assert(!sl->threadid);
1870       sl->threadid = CURRENT_THREAD;
      sl->c = 1;
      return 1;
    }
  }
  return 0;
}
1879 #else /* USE_SPIN_LOCKS */
1882 /* pthreads-based locks */
1884 #define MLOCK_T pthread_mutex_t
1885 #define CURRENT_THREAD pthread_self()
1886 #define INITIAL_LOCK(sl) pthread_init_lock(sl)
1887 #define ACQUIRE_LOCK(sl) pthread_mutex_lock(sl)
1888 #define RELEASE_LOCK(sl) pthread_mutex_unlock(sl)
1889 #define TRY_LOCK(sl) (!pthread_mutex_trylock(sl))
1891 static MLOCK_T malloc_global_mutex
= PTHREAD_MUTEX_INITIALIZER
;
1893 /* Cope with old-style linux recursive lock initialization by adding */
1894 /* skipped internal declaration from pthread.h */
1896 #ifndef PTHREAD_MUTEX_RECURSIVE
1897 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
                                                  int __kind));
1899 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
1900 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
1904 static int pthread_init_lock (MLOCK_T *sl) {
1905   pthread_mutexattr_t attr;
1906   if (pthread_mutexattr_init(&attr)) return 1;
1907   if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
1908   if (pthread_mutex_init(sl, &attr)) return 1;
1909   if (pthread_mutexattr_destroy(&attr)) return 1;
  return 0;
}
1914 /* Win32 critical sections */
1915 #define MLOCK_T CRITICAL_SECTION
1916 #define CURRENT_THREAD GetCurrentThreadId()
1917 #define INITIAL_LOCK(s) (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000))
1918 #define ACQUIRE_LOCK(s) (EnterCriticalSection(s), 0)
1919 #define RELEASE_LOCK(s) LeaveCriticalSection(s)
1920 #define TRY_LOCK(s) TryEnterCriticalSection(s)
1921 #define NEED_GLOBAL_LOCK_INIT
1923 static MLOCK_T malloc_global_mutex
;
1924 static volatile long malloc_global_mutex_status
;
1926 /* Use spin loop to initialize global lock */
1927 static void init_malloc_global_mutex() {
  for (;;) {
1929     long stat = malloc_global_mutex_status;
    if (stat > 0)
      return;
1932     /* transition to < 0 while initializing, then to > 0 */
    if (stat == 0 &&
1934         interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1935       InitializeCriticalSection(&malloc_global_mutex);
1936       interlockedexchange(&malloc_global_mutex_status, 1);
      return;
    }
    SleepEx(0, FALSE);
  }
}
1944 #endif /* USE_SPIN_LOCKS */
1945 #endif /* USE_LOCKS == 1 */
1947 /* ----------------------- User-defined locks ------------------------ */
1950 /* Define your own lock implementation here */
1951 /* #define INITIAL_LOCK(sl) ... */
1952 /* #define ACQUIRE_LOCK(sl) ... */
1953 /* #define RELEASE_LOCK(sl) ... */
1954 /* #define TRY_LOCK(sl) ... */
1955 /* static MLOCK_T malloc_global_mutex = ... */
1956 #endif /* USE_LOCKS > 1 */
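/*
  For illustration only: a minimal USE_LOCKS > 1 lock layer could simply map
  these macros onto plain (non-recursive) POSIX mutexes, as sketched below.
  This sketch is not part of this file and omits the recursion support that
  the built-in locks above provide.

    #define MLOCK_T            pthread_mutex_t
    #define INITIAL_LOCK(sl)   pthread_mutex_init(sl, NULL)
    #define ACQUIRE_LOCK(sl)   pthread_mutex_lock(sl)
    #define RELEASE_LOCK(sl)   pthread_mutex_unlock(sl)
    #define TRY_LOCK(sl)       (!pthread_mutex_trylock(sl))
    static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
*/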
1958 /* ----------------------- Lock-based state ------------------------ */
1961 #define USE_LOCK_BIT (2U)
1962 #else /* USE_LOCKS */
1963 #define USE_LOCK_BIT (0U)
1964 #define INITIAL_LOCK(l)
1965 #endif /* USE_LOCKS */
1968 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
1969 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
1970 #else /* USE_LOCKS */
1971 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1972 #define RELEASE_MALLOC_GLOBAL_LOCK()
1973 #endif /* USE_LOCKS */
1976 /* ----------------------- Chunk representations ------------------------ */
1979 (The following includes lightly edited explanations by Colin Plumb.)
1981 The malloc_chunk declaration below is misleading (but accurate and
1982 necessary). It declares a "view" into memory allowing access to
1983 necessary fields at known offsets from a given base.
1985 Chunks of memory are maintained using a `boundary tag' method as
1986 originally described by Knuth. (See the paper by Paul Wilson
1987 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1988 techniques.) Sizes of free chunks are stored both in the front of
1989 each chunk and at the end. This makes consolidating fragmented
1990 chunks into bigger chunks fast. The head fields also hold bits
1991 representing whether chunks are free or in use.
1993 Here are some pictures to make it clearer. They are "exploded" to
1994 show that the state of a chunk can be thought of as extending from
1995 the high 31 bits of the head field of its header through the
1996 prev_foot and PINUSE_BIT bit of the following chunk header.
1998 A chunk that's in use looks like:
2000 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2001 | Size of previous chunk (if P = 0) |
2002 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2003 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2004 | Size of this chunk 1| +-+
2005 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2011 +- size - sizeof(size_t) available payload bytes -+
2015 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2016 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2017 | Size of next chunk (may or may not be in use) | +-+
2018 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2020 And if it's free, it looks like this:
2023 | User payload (must be in use, or we would have merged!) |
2024 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2025 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2026 | Size of this chunk 0| +-+
2027 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2029 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2031 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2033 +- size - sizeof(struct chunk) unused bytes -+
2035 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2036 | Size of this chunk |
2037 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2038 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2039 | Size of next chunk (must be in use, or we would have merged)| +-+
2040 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2044 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2047 Note that since we always merge adjacent free chunks, the chunks
2048 adjacent to a free chunk must be in use.
2050 Given a pointer to a chunk (which can be derived trivially from the
2051 payload pointer) we can, in O(1) time, find out whether the adjacent
2052 chunks are free, and if so, unlink them from the lists that they
2053 are on and merge them with the current chunk.
2055 Chunks always begin on even word boundaries, so the mem portion
2056 (which is returned to the user) is also on an even word boundary, and
2057 thus at least double-word aligned.
2059 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2060 chunk size (which is always a multiple of two words), is an in-use
2061 bit for the *previous* chunk. If that bit is *clear*, then the
2062 word before the current chunk size contains the previous chunk
2063 size, and can be used to find the front of the previous chunk.
2064 The very first chunk allocated always has this bit set, preventing
2065 access to non-existent (or non-owned) memory. If pinuse is set for
2066 any given chunk, then you CANNOT determine the size of the
2067 previous chunk, and might even get a memory addressing fault when
2070 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2071 the chunk size redundantly records whether the current chunk is
2072 inuse. This redundancy enables usage checks within free and realloc,
2073 and reduces indirection when freeing and consolidating chunks.
2075 Each freshly allocated chunk must have both cinuse and pinuse set.
2076 That is, each allocated chunk borders either a previously allocated
2077 and still in-use chunk, or the base of its memory arena. This is
2078 ensured by making all allocations from the `lowest' part of any
2079 found chunk. Further, no free chunk physically borders another one,
2080 so each free chunk is known to be preceded and followed by either
2081 inuse chunks or the ends of memory.
2083 Note that the `foot' of the current chunk is actually represented
2084 as the prev_foot of the NEXT chunk. This makes it easier to
2085 deal with alignments etc but can be very confusing when trying
2086 to extend or adapt this code.
2088 The exceptions to all this are
2090 1. The special chunk `top' is the top-most available chunk (i.e.,
2091 the one bordering the end of available memory). It is treated
2092 specially. Top is never included in any bin, is used only if
2093 no other chunk is available, and is released back to the
2094 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2095 the top chunk is treated as larger (and thus less well
2096 fitting) than any other available chunk. The top chunk
2097 doesn't update its trailing size field since there is no next
2098 contiguous chunk that would have to index off it. However,
2099 space is still allocated for it (TOP_FOOT_SIZE) to enable
2100 separation or merging when space is extended.
2102 2. Chunks allocated via mmap, which have the lowest-order bit
2103 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
2104 PINUSE_BIT in their head fields. Because they are allocated
2105 one-by-one, each must carry its own prev_foot field, which is
2106 also used to hold the offset this chunk has within its mmapped
2107 region, which is needed to preserve alignment. Each mmapped
2108 chunk is trailed by the first two fields of a fake next-chunk
2109 for sake of usage checks.
2113 struct malloc_chunk {
2114   size_t               prev_foot;  /* Size of previous chunk (if free). */
2115   size_t               head;       /* Size and inuse bits. */
2116   struct malloc_chunk* fd;         /* double links -- used only if free. */
2117   struct malloc_chunk* bk;
};
2120 typedef struct malloc_chunk  mchunk;
2121 typedef struct malloc_chunk* mchunkptr;
2122 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
2123 typedef unsigned int bindex_t;         /* Described below */
2124 typedef unsigned int binmap_t;         /* Described below */
2125 typedef unsigned int flag_t;           /* The type of various bit flag sets */
2127 /* ------------------- Chunks sizes and alignments ----------------------- */
2129 #define MCHUNK_SIZE (sizeof(mchunk))
2132 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2134 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2135 #endif /* FOOTERS */
2137 /* MMapped chunks need a second word of overhead ... */
2138 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2139 /* ... and additional padding for fake next-chunk at foot */
2140 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2142 /* The smallest size we can malloc is an aligned minimal chunk */
2143 #define MIN_CHUNK_SIZE\
2144 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2146 /* conversion from malloc headers to user pointers, and back */
2147 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2148 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2149 /* chunk associated with aligned address A */
2150 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2152 /* Bounds on request (not chunk) sizes. */
2153 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2154 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2156 /* pad request bytes into a usable size */
2157 #define pad_request(req) \
2158 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2160 /* pad request, checking for minimum (but not maximum) */
2161 #define request2size(req) \
2162 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
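/*
  Worked example (illustrative, assuming 4-byte size_t, 8-byte alignment and
  no FOOTERS, so CHUNK_OVERHEAD == 4 and MIN_CHUNK_SIZE == 16):

    request2size(0)  == 16   // below MIN_REQUEST; rounded up to MIN_CHUNK_SIZE
    request2size(12) == 16   // 12 + 4 bytes overhead is already 8-aligned
    request2size(13) == 24   // 13 + 4 == 17, padded up to the next multiple of 8
*/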
2165 /* ------------------ Operations on head and foot fields ----------------- */
2168 The head field of a chunk is or'ed with PINUSE_BIT when the previous
2169 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
2170 use. If the chunk was obtained with mmap, the prev_foot field has
2171 IS_MMAPPED_BIT set and its remaining bits hold the offset of the base of
2172 the chunk from the base of its mmapped region.
2174 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2177 #define PINUSE_BIT (SIZE_T_ONE)
2178 #define CINUSE_BIT (SIZE_T_TWO)
2179 #define FLAG4_BIT (SIZE_T_FOUR)
2180 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2181 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2183 /* Head value for fenceposts */
2184 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2186 /* extraction of fields from head words */
2187 #define cinuse(p) ((p)->head & CINUSE_BIT)
2188 #define pinuse(p) ((p)->head & PINUSE_BIT)
2189 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2191 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2192 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
2194 /* Treat space at ptr +/- offset as a chunk */
2195 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2196 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2198 /* Ptr to next or previous physical malloc_chunk. */
2199 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2200 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2202 /* extract next chunk's pinuse bit */
2203 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2205 /* Get/set size at footer */
2206 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2207 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2209 /* Set size, pinuse bit, and foot */
2210 #define set_size_and_pinuse_of_free_chunk(p, s)\
2211 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2213 /* Set size, pinuse bit, foot, and clear next pinuse */
2214 #define set_free_with_pinuse(p, s, n)\
2215 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2217 #define is_mmapped(p)\
2218 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
2220 /* Get the internal overhead associated with chunk p */
2221 #define overhead_for(p)\
2222 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
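/*
  Illustrative sketch (not part of the allocator) of how these macros combine
  when navigating chunks from a user pointer mem returned by malloc:

    mchunkptr p  = mem2chunk(mem);     // recover the chunk header
    size_t    sz = chunksize(p);       // size with flag bits masked out
    mchunkptr nx = next_chunk(p);      // physically adjacent next chunk
    if (!pinuse(p) && !is_mmapped(p)) {
      mchunkptr pv = prev_chunk(p);    // prev_foot is valid only when pinuse is clear
      assert(next_chunk(pv) == p);     // walking forward from pv lands back on p
    }
*/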
2224 /* Return true if malloced space is not necessarily cleared */
2226 #define calloc_must_clear(p) (!is_mmapped(p))
2227 #else /* MMAP_CLEARS */
2228 #define calloc_must_clear(p) (1)
2229 #endif /* MMAP_CLEARS */
2231 /* ---------------------- Overlaid data structures ----------------------- */
2234 When chunks are not in use, they are treated as nodes of either
2237 "Small" chunks are stored in circular doubly-linked lists, and look
2240 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2241 | Size of previous chunk |
2242 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2243 `head:' | Size of chunk, in bytes |P|
2244 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2245 | Forward pointer to next chunk in list |
2246 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2247 | Back pointer to previous chunk in list |
2248 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2249 | Unused space (may be 0 bytes long) .
2252 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2253 `foot:' | Size of chunk, in bytes |
2254 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2256 Larger chunks are kept in a form of bitwise digital trees (aka
2257 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2258 free chunks greater than 256 bytes, their size doesn't impose any
2259 constraints on user chunk sizes. Each node looks like:
2261 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2262 | Size of previous chunk |
2263 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2264 `head:' | Size of chunk, in bytes |P|
2265 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2266 | Forward pointer to next chunk of same size |
2267 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2268 | Back pointer to previous chunk of same size |
2269 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2270 | Pointer to left child (child[0]) |
2271 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2272 | Pointer to right child (child[1]) |
2273 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2274 | Pointer to parent |
2275 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2276 | bin index of this chunk |
2277 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2280 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2281 `foot:' | Size of chunk, in bytes |
2282 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2284 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2285 of the same size are arranged in a circularly-linked list, with only
2286 the oldest chunk (the next to be used, in our FIFO ordering)
2287 actually in the tree. (Tree members are distinguished by a non-null
2288 parent pointer.) If a chunk with the same size as an existing node
2289 is inserted, it is linked off the existing node using pointers that
2290 work in the same way as fd/bk pointers of small chunks.
2292 Each tree contains a power of 2 sized range of chunk sizes (the
2293 smallest is 0x100 <= x < 0x180), which is divided in half at each
2294 tree level, with the chunks in the smaller half of the range (0x100
2295 <= x < 0x140 for the top node) in the left subtree and the larger
2296 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2297 done by inspecting individual bits.
2299 Using these rules, each node's left subtree contains all smaller
2300 sizes than its right subtree. However, the node at the root of each
2301 subtree has no particular ordering relationship to either. (The
2302 dividing line between the subtree sizes is based on trie relation.)
2303 If we remove the last chunk of a given size from the interior of the
2304 tree, we need to replace it with a leaf node. The tree ordering
2305 rules permit a node to be replaced by any leaf below it.
2307 The smallest chunk in a tree (a common operation in a best-fit
2308 allocator) can be found by walking a path to the leftmost leaf in
2309 the tree. Unlike a usual binary tree, where we follow left child
2310 pointers until we reach a null, here we follow the right child
2311 pointer any time the left one is null, until we reach a leaf with
2312 both child pointers null. The smallest chunk in the tree will be
2313 somewhere along that path.
2315 The worst case number of steps to add, find, or remove a node is
2316 bounded by the number of bits differentiating chunks within
2317 bins. Under current bin calculations, this ranges from 6 up to 21
2318 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2319 is of course much better.
2322 struct malloc_tree_chunk {
2323   /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
2326   struct malloc_tree_chunk* fd;
2327   struct malloc_tree_chunk* bk;
2329   struct malloc_tree_chunk* child[2];
2330   struct malloc_tree_chunk* parent;
  bindex_t                  index;
};
2334 typedef struct malloc_tree_chunk  tchunk;
2335 typedef struct malloc_tree_chunk* tchunkptr;
2336 typedef struct malloc_tree_chunk* tbinptr;  /* The type of bins of trees */
2338 /* A little helper macro for trees */
2339 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
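/*
  The "leftmost leaf" search described above can be sketched as follows
  (illustrative only; the real best-fit code also tracks the best remainder
  found along the way):

    static tchunkptr smallest_on_leftmost_path(tchunkptr t) {
      tchunkptr best = t;
      tchunkptr u;
      for (u = t; u != 0; u = leftmost_child(u)) {
        if (chunksize(u) < chunksize(best))
          best = u;          // the smallest chunk lies somewhere on this path
      }
      return best;
    }
*/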
2341 /* ----------------------------- Segments -------------------------------- */
2344 Each malloc space may include non-contiguous segments, held in a
2345 list headed by an embedded malloc_segment record representing the
2346 top-most space. Segments also include flags holding properties of
2347 the space. Large chunks that are directly allocated by mmap are not
2348 included in this list. They are instead independently created and
2349 destroyed without otherwise keeping track of them.
2351 Segment management mainly comes into play for spaces allocated by
2352 MMAP. Any call to MMAP might or might not return memory that is
2353 adjacent to an existing segment. MORECORE normally contiguously
2354 extends the current space, so this space is almost always adjacent,
2355 which is simpler and faster to deal with. (This is why MORECORE is
2356 used preferentially to MMAP when both are available -- see
2357 sys_alloc.) When allocating using MMAP, we don't use any of the
2358 hinting mechanisms (inconsistently) supported in various
2359 implementations of unix mmap, or distinguish reserving from
2360 committing memory. Instead, we just ask for space, and exploit
2361 contiguity when we get it. It is probably possible to do
2362 better than this on some systems, but no general scheme seems
2363 to be significantly better.
2365 Management entails a simpler variant of the consolidation scheme
2366 used for chunks to reduce fragmentation -- new adjacent memory is
2367 normally prepended or appended to an existing segment. However,
2368 there are limitations compared to chunk consolidation that mostly
2369 reflect the fact that segment processing is relatively infrequent
2370 (occurring only when getting memory from system) and that we
2371 don't expect to have huge numbers of segments:
2373 * Segments are not indexed, so traversal requires linear scans. (It
2374 would be possible to index these, but is not worth the extra
2375 overhead and complexity for most programs on most platforms.)
2376 * New segments are only appended to old ones when holding top-most
2377 memory; if they cannot be prepended to others, they are held in
2380 Except for the top-most segment of an mstate, each segment record
2381 is kept at the tail of its segment. Segments are added by pushing
2382 segment records onto the list headed by &mstate.seg for the
2385 Segment flags control allocation/merge/deallocation policies:
2386 * If EXTERN_BIT set, then we did not allocate this segment,
2387 and so should not try to deallocate or merge with others.
2388 (This currently holds only for the initial segment passed
2389 into create_mspace_with_base.)
2390 * If IS_MMAPPED_BIT set, the segment may be merged with
2391 other surrounding mmapped segments and trimmed/de-allocated
2393 * If neither bit is set, then the segment was obtained using
2394 MORECORE so can be merged with surrounding MORECORE'd segments
2395 and deallocated/trimmed using MORECORE with negative arguments.
2398 struct malloc_segment {
2399   char*        base;             /* base address */
2400   size_t       size;             /* allocated size */
2401   struct malloc_segment* next;   /* ptr to next segment */
2402   flag_t       sflags;           /* mmap and extern flag */
};
2405 #define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
2406 #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
2408 typedef struct malloc_segment  msegment;
2409 typedef struct malloc_segment* msegmentptr;
2411 /* ---------------------------- malloc_state ----------------------------- */
2414 A malloc_state holds all of the bookkeeping for a space.
2415 The main fields are:
2418 The topmost chunk of the currently active segment. Its size is
2419 cached in topsize. The actual size of topmost space is
2420 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2421 fenceposts and segment records if necessary when getting more
2422 space from the system. The size at which to autotrim top is
2423 cached from mparams in trim_check, except that it is disabled if
2426 Designated victim (dv)
2427 This is the preferred chunk for servicing small requests that
2428 don't have exact fits. It is normally the chunk split off most
2429 recently to service another small request. Its size is cached in
2430 dvsize. The link fields of this chunk are not maintained since it
2431 is not kept in a bin.
2434 An array of bin headers for free chunks. These bins hold chunks
2435 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2436 chunks of all the same size, spaced 8 bytes apart. To simplify
2437 use in double-linked lists, each bin header acts as a malloc_chunk
2438 pointing to the real first node, if it exists (else pointing to
2439 itself). This avoids special-casing for headers. But to avoid
2440 waste, we allocate only the fd/bk pointers of bins, and then use
2441 repositioning tricks to treat these as the fields of a chunk.
2444 Treebins are pointers to the roots of trees holding a range of
2445 sizes. There are 2 equally spaced treebins for each power of two
2446 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2450 There is one bit map for small bins ("smallmap") and one for
2451 treebins ("treemap"). Each bin sets its bit when non-empty, and
2452 clears the bit when empty. Bit operations are then used to avoid
2453 bin-by-bin searching -- nearly all "search" is done without ever
2454 looking at bins that won't be selected. The bit maps
2455 conservatively use 32 bits per map word, even on a 64-bit system.
2456 For a good description of some of the bit-based techniques used
2457 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2458 supplement at http://hackersdelight.org/). Many of these are
2459 intended to reduce the branchiness of paths through malloc etc, as
2460 well as to reduce the number of memory locations read or written.
2463 A list of segments headed by an embedded malloc_segment record
2464 representing the initial space.
2466 Address check support
2467 The least_addr field is the least address ever obtained from
2468 MORECORE or MMAP. Attempted frees and reallocs of any address less
2469 than this are trapped (unless INSECURE is defined).
2472 A cross-check field that should always hold same value as mparams.magic.
2475 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2478 Each space keeps track of current and maximum system memory
2479 obtained via MORECORE or MMAP.
2482 Fields holding the amount of unused topmost memory that should trigger
2483 trimming, and a counter to force periodic scanning to release unused
2484 non-topmost segments.
2487 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2488 around every public call using this mspace.
2491 A void* pointer and a size_t field that can be used to help implement
2492 extensions to this malloc.
2495 /* Bin types, widths and sizes */
2496 #define NSMALLBINS (32U)
2497 #define NTREEBINS (32U)
2498 #define SMALLBIN_SHIFT (3U)
2499 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2500 #define TREEBIN_SHIFT (8U)
2501 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2502 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2503 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2505 struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
2514   size_t     release_checks;
  size_t     magic;
2516   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2517   tbinptr    treebins[NTREEBINS];
  size_t     footprint;
2519   size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
2522   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2523 #endif /* USE_LOCKS */
  msegment   seg;
2525   void*      extp;      /* Unused but available for extensions */
  size_t     exts;
};
2529 typedef struct malloc_state* mstate;
2531 /* ------------- Global malloc_state and malloc_params ------------------- */
2534 malloc_params holds global properties, including those that can be
2535 dynamically set using mallopt. There is a single instance, mparams,
2536 initialized in init_mparams. Note that the non-zeroness of "magic"
2537 also serves as an initialization flag.
2540 struct malloc_params {
2541   volatile size_t magic;
  size_t page_size;
  size_t granularity;
2544   size_t mmap_threshold;
2545   size_t trim_threshold;
2546   flag_t default_mflags;
};
2549 static struct malloc_params mparams;
2551 /* Ensure mparams initialized */
2552 #define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams()))
2556 /* The global malloc_state used for all non-"mspace" calls */
2557 static struct malloc_state _gm_;
#define gm                 (&_gm_)
2559 #define is_global(M)       ((M) == &_gm_)
2561 #endif /* !ONLY_MSPACES */
2563 #define is_initialized(M) ((M)->top != 0)
2565 /* -------------------------- system alloc setup ------------------------- */
2567 /* Operations on mflags */
2569 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2570 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2571 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2573 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2574 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2575 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2577 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2578 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2580 #define set_lock(M,L)\
2581 ((M)->mflags = (L)?\
2582 ((M)->mflags | USE_LOCK_BIT) :\
2583 ((M)->mflags & ~USE_LOCK_BIT))
2585 /* page-align a size */
2586 #define page_align(S)\
2587 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2589 /* granularity-align a size */
2590 #define granularity_align(S)\
2591 (((S) + (mparams.granularity - SIZE_T_ONE))\
2592 & ~(mparams.granularity - SIZE_T_ONE))
2595 /* For mmap, use granularity alignment on windows, else page-align */
2597 #define mmap_align(S) granularity_align(S)
2599 #define mmap_align(S) page_align(S)
2602 /* For sys_alloc, enough padding to ensure can malloc request on success */
2603 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2605 #define is_page_aligned(S)\
2606 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2607 #define is_granularity_aligned(S)\
2608 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
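/*
  For example (illustrative, assuming mparams.page_size == 4096):

    page_align(1)    == 4096
    page_align(4096) == 4096
    page_align(4097) == 8192
    is_page_aligned((void*)0x2000)   // true
    is_page_aligned((void*)0x2010)   // false
*/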
2610 /* True if segment S holds address A */
2611 #define segment_holds(S, A)\
2612 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2614 /* Return segment holding given address */
2615 static msegmentptr segment_holding(mstate m, char* addr) {
2616   msegmentptr sp = &m->seg;
  for (;;) {
2618     if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
2620     if ((sp = sp->next) == 0)
      return 0;
  }
}
2625 /* Return true if segment contains a segment link */
2626 static int has_segment_link(mstate m, msegmentptr ss) {
2627   msegmentptr sp = &m->seg;
  for (;;) {
2629     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
2631     if ((sp = sp->next) == 0)
      return 0;
  }
}
2636 #ifndef MORECORE_CANNOT_TRIM
2637 #define should_trim(M,s) ((s) > (M)->trim_check)
2638 #else /* MORECORE_CANNOT_TRIM */
2639 #define should_trim(M,s) (0)
2640 #endif /* MORECORE_CANNOT_TRIM */
2643 TOP_FOOT_SIZE is padding at the end of a segment, including space
2644 that may be needed to place segment records and fenceposts when new
2645 noncontiguous segments are added.
2647 #define TOP_FOOT_SIZE\
2648 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2651 /* ------------------------------- Hooks -------------------------------- */
2654 PREACTION should be defined to return 0 on success, and nonzero on
2655 failure. If you are not using locking, you can redefine these to do
2661 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2662 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2663 #else /* USE_LOCKS */
2666 #define PREACTION(M) (0)
2667 #endif /* PREACTION */
2670 #define POSTACTION(M)
2671 #endif /* POSTACTION */
2673 #endif /* USE_LOCKS */
2676 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2677 USAGE_ERROR_ACTION is triggered on detected bad frees and
2678 reallocs. The argument p is an address that might have triggered the
2679 fault. It is ignored by the two predefined actions, but might be
2680 useful in custom actions that try to help diagnose errors.
2683 #if PROCEED_ON_ERROR
2685 /* A count of the number of corruption errors causing resets */
2686 int malloc_corruption_error_count
;
2688 /* default corruption action */
2689 static void reset_on_error(mstate m
);
2691 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2692 #define USAGE_ERROR_ACTION(m, p)
2694 #else /* PROCEED_ON_ERROR */
2696 #ifndef CORRUPTION_ERROR_ACTION
2697 #define CORRUPTION_ERROR_ACTION(m) ABORT
2698 #endif /* CORRUPTION_ERROR_ACTION */
2700 #ifndef USAGE_ERROR_ACTION
2701 #define USAGE_ERROR_ACTION(m,p) ABORT
2702 #endif /* USAGE_ERROR_ACTION */
2704 #endif /* PROCEED_ON_ERROR */
2706 /* -------------------------- Debugging setup ---------------------------- */
2710 #define check_free_chunk(M,P)
2711 #define check_inuse_chunk(M,P)
2712 #define check_malloced_chunk(M,P,N)
2713 #define check_mmapped_chunk(M,P)
2714 #define check_malloc_state(M)
2715 #define check_top_chunk(M,P)
2718 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2719 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2720 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2721 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2722 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2723 #define check_malloc_state(M) do_check_malloc_state(M)
2725 static void   do_check_any_chunk(mstate m, mchunkptr p);
2726 static void   do_check_top_chunk(mstate m, mchunkptr p);
2727 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2728 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2729 static void   do_check_free_chunk(mstate m, mchunkptr p);
2730 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2731 static void   do_check_tree(mstate m, tchunkptr t);
2732 static void   do_check_treebin(mstate m, bindex_t i);
2733 static void   do_check_smallbin(mstate m, bindex_t i);
2734 static void   do_check_malloc_state(mstate m);
2735 static int    bin_find(mstate m, mchunkptr x);
2736 static size_t traverse_and_check(mstate m);
2739 /* ---------------------------- Indexing Bins ---------------------------- */
2741 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2742 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2743 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2744 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2746 /* addressing by index. See above about smallbin repositioning */
2747 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2748 #define treebin_at(M,i) (&((M)->treebins[i]))
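/*
  Example of the small-bin mapping (illustrative, with SMALLBIN_SHIFT == 3):
  a free chunk of 40 bytes maps to small_index(40) == 5, and every chunk held
  in that bin has exactly small_index2size(5) == 40 bytes. For a given mstate m:

    if (is_small(40)) {
      bindex_t i = small_index(40);     // 40 >> 3 == 5
      sbinptr  b = smallbin_at(m, i);   // header of the 40-byte bin
      assert(small_index2size(i) == 40);
    }
*/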
2750 /* assign tree index for size S to variable I. Use x86 asm if possible */
2751 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2752 #define compute_tree_index(S, I)\
2754 unsigned int X = S >> TREEBIN_SHIFT;\
2757 else if (X > 0xFFFF)\
2761 __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "rm" (X));\
2762 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2766 #elif defined (__INTEL_COMPILER)
2767 #define compute_tree_index(S, I)\
2769 size_t X = S >> TREEBIN_SHIFT;\
2772 else if (X > 0xFFFF)\
2775 unsigned int K = _bit_scan_reverse (X); \
2776 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2780 #elif defined(_MSC_VER) && _MSC_VER>=1300
2781 #define compute_tree_index(S, I)\
2783 size_t X = S >> TREEBIN_SHIFT;\
2786 else if (X > 0xFFFF)\
2790 _BitScanReverse((DWORD *) &K, X);\
2791 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2796 #define compute_tree_index(S, I)\
2798 size_t X = S >> TREEBIN_SHIFT;\
2801 else if (X > 0xFFFF)\
2804 unsigned int Y = (unsigned int)X;\
2805 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2806 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2808 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2809 K = 14 - N + ((Y <<= K) >> 15);\
2810 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2815 /* Bit representing maximum resolved size in a treebin at i */
2816 #define bit_for_tree_index(i) \
2817 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2819 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2820 #define leftshift_for_tree_index(i) \
2821 ((i == NTREEBINS-1)? 0 : \
2822 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2824 /* The size of the smallest chunk held in bin with index i */
2825 #define minsize_for_tree_index(i) \
2826 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2827 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
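/*
  Worked example (illustrative): for a chunk of 0x300 (768) bytes,
  X = 0x300 >> TREEBIN_SHIFT == 3, whose highest set bit is bit 1 (K == 1),
  so compute_tree_index yields I = (1 << 1) + ((0x300 >> 8) & 1) == 3.
  Consistently, minsize_for_tree_index(3) == 0x300, and the bin covers sizes
  up to just below minsize_for_tree_index(4) == 0x400.
*/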
2830 /* ------------------------ Operations on bin maps ----------------------- */
2832 /* bit corresponding to given index */
2833 #define idx2bit(i) ((binmap_t)(1) << (i))
2835 /* Mark/Clear bits with given index */
2836 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2837 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2838 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2840 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2841 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2842 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2844 /* isolate the least set bit of a bitmap */
2845 #define least_bit(x) ((x) & -(x))
2847 /* mask with all bits to left of least bit of x on */
2848 #define left_bits(x) ((x<<1) | -(x<<1))
2850 /* mask with all bits to left of or equal to least bit of x on */
2851 #define same_or_left_bits(x) ((x) | -(x))
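/*
  Illustrative use of these helpers in a bin search (a simplified view of the
  real small-bin lookup), for a given mstate m and a small-bin index i whose
  own bin turned out to be empty:

    binmap_t nonempty = m->smallmap;                      // one bit per non-empty bin
    binmap_t leftbits = left_bits(idx2bit(i)) & nonempty; // non-empty bins above i
    if (leftbits != 0) {
      bindex_t j;
      compute_bit2idx(least_bit(leftbits), j);            // smallest such bin
      // smallbin_at(m, j) now heads a list of chunks big enough to split
    }
*/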
2853 /* index corresponding to given bit. Use x86 asm if possible */
2855 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2856 #define compute_bit2idx(X, I)\
2859 __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "rm" (X));\
2863 #elif defined (__INTEL_COMPILER)
2864 #define compute_bit2idx(X, I)\
2867 J = _bit_scan_forward (X); \
2871 #elif defined(_MSC_VER) && _MSC_VER>=1300
2872 #define compute_bit2idx(X, I)\
2875 _BitScanForward((DWORD *) &J, X);\
2879 #elif USE_BUILTIN_FFS
2880 #define compute_bit2idx(X, I) I = ffs(X)-1
2883 #define compute_bit2idx(X, I)\
2885 unsigned int Y = X - 1;\
2886 unsigned int K = Y >> (16-4) & 16;\
2887 unsigned int N = K; Y >>= K;\
2888 N += K = Y >> (8-3) & 8; Y >>= K;\
2889 N += K = Y >> (4-2) & 4; Y >>= K;\
2890 N += K = Y >> (2-1) & 2; Y >>= K;\
2891 N += K = Y >> (1-0) & 1; Y >>= K;\
2892 I = (bindex_t)(N + Y);\
2897 /* ----------------------- Runtime Check Support ------------------------- */
2900 For security, the main invariant is that malloc/free/etc never
2901 writes to a static address other than malloc_state, unless static
2902 malloc_state itself has been corrupted, which cannot occur via
2903 malloc (because of these checks). In essence this means that we
2904 believe all pointers, sizes, maps etc held in malloc_state, but
2905 check all of those linked or offsetted from other embedded data
2906 structures. These checks are interspersed with main code in a way
2907 that tends to minimize their run-time cost.
2909 When FOOTERS is defined, in addition to range checking, we also
2910 verify footer fields of inuse chunks, which can be used to guarantee
2911 that the mstate controlling malloc/free is intact. This is a
2912 streamlined version of the approach described by William Robertson
2913 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2914 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2915 of an inuse chunk holds the xor of its mstate and a random seed,
2916 that is checked upon calls to free() and realloc(). This is
2917 (probabilistically) unguessable from outside the program, but can be
2918 computed by any code successfully malloc'ing any chunk, so does not
2919 itself provide protection against code that has already broken
2920 security through some other means. Unlike Robertson et al, we
2921 always dynamically check addresses of all offset chunks (previous,
2922 next, etc). This turns out to be cheaper than relying on hashes.
2926 /* Check if address a is at least as high as any from MORECORE or MMAP */
2927 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2928 /* Check if address of next chunk n is higher than base chunk p */
2929 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2930 /* Check if p has its cinuse bit on */
2931 #define ok_cinuse(p) cinuse(p)
2932 /* Check if p has its pinuse bit on */
2933 #define ok_pinuse(p) pinuse(p)
2935 #else /* !INSECURE */
2936 #define ok_address(M, a) (1)
2937 #define ok_next(b, n) (1)
2938 #define ok_cinuse(p) (1)
2939 #define ok_pinuse(p) (1)
2940 #endif /* !INSECURE */
2942 #if (FOOTERS && !INSECURE)
2943 /* Check if (alleged) mstate m has expected magic field */
2944 #define ok_magic(M) ((M)->magic == mparams.magic)
2945 #else /* (FOOTERS && !INSECURE) */
2946 #define ok_magic(M) (1)
2947 #endif /* (FOOTERS && !INSECURE) */
2950 /* In gcc, use __builtin_expect to minimize impact of checks */
2952 #if defined(__GNUC__) && __GNUC__ >= 3
2953 #define RTCHECK(e) __builtin_expect(e, 1)
2955 #define RTCHECK(e) (e)
2957 #else /* !INSECURE */
2958 #define RTCHECK(e) (1)
2959 #endif /* !INSECURE */
2961 /* macros to set up inuse chunks with or without footers */
2965 #define mark_inuse_foot(M,p,s)
2967 /* Set cinuse bit and pinuse bit of next chunk */
2968 #define set_inuse(M,p,s)\
2969 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2970 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2972 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2973 #define set_inuse_and_pinuse(M,p,s)\
2974 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2975 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2977 /* Set size, cinuse and pinuse bit of this chunk */
2978 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2979 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2983 /* Set foot of inuse chunk to be xor of mstate and seed */
2984 #define mark_inuse_foot(M,p,s)\
2985 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2987 #define get_mstate_for(p)\
2988 ((mstate)(((mchunkptr)((char*)(p) +\
2989 (chunksize(p))))->prev_foot ^ mparams.magic))
2991 #define set_inuse(M,p,s)\
2992 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2993 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2994 mark_inuse_foot(M,p,s))
2996 #define set_inuse_and_pinuse(M,p,s)\
2997 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2998 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2999 mark_inuse_foot(M,p,s))
3001 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3002 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3003 mark_inuse_foot(M, p, s))
3005 #endif /* !FOOTERS */
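/*
  Illustrative round-trip of the FOOTERS cross-check (assuming FOOTERS is
  defined): when an inuse chunk p of size s is set up, mark_inuse_foot stores
  ((size_t)m ^ mparams.magic) just past the chunk, and free/realloc can later
  recover and verify the owning mstate:

    mark_inuse_foot(m, p, s);        // s == chunksize(p) for the inuse chunk
    // ... later, given only p ...
    assert(get_mstate_for(p) == m);  // xor with mparams.magic undoes the mask
*/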
3007 /* ---------------------------- setting mparams -------------------------- */
3009 /* Initialize mparams */
3010 static int init_mparams(void) {
3011 #ifdef NEED_GLOBAL_LOCK_INIT
3012 if (malloc_global_mutex_status
<= 0)
3013 init_malloc_global_mutex();
3016 ACQUIRE_MALLOC_GLOBAL_LOCK();
3017 if (mparams
.magic
== 0) {
3023 psize
= malloc_getpagesize
;
3024 gsize
= ((DEFAULT_GRANULARITY
!= 0)? DEFAULT_GRANULARITY
: psize
);
3027 SYSTEM_INFO system_info
;
3028 GetSystemInfo(&system_info
);
3029 psize
= system_info
.dwPageSize
;
3030 gsize
= ((DEFAULT_GRANULARITY
!= 0)?
3031 DEFAULT_GRANULARITY
: system_info
.dwAllocationGranularity
);
3035 /* Sanity-check configuration:
3036 size_t must be unsigned and as wide as pointer type.
3037 ints must be at least 4 bytes.
3038 alignment must be at least 8.
3039 Alignment, min chunk size, and page size must all be powers of 2.
3041 if ((sizeof(size_t) != sizeof(char*)) ||
3042 (MAX_SIZE_T
< MIN_CHUNK_SIZE
) ||
3043 (sizeof(int) < 4) ||
3044 (MALLOC_ALIGNMENT
< (size_t)8U) ||
3045 ((MALLOC_ALIGNMENT
& (MALLOC_ALIGNMENT
-SIZE_T_ONE
)) != 0) ||
3046 ((MCHUNK_SIZE
& (MCHUNK_SIZE
-SIZE_T_ONE
)) != 0) ||
3047 ((gsize
& (gsize
-SIZE_T_ONE
)) != 0) ||
3048 ((psize
& (psize
-SIZE_T_ONE
)) != 0))
3051 mparams
.granularity
= gsize
;
3052 mparams
.page_size
= psize
;
3053 mparams
.mmap_threshold
= DEFAULT_MMAP_THRESHOLD
;
3054 mparams
.trim_threshold
= DEFAULT_TRIM_THRESHOLD
;
3055 #if MORECORE_CONTIGUOUS
3056 mparams
.default_mflags
= USE_LOCK_BIT
|USE_MMAP_BIT
;
3057 #else /* MORECORE_CONTIGUOUS */
3058 mparams
.default_mflags
= USE_LOCK_BIT
|USE_MMAP_BIT
|USE_NONCONTIGUOUS_BIT
;
3059 #endif /* MORECORE_CONTIGUOUS */
3062 /* Set up lock for main malloc area */
3063 gm
->mflags
= mparams
.default_mflags
;
3064 INITIAL_LOCK(&gm
->mutex
);
3067 #if (FOOTERS && !INSECURE)
3071 unsigned char buf
[sizeof(size_t)];
3072 /* Try to use /dev/urandom, else fall back on using time */
3073 if ((fd
= open("/dev/urandom", O_RDONLY
)) >= 0 &&
3074 read(fd
, buf
, sizeof(buf
)) == sizeof(buf
)) {
3075 magic
= *((size_t *) buf
);
3079 #endif /* USE_DEV_RANDOM */
3081 magic
= (size_t)(GetTickCount() ^ (size_t)0x55555555U
);
3083 magic
= (size_t)(time(0) ^ (size_t)0x55555555U
);
3085 magic
|= (size_t)8U; /* ensure nonzero */
3086 magic
&= ~(size_t)7U; /* improve chances of fault for bad values */
3088 #else /* (FOOTERS && !INSECURE) */
3089 magic
= (size_t)0x58585858U
;
3090 #endif /* (FOOTERS && !INSECURE) */
3092 mparams
.magic
= magic
;
3095 RELEASE_MALLOC_GLOBAL_LOCK();
3099 /* support for mallopt */
3100 static int change_mparam(int param_number
, int value
) {
3101 size_t val
= (value
== -1)? MAX_SIZE_T
: (size_t)value
;
3102 ensure_initialization();
3103 switch(param_number
) {
3104 case M_TRIM_THRESHOLD
:
3105 mparams
.trim_threshold
= val
;
3108 if (val
>= mparams
.page_size
&& ((val
& (val
-1)) == 0)) {
3109 mparams
.granularity
= val
;
3114 case M_MMAP_THRESHOLD
:
3115 mparams
.mmap_threshold
= val
;
3123 /* ------------------------- Debugging Support --------------------------- */
3125 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3126 static void do_check_any_chunk(mstate m, mchunkptr p) {
3127   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3128   assert(ok_address(m, p));
}
3131 /* Check properties of top chunk */
3132 static void do_check_top_chunk(mstate m
, mchunkptr p
) {
3133 msegmentptr sp
= segment_holding(m
, (char*)p
);
3134 size_t sz
= p
->head
& ~INUSE_BITS
; /* third-lowest bit can be set! */
3136 assert((is_aligned(chunk2mem(p
))) || (p
->head
== FENCEPOST_HEAD
));
3137 assert(ok_address(m
, p
));
3138 assert(sz
== m
->topsize
);
3140 assert(sz
== ((sp
->base
+ sp
->size
) - (char*)p
) - TOP_FOOT_SIZE
);
3142 assert(!pinuse(chunk_plus_offset(p
, sz
)));
/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}
/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}
/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = chunksize(p);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert(!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert(next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}
/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}
/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert (u->parent->child[0] == u ||
              u->parent->child[1] == u ||
              *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}
/* Check all the chunks in a treebin.  */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}
/* Check all the chunks in a smallbin. */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}
/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}
/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}
/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    /*assert(m->topsize == chunksize(m->top)); redundant */
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  ensure_initialization();
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */
static void internal_malloc_stats(mstate m) {
  ensure_initialization();
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}
3466 /* ----------------------- Operations on smallbins ----------------------- */
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
3475 /* Link a free chunk into a smallbin */
3476 #define insert_small_chunk(M, P, S) {\
3477 bindex_t I = small_index(S);\
3478 mchunkptr B = smallbin_at(M, I);\
3480 assert(S >= MIN_CHUNK_SIZE);\
3481 if (!smallmap_is_marked(M, I))\
3482 mark_smallmap(M, I);\
3483 else if (RTCHECK(ok_address(M, B->fd)))\
3486 CORRUPTION_ERROR_ACTION(M);\
3494 /* Unlink a chunk from a smallbin */
3495 #define unlink_small_chunk(M, P, S) {\
3496 mchunkptr F = P->fd;\
3497 mchunkptr B = P->bk;\
3498 bindex_t I = small_index(S);\
3501 assert(chunksize(P) == small_index2size(I));\
3503 clear_smallmap(M, I);\
3504 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3505 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3510 CORRUPTION_ERROR_ACTION(M);\
3514 /* Unlink the first chunk from a smallbin */
3515 #define unlink_first_small_chunk(M, B, P, I) {\
3516 mchunkptr F = P->fd;\
3519 assert(chunksize(P) == small_index2size(I));\
3521 clear_smallmap(M, I);\
3522 else if (RTCHECK(ok_address(M, F))) {\
3527 CORRUPTION_ERROR_ACTION(M);\
3533 /* Replace dv node, binning the old one */
3534 /* Used only when dvsize known to be small */
3535 #define replace_dv(M, P, S) {\
3536 size_t DVS = M->dvsize;\
3538 mchunkptr DV = M->dv;\
3539 assert(is_small(DVS));\
3540 insert_small_chunk(M, DV, DVS);\
3546 /* ------------------------- Operations on trees ------------------------- */
3548 /* Insert chunk into tree */
3549 #define insert_large_chunk(M, X, S) {\
3552 compute_tree_index(S, I);\
3553 H = treebin_at(M, I);\
3555 X->child[0] = X->child[1] = 0;\
3556 if (!treemap_is_marked(M, I)) {\
3557 mark_treemap(M, I);\
3559 X->parent = (tchunkptr)H;\
3564 size_t K = S << leftshift_for_tree_index(I);\
3566 if (chunksize(T) != S) {\
3567 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3571 else if (RTCHECK(ok_address(M, C))) {\
3578 CORRUPTION_ERROR_ACTION(M);\
3583 tchunkptr F = T->fd;\
3584 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3592 CORRUPTION_ERROR_ACTION(M);\
3603 1. If x is a chained node, unlink it from its same-sized fd/bk links
3604 and choose its bk node as its replacement.
3605 2. If x was the last node of its size, but not a leaf node, it must
3606 be replaced with a leaf node (not merely one with an open left or
3607 right), to make sure that lefts and rights of descendants
3608 correspond properly to bit masks. We use the rightmost descendant
3609 of x. We could use any other leaf, but this is easy to locate and
3610 tends to counteract removal of leftmosts elsewhere, and so keeps
3611 paths shorter than minimally guaranteed. This doesn't loop much
3612 because on average a node in a tree is near the bottom.
3613 3. If x is the base of a chain (i.e., has parent links) relink
3614 x's parent and children to x's replacement (or null if none).
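
/*
  Illustrative sketch (an added example, not part of the allocator): the
  "replacement by rightmost descendant" rule in step 2 above is what the
  unlink_large_chunk macro below computes with its R/RP/CP temporaries.
  Written as a plain function with hypothetical names, the descent is:

    static tchunkptr rightmost_descendant(tchunkptr x) {
      tchunkptr r = (x->child[1] != 0)? x->child[1] : x->child[0];
      if (r == 0)
        return 0;                       // x is already a leaf
      for (;;) {                        // prefer right, else left, until a leaf
        if (r->child[1] != 0)      r = r->child[1];
        else if (r->child[0] != 0) r = r->child[0];
        else                       return r;
      }
    }
*/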
3617 #define unlink_large_chunk(M, X) {\
3618 tchunkptr XP = X->parent;\
3621 tchunkptr F = X->fd;\
3623 if (RTCHECK(ok_address(M, F))) {\
3628 CORRUPTION_ERROR_ACTION(M);\
3633 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3634 ((R = *(RP = &(X->child[0]))) != 0)) {\
3636 while ((*(CP = &(R->child[1])) != 0) ||\
3637 (*(CP = &(R->child[0])) != 0)) {\
3640 if (RTCHECK(ok_address(M, RP)))\
3643 CORRUPTION_ERROR_ACTION(M);\
3648 tbinptr* H = treebin_at(M, X->index);\
3650 if ((*H = R) == 0) \
3651 clear_treemap(M, X->index);\
3653 else if (RTCHECK(ok_address(M, XP))) {\
3654 if (XP->child[0] == X) \
3660 CORRUPTION_ERROR_ACTION(M);\
3662 if (RTCHECK(ok_address(M, R))) {\
3665 if ((C0 = X->child[0]) != 0) {\
3666 if (RTCHECK(ok_address(M, C0))) {\
3671 CORRUPTION_ERROR_ACTION(M);\
3673 if ((C1 = X->child[1]) != 0) {\
3674 if (RTCHECK(ok_address(M, C1))) {\
3679 CORRUPTION_ERROR_ACTION(M);\
3683 CORRUPTION_ERROR_ACTION(M);\
3688 /* Relays to large vs small bin operations */
3690 #define insert_chunk(M, P, S)\
3691 if (is_small(S)) insert_small_chunk(M, P, S)\
3692 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3694 #define unlink_chunk(M, P, S)\
3695 if (is_small(S)) unlink_small_chunk(M, P, S)\
3696 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */
3716 /* ----------------------- Direct-mmapping chunks ----------------------- */
3719 Directly mmapped chunks are set up with an offset to the start of
3720 the mmapped region stored in the prev_foot field of the chunk. This
3721 allows reconstruction of the required argument to MUNMAP when freed,
3722 and also allows adjustment of the returned chunk to meet alignment
3723 requirements (especially in memalign). There is also enough space
3724 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3725 the PINUSE bit so frees can be checked.
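
/*
  Illustrative sketch (an added note, not part of the code): the region
  handed back by mmap for a direct request is laid out roughly as

     mm                p = mm + offset                      p + psize
     |<---- offset --->|<---------------- psize ----------->|
     [ alignment slop  ][ chunk header | user memory ...    ][fencepost][0]

  where offset (or'ed with IS_MMAPPED_BIT) is stored in p->prev_foot so
  free can recover the original mmap base and total length, and the
  trailing fencepost plus zero word stand in for a "next chunk" of size
  SIZE_T_SIZE so the usual PINUSE checks still work.  The total mapped
  length is mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK), as in
  mmap_alloc below.
*/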
/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset | IS_MMAPPED_BIT;
      (p)->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}
3755 /* Realloc using mmap */
3756 static mchunkptr
mmap_resize(mstate m
, mchunkptr oldp
, size_t nb
) {
3757 size_t oldsize
= chunksize(oldp
);
3758 if (is_small(nb
)) /* Can't shrink mmap regions below small size */
3760 /* Keep old chunk if big enough but not too big */
3761 if (oldsize
>= nb
+ SIZE_T_SIZE
&&
3762 (oldsize
- nb
) <= (mparams
.granularity
<< 1))
3765 size_t offset
= oldp
->prev_foot
& ~IS_MMAPPED_BIT
;
3766 size_t oldmmsize
= oldsize
+ offset
+ MMAP_FOOT_PAD
;
3767 size_t newmmsize
= mmap_align(nb
+ SIX_SIZE_T_SIZES
+ CHUNK_ALIGN_MASK
);
3768 char* cp
= (char*)CALL_MREMAP((char*)oldp
- offset
,
3769 oldmmsize
, newmmsize
, 1);
3771 mchunkptr newp
= (mchunkptr
)(cp
+ offset
);
3772 size_t psize
= newmmsize
- offset
- MMAP_FOOT_PAD
;
3773 newp
->head
= (psize
|CINUSE_BIT
);
3774 mark_inuse_foot(m
, newp
, psize
);
3775 chunk_plus_offset(newp
, psize
)->head
= FENCEPOST_HEAD
;
3776 chunk_plus_offset(newp
, psize
+SIZE_T_SIZE
)->head
= 0;
3778 if (cp
< m
->least_addr
)
3780 if ((m
->footprint
+= newmmsize
- oldmmsize
) > m
->max_footprint
)
3781 m
->max_footprint
= m
->footprint
;
3782 check_mmapped_chunk(m
, newp
);
3789 /* -------------------------- mspace management -------------------------- */
/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}
/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}
3816 #if PROCEED_ON_ERROR
3818 /* default corruption action */
3819 static void reset_on_error(mstate m
) {
3821 ++malloc_corruption_error_count
;
3822 /* Reinitialize fields to forget about all memory */
3823 m
->smallbins
= m
->treebins
= 0;
3824 m
->dvsize
= m
->topsize
= 0;
3829 for (i
= 0; i
< NTREEBINS
; ++i
)
3830 *treebin_at(m
, i
) = 0;
3833 #endif /* PROCEED_ON_ERROR */
3835 /* Allocate chunk and prepend remainder with chunk in successor base. */
3836 static void* prepend_alloc(mstate m
, char* newbase
, char* oldbase
,
3838 mchunkptr p
= align_as_chunk(newbase
);
3839 mchunkptr oldfirst
= align_as_chunk(oldbase
);
3840 size_t psize
= (char*)oldfirst
- (char*)p
;
3841 mchunkptr q
= chunk_plus_offset(p
, nb
);
3842 size_t qsize
= psize
- nb
;
3843 set_size_and_pinuse_of_inuse_chunk(m
, p
, nb
);
3845 assert((char*)oldfirst
> (char*)q
);
3846 assert(pinuse(oldfirst
));
3847 assert(qsize
>= MIN_CHUNK_SIZE
);
3849 /* consolidate remainder with first chunk of old base */
3850 if (oldfirst
== m
->top
) {
3851 size_t tsize
= m
->topsize
+= qsize
;
3853 q
->head
= tsize
| PINUSE_BIT
;
3854 check_top_chunk(m
, q
);
3856 else if (oldfirst
== m
->dv
) {
3857 size_t dsize
= m
->dvsize
+= qsize
;
3859 set_size_and_pinuse_of_free_chunk(q
, dsize
);
3862 if (!cinuse(oldfirst
)) {
3863 size_t nsize
= chunksize(oldfirst
);
3864 unlink_chunk(m
, oldfirst
, nsize
);
3865 oldfirst
= chunk_plus_offset(oldfirst
, nsize
);
3868 set_free_with_pinuse(q
, qsize
, oldfirst
);
3869 insert_chunk(m
, q
, qsize
);
3870 check_free_chunk(m
, q
);
3873 check_malloced_chunk(m
, chunk2mem(p
), nb
);
3874 return chunk2mem(p
);
3877 /* Add a segment to hold a new noncontiguous region */
3878 static void add_segment(mstate m
, char* tbase
, size_t tsize
, flag_t mmapped
) {
3879 /* Determine locations and sizes of segment, fenceposts, old top */
3880 char* old_top
= (char*)m
->top
;
3881 msegmentptr oldsp
= segment_holding(m
, old_top
);
3882 char* old_end
= oldsp
->base
+ oldsp
->size
;
3883 size_t ssize
= pad_request(sizeof(struct malloc_segment
));
3884 char* rawsp
= old_end
- (ssize
+ FOUR_SIZE_T_SIZES
+ CHUNK_ALIGN_MASK
);
3885 size_t offset
= align_offset(chunk2mem(rawsp
));
3886 char* asp
= rawsp
+ offset
;
3887 char* csp
= (asp
< (old_top
+ MIN_CHUNK_SIZE
))? old_top
: asp
;
3888 mchunkptr sp
= (mchunkptr
)csp
;
3889 msegmentptr ss
= (msegmentptr
)(chunk2mem(sp
));
3890 mchunkptr tnext
= chunk_plus_offset(sp
, ssize
);
3891 mchunkptr p
= tnext
;
3894 /* reset top to new space */
3895 init_top(m
, (mchunkptr
)tbase
, tsize
- TOP_FOOT_SIZE
);
3897 /* Set up segment record */
3898 assert(is_aligned(ss
));
3899 set_size_and_pinuse_of_inuse_chunk(m
, sp
, ssize
);
3900 *ss
= m
->seg
; /* Push current record */
3901 m
->seg
.base
= tbase
;
3902 m
->seg
.size
= tsize
;
3903 m
->seg
.sflags
= mmapped
;
3906 /* Insert trailing fenceposts */
3908 mchunkptr nextp
= chunk_plus_offset(p
, SIZE_T_SIZE
);
3909 p
->head
= FENCEPOST_HEAD
;
3911 if ((char*)(&(nextp
->head
)) < old_end
)
3916 assert(nfences
>= 2);
3918 /* Insert the rest of old top into a bin as an ordinary free chunk */
3919 if (csp
!= old_top
) {
3920 mchunkptr q
= (mchunkptr
)old_top
;
3921 size_t psize
= csp
- old_top
;
3922 mchunkptr tn
= chunk_plus_offset(q
, psize
);
3923 set_free_with_pinuse(q
, psize
, tn
);
3924 insert_chunk(m
, q
, psize
);
3927 check_top_chunk(m
, m
->top
);
3930 /* -------------------------- System allocation -------------------------- */
3932 /* Get memory from system using MORECORE or MMAP */
3933 static void* sys_alloc(mstate m
, size_t nb
) {
3934 char* tbase
= CMFAIL
;
3936 flag_t mmap_flag
= 0;
3938 ensure_initialization();
3940 /* Directly map large chunks */
3941 if (use_mmap(m
) && nb
>= mparams
.mmap_threshold
) {
3942 void* mem
= mmap_alloc(m
, nb
);
3948 Try getting memory in any of three ways (in most-preferred to
3949 least-preferred order):
3950 1. A call to MORECORE that can normally contiguously extend memory.
3951 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3952 main space is mmapped or a previous contiguous call failed)
3953 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3954 Note that under the default settings, if MORECORE is unable to
3955 fulfill a request, and HAVE_MMAP is true, then mmap is
3956 used as a noncontiguous system allocator. This is a useful backup
3957 strategy for systems with holes in address spaces -- in this case
3958 sbrk cannot contiguously expand the heap, but mmap may be able to
3960 3. A call to MORECORE that cannot usually contiguously extend memory.
3961 (disabled if not HAVE_MORECORE)
3963 In all cases, we need to request enough bytes from system to ensure
3964 we can malloc nb bytes upon success, so pad with enough space for
3965 top_foot, plus alignment-pad to make sure we don't lose bytes if
3966 not on boundary, and round this up to a granularity unit.
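
/*
  Worked example (an added illustration; assumes a 64K granularity):
  every system request is padded and rounded to a whole granularity
  unit, e.g.

    size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);

  so for nb of roughly 100KB the system is asked for 128KB rather than
  100KB plus a few words, leaving the surplus in top for later requests.
*/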
3969 if (MORECORE_CONTIGUOUS
&& !use_noncontiguous(m
)) {
3971 msegmentptr ss
= (m
->top
== 0)? 0 : segment_holding(m
, (char*)m
->top
);
3973 ACQUIRE_MALLOC_GLOBAL_LOCK();
3975 if (ss
== 0) { /* First time through or recovery */
3976 char* base
= (char*)CALL_MORECORE(0);
3977 if (base
!= CMFAIL
) {
3978 asize
= granularity_align(nb
+ SYS_ALLOC_PADDING
);
3979 /* Adjust to end on a page boundary */
3980 if (!is_page_aligned(base
))
3981 asize
+= (page_align((size_t)base
) - (size_t)base
);
3982 /* Can't call MORECORE if size is negative when treated as signed */
3983 if (asize
< HALF_MAX_SIZE_T
&&
3984 (br
= (char*)(CALL_MORECORE(asize
))) == base
) {
3991 /* Subtract out existing available top space from MORECORE request. */
3992 asize
= granularity_align(nb
- m
->topsize
+ SYS_ALLOC_PADDING
);
3993 /* Use mem here only if it did continuously extend old space */
3994 if (asize
< HALF_MAX_SIZE_T
&&
3995 (br
= (char*)(CALL_MORECORE(asize
))) == ss
->base
+ss
->size
) {
4001 if (tbase
== CMFAIL
) { /* Cope with partial failure */
4002 if (br
!= CMFAIL
) { /* Try to use/extend the space we did get */
4003 if (asize
< HALF_MAX_SIZE_T
&&
4004 asize
< nb
+ SYS_ALLOC_PADDING
) {
4005 size_t esize
= granularity_align(nb
+ SYS_ALLOC_PADDING
- asize
);
4006 if (esize
< HALF_MAX_SIZE_T
) {
4007 char* end
= (char*)CALL_MORECORE(esize
);
4010 else { /* Can't use; try to release */
4011 (void) CALL_MORECORE(-asize
);
4017 if (br
!= CMFAIL
) { /* Use the space we did get */
4022 disable_contiguous(m
); /* Don't try contiguous path in the future */
4025 RELEASE_MALLOC_GLOBAL_LOCK();
4028 if (HAVE_MMAP
&& tbase
== CMFAIL
) { /* Try MMAP */
4029 size_t rsize
= granularity_align(nb
+ SYS_ALLOC_PADDING
);
4030 if (rsize
> nb
) { /* Fail if wraps around zero */
4031 char* mp
= (char*)(CALL_MMAP(rsize
));
4035 mmap_flag
= IS_MMAPPED_BIT
;
4040 if (HAVE_MORECORE
&& tbase
== CMFAIL
) { /* Try noncontiguous MORECORE */
4041 size_t asize
= granularity_align(nb
+ SYS_ALLOC_PADDING
);
4042 if (asize
< HALF_MAX_SIZE_T
) {
4045 ACQUIRE_MALLOC_GLOBAL_LOCK();
4046 br
= (char*)(CALL_MORECORE(asize
));
4047 end
= (char*)(CALL_MORECORE(0));
4048 RELEASE_MALLOC_GLOBAL_LOCK();
4049 if (br
!= CMFAIL
&& end
!= CMFAIL
&& br
< end
) {
4050 size_t ssize
= end
- br
;
4051 if (ssize
> nb
+ TOP_FOOT_SIZE
) {
4059 if (tbase
!= CMFAIL
) {
4061 if ((m
->footprint
+= tsize
) > m
->max_footprint
)
4062 m
->max_footprint
= m
->footprint
;
4064 if (!is_initialized(m
)) { /* first-time initialization */
4065 m
->seg
.base
= m
->least_addr
= tbase
;
4066 m
->seg
.size
= tsize
;
4067 m
->seg
.sflags
= mmap_flag
;
4068 m
->magic
= mparams
.magic
;
4069 m
->release_checks
= MAX_RELEASE_CHECK_RATE
;
4073 init_top(m
, (mchunkptr
)tbase
, tsize
- TOP_FOOT_SIZE
);
4077 /* Offset top by embedded malloc_state */
4078 mchunkptr mn
= next_chunk(mem2chunk(m
));
4079 init_top(m
, mn
, (size_t)((tbase
+ tsize
) - (char*)mn
) -TOP_FOOT_SIZE
);
4084 /* Try to merge with an existing segment */
4085 msegmentptr sp
= &m
->seg
;
4086 /* Only consider most recent segment if traversal suppressed */
4087 while (sp
!= 0 && tbase
!= sp
->base
+ sp
->size
)
4088 sp
= (NO_SEGMENT_TRAVERSAL
) ? 0 : sp
->next
;
4090 !is_extern_segment(sp
) &&
4091 (sp
->sflags
& IS_MMAPPED_BIT
) == mmap_flag
&&
4092 segment_holds(sp
, m
->top
)) { /* append */
4094 init_top(m
, m
->top
, m
->topsize
+ tsize
);
4097 if (tbase
< m
->least_addr
)
4098 m
->least_addr
= tbase
;
4100 while (sp
!= 0 && sp
->base
!= tbase
+ tsize
)
4101 sp
= (NO_SEGMENT_TRAVERSAL
) ? 0 : sp
->next
;
4103 !is_extern_segment(sp
) &&
4104 (sp
->sflags
& IS_MMAPPED_BIT
) == mmap_flag
) {
4105 char* oldbase
= sp
->base
;
4108 return prepend_alloc(m
, tbase
, oldbase
, nb
);
4111 add_segment(m
, tbase
, tsize
, mmap_flag
);
4115 if (nb
< m
->topsize
) { /* Allocate from new or extended top space */
4116 size_t rsize
= m
->topsize
-= nb
;
4117 mchunkptr p
= m
->top
;
4118 mchunkptr r
= m
->top
= chunk_plus_offset(p
, nb
);
4119 r
->head
= rsize
| PINUSE_BIT
;
4120 set_size_and_pinuse_of_inuse_chunk(m
, p
, nb
);
4121 check_top_chunk(m
, m
->top
);
4122 check_malloced_chunk(m
, chunk2mem(p
), nb
);
4123 return chunk2mem(p
);
4127 MALLOC_FAILURE_ACTION
;
4131 /* ----------------------- system deallocation -------------------------- */
4133 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4134 static size_t release_unused_segments(mstate m
) {
4135 size_t released
= 0;
4137 msegmentptr pred
= &m
->seg
;
4138 msegmentptr sp
= pred
->next
;
4140 char* base
= sp
->base
;
4141 size_t size
= sp
->size
;
4142 msegmentptr next
= sp
->next
;
4144 if (is_mmapped_segment(sp
) && !is_extern_segment(sp
)) {
4145 mchunkptr p
= align_as_chunk(base
);
4146 size_t psize
= chunksize(p
);
4147 /* Can unmap if first chunk holds entire segment and not pinned */
4148 if (!cinuse(p
) && (char*)p
+ psize
>= base
+ size
- TOP_FOOT_SIZE
) {
4149 tchunkptr tp
= (tchunkptr
)p
;
4150 assert(segment_holds(sp
, (char*)sp
));
4156 unlink_large_chunk(m
, tp
);
4158 if (CALL_MUNMAP(base
, size
) == 0) {
4160 m
->footprint
-= size
;
4161 /* unlink obsoleted record */
4165 else { /* back out if cannot unmap */
4166 insert_large_chunk(m
, tp
, psize
);
4170 if (NO_SEGMENT_TRAVERSAL
) /* scan only first segment */
4175 /* Reset check counter */
4176 m
->release_checks
= ((nsegs
> MAX_RELEASE_CHECK_RATE
)?
4177 nsegs
: MAX_RELEASE_CHECK_RATE
);
4181 static int sys_trim(mstate m
, size_t pad
) {
4182 size_t released
= 0;
4183 ensure_initialization();
4184 if (pad
< MAX_REQUEST
&& is_initialized(m
)) {
4185 pad
+= TOP_FOOT_SIZE
; /* ensure enough room for segment overhead */
4187 if (m
->topsize
> pad
) {
4188 /* Shrink top space in granularity-size units, keeping at least one */
4189 size_t unit
= mparams
.granularity
;
4190 size_t extra
= ((m
->topsize
- pad
+ (unit
- SIZE_T_ONE
)) / unit
-
4192 msegmentptr sp
= segment_holding(m
, (char*)m
->top
);
4194 if (!is_extern_segment(sp
)) {
4195 if (is_mmapped_segment(sp
)) {
4197 sp
->size
>= extra
&&
4198 !has_segment_link(m
, sp
)) { /* can't shrink if pinned */
4199 size_t newsize
= sp
->size
- extra
;
4200 /* Prefer mremap, fall back to munmap */
4201 if ((CALL_MREMAP(sp
->base
, sp
->size
, newsize
, 0) != MFAIL
) ||
4202 (CALL_MUNMAP(sp
->base
+ newsize
, extra
) == 0)) {
4207 else if (HAVE_MORECORE
) {
4208 if (extra
>= HALF_MAX_SIZE_T
) /* Avoid wrapping negative */
4209 extra
= (HALF_MAX_SIZE_T
) + SIZE_T_ONE
- unit
;
4210 ACQUIRE_MALLOC_GLOBAL_LOCK();
4212 /* Make sure end of memory is where we last set it. */
4213 char* old_br
= (char*)(CALL_MORECORE(0));
4214 if (old_br
== sp
->base
+ sp
->size
) {
4215 char* rel_br
= (char*)(CALL_MORECORE(-extra
));
4216 char* new_br
= (char*)(CALL_MORECORE(0));
4217 if (rel_br
!= CMFAIL
&& new_br
< old_br
)
4218 released
= old_br
- new_br
;
4221 RELEASE_MALLOC_GLOBAL_LOCK();
4225 if (released
!= 0) {
4226 sp
->size
-= released
;
4227 m
->footprint
-= released
;
4228 init_top(m
, m
->top
, m
->topsize
- released
);
4229 check_top_chunk(m
, m
->top
);
4233 /* Unmap any unused mmapped segments */
4235 released
+= release_unused_segments(m
);
4237 /* On failure, disable autotrim to avoid repeated failed future calls */
4238 if (released
== 0 && m
->topsize
> m
->trim_check
)
4239 m
->trim_check
= MAX_SIZE_T
;
4242 return (released
!= 0)? 1 : 0;
4246 /* ---------------------------- malloc support --------------------------- */
4248 /* allocate a large request from the best fitting chunk in a treebin */
4249 static void* tmalloc_large(mstate m
, size_t nb
) {
4251 size_t rsize
= -nb
; /* Unsigned negation */
4254 compute_tree_index(nb
, idx
);
4255 if ((t
= *treebin_at(m
, idx
)) != 0) {
4256 /* Traverse tree for this bin looking for node with size == nb */
4257 size_t sizebits
= nb
<< leftshift_for_tree_index(idx
);
4258 tchunkptr rst
= 0; /* The deepest untaken right subtree */
4261 size_t trem
= chunksize(t
) - nb
;
4264 if ((rsize
= trem
) == 0)
4268 t
= t
->child
[(sizebits
>> (SIZE_T_BITSIZE
-SIZE_T_ONE
)) & 1];
4269 if (rt
!= 0 && rt
!= t
)
4272 t
= rst
; /* set t to least subtree holding sizes > nb */
4278 if (t
== 0 && v
== 0) { /* set t to root of next non-empty treebin */
4279 binmap_t leftbits
= left_bits(idx2bit(idx
)) & m
->treemap
;
4280 if (leftbits
!= 0) {
4282 binmap_t leastbit
= least_bit(leftbits
);
4283 compute_bit2idx(leastbit
, i
);
4284 t
= *treebin_at(m
, i
);
4288 while (t
!= 0) { /* find smallest of tree or subtree */
4289 size_t trem
= chunksize(t
) - nb
;
4294 t
= leftmost_child(t
);
4297 /* If dv is a better fit, return 0 so malloc will use it */
4298 if (v
!= 0 && rsize
< (size_t)(m
->dvsize
- nb
)) {
4299 if (RTCHECK(ok_address(m
, v
))) { /* split */
4300 mchunkptr r
= chunk_plus_offset(v
, nb
);
4301 assert(chunksize(v
) == rsize
+ nb
);
4302 if (RTCHECK(ok_next(v
, r
))) {
4303 unlink_large_chunk(m
, v
);
4304 if (rsize
< MIN_CHUNK_SIZE
)
4305 set_inuse_and_pinuse(m
, v
, (rsize
+ nb
));
4307 set_size_and_pinuse_of_inuse_chunk(m
, v
, nb
);
4308 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4309 insert_chunk(m
, r
, rsize
);
4311 return chunk2mem(v
);
4314 CORRUPTION_ERROR_ACTION(m
);
4319 /* allocate a small request from the best fitting chunk in a treebin */
4320 static void* tmalloc_small(mstate m
, size_t nb
) {
4324 binmap_t leastbit
= least_bit(m
->treemap
);
4325 compute_bit2idx(leastbit
, i
);
4326 v
= t
= *treebin_at(m
, i
);
4327 rsize
= chunksize(t
) - nb
;
4329 while ((t
= leftmost_child(t
)) != 0) {
4330 size_t trem
= chunksize(t
) - nb
;
4337 if (RTCHECK(ok_address(m
, v
))) {
4338 mchunkptr r
= chunk_plus_offset(v
, nb
);
4339 assert(chunksize(v
) == rsize
+ nb
);
4340 if (RTCHECK(ok_next(v
, r
))) {
4341 unlink_large_chunk(m
, v
);
4342 if (rsize
< MIN_CHUNK_SIZE
)
4343 set_inuse_and_pinuse(m
, v
, (rsize
+ nb
));
4345 set_size_and_pinuse_of_inuse_chunk(m
, v
, nb
);
4346 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4347 replace_dv(m
, r
, rsize
);
4349 return chunk2mem(v
);
4353 CORRUPTION_ERROR_ACTION(m
);
4357 /* --------------------------- realloc support --------------------------- */
4359 static void* internal_realloc(mstate m
, void* oldmem
, size_t bytes
) {
4360 if (bytes
>= MAX_REQUEST
) {
4361 MALLOC_FAILURE_ACTION
;
4364 if (!PREACTION(m
)) {
4365 mchunkptr oldp
= mem2chunk(oldmem
);
4366 size_t oldsize
= chunksize(oldp
);
4367 mchunkptr next
= chunk_plus_offset(oldp
, oldsize
);
4371 /* Try to either shrink or extend into top. Else malloc-copy-free */
4373 if (RTCHECK(ok_address(m
, oldp
) && ok_cinuse(oldp
) &&
4374 ok_next(oldp
, next
) && ok_pinuse(next
))) {
4375 size_t nb
= request2size(bytes
);
4376 if (is_mmapped(oldp
))
4377 newp
= mmap_resize(m
, oldp
, nb
);
4378 else if (oldsize
>= nb
) { /* already big enough */
4379 size_t rsize
= oldsize
- nb
;
4381 if (rsize
>= MIN_CHUNK_SIZE
) {
4382 mchunkptr remainder
= chunk_plus_offset(newp
, nb
);
4383 set_inuse(m
, newp
, nb
);
4384 set_inuse(m
, remainder
, rsize
);
4385 extra
= chunk2mem(remainder
);
4388 else if (next
== m
->top
&& oldsize
+ m
->topsize
> nb
) {
4389 /* Expand into top */
4390 size_t newsize
= oldsize
+ m
->topsize
;
4391 size_t newtopsize
= newsize
- nb
;
4392 mchunkptr newtop
= chunk_plus_offset(oldp
, nb
);
4393 set_inuse(m
, oldp
, nb
);
4394 newtop
->head
= newtopsize
|PINUSE_BIT
;
4396 m
->topsize
= newtopsize
;
4401 USAGE_ERROR_ACTION(m
, oldmem
);
4410 internal_free(m
, extra
);
4412 check_inuse_chunk(m
, newp
);
4413 return chunk2mem(newp
);
4416 void* newmem
= internal_malloc(m
, bytes
);
4418 size_t oc
= oldsize
- overhead_for(oldp
);
4419 memcpy(newmem
, oldmem
, (oc
< bytes
)? oc
: bytes
);
4420 internal_free(m
, oldmem
);
4428 /* --------------------------- memalign support -------------------------- */
4430 static void* internal_memalign(mstate m
, size_t alignment
, size_t bytes
) {
4431 if (alignment
<= MALLOC_ALIGNMENT
) /* Can just use malloc */
4432 return internal_malloc(m
, bytes
);
4433 if (alignment
< MIN_CHUNK_SIZE
) /* must be at least a minimum chunk size */
4434 alignment
= MIN_CHUNK_SIZE
;
4435 if ((alignment
& (alignment
-SIZE_T_ONE
)) != 0) {/* Ensure a power of 2 */
4436 size_t a
= MALLOC_ALIGNMENT
<< 1;
4437 while (a
< alignment
) a
<<= 1;
4441 if (bytes
>= MAX_REQUEST
- alignment
) {
4442 if (m
!= 0) { /* Test isn't needed but avoids compiler warning */
4443 MALLOC_FAILURE_ACTION
;
4447 size_t nb
= request2size(bytes
);
4448 size_t req
= nb
+ alignment
+ MIN_CHUNK_SIZE
- CHUNK_OVERHEAD
;
4449 char* mem
= (char*)internal_malloc(m
, req
);
4453 mchunkptr p
= mem2chunk(mem
);
4455 if (PREACTION(m
)) return 0;
4456 if ((((size_t)(mem
)) % alignment
) != 0) { /* misaligned */
4458 Find an aligned spot inside chunk. Since we need to give
4459 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4460 the first calculation places us at a spot with less than
4461 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4462 We've allocated enough total room so that this is always
4465 char* br
= (char*)mem2chunk((size_t)(((size_t)(mem
+
4469 char* pos
= ((size_t)(br
- (char*)(p
)) >= MIN_CHUNK_SIZE
)?
4471 mchunkptr newp
= (mchunkptr
)pos
;
4472 size_t leadsize
= pos
- (char*)(p
);
4473 size_t newsize
= chunksize(p
) - leadsize
;
4475 if (is_mmapped(p
)) { /* For mmapped chunks, just adjust offset */
4476 newp
->prev_foot
= p
->prev_foot
+ leadsize
;
4477 newp
->head
= (newsize
|CINUSE_BIT
);
4479 else { /* Otherwise, give back leader, use the rest */
4480 set_inuse(m
, newp
, newsize
);
4481 set_inuse(m
, p
, leadsize
);
4482 leader
= chunk2mem(p
);
4487 /* Give back spare room at the end */
4488 if (!is_mmapped(p
)) {
4489 size_t size
= chunksize(p
);
4490 if (size
> nb
+ MIN_CHUNK_SIZE
) {
4491 size_t remainder_size
= size
- nb
;
4492 mchunkptr remainder
= chunk_plus_offset(p
, nb
);
4493 set_inuse(m
, p
, nb
);
4494 set_inuse(m
, remainder
, remainder_size
);
4495 trailer
= chunk2mem(remainder
);
4499 assert (chunksize(p
) >= nb
);
4500 assert((((size_t)(chunk2mem(p
))) % alignment
) == 0);
4501 check_inuse_chunk(m
, p
);
4504 internal_free(m
, leader
);
4507 internal_free(m
, trailer
);
4509 return chunk2mem(p
);
4515 /* ------------------------ comalloc/coalloc support --------------------- */
static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */
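  /*
    Illustrative mapping (an explanatory addition; see the public
    wrappers later in this file): dlindependent_calloc calls ialloc with
    opts == 3 (same size, zeroed) and sizes pointing at a 1-element
    array, while dlindependent_comalloc calls it with opts == 0 and a
    caller-supplied sizes[] array.  A caller wanting three separately
    freeable, zero-filled objects of one (hypothetical) struct type
    would write:

      void* chunks[3];
      if (dlindependent_calloc(3, sizeof(struct node), chunks) != 0) {
        struct node* a = (struct node*)chunks[0];  // likewise chunks[1], chunks[2]
        ...                                        // each may be passed to dlfree
      }
  */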
4531 size_t element_size
; /* chunksize of each element, if all same */
4532 size_t contents_size
; /* total size of elements */
4533 size_t array_size
; /* request size of pointer array */
4534 void* mem
; /* malloced aggregate space */
4535 mchunkptr p
; /* corresponding chunk */
4536 size_t remainder_size
; /* remaining bytes while splitting */
4537 void** marray
; /* either "chunks" or malloced ptr array */
4538 mchunkptr array_chunk
; /* chunk for malloced ptr array */
4539 flag_t was_enabled
; /* to disable mmap */
4543 ensure_initialization();
4544 /* compute array length, if needed */
4546 if (n_elements
== 0)
4547 return chunks
; /* nothing to do */
4552 /* if empty req, must still return chunk representing empty array */
4553 if (n_elements
== 0)
4554 return (void**)internal_malloc(m
, 0);
4556 array_size
= request2size(n_elements
* (sizeof(void*)));
4559 /* compute total element size */
4560 if (opts
& 0x1) { /* all-same-size */
4561 element_size
= request2size(*sizes
);
4562 contents_size
= n_elements
* element_size
;
4564 else { /* add up all the sizes */
4567 for (i
= 0; i
!= n_elements
; ++i
)
4568 contents_size
+= request2size(sizes
[i
]);
4571 size
= contents_size
+ array_size
;
4574 Allocate the aggregate chunk. First disable direct-mmapping so
4575 malloc won't use it, since we would not be able to later
4576 free/realloc space internal to a segregated mmap region.
4578 was_enabled
= use_mmap(m
);
4580 mem
= internal_malloc(m
, size
- CHUNK_OVERHEAD
);
4586 if (PREACTION(m
)) return 0;
4588 remainder_size
= chunksize(p
);
4590 assert(!is_mmapped(p
));
4592 if (opts
& 0x2) { /* optionally clear the elements */
4593 memset((size_t*)mem
, 0, remainder_size
- SIZE_T_SIZE
- array_size
);
4596 /* If not provided, allocate the pointer array as final part of chunk */
4598 size_t array_chunk_size
;
4599 array_chunk
= chunk_plus_offset(p
, contents_size
);
4600 array_chunk_size
= remainder_size
- contents_size
;
4601 marray
= (void**) (chunk2mem(array_chunk
));
4602 set_size_and_pinuse_of_inuse_chunk(m
, array_chunk
, array_chunk_size
);
4603 remainder_size
= contents_size
;
4606 /* split out elements */
4607 for (i
= 0; ; ++i
) {
4608 marray
[i
] = chunk2mem(p
);
4609 if (i
!= n_elements
-1) {
4610 if (element_size
!= 0)
4611 size
= element_size
;
4613 size
= request2size(sizes
[i
]);
4614 remainder_size
-= size
;
4615 set_size_and_pinuse_of_inuse_chunk(m
, p
, size
);
4616 p
= chunk_plus_offset(p
, size
);
4618 else { /* the final element absorbs any overallocation slop */
4619 set_size_and_pinuse_of_inuse_chunk(m
, p
, remainder_size
);
4625 if (marray
!= chunks
) {
4626 /* final element must have exactly exhausted chunk */
4627 if (element_size
!= 0) {
4628 assert(remainder_size
== element_size
);
4631 assert(remainder_size
== request2size(sizes
[i
]));
4633 check_inuse_chunk(m
, mem2chunk(marray
));
4635 for (i
= 0; i
!= n_elements
; ++i
)
4636 check_inuse_chunk(m
, mem2chunk(marray
[i
]));
4645 /* -------------------------- public routines ---------------------------- */
4649 void* dlmalloc(size_t bytes
) {
4652 If a small request (< 256 bytes minus per-chunk overhead):
4653 1. If one exists, use a remainderless chunk in associated smallbin.
4654 (Remainderless means that there are too few excess bytes to
4655 represent as a chunk.)
4656 2. If it is big enough, use the dv chunk, which is normally the
4657 chunk adjacent to the one used for the most recent small request.
4658 3. If one exists, split the smallest available chunk in a bin,
4659 saving remainder in dv.
4660 4. If it is big enough, use the top chunk.
4661 5. If available, get memory from system and use it
4662 Otherwise, for a large request:
4663 1. Find the smallest available binned chunk that fits, and use it
4664 if it is better fitting than dv chunk, splitting if necessary.
4665 2. If better fitting than any binned chunk, use the dv chunk.
4666 3. If it is big enough, use the top chunk.
4667 4. If request size >= mmap threshold, try to directly mmap this chunk.
4668 5. If available, get memory from system and use it
4670 The ugly goto's here ensure that postaction occurs along all paths.
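
/*
  Worked example (an added illustration; assumes 4-byte size_t, 8-byte
  alignment and FOOTERS off, so chunk overhead is one word): a request
  for 100 bytes is a small request, and

    nb  = pad_request(100) = (100 + 4 + 7) & ~7 = 104
    idx = small_index(104) = 104 >> 3           = 13

  so step 1 looks for a remainderless fit in smallbins 13 and 14;
  failing that, the dv chunk, a larger smallbin or treebin, the top
  chunk, and finally sys_alloc are tried in the order listed above.
*/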
4674 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4677 if (!PREACTION(gm
)) {
4680 if (bytes
<= MAX_SMALL_REQUEST
) {
4683 nb
= (bytes
< MIN_REQUEST
)? MIN_CHUNK_SIZE
: pad_request(bytes
);
4684 idx
= small_index(nb
);
4685 smallbits
= gm
->smallmap
>> idx
;
4687 if ((smallbits
& 0x3U
) != 0) { /* Remainderless fit to a smallbin. */
4689 idx
+= ~smallbits
& 1; /* Uses next bin if idx empty */
4690 b
= smallbin_at(gm
, idx
);
4692 assert(chunksize(p
) == small_index2size(idx
));
4693 unlink_first_small_chunk(gm
, b
, p
, idx
);
4694 set_inuse_and_pinuse(gm
, p
, small_index2size(idx
));
4696 check_malloced_chunk(gm
, mem
, nb
);
4700 else if (nb
> gm
->dvsize
) {
4701 if (smallbits
!= 0) { /* Use chunk in next nonempty smallbin */
4705 binmap_t leftbits
= (smallbits
<< idx
) & left_bits(idx2bit(idx
));
4706 binmap_t leastbit
= least_bit(leftbits
);
4707 compute_bit2idx(leastbit
, i
);
4708 b
= smallbin_at(gm
, i
);
4710 assert(chunksize(p
) == small_index2size(i
));
4711 unlink_first_small_chunk(gm
, b
, p
, i
);
4712 rsize
= small_index2size(i
) - nb
;
4713 /* Fit here cannot be remainderless if 4byte sizes */
4714 if (SIZE_T_SIZE
!= 4 && rsize
< MIN_CHUNK_SIZE
)
4715 set_inuse_and_pinuse(gm
, p
, small_index2size(i
));
4717 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4718 r
= chunk_plus_offset(p
, nb
);
4719 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4720 replace_dv(gm
, r
, rsize
);
4723 check_malloced_chunk(gm
, mem
, nb
);
4727 else if (gm
->treemap
!= 0 && (mem
= tmalloc_small(gm
, nb
)) != 0) {
4728 check_malloced_chunk(gm
, mem
, nb
);
4733 else if (bytes
>= MAX_REQUEST
)
4734 nb
= MAX_SIZE_T
; /* Too big to allocate. Force failure (in sys alloc) */
4736 nb
= pad_request(bytes
);
4737 if (gm
->treemap
!= 0 && (mem
= tmalloc_large(gm
, nb
)) != 0) {
4738 check_malloced_chunk(gm
, mem
, nb
);
4743 if (nb
<= gm
->dvsize
) {
4744 size_t rsize
= gm
->dvsize
- nb
;
4745 mchunkptr p
= gm
->dv
;
4746 if (rsize
>= MIN_CHUNK_SIZE
) { /* split dv */
4747 mchunkptr r
= gm
->dv
= chunk_plus_offset(p
, nb
);
4749 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4750 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4752 else { /* exhaust dv */
4753 size_t dvs
= gm
->dvsize
;
4756 set_inuse_and_pinuse(gm
, p
, dvs
);
4759 check_malloced_chunk(gm
, mem
, nb
);
4763 else if (nb
< gm
->topsize
) { /* Split top */
4764 size_t rsize
= gm
->topsize
-= nb
;
4765 mchunkptr p
= gm
->top
;
4766 mchunkptr r
= gm
->top
= chunk_plus_offset(p
, nb
);
4767 r
->head
= rsize
| PINUSE_BIT
;
4768 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4770 check_top_chunk(gm
, gm
->top
);
4771 check_malloced_chunk(gm
, mem
, nb
);
4775 mem
= sys_alloc(gm
, nb
);
4785 void dlfree(void* mem
) {
4787 Consolidate freed chunks with preceding or succeeding bordering
4788 free chunks, if they exist, and then place in a bin. Intermixed
4789 with special cases for top, dv, mmapped chunks, and usage errors.
4793 mchunkptr p
= mem2chunk(mem
);
4795 mstate fm
= get_mstate_for(p
);
4796 if (!ok_magic(fm
)) {
4797 USAGE_ERROR_ACTION(fm
, p
);
4802 #endif /* FOOTERS */
4803 if (!PREACTION(fm
)) {
4804 check_inuse_chunk(fm
, p
);
4805 if (RTCHECK(ok_address(fm
, p
) && ok_cinuse(p
))) {
4806 size_t psize
= chunksize(p
);
4807 mchunkptr next
= chunk_plus_offset(p
, psize
);
4809 size_t prevsize
= p
->prev_foot
;
4810 if ((prevsize
& IS_MMAPPED_BIT
) != 0) {
4811 prevsize
&= ~IS_MMAPPED_BIT
;
4812 psize
+= prevsize
+ MMAP_FOOT_PAD
;
4813 if (CALL_MUNMAP((char*)p
- prevsize
, psize
) == 0)
4814 fm
->footprint
-= psize
;
4818 mchunkptr prev
= chunk_minus_offset(p
, prevsize
);
4821 if (RTCHECK(ok_address(fm
, prev
))) { /* consolidate backward */
4823 unlink_chunk(fm
, p
, prevsize
);
4825 else if ((next
->head
& INUSE_BITS
) == INUSE_BITS
) {
4827 set_free_with_pinuse(p
, psize
, next
);
4836 if (RTCHECK(ok_next(p
, next
) && ok_pinuse(next
))) {
4837 if (!cinuse(next
)) { /* consolidate forward */
4838 if (next
== fm
->top
) {
4839 size_t tsize
= fm
->topsize
+= psize
;
4841 p
->head
= tsize
| PINUSE_BIT
;
4846 if (should_trim(fm
, tsize
))
4850 else if (next
== fm
->dv
) {
4851 size_t dsize
= fm
->dvsize
+= psize
;
4853 set_size_and_pinuse_of_free_chunk(p
, dsize
);
4857 size_t nsize
= chunksize(next
);
4859 unlink_chunk(fm
, next
, nsize
);
4860 set_size_and_pinuse_of_free_chunk(p
, psize
);
4868 set_free_with_pinuse(p
, psize
, next
);
4870 if (is_small(psize
)) {
4871 insert_small_chunk(fm
, p
, psize
);
4872 check_free_chunk(fm
, p
);
4875 tchunkptr tp
= (tchunkptr
)p
;
4876 insert_large_chunk(fm
, tp
, psize
);
4877 check_free_chunk(fm
, p
);
4878 if (--fm
->release_checks
== 0)
4879 release_unused_segments(fm
);
4885 USAGE_ERROR_ACTION(fm
, p
);
4892 #endif /* FOOTERS */
void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}
void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

int dlmalloc_trim(size_t pad) {
  int result = 0;
  ensure_initialization();
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */
size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}
5005 /* ----------------------------- user mspaces ---------------------------- */
5009 static mstate
init_user_mstate(char* tbase
, size_t tsize
) {
5010 size_t msize
= pad_request(sizeof(struct malloc_state
));
5012 mchunkptr msp
= align_as_chunk(tbase
);
5013 mstate m
= (mstate
)(chunk2mem(msp
));
5014 memset(m
, 0, msize
);
5015 INITIAL_LOCK(&m
->mutex
);
5016 msp
->head
= (msize
|PINUSE_BIT
|CINUSE_BIT
);
5017 m
->seg
.base
= m
->least_addr
= tbase
;
5018 m
->seg
.size
= m
->footprint
= m
->max_footprint
= tsize
;
5019 m
->magic
= mparams
.magic
;
5020 m
->release_checks
= MAX_RELEASE_CHECK_RATE
;
5021 m
->mflags
= mparams
.default_mflags
;
5024 disable_contiguous(m
);
5026 mn
= next_chunk(mem2chunk(m
));
5027 init_top(m
, mn
, (size_t)((tbase
+ tsize
) - (char*)mn
) - TOP_FOOT_SIZE
);
5028 check_top_chunk(m
, m
->top
);
5032 mspace
create_mspace(size_t capacity
, int locked
) {
5035 ensure_initialization();
5036 msize
= pad_request(sizeof(struct malloc_state
));
5037 if (capacity
< (size_t) -(msize
+ TOP_FOOT_SIZE
+ mparams
.page_size
)) {
5038 size_t rs
= ((capacity
== 0)? mparams
.granularity
:
5039 (capacity
+ TOP_FOOT_SIZE
+ msize
));
5040 size_t tsize
= granularity_align(rs
);
5041 char* tbase
= (char*)(CALL_MMAP(tsize
));
5042 if (tbase
!= CMFAIL
) {
5043 m
= init_user_mstate(tbase
, tsize
);
5044 m
->seg
.sflags
= IS_MMAPPED_BIT
;
5045 set_lock(m
, locked
);
5051 mspace
create_mspace_with_base(void* base
, size_t capacity
, int locked
) {
5054 ensure_initialization();
5055 msize
= pad_request(sizeof(struct malloc_state
));
5056 if (capacity
> msize
+ TOP_FOOT_SIZE
&&
5057 capacity
< (size_t) -(msize
+ TOP_FOOT_SIZE
+ mparams
.page_size
)) {
5058 m
= init_user_mstate((char*)base
, capacity
);
5059 m
->seg
.sflags
= EXTERN_BIT
;
5060 set_lock(m
, locked
);
5065 int mspace_mmap_large_chunks(mspace msp
, int enable
) {
5067 mstate ms
= (mstate
)msp
;
5068 if (!PREACTION(ms
)) {
5080 size_t destroy_mspace(mspace msp
) {
5082 mstate ms
= (mstate
)msp
;
5084 msegmentptr sp
= &ms
->seg
;
5086 char* base
= sp
->base
;
5087 size_t size
= sp
->size
;
5088 flag_t flag
= sp
->sflags
;
5090 if ((flag
& IS_MMAPPED_BIT
) && !(flag
& EXTERN_BIT
) &&
5091 CALL_MUNMAP(base
, size
) == 0)
5096 USAGE_ERROR_ACTION(ms
,ms
);
5102 mspace versions of routines are near-clones of the global
5103 versions. This is not so nice but better than the alternatives.
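
/*
  Illustrative usage (an added example, not part of the library): a
  private pool created with create_mspace is passed as the first
  argument to each mspace_ routine and can be torn down in one call.
  The capacity and lock arguments below are arbitrary.

    mspace msp = create_mspace(0, 1);      // default capacity, locking on
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      mspace_free(msp, p);
      destroy_mspace(msp);                 // releases all pool memory at once
    }
*/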
5107 void* mspace_malloc(mspace msp
, size_t bytes
) {
5108 mstate ms
= (mstate
)msp
;
5109 if (!ok_magic(ms
)) {
5110 USAGE_ERROR_ACTION(ms
,ms
);
5113 if (!PREACTION(ms
)) {
5116 if (bytes
<= MAX_SMALL_REQUEST
) {
5119 nb
= (bytes
< MIN_REQUEST
)? MIN_CHUNK_SIZE
: pad_request(bytes
);
5120 idx
= small_index(nb
);
5121 smallbits
= ms
->smallmap
>> idx
;
5123 if ((smallbits
& 0x3U
) != 0) { /* Remainderless fit to a smallbin. */
5125 idx
+= ~smallbits
& 1; /* Uses next bin if idx empty */
5126 b
= smallbin_at(ms
, idx
);
5128 assert(chunksize(p
) == small_index2size(idx
));
5129 unlink_first_small_chunk(ms
, b
, p
, idx
);
5130 set_inuse_and_pinuse(ms
, p
, small_index2size(idx
));
5132 check_malloced_chunk(ms
, mem
, nb
);
5136 else if (nb
> ms
->dvsize
) {
5137 if (smallbits
!= 0) { /* Use chunk in next nonempty smallbin */
5141 binmap_t leftbits
= (smallbits
<< idx
) & left_bits(idx2bit(idx
));
5142 binmap_t leastbit
= least_bit(leftbits
);
5143 compute_bit2idx(leastbit
, i
);
5144 b
= smallbin_at(ms
, i
);
5146 assert(chunksize(p
) == small_index2size(i
));
5147 unlink_first_small_chunk(ms
, b
, p
, i
);
5148 rsize
= small_index2size(i
) - nb
;
5149 /* Fit here cannot be remainderless if 4byte sizes */
5150 if (SIZE_T_SIZE
!= 4 && rsize
< MIN_CHUNK_SIZE
)
5151 set_inuse_and_pinuse(ms
, p
, small_index2size(i
));
5153 set_size_and_pinuse_of_inuse_chunk(ms
, p
, nb
);
5154 r
= chunk_plus_offset(p
, nb
);
5155 set_size_and_pinuse_of_free_chunk(r
, rsize
);
5156 replace_dv(ms
, r
, rsize
);
5159 check_malloced_chunk(ms
, mem
, nb
);
5163 else if (ms
->treemap
!= 0 && (mem
= tmalloc_small(ms
, nb
)) != 0) {
5164 check_malloced_chunk(ms
, mem
, nb
);
5169 else if (bytes
>= MAX_REQUEST
)
5170 nb
= MAX_SIZE_T
; /* Too big to allocate. Force failure (in sys alloc) */
5172 nb
= pad_request(bytes
);
5173 if (ms
->treemap
!= 0 && (mem
= tmalloc_large(ms
, nb
)) != 0) {
5174 check_malloced_chunk(ms
, mem
, nb
);
5179 if (nb
<= ms
->dvsize
) {
5180 size_t rsize
= ms
->dvsize
- nb
;
5181 mchunkptr p
= ms
->dv
;
5182 if (rsize
>= MIN_CHUNK_SIZE
) { /* split dv */
5183 mchunkptr r
= ms
->dv
= chunk_plus_offset(p
, nb
);
5185 set_size_and_pinuse_of_free_chunk(r
, rsize
);
5186 set_size_and_pinuse_of_inuse_chunk(ms
, p
, nb
);
5188 else { /* exhaust dv */
5189 size_t dvs
= ms
->dvsize
;
5192 set_inuse_and_pinuse(ms
, p
, dvs
);
5195 check_malloced_chunk(ms
, mem
, nb
);
5199 else if (nb
< ms
->topsize
) { /* Split top */
5200 size_t rsize
= ms
->topsize
-= nb
;
5201 mchunkptr p
= ms
->top
;
5202 mchunkptr r
= ms
->top
= chunk_plus_offset(p
, nb
);
5203 r
->head
= rsize
| PINUSE_BIT
;
5204 set_size_and_pinuse_of_inuse_chunk(ms
, p
, nb
);
5206 check_top_chunk(ms
, ms
->top
);
5207 check_malloced_chunk(ms
, mem
, nb
);
5211 mem
= sys_alloc(ms
, nb
);
5221 void mspace_free(mspace msp
, void* mem
) {
5223 mchunkptr p
= mem2chunk(mem
);
5225 mstate fm
= get_mstate_for(p
);
5227 mstate fm
= (mstate
)msp
;
5228 #endif /* FOOTERS */
5229 if (!ok_magic(fm
)) {
5230 USAGE_ERROR_ACTION(fm
, p
);
5233 if (!PREACTION(fm
)) {
5234 check_inuse_chunk(fm
, p
);
5235 if (RTCHECK(ok_address(fm
, p
) && ok_cinuse(p
))) {
5236 size_t psize
= chunksize(p
);
5237 mchunkptr next
= chunk_plus_offset(p
, psize
);
5239 size_t prevsize
= p
->prev_foot
;
5240 if ((prevsize
& IS_MMAPPED_BIT
) != 0) {
5241 prevsize
&= ~IS_MMAPPED_BIT
;
5242 psize
+= prevsize
+ MMAP_FOOT_PAD
;
5243 if (CALL_MUNMAP((char*)p
- prevsize
, psize
) == 0)
5244 fm
->footprint
-= psize
;
5248 mchunkptr prev
= chunk_minus_offset(p
, prevsize
);
5251 if (RTCHECK(ok_address(fm
, prev
))) { /* consolidate backward */
5253 unlink_chunk(fm
, p
, prevsize
);
5255 else if ((next
->head
& INUSE_BITS
) == INUSE_BITS
) {
5257 set_free_with_pinuse(p
, psize
, next
);
5266 if (RTCHECK(ok_next(p
, next
) && ok_pinuse(next
))) {
5267 if (!cinuse(next
)) { /* consolidate forward */
5268 if (next
== fm
->top
) {
5269 size_t tsize
= fm
->topsize
+= psize
;
5271 p
->head
= tsize
| PINUSE_BIT
;
5276 if (should_trim(fm
, tsize
))
5280 else if (next
== fm
->dv
) {
5281 size_t dsize
= fm
->dvsize
+= psize
;
5283 set_size_and_pinuse_of_free_chunk(p
, dsize
);
5287 size_t nsize
= chunksize(next
);
5289 unlink_chunk(fm
, next
, nsize
);
5290 set_size_and_pinuse_of_free_chunk(p
, psize
);
5298 set_free_with_pinuse(p
, psize
, next
);
5300 if (is_small(psize
)) {
5301 insert_small_chunk(fm
, p
, psize
);
5302 check_free_chunk(fm
, p
);
5305 tchunkptr tp
= (tchunkptr
)p
;
5306 insert_large_chunk(fm
, tp
, psize
);
5307 check_free_chunk(fm
, p
);
5308 if (--fm
->release_checks
== 0)
5309 release_unused_segments(fm
);
5315 USAGE_ERROR_ACTION(fm
, p
);
5322 void* mspace_calloc(mspace msp
, size_t n_elements
, size_t elem_size
) {
5325 mstate ms
= (mstate
)msp
;
5326 if (!ok_magic(ms
)) {
5327 USAGE_ERROR_ACTION(ms
,ms
);
5330 if (n_elements
!= 0) {
5331 req
= n_elements
* elem_size
;
5332 if (((n_elements
| elem_size
) & ~(size_t)0xffff) &&
5333 (req
/ n_elements
!= elem_size
))
5334 req
= MAX_SIZE_T
; /* force downstream failure on overflow */
5336 mem
= internal_malloc(ms
, req
);
5337 if (mem
!= 0 && calloc_must_clear(mem2chunk(mem
)))
5338 memset(mem
, 0, req
);
5342 void* mspace_realloc(mspace msp
, void* oldmem
, size_t bytes
) {
5344 return mspace_malloc(msp
, bytes
);
5345 #ifdef REALLOC_ZERO_BYTES_FREES
5347 mspace_free(msp
, oldmem
);
5350 #endif /* REALLOC_ZERO_BYTES_FREES */
5353 mchunkptr p
= mem2chunk(oldmem
);
5354 mstate ms
= get_mstate_for(p
);
5356 mstate ms
= (mstate
)msp
;
5357 #endif /* FOOTERS */
5358 if (!ok_magic(ms
)) {
5359 USAGE_ERROR_ACTION(ms
,ms
);
5362 return internal_realloc(ms
, oldmem
, bytes
);
5366 void* mspace_memalign(mspace msp
, size_t alignment
, size_t bytes
) {
5367 mstate ms
= (mstate
)msp
;
5368 if (!ok_magic(ms
)) {
5369 USAGE_ERROR_ACTION(ms
,ms
);
5372 return internal_memalign(ms
, alignment
, bytes
);
5375 void** mspace_independent_calloc(mspace msp
, size_t n_elements
,
5376 size_t elem_size
, void* chunks
[]) {
5377 size_t sz
= elem_size
; /* serves as 1-element array */
5378 mstate ms
= (mstate
)msp
;
5379 if (!ok_magic(ms
)) {
5380 USAGE_ERROR_ACTION(ms
,ms
);
5383 return ialloc(ms
, n_elements
, &sz
, 3, chunks
);
5386 void** mspace_independent_comalloc(mspace msp
, size_t n_elements
,
5387 size_t sizes
[], void* chunks
[]) {
5388 mstate ms
= (mstate
)msp
;
5389 if (!ok_magic(ms
)) {
5390 USAGE_ERROR_ACTION(ms
,ms
);
5393 return ialloc(ms
, n_elements
, sizes
, 0, chunks
);
5396 int mspace_trim(mspace msp
, size_t pad
) {
5398 mstate ms
= (mstate
)msp
;
5400 if (!PREACTION(ms
)) {
5401 result
= sys_trim(ms
, pad
);
5406 USAGE_ERROR_ACTION(ms
,ms
);
5411 void mspace_malloc_stats(mspace msp
) {
5412 mstate ms
= (mstate
)msp
;
5414 internal_malloc_stats(ms
);
5417 USAGE_ERROR_ACTION(ms
,ms
);
5421 size_t mspace_footprint(mspace msp
) {
5423 mstate ms
= (mstate
)msp
;
5425 result
= ms
->footprint
;
5428 USAGE_ERROR_ACTION(ms
,ms
);
5434 size_t mspace_max_footprint(mspace msp
) {
5436 mstate ms
= (mstate
)msp
;
5438 result
= ms
->max_footprint
;
5441 USAGE_ERROR_ACTION(ms
,ms
);
5448 struct mallinfo
mspace_mallinfo(mspace msp
) {
5449 mstate ms
= (mstate
)msp
;
5450 if (!ok_magic(ms
)) {
5451 USAGE_ERROR_ACTION(ms
,ms
);
5453 return internal_mallinfo(ms
);
5455 #endif /* NO_MALLINFO */
5457 size_t mspace_usable_size(void* mem
) {
5459 mchunkptr p
= mem2chunk(mem
);
5461 return chunksize(p
) - overhead_for(p
);
5466 int mspace_mallopt(int param_number
, int value
) {
5467 return change_mparam(param_number
, value
);
5470 #endif /* MSPACES */
5472 /* -------------------- Alternative MORECORE functions ------------------- */
5475 Guidelines for creating a custom version of MORECORE:
5477 * For best performance, MORECORE should allocate in multiples of pagesize.
5478 * MORECORE may allocate more memory than requested. (Or even less,
5479 but this will usually result in a malloc failure.)
5480 * MORECORE must not allocate memory when given argument zero, but
    instead return one past the end address of memory from previous nonzero call.
5483 * For best performance, consecutive calls to MORECORE with positive
5484 arguments should return increasing addresses, indicating that
5485 space has been contiguously extended.
5486 * Even though consecutive calls to MORECORE need not return contiguous
5487 addresses, it must be OK for malloc'ed chunks to span multiple
5488 regions in those cases where they do happen to be contiguous.
5489 * MORECORE need not handle negative arguments -- it may instead
5490 just return MFAIL when given negative arguments.
5491 Negative arguments are always multiples of pagesize. MORECORE
5492 must not misinterpret negative args as large positive unsigned
5493 args. You can suppress all such calls from even occurring by defining
5494 MORECORE_CANNOT_TRIM,
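
  Before the contributed example below, here is a minimal sketch (added for
  illustration; my_morecore, my_arena, MY_ARENA_SIZE and my_brk are made-up
  names, not part of this malloc) of a MORECORE that follows the rules above
  by handing out consecutive pieces of a single static arena:

    #define MY_ARENA_SIZE (1024 * 1024)
    static char my_arena[MY_ARENA_SIZE];
    static size_t my_brk = 0;              // bytes handed out so far

    void *my_morecore(int size)
    {
      if (size == 0)                       // report end of prior memory
        return my_arena + my_brk;
      if (size < 0)                        // trimming not supported here
        return (void *) MFAIL;
      if ((size_t) size > MY_ARENA_SIZE - my_brk)
        return (void *) MFAIL;             // arena exhausted
      else {
        void *prev_end = my_arena + my_brk;
        my_brk += (size_t) size;           // contiguous extension
        return prev_end;
      }
    }

  It would be enabled with something like
    #define MORECORE my_morecore
    #define MORECORE_CANNOT_TRIM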

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines.

  #define MORECORE osMoreCore
  #define MORECORE_CONTIGUOUS 0

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.
  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
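
/*
  One way (an illustrative assumption, not part of the contributed code) to
  arrange that osCleanupMem runs as the shutdown routine mentioned above is
  the standard atexit hook, registered early in the program:

    #include <stdlib.h>

    int main(void) {
      atexit(osCleanupMem);   // run pool cleanup when the program exits
      // ... application code using this malloc ...
      return 0;
    }
*/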
/* -----------------------------------------------------------------------
History:
    V2.8.4 (not yet released)
      * Add mspace_mmap_large_chunks; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc, courtesy of Wolfram Gloger.
      * Reentrant spin locks, courtesy of Earl Chew and others
      * Win32 improvements, courtesy of Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/