mono/utils/dlmalloc.c
1 /*
2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 * Modifications made to the original version for mono:
16 * - added PROT_EXEC to MMAP_PROT
17 * - added PAGE_EXECUTE_READWRITE to the win32mmap and win32direct_mmap
18 * - a large portion of functions is #ifdef'ed out to make the native code smaller
19 * - the defines below
22 #define USE_DL_PREFIX 1
23 #define USE_LOCKS 1
24 /* Use mmap for allocating memory */
25 #define HAVE_MORECORE 0
26 #define NO_MALLINFO 1
27 #include <mono/utils/dlmalloc.h>
30 * Quickstart
32 This library is all in one file to simplify the most common usage:
33 ftp it, compile it (-O3), and link it into another program. All of
34 the compile-time options default to reasonable values for use on
35 most platforms. You might later want to step through various
36 compile-time and dynamic tuning options.
38 For convenience, an include file for code using this malloc is at:
39 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
40 You don't really need this .h file unless you call functions not
41 defined in your system include files. The .h file contains only the
42 excerpts from this file needed for using this malloc on ANSI C/C++
43 systems, so long as you haven't changed compile-time options about
44 naming and tuning parameters. If you do, then you can create your
45 own malloc.h that does include all settings by cutting at the point
46 indicated below. Note that you may already by default be using a C
47 library containing a malloc that is based on some version of this
48 malloc (for example in linux). You might still want to use the one
49 in this file to customize settings or to avoid overheads associated
50 with library versions.
52 * Vital statistics:
54 Supported pointer/size_t representation: 4 or 8 bytes
55 size_t MUST be an unsigned type of the same width as
56 pointers. (If you are using an ancient system that declares
57 size_t as a signed type, or need it to be a different width
58 than pointers, you can use a previous release of this malloc
59 (e.g. 2.7.2) supporting these.)
61 Alignment: 8 bytes (default)
62 This suffices for nearly all current machines and C compilers.
63 However, you can define MALLOC_ALIGNMENT to be wider than this
64 if necessary (up to 128 bytes), at the expense of using more space.
66 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
67 8 or 16 bytes (if 8byte sizes)
68 Each malloced chunk has a hidden word of overhead holding size
69 and status information, and additional cross-check word
70 if FOOTERS is defined.
72 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
73 8-byte ptrs: 32 bytes (including overhead)
75 Even a request for zero bytes (i.e., malloc(0)) returns a
76 pointer to something of the minimum allocatable size.
77 The maximum overhead wastage (i.e., number of extra bytes
78 allocated than were requested in malloc) is less than or equal
79 to the minimum size, except for requests >= mmap_threshold that
80 are serviced via mmap(), where the worst case wastage is about
81 32 bytes plus the remainder from a system page (the minimal
82 mmap unit); typically 4096 or 8192 bytes.
84 Security: static-safe; optionally more or less
85 The "security" of malloc refers to the ability of malicious
86 code to accentuate the effects of errors (for example, freeing
87 space that is not currently malloc'ed or overwriting past the
88 ends of chunks) in code that calls malloc. This malloc
89 guarantees not to modify any memory locations below the base of
90 heap, i.e., static variables, even in the presence of usage
91 errors. The routines additionally detect most improper frees
92 and reallocs. All this holds as long as the static bookkeeping
93 for malloc itself is not corrupted by some other means. This
94 is only one aspect of security -- these checks do not, and
95 cannot, detect all possible programming errors.
97 If FOOTERS is defined nonzero, then each allocated chunk
98 carries an additional check word to verify that it was malloced
99 from its space. These check words are the same within each
100 execution of a program using malloc, but differ across
101 executions, so externally crafted fake chunks cannot be
102 freed. This improves security by rejecting frees/reallocs that
103 could corrupt heap memory, in addition to the checks preventing
104 writes to statics that are always on. This may further improve
105 security at the expense of time and space overhead. (Note that
106 FOOTERS may also be worth using with MSPACES.)
108 By default detected errors cause the program to abort (calling
109 "abort()"). You can override this to instead proceed past
110 errors by defining PROCEED_ON_ERROR. In this case, a bad free
111 has no effect, and a malloc that encounters a bad address
112 caused by user overwrites will ignore the bad address by
113 dropping pointers and indices to all known memory. This may
114 be appropriate for programs that should continue if at all
115 possible in the face of programming errors, although they may
116 run out of memory because dropped memory is never reclaimed.
118 If you don't like either of these options, you can define
119 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
120 else. And if you are sure that your program using malloc has
121 no errors or vulnerabilities, you can define INSECURE to 1,
122 which might (or might not) provide a small performance improvement.
124 Thread-safety: NOT thread-safe unless USE_LOCKS defined
125 When USE_LOCKS is defined, each public call to malloc, free,
126 etc is surrounded with either a pthread mutex or a win32
127 spinlock (depending on WIN32). This is not especially fast, and
128 can be a major bottleneck. It is designed only to provide
129 minimal protection in concurrent environments, and to provide a
130 basis for extensions. If you are using malloc in a concurrent
131 program, consider instead using ptmalloc, which is derived from
132 a version of this malloc. (See http://www.malloc.de).
134 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
135 This malloc can use unix sbrk or any emulation (invoked using
136 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
137 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
138 memory. On most unix systems, it tends to work best if both
139 MORECORE and MMAP are enabled. On Win32, it uses emulations
140 based on VirtualAlloc. It also uses common C library functions
141 like memset.
143 Compliance: I believe it is compliant with the Single Unix Specification
144 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
145 others as well.
147 * Overview of algorithms
149 This is not the fastest, most space-conserving, most portable, or
150 most tunable malloc ever written. However it is among the fastest
151 while also being among the most space-conserving, portable and
152 tunable. Consistent balance across these factors results in a good
153 general-purpose allocator for malloc-intensive programs.
155 In most ways, this malloc is a best-fit allocator. Generally, it
156 chooses the best-fitting existing chunk for a request, with ties
157 broken in approximately least-recently-used order. (This strategy
158 normally maintains low fragmentation.) However, for requests less
159 than 256 bytes, it deviates from best-fit when there is not an
160 exactly fitting available chunk by preferring to use space adjacent
161 to that used for the previous small request, as well as by breaking
162 ties in approximately most-recently-used order. (These enhance
163 locality of series of small allocations.) And for very large requests
164 (>= 256Kb by default), it relies on system memory mapping
165 facilities, if supported. (This helps avoid carrying around and
166 possibly fragmenting memory used only for large chunks.)
168 All operations (except malloc_stats and mallinfo) have execution
169 times that are bounded by a constant factor of the number of bits in
170 a size_t, not counting any clearing in calloc or copying in realloc,
171 or actions surrounding MORECORE and MMAP that have times
172 proportional to the number of non-contiguous regions returned by
173 system allocation routines, which is often just 1.
175 The implementation is not very modular and seriously overuses
176 macros. Perhaps someday all C compilers will do as good a job
177 inlining modular code as can now be done by brute-force expansion,
178 but now, enough of them seem not to.
180 Some compilers issue a lot of warnings about code that is
181 dead/unreachable only on some platforms, and also about intentional
182 uses of negation on unsigned types. All known cases of each can be
183 ignored.
185 For a longer but out of date high-level description, see
186 http://gee.cs.oswego.edu/dl/html/malloc.html
188 * MSPACES
189 If MSPACES is defined, then in addition to malloc, free, etc.,
190 this file also defines mspace_malloc, mspace_free, etc. These
191 are versions of malloc routines that take an "mspace" argument
192 obtained using create_mspace, to control all internal bookkeeping.
193 If ONLY_MSPACES is defined, only these versions are compiled.
194 So if you would like to use this allocator for only some allocations,
195 and your system malloc for others, you can compile with
196 ONLY_MSPACES and then do something like...
197 static mspace mymspace = create_mspace(0,0); // for example
198 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
200 (Note: If you only need one instance of an mspace, you can instead
201 use "USE_DL_PREFIX" to relabel the global malloc.)
203 You can similarly create thread-local allocators by storing
204 mspaces as thread-locals. For example:
205 static __thread mspace tlms = 0;
206 void* tlmalloc(size_t bytes) {
207 if (tlms == 0) tlms = create_mspace(0, 0);
208 return mspace_malloc(tlms, bytes);
210 void tlfree(void* mem) { mspace_free(tlms, mem); }
212 Unless FOOTERS is defined, each mspace is completely independent.
213 You cannot allocate from one and free to another (although
214 conformance is only weakly checked, so usage errors are not always
215 caught). If FOOTERS is defined, then each chunk carries around a tag
216 indicating its originating mspace, and frees are directed to their
217 originating spaces.
219 ------------------------- Compile-time options ---------------------------
221 Be careful in setting #define values for numerical constants of type
222 size_t. On some systems, literal values are not automatically extended
223 to size_t precision unless they are explicitly cast.
225 WIN32 default: defined if _WIN32 defined
226 Defining WIN32 sets up defaults for MS environment and compilers.
227 Otherwise defaults are for unix.
229 MALLOC_ALIGNMENT default: (size_t)8
230 Controls the minimum alignment for malloc'ed chunks. It must be a
231 power of two and at least 8, even on machines for which smaller
232 alignments would suffice. It may be defined as larger than this
233 though. Note however that code and data structures are optimized for
234 the case of 8-byte alignment.
236 MSPACES default: 0 (false)
237 If true, compile in support for independent allocation spaces.
238 This is only supported if HAVE_MMAP is true.
240 ONLY_MSPACES default: 0 (false)
241 If true, only compile in mspace versions, not regular versions.
243 USE_LOCKS default: 0 (false)
244 Causes each call to each public routine to be surrounded with
245 pthread or WIN32 mutex lock/unlock. (If set true, this can be
246 overridden on a per-mspace basis for mspace versions.)
248 FOOTERS default: 0
249 If true, provide extra checking and dispatching by placing
250 information in the footers of allocated chunks. This adds
251 space and time overhead.
253 INSECURE default: 0
254 If true, omit checks for usage errors and heap space overwrites.
256 USE_DL_PREFIX default: NOT defined
257 Causes compiler to prefix all public routines with the string 'dl'.
258 This can be useful when you only want to use this malloc in one part
259 of a program, using your regular system malloc elsewhere.
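For example, a minimal sketch of mixing the two allocators (assuming
this file is compiled with USE_DL_PREFIX defined, as the mono build
above does); each pointer must be released by the allocator that
produced it:

  void* a = dlmalloc(1024);   // served by this allocator
  void* b = malloc(1024);     // served by the system allocator
  dlfree(a);
  free(b);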
261 ABORT default: defined as abort()
262 Defines how to abort on failed checks. On most systems, a failed
263 check cannot die with an "assert" or even print an informative
264 message, because the underlying print routines in turn call malloc,
265 which will fail again. Generally, the best policy is to simply call
266 abort(). It's not very useful to do more than this because many
267 errors due to overwriting will show up as address faults (null, odd
268 addresses etc) rather than malloc-triggered checks, so will also
269 abort. Also, most compilers know that abort() does not return, so
270 can better optimize code conditionally calling it.
272 PROCEED_ON_ERROR default: defined as 0 (false)
273 Controls whether detected bad addresses cause them to be bypassed
274 rather than aborting. If set, detected bad arguments to free and
275 realloc are ignored. And all bookkeeping information is zeroed out
276 upon a detected overwrite of freed heap space, thus losing the
277 ability to ever return it from malloc again, but enabling the
278 application to proceed. If PROCEED_ON_ERROR is defined, the
279 static variable malloc_corruption_error_count is compiled in
280 and can be examined to see if errors have occurred. This option
281 generates slower code than the default abort policy.
283 DEBUG default: NOT defined
284 The DEBUG setting is mainly intended for people trying to modify
285 this code or diagnose problems when porting to new platforms.
286 However, it may also be able to better isolate user errors than just
287 using runtime checks. The assertions in the check routines spell
288 out in more detail the assumptions and invariants underlying the
289 algorithms. The checking is fairly extensive, and will slow down
290 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
291 set will attempt to check every non-mmapped allocated and free chunk
292 in the course of computing the summaries.
294 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
295 Debugging assertion failures can be nearly impossible if your
296 version of the assert macro causes malloc to be called, which will
297 lead to a cascade of further failures, blowing the runtime stack.
298 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
299 which will usually make debugging easier.
301 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
302 The action to take before "return 0" when malloc cannot return
303 memory because none is available.
305 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
306 True if this system supports sbrk or an emulation of it.
308 MORECORE default: sbrk
309 The name of the sbrk-style system routine to call to obtain more
310 memory. See below for guidance on writing custom MORECORE
311 functions. The type of the argument to sbrk/MORECORE varies across
312 systems. It cannot be size_t, because it supports negative
313 arguments, so it is normally the signed type of the same width as
314 size_t (sometimes declared as "intptr_t"). It doesn't much matter
315 though. Internally, we only call it with arguments less than half
316 the max value of a size_t, which should work across all reasonable
317 possibilities, although sometimes generating compiler warnings. See
318 near the end of this file for guidelines for creating a custom
319 version of MORECORE.
321 MORECORE_CONTIGUOUS default: 1 (true)
322 If true, take advantage of fact that consecutive calls to MORECORE
323 with positive arguments always return contiguous increasing
324 addresses. This is true of unix sbrk. It does not hurt too much to
325 set it true anyway, since malloc copes with non-contiguities.
326 Setting it false when definitely non-contiguous saves time
327 and possibly wasted space it would take to discover this though.
329 MORECORE_CANNOT_TRIM default: NOT defined
330 True if MORECORE cannot release space back to the system when given
331 negative arguments. This is generally necessary only if you are
332 using a hand-crafted MORECORE function that cannot handle negative
333 arguments.
335 HAVE_MMAP default: 1 (true)
336 True if this system supports mmap or an emulation of it. If so, and
337 HAVE_MORECORE is not true, MMAP is used for all system
338 allocation. If set and HAVE_MORECORE is true as well, MMAP is
339 primarily used to directly allocate very large blocks. It is also
340 used as a backup strategy in cases where MORECORE fails to provide
341 space from system. Note: A single call to MUNMAP is assumed to be
342 able to unmap memory that may have been allocated using multiple calls
343 to MMAP, so long as they are adjacent.
345 HAVE_MREMAP default: 1 on linux and NetBSD, else 0
346 If true realloc() uses mremap() to re-allocate large blocks and
347 extend or shrink allocation spaces.
349 MMAP_CLEARS default: 1 on unix
350 True if mmap clears memory so calloc doesn't need to. This is true
351 for standard unix mmap using /dev/zero.
353 USE_BUILTIN_FFS default: 0 (i.e., not used)
354 Causes malloc to use the builtin ffs() function to compute indices.
355 Some compilers may recognize and intrinsify ffs to be faster than the
356 supplied C version. Also, the case of x86 using gcc is special-cased
357 to an asm instruction, so is already as fast as it can be, and so
358 this setting has no effect. (On most x86s, the asm version is only
359 slightly faster than the C version.)
361 malloc_getpagesize default: derive from system includes, or 4096.
362 The system page size. To the extent possible, this malloc manages
363 memory from the system in page-size units. This may be (and
364 usually is) a function rather than a constant. This is ignored
365 if WIN32, where page size is determined using getSystemInfo during
366 initialization.
368 USE_DEV_RANDOM default: 0 (i.e., not used)
369 Causes malloc to use /dev/random to initialize secure magic seed for
370 stamping footers. Otherwise, the current time is used.
372 NO_MALLINFO default: 0
373 If defined, don't compile "mallinfo". This can be a simple way
374 of dealing with mismatches between system declarations and
375 those in this file.
377 MALLINFO_FIELD_TYPE default: size_t
378 The type of the fields in the mallinfo struct. This was originally
379 defined as "int" in SVID etc, but is more usefully defined as
380 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
382 REALLOC_ZERO_BYTES_FREES default: not defined
383 This should be set if a call to realloc with zero bytes should
384 be the same as a call to free. Some people think it should. Otherwise,
385 since this malloc returns a unique pointer for malloc(0), so does
386 realloc(p, 0).
388 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
389 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
390 LACKS_STDLIB_H default: NOT defined unless on WIN32
391 Define these if your system does not have these header files.
392 You might need to manually insert some of the declarations they provide.
394 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
395 system_info.dwAllocationGranularity in WIN32,
396 otherwise 64K.
397 Also settable using mallopt(M_GRANULARITY, x)
398 The unit for allocating and deallocating memory from the system. On
399 most systems with contiguous MORECORE, there is no reason to
400 make this more than a page. However, systems with MMAP tend to
401 either require or encourage larger granularities. You can increase
402 this value to prevent system allocation functions from being called so
403 often, especially if they are slow. The value must be at least one
404 page and must be a power of two. Setting to 0 causes initialization
405 to either page size or win32 region size. (Note: In previous
406 versions of malloc, the equivalent of this option was called
407 "TOP_PAD")
409 DEFAULT_TRIM_THRESHOLD default: 2MB
410 Also settable using mallopt(M_TRIM_THRESHOLD, x)
411 The maximum amount of unused top-most memory to keep before
412 releasing via malloc_trim in free(). Automatic trimming is mainly
413 useful in long-lived programs using contiguous MORECORE. Because
414 trimming via sbrk can be slow on some systems, and can sometimes be
415 wasteful (in cases where programs immediately afterward allocate
416 more large chunks) the value should be high enough so that your
417 overall system performance would improve by releasing this much
418 memory. As a rough guide, you might set to a value close to the
419 average size of a process (program) running on your system.
420 Releasing this much memory would allow such a process to run in
421 memory. Generally, it is worth tuning trim thresholds when a
422 program undergoes phases where several large chunks are allocated
423 and released in ways that can reuse each other's storage, perhaps
424 mixed with phases where there are no such chunks at all. The trim
425 value must be greater than page size to have any useful effect. To
426 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
427 some people use of mallocing a huge space and then freeing it at
428 program startup, in an attempt to reserve system memory, doesn't
429 have the intended effect under automatic trimming, since that memory
430 will immediately be returned to the system.
432 DEFAULT_MMAP_THRESHOLD default: 256K
433 Also settable using mallopt(M_MMAP_THRESHOLD, x)
434 The request size threshold for using MMAP to directly service a
435 request. Requests of at least this size that cannot be allocated
436 using already-existing space will be serviced via mmap. (If enough
437 normal freed space already exists it is used instead.) Using mmap
438 segregates relatively large chunks of memory so that they can be
439 individually obtained and released from the host system. A request
440 serviced through mmap is never reused by any other request (at least
441 not directly; the system may just so happen to remap successive
442 requests to the same locations). Segregating space in this way has
443 the benefits that: Mmapped space can always be individually released
444 back to the system, which helps keep the system level memory demands
445 of a long-lived program low. Also, mapped memory doesn't become
446 `locked' between other chunks, as can happen with normally allocated
447 chunks, which means that even trimming via malloc_trim would not
448 release them. However, it has the disadvantage that the space
449 cannot be reclaimed, consolidated, and then used to service later
450 requests, as happens with normal chunks. The advantages of mmap
451 nearly always outweigh disadvantages for "large" chunks, but the
452 value of "large" may vary across systems. The default is an
453 empirically derived value that works well in most systems. You can
454 disable mmap by setting to MAX_SIZE_T.
458 #ifndef WIN32
459 #ifdef _WIN32
460 #define WIN32 1
461 #endif /* _WIN32 */
462 #endif /* WIN32 */
463 #ifdef WIN32
464 #define WIN32_LEAN_AND_MEAN
465 #include <windows.h>
466 #define HAVE_MMAP 1
467 #define HAVE_MORECORE 0
468 #define LACKS_UNISTD_H
469 #define LACKS_SYS_PARAM_H
470 #define LACKS_SYS_MMAN_H
471 #define LACKS_STRING_H
472 #define LACKS_STRINGS_H
473 #define LACKS_SYS_TYPES_H
474 #define LACKS_ERRNO_H
475 #define MALLOC_FAILURE_ACTION
476 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
477 #endif /* WIN32 */
479 #if defined(DARWIN) || defined(_DARWIN)
480 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
481 #ifndef HAVE_MORECORE
482 #define HAVE_MORECORE 0
483 #define HAVE_MMAP 1
484 #endif /* HAVE_MORECORE */
485 #endif /* DARWIN */
487 #if defined(__native_client__)
488 #undef HAVE_MMAP
489 #undef HAVE_MREMAP
490 #define HAVE_MMAP 0
491 #define HAVE_MREMAP 0
492 #endif
494 #ifndef LACKS_SYS_TYPES_H
495 #include <sys/types.h> /* For size_t */
496 #endif /* LACKS_SYS_TYPES_H */
498 /* The maximum possible size_t value has all bits set */
499 #define MAX_SIZE_T (~(size_t)0)
501 #ifndef ONLY_MSPACES
502 #define ONLY_MSPACES 0
503 #endif /* ONLY_MSPACES */
504 #ifndef MSPACES
505 #if ONLY_MSPACES
506 #define MSPACES 1
507 #else /* ONLY_MSPACES */
508 #define MSPACES 0
509 #endif /* ONLY_MSPACES */
510 #endif /* MSPACES */
511 #ifndef MALLOC_ALIGNMENT
512 #define MALLOC_ALIGNMENT ((size_t)8U)
513 #endif /* MALLOC_ALIGNMENT */
514 #ifndef FOOTERS
515 #define FOOTERS 0
516 #endif /* FOOTERS */
517 #ifndef ABORT
518 #define ABORT abort()
519 #endif /* ABORT */
520 #ifndef ABORT_ON_ASSERT_FAILURE
521 #define ABORT_ON_ASSERT_FAILURE 1
522 #endif /* ABORT_ON_ASSERT_FAILURE */
523 #ifndef PROCEED_ON_ERROR
524 #define PROCEED_ON_ERROR 0
525 #endif /* PROCEED_ON_ERROR */
526 #ifndef USE_LOCKS
527 #define USE_LOCKS 0
528 #endif /* USE_LOCKS */
529 #ifndef INSECURE
530 #define INSECURE 0
531 #endif /* INSECURE */
532 #ifndef HAVE_MMAP
533 #define HAVE_MMAP 1
534 #endif /* HAVE_MMAP */
535 #ifndef MMAP_CLEARS
536 #define MMAP_CLEARS 1
537 #endif /* MMAP_CLEARS */
538 #ifndef HAVE_MREMAP
539 #if defined(linux) || defined(__NetBSD__)
540 #define HAVE_MREMAP 1
541 #else /* linux || __NetBSD__ */
542 #define HAVE_MREMAP 0
543 #endif /* linux || __NetBSD__ */
544 #endif /* HAVE_MREMAP */
545 #ifndef MALLOC_FAILURE_ACTION
546 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
547 #endif /* MALLOC_FAILURE_ACTION */
548 #ifndef HAVE_MORECORE
549 #if ONLY_MSPACES
550 #define HAVE_MORECORE 0
551 #else /* ONLY_MSPACES */
552 #define HAVE_MORECORE 1
553 #endif /* ONLY_MSPACES */
554 #endif /* HAVE_MORECORE */
555 #if !HAVE_MORECORE
556 #define MORECORE_CONTIGUOUS 0
557 #else /* !HAVE_MORECORE */
558 #ifndef MORECORE
559 #define MORECORE sbrk
560 #endif /* MORECORE */
561 #ifndef MORECORE_CONTIGUOUS
562 #define MORECORE_CONTIGUOUS 1
563 #endif /* MORECORE_CONTIGUOUS */
564 #endif /* HAVE_MORECORE */
565 #ifndef DEFAULT_GRANULARITY
566 #if MORECORE_CONTIGUOUS
567 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
568 #else /* MORECORE_CONTIGUOUS */
569 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
570 #endif /* MORECORE_CONTIGUOUS */
571 #endif /* DEFAULT_GRANULARITY */
572 #ifndef DEFAULT_TRIM_THRESHOLD
573 #ifndef MORECORE_CANNOT_TRIM
574 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
575 #else /* MORECORE_CANNOT_TRIM */
576 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
577 #endif /* MORECORE_CANNOT_TRIM */
578 #endif /* DEFAULT_TRIM_THRESHOLD */
579 #ifndef DEFAULT_MMAP_THRESHOLD
580 #if HAVE_MMAP
581 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
582 #else /* HAVE_MMAP */
583 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
584 #endif /* HAVE_MMAP */
585 #endif /* DEFAULT_MMAP_THRESHOLD */
586 #ifndef USE_BUILTIN_FFS
587 #define USE_BUILTIN_FFS 0
588 #endif /* USE_BUILTIN_FFS */
589 #ifndef USE_DEV_RANDOM
590 #define USE_DEV_RANDOM 0
591 #endif /* USE_DEV_RANDOM */
592 #ifndef NO_MALLINFO
593 #define NO_MALLINFO 0
594 #endif /* NO_MALLINFO */
595 #ifndef MALLINFO_FIELD_TYPE
596 #define MALLINFO_FIELD_TYPE size_t
597 #endif /* MALLINFO_FIELD_TYPE */
600 mallopt tuning options. SVID/XPG defines four standard parameter
601 numbers for mallopt, normally defined in malloc.h. None of these
602 are used in this malloc, so setting them has no effect. But this
603 malloc does support the following options.
606 #define M_TRIM_THRESHOLD (-1)
607 #define M_GRANULARITY (-2)
608 #define M_MMAP_THRESHOLD (-3)
610 /* ------------------------ Mallinfo declarations ------------------------ */
612 #if !NO_MALLINFO
614 This version of malloc supports the standard SVID/XPG mallinfo
615 routine that returns a struct containing usage properties and
616 statistics. It should work on any system that has a
617 /usr/include/malloc.h defining struct mallinfo. The main
618 declaration needed is the mallinfo struct that is returned (by-copy)
619 by mallinfo(). The mallinfo struct contains a bunch of fields that
620 are not even meaningful in this version of malloc. These fields
621 are instead filled by mallinfo() with other numbers that might be of
622 interest.
624 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
625 /usr/include/malloc.h file that includes a declaration of struct
626 mallinfo. If so, it is included; else a compliant version is
627 declared below. These must be precisely the same for mallinfo() to
628 work. The original SVID version of this struct, defined on most
629 systems with mallinfo, declares all fields as ints. But some others
630 define them as unsigned long. If your system defines the fields using a
631 type of different width than listed here, you MUST #include your
632 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
635 /* #define HAVE_USR_INCLUDE_MALLOC_H */
637 #ifdef HAVE_USR_INCLUDE_MALLOC_H
638 #include "/usr/include/malloc.h"
639 #else /* HAVE_USR_INCLUDE_MALLOC_H */
641 struct mallinfo {
642 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
643 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
644 MALLINFO_FIELD_TYPE smblks; /* always 0 */
645 MALLINFO_FIELD_TYPE hblks; /* always 0 */
646 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
647 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
648 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
649 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
650 MALLINFO_FIELD_TYPE fordblks; /* total free space */
651 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
654 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
655 #endif /* NO_MALLINFO */
657 #ifdef __cplusplus
658 extern "C" {
659 #endif /* __cplusplus */
661 #if !ONLY_MSPACES
663 /* ------------------- Declarations of public routines ------------------- */
665 #ifndef USE_DL_PREFIX
666 #define dlcalloc calloc
667 #define dlfree free
668 #define dlmalloc malloc
669 #define dlmemalign memalign
670 #define dlrealloc realloc
671 #define dlvalloc valloc
672 #define dlpvalloc pvalloc
673 #define dlmallinfo mallinfo
674 #define dlmallopt mallopt
675 #define dlmalloc_trim malloc_trim
676 #define dlmalloc_stats malloc_stats
677 #define dlmalloc_usable_size malloc_usable_size
678 #define dlmalloc_footprint malloc_footprint
679 #define dlmalloc_max_footprint malloc_max_footprint
680 #define dlindependent_calloc independent_calloc
681 #define dlindependent_comalloc independent_comalloc
682 #endif /* USE_DL_PREFIX */
686 malloc(size_t n)
687 Returns a pointer to a newly allocated chunk of at least n bytes, or
688 null if no space is available, in which case errno is set to ENOMEM
689 on ANSI C systems.
691 If n is zero, malloc returns a minimum-sized chunk. (The minimum
692 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
693 systems.) Note that size_t is an unsigned type, so calls with
694 arguments that would be negative if signed are interpreted as
695 requests for huge amounts of space, which will often fail. The
696 maximum supported value of n differs across systems, but is in all
697 cases less than the maximum representable value of a size_t.
699 void* dlmalloc(size_t);
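A brief usage sketch (error handling here is illustrative only):

  void* buf = dlmalloc(4096);
  if (buf == 0) {
    // allocation failed; on ANSI systems errno has been set to ENOMEM
  }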
702 free(void* p)
703 Releases the chunk of memory pointed to by p, that had been previously
704 allocated using malloc or a related routine such as realloc.
705 It has no effect if p is null. If p was not malloced or already
706 freed, free(p) will by default cause the current program to abort.
708 void dlfree(void*);
711 calloc(size_t n_elements, size_t element_size);
712 Returns a pointer to n_elements * element_size bytes, with all locations
713 set to zero.
715 void* dlcalloc(size_t, size_t);
718 realloc(void* p, size_t n)
719 Returns a pointer to a chunk of size n that contains the same data
720 as does chunk p up to the minimum of (n, p's size) bytes, or null
721 if no space is available.
723 The returned pointer may or may not be the same as p. The algorithm
724 prefers extending p in most cases when possible, otherwise it
725 employs the equivalent of a malloc-copy-free sequence.
727 If p is null, realloc is equivalent to malloc.
729 If space is not available, realloc returns null, errno is set (if on
730 ANSI) and p is NOT freed.
732 If n is for fewer bytes than already held by p, the newly unused
733 space is lopped off and freed if possible. realloc with a size
734 argument of zero (re)allocates a minimum-sized chunk.
736 The old unix realloc convention of allowing the last-free'd chunk
737 to be used as an argument to realloc is not supported.
740 void* dlrealloc(void*, size_t);
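Because a failed realloc leaves p intact, the usual safe pattern is to
assign through a temporary. A sketch (handle_oom is a hypothetical
error handler; buf and newsize are assumed to exist already):

  void* bigger = dlrealloc(buf, newsize);
  if (bigger != 0)
    buf = bigger;      // success: adopt the possibly-moved chunk
  else
    handle_oom();      // failure: buf is still valid and still owned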
743 memalign(size_t alignment, size_t n);
744 Returns a pointer to a newly allocated chunk of n bytes, aligned
745 in accord with the alignment argument.
747 The alignment argument should be a power of two. If the argument is
748 not a power of two, the nearest greater power is used.
749 8-byte alignment is guaranteed by normal malloc calls, so don't
750 bother calling memalign with an argument of 8 or less.
752 Overreliance on memalign is a sure way to fragment space.
754 void* dlmemalign(size_t, size_t);
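For example (a sketch), to obtain a chunk aligned to a 4096-byte
boundary:

  void* p = dlmemalign(4096, 8192);   // 8K usable, 4K-aligned
  dlfree(p);                          // freed like any other chunk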
757 valloc(size_t n);
758 Equivalent to memalign(pagesize, n), where pagesize is the page
759 size of the system. If the pagesize is unknown, 4096 is used.
761 void* dlvalloc(size_t);
764 mallopt(int parameter_number, int parameter_value)
765 Sets tunable parameters. The format is to provide a
766 (parameter-number, parameter-value) pair. mallopt then sets the
767 corresponding parameter to the argument value if it can (i.e., so
768 long as the value is meaningful), and returns 1 if successful else
769 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
770 normally defined in malloc.h. None of these are used in this malloc,
771 so setting them has no effect. But this malloc also supports other
772 options in mallopt. See below for details. Briefly, supported
773 parameters are as follows (listed defaults are for "typical"
774 configurations).
776 Symbol            param #   default       allowed param values
777 M_TRIM_THRESHOLD     -1    2*1024*1024    any   (MAX_SIZE_T disables)
778 M_GRANULARITY        -2    page size      any power of 2 >= page size
779 M_MMAP_THRESHOLD     -3    256*1024       any   (or 0 if no MMAP support)
781 int dlmallopt(int, int);
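For example, a sketch of typical tuning calls (the values shown are
illustrative, not recommendations):

  dlmallopt(M_TRIM_THRESHOLD, 1024*1024);  // trim when >= 1MB unused at top
  dlmallopt(M_MMAP_THRESHOLD, 512*1024);   // mmap requests of 512K and up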
784 malloc_footprint();
785 Returns the number of bytes obtained from the system. The total
786 number of bytes allocated by malloc, realloc etc., is less than this
787 value. Unlike mallinfo, this function returns only a precomputed
788 result, so can be called frequently to monitor memory consumption.
789 Even if locks are otherwise defined, this function does not use them,
790 so results might not be up to date.
792 size_t dlmalloc_footprint(void);
795 malloc_max_footprint();
796 Returns the maximum number of bytes obtained from the system. This
797 value will be greater than current footprint if deallocated space
798 has been reclaimed by the system. The peak number of bytes allocated
799 by malloc, realloc etc., is less than this value. Unlike mallinfo,
800 this function returns only a precomputed result, so can be called
801 frequently to monitor memory consumption. Even if locks are
802 otherwise defined, this function does not use them, so results might
803 not be up to date.
805 size_t dlmalloc_max_footprint(void);
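A monitoring sketch using both calls (assumes <stdio.h> for fprintf):

  size_t now  = dlmalloc_footprint();
  size_t peak = dlmalloc_max_footprint();
  fprintf(stderr, "heap footprint: %lu now, %lu peak\n",
          (unsigned long)now, (unsigned long)peak);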
807 #if !NO_MALLINFO
809 mallinfo()
810 Returns (by copy) a struct containing various summary statistics:
812 arena: current total non-mmapped bytes allocated from system
813 ordblks: the number of free chunks
814 smblks: always zero.
815 hblks: current number of mmapped regions
816 hblkhd: total bytes held in mmapped regions
817 usmblks: the maximum total allocated space. This will be greater
818 than current total if trimming has occurred.
819 fsmblks: always zero
820 uordblks: current total allocated space (normal or mmapped)
821 fordblks: total free space
822 keepcost: the maximum number of bytes that could ideally be released
823 back to system via malloc_trim. ("ideally" means that
824 it ignores page restrictions etc.)
826 Because these fields are ints, but internal bookkeeping may
827 be kept as longs, the reported values may wrap around zero and
828 thus be inaccurate.
830 struct mallinfo dlmallinfo(void);
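For example (a sketch; the fields have type MALLINFO_FIELD_TYPE, which
defaults to size_t in this file, and fprintf assumes <stdio.h>):

  struct mallinfo mi = dlmallinfo();
  fprintf(stderr, "in use: %lu free: %lu\n",
          (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);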
831 #endif /* NO_MALLINFO */
834 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
836 independent_calloc is similar to calloc, but instead of returning a
837 single cleared space, it returns an array of pointers to n_elements
838 independent elements that can hold contents of size elem_size, each
839 of which starts out cleared, and can be independently freed,
840 realloc'ed etc. The elements are guaranteed to be adjacently
841 allocated (this is not guaranteed to occur with multiple callocs or
842 mallocs), which may also improve cache locality in some
843 applications.
845 The "chunks" argument is optional (i.e., may be null, which is
846 probably the most typical usage). If it is null, the returned array
847 is itself dynamically allocated and should also be freed when it is
848 no longer needed. Otherwise, the chunks array must be of at least
849 n_elements in length. It is filled in with the pointers to the
850 chunks.
852 In either case, independent_calloc returns this pointer array, or
853 null if the allocation failed. If n_elements is zero and "chunks"
854 is null, it returns a chunk representing an array with zero elements
855 (which should be freed if not wanted).
857 Each element must be individually freed when it is no longer
858 needed. If you'd like to instead be able to free all at once, you
859 should instead use regular calloc and assign pointers into this
860 space to represent elements. (In this case though, you cannot
861 independently free elements.)
863 independent_calloc simplifies and speeds up implementations of many
864 kinds of pools. It may also be useful when constructing large data
865 structures that initially have a fixed number of fixed-sized nodes,
866 but the number is not known at compile time, and some of the nodes
867 may later need to be freed. For example:
869 struct Node { int item; struct Node* next; };
871 struct Node* build_list() {
872 struct Node** pool;
873 int n = read_number_of_nodes_needed();
874 if (n <= 0) return 0;
875 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
876 if (pool == 0) die();
877 // organize into a linked list...
878 struct Node* first = pool[0];
879 for (int i = 0; i < n-1; ++i)
880 pool[i]->next = pool[i+1];
881 free(pool); // Can now free the array (or not, if it is needed later)
882 return first;
885 void** dlindependent_calloc(size_t, size_t, void**);
888 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
890 independent_comalloc allocates, all at once, a set of n_elements
891 chunks with sizes indicated in the "sizes" array. It returns
892 an array of pointers to these elements, each of which can be
893 independently freed, realloc'ed etc. The elements are guaranteed to
894 be adjacently allocated (this is not guaranteed to occur with
895 multiple callocs or mallocs), which may also improve cache locality
896 in some applications.
898 The "chunks" argument is optional (i.e., may be null). If it is null
899 the returned array is itself dynamically allocated and should also
900 be freed when it is no longer needed. Otherwise, the chunks array
901 must be of at least n_elements in length. It is filled in with the
902 pointers to the chunks.
904 In either case, independent_comalloc returns this pointer array, or
905 null if the allocation failed. If n_elements is zero and chunks is
906 null, it returns a chunk representing an array with zero elements
907 (which should be freed if not wanted).
909 Each element must be individually freed when it is no longer
910 needed. If you'd like to instead be able to free all at once, you
911 should instead use a single regular malloc, and assign pointers at
912 particular offsets in the aggregate space. (In this case though, you
913 cannot independently free elements.)
915 independent_comalloc differs from independent_calloc in that each
916 element may have a different size, and also that it does not
917 automatically clear elements.
919 independent_comalloc can be used to speed up allocation in cases
920 where several structs or objects must always be allocated at the
921 same time. For example:
923 struct Head { ... }
924 struct Foot { ... }
926 void send_message(char* msg) {
927 int msglen = strlen(msg);
928 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
929 void* chunks[3];
930 if (independent_comalloc(3, sizes, chunks) == 0)
931 die();
932 struct Head* head = (struct Head*)(chunks[0]);
933 char* body = (char*)(chunks[1]);
934 struct Foot* foot = (struct Foot*)(chunks[2]);
935 // ...
938 In general though, independent_comalloc is worth using only for
939 larger values of n_elements. For small values, you probably won't
940 detect enough difference from series of malloc calls to bother.
942 Overuse of independent_comalloc can increase overall memory usage,
943 since it cannot reuse existing noncontiguous small chunks that
944 might be available for some of the elements.
946 void** dlindependent_comalloc(size_t, size_t*, void**);
950 pvalloc(size_t n);
951 Equivalent to valloc(minimum-page-that-holds(n)), that is,
952 round up n to nearest pagesize.
954 void* dlpvalloc(size_t);
957 malloc_trim(size_t pad);
959 If possible, gives memory back to the system (via negative arguments
960 to sbrk) if there is unused memory at the `high' end of the malloc
961 pool or in unused MMAP segments. You can call this after freeing
962 large blocks of memory to potentially reduce the system-level memory
963 requirements of a program. However, it cannot guarantee to reduce
964 memory. Under some allocation patterns, some large free blocks of
965 memory will be locked between two used chunks, so they cannot be
966 given back to the system.
968 The `pad' argument to malloc_trim represents the amount of free
969 trailing space to leave untrimmed. If this argument is zero, only
970 the minimum amount of memory to maintain internal data structures
971 will be left. Non-zero arguments can be supplied to maintain enough
972 trailing space to service future expected allocations without having
973 to re-obtain memory from the system.
975 Malloc_trim returns 1 if it actually released any memory, else 0.
977 int dlmalloc_trim(size_t);
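For example, a sketch of trimming after releasing a large working set
(big_buffer stands for some previously malloc'ed block):

  dlfree(big_buffer);
  int released = dlmalloc_trim(64 * 1024);  // keep ~64K of slack
  // released is 1 if any memory went back to the system, else 0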
980 malloc_usable_size(void* p);
982 Returns the number of bytes you can actually use in
983 an allocated chunk, which may be more than you requested (although
984 often not) due to alignment and minimum size constraints.
985 You can use this many bytes without worrying about
986 overwriting other allocated objects. This is not a particularly great
987 programming practice. malloc_usable_size can be more useful in
988 debugging and assertions, for example:
990 p = malloc(n);
991 assert(malloc_usable_size(p) >= 256);
993 size_t dlmalloc_usable_size(void*);
996 malloc_stats();
997 Prints on stderr the amount of space obtained from the system (both
998 via sbrk and mmap), the maximum amount (which may be more than
999 current if malloc_trim and/or munmap got called), and the current
1000 number of bytes allocated via malloc (or realloc, etc) but not yet
1001 freed. Note that this is the number of bytes allocated, not the
1002 number requested. It will be larger than the number requested
1003 because of alignment and bookkeeping overhead. Because it includes
1004 alignment wastage as being in use, this figure may be greater than
1005 zero even when no user-level chunks are allocated.
1007 The reported current and maximum system memory can be inaccurate if
1008 a program makes other calls to system memory allocation functions
1009 (normally sbrk) outside of malloc.
1011 malloc_stats prints only the most commonly interesting statistics.
1012 More information can be obtained by calling mallinfo.
1014 void dlmalloc_stats(void);
1016 #endif /* ONLY_MSPACES */
1018 #if MSPACES
1021 mspace is an opaque type representing an independent
1022 region of space that supports mspace_malloc, etc.
1024 typedef void* mspace;
1027 create_mspace creates and returns a new independent space with the
1028 given initial capacity, or, if 0, the default granularity size. It
1029 returns null if there is no system memory available to create the
1030 space. If argument locked is non-zero, the space uses a separate
1031 lock to control access. The capacity of the space will grow
1032 dynamically as needed to service mspace_malloc requests. You can
1033 control the sizes of incremental increases of this space by
1034 compiling with a different DEFAULT_GRANULARITY or dynamically
1035 setting with mallopt(M_GRANULARITY, value).
1037 mspace create_mspace(size_t capacity, int locked);
1040 destroy_mspace destroys the given space, and attempts to return all
1041 of its memory back to the system, returning the total number of
1042 bytes freed. After destruction, the results of access to all memory
1043 used by the space become undefined.
1045 size_t destroy_mspace(mspace msp);
1048 create_mspace_with_base uses the memory supplied as the initial base
1049 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1050 space is used for bookkeeping, so the capacity must be at least this
1051 large. (Otherwise 0 is returned.) When this initial space is
1052 exhausted, additional memory will be obtained from the system.
1053 Destroying this space will deallocate all additionally allocated
1054 space (if possible) but not the initial base.
1056 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1059 mspace_malloc behaves as malloc, but operates within
1060 the given space.
1062 void* mspace_malloc(mspace msp, size_t bytes);
1065 mspace_free behaves as free, but operates within
1066 the given space.
1068 If compiled with FOOTERS==1, mspace_free is not actually needed.
1069 free may be called instead of mspace_free because freed chunks from
1070 any space are handled by their originating spaces.
1072 void mspace_free(mspace msp, void* mem);
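A lifecycle sketch tying these routines together:

  mspace arena = create_mspace(0, 1);      // default capacity, locked
  if (arena != 0) {
    void* obj = mspace_malloc(arena, 128);
    mspace_free(arena, obj);
    destroy_mspace(arena);                 // releases all remaining space
  }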
1075 mspace_realloc behaves as realloc, but operates within
1076 the given space.
1078 If compiled with FOOTERS==1, mspace_realloc is not actually
1079 needed. realloc may be called instead of mspace_realloc because
1080 realloced chunks from any space are handled by their originating
1081 spaces.
1083 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1086 mspace_calloc behaves as calloc, but operates within
1087 the given space.
1089 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1092 mspace_memalign behaves as memalign, but operates within
1093 the given space.
1095 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1098 mspace_independent_calloc behaves as independent_calloc, but
1099 operates within the given space.
1101 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1102 size_t elem_size, void* chunks[]);
1105 mspace_independent_comalloc behaves as independent_comalloc, but
1106 operates within the given space.
1108 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1109 size_t sizes[], void* chunks[]);
1112 mspace_footprint() returns the number of bytes obtained from the
1113 system for this space.
1115 size_t mspace_footprint(mspace msp);
1118 mspace_max_footprint() returns the peak number of bytes obtained from the
1119 system for this space.
1121 size_t mspace_max_footprint(mspace msp);
1124 #if !NO_MALLINFO
1126 mspace_mallinfo behaves as mallinfo, but reports properties of
1127 the given space.
1129 struct mallinfo mspace_mallinfo(mspace msp);
1130 #endif /* NO_MALLINFO */
1133 mspace_malloc_stats behaves as malloc_stats, but reports
1134 properties of the given space.
1136 void mspace_malloc_stats(mspace msp);
1139 mspace_trim behaves as malloc_trim, but
1140 operates within the given space.
1142 int mspace_trim(mspace msp, size_t pad);
1145 An alias for mallopt.
1147 int mspace_mallopt(int, int);
1149 #endif /* MSPACES */
1151 #ifdef __cplusplus
1152 }; /* end of extern "C" */
1153 #endif /* __cplusplus */
1156 ========================================================================
1157 To make a fully customizable malloc.h header file, cut everything
1158 above this line, put into file malloc.h, edit to suit, and #include it
1159 on the next line, as well as in programs that use this malloc.
1160 ========================================================================
1163 /* #include "malloc.h" */
1165 /*------------------------------ internal #includes ---------------------- */
1167 #ifdef _MSC_VER
1168 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1169 #endif /* _MSC_VER */
1171 #include <stdio.h> /* for printing in malloc_stats */
1173 #ifndef LACKS_ERRNO_H
1174 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1175 #endif /* LACKS_ERRNO_H */
1176 #if FOOTERS
1177 #include <time.h> /* for magic initialization */
1178 #endif /* FOOTERS */
1179 #ifndef LACKS_STDLIB_H
1180 #include <stdlib.h> /* for abort() */
1181 #endif /* LACKS_STDLIB_H */
1182 #ifdef DEBUG
1183 #if ABORT_ON_ASSERT_FAILURE
1184 #define assert(x) if(!(x)) ABORT
1185 #else /* ABORT_ON_ASSERT_FAILURE */
1186 #include <assert.h>
1187 #endif /* ABORT_ON_ASSERT_FAILURE */
1188 #else /* DEBUG */
1189 #define assert(x)
1190 #endif /* DEBUG */
1191 #ifndef LACKS_STRING_H
1192 #include <string.h> /* for memset etc */
1193 #endif /* LACKS_STRING_H */
1194 #if USE_BUILTIN_FFS
1195 #ifndef LACKS_STRINGS_H
1196 #include <strings.h> /* for ffs */
1197 #endif /* LACKS_STRINGS_H */
1198 #endif /* USE_BUILTIN_FFS */
1199 #if HAVE_MMAP
1200 #ifndef LACKS_SYS_MMAN_H
1201 #include <sys/mman.h> /* for mmap */
1202 #endif /* LACKS_SYS_MMAN_H */
1203 #ifndef LACKS_FCNTL_H
1204 #include <fcntl.h>
1205 #endif /* LACKS_FCNTL_H */
1206 #endif /* HAVE_MMAP */
1207 #if HAVE_MORECORE
1208 #ifndef LACKS_UNISTD_H
1209 #include <unistd.h> /* for sbrk */
1210 #else /* LACKS_UNISTD_H */
1211 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1212 extern void* sbrk(ptrdiff_t);
1213 #endif /* FreeBSD etc */
1214 #endif /* LACKS_UNISTD_H */
1215 #endif /* HAVE_MORECORE */
1217 #ifndef WIN32
1218 #ifndef malloc_getpagesize
1219 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1220 # ifndef _SC_PAGE_SIZE
1221 # define _SC_PAGE_SIZE _SC_PAGESIZE
1222 # endif
1223 # endif
1224 # ifdef _SC_PAGE_SIZE
1225 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1226 # else
1227 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1228 extern size_t getpagesize();
1229 # define malloc_getpagesize getpagesize()
1230 # else
1231 # ifdef WIN32 /* use supplied emulation of getpagesize */
1232 # define malloc_getpagesize getpagesize()
1233 # else
1234 # ifndef LACKS_SYS_PARAM_H
1235 # include <sys/param.h>
1236 # endif
1237 # ifdef EXEC_PAGESIZE
1238 # define malloc_getpagesize EXEC_PAGESIZE
1239 # else
1240 # ifdef NBPG
1241 # ifndef CLSIZE
1242 # define malloc_getpagesize NBPG
1243 # else
1244 # define malloc_getpagesize (NBPG * CLSIZE)
1245 # endif
1246 # else
1247 # ifdef NBPC
1248 # define malloc_getpagesize NBPC
1249 # else
1250 # ifdef PAGESIZE
1251 # define malloc_getpagesize PAGESIZE
1252 # else /* just guess */
1253 # define malloc_getpagesize ((size_t)4096U)
1254 # endif
1255 # endif
1256 # endif
1257 # endif
1258 # endif
1259 # endif
1260 # endif
1261 #endif
1262 #endif
1264 /* ------------------- size_t and alignment properties -------------------- */
1266 /* The byte and bit size of a size_t */
1267 #define SIZE_T_SIZE (sizeof(size_t))
1268 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1270 /* Some constants coerced to size_t */
1271 /* Annoying but necessary to avoid errors on some platforms */
1272 #define SIZE_T_ZERO ((size_t)0)
1273 #define SIZE_T_ONE ((size_t)1)
1274 #define SIZE_T_TWO ((size_t)2)
1275 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1276 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1277 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1278 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1280 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1281 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1283 /* True if address a has acceptable alignment */
1284 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1286 /* the number of bytes to offset an address to align it */
1287 #define align_offset(A)\
1288 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1289 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
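/*
  Illustrative sketch: align_offset yields the number of bytes needed to
  bump an arbitrary address up to the next MALLOC_ALIGNMENT boundary:

    char buf[64];
    char* p = buf + 3;                // deliberately misaligned
    char* q = p + align_offset(p);    // is_aligned(q) now holds
*/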
1291 /* -------------------------- MMAP preliminaries ------------------------- */
1294 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1295 checks to fail so the compiler optimizer can delete code rather than
1296 using so many "#if"s.
1300 /* MORECORE and MMAP must return MFAIL on failure */
1301 #define MFAIL ((void*)(MAX_SIZE_T))
1302 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1304 #if !HAVE_MMAP
1305 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1306 #define USE_MMAP_BIT (SIZE_T_ZERO)
1307 #define CALL_MMAP(s) MFAIL
1308 #define CALL_MUNMAP(a, s) (-1)
1309 #define DIRECT_MMAP(s) MFAIL
1311 #else /* HAVE_MMAP */
1312 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1313 #define USE_MMAP_BIT (SIZE_T_ONE)
1315 #ifndef WIN32
1316 #define CALL_MUNMAP(a, s) munmap((a), (s))
1317 #define MMAP_PROT (PROT_READ|PROT_WRITE|PROT_EXEC)
1318 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1319 #define MAP_ANONYMOUS MAP_ANON
1320 #endif /* MAP_ANON */
1321 #ifdef MAP_ANONYMOUS
1322 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1323 #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1324 #else /* MAP_ANONYMOUS */
1326 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1327 is unlikely to be needed, but is supplied just in case.
1329 #define MMAP_FLAGS (MAP_PRIVATE)
1330 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1331 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1332 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1333 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1334 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1335 #endif /* MAP_ANONYMOUS */
1337 #define DIRECT_MMAP(s) CALL_MMAP(s)
1338 #else /* WIN32 */
1340 /* Win32 MMAP via VirtualAlloc */
1341 static void* win32mmap(size_t size) {
1342 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE);
1343 return (ptr != 0)? ptr: MFAIL;
1346 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1347 static void* win32direct_mmap(size_t size) {
1348 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1349 PAGE_EXECUTE_READWRITE);
1350 return (ptr != 0)? ptr: MFAIL;
1353 /* This function supports releasing coalesced segments */
1354 static int win32munmap(void* ptr, size_t size) {
1355 MEMORY_BASIC_INFORMATION minfo;
1356 char* cptr = ptr;
1357 while (size) {
1358 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1359 return -1;
1360 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1361 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1362 return -1;
1363 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1364 return -1;
1365 cptr += minfo.RegionSize;
1366 size -= minfo.RegionSize;
1368 return 0;
1371 #define CALL_MMAP(s) win32mmap(s)
1372 #define CALL_MUNMAP(a, s) win32munmap((a), (s))
1373 #define DIRECT_MMAP(s) win32direct_mmap(s)
1374 #endif /* WIN32 */
1375 #endif /* HAVE_MMAP */
1377 #if HAVE_MMAP && HAVE_MREMAP
1378 #if defined(linux)
1379 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1380 #elif defined(__NetBSD__)
1381 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (addr), (nsz), (mv))
1382 #else
1383 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1384 #endif
1385 #else /* HAVE_MMAP && HAVE_MREMAP */
1386 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1387 #endif /* HAVE_MMAP && HAVE_MREMAP */
1389 #if HAVE_MORECORE
1390 #define CALL_MORECORE(S) MORECORE(S)
1391 #else /* HAVE_MORECORE */
1392 #define CALL_MORECORE(S) MFAIL
1393 #endif /* HAVE_MORECORE */
1395 /* mstate bit set if contiguous morecore disabled or failed */
1396 #define USE_NONCONTIGUOUS_BIT (4U)
1398 /* segment bit set in create_mspace_with_base */
1399 #define EXTERN_BIT (8U)
1402 /* --------------------------- Lock preliminaries ------------------------ */
1404 #if USE_LOCKS
1407 When locks are defined, there are up to two global locks:
1409 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1410 MORECORE. In many cases sys_alloc requires two calls, that should
1411 not be interleaved with calls by other threads. This does not
1412 protect against direct calls to MORECORE by other threads not
1413 using this lock, so there is still code to cope as best we can with
1414 interference.
1416 * magic_init_mutex ensures that mparams.magic and other
1417 unique mparams values are initialized only once.
1420 #ifndef WIN32
1421 /* By default use posix locks */
1422 #include <pthread.h>
1423 #define MLOCK_T pthread_mutex_t
1424 #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1425 #define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1426 #define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1428 #if HAVE_MORECORE
1429 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1430 #endif /* HAVE_MORECORE */
1432 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1434 #else /* WIN32 */
1436 Because lock-protected regions have bounded times, and there
1437 are no recursive lock calls, we can use simple spinlocks.
1440 #define MLOCK_T long
1441 static int win32_acquire_lock (MLOCK_T *sl) {
1442 for (;;) {
1443 #ifdef InterlockedCompareExchangePointer
1444 if (!InterlockedCompareExchange(sl, 1, 0))
1445 return 0;
1446 #else /* Use older void* version */
1447 if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1448 return 0;
1449 #endif /* InterlockedCompareExchangePointer */
1450 Sleep (0);
1451 }
1452 }
1454 static void win32_release_lock (MLOCK_T *sl) {
1455 InterlockedExchange (sl, 0);
1456 }
1458 #define INITIAL_LOCK(l) *(l)=0
1459 #define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1460 #define RELEASE_LOCK(l) win32_release_lock(l)
1461 #if HAVE_MORECORE
1462 static MLOCK_T morecore_mutex;
1463 #endif /* HAVE_MORECORE */
1464 static MLOCK_T magic_init_mutex;
1465 #endif /* WIN32 */
1467 #define USE_LOCK_BIT (2U)
1468 #else /* USE_LOCKS */
1469 #define USE_LOCK_BIT (0U)
1470 #define INITIAL_LOCK(l)
1471 #endif /* USE_LOCKS */
1473 #if USE_LOCKS && HAVE_MORECORE
1474 #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1475 #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1476 #else /* USE_LOCKS && HAVE_MORECORE */
1477 #define ACQUIRE_MORECORE_LOCK()
1478 #define RELEASE_MORECORE_LOCK()
1479 #endif /* USE_LOCKS && HAVE_MORECORE */
1481 #if USE_LOCKS
1482 #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1483 #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1484 #else /* USE_LOCKS */
1485 #define ACQUIRE_MAGIC_INIT_LOCK()
1486 #define RELEASE_MAGIC_INIT_LOCK()
1487 #endif /* USE_LOCKS */
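/*
  A minimal illustrative sketch of the acquire/release pairing these macros
  are used for: a short critical section guarding one-time setup, in the
  same shape as the mparams.magic initialization done in init_mparams below.
  The function name and the `initialized' flag are hypothetical.
*/
#if 0 /* illustrative example */
static void example_once_init(void) {
  static int initialized;             /* hypothetical stand-in for mparams.magic */
  ACQUIRE_MAGIC_INIT_LOCK();
  if (!initialized) {
    /* ... perform one-time setup while holding the lock ... */
    initialized = 1;
  }
  RELEASE_MAGIC_INIT_LOCK();
}
#endif /* illustrative example */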
1490 /* ----------------------- Chunk representations ------------------------ */
1493 (The following includes lightly edited explanations by Colin Plumb.)
1495 The malloc_chunk declaration below is misleading (but accurate and
1496 necessary). It declares a "view" into memory allowing access to
1497 necessary fields at known offsets from a given base.
1499 Chunks of memory are maintained using a `boundary tag' method as
1500 originally described by Knuth. (See the paper by Paul Wilson
1501 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1502 techniques.) Sizes of free chunks are stored both in the front of
1503 each chunk and at the end. This makes consolidating fragmented
1504 chunks into bigger chunks fast. The head fields also hold bits
1505 representing whether chunks are free or in use.
1507 Here are some pictures to make it clearer. They are "exploded" to
1508 show that the state of a chunk can be thought of as extending from
1509 the high 31 bits of the head field of its header through the
1510 prev_foot and PINUSE_BIT bit of the following chunk header.
1512 A chunk that's in use looks like:
1514 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1515 | Size of previous chunk (if P = 1) |
1516 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1517 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1518 | Size of this chunk 1| +-+
1519 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1521 +- -+
1523 +- -+
1525 +- size - sizeof(size_t) available payload bytes -+
1527 chunk-> +- -+
1529 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1530 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1531 | Size of next chunk (may or may not be in use) | +-+
1532 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1534 And if it's free, it looks like this:
1536 chunk-> +- -+
1537 | User payload (must be in use, or we would have merged!) |
1538 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1539 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1540 | Size of this chunk 0| +-+
1541 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1542 | Next pointer |
1543 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1544 | Prev pointer |
1545 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1547 +- size - sizeof(struct chunk) unused bytes -+
1549 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1550 | Size of this chunk |
1551 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1552 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1553 | Size of next chunk (must be in use, or we would have merged)| +-+
1554 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1556 +- User payload -+
1558 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1561 Note that since we always merge adjacent free chunks, the chunks
1562 adjacent to a free chunk must be in use.
1564 Given a pointer to a chunk (which can be derived trivially from the
1565 payload pointer) we can, in O(1) time, find out whether the adjacent
1566 chunks are free, and if so, unlink them from the lists that they
1567 are on and merge them with the current chunk.
1569 Chunks always begin on even word boundaries, so the mem portion
1570 (which is returned to the user) is also on an even word boundary, and
1571 thus at least double-word aligned.
1573 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1574 chunk size (which is always a multiple of two words), is an in-use
1575 bit for the *previous* chunk. If that bit is *clear*, then the
1576 word before the current chunk size contains the previous chunk
1577 size, and can be used to find the front of the previous chunk.
1578 The very first chunk allocated always has this bit set, preventing
1579 access to non-existent (or non-owned) memory. If pinuse is set for
1580 any given chunk, then you CANNOT determine the size of the
1581 previous chunk, and might even get a memory addressing fault when
1582 trying to do so.
1584 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1585 the chunk size redundantly records whether the current chunk is
1586 inuse. This redundancy enables usage checks within free and realloc,
1587 and reduces indirection when freeing and consolidating chunks.
1589 Each freshly allocated chunk must have both cinuse and pinuse set.
1590 That is, each allocated chunk borders either a previously allocated
1591 and still in-use chunk, or the base of its memory arena. This is
1592 ensured by making all allocations from the `lowest' part of any
1593 found chunk. Further, no free chunk physically borders another one,
1594 so each free chunk is known to be preceded and followed by either
1595 inuse chunks or the ends of memory.
1597 Note that the `foot' of the current chunk is actually represented
1598 as the prev_foot of the NEXT chunk. This makes it easier to
1599 deal with alignments etc but can be very confusing when trying
1600 to extend or adapt this code.
1602 The exceptions to all this are
1604 1. The special chunk `top' is the top-most available chunk (i.e.,
1605 the one bordering the end of available memory). It is treated
1606 specially. Top is never included in any bin, is used only if
1607 no other chunk is available, and is released back to the
1608 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1609 the top chunk is treated as larger (and thus less well
1610 fitting) than any other available chunk. The top chunk
1611 doesn't update its trailing size field since there is no next
1612 contiguous chunk that would have to index off it. However,
1613 space is still allocated for it (TOP_FOOT_SIZE) to enable
1614 separation or merging when space is extended.
1616 2. Chunks allocated via mmap, which have the lowest-order bit
1617 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1618 PINUSE_BIT in their head fields. Because they are allocated
1619 one-by-one, each must carry its own prev_foot field, which is
1620 also used to hold the offset this chunk has within its mmapped
1621 region, which is needed to preserve alignment. Each mmapped
1622 chunk is trailed by the first two fields of a fake next-chunk
1623 for sake of usage checks.
1627 struct malloc_chunk {
1628 size_t prev_foot; /* Size of previous chunk (if free). */
1629 size_t head; /* Size and inuse bits. */
1630 struct malloc_chunk* fd; /* double links -- used only if free. */
1631 struct malloc_chunk* bk;
1632 };
1634 typedef struct malloc_chunk mchunk;
1635 typedef struct malloc_chunk* mchunkptr;
1636 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1637 typedef unsigned int bindex_t; /* Described below */
1638 typedef unsigned int binmap_t; /* Described below */
1639 typedef unsigned int flag_t; /* The type of various bit flag sets */
1641 /* ------------------- Chunks sizes and alignments ----------------------- */
1643 #define MCHUNK_SIZE (sizeof(mchunk))
1645 #if FOOTERS
1646 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1647 #else /* FOOTERS */
1648 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
1649 #endif /* FOOTERS */
1651 /* MMapped chunks need a second word of overhead ... */
1652 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1653 /* ... and additional padding for fake next-chunk at foot */
1654 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1656 /* The smallest size we can malloc is an aligned minimal chunk */
1657 #define MIN_CHUNK_SIZE\
1658 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1660 /* conversion from malloc headers to user pointers, and back */
1661 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1662 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1663 /* chunk associated with aligned address A */
1664 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1666 /* Bounds on request (not chunk) sizes. */
1667 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1668 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1670 /* pad request bytes into a usable size */
1671 #define pad_request(req) \
1672 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1674 /* pad request, checking for minimum (but not maximum) */
1675 #define request2size(req) \
1676 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
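/*
  A small worked example of the request padding above, assuming a 64-bit
  build with the defaults in this file (8-byte size_t, 8-byte alignment,
  FOOTERS off, so CHUNK_OVERHEAD == 8 and MIN_CHUNK_SIZE == 32):
    request2size(100) == (100 + 8 + 7) & ~7 == 112
    request2size(1)   == MIN_CHUNK_SIZE == 32
  chunk2mem/mem2chunk simply step over the prev_foot and head words.
  The function name is hypothetical.
*/
#if 0 /* illustrative example */
static void example_request_padding(void) {
  char buf[2 * MIN_CHUNK_SIZE];
  mchunkptr p = (mchunkptr)buf;
  assert(request2size(100) == 112);
  assert(request2size(1) == MIN_CHUNK_SIZE);
  assert(mem2chunk(chunk2mem(p)) == p);   /* the conversions are inverses */
}
#endif /* illustrative example */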
1679 /* ------------------ Operations on head and foot fields ----------------- */
1682 The head field of a chunk is or'ed with PINUSE_BIT when the previous
1683 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1684 use. If the chunk was obtained with mmap, the prev_foot field has
1685 IS_MMAPPED_BIT set, with its remaining bits holding the offset from the
1686 base of the mmapped region to the base of the chunk.
1689 #define PINUSE_BIT (SIZE_T_ONE)
1690 #define CINUSE_BIT (SIZE_T_TWO)
1691 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1693 /* Head value for fenceposts */
1694 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1696 /* extraction of fields from head words */
1697 #define cinuse(p) ((p)->head & CINUSE_BIT)
1698 #define pinuse(p) ((p)->head & PINUSE_BIT)
1699 #define chunksize(p) ((p)->head & ~(INUSE_BITS))
1701 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1702 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1704 /* Treat space at ptr +/- offset as a chunk */
1705 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1706 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1708 /* Ptr to next or previous physical malloc_chunk. */
1709 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1710 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1712 /* extract next chunk's pinuse bit */
1713 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1715 /* Get/set size at footer */
1716 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1717 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1719 /* Set size, pinuse bit, and foot */
1720 #define set_size_and_pinuse_of_free_chunk(p, s)\
1721 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1723 /* Set size, pinuse bit, foot, and clear next pinuse */
1724 #define set_free_with_pinuse(p, s, n)\
1725 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1727 #define is_mmapped(p)\
1728 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1730 /* Get the internal overhead associated with chunk p */
1731 #define overhead_for(p)\
1732 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1734 /* Return true if malloced space is not necessarily cleared */
1735 #if MMAP_CLEARS
1736 #define calloc_must_clear(p) (!is_mmapped(p))
1737 #else /* MMAP_CLEARS */
1738 #define calloc_must_clear(p) (1)
1739 #endif /* MMAP_CLEARS */
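/*
  A minimal illustrative sketch of reading a chunk's bookkeeping with the
  macros above: chunksize strips the two flag bits, next_chunk steps forward
  by that size, and prev_chunk is only meaningful when the previous chunk is
  free (pinuse clear) and this chunk is not mmapped.  The function name is
  hypothetical.
*/
#if 0 /* illustrative example */
static void example_head_bits(mchunkptr p) {
  size_t sz = chunksize(p);                /* head with PINUSE/CINUSE masked off */
  mchunkptr nxt = next_chunk(p);           /* start of the next physical chunk */
  if (!pinuse(p) && !is_mmapped(p)) {
    mchunkptr prv = prev_chunk(p);         /* prev_foot holds the previous size */
    (void)prv;
  }
  (void)sz; (void)nxt; (void)cinuse(p);
}
#endif /* illustrative example */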
1741 /* ---------------------- Overlaid data structures ----------------------- */
1744 When chunks are not in use, they are treated as nodes of either
1745 lists or trees.
1747 "Small" chunks are stored in circular doubly-linked lists, and look
1748 like this:
1750 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1751 | Size of previous chunk |
1752 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1753 `head:' | Size of chunk, in bytes |P|
1754 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1755 | Forward pointer to next chunk in list |
1756 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1757 | Back pointer to previous chunk in list |
1758 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1759 | Unused space (may be 0 bytes long) .
1762 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1763 `foot:' | Size of chunk, in bytes |
1764 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1766 Larger chunks are kept in a form of bitwise digital trees (aka
1767 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1768 free chunks greater than 256 bytes, their size doesn't impose any
1769 constraints on user chunk sizes. Each node looks like:
1771 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1772 | Size of previous chunk |
1773 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1774 `head:' | Size of chunk, in bytes |P|
1775 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1776 | Forward pointer to next chunk of same size |
1777 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1778 | Back pointer to previous chunk of same size |
1779 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1780 | Pointer to left child (child[0]) |
1781 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1782 | Pointer to right child (child[1]) |
1783 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1784 | Pointer to parent |
1785 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1786 | bin index of this chunk |
1787 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1788 | Unused space .
1790 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1791 `foot:' | Size of chunk, in bytes |
1792 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1794 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1795 of the same size are arranged in a circularly-linked list, with only
1796 the oldest chunk (the next to be used, in our FIFO ordering)
1797 actually in the tree. (Tree members are distinguished by a non-null
1798 parent pointer.) If a chunk with the same size as an existing node
1799 is inserted, it is linked off the existing node using pointers that
1800 work in the same way as fd/bk pointers of small chunks.
1802 Each tree contains a power of 2 sized range of chunk sizes (the
1803 smallest is 0x100 <= x < 0x180), which is divided in half at each
1804 tree level, with the chunks in the smaller half of the range (0x100
1805 <= x < 0x140 for the top node) in the left subtree and the larger
1806 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1807 done by inspecting individual bits.
1809 Using these rules, each node's left subtree contains all smaller
1810 sizes than its right subtree. However, the node at the root of each
1811 subtree has no particular ordering relationship to either. (The
1812 dividing line between the subtree sizes is based on trie relation.)
1813 If we remove the last chunk of a given size from the interior of the
1814 tree, we need to replace it with a leaf node. The tree ordering
1815 rules permit a node to be replaced by any leaf below it.
1817 The smallest chunk in a tree (a common operation in a best-fit
1818 allocator) can be found by walking a path to the leftmost leaf in
1819 the tree. Unlike a usual binary tree, where we follow left child
1820 pointers until we reach a null, here we follow the right child
1821 pointer any time the left one is null, until we reach a leaf with
1822 both child pointers null. The smallest chunk in the tree will be
1823 somewhere along that path.
1825 The worst case number of steps to add, find, or remove a node is
1826 bounded by the number of bits differentiating chunks within
1827 bins. Under current bin calculations, this ranges from 6 up to 21
1828 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1829 is of course much better.
1832 struct malloc_tree_chunk {
1833 /* The first four fields must be compatible with malloc_chunk */
1834 size_t prev_foot;
1835 size_t head;
1836 struct malloc_tree_chunk* fd;
1837 struct malloc_tree_chunk* bk;
1839 struct malloc_tree_chunk* child[2];
1840 struct malloc_tree_chunk* parent;
1841 bindex_t index;
1842 };
1844 typedef struct malloc_tree_chunk tchunk;
1845 typedef struct malloc_tree_chunk* tchunkptr;
1846 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1848 /* A little helper macro for trees */
1849 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
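/*
  A minimal illustrative sketch of the "leftmost path" walk described above:
  follow child[0] when present, otherwise child[1], keeping the smallest
  chunk seen along the way.  The real best-fit search in tmalloc_large later
  in this file follows the same path while also tracking remainder sizes.
  The function name is hypothetical.
*/
#if 0 /* illustrative example */
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while (t != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;
    t = leftmost_child(t);                 /* child[0] if non-null, else child[1] */
  }
  return best;
}
#endif /* illustrative example */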
1851 /* ----------------------------- Segments -------------------------------- */
1854 Each malloc space may include non-contiguous segments, held in a
1855 list headed by an embedded malloc_segment record representing the
1856 top-most space. Segments also include flags holding properties of
1857 the space. Large chunks that are directly allocated by mmap are not
1858 included in this list. They are instead independently created and
1859 destroyed without otherwise keeping track of them.
1861 Segment management mainly comes into play for spaces allocated by
1862 MMAP. Any call to MMAP might or might not return memory that is
1863 adjacent to an existing segment. MORECORE normally contiguously
1864 extends the current space, so this space is almost always adjacent,
1865 which is simpler and faster to deal with. (This is why MORECORE is
1866 used preferentially to MMAP when both are available -- see
1867 sys_alloc.) When allocating using MMAP, we don't use any of the
1868 hinting mechanisms (inconsistently) supported in various
1869 implementations of unix mmap, or distinguish reserving from
1870 committing memory. Instead, we just ask for space, and exploit
1871 contiguity when we get it. It is probably possible to do
1872 better than this on some systems, but no general scheme seems
1873 to be significantly better.
1875 Management entails a simpler variant of the consolidation scheme
1876 used for chunks to reduce fragmentation -- new adjacent memory is
1877 normally prepended or appended to an existing segment. However,
1878 there are limitations compared to chunk consolidation that mostly
1879 reflect the fact that segment processing is relatively infrequent
1880 (occurring only when getting memory from system) and that we
1881 don't expect to have huge numbers of segments:
1883 * Segments are not indexed, so traversal requires linear scans. (It
1884 would be possible to index these, but is not worth the extra
1885 overhead and complexity for most programs on most platforms.)
1886 * New segments are only appended to old ones when holding top-most
1887 memory; if they cannot be prepended to others, they are held in
1888 different segments.
1890 Except for the top-most segment of an mstate, each segment record
1891 is kept at the tail of its segment. Segments are added by pushing
1892 segment records onto the list headed by &mstate.seg for the
1893 containing mstate.
1895 Segment flags control allocation/merge/deallocation policies:
1896 * If EXTERN_BIT set, then we did not allocate this segment,
1897 and so should not try to deallocate or merge with others.
1898 (This currently holds only for the initial segment passed
1899 into create_mspace_with_base.)
1900 * If IS_MMAPPED_BIT set, the segment may be merged with
1901 other surrounding mmapped segments and trimmed/de-allocated
1902 using munmap.
1903 * If neither bit is set, then the segment was obtained using
1904 MORECORE so can be merged with surrounding MORECORE'd segments
1905 and deallocated/trimmed using MORECORE with negative arguments.
1908 struct malloc_segment {
1909 char* base; /* base address */
1910 size_t size; /* allocated size */
1911 struct malloc_segment* next; /* ptr to next segment */
1912 flag_t sflags; /* mmap and extern flag */
1913 };
1915 #define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
1916 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
1918 typedef struct malloc_segment msegment;
1919 typedef struct malloc_segment* msegmentptr;
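/*
  A minimal illustrative sketch of the release policy encoded by the two
  flag tests above, following the rules listed in the comment: extern
  segments are never freed here, mmapped segments can go back via
  CALL_MUNMAP, and anything else came from MORECORE.  The function name is
  hypothetical, and the real code also updates footprint accounting.
*/
#if 0 /* illustrative example */
static void example_release_policy(msegmentptr sp) {
  if (is_extern_segment(sp)) {
    /* not ours to deallocate (e.g. memory given to create_mspace_with_base) */
  }
  else if (is_mmapped_segment(sp)) {
    CALL_MUNMAP(sp->base, sp->size);       /* whole segment back to the system */
  }
  else {
    /* MORECORE'd space: trimmed with negative MORECORE arguments instead */
  }
}
#endif /* illustrative example */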
1921 /* ---------------------------- malloc_state ----------------------------- */
1924 A malloc_state holds all of the bookkeeping for a space.
1925 The main fields are:
1927 Top
1928 The topmost chunk of the currently active segment. Its size is
1929 cached in topsize. The actual size of topmost space is
1930 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1931 fenceposts and segment records if necessary when getting more
1932 space from the system. The size at which to autotrim top is
1933 cached from mparams in trim_check, except that it is disabled if
1934 an autotrim fails.
1936 Designated victim (dv)
1937 This is the preferred chunk for servicing small requests that
1938 don't have exact fits. It is normally the chunk split off most
1939 recently to service another small request. Its size is cached in
1940 dvsize. The link fields of this chunk are not maintained since it
1941 is not kept in a bin.
1943 SmallBins
1944 An array of bin headers for free chunks. These bins hold chunks
1945 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1946 chunks of all the same size, spaced 8 bytes apart. To simplify
1947 use in double-linked lists, each bin header acts as a malloc_chunk
1948 pointing to the real first node, if it exists (else pointing to
1949 itself). This avoids special-casing for headers. But to avoid
1950 waste, we allocate only the fd/bk pointers of bins, and then use
1951 repositioning tricks to treat these as the fields of a chunk.
1953 TreeBins
1954 Treebins are pointers to the roots of trees holding a range of
1955 sizes. There are 2 equally spaced treebins for each power of two
1956 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
1957 larger.
1959 Bin maps
1960 There is one bit map for small bins ("smallmap") and one for
1961 treebins ("treemap"). Each bin sets its bit when non-empty, and
1962 clears the bit when empty. Bit operations are then used to avoid
1963 bin-by-bin searching -- nearly all "search" is done without ever
1964 looking at bins that won't be selected. The bit maps
1965 conservatively use 32 bits per map word, even on 64-bit systems.
1966 For a good description of some of the bit-based techniques used
1967 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1968 supplement at http://hackersdelight.org/). Many of these are
1969 intended to reduce the branchiness of paths through malloc etc, as
1970 well as to reduce the number of memory locations read or written.
1972 Segments
1973 A list of segments headed by an embedded malloc_segment record
1974 representing the initial space.
1976 Address check support
1977 The least_addr field is the least address ever obtained from
1978 MORECORE or MMAP. Attempted frees and reallocs of any address less
1979 than this are trapped (unless INSECURE is defined).
1981 Magic tag
1982 A cross-check field that should always hold same value as mparams.magic.
1984 Flags
1985 Bits recording whether to use MMAP, locks, or contiguous MORECORE
1987 Statistics
1988 Each space keeps track of current and maximum system memory
1989 obtained via MORECORE or MMAP.
1991 Locking
1992 If USE_LOCKS is defined, the "mutex" lock is acquired and released
1993 around every public call using this mspace.
1996 /* Bin types, widths and sizes */
1997 #define NSMALLBINS (32U)
1998 #define NTREEBINS (32U)
1999 #define SMALLBIN_SHIFT (3U)
2000 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2001 #define TREEBIN_SHIFT (8U)
2002 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2003 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2004 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2006 struct malloc_state {
2007 binmap_t smallmap;
2008 binmap_t treemap;
2009 size_t dvsize;
2010 size_t topsize;
2011 char* least_addr;
2012 mchunkptr dv;
2013 mchunkptr top;
2014 size_t trim_check;
2015 size_t magic;
2016 mchunkptr smallbins[(NSMALLBINS+1)*2];
2017 tbinptr treebins[NTREEBINS];
2018 size_t footprint;
2019 size_t max_footprint;
2020 flag_t mflags;
2021 #if USE_LOCKS
2022 MLOCK_T mutex; /* locate lock among fields that rarely change */
2023 #endif /* USE_LOCKS */
2024 msegment seg;
2025 };
2027 typedef struct malloc_state* mstate;
2029 /* ------------- Global malloc_state and malloc_params ------------------- */
2032 malloc_params holds global properties, including those that can be
2033 dynamically set using mallopt. There is a single instance, mparams,
2034 initialized in init_mparams.
2037 struct malloc_params {
2038 size_t magic;
2039 size_t page_size;
2040 size_t granularity;
2041 size_t mmap_threshold;
2042 size_t trim_threshold;
2043 flag_t default_mflags;
2044 };
2046 static struct malloc_params mparams;
2048 /* The global malloc_state used for all non-"mspace" calls */
2049 static struct malloc_state _gm_;
2050 #define gm (&_gm_)
2051 #define is_global(M) ((M) == &_gm_)
2052 #define is_initialized(M) ((M)->top != 0)
2054 /* -------------------------- system alloc setup ------------------------- */
2056 /* Operations on mflags */
2058 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2059 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2060 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2062 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2063 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2064 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2066 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2067 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2069 #define set_lock(M,L)\
2070 ((M)->mflags = (L)?\
2071 ((M)->mflags | USE_LOCK_BIT) :\
2072 ((M)->mflags & ~USE_LOCK_BIT))
2074 /* page-align a size */
2075 #define page_align(S)\
2076 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2078 /* granularity-align a size */
2079 #define granularity_align(S)\
2080 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2082 #define is_page_aligned(S)\
2083 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2084 #define is_granularity_aligned(S)\
2085 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
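/*
  A small worked example of the alignment macros above, assuming
  mparams.page_size == 4096 and mparams.granularity == 65536 (typical
  values): page_align(1) == 4096, page_align(5000) == 8192, and
  granularity_align(5000) == 65536.  The function name is hypothetical.
*/
#if 0 /* illustrative example */
static void example_alignment(void) {
  /* assumes mparams.page_size == 4096, mparams.granularity == 65536 */
  assert(page_align(1) == 4096);
  assert(page_align(5000) == 8192);
  assert(granularity_align(5000) == 65536);
  assert(is_page_aligned(8192));
}
#endif /* illustrative example */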
2087 /* True if segment S holds address A */
2088 #define segment_holds(S, A)\
2089 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2091 /* Return segment holding given address */
2092 static msegmentptr segment_holding(mstate m, char* addr) {
2093 msegmentptr sp = &m->seg;
2094 for (;;) {
2095 if (addr >= sp->base && addr < sp->base + sp->size)
2096 return sp;
2097 if ((sp = sp->next) == 0)
2098 return 0;
2099 }
2100 }
2102 /* Return true if segment contains a segment link */
2103 static int has_segment_link(mstate m, msegmentptr ss) {
2104 msegmentptr sp = &m->seg;
2105 for (;;) {
2106 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2107 return 1;
2108 if ((sp = sp->next) == 0)
2109 return 0;
2110 }
2111 }
2113 #ifndef MORECORE_CANNOT_TRIM
2114 #define should_trim(M,s) ((s) > (M)->trim_check)
2115 #else /* MORECORE_CANNOT_TRIM */
2116 #define should_trim(M,s) (0)
2117 #endif /* MORECORE_CANNOT_TRIM */
2120 TOP_FOOT_SIZE is padding at the end of a segment, including space
2121 that may be needed to place segment records and fenceposts when new
2122 noncontiguous segments are added.
2124 #define TOP_FOOT_SIZE\
2125 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2128 /* ------------------------------- Hooks -------------------------------- */
2131 PREACTION should be defined to return 0 on success, and nonzero on
2132 failure. If you are not using locking, you can redefine these to do
2133 anything you like.
2136 #if USE_LOCKS
2138 /* Ensure locks are initialized */
2139 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2141 #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2142 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2143 #else /* USE_LOCKS */
2145 #ifndef PREACTION
2146 #define PREACTION(M) (0)
2147 #endif /* PREACTION */
2149 #ifndef POSTACTION
2150 #define POSTACTION(M)
2151 #endif /* POSTACTION */
2153 #endif /* USE_LOCKS */
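/*
  A minimal illustrative sketch of how these hooks bracket the public entry
  points below: PREACTION returns 0 once it is safe to proceed (the mspace
  lock taken if locking is enabled), and POSTACTION releases it, exactly as
  internal_mallinfo and the malloc/free bodies do.  The function name is
  hypothetical.
*/
#if 0 /* illustrative example */
static void example_locked_operation(mstate m) {
  if (!PREACTION(m)) {         /* 0 means initialization done and lock (if any) held */
    /* ... operate on m while it is protected ... */
    POSTACTION(m);             /* release the lock if one is in use */
  }
}
#endif /* illustrative example */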
2156 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2157 USAGE_ERROR_ACTION is triggered on detected bad frees and
2158 reallocs. The argument p is an address that might have triggered the
2159 fault. It is ignored by the two predefined actions, but might be
2160 useful in custom actions that try to help diagnose errors.
2163 #if PROCEED_ON_ERROR
2165 /* A count of the number of corruption errors causing resets */
2166 int malloc_corruption_error_count;
2168 /* default corruption action */
2169 static void reset_on_error(mstate m);
2171 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2172 #define USAGE_ERROR_ACTION(m, p)
2174 #else /* PROCEED_ON_ERROR */
2176 #ifndef CORRUPTION_ERROR_ACTION
2177 #define CORRUPTION_ERROR_ACTION(m) ABORT
2178 #endif /* CORRUPTION_ERROR_ACTION */
2180 #ifndef USAGE_ERROR_ACTION
2181 #define USAGE_ERROR_ACTION(m,p) ABORT
2182 #endif /* USAGE_ERROR_ACTION */
2184 #endif /* PROCEED_ON_ERROR */
2186 /* -------------------------- Debugging setup ---------------------------- */
2188 #if ! DEBUG
2190 #define check_free_chunk(M,P)
2191 #define check_inuse_chunk(M,P)
2192 #define check_malloced_chunk(M,P,N)
2193 #define check_mmapped_chunk(M,P)
2194 #define check_malloc_state(M)
2195 #define check_top_chunk(M,P)
2197 #else /* DEBUG */
2198 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2199 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2200 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2201 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2202 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2203 #define check_malloc_state(M) do_check_malloc_state(M)
2205 static void do_check_any_chunk(mstate m, mchunkptr p);
2206 static void do_check_top_chunk(mstate m, mchunkptr p);
2207 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2208 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2209 static void do_check_free_chunk(mstate m, mchunkptr p);
2210 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2211 static void do_check_tree(mstate m, tchunkptr t);
2212 static void do_check_treebin(mstate m, bindex_t i);
2213 static void do_check_smallbin(mstate m, bindex_t i);
2214 static void do_check_malloc_state(mstate m);
2215 static int bin_find(mstate m, mchunkptr x);
2216 static size_t traverse_and_check(mstate m);
2217 #endif /* DEBUG */
2219 /* ---------------------------- Indexing Bins ---------------------------- */
2221 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2222 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2223 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2224 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2226 /* addressing by index. See above about smallbin repositioning */
2227 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2228 #define treebin_at(M,i) (&((M)->treebins[i]))
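/*
  A small worked example of small-bin indexing, assuming 8-byte
  SMALLBIN_WIDTH: a 40-byte chunk has small_index(40) == 5 and
  small_index2size(5) == 40.  smallbin_at(M,5) aliases &M->smallbins[10] as
  a malloc_chunk, so the bin header's fd/bk fields land on later smallbins
  slots -- the repositioning trick mentioned in the malloc_state comment.
  The function name is hypothetical.
*/
#if 0 /* illustrative example */
static void example_smallbin_index(mstate m, size_t sz) {
  if (is_small(sz)) {                      /* e.g. sz == 40 */
    bindex_t i = small_index(sz);          /* 40 >> 3 == 5 */
    sbinptr b = smallbin_at(m, i);         /* bin header viewed as a chunk */
    assert(small_index2size(i) <= sz);
    (void)b;
  }
}
#endif /* illustrative example */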
2230 /* assign tree index for size S to variable I */
2231 #if defined(__GNUC__) && defined(i386)
2232 #define compute_tree_index(S, I)\
2234 size_t X = S >> TREEBIN_SHIFT;\
2235 if (X == 0)\
2236 I = 0;\
2237 else if (X > 0xFFFF)\
2238 I = NTREEBINS-1;\
2239 else {\
2240 unsigned int K;\
2241 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2242 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2245 #else /* GNUC */
2246 #define compute_tree_index(S, I)\
2248 size_t X = S >> TREEBIN_SHIFT;\
2249 if (X == 0)\
2250 I = 0;\
2251 else if (X > 0xFFFF)\
2252 I = NTREEBINS-1;\
2253 else {\
2254 unsigned int Y = (unsigned int)X;\
2255 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2256 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2257 N += K;\
2258 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2259 K = 14 - N + ((Y <<= K) >> 15);\
2260 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2263 #endif /* GNUC */
2265 /* Bit representing maximum resolved size in a treebin at i */
2266 #define bit_for_tree_index(i) \
2267 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2269 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2270 #define leftshift_for_tree_index(i) \
2271 ((i == NTREEBINS-1)? 0 : \
2272 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2274 /* The size of the smallest chunk held in bin with index i */
2275 #define minsize_for_tree_index(i) \
2276 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2277 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
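/*
  A small worked example of the treebin mapping above: a free chunk of 0x118
  (280) bytes has X == 0x118 >> TREEBIN_SHIFT == 1, its highest set bit is
  bit 0, and compute_tree_index yields index 0; treebin 0 holds sizes
  0x100 <= s < 0x180, since minsize_for_tree_index(0) == 0x100 and
  minsize_for_tree_index(1) == 0x180.  The function name is hypothetical.
*/
#if 0 /* illustrative example */
static void example_tree_index(void) {
  bindex_t i;
  compute_tree_index((size_t)0x118, i);
  assert(i == 0);
  assert(minsize_for_tree_index(0) == 0x100);
  assert(minsize_for_tree_index(1) == 0x180);
}
#endif /* illustrative example */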
2280 /* ------------------------ Operations on bin maps ----------------------- */
2282 /* bit corresponding to given index */
2283 #define idx2bit(i) ((binmap_t)(1) << (i))
2285 /* Mark/Clear bits with given index */
2286 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2287 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2288 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2290 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2291 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2292 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2294 /* index corresponding to given bit */
2296 #if defined(__GNUC__) && defined(i386)
2297 #define compute_bit2idx(X, I)\
2299 unsigned int J;\
2300 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2301 I = (bindex_t)J;\
2304 #else /* GNUC */
2305 #if USE_BUILTIN_FFS
2306 #define compute_bit2idx(X, I) I = ffs(X)-1
2308 #else /* USE_BUILTIN_FFS */
2309 #define compute_bit2idx(X, I)\
2311 unsigned int Y = X - 1;\
2312 unsigned int K = Y >> (16-4) & 16;\
2313 unsigned int N = K; Y >>= K;\
2314 N += K = Y >> (8-3) & 8; Y >>= K;\
2315 N += K = Y >> (4-2) & 4; Y >>= K;\
2316 N += K = Y >> (2-1) & 2; Y >>= K;\
2317 N += K = Y >> (1-0) & 1; Y >>= K;\
2318 I = (bindex_t)(N + Y);\
2320 #endif /* USE_BUILTIN_FFS */
2321 #endif /* GNUC */
2323 /* isolate the least set bit of a bitmap */
2324 #define least_bit(x) ((x) & -(x))
2326 /* mask with all bits to left of least bit of x on */
2327 #define left_bits(x) ((x<<1) | -(x<<1))
2329 /* mask with all bits to left of or equal to least bit of x on */
2330 #define same_or_left_bits(x) ((x) | -(x))
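/*
  A minimal illustrative sketch of how the bit tricks above avoid
  bin-by-bin searching: mask the small map down to bins at or above the
  wanted index, take the least remaining set bit, and convert it back to a
  bin index.  This mirrors the spirit of the small-request path in malloc.
  The function name is hypothetical.
*/
#if 0 /* illustrative example */
static int example_find_nonempty_smallbin(mstate m, bindex_t i, bindex_t* out) {
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(i));
  if (candidates != 0) {
    bindex_t j;
    compute_bit2idx(least_bit(candidates), j);
    *out = j;                    /* lowest non-empty bin with index >= i */
    return 1;
  }
  return 0;                      /* nothing at or above index i */
}
#endif /* illustrative example */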
2333 /* ----------------------- Runtime Check Support ------------------------- */
2336 For security, the main invariant is that malloc/free/etc never
2337 writes to a static address other than malloc_state, unless static
2338 malloc_state itself has been corrupted, which cannot occur via
2339 malloc (because of these checks). In essence this means that we
2340 believe all pointers, sizes, maps etc held in malloc_state, but
2341 check all of those linked or offsetted from other embedded data
2342 structures. These checks are interspersed with main code in a way
2343 that tends to minimize their run-time cost.
2345 When FOOTERS is defined, in addition to range checking, we also
2346 verify footer fields of inuse chunks, which can be used to guarantee
2347 that the mstate controlling malloc/free is intact. This is a
2348 streamlined version of the approach described by William Robertson
2349 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2350 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2351 of an inuse chunk holds the xor of its mstate and a random seed,
2352 which is checked upon calls to free() and realloc(). This is
2353 (probabilistically) unguessable from outside the program, but can be
2354 computed by any code successfully malloc'ing any chunk, so does not
2355 itself provide protection against code that has already broken
2356 security through some other means. Unlike Robertson et al, we
2357 always dynamically check addresses of all offset chunks (previous,
2358 next, etc). This turns out to be cheaper than relying on hashes.
2361 #if !INSECURE
2362 /* Check if address a is at least as high as any from MORECORE or MMAP */
2363 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2364 /* Check if address of next chunk n is higher than base chunk p */
2365 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2366 /* Check if p has its cinuse bit on */
2367 #define ok_cinuse(p) cinuse(p)
2368 /* Check if p has its pinuse bit on */
2369 #define ok_pinuse(p) pinuse(p)
2371 #else /* !INSECURE */
2372 #define ok_address(M, a) (1)
2373 #define ok_next(b, n) (1)
2374 #define ok_cinuse(p) (1)
2375 #define ok_pinuse(p) (1)
2376 #endif /* !INSECURE */
2378 #if (FOOTERS && !INSECURE)
2379 /* Check if (alleged) mstate m has expected magic field */
2380 #define ok_magic(M) ((M)->magic == mparams.magic)
2381 #else /* (FOOTERS && !INSECURE) */
2382 #define ok_magic(M) (1)
2383 #endif /* (FOOTERS && !INSECURE) */
2386 /* In gcc, use __builtin_expect to minimize impact of checks */
2387 #if !INSECURE
2388 #if defined(__GNUC__) && __GNUC__ >= 3
2389 #define RTCHECK(e) __builtin_expect(e, 1)
2390 #else /* GNUC */
2391 #define RTCHECK(e) (e)
2392 #endif /* GNUC */
2393 #else /* !INSECURE */
2394 #define RTCHECK(e) (1)
2395 #endif /* !INSECURE */
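/*
  A minimal illustrative sketch of how the checks above are combined: wrap
  the expected-true condition in RTCHECK so gcc predicts the fast path, and
  fall through to USAGE_ERROR_ACTION otherwise, the same shape used when
  free and realloc validate a user-supplied pointer.  The function name is
  hypothetical.
*/
#if 0 /* illustrative example */
static void example_validate_chunk(mstate m, mchunkptr p) {
  if (RTCHECK(ok_address(m, p) && ok_cinuse(p))) {
    /* ... p looks like a live chunk owned by m; proceed ... */
  }
  else {
    USAGE_ERROR_ACTION(m, p);      /* bad free/realloc argument */
  }
}
#endif /* illustrative example */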
2397 /* macros to set up inuse chunks with or without footers */
2399 #if !FOOTERS
2401 #define mark_inuse_foot(M,p,s)
2403 /* Set cinuse bit and pinuse bit of next chunk */
2404 #define set_inuse(M,p,s)\
2405 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2406 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2408 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2409 #define set_inuse_and_pinuse(M,p,s)\
2410 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2411 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2413 /* Set size, cinuse and pinuse bit of this chunk */
2414 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2415 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2417 #else /* FOOTERS */
2419 /* Set foot of inuse chunk to be xor of mstate and seed */
2420 #define mark_inuse_foot(M,p,s)\
2421 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2423 #define get_mstate_for(p)\
2424 ((mstate)(((mchunkptr)((char*)(p) +\
2425 (chunksize(p))))->prev_foot ^ mparams.magic))
2427 #define set_inuse(M,p,s)\
2428 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2429 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2430 mark_inuse_foot(M,p,s))
2432 #define set_inuse_and_pinuse(M,p,s)\
2433 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2434 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2435 mark_inuse_foot(M,p,s))
2437 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2438 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2439 mark_inuse_foot(M, p, s))
2441 #endif /* !FOOTERS */
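/*
  A minimal illustrative sketch of the FOOTERS machinery above: given only a
  payload pointer, the footer xor'ed with mparams.magic recovers the owning
  mstate, and ok_magic rejects corrupted or forged footers.  This is the
  same recovery free and realloc perform when FOOTERS is enabled.  The
  function name is hypothetical.
*/
#if 0 /* illustrative example, assumes FOOTERS */
static mstate example_owner_of(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);     /* footer ^ mparams.magic */
  if (!ok_magic(fm)) {
    USAGE_ERROR_ACTION(fm, p);       /* not a chunk we handed out */
    return 0;
  }
  return fm;
}
#endif /* illustrative example */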
2443 /* ---------------------------- setting mparams -------------------------- */
2445 /* Initialize mparams */
2446 static int init_mparams(void) {
2447 if (mparams.page_size == 0) {
2448 size_t s;
2450 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2451 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2452 #if MORECORE_CONTIGUOUS
2453 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2454 #else /* MORECORE_CONTIGUOUS */
2455 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2456 #endif /* MORECORE_CONTIGUOUS */
2458 #if (FOOTERS && !INSECURE)
2460 #if USE_DEV_RANDOM
2461 int fd;
2462 unsigned char buf[sizeof(size_t)];
2463 /* Try to use /dev/urandom, else fall back on using time */
2464 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2465 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2466 s = *((size_t *) buf);
2467 close(fd);
2469 else
2470 #endif /* USE_DEV_RANDOM */
2471 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2473 s |= (size_t)8U; /* ensure nonzero */
2474 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2477 #else /* (FOOTERS && !INSECURE) */
2478 s = (size_t)0x58585858U;
2479 #endif /* (FOOTERS && !INSECURE) */
2480 ACQUIRE_MAGIC_INIT_LOCK();
2481 if (mparams.magic == 0) {
2482 mparams.magic = s;
2483 /* Set up lock for main malloc area */
2484 INITIAL_LOCK(&gm->mutex);
2485 gm->mflags = mparams.default_mflags;
2487 RELEASE_MAGIC_INIT_LOCK();
2489 #ifndef WIN32
2490 mparams.page_size = malloc_getpagesize;
2491 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2492 DEFAULT_GRANULARITY : mparams.page_size);
2493 #else /* WIN32 */
2495 SYSTEM_INFO system_info;
2496 GetSystemInfo(&system_info);
2497 mparams.page_size = system_info.dwPageSize;
2498 mparams.granularity = system_info.dwAllocationGranularity;
2500 #endif /* WIN32 */
2502 /* Sanity-check configuration:
2503 size_t must be unsigned and as wide as pointer type.
2504 ints must be at least 4 bytes.
2505 alignment must be at least 8.
2506 Alignment, min chunk size, and page size must all be powers of 2.
2508 if ((sizeof(size_t) != sizeof(char*)) ||
2509 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2510 (sizeof(int) < 4) ||
2511 (MALLOC_ALIGNMENT < (size_t)8U) ||
2512 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2513 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2514 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2515 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2516 ABORT;
2518 return 0;
2521 #if 0
2522 /* support for mallopt */
2523 static int change_mparam(int param_number, int value) {
2524 size_t val = (size_t)value;
2525 init_mparams();
2526 switch(param_number) {
2527 case M_TRIM_THRESHOLD:
2528 mparams.trim_threshold = val;
2529 return 1;
2530 case M_GRANULARITY:
2531 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2532 mparams.granularity = val;
2533 return 1;
2535 else
2536 return 0;
2537 case M_MMAP_THRESHOLD:
2538 mparams.mmap_threshold = val;
2539 return 1;
2540 default:
2541 return 0;
2544 #endif
2546 #if DEBUG
2547 /* ------------------------- Debugging Support --------------------------- */
2549 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2550 static void do_check_any_chunk(mstate m, mchunkptr p) {
2551 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2552 assert(ok_address(m, p));
2555 /* Check properties of top chunk */
2556 static void do_check_top_chunk(mstate m, mchunkptr p) {
2557 msegmentptr sp = segment_holding(m, (char*)p);
2558 size_t sz = chunksize(p);
2559 assert(sp != 0);
2560 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2561 assert(ok_address(m, p));
2562 assert(sz == m->topsize);
2563 assert(sz > 0);
2564 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2565 assert(pinuse(p));
2566 assert(!next_pinuse(p));
2569 /* Check properties of (inuse) mmapped chunks */
2570 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2571 size_t sz = chunksize(p);
2572 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2573 assert(is_mmapped(p));
2574 assert(use_mmap(m));
2575 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2576 assert(ok_address(m, p));
2577 assert(!is_small(sz));
2578 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2579 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2580 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2583 /* Check properties of inuse chunks */
2584 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2585 do_check_any_chunk(m, p);
2586 assert(cinuse(p));
2587 assert(next_pinuse(p));
2588 /* If not pinuse and not mmapped, previous chunk has OK offset */
2589 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2590 if (is_mmapped(p))
2591 do_check_mmapped_chunk(m, p);
2594 /* Check properties of free chunks */
2595 static void do_check_free_chunk(mstate m, mchunkptr p) {
2596 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2597 mchunkptr next = chunk_plus_offset(p, sz);
2598 do_check_any_chunk(m, p);
2599 assert(!cinuse(p));
2600 assert(!next_pinuse(p));
2601 assert (!is_mmapped(p));
2602 if (p != m->dv && p != m->top) {
2603 if (sz >= MIN_CHUNK_SIZE) {
2604 assert((sz & CHUNK_ALIGN_MASK) == 0);
2605 assert(is_aligned(chunk2mem(p)));
2606 assert(next->prev_foot == sz);
2607 assert(pinuse(p));
2608 assert (next == m->top || cinuse(next));
2609 assert(p->fd->bk == p);
2610 assert(p->bk->fd == p);
2612 else /* markers are always of size SIZE_T_SIZE */
2613 assert(sz == SIZE_T_SIZE);
2617 /* Check properties of malloced chunks at the point they are malloced */
2618 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2619 if (mem != 0) {
2620 mchunkptr p = mem2chunk(mem);
2621 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2622 do_check_inuse_chunk(m, p);
2623 assert((sz & CHUNK_ALIGN_MASK) == 0);
2624 assert(sz >= MIN_CHUNK_SIZE);
2625 assert(sz >= s);
2626 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2627 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2631 /* Check a tree and its subtrees. */
2632 static void do_check_tree(mstate m, tchunkptr t) {
2633 tchunkptr head = 0;
2634 tchunkptr u = t;
2635 bindex_t tindex = t->index;
2636 size_t tsize = chunksize(t);
2637 bindex_t idx;
2638 compute_tree_index(tsize, idx);
2639 assert(tindex == idx);
2640 assert(tsize >= MIN_LARGE_SIZE);
2641 assert(tsize >= minsize_for_tree_index(idx));
2642 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2644 do { /* traverse through chain of same-sized nodes */
2645 do_check_any_chunk(m, ((mchunkptr)u));
2646 assert(u->index == tindex);
2647 assert(chunksize(u) == tsize);
2648 assert(!cinuse(u));
2649 assert(!next_pinuse(u));
2650 assert(u->fd->bk == u);
2651 assert(u->bk->fd == u);
2652 if (u->parent == 0) {
2653 assert(u->child[0] == 0);
2654 assert(u->child[1] == 0);
2656 else {
2657 assert(head == 0); /* only one node on chain has parent */
2658 head = u;
2659 assert(u->parent != u);
2660 assert (u->parent->child[0] == u ||
2661 u->parent->child[1] == u ||
2662 *((tbinptr*)(u->parent)) == u);
2663 if (u->child[0] != 0) {
2664 assert(u->child[0]->parent == u);
2665 assert(u->child[0] != u);
2666 do_check_tree(m, u->child[0]);
2668 if (u->child[1] != 0) {
2669 assert(u->child[1]->parent == u);
2670 assert(u->child[1] != u);
2671 do_check_tree(m, u->child[1]);
2673 if (u->child[0] != 0 && u->child[1] != 0) {
2674 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2677 u = u->fd;
2678 } while (u != t);
2679 assert(head != 0);
2682 /* Check all the chunks in a treebin. */
2683 static void do_check_treebin(mstate m, bindex_t i) {
2684 tbinptr* tb = treebin_at(m, i);
2685 tchunkptr t = *tb;
2686 int empty = (m->treemap & (1U << i)) == 0;
2687 if (t == 0)
2688 assert(empty);
2689 if (!empty)
2690 do_check_tree(m, t);
2693 /* Check all the chunks in a smallbin. */
2694 static void do_check_smallbin(mstate m, bindex_t i) {
2695 sbinptr b = smallbin_at(m, i);
2696 mchunkptr p = b->bk;
2697 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2698 if (p == b)
2699 assert(empty);
2700 if (!empty) {
2701 for (; p != b; p = p->bk) {
2702 size_t size = chunksize(p);
2703 mchunkptr q;
2704 /* each chunk claims to be free */
2705 do_check_free_chunk(m, p);
2706 /* chunk belongs in bin */
2707 assert(small_index(size) == i);
2708 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2709 /* chunk is followed by an inuse chunk */
2710 q = next_chunk(p);
2711 if (q->head != FENCEPOST_HEAD)
2712 do_check_inuse_chunk(m, q);
2717 /* Find x in a bin. Used in other check functions. */
2718 static int bin_find(mstate m, mchunkptr x) {
2719 size_t size = chunksize(x);
2720 if (is_small(size)) {
2721 bindex_t sidx = small_index(size);
2722 sbinptr b = smallbin_at(m, sidx);
2723 if (smallmap_is_marked(m, sidx)) {
2724 mchunkptr p = b;
2725 do {
2726 if (p == x)
2727 return 1;
2728 } while ((p = p->fd) != b);
2731 else {
2732 bindex_t tidx;
2733 compute_tree_index(size, tidx);
2734 if (treemap_is_marked(m, tidx)) {
2735 tchunkptr t = *treebin_at(m, tidx);
2736 size_t sizebits = size << leftshift_for_tree_index(tidx);
2737 while (t != 0 && chunksize(t) != size) {
2738 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2739 sizebits <<= 1;
2741 if (t != 0) {
2742 tchunkptr u = t;
2743 do {
2744 if (u == (tchunkptr)x)
2745 return 1;
2746 } while ((u = u->fd) != t);
2750 return 0;
2753 /* Traverse each chunk and check it; return total */
2754 static size_t traverse_and_check(mstate m) {
2755 size_t sum = 0;
2756 if (is_initialized(m)) {
2757 msegmentptr s = &m->seg;
2758 sum += m->topsize + TOP_FOOT_SIZE;
2759 while (s != 0) {
2760 mchunkptr q = align_as_chunk(s->base);
2761 mchunkptr lastq = 0;
2762 assert(pinuse(q));
2763 while (segment_holds(s, q) &&
2764 q != m->top && q->head != FENCEPOST_HEAD) {
2765 sum += chunksize(q);
2766 if (cinuse(q)) {
2767 assert(!bin_find(m, q));
2768 do_check_inuse_chunk(m, q);
2770 else {
2771 assert(q == m->dv || bin_find(m, q));
2772 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2773 do_check_free_chunk(m, q);
2775 lastq = q;
2776 q = next_chunk(q);
2778 s = s->next;
2781 return sum;
2784 /* Check all properties of malloc_state. */
2785 static void do_check_malloc_state(mstate m) {
2786 bindex_t i;
2787 size_t total;
2788 /* check bins */
2789 for (i = 0; i < NSMALLBINS; ++i)
2790 do_check_smallbin(m, i);
2791 for (i = 0; i < NTREEBINS; ++i)
2792 do_check_treebin(m, i);
2794 if (m->dvsize != 0) { /* check dv chunk */
2795 do_check_any_chunk(m, m->dv);
2796 assert(m->dvsize == chunksize(m->dv));
2797 assert(m->dvsize >= MIN_CHUNK_SIZE);
2798 assert(bin_find(m, m->dv) == 0);
2801 if (m->top != 0) { /* check top chunk */
2802 do_check_top_chunk(m, m->top);
2803 assert(m->topsize == chunksize(m->top));
2804 assert(m->topsize > 0);
2805 assert(bin_find(m, m->top) == 0);
2808 total = traverse_and_check(m);
2809 assert(total <= m->footprint);
2810 assert(m->footprint <= m->max_footprint);
2812 #endif /* DEBUG */
2814 /* ----------------------------- statistics ------------------------------ */
2816 #if !NO_MALLINFO
2817 static struct mallinfo internal_mallinfo(mstate m) {
2818 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2819 if (!PREACTION(m)) {
2820 check_malloc_state(m);
2821 if (is_initialized(m)) {
2822 size_t nfree = SIZE_T_ONE; /* top always free */
2823 size_t mfree = m->topsize + TOP_FOOT_SIZE;
2824 size_t sum = mfree;
2825 msegmentptr s = &m->seg;
2826 while (s != 0) {
2827 mchunkptr q = align_as_chunk(s->base);
2828 while (segment_holds(s, q) &&
2829 q != m->top && q->head != FENCEPOST_HEAD) {
2830 size_t sz = chunksize(q);
2831 sum += sz;
2832 if (!cinuse(q)) {
2833 mfree += sz;
2834 ++nfree;
2836 q = next_chunk(q);
2838 s = s->next;
2841 nm.arena = sum;
2842 nm.ordblks = nfree;
2843 nm.hblkhd = m->footprint - sum;
2844 nm.usmblks = m->max_footprint;
2845 nm.uordblks = m->footprint - mfree;
2846 nm.fordblks = mfree;
2847 nm.keepcost = m->topsize;
2850 POSTACTION(m);
2852 return nm;
2854 #endif /* !NO_MALLINFO */
2856 #if 0
2857 static void internal_malloc_stats(mstate m) {
2858 if (!PREACTION(m)) {
2859 size_t maxfp = 0;
2860 size_t fp = 0;
2861 size_t used = 0;
2862 check_malloc_state(m);
2863 if (is_initialized(m)) {
2864 msegmentptr s = &m->seg;
2865 maxfp = m->max_footprint;
2866 fp = m->footprint;
2867 used = fp - (m->topsize + TOP_FOOT_SIZE);
2869 while (s != 0) {
2870 mchunkptr q = align_as_chunk(s->base);
2871 while (segment_holds(s, q) &&
2872 q != m->top && q->head != FENCEPOST_HEAD) {
2873 if (!cinuse(q))
2874 used -= chunksize(q);
2875 q = next_chunk(q);
2877 s = s->next;
2881 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2882 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
2883 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
2885 POSTACTION(m);
2888 #endif
2890 /* ----------------------- Operations on smallbins ----------------------- */
2893 Various forms of linking and unlinking are defined as macros, even
2894 the ones for trees, which are very long but have very short typical
2895 paths. This is ugly but reduces reliance on the inlining support of
2896 compilers.
2899 /* Link a free chunk into a smallbin */
2900 #define insert_small_chunk(M, P, S) {\
2901 bindex_t I = small_index(S);\
2902 mchunkptr B = smallbin_at(M, I);\
2903 mchunkptr F = B;\
2904 assert(S >= MIN_CHUNK_SIZE);\
2905 if (!smallmap_is_marked(M, I))\
2906 mark_smallmap(M, I);\
2907 else if (RTCHECK(ok_address(M, B->fd)))\
2908 F = B->fd;\
2909 else {\
2910 CORRUPTION_ERROR_ACTION(M);\
2912 B->fd = P;\
2913 F->bk = P;\
2914 P->fd = F;\
2915 P->bk = B;\
2918 /* Unlink a chunk from a smallbin */
2919 #define unlink_small_chunk(M, P, S) {\
2920 mchunkptr F = P->fd;\
2921 mchunkptr B = P->bk;\
2922 bindex_t I = small_index(S);\
2923 assert(P != B);\
2924 assert(P != F);\
2925 assert(chunksize(P) == small_index2size(I));\
2926 if (F == B)\
2927 clear_smallmap(M, I);\
2928 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2929 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2930 F->bk = B;\
2931 B->fd = F;\
2933 else {\
2934 CORRUPTION_ERROR_ACTION(M);\
2938 /* Unlink the first chunk from a smallbin */
2939 #define unlink_first_small_chunk(M, B, P, I) {\
2940 mchunkptr F = P->fd;\
2941 assert(P != B);\
2942 assert(P != F);\
2943 assert(chunksize(P) == small_index2size(I));\
2944 if (B == F)\
2945 clear_smallmap(M, I);\
2946 else if (RTCHECK(ok_address(M, F))) {\
2947 B->fd = F;\
2948 F->bk = B;\
2950 else {\
2951 CORRUPTION_ERROR_ACTION(M);\
2955 /* Replace dv node, binning the old one */
2956 /* Used only when dvsize known to be small */
2957 #define replace_dv(M, P, S) {\
2958 size_t DVS = M->dvsize;\
2959 if (DVS != 0) {\
2960 mchunkptr DV = M->dv;\
2961 assert(is_small(DVS));\
2962 insert_small_chunk(M, DV, DVS);\
2964 M->dvsize = S;\
2965 M->dv = P;\
2968 /* ------------------------- Operations on trees ------------------------- */
2970 /* Insert chunk into tree */
2971 #define insert_large_chunk(M, X, S) {\
2972 tbinptr* H;\
2973 bindex_t I;\
2974 compute_tree_index(S, I);\
2975 H = treebin_at(M, I);\
2976 X->index = I;\
2977 X->child[0] = X->child[1] = 0;\
2978 if (!treemap_is_marked(M, I)) {\
2979 mark_treemap(M, I);\
2980 *H = X;\
2981 X->parent = (tchunkptr)H;\
2982 X->fd = X->bk = X;\
2984 else {\
2985 tchunkptr T = *H;\
2986 size_t K = S << leftshift_for_tree_index(I);\
2987 for (;;) {\
2988 if (chunksize(T) != S) {\
2989 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2990 K <<= 1;\
2991 if (*C != 0)\
2992 T = *C;\
2993 else if (RTCHECK(ok_address(M, C))) {\
2994 *C = X;\
2995 X->parent = T;\
2996 X->fd = X->bk = X;\
2997 break;\
2999 else {\
3000 CORRUPTION_ERROR_ACTION(M);\
3001 break;\
3004 else {\
3005 tchunkptr F = T->fd;\
3006 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3007 T->fd = F->bk = X;\
3008 X->fd = F;\
3009 X->bk = T;\
3010 X->parent = 0;\
3011 break;\
3013 else {\
3014 CORRUPTION_ERROR_ACTION(M);\
3015 break;\
3023 Unlink steps:
3025 1. If x is a chained node, unlink it from its same-sized fd/bk links
3026 and choose its bk node as its replacement.
3027 2. If x was the last node of its size, but not a leaf node, it must
3028 be replaced with a leaf node (not merely one with an open left or
3029 right), to make sure that lefts and rights of descendants
3030 correspond properly to bit masks. We use the rightmost descendant
3031 of x. We could use any other leaf, but this is easy to locate and
3032 tends to counteract removal of leftmosts elsewhere, and so keeps
3033 paths shorter than minimally guaranteed. This doesn't loop much
3034 because on average a node in a tree is near the bottom.
3035 3. If x is the base of a chain (i.e., has parent links) relink
3036 x's parent and children to x's replacement (or null if none).
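/*
  Step 2, sketched without the RTCHECK validation the macro performs
  (R is the replacement leaf; the macro additionally clears the parent's
  pointer to R, via RP, to detach it before relinking):

    R = (X->child[1] != 0)? X->child[1] : X->child[0];
    if (R != 0) {
      while (R->child[1] != 0 || R->child[0] != 0)
        R = (R->child[1] != 0)? R->child[1] : R->child[0];
    }
*/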
3039 #define unlink_large_chunk(M, X) {\
3040 tchunkptr XP = X->parent;\
3041 tchunkptr R;\
3042 if (X->bk != X) {\
3043 tchunkptr F = X->fd;\
3044 R = X->bk;\
3045 if (RTCHECK(ok_address(M, F))) {\
3046 F->bk = R;\
3047 R->fd = F;\
3049 else {\
3050 CORRUPTION_ERROR_ACTION(M);\
3053 else {\
3054 tchunkptr* RP;\
3055 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3056 ((R = *(RP = &(X->child[0]))) != 0)) {\
3057 tchunkptr* CP;\
3058 while ((*(CP = &(R->child[1])) != 0) ||\
3059 (*(CP = &(R->child[0])) != 0)) {\
3060 R = *(RP = CP);\
3062 if (RTCHECK(ok_address(M, RP)))\
3063 *RP = 0;\
3064 else {\
3065 CORRUPTION_ERROR_ACTION(M);\
3069 if (XP != 0) {\
3070 tbinptr* H = treebin_at(M, X->index);\
3071 if (X == *H) {\
3072 if ((*H = R) == 0) \
3073 clear_treemap(M, X->index);\
3075 else if (RTCHECK(ok_address(M, XP))) {\
3076 if (XP->child[0] == X) \
3077 XP->child[0] = R;\
3078 else \
3079 XP->child[1] = R;\
3081 else\
3082 CORRUPTION_ERROR_ACTION(M);\
3083 if (R != 0) {\
3084 if (RTCHECK(ok_address(M, R))) {\
3085 tchunkptr C0, C1;\
3086 R->parent = XP;\
3087 if ((C0 = X->child[0]) != 0) {\
3088 if (RTCHECK(ok_address(M, C0))) {\
3089 R->child[0] = C0;\
3090 C0->parent = R;\
3092 else\
3093 CORRUPTION_ERROR_ACTION(M);\
3095 if ((C1 = X->child[1]) != 0) {\
3096 if (RTCHECK(ok_address(M, C1))) {\
3097 R->child[1] = C1;\
3098 C1->parent = R;\
3100 else\
3101 CORRUPTION_ERROR_ACTION(M);\
3104 else\
3105 CORRUPTION_ERROR_ACTION(M);\
3110 /* Relays to large vs small bin operations */
3112 #define insert_chunk(M, P, S)\
3113 if (is_small(S)) insert_small_chunk(M, P, S)\
3114 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3116 #define unlink_chunk(M, P, S)\
3117 if (is_small(S)) unlink_small_chunk(M, P, S)\
3118 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3121 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3123 #if ONLY_MSPACES
3124 #define internal_malloc(m, b) mspace_malloc(m, b)
3125 #define internal_free(m, mem) mspace_free(m,mem);
3126 #else /* ONLY_MSPACES */
3127 #if MSPACES
3128 #define internal_malloc(m, b)\
3129 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3130 #define internal_free(m, mem)\
3131 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3132 #else /* MSPACES */
3133 #define internal_malloc(m, b) dlmalloc(b)
3134 #define internal_free(m, mem) dlfree(mem)
3135 #endif /* MSPACES */
3136 #endif /* ONLY_MSPACES */
3138 /* ----------------------- Direct-mmapping chunks ----------------------- */
3141 Directly mmapped chunks are set up with an offset to the start of
3142 the mmapped region stored in the prev_foot field of the chunk. This
3143 allows reconstruction of the required argument to MUNMAP when freed,
3144 and also allows adjustment of the returned chunk to meet alignment
3145 requirements (especially in memalign). There is also enough space
3146 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3147 the PINUSE bit so frees can be checked.
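/*
  Concretely, the stored offset lets the free path recover the original
  mapping; a sketch of what dlfree/mspace_free do for such a chunk p:

    size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
    char*  base   = (char*)p - offset;
    size_t length = chunksize(p) + offset + MMAP_FOOT_PAD;
    CALL_MUNMAP(base, length);
*/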
3150 /* Malloc using mmap */
3151 static void* mmap_alloc(mstate m, size_t nb) {
3152 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3153 if (mmsize > nb) { /* Check for wrap around 0 */
3154 char* mm = (char*)(DIRECT_MMAP(mmsize));
3155 if (mm != CMFAIL) {
3156 size_t offset = align_offset(chunk2mem(mm));
3157 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3158 mchunkptr p = (mchunkptr)(mm + offset);
3159 p->prev_foot = offset | IS_MMAPPED_BIT;
3160 (p)->head = (psize|CINUSE_BIT);
3161 mark_inuse_foot(m, p, psize);
3162 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3163 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3165 if (mm < m->least_addr)
3166 m->least_addr = mm;
3167 if ((m->footprint += mmsize) > m->max_footprint)
3168 m->max_footprint = m->footprint;
3169 assert(is_aligned(chunk2mem(p)));
3170 check_mmapped_chunk(m, p);
3171 return chunk2mem(p);
3174 return 0;
3177 #if 0
3179 /* Realloc using mmap */
3180 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3181 size_t oldsize = chunksize(oldp);
3182 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3183 return 0;
3184 /* Keep old chunk if big enough but not too big */
3185 if (oldsize >= nb + SIZE_T_SIZE &&
3186 (oldsize - nb) <= (mparams.granularity << 1))
3187 return oldp;
3188 else {
3189 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3190 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3191 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3192 CHUNK_ALIGN_MASK);
3193 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3194 oldmmsize, newmmsize, 1);
3195 if (cp != CMFAIL) {
3196 mchunkptr newp = (mchunkptr)(cp + offset);
3197 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3198 newp->head = (psize|CINUSE_BIT);
3199 mark_inuse_foot(m, newp, psize);
3200 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3201 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3203 if (cp < m->least_addr)
3204 m->least_addr = cp;
3205 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3206 m->max_footprint = m->footprint;
3207 check_mmapped_chunk(m, newp);
3208 return newp;
3211 return 0;
3214 #endif /* 0 */
3216 /* -------------------------- mspace management -------------------------- */
3218 /* Initialize top chunk and its size */
3219 static void init_top(mstate m, mchunkptr p, size_t psize) {
3220 /* Ensure alignment */
3221 size_t offset = align_offset(chunk2mem(p));
3222 p = (mchunkptr)((char*)p + offset);
3223 psize -= offset;
3225 m->top = p;
3226 m->topsize = psize;
3227 p->head = psize | PINUSE_BIT;
3228 /* set size of fake trailing chunk holding overhead space only once */
3229 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3230 m->trim_check = mparams.trim_threshold; /* reset on each update */
3233 /* Initialize bins for a new mstate that is otherwise zeroed out */
3234 static void init_bins(mstate m) {
3235 /* Establish circular links for smallbins */
3236 bindex_t i;
3237 for (i = 0; i < NSMALLBINS; ++i) {
3238 sbinptr bin = smallbin_at(m,i);
3239 bin->fd = bin->bk = bin;
3243 #if PROCEED_ON_ERROR
3245 /* default corruption action */
3246 static void reset_on_error(mstate m) {
3247 int i;
3248 ++malloc_corruption_error_count;
3249 /* Reinitialize fields to forget about all memory */
3250 m->smallbins = m->treebins = 0;
3251 m->dvsize = m->topsize = 0;
3252 m->seg.base = 0;
3253 m->seg.size = 0;
3254 m->seg.next = 0;
3255 m->top = m->dv = 0;
3256 for (i = 0; i < NTREEBINS; ++i)
3257 *treebin_at(m, i) = 0;
3258 init_bins(m);
3260 #endif /* PROCEED_ON_ERROR */
3262 /* Allocate chunk and prepend remainder with chunk in successor base. */
3263 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3264 size_t nb) {
3265 mchunkptr p = align_as_chunk(newbase);
3266 mchunkptr oldfirst = align_as_chunk(oldbase);
3267 size_t psize = (char*)oldfirst - (char*)p;
3268 mchunkptr q = chunk_plus_offset(p, nb);
3269 size_t qsize = psize - nb;
3270 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3272 assert((char*)oldfirst > (char*)q);
3273 assert(pinuse(oldfirst));
3274 assert(qsize >= MIN_CHUNK_SIZE);
3276 /* consolidate remainder with first chunk of old base */
3277 if (oldfirst == m->top) {
3278 size_t tsize = m->topsize += qsize;
3279 m->top = q;
3280 q->head = tsize | PINUSE_BIT;
3281 check_top_chunk(m, q);
3283 else if (oldfirst == m->dv) {
3284 size_t dsize = m->dvsize += qsize;
3285 m->dv = q;
3286 set_size_and_pinuse_of_free_chunk(q, dsize);
3288 else {
3289 if (!cinuse(oldfirst)) {
3290 size_t nsize = chunksize(oldfirst);
3291 unlink_chunk(m, oldfirst, nsize);
3292 oldfirst = chunk_plus_offset(oldfirst, nsize);
3293 qsize += nsize;
3295 set_free_with_pinuse(q, qsize, oldfirst);
3296 insert_chunk(m, q, qsize);
3297 check_free_chunk(m, q);
3300 check_malloced_chunk(m, chunk2mem(p), nb);
3301 return chunk2mem(p);
3305 /* Add a segment to hold a new noncontiguous region */
3306 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3307 /* Determine locations and sizes of segment, fenceposts, old top */
3308 char* old_top = (char*)m->top;
3309 msegmentptr oldsp = segment_holding(m, old_top);
3310 char* old_end = oldsp->base + oldsp->size;
3311 size_t ssize = pad_request(sizeof(struct malloc_segment));
3312 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3313 size_t offset = align_offset(chunk2mem(rawsp));
3314 char* asp = rawsp + offset;
3315 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3316 mchunkptr sp = (mchunkptr)csp;
3317 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3318 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3319 mchunkptr p = tnext;
3320 int nfences = 0;
3322 /* reset top to new space */
3323 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3325 /* Set up segment record */
3326 assert(is_aligned(ss));
3327 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3328 *ss = m->seg; /* Push current record */
3329 m->seg.base = tbase;
3330 m->seg.size = tsize;
3331 m->seg.sflags = mmapped;
3332 m->seg.next = ss;
3334 /* Insert trailing fenceposts */
3335 for (;;) {
3336 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3337 p->head = FENCEPOST_HEAD;
3338 ++nfences;
3339 if ((char*)(&(nextp->head)) < old_end)
3340 p = nextp;
3341 else
3342 break;
3344 assert(nfences >= 2);
3346 /* Insert the rest of old top into a bin as an ordinary free chunk */
3347 if (csp != old_top) {
3348 mchunkptr q = (mchunkptr)old_top;
3349 size_t psize = csp - old_top;
3350 mchunkptr tn = chunk_plus_offset(q, psize);
3351 set_free_with_pinuse(q, psize, tn);
3352 insert_chunk(m, q, psize);
3355 check_top_chunk(m, m->top);
3358 /* -------------------------- System allocation -------------------------- */
3360 /* Get memory from system using MORECORE or MMAP */
3361 static void* sys_alloc(mstate m, size_t nb) {
3362 char* tbase = CMFAIL;
3363 size_t tsize = 0;
3364 flag_t mmap_flag = 0;
3366 init_mparams();
3368 /* Directly map large chunks */
3369 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3370 void* mem = mmap_alloc(m, nb);
3371 if (mem != 0)
3372 return mem;
3376 Try getting memory in any of three ways (in most-preferred to
3377 least-preferred order):
3378 1. A call to MORECORE that can normally contiguously extend memory.
3379 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3380 main space is mmapped or a previous contiguous call failed)
3381 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3382 Note that under the default settings, if MORECORE is unable to
3383 fulfill a request, and HAVE_MMAP is true, then mmap is
3384 used as a noncontiguous system allocator. This is a useful backup
3385 strategy for systems with holes in address spaces -- in this case
3386 sbrk cannot contiguously expand the heap, but mmap may be able to
3387 find space.
3388 3. A call to MORECORE that cannot usually contiguously extend memory.
3389 (disabled if not HAVE_MORECORE)
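/*
  Condensed, the fallback order implemented below (a sketch only; it
  ignores the partial-failure recovery and MORECORE locking details):

    if (MORECORE_CONTIGUOUS && !use_noncontiguous(m))
      try to extend contiguously via CALL_MORECORE;
    if (HAVE_MMAP && still no tbase)
      try CALL_MMAP(granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE));
    if (HAVE_MORECORE && still no tbase)
      try a noncontiguous CALL_MORECORE;
    if still no tbase: MALLOC_FAILURE_ACTION and return 0;
*/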
3392 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3393 char* br = CMFAIL;
3394 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3395 size_t asize = 0;
3396 ACQUIRE_MORECORE_LOCK();
3398 if (ss == 0) { /* First time through or recovery */
3399 char* base = (char*)CALL_MORECORE(0);
3400 if (base != CMFAIL) {
3401 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3402 /* Adjust to end on a page boundary */
3403 if (!is_page_aligned(base))
3404 asize += (page_align((size_t)base) - (size_t)base);
3405 /* Can't call MORECORE if size is negative when treated as signed */
3406 if (asize < HALF_MAX_SIZE_T &&
3407 (br = (char*)(CALL_MORECORE(asize))) == base) {
3408 tbase = base;
3409 tsize = asize;
3413 else {
3414 /* Subtract out existing available top space from MORECORE request. */
3415 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3416 /* Use mem here only if it did contiguously extend old space */
3417 if (asize < HALF_MAX_SIZE_T &&
3418 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3419 tbase = br;
3420 tsize = asize;
3424 if (tbase == CMFAIL) { /* Cope with partial failure */
3425 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3426 if (asize < HALF_MAX_SIZE_T &&
3427 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3428 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3429 if (esize < HALF_MAX_SIZE_T) {
3430 char* end = (char*)CALL_MORECORE(esize);
3431 if (end != CMFAIL)
3432 asize += esize;
3433 else { /* Can't use; try to release */
3434 CALL_MORECORE(-asize);
3435 br = CMFAIL;
3440 if (br != CMFAIL) { /* Use the space we did get */
3441 tbase = br;
3442 tsize = asize;
3444 else
3445 disable_contiguous(m); /* Don't try contiguous path in the future */
3448 RELEASE_MORECORE_LOCK();
3451 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3452 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3453 size_t rsize = granularity_align(req);
3454 if (rsize > nb) { /* Fail if wraps around zero */
3455 char* mp = (char*)(CALL_MMAP(rsize));
3456 if (mp != CMFAIL) {
3457 tbase = mp;
3458 tsize = rsize;
3459 mmap_flag = IS_MMAPPED_BIT;
3464 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3465 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3466 if (asize < HALF_MAX_SIZE_T) {
3467 char* br = CMFAIL;
3468 char* end = CMFAIL;
3469 ACQUIRE_MORECORE_LOCK();
3470 br = (char*)(CALL_MORECORE(asize));
3471 end = (char*)(CALL_MORECORE(0));
3472 RELEASE_MORECORE_LOCK();
3473 if (br != CMFAIL && end != CMFAIL && br < end) {
3474 size_t ssize = end - br;
3475 if (ssize > nb + TOP_FOOT_SIZE) {
3476 tbase = br;
3477 tsize = ssize;
3483 if (tbase != CMFAIL) {
3485 if ((m->footprint += tsize) > m->max_footprint)
3486 m->max_footprint = m->footprint;
3488 if (!is_initialized(m)) { /* first-time initialization */
3489 m->seg.base = m->least_addr = tbase;
3490 m->seg.size = tsize;
3491 m->seg.sflags = mmap_flag;
3492 m->magic = mparams.magic;
3493 init_bins(m);
3494 if (is_global(m))
3495 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3496 else {
3497 /* Offset top by embedded malloc_state */
3498 mchunkptr mn = next_chunk(mem2chunk(m));
3499 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3503 else {
3504 /* Try to merge with an existing segment */
3505 msegmentptr sp = &m->seg;
3506 while (sp != 0 && tbase != sp->base + sp->size)
3507 sp = sp->next;
3508 if (sp != 0 &&
3509 !is_extern_segment(sp) &&
3510 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3511 segment_holds(sp, m->top)) { /* append */
3512 sp->size += tsize;
3513 init_top(m, m->top, m->topsize + tsize);
3515 else {
3516 if (tbase < m->least_addr)
3517 m->least_addr = tbase;
3518 sp = &m->seg;
3519 while (sp != 0 && sp->base != tbase + tsize)
3520 sp = sp->next;
3521 if (sp != 0 &&
3522 !is_extern_segment(sp) &&
3523 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3524 char* oldbase = sp->base;
3525 sp->base = tbase;
3526 sp->size += tsize;
3527 return prepend_alloc(m, tbase, oldbase, nb);
3529 else
3530 add_segment(m, tbase, tsize, mmap_flag);
3534 if (nb < m->topsize) { /* Allocate from new or extended top space */
3535 size_t rsize = m->topsize -= nb;
3536 mchunkptr p = m->top;
3537 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3538 r->head = rsize | PINUSE_BIT;
3539 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3540 check_top_chunk(m, m->top);
3541 check_malloced_chunk(m, chunk2mem(p), nb);
3542 return chunk2mem(p);
3546 MALLOC_FAILURE_ACTION;
3547 return 0;
3550 /* ----------------------- system deallocation -------------------------- */
3552 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3553 static size_t release_unused_segments(mstate m) {
3554 size_t released = 0;
3555 msegmentptr pred = &m->seg;
3556 msegmentptr sp = pred->next;
3557 while (sp != 0) {
3558 char* base = sp->base;
3559 size_t size = sp->size;
3560 msegmentptr next = sp->next;
3561 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3562 mchunkptr p = align_as_chunk(base);
3563 size_t psize = chunksize(p);
3564 /* Can unmap if first chunk holds entire segment and not pinned */
3565 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3566 tchunkptr tp = (tchunkptr)p;
3567 assert(segment_holds(sp, (char*)sp));
3568 if (p == m->dv) {
3569 m->dv = 0;
3570 m->dvsize = 0;
3572 else {
3573 unlink_large_chunk(m, tp);
3575 if (CALL_MUNMAP(base, size) == 0) {
3576 released += size;
3577 m->footprint -= size;
3578 /* unlink obsoleted record */
3579 sp = pred;
3580 sp->next = next;
3582 else { /* back out if cannot unmap */
3583 insert_large_chunk(m, tp, psize);
3587 pred = sp;
3588 sp = next;
3590 return released;
3593 static int sys_trim(mstate m, size_t pad) {
3594 size_t released = 0;
3595 if (pad < MAX_REQUEST && is_initialized(m)) {
3596 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3598 if (m->topsize > pad) {
3599 /* Shrink top space in granularity-size units, keeping at least one */
3600 size_t unit = mparams.granularity;
3601 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3602 SIZE_T_ONE) * unit;
3603 msegmentptr sp = segment_holding(m, (char*)m->top);
3605 if (!is_extern_segment(sp)) {
3606 if (is_mmapped_segment(sp)) {
3607 if (HAVE_MMAP &&
3608 sp->size >= extra &&
3609 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3610 size_t newsize = sp->size - extra;
3611 /* Prefer mremap, fall back to munmap */
3612 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3613 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3614 released = extra;
3618 else if (HAVE_MORECORE) {
3619 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3620 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3621 ACQUIRE_MORECORE_LOCK();
3623 /* Make sure end of memory is where we last set it. */
3624 char* old_br = (char*)(CALL_MORECORE(0));
3625 if (old_br == sp->base + sp->size) {
3626 char* rel_br = (char*)(CALL_MORECORE(-extra));
3627 char* new_br = (char*)(CALL_MORECORE(0));
3628 if (rel_br != CMFAIL && new_br < old_br)
3629 released = old_br - new_br;
3632 RELEASE_MORECORE_LOCK();
3636 if (released != 0) {
3637 sp->size -= released;
3638 m->footprint -= released;
3639 init_top(m, m->top, m->topsize - released);
3640 check_top_chunk(m, m->top);
3644 /* Unmap any unused mmapped segments */
3645 if (HAVE_MMAP)
3646 released += release_unused_segments(m);
3648 /* On failure, disable autotrim to avoid repeated failed future calls */
3649 if (released == 0)
3650 m->trim_check = MAX_SIZE_T;
3653 return (released != 0)? 1 : 0;
3656 /* ---------------------------- malloc support --------------------------- */
3658 /* allocate a large request from the best fitting chunk in a treebin */
3659 static void* tmalloc_large(mstate m, size_t nb) {
3660 tchunkptr v = 0;
3661 size_t rsize = -nb; /* Unsigned negation */
3662 tchunkptr t;
3663 bindex_t idx;
3664 compute_tree_index(nb, idx);
3666 if ((t = *treebin_at(m, idx)) != 0) {
3667 /* Traverse tree for this bin looking for node with size == nb */
3668 size_t sizebits = nb << leftshift_for_tree_index(idx);
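/* sizebits is nb pre-shifted (by leftshift_for_tree_index) so that, at
each step down the trie, its current top bit selects which child to
follow (see the child[] index computed from sizebits below); it is
shifted left once per level. */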
3669 tchunkptr rst = 0; /* The deepest untaken right subtree */
3670 for (;;) {
3671 tchunkptr rt;
3672 size_t trem = chunksize(t) - nb;
3673 if (trem < rsize) {
3674 v = t;
3675 if ((rsize = trem) == 0)
3676 break;
3678 rt = t->child[1];
3679 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3680 if (rt != 0 && rt != t)
3681 rst = rt;
3682 if (t == 0) {
3683 t = rst; /* set t to least subtree holding sizes > nb */
3684 break;
3686 sizebits <<= 1;
3690 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3691 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3692 if (leftbits != 0) {
3693 bindex_t i;
3694 binmap_t leastbit = least_bit(leftbits);
3695 compute_bit2idx(leastbit, i);
3696 t = *treebin_at(m, i);
3700 while (t != 0) { /* find smallest of tree or subtree */
3701 size_t trem = chunksize(t) - nb;
3702 if (trem < rsize) {
3703 rsize = trem;
3704 v = t;
3706 t = leftmost_child(t);
3709 /* If dv is a better fit, return 0 so malloc will use it */
3710 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3711 if (RTCHECK(ok_address(m, v))) { /* split */
3712 mchunkptr r = chunk_plus_offset(v, nb);
3713 assert(chunksize(v) == rsize + nb);
3714 if (RTCHECK(ok_next(v, r))) {
3715 unlink_large_chunk(m, v);
3716 if (rsize < MIN_CHUNK_SIZE)
3717 set_inuse_and_pinuse(m, v, (rsize + nb));
3718 else {
3719 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3720 set_size_and_pinuse_of_free_chunk(r, rsize);
3721 insert_chunk(m, r, rsize);
3723 return chunk2mem(v);
3726 CORRUPTION_ERROR_ACTION(m);
3728 return 0;
3731 /* allocate a small request from the best fitting chunk in a treebin */
3732 static void* tmalloc_small(mstate m, size_t nb) {
3733 tchunkptr t, v;
3734 size_t rsize;
3735 bindex_t i;
3736 binmap_t leastbit = least_bit(m->treemap);
3737 compute_bit2idx(leastbit, i);
3739 v = t = *treebin_at(m, i);
3740 rsize = chunksize(t) - nb;
3742 while ((t = leftmost_child(t)) != 0) {
3743 size_t trem = chunksize(t) - nb;
3744 if (trem < rsize) {
3745 rsize = trem;
3746 v = t;
3750 if (RTCHECK(ok_address(m, v))) {
3751 mchunkptr r = chunk_plus_offset(v, nb);
3752 assert(chunksize(v) == rsize + nb);
3753 if (RTCHECK(ok_next(v, r))) {
3754 unlink_large_chunk(m, v);
3755 if (rsize < MIN_CHUNK_SIZE)
3756 set_inuse_and_pinuse(m, v, (rsize + nb));
3757 else {
3758 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3759 set_size_and_pinuse_of_free_chunk(r, rsize);
3760 replace_dv(m, r, rsize);
3762 return chunk2mem(v);
3766 CORRUPTION_ERROR_ACTION(m);
3767 return 0;
3770 /* --------------------------- realloc support --------------------------- */
3772 #if 0
3774 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3775 if (bytes >= MAX_REQUEST) {
3776 MALLOC_FAILURE_ACTION;
3777 return 0;
3779 if (!PREACTION(m)) {
3780 mchunkptr oldp = mem2chunk(oldmem);
3781 size_t oldsize = chunksize(oldp);
3782 mchunkptr next = chunk_plus_offset(oldp, oldsize);
3783 mchunkptr newp = 0;
3784 void* extra = 0;
3786 /* Try to either shrink or extend into top. Else malloc-copy-free */
3788 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3789 ok_next(oldp, next) && ok_pinuse(next))) {
3790 size_t nb = request2size(bytes);
3791 if (is_mmapped(oldp))
3792 newp = mmap_resize(m, oldp, nb);
3793 else if (oldsize >= nb) { /* already big enough */
3794 size_t rsize = oldsize - nb;
3795 newp = oldp;
3796 if (rsize >= MIN_CHUNK_SIZE) {
3797 mchunkptr remainder = chunk_plus_offset(newp, nb);
3798 set_inuse(m, newp, nb);
3799 set_inuse(m, remainder, rsize);
3800 extra = chunk2mem(remainder);
3803 else if (next == m->top && oldsize + m->topsize > nb) {
3804 /* Expand into top */
3805 size_t newsize = oldsize + m->topsize;
3806 size_t newtopsize = newsize - nb;
3807 mchunkptr newtop = chunk_plus_offset(oldp, nb);
3808 set_inuse(m, oldp, nb);
3809 newtop->head = newtopsize |PINUSE_BIT;
3810 m->top = newtop;
3811 m->topsize = newtopsize;
3812 newp = oldp;
3815 else {
3816 USAGE_ERROR_ACTION(m, oldmem);
3817 POSTACTION(m);
3818 return 0;
3821 POSTACTION(m);
3823 if (newp != 0) {
3824 if (extra != 0) {
3825 internal_free(m, extra);
3827 check_inuse_chunk(m, newp);
3828 return chunk2mem(newp);
3830 else {
3831 void* newmem = internal_malloc(m, bytes);
3832 if (newmem != 0) {
3833 size_t oc = oldsize - overhead_for(oldp);
3834 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3835 internal_free(m, oldmem);
3837 return newmem;
3840 return 0;
3843 #endif
3845 /* --------------------------- memalign support -------------------------- */
3847 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3848 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3849 return internal_malloc(m, bytes);
3850 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3851 alignment = MIN_CHUNK_SIZE;
3852 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3853 size_t a = MALLOC_ALIGNMENT << 1;
3854 while (a < alignment) a <<= 1;
3855 alignment = a;
3858 if (bytes >= MAX_REQUEST - alignment) {
3859 if (m != 0) { /* Test isn't needed but avoids compiler warning */
3860 MALLOC_FAILURE_ACTION;
3863 else {
3864 size_t nb = request2size(bytes);
3865 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3866 char* mem = (char*)internal_malloc(m, req);
3867 if (mem != 0) {
3868 void* leader = 0;
3869 void* trailer = 0;
3870 mchunkptr p = mem2chunk(mem);
3872 if (PREACTION(m)) return 0;
3873 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3875 Find an aligned spot inside chunk. Since we need to give
3876 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3877 the first calculation places us at a spot with less than
3878 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3879 We've allocated enough total room so that this is always
3880 possible.
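/*
  The rounding below relies on the usual power-of-two identity
  (alignment is guaranteed to be a power of two at this point):

    (addr + alignment - 1) & -alignment
      == (addr + alignment - 1) & ~(alignment - 1)
*/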
3882 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3883 alignment -
3884 SIZE_T_ONE)) &
3885 -alignment));
3886 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3887 br : br+alignment;
3888 mchunkptr newp = (mchunkptr)pos;
3889 size_t leadsize = pos - (char*)(p);
3890 size_t newsize = chunksize(p) - leadsize;
3892 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3893 newp->prev_foot = p->prev_foot + leadsize;
3894 newp->head = (newsize|CINUSE_BIT);
3896 else { /* Otherwise, give back leader, use the rest */
3897 set_inuse(m, newp, newsize);
3898 set_inuse(m, p, leadsize);
3899 leader = chunk2mem(p);
3901 p = newp;
3904 /* Give back spare room at the end */
3905 if (!is_mmapped(p)) {
3906 size_t size = chunksize(p);
3907 if (size > nb + MIN_CHUNK_SIZE) {
3908 size_t remainder_size = size - nb;
3909 mchunkptr remainder = chunk_plus_offset(p, nb);
3910 set_inuse(m, p, nb);
3911 set_inuse(m, remainder, remainder_size);
3912 trailer = chunk2mem(remainder);
3916 assert (chunksize(p) >= nb);
3917 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3918 check_inuse_chunk(m, p);
3919 POSTACTION(m);
3920 if (leader != 0) {
3921 internal_free(m, leader);
3923 if (trailer != 0) {
3924 internal_free(m, trailer);
3926 return chunk2mem(p);
3929 return 0;
3932 #if 0
3934 /* ------------------------ comalloc/coalloc support --------------------- */
3936 static void** ialloc(mstate m,
3937 size_t n_elements,
3938 size_t* sizes,
3939 int opts,
3940 void* chunks[]) {
3942 This provides common support for independent_X routines, handling
3943 all of the combinations that can result.
3945 The opts arg has:
3946 bit 0 set if all elements are same size (using sizes[0])
3947 bit 1 set if elements should be zeroed
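/*
  Illustrative use of the comalloc form (hypothetical element counts and
  sizes; note that the public wrappers dlindependent_calloc and
  dlindependent_comalloc are compiled out of this copy by #if 0):

    size_t sizes[3] = { 100 * sizeof(int), 50 * sizeof(double), 256 };
    void** chunks = dlindependent_comalloc(3, sizes, 0);
    int*    a = (int*)    chunks[0];
    double* b = (double*) chunks[1];
    char*   c = (char*)   chunks[2];
*/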
3950 size_t element_size; /* chunksize of each element, if all same */
3951 size_t contents_size; /* total size of elements */
3952 size_t array_size; /* request size of pointer array */
3953 void* mem; /* malloced aggregate space */
3954 mchunkptr p; /* corresponding chunk */
3955 size_t remainder_size; /* remaining bytes while splitting */
3956 void** marray; /* either "chunks" or malloced ptr array */
3957 mchunkptr array_chunk; /* chunk for malloced ptr array */
3958 flag_t was_enabled; /* to disable mmap */
3959 size_t size;
3960 size_t i;
3962 /* compute array length, if needed */
3963 if (chunks != 0) {
3964 if (n_elements == 0)
3965 return chunks; /* nothing to do */
3966 marray = chunks;
3967 array_size = 0;
3969 else {
3970 /* if empty req, must still return chunk representing empty array */
3971 if (n_elements == 0)
3972 return (void**)internal_malloc(m, 0);
3973 marray = 0;
3974 array_size = request2size(n_elements * (sizeof(void*)));
3977 /* compute total element size */
3978 if (opts & 0x1) { /* all-same-size */
3979 element_size = request2size(*sizes);
3980 contents_size = n_elements * element_size;
3982 else { /* add up all the sizes */
3983 element_size = 0;
3984 contents_size = 0;
3985 for (i = 0; i != n_elements; ++i)
3986 contents_size += request2size(sizes[i]);
3989 size = contents_size + array_size;
3992 Allocate the aggregate chunk. First disable direct-mmapping so
3993 malloc won't use it, since we would not be able to later
3994 free/realloc space internal to a segregated mmap region.
3996 was_enabled = use_mmap(m);
3997 disable_mmap(m);
3998 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3999 if (was_enabled)
4000 enable_mmap(m);
4001 if (mem == 0)
4002 return 0;
4004 if (PREACTION(m)) return 0;
4005 p = mem2chunk(mem);
4006 remainder_size = chunksize(p);
4008 assert(!is_mmapped(p));
4010 if (opts & 0x2) { /* optionally clear the elements */
4011 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4014 /* If not provided, allocate the pointer array as final part of chunk */
4015 if (marray == 0) {
4016 size_t array_chunk_size;
4017 array_chunk = chunk_plus_offset(p, contents_size);
4018 array_chunk_size = remainder_size - contents_size;
4019 marray = (void**) (chunk2mem(array_chunk));
4020 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4021 remainder_size = contents_size;
4024 /* split out elements */
4025 for (i = 0; ; ++i) {
4026 marray[i] = chunk2mem(p);
4027 if (i != n_elements-1) {
4028 if (element_size != 0)
4029 size = element_size;
4030 else
4031 size = request2size(sizes[i]);
4032 remainder_size -= size;
4033 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4034 p = chunk_plus_offset(p, size);
4036 else { /* the final element absorbs any overallocation slop */
4037 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4038 break;
4042 #if DEBUG
4043 if (marray != chunks) {
4044 /* final element must have exactly exhausted chunk */
4045 if (element_size != 0) {
4046 assert(remainder_size == element_size);
4048 else {
4049 assert(remainder_size == request2size(sizes[i]));
4051 check_inuse_chunk(m, mem2chunk(marray));
4053 for (i = 0; i != n_elements; ++i)
4054 check_inuse_chunk(m, mem2chunk(marray[i]));
4056 #endif /* DEBUG */
4058 POSTACTION(m);
4059 return marray;
4062 #endif /* 0 */
4064 /* -------------------------- public routines ---------------------------- */
4066 #if !ONLY_MSPACES
4068 void* dlmalloc(size_t bytes) {
4070 Basic algorithm:
4071 If a small request (< 256 bytes minus per-chunk overhead):
4072 1. If one exists, use a remainderless chunk in associated smallbin.
4073 (Remainderless means that there are too few excess bytes to
4074 represent as a chunk.)
4075 2. If it is big enough, use the dv chunk, which is normally the
4076 chunk adjacent to the one used for the most recent small request.
4077 3. If one exists, split the smallest available chunk in a bin,
4078 saving remainder in dv.
4079 4. If it is big enough, use the top chunk.
4080 5. If available, get memory from system and use it
4081 Otherwise, for a large request:
4082 1. Find the smallest available binned chunk that fits, and use it
4083 if it is better fitting than dv chunk, splitting if necessary.
4084 2. If better fitting than any binned chunk, use the dv chunk.
4085 3. If it is big enough, use the top chunk.
4086 4. If request size >= mmap threshold, try to directly mmap this chunk.
4087 5. If available, get memory from system and use it
4089 The ugly goto's here ensure that postaction occurs along all paths.
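/*
  The paths above are reached through the dl-prefixed entry points this
  copy compiles (USE_DL_PREFIX is defined at the top of the file); an
  illustrative caller:

    void* p = dlmalloc(100);          a small request
    void* q = dlmemalign(64, 256);    via internal_memalign
    dlfree(p);
    dlfree(q);
*/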
4092 if (!PREACTION(gm)) {
4093 void* mem;
4094 size_t nb;
4095 if (bytes <= MAX_SMALL_REQUEST) {
4096 bindex_t idx;
4097 binmap_t smallbits;
4098 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4099 idx = small_index(nb);
4100 smallbits = gm->smallmap >> idx;
4102 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4103 mchunkptr b, p;
4104 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4105 b = smallbin_at(gm, idx);
4106 p = b->fd;
4107 assert(chunksize(p) == small_index2size(idx));
4108 unlink_first_small_chunk(gm, b, p, idx);
4109 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4110 mem = chunk2mem(p);
4111 check_malloced_chunk(gm, mem, nb);
4112 goto postaction;
4115 else if (nb > gm->dvsize) {
4116 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4117 mchunkptr b, p, r;
4118 size_t rsize;
4119 bindex_t i;
4120 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4121 binmap_t leastbit = least_bit(leftbits);
4122 compute_bit2idx(leastbit, i);
4123 b = smallbin_at(gm, i);
4124 p = b->fd;
4125 assert(chunksize(p) == small_index2size(i));
4126 unlink_first_small_chunk(gm, b, p, i);
4127 rsize = small_index2size(i) - nb;
4128 /* Fit here cannot be remainderless if 4byte sizes */
4129 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4130 set_inuse_and_pinuse(gm, p, small_index2size(i));
4131 else {
4132 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4133 r = chunk_plus_offset(p, nb);
4134 set_size_and_pinuse_of_free_chunk(r, rsize);
4135 replace_dv(gm, r, rsize);
4137 mem = chunk2mem(p);
4138 check_malloced_chunk(gm, mem, nb);
4139 goto postaction;
4142 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4143 check_malloced_chunk(gm, mem, nb);
4144 goto postaction;
4148 else if (bytes >= MAX_REQUEST)
4149 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4150 else {
4151 nb = pad_request(bytes);
4152 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4153 check_malloced_chunk(gm, mem, nb);
4154 goto postaction;
4158 if (nb <= gm->dvsize) {
4159 size_t rsize = gm->dvsize - nb;
4160 mchunkptr p = gm->dv;
4161 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4162 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4163 gm->dvsize = rsize;
4164 set_size_and_pinuse_of_free_chunk(r, rsize);
4165 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4167 else { /* exhaust dv */
4168 size_t dvs = gm->dvsize;
4169 gm->dvsize = 0;
4170 gm->dv = 0;
4171 set_inuse_and_pinuse(gm, p, dvs);
4173 mem = chunk2mem(p);
4174 check_malloced_chunk(gm, mem, nb);
4175 goto postaction;
4178 else if (nb < gm->topsize) { /* Split top */
4179 size_t rsize = gm->topsize -= nb;
4180 mchunkptr p = gm->top;
4181 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4182 r->head = rsize | PINUSE_BIT;
4183 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4184 mem = chunk2mem(p);
4185 check_top_chunk(gm, gm->top);
4186 check_malloced_chunk(gm, mem, nb);
4187 goto postaction;
4190 mem = sys_alloc(gm, nb);
4192 postaction:
4193 POSTACTION(gm);
4194 return mem;
4197 return 0;
4200 void dlfree(void* mem) {
4202 Consolidate freed chunks with preceding or succeeding bordering
4203 free chunks, if they exist, and then place in a bin. Intermixed
4204 with special cases for top, dv, mmapped chunks, and usage errors.
4207 if (mem != 0) {
4208 mchunkptr p = mem2chunk(mem);
4209 #if FOOTERS
4210 mstate fm = get_mstate_for(p);
4211 if (!ok_magic(fm)) {
4212 USAGE_ERROR_ACTION(fm, p);
4213 return;
4215 #else /* FOOTERS */
4216 #define fm gm
4217 #endif /* FOOTERS */
4218 if (!PREACTION(fm)) {
4219 check_inuse_chunk(fm, p);
4220 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4221 size_t psize = chunksize(p);
4222 mchunkptr next = chunk_plus_offset(p, psize);
4223 if (!pinuse(p)) {
4224 size_t prevsize = p->prev_foot;
4225 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4226 prevsize &= ~IS_MMAPPED_BIT;
4227 psize += prevsize + MMAP_FOOT_PAD;
4228 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4229 fm->footprint -= psize;
4230 goto postaction;
4232 else {
4233 mchunkptr prev = chunk_minus_offset(p, prevsize);
4234 psize += prevsize;
4235 p = prev;
4236 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4237 if (p != fm->dv) {
4238 unlink_chunk(fm, p, prevsize);
4240 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4241 fm->dvsize = psize;
4242 set_free_with_pinuse(p, psize, next);
4243 goto postaction;
4246 else
4247 goto erroraction;
4251 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4252 if (!cinuse(next)) { /* consolidate forward */
4253 if (next == fm->top) {
4254 size_t tsize = fm->topsize += psize;
4255 fm->top = p;
4256 p->head = tsize | PINUSE_BIT;
4257 if (p == fm->dv) {
4258 fm->dv = 0;
4259 fm->dvsize = 0;
4261 if (should_trim(fm, tsize))
4262 sys_trim(fm, 0);
4263 goto postaction;
4265 else if (next == fm->dv) {
4266 size_t dsize = fm->dvsize += psize;
4267 fm->dv = p;
4268 set_size_and_pinuse_of_free_chunk(p, dsize);
4269 goto postaction;
4271 else {
4272 size_t nsize = chunksize(next);
4273 psize += nsize;
4274 unlink_chunk(fm, next, nsize);
4275 set_size_and_pinuse_of_free_chunk(p, psize);
4276 if (p == fm->dv) {
4277 fm->dvsize = psize;
4278 goto postaction;
4282 else
4283 set_free_with_pinuse(p, psize, next);
4284 insert_chunk(fm, p, psize);
4285 check_free_chunk(fm, p);
4286 goto postaction;
4289 erroraction:
4290 USAGE_ERROR_ACTION(fm, p);
4291 postaction:
4292 POSTACTION(fm);
4295 #if !FOOTERS
4296 #undef fm
4297 #endif /* FOOTERS */
4300 #if 0
4302 void* dlcalloc(size_t n_elements, size_t elem_size) {
4303 void* mem;
4304 size_t req = 0;
4305 if (n_elements != 0) {
4306 req = n_elements * elem_size;
4307 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4308 (req / n_elements != elem_size))
4309 req = MAX_SIZE_T; /* force downstream failure on overflow */
4311 mem = dlmalloc(req);
4312 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4313 memset(mem, 0, req);
4314 return mem;
4317 void* dlrealloc(void* oldmem, size_t bytes) {
4318 if (oldmem == 0)
4319 return dlmalloc(bytes);
4320 #ifdef REALLOC_ZERO_BYTES_FREES
4321 if (bytes == 0) {
4322 dlfree(oldmem);
4323 return 0;
4325 #endif /* REALLOC_ZERO_BYTES_FREES */
4326 else {
4327 #if ! FOOTERS
4328 mstate m = gm;
4329 #else /* FOOTERS */
4330 mstate m = get_mstate_for(mem2chunk(oldmem));
4331 if (!ok_magic(m)) {
4332 USAGE_ERROR_ACTION(m, oldmem);
4333 return 0;
4335 #endif /* FOOTERS */
4336 return internal_realloc(m, oldmem, bytes);
4340 #endif
4342 void* dlmemalign(size_t alignment, size_t bytes) {
4343 return internal_memalign(gm, alignment, bytes);
4346 #if 0
4348 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4349 void* chunks[]) {
4350 size_t sz = elem_size; /* serves as 1-element array */
4351 return ialloc(gm, n_elements, &sz, 3, chunks);
4354 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4355 void* chunks[]) {
4356 return ialloc(gm, n_elements, sizes, 0, chunks);
4359 void* dlvalloc(size_t bytes) {
4360 size_t pagesz;
4361 init_mparams();
4362 pagesz = mparams.page_size;
4363 return dlmemalign(pagesz, bytes);
4366 void* dlpvalloc(size_t bytes) {
4367 size_t pagesz;
4368 init_mparams();
4369 pagesz = mparams.page_size;
4370 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4373 int dlmalloc_trim(size_t pad) {
4374 int result = 0;
4375 if (!PREACTION(gm)) {
4376 result = sys_trim(gm, pad);
4377 POSTACTION(gm);
4379 return result;
4382 size_t dlmalloc_footprint(void) {
4383 return gm->footprint;
4386 size_t dlmalloc_max_footprint(void) {
4387 return gm->max_footprint;
4390 #if !NO_MALLINFO
4391 struct mallinfo dlmallinfo(void) {
4392 return internal_mallinfo(gm);
4394 #endif /* NO_MALLINFO */
4396 void dlmalloc_stats() {
4397 internal_malloc_stats(gm);
4400 size_t dlmalloc_usable_size(void* mem) {
4401 if (mem != 0) {
4402 mchunkptr p = mem2chunk(mem);
4403 if (cinuse(p))
4404 return chunksize(p) - overhead_for(p);
4406 return 0;
4409 int dlmallopt(int param_number, int value) {
4410 return change_mparam(param_number, value);
4413 #endif /* 0 */
4415 #endif /* !ONLY_MSPACES */
4417 /* ----------------------------- user mspaces ---------------------------- */
4419 #if MSPACES
4421 static mstate init_user_mstate(char* tbase, size_t tsize) {
4422 size_t msize = pad_request(sizeof(struct malloc_state));
4423 mchunkptr mn;
4424 mchunkptr msp = align_as_chunk(tbase);
4425 mstate m = (mstate)(chunk2mem(msp));
4426 memset(m, 0, msize);
4427 INITIAL_LOCK(&m->mutex);
4428 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4429 m->seg.base = m->least_addr = tbase;
4430 m->seg.size = m->footprint = m->max_footprint = tsize;
4431 m->magic = mparams.magic;
4432 m->mflags = mparams.default_mflags;
4433 disable_contiguous(m);
4434 init_bins(m);
4435 mn = next_chunk(mem2chunk(m));
4436 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4437 check_top_chunk(m, m->top);
4438 return m;
4441 mspace create_mspace(size_t capacity, int locked) {
4442 mstate m = 0;
4443 size_t msize = pad_request(sizeof(struct malloc_state));
4444 init_mparams(); /* Ensure pagesize etc initialized */
4446 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4447 size_t rs = ((capacity == 0)? mparams.granularity :
4448 (capacity + TOP_FOOT_SIZE + msize));
4449 size_t tsize = granularity_align(rs);
4450 char* tbase = (char*)(CALL_MMAP(tsize));
4451 if (tbase != CMFAIL) {
4452 m = init_user_mstate(tbase, tsize);
4453 m->seg.sflags = IS_MMAPPED_BIT;
4454 set_lock(m, locked);
4457 return (mspace)m;
4460 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4461 mstate m = 0;
4462 size_t msize = pad_request(sizeof(struct malloc_state));
4463 init_mparams(); /* Ensure pagesize etc initialized */
4465 if (capacity > msize + TOP_FOOT_SIZE &&
4466 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4467 m = init_user_mstate((char*)base, capacity);
4468 m->seg.sflags = EXTERN_BIT;
4469 set_lock(m, locked);
4471 return (mspace)m;
4474 size_t destroy_mspace(mspace msp) {
4475 size_t freed = 0;
4476 mstate ms = (mstate)msp;
4477 if (ok_magic(ms)) {
4478 msegmentptr sp = &ms->seg;
4479 while (sp != 0) {
4480 char* base = sp->base;
4481 size_t size = sp->size;
4482 flag_t flag = sp->sflags;
4483 sp = sp->next;
4484 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4485 CALL_MUNMAP(base, size) == 0)
4486 freed += size;
4489 else {
4490 USAGE_ERROR_ACTION(ms,ms);
4492 return freed;
4496 mspace versions of routines are near-clones of the global
4497 versions. This is not so nice but better than the alternatives.
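/*
  Illustrative mspace usage (a sketch; only available when this file is
  built with MSPACES enabled):

    mspace msp = create_mspace(0, 1);        default capacity, locked
    void*  p   = mspace_malloc(msp, 128);
    mspace_free(msp, p);
    size_t freed = destroy_mspace(msp);      releases the space's segments
*/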
4500 void* mspace_malloc(mspace msp, size_t bytes) {
4501 mstate ms = (mstate)msp;
4502 if (!ok_magic(ms)) {
4503 USAGE_ERROR_ACTION(ms,ms);
4504 return 0;
4506 if (!PREACTION(ms)) {
4507 void* mem;
4508 size_t nb;
4509 if (bytes <= MAX_SMALL_REQUEST) {
4510 bindex_t idx;
4511 binmap_t smallbits;
4512 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4513 idx = small_index(nb);
4514 smallbits = ms->smallmap >> idx;
4516 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4517 mchunkptr b, p;
4518 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4519 b = smallbin_at(ms, idx);
4520 p = b->fd;
4521 assert(chunksize(p) == small_index2size(idx));
4522 unlink_first_small_chunk(ms, b, p, idx);
4523 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4524 mem = chunk2mem(p);
4525 check_malloced_chunk(ms, mem, nb);
4526 goto postaction;
4529 else if (nb > ms->dvsize) {
4530 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4531 mchunkptr b, p, r;
4532 size_t rsize;
4533 bindex_t i;
4534 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4535 binmap_t leastbit = least_bit(leftbits);
4536 compute_bit2idx(leastbit, i);
4537 b = smallbin_at(ms, i);
4538 p = b->fd;
4539 assert(chunksize(p) == small_index2size(i));
4540 unlink_first_small_chunk(ms, b, p, i);
4541 rsize = small_index2size(i) - nb;
4542 /* Fit here cannot be remainderless if 4byte sizes */
4543 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4544 set_inuse_and_pinuse(ms, p, small_index2size(i));
4545 else {
4546 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4547 r = chunk_plus_offset(p, nb);
4548 set_size_and_pinuse_of_free_chunk(r, rsize);
4549 replace_dv(ms, r, rsize);
4551 mem = chunk2mem(p);
4552 check_malloced_chunk(ms, mem, nb);
4553 goto postaction;
4556 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4557 check_malloced_chunk(ms, mem, nb);
4558 goto postaction;
4562 else if (bytes >= MAX_REQUEST)
4563 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4564 else {
4565 nb = pad_request(bytes);
4566 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4567 check_malloced_chunk(ms, mem, nb);
4568 goto postaction;
4572 if (nb <= ms->dvsize) {
4573 size_t rsize = ms->dvsize - nb;
4574 mchunkptr p = ms->dv;
4575 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4576 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4577 ms->dvsize = rsize;
4578 set_size_and_pinuse_of_free_chunk(r, rsize);
4579 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4581 else { /* exhaust dv */
4582 size_t dvs = ms->dvsize;
4583 ms->dvsize = 0;
4584 ms->dv = 0;
4585 set_inuse_and_pinuse(ms, p, dvs);
4587 mem = chunk2mem(p);
4588 check_malloced_chunk(ms, mem, nb);
4589 goto postaction;
4592 else if (nb < ms->topsize) { /* Split top */
4593 size_t rsize = ms->topsize -= nb;
4594 mchunkptr p = ms->top;
4595 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4596 r->head = rsize | PINUSE_BIT;
4597 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4598 mem = chunk2mem(p);
4599 check_top_chunk(ms, ms->top);
4600 check_malloced_chunk(ms, mem, nb);
4601 goto postaction;
4604 mem = sys_alloc(ms, nb);
4606 postaction:
4607 POSTACTION(ms);
4608 return mem;
4611 return 0;
4614 void mspace_free(mspace msp, void* mem) {
4615 if (mem != 0) {
4616 mchunkptr p = mem2chunk(mem);
4617 #if FOOTERS
4618 mstate fm = get_mstate_for(p);
4619 #else /* FOOTERS */
4620 mstate fm = (mstate)msp;
4621 #endif /* FOOTERS */
4622 if (!ok_magic(fm)) {
4623 USAGE_ERROR_ACTION(fm, p);
4624 return;
4626 if (!PREACTION(fm)) {
4627 check_inuse_chunk(fm, p);
4628 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4629 size_t psize = chunksize(p);
4630 mchunkptr next = chunk_plus_offset(p, psize);
4631 if (!pinuse(p)) {
4632 size_t prevsize = p->prev_foot;
4633 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4634 prevsize &= ~IS_MMAPPED_BIT;
4635 psize += prevsize + MMAP_FOOT_PAD;
4636 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4637 fm->footprint -= psize;
4638 goto postaction;
4640 else {
4641 mchunkptr prev = chunk_minus_offset(p, prevsize);
4642 psize += prevsize;
4643 p = prev;
4644 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4645 if (p != fm->dv) {
4646 unlink_chunk(fm, p, prevsize);
4648 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4649 fm->dvsize = psize;
4650 set_free_with_pinuse(p, psize, next);
4651 goto postaction;
4654 else
4655 goto erroraction;
4659 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4660 if (!cinuse(next)) { /* consolidate forward */
4661 if (next == fm->top) {
4662 size_t tsize = fm->topsize += psize;
4663 fm->top = p;
4664 p->head = tsize | PINUSE_BIT;
4665 if (p == fm->dv) {
4666 fm->dv = 0;
4667 fm->dvsize = 0;
4669 if (should_trim(fm, tsize))
4670 sys_trim(fm, 0);
4671 goto postaction;
4673 else if (next == fm->dv) {
4674 size_t dsize = fm->dvsize += psize;
4675 fm->dv = p;
4676 set_size_and_pinuse_of_free_chunk(p, dsize);
4677 goto postaction;
4679 else {
4680 size_t nsize = chunksize(next);
4681 psize += nsize;
4682 unlink_chunk(fm, next, nsize);
4683 set_size_and_pinuse_of_free_chunk(p, psize);
4684 if (p == fm->dv) {
4685 fm->dvsize = psize;
4686 goto postaction;
4690 else
4691 set_free_with_pinuse(p, psize, next);
4692 insert_chunk(fm, p, psize);
4693 check_free_chunk(fm, p);
4694 goto postaction;
4697 erroraction:
4698 USAGE_ERROR_ACTION(fm, p);
4699 postaction:
4700 POSTACTION(fm);
4705 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4706 void* mem;
4707 size_t req = 0;
4708 mstate ms = (mstate)msp;
4709 if (!ok_magic(ms)) {
4710 USAGE_ERROR_ACTION(ms,ms);
4711 return 0;
4713 if (n_elements != 0) {
4714 req = n_elements * elem_size;
4715 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4716 (req / n_elements != elem_size))
4717 req = MAX_SIZE_T; /* force downstream failure on overflow */
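/* The & ~(size_t)0xffff test above skips the division in the common case
where both operands fit in 16 bits, since their product then cannot wrap
size_t; otherwise the division detects multiplication overflow. */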
4719 mem = internal_malloc(ms, req);
4720 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4721 memset(mem, 0, req);
4722 return mem;
4725 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4726 if (oldmem == 0)
4727 return mspace_malloc(msp, bytes);
4728 #ifdef REALLOC_ZERO_BYTES_FREES
4729 if (bytes == 0) {
4730 mspace_free(msp, oldmem);
4731 return 0;
4733 #endif /* REALLOC_ZERO_BYTES_FREES */
4734 else {
4735 #if FOOTERS
4736 mchunkptr p = mem2chunk(oldmem);
4737 mstate ms = get_mstate_for(p);
4738 #else /* FOOTERS */
4739 mstate ms = (mstate)msp;
4740 #endif /* FOOTERS */
4741 if (!ok_magic(ms)) {
4742 USAGE_ERROR_ACTION(ms,ms);
4743 return 0;
4745 return internal_realloc(ms, oldmem, bytes);
4749 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4750 mstate ms = (mstate)msp;
4751 if (!ok_magic(ms)) {
4752 USAGE_ERROR_ACTION(ms,ms);
4753 return 0;
4755 return internal_memalign(ms, alignment, bytes);
4758 void** mspace_independent_calloc(mspace msp, size_t n_elements,
4759 size_t elem_size, void* chunks[]) {
4760 size_t sz = elem_size; /* serves as 1-element array */
4761 mstate ms = (mstate)msp;
4762 if (!ok_magic(ms)) {
4763 USAGE_ERROR_ACTION(ms,ms);
4764 return 0;
4766 return ialloc(ms, n_elements, &sz, 3, chunks);
4769 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4770 size_t sizes[], void* chunks[]) {
4771 mstate ms = (mstate)msp;
4772 if (!ok_magic(ms)) {
4773 USAGE_ERROR_ACTION(ms,ms);
4774 return 0;
4776 return ialloc(ms, n_elements, sizes, 0, chunks);
4779 int mspace_trim(mspace msp, size_t pad) {
4780 int result = 0;
4781 mstate ms = (mstate)msp;
4782 if (ok_magic(ms)) {
4783 if (!PREACTION(ms)) {
4784 result = sys_trim(ms, pad);
4785 POSTACTION(ms);
4788 else {
4789 USAGE_ERROR_ACTION(ms,ms);
4791 return result;
4794 void mspace_malloc_stats(mspace msp) {
4795 mstate ms = (mstate)msp;
4796 if (ok_magic(ms)) {
4797 internal_malloc_stats(ms);
4799 else {
4800 USAGE_ERROR_ACTION(ms,ms);
4804 size_t mspace_footprint(mspace msp) {
4805 size_t result;
4806 mstate ms = (mstate)msp;
4807 if (ok_magic(ms)) {
4808 result = ms->footprint;
4810 USAGE_ERROR_ACTION(ms,ms);
4811 return result;
4815 size_t mspace_max_footprint(mspace msp) {
4816 size_t result;
4817 mstate ms = (mstate)msp;
4818 if (ok_magic(ms)) {
4819 result = ms->max_footprint;
4821 USAGE_ERROR_ACTION(ms,ms);
4822 return result;
4826 #if !NO_MALLINFO
4827 struct mallinfo mspace_mallinfo(mspace msp) {
4828 mstate ms = (mstate)msp;
4829 if (!ok_magic(ms)) {
4830 USAGE_ERROR_ACTION(ms,ms);
4832 return internal_mallinfo(ms);
4834 #endif /* NO_MALLINFO */
4836 int mspace_mallopt(int param_number, int value) {
4837 return change_mparam(param_number, value);
4840 #endif /* MSPACES */
4842 /* -------------------- Alternative MORECORE functions ------------------- */
4845 Guidelines for creating a custom version of MORECORE:
4847 * For best performance, MORECORE should allocate in multiples of pagesize.
4848 * MORECORE may allocate more memory than requested. (Or even less,
4849 but this will usually result in a malloc failure.)
4850 * MORECORE must not allocate memory when given argument zero, but
4851 instead return one past the end address of memory from previous
4852 nonzero call.
4853 * For best performance, consecutive calls to MORECORE with positive
4854 arguments should return increasing addresses, indicating that
4855 space has been contiguously extended.
4856 * Even though consecutive calls to MORECORE need not return contiguous
4857 addresses, it must be OK for malloc'ed chunks to span multiple
4858 regions in those cases where they do happen to be contiguous.
4859 * MORECORE need not handle negative arguments -- it may instead
4860 just return MFAIL when given negative arguments.
4861 Negative arguments are always multiples of pagesize. MORECORE
4862 must not misinterpret negative args as large positive unsigned
4863 args. You can suppress all such calls from even occurring by defining
4864 MORECORE_CANNOT_TRIM.
4866 As an example alternative MORECORE, here is a custom allocator
4867 kindly contributed for pre-OSX macOS. It uses virtually but not
4868 necessarily physically contiguous non-paged memory (locked in,
4869 present and won't get swapped out). You can use it by uncommenting
4870 this section, adding some #includes, and setting up the appropriate
4871 defines above:
4873 #define MORECORE osMoreCore
4875 There is also a shutdown routine that should somehow be called for
4876 cleanup upon program exit.
4878 #define MAX_POOL_ENTRIES 100
4879 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
4880 static int next_os_pool;
4881 void *our_os_pools[MAX_POOL_ENTRIES];
4883 void *osMoreCore(int size)
4885 void *ptr = 0;
4886 static void *sbrk_top = 0;
4888 if (size > 0)
4890 if (size < MINIMUM_MORECORE_SIZE)
4891 size = MINIMUM_MORECORE_SIZE;
4892 if (CurrentExecutionLevel() == kTaskLevel)
4893 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4894 if (ptr == 0)
4896 return (void *) MFAIL;
4898 // save ptrs so they can be freed during cleanup
4899 our_os_pools[next_os_pool] = ptr;
4900 next_os_pool++;
4901 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4902 sbrk_top = (char *) ptr + size;
4903 return ptr;
4905 else if (size < 0)
4907 // we don't currently support shrink behavior
4908 return (void *) MFAIL;
4910 else
4912 return sbrk_top;
4916 // cleanup any allocated memory pools
4917 // called as last thing before shutting down driver
4919 void osCleanupMem(void)
4921 void **ptr;
4923 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4924 if (*ptr)
4926 PoolDeallocate(*ptr);
4927 *ptr = 0;
4934 /* -----------------------------------------------------------------------
4935 History:
4936 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
4937 * Add max_footprint functions
4938 * Ensure all appropriate literals are size_t
4939 * Fix conditional compilation problem for some #define settings
4940 * Avoid concatenating segments with the one provided
4941 in create_mspace_with_base
4942 * Rename some variables to avoid compiler shadowing warnings
4943 * Use explicit lock initialization.
4944 * Better handling of sbrk interference.
4945 * Simplify and fix segment insertion, trimming and mspace_destroy
4946 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4947 * Thanks especially to Dennis Flanagan for help on these.
4949 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
4950 * Fix memalign brace error.
4952 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
4953 * Fix improper #endif nesting in C++
4954 * Add explicit casts needed for C++
4956 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
4957 * Use trees for large bins
4958 * Support mspaces
4959 * Use segments to unify sbrk-based and mmap-based system allocation,
4960 removing need for emulation on most platforms without sbrk.
4961 * Default safety checks
4962 * Optional footer checks. Thanks to William Robertson for the idea.
4963 * Internal code refactoring
4964 * Incorporate suggestions and platform-specific changes.
4965 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4966 Aaron Bachmann, Emery Berger, and others.
4967 * Speed up non-fastbin processing enough to remove fastbins.
4968 * Remove useless cfree() to avoid conflicts with other apps.
4969 * Remove internal memcpy, memset. Compilers handle builtins better.
4970 * Remove some options that no one ever used and rename others.
4972 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
4973 * Fix malloc_state bitmap array misdeclaration
4975 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
4976 * Allow tuning of FIRST_SORTED_BIN_SIZE
4977 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4978 * Better detection and support for non-contiguousness of MORECORE.
4979 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4980 * Bypass most of malloc if no frees. Thanks To Emery Berger.
4981 * Fix freeing of old top non-contiguous chunk in sysmalloc.
4982 * Raised default trim and map thresholds to 256K.
4983 * Fix mmap-related #defines. Thanks to Lubos Lunak.
4984 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4985 * Branch-free bin calculation
4986 * Default trim and mmap thresholds now 256K.
4988 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
4989 * Introduce independent_comalloc and independent_calloc.
4990 Thanks to Michael Pachos for motivation and help.
4991 * Make optional .h file available
4992 * Allow > 2GB requests on 32bit systems.
4993 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4994 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4995 and Anonymous.
4996 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4997 helping test this.)
4998 * memalign: check alignment arg
4999 * realloc: don't try to shift chunks backwards, since this
5000 leads to more fragmentation in some programs and doesn't
5001 seem to help in any others.
5002 * Collect all cases in malloc requiring system memory into sysmalloc
5003 * Use mmap as backup to sbrk
5004 * Place all internal state in malloc_state
5005 * Introduce fastbins (although similar to 2.5.1)
5006 * Many minor tunings and cosmetic improvements
5007 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5008 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5009 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5010 * Include errno.h to support default failure action.
5012 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5013 * return null for negative arguments
5014 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5015 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5016 (e.g. WIN32 platforms)
5017 * Cleanup header file inclusion for WIN32 platforms
5018 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5019 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5020 memory allocation routines
5021 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5022 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5023 usage of 'assert' in non-WIN32 code
5024 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5025 avoid infinite loop
5026 * Always call 'fREe()' rather than 'free()'
5028 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5029 * Fixed ordering problem with boundary-stamping
5031 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5032 * Added pvalloc, as recommended by H.J. Liu
5033 * Added 64bit pointer support mainly from Wolfram Gloger
5034 * Added anonymously donated WIN32 sbrk emulation
5035 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5036 * malloc_extend_top: fix mask error that caused wastage after
5037 foreign sbrks
5038 * Add linux mremap support code from HJ Liu
5040 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5041 * Integrated most documentation with the code.
5042 * Add support for mmap, with help from
5043 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5044 * Use last_remainder in more cases.
5045 * Pack bins using idea from colin@nyx10.cs.du.edu
5046 * Use ordered bins instead of best-fit threshold
5047 * Eliminate block-local decls to simplify tracing and debugging.
5048 * Support another case of realloc via move into top
5049 * Fix error occurring when initial sbrk_base not word-aligned.
5050 * Rely on page size for units instead of SBRK_UNIT to
5051 avoid surprises about sbrk alignment conventions.
5052 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5053 (raymond@es.ele.tue.nl) for the suggestion.
5054 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5055 * More precautions for cases where other routines call sbrk,
5056 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5057 * Added macros etc., allowing use in linux libc from
5058 H.J. Lu (hjl@gnu.ai.mit.edu)
5059 * Inverted this history list
5061 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5062 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5063 * Removed all preallocation code since under current scheme
5064 the work required to undo bad preallocations exceeds
5065 the work saved in good cases for most test programs.
5066 * No longer use return list or unconsolidated bins since
5067 no scheme using them consistently outperforms those that don't
5068 given above changes.
5069 * Use best fit for very large chunks to prevent some worst-cases.
5070 * Added some support for debugging
5072 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5073 * Removed footers when chunks are in use. Thanks to
5074 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5076 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5077 * Added malloc_trim, with help from Wolfram Gloger
5078 (wmglo@Dent.MED.Uni-Muenchen.DE).
5080 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5082 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5083 * realloc: try to expand in both directions
5084 * malloc: swap order of clean-bin strategy;
5085 * realloc: only conditionally expand backwards
5086 * Try not to scavenge used bins
5087 * Use bin counts as a guide to preallocation
5088 * Occasionally bin return list chunks in first scan
5089 * Add a few optimizations from colin@nyx10.cs.du.edu
5091 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5092 * faster bin computation & slightly different binning
5093 * merged all consolidations to one part of malloc proper
5094 (eliminating old malloc_find_space & malloc_clean_bin)
5095 * Scan 2 returns chunks (not just 1)
5096 * Propagate failure in realloc if malloc returns 0
5097 * Add stuff to allow compilation on non-ANSI compilers
5098 from kpv@research.att.com
5100 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5101 * removed potential for odd address access in prev_chunk
5102 * removed dependency on getpagesize.h
5103 * misc cosmetics and a bit more internal documentation
5104 * anticosmetics: mangled names in macros to evade debugger strangeness
5105 * tested on sparc, hp-700, dec-mips, rs6000
5106 with gcc & native cc (hp, dec only) allowing
5107 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5109 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5110 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5111 structure of old version, but most details differ.)