[glibc.git] / malloc / malloc.c
/* Malloc implementation for multiple threads without lock contention.
   Copyright (C) 1996-2016 Free Software Foundation, Inc.
   This file is part of the GNU C Library.
   Contributed by Wolfram Gloger <wg@malloc.de>
   and Doug Lea <dl@cs.oswego.edu>, 2001.

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public License as
   published by the Free Software Foundation; either version 2.1 of the
   License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; see the file COPYING.LIB.  If
   not, see <http://www.gnu.org/licenses/>.  */
/*
  This is a version (aka ptmalloc2) of malloc/free/realloc written by
  Doug Lea and adapted to multiple threads/arenas by Wolfram Gloger.

  There have been substantial changes made after the integration into
  glibc in all parts of the code.  Do not look for much commonality
  with the ptmalloc2 version.

* Version ptmalloc2-20011215
  based on:
  VERSION 2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)

* Quickstart

  In order to compile this implementation, a Makefile is provided with
  the ptmalloc2 distribution, which has pre-defined targets for some
  popular systems (e.g. "make posix" for Posix threads).  All that is
  typically required with regard to compiler flags is the selection of
  the thread package via defining one out of USE_PTHREADS, USE_THR or
  USE_SPROC.  Check the thread-m.h file for what effects this has.
  Many/most systems will additionally require USE_TSD_DATA_HACK to be
  defined, so this is the default for "make posix".

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written.  However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator for malloc-intensive programs.

  The main properties of the algorithms are:
  * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
    with ties normally decided via FIFO (i.e. least recently used).
  * For small (<= 64 bytes by default) requests, it is a caching
    allocator, that maintains pools of quickly recycled chunks.
  * In between, and for combinations of large and small requests, it does
    the best it can trying to meet both goals at once.
  * For very large requests (>= 128KB by default), it relies on system
    memory mapping facilities, if supported.

  For a longer but slightly out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

  You may already by default be using a C library containing a malloc
  that is based on some version of this malloc (for example in
  linux).  You might still want to use the one in this file in order to
  customize settings or to avoid overheads associated with library
  versions.

* Contents, described in more detail in "description of public routines" below.

  Standard (ANSI/SVID/...) functions:
    malloc(size_t n);
    calloc(size_t n_elements, size_t element_size);
    free(void* p);
    realloc(void* p, size_t n);
    memalign(size_t alignment, size_t n);
    valloc(size_t n);
    mallinfo()
    mallopt(int parameter_number, int parameter_value)

  Additional functions:
    independent_calloc(size_t n_elements, size_t size, void* chunks[]);
    independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
    pvalloc(size_t n);
    cfree(void* p);
    malloc_trim(size_t pad);
    malloc_usable_size(void* p);
    malloc_stats();

* Vital statistics:

  Supported pointer representation:       4 or 8 bytes
  Supported size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.
       You can adjust this by defining INTERNAL_SIZE_T

  Alignment:                              2 * sizeof(size_t) (default)
       (i.e., 8 byte alignment with 4byte size_t).  This suffices for
       nearly all current machines and C compilers.  However, you can
       define MALLOC_ALIGNMENT to be wider than this if necessary.

  Minimum overhead per allocated chunk:   4 or 8 bytes
       Each malloced chunk has a hidden word of overhead holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field and 8 (16) bytes for
       free list pointers.  Thus, the minimum allocatable size is
       16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is 2 *
       sizeof(size_t) bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 or 8192 bytes.

  Maximum allocated size:  4-byte size_t: 2^32 minus about two pages
                           8-byte size_t: 2^64 minus about two pages

       It is assumed that (possibly signed) size_t values suffice to
       represent chunk sizes.  `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type.  The ISO C standard says that it must be
       unsigned, but a few systems are known not to adhere to this.
       Additionally, even when size_t is unsigned, sbrk (which is by
       default used to obtain memory from system) accepts signed
       arguments, and may not be able to handle size_t-wide arguments
       with negative sign bit.  Generally, values that would
       appear as negative after accounting for overhead and alignment
       are supported only via mmap(), which does not have this
       limitation.

       Requests for sizes outside the allowed range will perform an optional
       failure action and then return null.  (Requests may also
       fail because a system is out of memory.)

  Thread-safety: thread-safe

  Compliance: I believe it is compliant with the 1997 Single Unix Specification.
       Also SVID/XPG, ANSI C, and probably others as well.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below.  It has been tested most extensively on Solaris and Linux.
    People also report using it in stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  It is not
    at all modular.  (Sorry!)  It uses a lot of macros.  To be at all
    usable, this code should be compiled using an optimizing compiler
    (for example gcc -O3) that can simplify expressions and control
    paths.  (FAQ: some macros import variables as arguments rather than
    declare locals because people reported that some debuggers
    otherwise get confused.)

    OPTION                     DEFAULT VALUE

    Compilation Environment options:

    HAVE_MREMAP                0

    Changing default word sizes:

    INTERNAL_SIZE_T            size_t
    MALLOC_ALIGNMENT           MAX (2 * sizeof(INTERNAL_SIZE_T),
                                    __alignof__ (long double))

    Configuration and functionality options:

    USE_PUBLIC_MALLOC_WRAPPERS NOT defined
    USE_MALLOC_LOCK            NOT defined
    MALLOC_DEBUG               NOT defined
    REALLOC_ZERO_BYTES_FREES   1
    TRIM_FASTBINS              0

    Options for customizing MORECORE:

    MORECORE                   sbrk
    MORECORE_FAILURE           -1
    MORECORE_CONTIGUOUS        1
    MORECORE_CANNOT_TRIM       NOT defined
    MORECORE_CLEARS            1
    MMAP_AS_MORECORE_SIZE      (1024 * 1024)

    Tuning options that are also dynamically changeable via mallopt:

    DEFAULT_MXFAST             64 (for 32bit), 128 (for 64bit)
    DEFAULT_TRIM_THRESHOLD     128 * 1024
    DEFAULT_TOP_PAD            0
    DEFAULT_MMAP_THRESHOLD     128 * 1024
    DEFAULT_MMAP_MAX           65536

    There are several other #defined constants and macros that you
    probably don't want to touch unless you are extending or adapting malloc.  */
/*
  void* is the pointer type that malloc should say it returns
*/

#ifndef void
#define void      void
#endif /*void*/
#include <stddef.h>   /* for size_t */
#include <stdlib.h>   /* for getenv(), abort() */
#include <unistd.h>   /* for __libc_enable_secure */

#include <malloc-machine.h>
#include <malloc-sysdep.h>

#include <atomic.h>
#include <_itoa.h>
#include <bits/wordsize.h>
#include <sys/sysinfo.h>

#include <ldsodefs.h>

#include <unistd.h>
#include <stdio.h>    /* needed for malloc_stats */
#include <errno.h>

#include <shlib-compat.h>

/* For uintptr_t.  */
#include <stdint.h>

/* For va_arg, va_start, va_end.  */
#include <stdarg.h>

/* For MIN, MAX, powerof2.  */
#include <sys/param.h>

/* For ALIGN_UP et. al.  */
#include <libc-internal.h>

#include <malloc/malloc-internal.h>
/*
  Debugging:

  Because freed chunks may be overwritten with bookkeeping fields, this
  malloc will often die when freed memory is overwritten by user
  programs.  This can be very effective (albeit in an annoying way)
  in helping track down dangling pointers.

  If you compile with -DMALLOC_DEBUG, a number of assertion checks are
  enabled that will catch more memory errors.  You probably won't be
  able to make much sense of the actual assertion errors, but they
  should help you locate incorrectly overwritten memory.  The checking
  is fairly extensive, and will slow down execution
  noticeably.  Calling malloc_stats or mallinfo with MALLOC_DEBUG set
  will attempt to check every non-mmapped allocated and free chunk in
  the course of computing the summaries.  (By nature, mmapped regions
  cannot be checked very much automatically.)

  Setting MALLOC_DEBUG may also be helpful if you are trying to modify
  this code.  The assertions in the check routines spell out in more
  detail the assumptions and invariants underlying the algorithms.

  Setting MALLOC_DEBUG does NOT provide an automated mechanism for
  checking that all accesses to malloced memory stay within their
  bounds.  However, there are several add-ons and adaptations of this
  or other mallocs available that do this.
*/

#ifndef MALLOC_DEBUG
#define MALLOC_DEBUG 0
#endif
#ifdef NDEBUG
# define assert(expr) ((void) 0)
#else
# define assert(expr) \
  ((expr)                                                             \
   ? ((void) 0)                                                       \
   : __malloc_assert (#expr, __FILE__, __LINE__, __func__))

extern const char *__progname;

static void
__malloc_assert (const char *assertion, const char *file, unsigned int line,
                 const char *function)
{
  (void) __fxprintf (NULL, "%s%s%s:%u: %s%sAssertion `%s' failed.\n",
                     __progname, __progname[0] ? ": " : "",
                     file, line,
                     function ? function : "", function ? ": " : "",
                     assertion);
  fflush (stderr);
  abort ();
}
#endif
/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes.

  The default version is the same as size_t.

  While not strictly necessary, it is best to define this as an
  unsigned type, even if size_t is a signed type.  This may avoid some
  artificial size limitations on some systems.

  On a 64-bit machine, you may be able to reduce malloc overhead by
  defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
  expense of not being able to handle more than 2^32 of malloced
  space.  If this limitation is acceptable, you are encouraged to set
  this unless you are on a platform requiring 16byte alignments.  In
  this case the alignment requirements turn out to negate any
  potential advantages of decreasing size_t word size.

  Implementors: Beware of the possible combinations of:
     - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
       and might be the same width as int or as long
     - size_t might have different width and signedness from INTERNAL_SIZE_T
     - int and long might be 32 or 64 bits, and might be the same width
  To deal with this, most comparisons and difference computations
  among INTERNAL_SIZE_Ts should cast them to unsigned long, being
  aware of the fact that casting an unsigned int to a wider long does
  not sign-extend.  (This also makes checking for negative numbers
  awkward.)  Some of these casts result in harmless compiler warnings
  on some systems.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/* The corresponding word size */
#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
/*
  MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
  It must be a power of two at least 2 * SIZE_SZ, even on machines
  for which smaller alignments would suffice.  It may be defined as
  larger than this though.  Note however that code and data structures
  are optimized for the case of 8-byte alignment.
*/

#ifndef MALLOC_ALIGNMENT
# define MALLOC_ALIGNMENT (2 * SIZE_SZ < __alignof__ (long double) \
                           ? __alignof__ (long double) : 2 * SIZE_SZ)
#endif

/* The corresponding bit mask value */
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  This is required by the C standard.  Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/

#ifndef REALLOC_ZERO_BYTES_FREES
#define REALLOC_ZERO_BYTES_FREES 1
#endif

/*
  TRIM_FASTBINS controls whether free() of a very small chunk can
  immediately lead to trimming.  Setting to true (1) can reduce memory
  footprint, but will almost always slow down programs that use a lot
  of small chunks.

  Define this only if you are willing to give up some speed to more
  aggressively reduce system-level memory footprint when releasing
  memory in programs that use many small chunks.  You can get
  essentially the same effect by setting MXFAST to 0, but this can
  lead to even greater slowdowns in programs using many small chunks.
  TRIM_FASTBINS is an in-between compile-time option, that disables
  only those chunks bordering topmost memory from being placed in
  fastbins.
*/

#ifndef TRIM_FASTBINS
#define TRIM_FASTBINS  0
#endif
/* Definition for getting more memory from the OS.  */
#define MORECORE         (*__morecore)
#define MORECORE_FAILURE 0
void * __default_morecore (ptrdiff_t);
void *(*__morecore)(ptrdiff_t) = __default_morecore;

#include <string.h>
/*
  MORECORE-related declarations. By default, rely on sbrk
*/

/*
  MORECORE is the name of the routine to call to obtain more memory
  from the system.  See below for general guidance on writing
  alternative MORECORE functions, as well as a version for WIN32 and a
  sample version for pre-OSX macos.
*/

#ifndef MORECORE
#define MORECORE sbrk
#endif

/*
  MORECORE_FAILURE is the value returned upon failure of MORECORE
  as well as mmap. Since it cannot be an otherwise valid memory address,
  and must reflect values of standard sys calls, you probably ought not
  try to redefine it.
*/

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE (-1)
#endif

/*
  If MORECORE_CONTIGUOUS is true, take advantage of fact that
  consecutive calls to MORECORE with positive arguments always return
  contiguous increasing addresses.  This is true of unix sbrk.  Even
  if not defined, when regions happen to be contiguous, malloc will
  permit allocations spanning regions obtained from different
  calls.  But defining this when applicable enables some stronger
  consistency checks and space efficiencies.
*/

#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif

/*
  Define MORECORE_CANNOT_TRIM if your version of MORECORE
  cannot release space back to the system when given negative
  arguments. This is generally necessary only if you are using
  a hand-crafted MORECORE function that cannot handle negative arguments.
*/

/* #define MORECORE_CANNOT_TRIM */
/* MORECORE_CLEARS           (default 1)
     The degree to which the routine mapped to MORECORE zeroes out
     memory: never (0), only for newly allocated space (1) or always
     (2).  The distinction between (1) and (2) is necessary because on
     some systems, if the application first decrements and then
     increments the break value, the contents of the reallocated space
     are unspecified.
*/

#ifndef MORECORE_CLEARS
# define MORECORE_CLEARS 1
#endif
/*
   MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
   sbrk fails, and mmap is used as a backup.  The value must be a
   multiple of page size.  This backup strategy generally applies only
   when systems have "holes" in address space, so sbrk cannot perform
   contiguous expansion, but there is still space available on system.
   On systems for which this is known to be useful (i.e. most linux
   kernels), this occurs only when programs allocate huge amounts of
   memory.  Between this, and the fact that mmap regions tend to be
   limited, the size should be large, to avoid too many mmap calls and
   thus avoid running out of kernel resources.  */

#ifndef MMAP_AS_MORECORE_SIZE
#define MMAP_AS_MORECORE_SIZE (1024 * 1024)
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.
*/

#ifndef HAVE_MREMAP
#define HAVE_MREMAP 0
#endif

/* We may need to support __malloc_initialize_hook for backwards
   compatibility.  */

#if SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_24)
# define HAVE_MALLOC_INIT_HOOK 1
#else
# define HAVE_MALLOC_INIT_HOOK 0
#endif
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics.  It should work on any SVID/XPG compliant system that has
  a /usr/include/malloc.h defining struct mallinfo.  (If you'd like to
  install such a thing yourself, cut out the preliminary declarations
  as described above and below and save them in a malloc.h file.  But
  there's no compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields that are not even meaningful in this version of
  malloc.  These fields are instead filled by mallinfo() with
  other numbers that might be of interest.
*/

/* ---------- description of public routines ------------ */
/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or null
  if no space is available.  Additionally, on failure, errno is
  set to ENOMEM on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk.  (The minimum
  size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
  systems.)  On most systems, size_t is an unsigned type, so calls
  with negative arguments are interpreted as requests for huge amounts
  of space, which will often fail.  The maximum supported value of n
  differs across systems, but is in all cases less than the maximum
  representable value of a size_t.
*/
void*  __libc_malloc(size_t);
libc_hidden_proto (__libc_malloc)
/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null.  It can have arbitrary (i.e., bad!)
  effects if p has already been freed.

  Unless disabled (using mallopt), freeing very large spaces will,
  when possible, automatically trigger operations that give
  back unused memory to the system, thus reducing program footprint.
*/
void     __libc_free(void*);
libc_hidden_proto (__libc_free)

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void*  __libc_calloc(size_t, size_t);
/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p.  The algorithm
  prefers extending p when possible, otherwise it employs the
  equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  Unless the #define
  REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
  zero (re)allocates a minimum-sized chunk.

  Large chunks that were internally obtained via mmap will always be
  reallocated using malloc-copy-free sequences unless the system
  supports MREMAP (currently only linux).

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
void*  __libc_realloc(void*, size_t);
libc_hidden_proto (__libc_realloc)
/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two.  If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void*  __libc_memalign(size_t, size_t);
libc_hidden_proto (__libc_memalign)
/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system.  If the pagesize is unknown, 4096 is used.
*/
void*  __libc_valloc(size_t);
/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters.  The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  Only one of these (M_MXFAST) is used
  in this malloc.  The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
  so setting them has no effect.  But this malloc also supports four
  other options in mallopt.  See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #   default    allowed param values
  M_MXFAST          1         64         0-80  (0 disables fastbins)
  M_TRIM_THRESHOLD -1         128*1024   any   (-1U disables trimming)
  M_TOP_PAD        -2         0          any
  M_MMAP_THRESHOLD -3         128*1024   any   (or 0 if no MMAP support)
  M_MMAP_MAX       -4         65536      any   (0 disables use of mmap)
*/
int      __libc_mallopt(int, int);
libc_hidden_proto (__libc_mallopt)
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    the number of fastbin blocks (i.e., small chunks that
               have been freed but not yet reused or consolidated)
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   always 0
  fsmblks:   total bytes held in fastbin blocks
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim.  ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
struct mallinfo __libc_mallinfo(void);
/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
void*  __libc_pvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool.  You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program.  However, it cannot guarantee to reduce memory.  Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed.  If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less).  Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
  On systems that do not support "negative sbrks", it will always
  return 0.
*/
int      __malloc_trim(size_t);
/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects.  This is not a particularly great
  programming practice.  malloc_usable_size can be more useful in
  debugging and assertions, for example:

  p = malloc(n);
  assert(malloc_usable_size(p) >= 256);

*/
size_t   __malloc_usable_size(void*);
/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed.  Note that this is the number of bytes allocated, not the
  number requested.  It will be larger than the number requested
  because of alignment and bookkeeping overhead.  Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.

*/
void     __malloc_stats(void);
/*
  malloc_get_state(void);

  Returns the state of all malloc variables in an opaque data
  structure.
*/
void*  __malloc_get_state(void);

/*
  malloc_set_state(void* state);

  Restore the state of all malloc variables from data obtained with
  malloc_get_state().
*/
int      __malloc_set_state(void*);

/*
  posix_memalign(void **memptr, size_t alignment, size_t size);

  POSIX wrapper like memalign(), checking for validity of size.
*/
int      __posix_memalign(void **, size_t, size_t);
/* mallopt tuning options */

/*
  M_MXFAST is the maximum request size used for "fastbins", special bins
  that hold returned chunks without consolidating their spaces.  This
  enables future requests for chunks of the same size to be handled
  very quickly, but can increase fragmentation, and thus increase the
  overall memory footprint of a program.

  This malloc manages fastbins very conservatively yet still
  efficiently, so fragmentation is rarely a problem for values less
  than or equal to the default.  The maximum supported value of MXFAST
  is 80.  You wouldn't want it any higher than this anyway.  Fastbins
  are designed especially for use with many small structs, objects or
  strings -- the default handles structs/objects/arrays with sizes up
  to 8 4byte fields, or small strings representing words, tokens,
  etc.  Using fastbins for larger objects normally worsens
  fragmentation without improving speed.

  M_MXFAST is set in REQUEST size units.  It is internally used in
  chunksize units, which adds padding and alignment.  You can reduce
  M_MXFAST to 0 to disable all use of fastbins.  This causes the malloc
  algorithm to be a closer approximation of fifo-best-fit in all cases,
  not just for larger requests, but will generally cause it to be
  slower.
*/

/* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
#ifndef M_MXFAST
#define M_MXFAST            1
#endif

#ifndef DEFAULT_MXFAST
#define DEFAULT_MXFAST     (64 * SIZE_SZ / 4)
#endif
799 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
800 to keep before releasing via malloc_trim in free().
802 Automatic trimming is mainly useful in long-lived programs.
803 Because trimming via sbrk can be slow on some systems, and can
804 sometimes be wasteful (in cases where programs immediately
805 afterward allocate more large chunks) the value should be high
806 enough so that your overall system performance would improve by
807 releasing this much memory.
809 The trim threshold and the mmap control parameters (see below)
810 can be traded off with one another. Trimming and mmapping are
811 two different ways of releasing unused memory back to the
812 system. Between these two, it is often possible to keep
813 system-level demands of a long-lived program down to a bare
814 minimum. For example, in one test suite of sessions measuring
815 the XF86 X server on Linux, using a trim threshold of 128K and a
816 mmap threshold of 192K led to near-minimal long term resource
817 consumption.
819 If you are using this malloc in a long-lived program, it should
820 pay to experiment with these values. As a rough guide, you
821 might set to a value close to the average size of a process
822 (program) running on your system. Releasing this much memory
823 would allow such a process to run in memory. Generally, it's
824 worth it to tune for trimming rather tham memory mapping when a
825 program undergoes phases where several large chunks are
826 allocated and released in ways that can reuse each other's
827 storage, perhaps mixed with phases where there are no such
828 chunks at all. And in well-behaved long-lived programs,
829 controlling release of large blocks via trimming versus mapping
830 is usually faster.
832 However, in most programs, these parameters serve mainly as
833 protection against the system-level effects of carrying around
834 massive amounts of unneeded memory. Since frequent calls to
835 sbrk, mmap, and munmap otherwise degrade performance, the default
836 parameters are set to relatively high values that serve only as
837 safeguards.
839 The trim value It must be greater than page size to have any useful
840 effect. To disable trimming completely, you can set to
841 (unsigned long)(-1)
843 Trim settings interact with fastbin (MXFAST) settings: Unless
844 TRIM_FASTBINS is defined, automatic trimming never takes place upon
845 freeing a chunk with size less than or equal to MXFAST. Trimming is
846 instead delayed until subsequent freeing of larger chunks. However,
847 you can still force an attempted trim by calling malloc_trim.
849 Also, trimming is not generally possible in cases where
850 the main arena is obtained via mmap.
852 Note that the trick some people use of mallocing a huge space and
853 then freeing it at program startup, in an attempt to reserve system
854 memory, doesn't have the intended effect under automatic trimming,
855 since that memory will immediately be returned to the system.
858 #define M_TRIM_THRESHOLD -1
860 #ifndef DEFAULT_TRIM_THRESHOLD
861 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
862 #endif
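The trimming knobs above are reachable through the public mallopt/malloc_trim interface. A minimal sketch, assuming a glibc system; the 256K threshold and the helper name `tune_trimming` are illustrative, not recommendations:

```c
#include <malloc.h>
#include <stdlib.h>

/* Raise the trim threshold so that ordinary frees are unlikely to
   trigger an automatic trim, then force one trim by hand.  mallopt()
   returns nonzero on success; malloc_trim(0) returns 1 if any memory
   was actually released back to the system, 0 otherwise.  */
int tune_trimming (void)
{
  if (mallopt (M_TRIM_THRESHOLD, 256 * 1024) == 0)
    return -1;                    /* mallopt returns 0 on failure.  */

  void *p = malloc (1024 * 1024);
  if (p == NULL)
    return -1;
  free (p);

  /* Ask for a trim, keeping no extra padding at the top.  */
  return malloc_trim (0);
}
```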
865 M_TOP_PAD is the amount of extra `padding' space to allocate or
866 retain whenever sbrk is called. It is used in two ways internally:
868 * When sbrk is called to extend the top of the arena to satisfy
869 a new malloc request, this much padding is added to the sbrk
870 request.
872 * When malloc_trim is called automatically from free(),
873 it is used as the `pad' argument.
875 In both cases, the actual amount of padding is rounded
876 so that the end of the arena is always a system page boundary.
878 The main reason for using padding is to avoid calling sbrk so
879 often. Having even a small pad greatly reduces the likelihood
880 that nearly every malloc request during program start-up (or
881 after trimming) will invoke sbrk, which needlessly wastes
882 time.
884 Automatic rounding-up to page-size units is normally sufficient
885 to avoid measurable overhead, so the default is 0. However, in
886 systems where sbrk is relatively slow, it can pay to increase
887 this value, at the expense of carrying around more memory than
888 the program needs.
891 #define M_TOP_PAD -2
893 #ifndef DEFAULT_TOP_PAD
894 #define DEFAULT_TOP_PAD (0)
895 #endif
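M_TOP_PAD is likewise settable at run time. A hedged sketch; the 128K value is an arbitrary example of the trade-off described above (fewer sbrk calls at the cost of extra resident memory):

```c
#include <malloc.h>

/* Ask for 128K of extra padding on each sbrk extension.  mallopt()
   returns nonzero (1) on success.  */
int set_top_pad (void)
{
  return mallopt (M_TOP_PAD, 128 * 1024);
}
```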
898 MMAP_THRESHOLD_MAX and _MIN are the bounds on the dynamically
899 adjusted MMAP_THRESHOLD.
902 #ifndef DEFAULT_MMAP_THRESHOLD_MIN
903 #define DEFAULT_MMAP_THRESHOLD_MIN (128 * 1024)
904 #endif
906 #ifndef DEFAULT_MMAP_THRESHOLD_MAX
907 /* For 32-bit platforms we cannot increase the maximum mmap
908 threshold much because it is also the minimum value for the
909 maximum heap size and its alignment. Going above 512k (i.e., 1M
910 for new heaps) wastes too much address space. */
911 # if __WORDSIZE == 32
912 # define DEFAULT_MMAP_THRESHOLD_MAX (512 * 1024)
913 # else
914 # define DEFAULT_MMAP_THRESHOLD_MAX (4 * 1024 * 1024 * sizeof(long))
915 # endif
916 #endif
919 M_MMAP_THRESHOLD is the request size threshold for using mmap()
920 to service a request. Requests of at least this size that cannot
921 be allocated using already-existing space will be serviced via mmap.
922 (If enough normal freed space already exists it is used instead.)
924 Using mmap segregates relatively large chunks of memory so that
925 they can be individually obtained and released from the host
926 system. A request serviced through mmap is never reused by any
927 other request (at least not directly; the system may just so
928 happen to remap successive requests to the same locations).
930 Segregating space in this way has the benefits that:
932 1. Mmapped space can ALWAYS be individually released back
933 to the system, which helps keep the system level memory
934 demands of a long-lived program low.
935 2. Mapped memory can never become `locked' between
936 other chunks, as can happen with normally allocated chunks, which
937 means that even trimming via malloc_trim would not release them.
938 3. On some systems with "holes" in address spaces, mmap can obtain
939 memory that sbrk cannot.
941 However, it has the disadvantages that:
943 1. The space cannot be reclaimed, consolidated, and then
944 used to service later requests, as happens with normal chunks.
945 2. It can lead to more wastage because of mmap page alignment
946 requirements.
947 3. It causes malloc performance to be more dependent on host
948 system memory management support routines which may vary in
949 implementation quality and may impose arbitrary
950 limitations. Generally, servicing a request via normal
951 malloc steps is faster than going through a system's mmap.
953 The advantages of mmap nearly always outweigh disadvantages for
954 "large" chunks, but the value of "large" varies across systems. The
955 default is an empirically derived value that works well in most
956 systems.
959 Update in 2006:
960 The above was written in 2001. Since then the world has changed a lot.
961 Memory got bigger. Applications got bigger. The virtual address space
962 layout in 32 bit linux changed.
964 In the new situation, brk() and mmap spaces are shared and there are no
965 artificial limits on brk size imposed by the kernel. What is more,
966 applications have started using transient allocations larger than
967 the 128Kb that was imagined in 2001.
969 The price for mmap is also high now; each time glibc mmaps from the
970 kernel, the kernel is forced to zero out the memory it gives to the
971 application. Zeroing memory is expensive and eats a lot of cache and
972 memory bandwidth. This has nothing to do with the efficiency of the
973 virtual memory system, by doing mmap the kernel just has no choice but
974 to zero.
976 In 2001, the kernel had a maximum size for brk() which was about 800
977 megabytes on 32 bit x86; at that point brk() would hit the first
978 mmapped shared libraries and couldn't expand anymore. With current 2.6
979 kernels, the VA space layout is different and brk() and mmap
980 both can span the entire heap at will.
982 Rather than using a static threshold for the brk/mmap tradeoff,
983 we are now using a simple dynamic one. The goal is still to avoid
984 fragmentation. The old goals we kept are
985 1) try to get the long lived large allocations to use mmap()
986 2) really large allocations should always use mmap()
987 and we're adding now:
988 3) transient allocations should use brk() to avoid forcing the kernel
989 to zero memory over and over again
991 The implementation works with a sliding threshold, which is by default
992 limited to go between 128Kb and 32Mb (64Mb for 64 bit machines) and starts
993 out at 128Kb as per the 2001 default.
995 This allows us to satisfy requirement 1) under the assumption that long
996 lived allocations are made early in the process' lifespan, before it has
997 started doing dynamic allocations of the same size (which will
998 increase the threshold).
1000 The upper bound on the threshold satisfies requirement 2).
1002 The threshold goes up in value when the application frees memory that was
1003 allocated with the mmap allocator. The idea is that once the application
1004 starts freeing memory of a certain size, it's highly probable that this is
1005 a size the application uses for transient allocations. This estimator
1006 is there to satisfy the new third requirement.
1010 #define M_MMAP_THRESHOLD -3
1012 #ifndef DEFAULT_MMAP_THRESHOLD
1013 #define DEFAULT_MMAP_THRESHOLD DEFAULT_MMAP_THRESHOLD_MIN
1014 #endif
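Setting M_MMAP_THRESHOLD explicitly pins the threshold and switches off the sliding behaviour described above (the no_dyn_threshold flag is set once the user takes manual control). A sketch; the 1M figure is only an example:

```c
#include <malloc.h>

/* Pin the mmap threshold at 1M, disabling its dynamic adjustment.
   mallopt() returns nonzero (1) on success.  */
int pin_mmap_threshold (void)
{
  return mallopt (M_MMAP_THRESHOLD, 1024 * 1024);
}
```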
1017 M_MMAP_MAX is the maximum number of requests to simultaneously
1018 service using mmap. This parameter exists because
1019 some systems have a limited number of internal tables for
1020 use by mmap, and using more than a few of them may degrade
1021 performance.
1023 The default is set to a value that serves only as a safeguard.
1024 Setting to 0 disables use of mmap for servicing large requests.
1027 #define M_MMAP_MAX -4
1029 #ifndef DEFAULT_MMAP_MAX
1030 #define DEFAULT_MMAP_MAX (65536)
1031 #endif
1033 #include <malloc.h>
1035 #ifndef RETURN_ADDRESS
1036 #define RETURN_ADDRESS(X_) (NULL)
1037 #endif
1039 /* On some platforms we can compile internal, not exported functions better.
1040 Let the environment provide a macro and define it to be empty if it
1041 is not available. */
1042 #ifndef internal_function
1043 # define internal_function
1044 #endif
1046 /* Forward declarations. */
1047 struct malloc_chunk;
1048 typedef struct malloc_chunk* mchunkptr;
1050 /* Internal routines. */
1052 static void* _int_malloc(mstate, size_t);
1053 static void _int_free(mstate, mchunkptr, int);
1054 static void* _int_realloc(mstate, mchunkptr, INTERNAL_SIZE_T,
1055 INTERNAL_SIZE_T);
1056 static void* _int_memalign(mstate, size_t, size_t);
1057 static void* _mid_memalign(size_t, size_t, void *);
1059 static void malloc_printerr(int action, const char *str, void *ptr, mstate av);
1061 static void* internal_function mem2mem_check(void *p, size_t sz);
1062 static int internal_function top_check(void);
1063 static void internal_function munmap_chunk(mchunkptr p);
1064 #if HAVE_MREMAP
1065 static mchunkptr internal_function mremap_chunk(mchunkptr p, size_t new_size);
1066 #endif
1068 static void* malloc_check(size_t sz, const void *caller);
1069 static void free_check(void* mem, const void *caller);
1070 static void* realloc_check(void* oldmem, size_t bytes,
1071 const void *caller);
1072 static void* memalign_check(size_t alignment, size_t bytes,
1073 const void *caller);
1075 /* ------------------ MMAP support ------------------ */
1078 #include <fcntl.h>
1079 #include <sys/mman.h>
1081 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1082 # define MAP_ANONYMOUS MAP_ANON
1083 #endif
1085 #ifndef MAP_NORESERVE
1086 # define MAP_NORESERVE 0
1087 #endif
1089 #define MMAP(addr, size, prot, flags) \
1090 __mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0)
1094 ----------------------- Chunk representations -----------------------
1099 This struct declaration is misleading (but accurate and necessary).
1100 It declares a "view" into memory allowing access to necessary
1101 fields at known offsets from a given base. See explanation below.
1104 struct malloc_chunk {
1106 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1107 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1109 struct malloc_chunk* fd; /* double links -- used only if free. */
1110 struct malloc_chunk* bk;
1112 /* Only used for large blocks: pointer to next larger size. */
1113 struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */
1114 struct malloc_chunk* bk_nextsize;
1119 malloc_chunk details:
1121 (The following includes lightly edited explanations by Colin Plumb.)
1123 Chunks of memory are maintained using a `boundary tag' method as
1124 described in e.g., Knuth or Standish. (See the paper by Paul
1125 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1126 survey of such techniques.) Sizes of free chunks are stored both
1127 in the front of each chunk and at the end. This makes
1128 consolidating fragmented chunks into bigger chunks very fast. The
1129 size fields also hold bits representing whether chunks are free or
1130 in use.
1132 An allocated chunk looks like this:
1135 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1136 | Size of previous chunk, if allocated | |
1137 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1138 | Size of chunk, in bytes |M|P|
1139 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1140 | User data starts here... .
1142 . (malloc_usable_size() bytes) .
1144 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1145 | Size of chunk |
1146 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1149 Where "chunk" is the front of the chunk for the purpose of most of
1150 the malloc code, but "mem" is the pointer that is returned to the
1151 user. "Nextchunk" is the beginning of the next contiguous chunk.
1153 Chunks always begin on even word boundaries, so the mem portion
1154 (which is returned to the user) is also on an even word boundary, and
1155 thus at least double-word aligned.
1157 Free chunks are stored in circular doubly-linked lists, and look like this:
1159 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1160 | Size of previous chunk |
1161 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1162 `head:' | Size of chunk, in bytes |P|
1163 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1164 | Forward pointer to next chunk in list |
1165 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1166 | Back pointer to previous chunk in list |
1167 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1168 | Unused space (may be 0 bytes long) .
1171 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1172 `foot:' | Size of chunk, in bytes |
1173 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1175 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1176 chunk size (which is always a multiple of two words), is an in-use
1177 bit for the *previous* chunk. If that bit is *clear*, then the
1178 word before the current chunk size contains the previous chunk
1179 size, and can be used to find the front of the previous chunk.
1180 The very first chunk allocated always has this bit set,
1181 preventing access to non-existent (or non-owned) memory. If
1182 prev_inuse is set for any given chunk, then you CANNOT determine
1183 the size of the previous chunk, and might even get a memory
1184 addressing fault when trying to do so.
1186 Note that the `foot' of the current chunk is actually represented
1187 as the prev_size of the NEXT chunk. This makes it easier to
1188 deal with alignments etc but can be very confusing when trying
1189 to extend or adapt this code.
1191 The two exceptions to all this are
1193 1. The special chunk `top' doesn't bother using the
1194 trailing size field since there is no next contiguous chunk
1195 that would have to index off it. After initialization, `top'
1196 is forced to always exist. If it would become less than
1197 MINSIZE bytes long, it is replenished.
1199 2. Chunks allocated via mmap, which have the second-lowest-order
1200 bit M (IS_MMAPPED) set in their size fields. Because they are
1201 allocated one-by-one, each must contain its own trailing size field.
1206 ---------- Size and alignment checks and conversions ----------
1209 /* conversion from malloc headers to user pointers, and back */
1211 #define chunk2mem(p) ((void*)((char*)(p) + 2*SIZE_SZ))
1212 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1214 /* The smallest possible chunk */
1215 #define MIN_CHUNK_SIZE (offsetof(struct malloc_chunk, fd_nextsize))
1217 /* The smallest size we can malloc is an aligned minimal chunk */
1219 #define MINSIZE \
1220 (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
1222 /* Check if m has acceptable alignment */
1224 #define aligned_OK(m) (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)
1226 #define misaligned_chunk(p) \
1227 ((uintptr_t)(MALLOC_ALIGNMENT == 2 * SIZE_SZ ? (p) : chunk2mem (p)) \
1228 & MALLOC_ALIGN_MASK)
1232 Check if a request is so large that it would wrap around zero when
1233 padded and aligned. To simplify some other code, the bound is made
1234 low enough so that adding MINSIZE will also not wrap around zero.
1237 #define REQUEST_OUT_OF_RANGE(req) \
1238 ((unsigned long) (req) >= \
1239 (unsigned long) (INTERNAL_SIZE_T) (-2 * MINSIZE))
1241 /* pad request bytes into a usable size -- internal version */
1243 #define request2size(req) \
1244 (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
1245 MINSIZE : \
1246 ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
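The request2size arithmetic can be modelled standalone. The sketch below assumes a typical 64-bit build (SIZE_SZ == 8, MALLOC_ALIGNMENT == 16, MINSIZE == 32); the model_* names are illustrative stand-ins for the real configuration-derived constants:

```c
#include <stddef.h>

/* Assumed constants for a 64-bit build; the real values come from
   the build configuration, not these literals.  */
#define MODEL_SIZE_SZ    8UL
#define MODEL_ALIGN_MASK 15UL
#define MODEL_MINSIZE    32UL

/* Pad a request up to a usable chunk size: add room for the size
   word, round up to the alignment, and never go below MINSIZE.  */
size_t model_request2size (size_t req)
{
  if (req + MODEL_SIZE_SZ + MODEL_ALIGN_MASK < MODEL_MINSIZE)
    return MODEL_MINSIZE;
  return (req + MODEL_SIZE_SZ + MODEL_ALIGN_MASK) & ~MODEL_ALIGN_MASK;
}
```

So a 25-byte request becomes a 48-byte chunk: 25 + 8 rounds up to the next multiple of 16.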
1248 /* Same, except also perform argument check */
1250 #define checked_request2size(req, sz) \
1251 if (REQUEST_OUT_OF_RANGE (req)) { \
1252 __set_errno (ENOMEM); \
1253 return 0; \
1255 (sz) = request2size (req);
1258 --------------- Physical chunk operations ---------------
1262 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1263 #define PREV_INUSE 0x1
1265 /* extract inuse bit of previous chunk */
1266 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1269 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1270 #define IS_MMAPPED 0x2
1272 /* check for mmap()'ed chunk */
1273 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1276 /* size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
1277 from a non-main arena. This is only set immediately before handing
1278 the chunk to the user, if necessary. */
1279 #define NON_MAIN_ARENA 0x4
1281 /* check for chunk from non-main arena */
1282 #define chunk_non_main_arena(p) ((p)->size & NON_MAIN_ARENA)
1286 Bits to mask off when extracting size
1288 Note: IS_MMAPPED is intentionally not masked off from size field in
1289 macros for which mmapped chunks should never be seen. This should
1290 cause helpful core dumps to occur if it is tried by accident by
1291 people extending or adapting this malloc.
1293 #define SIZE_BITS (PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)
1295 /* Get size, ignoring use bits */
1296 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1299 /* Ptr to next physical malloc_chunk. */
1300 #define next_chunk(p) ((mchunkptr) (((char *) (p)) + ((p)->size & ~SIZE_BITS)))
1302 /* Ptr to previous physical malloc_chunk */
1303 #define prev_chunk(p) ((mchunkptr) (((char *) (p)) - ((p)->prev_size)))
1305 /* Treat space at ptr + offset as a chunk */
1306 #define chunk_at_offset(p, s) ((mchunkptr) (((char *) (p)) + (s)))
1308 /* extract p's inuse bit */
1309 #define inuse(p) \
1310 ((((mchunkptr) (((char *) (p)) + ((p)->size & ~SIZE_BITS)))->size) & PREV_INUSE)
1312 /* set/clear chunk as being inuse without otherwise disturbing */
1313 #define set_inuse(p) \
1314 ((mchunkptr) (((char *) (p)) + ((p)->size & ~SIZE_BITS)))->size |= PREV_INUSE
1316 #define clear_inuse(p) \
1317 ((mchunkptr) (((char *) (p)) + ((p)->size & ~SIZE_BITS)))->size &= ~(PREV_INUSE)
1320 /* check/set/clear inuse bits in known places */
1321 #define inuse_bit_at_offset(p, s) \
1322 (((mchunkptr) (((char *) (p)) + (s)))->size & PREV_INUSE)
1324 #define set_inuse_bit_at_offset(p, s) \
1325 (((mchunkptr) (((char *) (p)) + (s)))->size |= PREV_INUSE)
1327 #define clear_inuse_bit_at_offset(p, s) \
1328 (((mchunkptr) (((char *) (p)) + (s)))->size &= ~(PREV_INUSE))
1331 /* Set size at head, without disturbing its use bit */
1332 #define set_head_size(p, s) ((p)->size = (((p)->size & SIZE_BITS) | (s)))
1334 /* Set size/use field */
1335 #define set_head(p, s) ((p)->size = (s))
1337 /* Set size at footer (only when chunk is not in use) */
1338 #define set_foot(p, s) (((mchunkptr) ((char *) (p) + (s)))->prev_size = (s))
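The flag-in-size encoding used by the macros above can be shown with a toy model; the toy_* names and bit values mirror PREV_INUSE/IS_MMAPPED/NON_MAIN_ARENA but are illustrative only:

```c
#include <stddef.h>

/* Low bits of the size word carry the flags, as in the real macros.  */
#define TOY_PREV_INUSE 0x1UL
#define TOY_IS_MMAPPED 0x2UL
#define TOY_SIZE_BITS  (TOY_PREV_INUSE | TOY_IS_MMAPPED | 0x4UL)

/* Strip the flag bits to recover the chunk size.  */
size_t toy_chunksize (size_t size_field)
{
  return size_field & ~TOY_SIZE_BITS;
}

/* Test whether the previous adjacent chunk is in use.  */
int toy_prev_inuse (size_t size_field)
{
  return (size_field & TOY_PREV_INUSE) != 0;
}
```

A size word of 0x31 thus encodes a 0x30-byte chunk whose predecessor is in use.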
1342 -------------------- Internal data structures --------------------
1344 All internal state is held in an instance of malloc_state defined
1345 below. There are no other static variables, except in two optional
1346 cases:
1347 * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
1348 * If mmap doesn't support MAP_ANONYMOUS, a dummy file descriptor
1349 for mmap.
1351 Beware of lots of tricks that minimize the total bookkeeping space
1352 requirements. The result is a little over 1K bytes (for 4-byte
1353 pointers and size_t).
1357 Bins
1359 An array of bin headers for free chunks. Each bin is doubly
1360 linked. The bins are approximately proportionally (log) spaced.
1361 There are a lot of these bins (128). This may look excessive, but
1362 works very well in practice. Most bins hold sizes that are
1363 unusual as malloc request sizes, but are more usual for fragments
1364 and consolidated sets of chunks, which is what these bins hold, so
1365 they can be found quickly. All procedures maintain the invariant
1366 that no consolidated chunk physically borders another one, so each
1367 chunk in a list is known to be preceded and followed by either
1368 inuse chunks or the ends of memory.
1370 Chunks in bins are kept in size order, with ties going to the
1371 approximately least recently used chunk. Ordering isn't needed
1372 for the small bins, which all contain the same-sized chunks, but
1373 facilitates best-fit allocation for larger chunks. These lists
1374 are just sequential. Keeping them in order almost never requires
1375 enough traversal to warrant using fancier ordered data
1376 structures.
1378 Chunks of the same size are linked with the most
1379 recently freed at the front, and allocations are taken from the
1380 back. This results in LRU (FIFO) allocation order, which tends
1381 to give each chunk an equal opportunity to be consolidated with
1382 adjacent freed chunks, resulting in larger free chunks and less
1383 fragmentation.
1385 To simplify use in double-linked lists, each bin header acts
1386 as a malloc_chunk. This avoids special-casing for headers.
1387 But to conserve space and improve locality, we allocate
1388 only the fd/bk pointers of bins, and then use repositioning tricks
1389 to treat these as the fields of a malloc_chunk*.
1392 typedef struct malloc_chunk *mbinptr;
1394 /* addressing -- note that bin_at(0) does not exist */
1395 #define bin_at(m, i) \
1396 (mbinptr) (((char *) &((m)->bins[((i) - 1) * 2])) \
1397 - offsetof (struct malloc_chunk, fd))
1399 /* analog of ++bin */
1400 #define next_bin(b) ((mbinptr) ((char *) (b) + (sizeof (mchunkptr) << 1)))
1402 /* Reminders about list directionality within bins */
1403 #define first(b) ((b)->fd)
1404 #define last(b) ((b)->bk)
1406 /* Take a chunk off a bin list */
1407 #define unlink(AV, P, BK, FD) { \
1408 FD = P->fd; \
1409 BK = P->bk; \
1410 if (__builtin_expect (FD->bk != P || BK->fd != P, 0)) \
1411 malloc_printerr (check_action, "corrupted double-linked list", P, AV); \
1412 else { \
1413 FD->bk = BK; \
1414 BK->fd = FD; \
1415 if (!in_smallbin_range (P->size) \
1416 && __builtin_expect (P->fd_nextsize != NULL, 0)) { \
1417 if (__builtin_expect (P->fd_nextsize->bk_nextsize != P, 0) \
1418 || __builtin_expect (P->bk_nextsize->fd_nextsize != P, 0)) \
1419 malloc_printerr (check_action, \
1420 "corrupted double-linked list (not small)", \
1421 P, AV); \
1422 if (FD->fd_nextsize == NULL) { \
1423 if (P->fd_nextsize == P) \
1424 FD->fd_nextsize = FD->bk_nextsize = FD; \
1425 else { \
1426 FD->fd_nextsize = P->fd_nextsize; \
1427 FD->bk_nextsize = P->bk_nextsize; \
1428 P->fd_nextsize->bk_nextsize = FD; \
1429 P->bk_nextsize->fd_nextsize = FD; \
1431 } else { \
1432 P->fd_nextsize->bk_nextsize = P->bk_nextsize; \
1433 P->bk_nextsize->fd_nextsize = P->fd_nextsize; \
1440 Indexing
1442 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1443 8 bytes apart. Larger bins are approximately logarithmically spaced:
1445 64 bins of size 8
1446 32 bins of size 64
1447 16 bins of size 512
1448 8 bins of size 4096
1449 4 bins of size 32768
1450 2 bins of size 262144
1451 1 bin of size what's left
1453 There is actually a little bit of slop in the numbers in bin_index
1454 for the sake of speed. This makes no difference elsewhere.
1456 The bins top out around 1MB because we expect to service large
1457 requests via mmap.
1459 Bin 0 does not exist. Bin 1 is the unordered list; if that would be
1460 a valid chunk size the small bins are bumped up one.
1463 #define NBINS 128
1464 #define NSMALLBINS 64
1465 #define SMALLBIN_WIDTH MALLOC_ALIGNMENT
1466 #define SMALLBIN_CORRECTION (MALLOC_ALIGNMENT > 2 * SIZE_SZ)
1467 #define MIN_LARGE_SIZE ((NSMALLBINS - SMALLBIN_CORRECTION) * SMALLBIN_WIDTH)
1469 #define in_smallbin_range(sz) \
1470 ((unsigned long) (sz) < (unsigned long) MIN_LARGE_SIZE)
1472 #define smallbin_index(sz) \
1473 ((SMALLBIN_WIDTH == 16 ? (((unsigned) (sz)) >> 4) : (((unsigned) (sz)) >> 3))\
1474 + SMALLBIN_CORRECTION)
1476 #define largebin_index_32(sz) \
1477 (((((unsigned long) (sz)) >> 6) <= 38) ? 56 + (((unsigned long) (sz)) >> 6) :\
1478 ((((unsigned long) (sz)) >> 9) <= 20) ? 91 + (((unsigned long) (sz)) >> 9) :\
1479 ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
1480 ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
1481 ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
1482 126)
1484 #define largebin_index_32_big(sz) \
1485 (((((unsigned long) (sz)) >> 6) <= 45) ? 49 + (((unsigned long) (sz)) >> 6) :\
1486 ((((unsigned long) (sz)) >> 9) <= 20) ? 91 + (((unsigned long) (sz)) >> 9) :\
1487 ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
1488 ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
1489 ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
1490 126)
1492 // XXX It remains to be seen whether it is good to keep the widths of
1493 // XXX the buckets the same or whether it should be scaled by a factor
1494 // XXX of two as well.
1495 #define largebin_index_64(sz) \
1496 (((((unsigned long) (sz)) >> 6) <= 48) ? 48 + (((unsigned long) (sz)) >> 6) :\
1497 ((((unsigned long) (sz)) >> 9) <= 20) ? 91 + (((unsigned long) (sz)) >> 9) :\
1498 ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
1499 ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
1500 ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
1501 126)
1503 #define largebin_index(sz) \
1504 (SIZE_SZ == 8 ? largebin_index_64 (sz) \
1505 : MALLOC_ALIGNMENT == 16 ? largebin_index_32_big (sz) \
1506 : largebin_index_32 (sz))
1508 #define bin_index(sz) \
1509 ((in_smallbin_range (sz)) ? smallbin_index (sz) : largebin_index (sz))
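The combined indexing logic can be replicated standalone for a 64-bit build, where SIZE_SZ == 8 and MALLOC_ALIGNMENT == 16 give SMALLBIN_CORRECTION == 0 and MIN_LARGE_SIZE == 1024. A sketch under those assumptions (model_bin_index is an illustrative name):

```c
/* 64-bit bin indexing: small bins are 16 bytes apart; large bins are
   approximately logarithmically spaced, per largebin_index_64 above.  */
unsigned model_bin_index (unsigned long sz)
{
  if (sz < 1024)                          /* in_smallbin_range */
    return (unsigned) (sz >> 4);          /* smallbin_index */
  if ((sz >> 6) <= 48)  return 48 + (unsigned) (sz >> 6);
  if ((sz >> 9) <= 20)  return 91 + (unsigned) (sz >> 9);
  if ((sz >> 12) <= 10) return 110 + (unsigned) (sz >> 12);
  if ((sz >> 15) <= 4)  return 119 + (unsigned) (sz >> 15);
  if ((sz >> 18) <= 2)  return 124 + (unsigned) (sz >> 18);
  return 126;
}
```

For example, a 2000-byte chunk lands in bin 48 + (2000 >> 6) = 79.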
1513 Unsorted chunks
1515 All remainders from chunk splits, as well as all returned chunks,
1516 are first placed in the "unsorted" bin. They are then placed
1517 in regular bins after malloc gives them ONE chance to be used before
1518 binning. So, basically, the unsorted_chunks list acts as a queue,
1519 with chunks being placed on it in free (and malloc_consolidate),
1520 and taken off (to be either used or placed in bins) in malloc.
1522 The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
1523 does not have to be taken into account in size comparisons.
1526 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
1527 #define unsorted_chunks(M) (bin_at (M, 1))
1532 The top-most available chunk (i.e., the one bordering the end of
1533 available memory) is treated specially. It is never included in
1534 any bin, is used only if no other chunk is available, and is
1535 released back to the system if it is very large (see
1536 M_TRIM_THRESHOLD). Because top initially
1537 points to its own bin with initial zero size, thus forcing
1538 extension on the first malloc request, we avoid having any special
1539 code in malloc to check whether it even exists yet. But we still
1540 need to do so when getting memory from system, so we make
1541 initial_top treat the bin as a legal but unusable chunk during the
1542 interval between initialization and the first call to
1543 sysmalloc. (This is somewhat delicate, since it relies on
1544 the 2 preceding words to be zero during this interval as well.)
1547 /* Conveniently, the unsorted bin can be used as dummy top on first call */
1548 #define initial_top(M) (unsorted_chunks (M))
1551 Binmap
1553 To help compensate for the large number of bins, a one-level index
1554 structure is used for bin-by-bin searching. `binmap' is a
1555 bitvector recording whether bins are definitely empty so they can
1556 be skipped over during traversals. The bits are NOT always
1557 cleared as soon as bins are empty, but instead only
1558 when they are noticed to be empty during traversal in malloc.
1561 /* Conservatively use 32 bits per map word, even if on 64bit system */
1562 #define BINMAPSHIFT 5
1563 #define BITSPERMAP (1U << BINMAPSHIFT)
1564 #define BINMAPSIZE (NBINS / BITSPERMAP)
1566 #define idx2block(i) ((i) >> BINMAPSHIFT)
1567 #define idx2bit(i) ((1U << ((i) & ((1U << BINMAPSHIFT) - 1))))
1569 #define mark_bin(m, i) ((m)->binmap[idx2block (i)] |= idx2bit (i))
1570 #define unmark_bin(m, i) ((m)->binmap[idx2block (i)] &= ~(idx2bit (i)))
1571 #define get_binmap(m, i) ((m)->binmap[idx2block (i)] & idx2bit (i))
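The binmap arithmetic amounts to locating bin i in map word i >> 5 at bit i & 31. A standalone model (names are illustrative):

```c
/* 32 bits per map word, as in BINMAPSHIFT above.  */
unsigned model_idx2block (unsigned i) { return i >> 5; }
unsigned model_idx2bit (unsigned i)   { return 1U << (i & 31U); }
```

Bin 70, for instance, lives in map word 2 at bit 1 << 6.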
1574 Fastbins
1576 An array of lists holding recently freed small chunks. Fastbins
1577 are not doubly linked. It is faster to single-link them, and
1578 since chunks are never removed from the middles of these lists,
1579 double linking is not necessary. Also, unlike regular bins, they
1580 are not even processed in FIFO order (they use faster LIFO) since
1581 ordering doesn't much matter in the transient contexts in which
1582 fastbins are normally used.
1584 Chunks in fastbins keep their inuse bit set, so they cannot
1585 be consolidated with other free chunks. malloc_consolidate
1586 releases all chunks in fastbins and consolidates them with
1587 other free chunks.
1590 typedef struct malloc_chunk *mfastbinptr;
1591 #define fastbin(ar_ptr, idx) ((ar_ptr)->fastbinsY[idx])
1593 /* offset 2 to use otherwise unindexable first 2 bins */
1594 #define fastbin_index(sz) \
1595 ((((unsigned int) (sz)) >> (SIZE_SZ == 8 ? 4 : 3)) - 2)
1598 /* The maximum fastbin request size we support */
1599 #define MAX_FAST_SIZE (80 * SIZE_SZ / 4)
1601 #define NFASTBINS (fastbin_index (request2size (MAX_FAST_SIZE)) + 1)
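The fastbin index computation can be modelled for a 64-bit build (SIZE_SZ == 8 assumed): chunk sizes are multiples of 16 starting at 32, so shifting by 4 and subtracting 2 maps 32 to bin 0, 48 to bin 1, and so on. Illustrative sketch:

```c
/* fastbin_index for SIZE_SZ == 8: sizes 32, 48, 64, ... map to
   bins 0, 1, 2, ...  */
unsigned model_fastbin_index (unsigned sz)
{
  return (sz >> 4) - 2;
}
```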
1604 FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
1605 that triggers automatic consolidation of possibly-surrounding
1606 fastbin chunks. This is a heuristic, so the exact value should not
1607 matter too much. It is defined at half the default trim threshold as a
1608 compromise heuristic to only attempt consolidation if it is likely
1609 to lead to trimming. However, it is not dynamically tunable, since
1610 consolidation reduces fragmentation surrounding large chunks even
1611 if trimming is not used.
1614 #define FASTBIN_CONSOLIDATION_THRESHOLD (65536UL)
1617 Since the lowest 2 bits in max_fast don't matter in size comparisons,
1618 they are used as flags.
1622 FASTCHUNKS_BIT held in max_fast indicates that there are probably
1623 some fastbin chunks. It is set true on entering a chunk into any
1624 fastbin, and cleared only in malloc_consolidate.
1626 The truth value is inverted so that have_fastchunks will be true
1627 upon startup (since statics are zero-filled), simplifying
1628 initialization checks.
1631 #define FASTCHUNKS_BIT (1U)
1633 #define have_fastchunks(M) (((M)->flags & FASTCHUNKS_BIT) == 0)
1634 #define clear_fastchunks(M) catomic_or (&(M)->flags, FASTCHUNKS_BIT)
1635 #define set_fastchunks(M) catomic_and (&(M)->flags, ~FASTCHUNKS_BIT)
1638 NONCONTIGUOUS_BIT indicates that MORECORE does not return contiguous
1639 regions. Otherwise, contiguity is exploited in merging together,
1640 when possible, results from consecutive MORECORE calls.
1642 The initial value comes from MORECORE_CONTIGUOUS, but is
1643 changed dynamically if mmap is ever used as an sbrk substitute.
1646 #define NONCONTIGUOUS_BIT (2U)
1648 #define contiguous(M) (((M)->flags & NONCONTIGUOUS_BIT) == 0)
1649 #define noncontiguous(M) (((M)->flags & NONCONTIGUOUS_BIT) != 0)
1650 #define set_noncontiguous(M) ((M)->flags |= NONCONTIGUOUS_BIT)
1651 #define set_contiguous(M) ((M)->flags &= ~NONCONTIGUOUS_BIT)
1653 /* ARENA_CORRUPTION_BIT is set if a memory corruption was detected on the
1654 arena. Such an arena is no longer used to allocate chunks. Chunks
1655 allocated in that arena before detecting corruption are not freed. */
1657 #define ARENA_CORRUPTION_BIT (4U)
1659 #define arena_is_corrupt(A) (((A)->flags & ARENA_CORRUPTION_BIT))
1660 #define set_arena_corrupt(A) ((A)->flags |= ARENA_CORRUPTION_BIT)
1663 Set value of max_fast.
1664 Use impossibly small value if 0.
1665 Precondition: there are no existing fastbin chunks.
1666 Setting the value clears fastchunk bit but preserves noncontiguous bit.
1669 #define set_max_fast(s) \
1670 global_max_fast = (((s) == 0) \
1671 ? SMALLBIN_WIDTH : ((s + SIZE_SZ) & ~MALLOC_ALIGN_MASK))
1672 #define get_max_fast() global_max_fast
1676 ----------- Internal state representation and initialization -----------
1679 struct malloc_state
1681 /* Serialize access. */
1682 mutex_t mutex;
1684 /* Flags (formerly in max_fast). */
1685 int flags;
1687 /* Fastbins */
1688 mfastbinptr fastbinsY[NFASTBINS];
1690 /* Base of the topmost chunk -- not otherwise kept in a bin */
1691 mchunkptr top;
1693 /* The remainder from the most recent split of a small request */
1694 mchunkptr last_remainder;
1696 /* Normal bins packed as described above */
1697 mchunkptr bins[NBINS * 2 - 2];
1699 /* Bitmap of bins */
1700 unsigned int binmap[BINMAPSIZE];
1702 /* Linked list */
1703 struct malloc_state *next;
1705 /* Linked list for free arenas. Access to this field is serialized
1706 by free_list_lock in arena.c. */
1707 struct malloc_state *next_free;
1709 /* Number of threads attached to this arena. 0 if the arena is on
1710 the free list. Access to this field is serialized by
1711 free_list_lock in arena.c. */
1712 INTERNAL_SIZE_T attached_threads;
1714 /* Memory allocated from the system in this arena. */
1715 INTERNAL_SIZE_T system_mem;
1716 INTERNAL_SIZE_T max_system_mem;
1719 struct malloc_par
1721 /* Tunable parameters */
1722 unsigned long trim_threshold;
1723 INTERNAL_SIZE_T top_pad;
1724 INTERNAL_SIZE_T mmap_threshold;
1725 INTERNAL_SIZE_T arena_test;
1726 INTERNAL_SIZE_T arena_max;
1728 /* Memory map support */
1729 int n_mmaps;
1730 int n_mmaps_max;
1731 int max_n_mmaps;
1732 /* the mmap_threshold is dynamic, until the user sets
1733 it manually, at which point we need to disable any
1734 dynamic behavior. */
1735 int no_dyn_threshold;
1737 /* Statistics */
1738 INTERNAL_SIZE_T mmapped_mem;
1739 INTERNAL_SIZE_T max_mmapped_mem;
1741 /* First address handed out by MORECORE/sbrk. */
1742 char *sbrk_base;
1745 /* There are several instances of this struct ("arenas") in this
1746 malloc. If you are adapting this malloc in a way that does NOT use
1747 a static or mmapped malloc_state, you MUST explicitly zero-fill it
1748 before using. This malloc relies on the property that malloc_state
1749 is initialized to all zeroes (as is true of C statics). */
1751 static struct malloc_state main_arena =
1753 .mutex = _LIBC_LOCK_INITIALIZER,
1754 .next = &main_arena,
1755 .attached_threads = 1
1758 /* These variables are used for undumping support.  Chunks are marked
1759 as using mmap, but we leave them alone if they fall into this
1760 range. NB: The chunk size for these chunks only includes the
1761 initial size field (of SIZE_SZ bytes), there is no trailing size
1762 field (unlike with regular mmapped chunks). */
1763 static mchunkptr dumped_main_arena_start; /* Inclusive. */
1764 static mchunkptr dumped_main_arena_end; /* Exclusive. */
1766 /* True if the pointer falls into the dumped arena. Use this after
1767 chunk_is_mmapped indicates a chunk is mmapped. */
1768 #define DUMPED_MAIN_ARENA_CHUNK(p) \
1769 ((p) >= dumped_main_arena_start && (p) < dumped_main_arena_end)
1771 /* There is only one instance of the malloc parameters. */
1773 static struct malloc_par mp_ =
1775 .top_pad = DEFAULT_TOP_PAD,
1776 .n_mmaps_max = DEFAULT_MMAP_MAX,
1777 .mmap_threshold = DEFAULT_MMAP_THRESHOLD,
1778 .trim_threshold = DEFAULT_TRIM_THRESHOLD,
1779 #define NARENAS_FROM_NCORES(n) ((n) * (sizeof (long) == 4 ? 2 : 8))
1780 .arena_test = NARENAS_FROM_NCORES (1)
1784 /* Non public mallopt parameters. */
1785 #define M_ARENA_TEST -7
1786 #define M_ARENA_MAX -8
1789 /* Maximum size of memory handled in fastbins. */
1790 static INTERNAL_SIZE_T global_max_fast;
1793 Initialize a malloc_state struct.
1795 This is called only from within malloc_consolidate, which needs to
1796 be called in the same contexts anyway. It is never called directly
1797 outside of malloc_consolidate because some optimizing compilers try
1798 to inline it at all call points, which turns out not to be an
1799 optimization at all. (Inlining it in malloc_consolidate is fine though.)
1802 static void
1803 malloc_init_state (mstate av)
1805 int i;
1806 mbinptr bin;
1808 /* Establish circular links for normal bins */
1809 for (i = 1; i < NBINS; ++i)
1811 bin = bin_at (av, i);
1812 bin->fd = bin->bk = bin;
1815 #if MORECORE_CONTIGUOUS
1816 if (av != &main_arena)
1817 #endif
1818 set_noncontiguous (av);
1819 if (av == &main_arena)
1820 set_max_fast (DEFAULT_MXFAST);
1821 av->flags |= FASTCHUNKS_BIT;
1823 av->top = initial_top (av);
1827 Other internal utilities operating on mstates
1830 static void *sysmalloc (INTERNAL_SIZE_T, mstate);
1831 static int systrim (size_t, mstate);
1832 static void malloc_consolidate (mstate);
1835 /* -------------- Early definitions for debugging hooks ---------------- */
1837 /* Define and initialize the hook variables. These weak definitions must
1838 appear before any use of the variables in a function (arena.c uses one). */
1839 #ifndef weak_variable
1840 /* In GNU libc we want the hook variables to be weak definitions to
1841 avoid a problem with Emacs. */
1842 # define weak_variable weak_function
1843 #endif
1845 /* Forward declarations. */
1846 static void *malloc_hook_ini (size_t sz,
1847 const void *caller) __THROW;
1848 static void *realloc_hook_ini (void *ptr, size_t sz,
1849 const void *caller) __THROW;
1850 static void *memalign_hook_ini (size_t alignment, size_t sz,
1851 const void *caller) __THROW;
1853 #if HAVE_MALLOC_INIT_HOOK
1854 void weak_variable (*__malloc_initialize_hook) (void) = NULL;
1855 compat_symbol (libc, __malloc_initialize_hook,
1856 __malloc_initialize_hook, GLIBC_2_0);
1857 #endif
1859 void weak_variable (*__free_hook) (void *__ptr,
1860 const void *) = NULL;
1861 void *weak_variable (*__malloc_hook)
1862 (size_t __size, const void *) = malloc_hook_ini;
1863 void *weak_variable (*__realloc_hook)
1864 (void *__ptr, size_t __size, const void *)
1865 = realloc_hook_ini;
1866 void *weak_variable (*__memalign_hook)
1867 (size_t __alignment, size_t __size, const void *)
1868 = memalign_hook_ini;
1869 void weak_variable (*__after_morecore_hook) (void) = NULL;
1872 /* ---------------- Error behavior ------------------------------------ */
1874 #ifndef DEFAULT_CHECK_ACTION
1875 # define DEFAULT_CHECK_ACTION 3
1876 #endif
1878 static int check_action = DEFAULT_CHECK_ACTION;
1881 /* ------------------ Testing support ----------------------------------*/
1883 static int perturb_byte;
1885 static void
1886 alloc_perturb (char *p, size_t n)
1888 if (__glibc_unlikely (perturb_byte))
1889 memset (p, perturb_byte ^ 0xff, n);
1892 static void
1893 free_perturb (char *p, size_t n)
1895 if (__glibc_unlikely (perturb_byte))
1896 memset (p, perturb_byte, n);
1901 #include <stap-probe.h>
1903 /* ------------------- Support for multiple arenas -------------------- */
1904 #include "arena.c"
1907 Debugging support
1909 These routines make a number of assertions about the states
1910 of data structures that should be true at all times. If any
1911 are not true, it's very likely that a user program has somehow
1912 trashed memory. (It's also possible that there is a coding error
1913 in malloc. In which case, please report it!)
1916 #if !MALLOC_DEBUG
1918 # define check_chunk(A, P)
1919 # define check_free_chunk(A, P)
1920 # define check_inuse_chunk(A, P)
1921 # define check_remalloced_chunk(A, P, N)
1922 # define check_malloced_chunk(A, P, N)
1923 # define check_malloc_state(A)
1925 #else
1927 # define check_chunk(A, P) do_check_chunk (A, P)
1928 # define check_free_chunk(A, P) do_check_free_chunk (A, P)
1929 # define check_inuse_chunk(A, P) do_check_inuse_chunk (A, P)
1930 # define check_remalloced_chunk(A, P, N) do_check_remalloced_chunk (A, P, N)
1931 # define check_malloced_chunk(A, P, N) do_check_malloced_chunk (A, P, N)
1932 # define check_malloc_state(A) do_check_malloc_state (A)
1935 Properties of all chunks
1938 static void
1939 do_check_chunk (mstate av, mchunkptr p)
1941 unsigned long sz = chunksize (p);
1942 /* min and max possible addresses assuming contiguous allocation */
1943 char *max_address = (char *) (av->top) + chunksize (av->top);
1944 char *min_address = max_address - av->system_mem;
1946 if (!chunk_is_mmapped (p))
1948 /* Has legal address ... */
1949 if (p != av->top)
1951 if (contiguous (av))
1953 assert (((char *) p) >= min_address);
1954 assert (((char *) p + sz) <= ((char *) (av->top)));
1957 else
1959 /* top size is always at least MINSIZE */
1960 assert ((unsigned long) (sz) >= MINSIZE);
1961 /* top predecessor always marked inuse */
1962 assert (prev_inuse (p));
1965 else if (!DUMPED_MAIN_ARENA_CHUNK (p))
1967 /* address is outside main heap */
1968 if (contiguous (av) && av->top != initial_top (av))
1970 assert (((char *) p) < min_address || ((char *) p) >= max_address);
1972 /* chunk is page-aligned */
1973 assert (((p->prev_size + sz) & (GLRO (dl_pagesize) - 1)) == 0);
1974 /* mem is aligned */
1975 assert (aligned_OK (chunk2mem (p)));
1980 Properties of free chunks
1983 static void
1984 do_check_free_chunk (mstate av, mchunkptr p)
1986 INTERNAL_SIZE_T sz = p->size & ~(PREV_INUSE | NON_MAIN_ARENA);
1987 mchunkptr next = chunk_at_offset (p, sz);
1989 do_check_chunk (av, p);
1991 /* Chunk must claim to be free ... */
1992 assert (!inuse (p));
1993 assert (!chunk_is_mmapped (p));
1995 /* Unless a special marker, must have OK fields */
1996 if ((unsigned long) (sz) >= MINSIZE)
1998 assert ((sz & MALLOC_ALIGN_MASK) == 0);
1999 assert (aligned_OK (chunk2mem (p)));
2000 /* ... matching footer field */
2001 assert (next->prev_size == sz);
2002 /* ... and is fully consolidated */
2003 assert (prev_inuse (p));
2004 assert (next == av->top || inuse (next));
2006 /* ... and has minimally sane links */
2007 assert (p->fd->bk == p);
2008 assert (p->bk->fd == p);
2010 else /* markers are always of size SIZE_SZ */
2011 assert (sz == SIZE_SZ);
2015 Properties of inuse chunks
2018 static void
2019 do_check_inuse_chunk (mstate av, mchunkptr p)
2021 mchunkptr next;
2023 do_check_chunk (av, p);
2025 if (chunk_is_mmapped (p))
2026 return; /* mmapped chunks have no next/prev */
2028 /* Check whether it claims to be in use ... */
2029 assert (inuse (p));
2031 next = next_chunk (p);
2033 /* ... and is surrounded by OK chunks.
2034 Since more things can be checked with free chunks than inuse ones,
2035 if an inuse chunk borders them and debug is on, it's worth checking them.
2037 if (!prev_inuse (p))
2039 /* Note that we cannot even look at prev unless it is not inuse */
2040 mchunkptr prv = prev_chunk (p);
2041 assert (next_chunk (prv) == p);
2042 do_check_free_chunk (av, prv);
2045 if (next == av->top)
2047 assert (prev_inuse (next));
2048 assert (chunksize (next) >= MINSIZE);
2050 else if (!inuse (next))
2051 do_check_free_chunk (av, next);
2055 Properties of chunks recycled from fastbins
2058 static void
2059 do_check_remalloced_chunk (mstate av, mchunkptr p, INTERNAL_SIZE_T s)
2061 INTERNAL_SIZE_T sz = p->size & ~(PREV_INUSE | NON_MAIN_ARENA);
2063 if (!chunk_is_mmapped (p))
2065 assert (av == arena_for_chunk (p));
2066 if (chunk_non_main_arena (p))
2067 assert (av != &main_arena);
2068 else
2069 assert (av == &main_arena);
2072 do_check_inuse_chunk (av, p);
2074 /* Legal size ... */
2075 assert ((sz & MALLOC_ALIGN_MASK) == 0);
2076 assert ((unsigned long) (sz) >= MINSIZE);
2077 /* ... and alignment */
2078 assert (aligned_OK (chunk2mem (p)));
2079 /* chunk is less than MINSIZE more than request */
2080 assert ((long) (sz) - (long) (s) >= 0);
2081 assert ((long) (sz) - (long) (s + MINSIZE) < 0);
2085 Properties of nonrecycled chunks at the point they are malloced
2088 static void
2089 do_check_malloced_chunk (mstate av, mchunkptr p, INTERNAL_SIZE_T s)
2091 /* same as recycled case ... */
2092 do_check_remalloced_chunk (av, p, s);
2095 ... plus, must obey implementation invariant that prev_inuse is
2096 always true of any allocated chunk; i.e., that each allocated
2097 chunk borders either a previously allocated and still in-use
2098 chunk, or the base of its memory arena. This is ensured
2099 by making all allocations from the `lowest' part of any found
2100 chunk. This does not necessarily hold however for chunks
2101 recycled via fastbins.
2104 assert (prev_inuse (p));
2109 Properties of malloc_state.
2111 This may be useful for debugging malloc, as well as detecting user
2112 programmer errors that somehow write into malloc_state.
2114 If you are extending or experimenting with this malloc, you can
2115 probably figure out how to hack this routine to print out or
2116 display chunk addresses, sizes, bins, and other instrumentation.
2119 static void
2120 do_check_malloc_state (mstate av)
2122 int i;
2123 mchunkptr p;
2124 mchunkptr q;
2125 mbinptr b;
2126 unsigned int idx;
2127 INTERNAL_SIZE_T size;
2128 unsigned long total = 0;
2129 int max_fast_bin;
2131 /* internal size_t must be no wider than pointer type */
2132 assert (sizeof (INTERNAL_SIZE_T) <= sizeof (char *));
2134 /* alignment is a power of 2 */
2135 assert ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT - 1)) == 0);
2137 /* cannot run remaining checks until fully initialized */
2138 if (av->top == 0 || av->top == initial_top (av))
2139 return;
2141 /* pagesize is a power of 2 */
2142 assert (powerof2(GLRO (dl_pagesize)));
2144 /* A contiguous main_arena is consistent with sbrk_base. */
2145 if (av == &main_arena && contiguous (av))
2146 assert ((char *) mp_.sbrk_base + av->system_mem ==
2147 (char *) av->top + chunksize (av->top));
2149 /* properties of fastbins */
2151 /* max_fast is in allowed range */
2152 assert ((get_max_fast () & ~1) <= request2size (MAX_FAST_SIZE));
2154 max_fast_bin = fastbin_index (get_max_fast ());
2156 for (i = 0; i < NFASTBINS; ++i)
2158 p = fastbin (av, i);
2160 /* The following test can only be performed for the main arena.
2161 While mallopt calls malloc_consolidate to get rid of all fast
2162 bins (especially those larger than the new maximum) this only
2163 happens for the main arena.  Trying to do this for any
2164 other arena would mean those arenas have to be locked and
2165 malloc_consolidate be called for them. This is excessive. And
2166 even if this is acceptable to somebody it still cannot solve
2167 the problem completely since if the arena is locked a
2168 concurrent malloc call might create a new arena which then
2169 could use the newly invalid fast bins. */
2171 /* all bins past max_fast are empty */
2172 if (av == &main_arena && i > max_fast_bin)
2173 assert (p == 0);
2175 while (p != 0)
2177 /* each chunk claims to be inuse */
2178 do_check_inuse_chunk (av, p);
2179 total += chunksize (p);
2180 /* chunk belongs in this bin */
2181 assert (fastbin_index (chunksize (p)) == i);
2182 p = p->fd;
2186 if (total != 0)
2187 assert (have_fastchunks (av));
2188 else if (!have_fastchunks (av))
2189 assert (total == 0);
2191 /* check normal bins */
2192 for (i = 1; i < NBINS; ++i)
2194 b = bin_at (av, i);
2196 /* binmap is accurate (except for bin 1 == unsorted_chunks) */
2197 if (i >= 2)
2199 unsigned int binbit = get_binmap (av, i);
2200 int empty = last (b) == b;
2201 if (!binbit)
2202 assert (empty);
2203 else if (!empty)
2204 assert (binbit);
2207 for (p = last (b); p != b; p = p->bk)
2209 /* each chunk claims to be free */
2210 do_check_free_chunk (av, p);
2211 size = chunksize (p);
2212 total += size;
2213 if (i >= 2)
2215 /* chunk belongs in bin */
2216 idx = bin_index (size);
2217 assert (idx == i);
2218 /* lists are sorted */
2219 assert (p->bk == b ||
2220 (unsigned long) chunksize (p->bk) >= (unsigned long) chunksize (p));
2222 if (!in_smallbin_range (size))
2224 if (p->fd_nextsize != NULL)
2226 if (p->fd_nextsize == p)
2227 assert (p->bk_nextsize == p);
2228 else
2230 if (p->fd_nextsize == first (b))
2231 assert (chunksize (p) < chunksize (p->fd_nextsize));
2232 else
2233 assert (chunksize (p) > chunksize (p->fd_nextsize));
2235 if (p == first (b))
2236 assert (chunksize (p) > chunksize (p->bk_nextsize));
2237 else
2238 assert (chunksize (p) < chunksize (p->bk_nextsize));
2241 else
2242 assert (p->bk_nextsize == NULL);
2245 else if (!in_smallbin_range (size))
2246 assert (p->fd_nextsize == NULL && p->bk_nextsize == NULL);
2247 /* chunk is followed by a legal chain of inuse chunks */
2248 for (q = next_chunk (p);
2249 (q != av->top && inuse (q) &&
2250 (unsigned long) (chunksize (q)) >= MINSIZE);
2251 q = next_chunk (q))
2252 do_check_inuse_chunk (av, q);
2256 /* top chunk is OK */
2257 check_chunk (av, av->top);
2259 #endif
2262 /* ----------------- Support for debugging hooks -------------------- */
2263 #include "hooks.c"
2266 /* ----------- Routines dealing with system allocation -------------- */
2269 sysmalloc handles malloc cases requiring more memory from the system.
2270 On entry, it is assumed that av->top does not have enough
2271 space to service request for nb bytes, thus requiring that av->top
2272 be extended or replaced.
2275 static void *
2276 sysmalloc (INTERNAL_SIZE_T nb, mstate av)
2278 mchunkptr old_top; /* incoming value of av->top */
2279 INTERNAL_SIZE_T old_size; /* its size */
2280 char *old_end; /* its end address */
2282 long size; /* arg to first MORECORE or mmap call */
2283 char *brk; /* return value from MORECORE */
2285 long correction; /* arg to 2nd MORECORE call */
2286 char *snd_brk; /* 2nd return val */
2288 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
2289 INTERNAL_SIZE_T end_misalign; /* partial page left at end of new space */
2290 char *aligned_brk; /* aligned offset into brk */
2292 mchunkptr p; /* the allocated/returned chunk */
2293 mchunkptr remainder; /* remainder from allocation */
2294 unsigned long remainder_size; /* its size */
2297 size_t pagesize = GLRO (dl_pagesize);
2298 bool tried_mmap = false;
2302 If have mmap, and the request size meets the mmap threshold, and
2303 the system supports mmap, and there are few enough currently
2304 allocated mmapped regions, try to directly map this request
2305 rather than expanding top.
2308 if (av == NULL
2309 || ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
2310 && (mp_.n_mmaps < mp_.n_mmaps_max)))
2312 char *mm; /* return value from mmap call*/
2314 try_mmap:
2316 Round up size to nearest page. For mmapped chunks, the overhead
2317 is one SIZE_SZ unit larger than for normal chunks, because there
2318 is no following chunk whose prev_size field could be used.
2320 See the front_misalign handling below, for glibc there is no
2321 need for further alignments unless we have high alignment.
2323 if (MALLOC_ALIGNMENT == 2 * SIZE_SZ)
2324 size = ALIGN_UP (nb + SIZE_SZ, pagesize);
2325 else
2326 size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);
2327 tried_mmap = true;
2329 /* Don't try if size wraps around 0 */
2330 if ((unsigned long) (size) > (unsigned long) (nb))
2332 mm = (char *) (MMAP (0, size, PROT_READ | PROT_WRITE, 0));
2334 if (mm != MAP_FAILED)
2337 The offset to the start of the mmapped region is stored
2338 in the prev_size field of the chunk. This allows us to adjust
2339 returned start address to meet alignment requirements here
2340 and in memalign(), and still be able to compute proper
2341 address argument for later munmap in free() and realloc().
2344 if (MALLOC_ALIGNMENT == 2 * SIZE_SZ)
2346 /* For glibc, chunk2mem increases the address by 2*SIZE_SZ and
2347 MALLOC_ALIGN_MASK is 2*SIZE_SZ-1. Each mmap'ed area is page
2348 aligned and therefore definitely MALLOC_ALIGN_MASK-aligned. */
2349 assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
2350 front_misalign = 0;
2352 else
2353 front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
2354 if (front_misalign > 0)
2356 correction = MALLOC_ALIGNMENT - front_misalign;
2357 p = (mchunkptr) (mm + correction);
2358 p->prev_size = correction;
2359 set_head (p, (size - correction) | IS_MMAPPED);
2361 else
2363 p = (mchunkptr) mm;
2364 set_head (p, size | IS_MMAPPED);
2367 /* update statistics */
2369 int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
2370 atomic_max (&mp_.max_n_mmaps, new);
2372 unsigned long sum;
2373 sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
2374 atomic_max (&mp_.max_mmapped_mem, sum);
2376 check_chunk (av, p);
2378 return chunk2mem (p);
2383 /* There are no usable arenas and mmap also failed. */
2384 if (av == NULL)
2385 return 0;
2387 /* Record incoming configuration of top */
2389 old_top = av->top;
2390 old_size = chunksize (old_top);
2391 old_end = (char *) (chunk_at_offset (old_top, old_size));
2393 brk = snd_brk = (char *) (MORECORE_FAILURE);
2396 If not the first time through, we require old_size to be
2397 at least MINSIZE and to have prev_inuse set.
2400 assert ((old_top == initial_top (av) && old_size == 0) ||
2401 ((unsigned long) (old_size) >= MINSIZE &&
2402 prev_inuse (old_top) &&
2403 ((unsigned long) old_end & (pagesize - 1)) == 0));
2405 /* Precondition: not enough current space to satisfy nb request */
2406 assert ((unsigned long) (old_size) < (unsigned long) (nb + MINSIZE));
2409 if (av != &main_arena)
2411 heap_info *old_heap, *heap;
2412 size_t old_heap_size;
2414 /* First try to extend the current heap. */
2415 old_heap = heap_for_ptr (old_top);
2416 old_heap_size = old_heap->size;
2417 if ((long) (MINSIZE + nb - old_size) > 0
2418 && grow_heap (old_heap, MINSIZE + nb - old_size) == 0)
2420 av->system_mem += old_heap->size - old_heap_size;
2421 set_head (old_top, (((char *) old_heap + old_heap->size) - (char *) old_top)
2422 | PREV_INUSE);
2424 else if ((heap = new_heap (nb + (MINSIZE + sizeof (*heap)), mp_.top_pad)))
2426 /* Use a newly allocated heap. */
2427 heap->ar_ptr = av;
2428 heap->prev = old_heap;
2429 av->system_mem += heap->size;
2430 /* Set up the new top. */
2431 top (av) = chunk_at_offset (heap, sizeof (*heap));
2432 set_head (top (av), (heap->size - sizeof (*heap)) | PREV_INUSE);
2434 /* Setup fencepost and free the old top chunk with a multiple of
2435 MALLOC_ALIGNMENT in size. */
2436 /* The fencepost takes at least MINSIZE bytes, because it might
2437 become the top chunk again later. Note that a footer is set
2438 up, too, although the chunk is marked in use. */
2439 old_size = (old_size - MINSIZE) & ~MALLOC_ALIGN_MASK;
2440 set_head (chunk_at_offset (old_top, old_size + 2 * SIZE_SZ), 0 | PREV_INUSE);
2441 if (old_size >= MINSIZE)
2443 set_head (chunk_at_offset (old_top, old_size), (2 * SIZE_SZ) | PREV_INUSE);
2444 set_foot (chunk_at_offset (old_top, old_size), (2 * SIZE_SZ));
2445 set_head (old_top, old_size | PREV_INUSE | NON_MAIN_ARENA);
2446 _int_free (av, old_top, 1);
2448 else
2450 set_head (old_top, (old_size + 2 * SIZE_SZ) | PREV_INUSE);
2451 set_foot (old_top, (old_size + 2 * SIZE_SZ));
2454 else if (!tried_mmap)
2455 /* We can at least try to use mmap to get memory.  */
2456 goto try_mmap;
2458 else /* av == main_arena */
2461 { /* Request enough space for nb + pad + overhead */
2462 size = nb + mp_.top_pad + MINSIZE;
2465 If contiguous, we can subtract out existing space that we hope to
2466 combine with new space. We add it back later only if
2467 we don't actually get contiguous space.
2470 if (contiguous (av))
2471 size -= old_size;
2474 Round to a multiple of page size.
2475 If MORECORE is not contiguous, this ensures that we only call it
2476 with whole-page arguments. And if MORECORE is contiguous and
2477 this is not first time through, this preserves page-alignment of
2478 previous calls. Otherwise, we correct to page-align below.
2481 size = ALIGN_UP (size, pagesize);
2484 Don't try to call MORECORE if argument is so big as to appear
2485 negative. Note that since mmap takes size_t arg, it may succeed
2486 below even if we cannot call MORECORE.
2489 if (size > 0)
2491 brk = (char *) (MORECORE (size));
2492 LIBC_PROBE (memory_sbrk_more, 2, brk, size);
2495 if (brk != (char *) (MORECORE_FAILURE))
2497 /* Call the `morecore' hook if necessary. */
2498 void (*hook) (void) = atomic_forced_read (__after_morecore_hook);
2499 if (__builtin_expect (hook != NULL, 0))
2500 (*hook)();
2502 else
2505 If have mmap, try using it as a backup when MORECORE fails or
2506 cannot be used. This is worth doing on systems that have "holes" in
2507 address space, so sbrk cannot extend to give contiguous space, but
2508 space is available elsewhere. Note that we ignore mmap max count
2509 and threshold limits, since the space will not be used as a
2510 segregated mmap region.
2513 /* Cannot merge with old top, so add its size back in */
2514 if (contiguous (av))
2515 size = ALIGN_UP (size + old_size, pagesize);
2517 /* If we are relying on mmap as backup, then use larger units */
2518 if ((unsigned long) (size) < (unsigned long) (MMAP_AS_MORECORE_SIZE))
2519 size = MMAP_AS_MORECORE_SIZE;
2521 /* Don't try if size wraps around 0 */
2522 if ((unsigned long) (size) > (unsigned long) (nb))
2524 char *mbrk = (char *) (MMAP (0, size, PROT_READ | PROT_WRITE, 0));
2526 if (mbrk != MAP_FAILED)
2528 /* We do not need, and cannot use, another sbrk call to find end */
2529 brk = mbrk;
2530 snd_brk = brk + size;
2533 Record that we no longer have a contiguous sbrk region.
2534 After the first time mmap is used as backup, we do not
2535 ever rely on contiguous space since this could incorrectly
2536 bridge regions.
2538 set_noncontiguous (av);
2543 if (brk != (char *) (MORECORE_FAILURE))
2545 if (mp_.sbrk_base == 0)
2546 mp_.sbrk_base = brk;
2547 av->system_mem += size;
2550 If MORECORE extends previous space, we can likewise extend top size.
2553 if (brk == old_end && snd_brk == (char *) (MORECORE_FAILURE))
2554 set_head (old_top, (size + old_size) | PREV_INUSE);
2556 else if (contiguous (av) && old_size && brk < old_end)
2558 /* Oops!  Someone else killed our space.  Can't touch anything. */
2559 malloc_printerr (3, "break adjusted to free malloc space", brk,
2560 av);
2564 Otherwise, make adjustments:
2566 * If the first time through or noncontiguous, we need to call sbrk
2567 just to find out where the end of memory lies.
2569 * We need to ensure that all returned chunks from malloc will meet
2570 MALLOC_ALIGNMENT
2572 * If there was an intervening foreign sbrk, we need to adjust sbrk
2573 request size to account for fact that we will not be able to
2574 combine new space with existing space in old_top.
2576 * Almost all systems internally allocate whole pages at a time, in
2577 which case we might as well use the whole last page of request.
2578 So we allocate enough more memory to hit a page boundary now,
2579 which in turn causes future contiguous calls to page-align.
2582 else
2584 front_misalign = 0;
2585 end_misalign = 0;
2586 correction = 0;
2587 aligned_brk = brk;
2589 /* handle contiguous cases */
2590 if (contiguous (av))
2592 /* Count foreign sbrk as system_mem. */
2593 if (old_size)
2594 av->system_mem += brk - old_end;
2596 /* Guarantee alignment of first new chunk made from this space */
2598 front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
2599 if (front_misalign > 0)
2602 Skip over some bytes to arrive at an aligned position.
2603 We don't need to specially mark these wasted front bytes.
2604 They will never be accessed anyway because
2605 prev_inuse of av->top (and any chunk created from its start)
2606 is always true after initialization.
2609 correction = MALLOC_ALIGNMENT - front_misalign;
2610 aligned_brk += correction;
2614 If this isn't adjacent to existing space, then we will not
2615 be able to merge with old_top space, so must add to 2nd request.
2618 correction += old_size;
2620 /* Extend the end address to hit a page boundary */
2621 end_misalign = (INTERNAL_SIZE_T) (brk + size + correction);
2622 correction += (ALIGN_UP (end_misalign, pagesize)) - end_misalign;
2624 assert (correction >= 0);
2625 snd_brk = (char *) (MORECORE (correction));
2628 If can't allocate correction, try to at least find out current
2629 brk. It might be enough to proceed without failing.
2631 Note that if second sbrk did NOT fail, we assume that space
2632 is contiguous with first sbrk. This is a safe assumption unless
2633 program is multithreaded but doesn't use locks and a foreign sbrk
2634 occurred between our first and second calls.
2637 if (snd_brk == (char *) (MORECORE_FAILURE))
2639 correction = 0;
2640 snd_brk = (char *) (MORECORE (0));
2642 else
2644 /* Call the `morecore' hook if necessary. */
2645 void (*hook) (void) = atomic_forced_read (__after_morecore_hook);
2646 if (__builtin_expect (hook != NULL, 0))
2647 (*hook)();
2651 /* handle non-contiguous cases */
2652 else
2654 if (MALLOC_ALIGNMENT == 2 * SIZE_SZ)
2655 /* MORECORE/mmap must correctly align */
2656 assert (((unsigned long) chunk2mem (brk) & MALLOC_ALIGN_MASK) == 0);
2657 else
2659 front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
2660 if (front_misalign > 0)
2663 Skip over some bytes to arrive at an aligned position.
2664 We don't need to specially mark these wasted front bytes.
2665 They will never be accessed anyway because
2666 prev_inuse of av->top (and any chunk created from its start)
2667 is always true after initialization.
2670 aligned_brk += MALLOC_ALIGNMENT - front_misalign;
2674 /* Find out current end of memory */
2675 if (snd_brk == (char *) (MORECORE_FAILURE))
2677 snd_brk = (char *) (MORECORE (0));
2681 /* Adjust top based on results of second sbrk */
2682 if (snd_brk != (char *) (MORECORE_FAILURE))
2684 av->top = (mchunkptr) aligned_brk;
2685 set_head (av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
2686 av->system_mem += correction;
2689 If not the first time through, we either have a
2690 gap due to foreign sbrk or a non-contiguous region. Insert a
2691 double fencepost at old_top to prevent consolidation with space
2692 we don't own. These fenceposts are artificial chunks that are
2693 marked as inuse and are in any case too small to use. We need
2694 two to make sizes and alignments work out.
2697 if (old_size != 0)
2700 Shrink old_top to insert fenceposts, keeping size a
2701 multiple of MALLOC_ALIGNMENT. We know there is at least
2702 enough space in old_top to do this.
2704 old_size = (old_size - 4 * SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2705 set_head (old_top, old_size | PREV_INUSE);
2708 Note that the following assignments completely overwrite
2709 old_top when old_size was previously MINSIZE. This is
2710 intentional. We need the fencepost, even if old_top otherwise gets
2711 lost.
2713 chunk_at_offset (old_top, old_size)->size =
2714 (2 * SIZE_SZ) | PREV_INUSE;
2716 chunk_at_offset (old_top, old_size + 2 * SIZE_SZ)->size =
2717 (2 * SIZE_SZ) | PREV_INUSE;
2719 /* If possible, release the rest. */
2720 if (old_size >= MINSIZE)
2722 _int_free (av, old_top, 1);
2728 } /* if (av != &main_arena) */
2730 if ((unsigned long) av->system_mem > (unsigned long) (av->max_system_mem))
2731 av->max_system_mem = av->system_mem;
2732 check_malloc_state (av);
2734 /* finally, do the allocation */
2735 p = av->top;
2736 size = chunksize (p);
2738 /* check that one of the above allocation paths succeeded */
2739 if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
2741 remainder_size = size - nb;
2742 remainder = chunk_at_offset (p, nb);
2743 av->top = remainder;
2744 set_head (p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
2745 set_head (remainder, remainder_size | PREV_INUSE);
2746 check_malloced_chunk (av, p, nb);
2747 return chunk2mem (p);
2750 /* catch all failure paths */
2751 __set_errno (ENOMEM);
2752 return 0;
2757 systrim is an inverse of sorts to sysmalloc. It gives memory back
2758 to the system (via negative arguments to sbrk) if there is unused
2759 memory at the `high' end of the malloc pool. It is called
2760 automatically by free() when top space exceeds the trim
2761 threshold. It is also called by the public malloc_trim routine. It
2762 returns 1 if it actually released any memory, else 0.
static int
systrim (size_t pad, mstate av)
{
  long top_size;         /* Amount of top-most memory */
  long extra;            /* Amount to release */
  long released;         /* Amount actually released */
  char *current_brk;     /* address returned by pre-check sbrk call */
  char *new_brk;         /* address returned by post-check sbrk call */
  size_t pagesize;
  long top_area;

  pagesize = GLRO (dl_pagesize);
  top_size = chunksize (av->top);

  top_area = top_size - MINSIZE - 1;
  if (top_area <= pad)
    return 0;

  /* Release in pagesize units and round down to the nearest page.  */
  extra = ALIGN_DOWN (top_area - pad, pagesize);

  if (extra == 0)
    return 0;

  /*
     Only proceed if end of memory is where we last set it.
     This avoids problems if there were foreign sbrk calls.
   */
  current_brk = (char *) (MORECORE (0));
  if (current_brk == (char *) (av->top) + top_size)
    {
      /*
         Attempt to release memory.  We ignore MORECORE return value,
         and instead call again to find out where new end of memory is.
         This avoids problems if first call releases less than we asked,
         or if failure somehow altered brk value.  (We could still
         encounter problems if it altered brk in some very bad way,
         but the only thing we can do is adjust anyway, which will cause
         some downstream failure.)
       */

      MORECORE (-extra);
      /* Call the `morecore' hook if necessary.  */
      void (*hook) (void) = atomic_forced_read (__after_morecore_hook);
      if (__builtin_expect (hook != NULL, 0))
        (*hook)();
      new_brk = (char *) (MORECORE (0));

      LIBC_PROBE (memory_sbrk_less, 2, new_brk, extra);

      if (new_brk != (char *) MORECORE_FAILURE)
        {
          released = (long) (current_brk - new_brk);

          if (released != 0)
            {
              /* Success.  Adjust top.  */
              av->system_mem -= released;
              set_head (av->top, (top_size - released) | PREV_INUSE);
              check_malloc_state (av);
              return 1;
            }
        }
    }
  return 0;
}
static void
internal_function
munmap_chunk (mchunkptr p)
{
  INTERNAL_SIZE_T size = chunksize (p);

  assert (chunk_is_mmapped (p));

  /* Do nothing if the chunk is a faked mmapped chunk in the dumped
     main arena.  We never free this memory.  */
  if (DUMPED_MAIN_ARENA_CHUNK (p))
    return;

  uintptr_t block = (uintptr_t) p - p->prev_size;
  size_t total_size = p->prev_size + size;
  /* Unfortunately we have to do the compiler's job by hand here.  Normally
     we would test BLOCK and TOTAL-SIZE separately for compliance with the
     page size.  But gcc does not recognize the optimization possibility
     (at the moment at least) so we combine the two values into one before
     the bit test.  */
  if (__builtin_expect (((block | total_size) & (GLRO (dl_pagesize) - 1)) != 0, 0))
    {
      malloc_printerr (check_action, "munmap_chunk(): invalid pointer",
                       chunk2mem (p), NULL);
      return;
    }

  atomic_decrement (&mp_.n_mmaps);
  atomic_add (&mp_.mmapped_mem, -total_size);

  /* If munmap failed the process virtual memory address space is in a
     bad shape.  Just leave the block hanging around, the process will
     terminate shortly anyway since not much can be done.  */
  __munmap ((char *) block, total_size);
}
#if HAVE_MREMAP

static mchunkptr
internal_function
mremap_chunk (mchunkptr p, size_t new_size)
{
  size_t pagesize = GLRO (dl_pagesize);
  INTERNAL_SIZE_T offset = p->prev_size;
  INTERNAL_SIZE_T size = chunksize (p);
  char *cp;

  assert (chunk_is_mmapped (p));
  assert (((size + offset) & (GLRO (dl_pagesize) - 1)) == 0);

  /* Note the extra SIZE_SZ overhead as in mmap_chunk().  */
  new_size = ALIGN_UP (new_size + offset + SIZE_SZ, pagesize);

  /* No need to remap if the number of pages does not change.  */
  if (size + offset == new_size)
    return p;

  cp = (char *) __mremap ((char *) p - offset, size + offset, new_size,
                          MREMAP_MAYMOVE);

  if (cp == MAP_FAILED)
    return 0;

  p = (mchunkptr) (cp + offset);

  assert (aligned_OK (chunk2mem (p)));

  assert ((p->prev_size == offset));
  set_head (p, (new_size - offset) | IS_MMAPPED);

  INTERNAL_SIZE_T new;
  new = atomic_exchange_and_add (&mp_.mmapped_mem, new_size - size - offset)
        + new_size - size - offset;
  atomic_max (&mp_.max_mmapped_mem, new);
  return p;
}
#endif /* HAVE_MREMAP */
/*------------------------ Public wrappers.  --------------------------------*/

void *
__libc_malloc (size_t bytes)
{
  mstate ar_ptr;
  void *victim;

  void *(*hook) (size_t, const void *)
    = atomic_forced_read (__malloc_hook);
  if (__builtin_expect (hook != NULL, 0))
    return (*hook)(bytes, RETURN_ADDRESS (0));

  arena_get (ar_ptr, bytes);

  victim = _int_malloc (ar_ptr, bytes);
  /* Retry with another arena only if we were able to find a usable arena
     before.  */
  if (!victim && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_malloc_retry, 1, bytes);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  if (ar_ptr != NULL)
    (void) mutex_unlock (&ar_ptr->mutex);

  assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          ar_ptr == arena_for_chunk (mem2chunk (victim)));
  return victim;
}
libc_hidden_def (__libc_malloc)
void
__libc_free (void *mem)
{
  mstate ar_ptr;
  mchunkptr p;                          /* chunk corresponding to mem */

  void (*hook) (void *, const void *)
    = atomic_forced_read (__free_hook);
  if (__builtin_expect (hook != NULL, 0))
    {
      (*hook)(mem, RETURN_ADDRESS (0));
      return;
    }

  if (mem == 0)                              /* free(0) has no effect */
    return;

  p = mem2chunk (mem);

  if (chunk_is_mmapped (p))                       /* release mmapped memory. */
    {
      /* See if the dynamic brk/mmap threshold needs adjusting.
         Dumped fake mmapped chunks do not affect the threshold.  */
      if (!mp_.no_dyn_threshold
          && p->size > mp_.mmap_threshold
          && p->size <= DEFAULT_MMAP_THRESHOLD_MAX
          && !DUMPED_MAIN_ARENA_CHUNK (p))
        {
          mp_.mmap_threshold = chunksize (p);
          mp_.trim_threshold = 2 * mp_.mmap_threshold;
          LIBC_PROBE (memory_mallopt_free_dyn_thresholds, 2,
                      mp_.mmap_threshold, mp_.trim_threshold);
        }
      munmap_chunk (p);
      return;
    }

  ar_ptr = arena_for_chunk (p);
  _int_free (ar_ptr, p, 0);
}
libc_hidden_def (__libc_free)
void *
__libc_realloc (void *oldmem, size_t bytes)
{
  mstate ar_ptr;
  INTERNAL_SIZE_T nb;         /* padded request size */

  void *newp;                 /* chunk to return */

  void *(*hook) (void *, size_t, const void *) =
    atomic_forced_read (__realloc_hook);
  if (__builtin_expect (hook != NULL, 0))
    return (*hook)(oldmem, bytes, RETURN_ADDRESS (0));

#if REALLOC_ZERO_BYTES_FREES
  if (bytes == 0 && oldmem != NULL)
    {
      __libc_free (oldmem); return 0;
    }
#endif

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0)
    return __libc_malloc (bytes);

  /* chunk corresponding to oldmem */
  const mchunkptr oldp = mem2chunk (oldmem);
  /* its size */
  const INTERNAL_SIZE_T oldsize = chunksize (oldp);

  if (chunk_is_mmapped (oldp))
    ar_ptr = NULL;
  else
    ar_ptr = arena_for_chunk (oldp);

  /* Little security check which won't hurt performance: the allocator
     never wraps around at the end of the address space.  Therefore
     we can exclude some size values which might appear here by
     accident or by "design" from some intruder.  We need to bypass
     this check for dumped fake mmap chunks from the old main arena
     because the new malloc may provide additional alignment.  */
  if ((__builtin_expect ((uintptr_t) oldp > (uintptr_t) -oldsize, 0)
       || __builtin_expect (misaligned_chunk (oldp), 0))
      && !DUMPED_MAIN_ARENA_CHUNK (oldp))
    {
      malloc_printerr (check_action, "realloc(): invalid pointer", oldmem,
                       ar_ptr);
      return NULL;
    }

  checked_request2size (bytes, nb);

  if (chunk_is_mmapped (oldp))
    {
      /* If this is a faked mmapped chunk from the dumped main arena,
         always make a copy (and do not free the old chunk).  */
      if (DUMPED_MAIN_ARENA_CHUNK (oldp))
        {
          /* Must alloc, copy, free. */
          void *newmem = __libc_malloc (bytes);
          if (newmem == 0)
            return NULL;
          /* Copy as many bytes as are available from the old chunk
             and fit into the new size.  NB: The overhead for faked
             mmapped chunks is only SIZE_SZ, not 2 * SIZE_SZ as for
             regular mmapped chunks.  */
          if (bytes > oldsize - SIZE_SZ)
            bytes = oldsize - SIZE_SZ;
          memcpy (newmem, oldmem, bytes);
          return newmem;
        }

      void *newmem;

#if HAVE_MREMAP
      newp = mremap_chunk (oldp, nb);
      if (newp)
        return chunk2mem (newp);
#endif
      /* Note the extra SIZE_SZ overhead. */
      if (oldsize - SIZE_SZ >= nb)
        return oldmem;                         /* do nothing */

      /* Must alloc, copy, free. */
      newmem = __libc_malloc (bytes);
      if (newmem == 0)
        return 0;              /* propagate failure */

      memcpy (newmem, oldmem, oldsize - 2 * SIZE_SZ);
      munmap_chunk (oldp);
      return newmem;
    }

  (void) mutex_lock (&ar_ptr->mutex);

  newp = _int_realloc (ar_ptr, oldp, oldsize, nb);

  (void) mutex_unlock (&ar_ptr->mutex);
  assert (!newp || chunk_is_mmapped (mem2chunk (newp)) ||
          ar_ptr == arena_for_chunk (mem2chunk (newp)));

  if (newp == NULL)
    {
      /* Try harder to allocate memory in other arenas.  */
      LIBC_PROBE (memory_realloc_retry, 2, bytes, oldmem);
      newp = __libc_malloc (bytes);
      if (newp != NULL)
        {
          memcpy (newp, oldmem, oldsize - SIZE_SZ);
          _int_free (ar_ptr, oldp, 0);
        }
    }

  return newp;
}
libc_hidden_def (__libc_realloc)
void *
__libc_memalign (size_t alignment, size_t bytes)
{
  void *address = RETURN_ADDRESS (0);
  return _mid_memalign (alignment, bytes, address);
}

static void *
_mid_memalign (size_t alignment, size_t bytes, void *address)
{
  mstate ar_ptr;
  void *p;

  void *(*hook) (size_t, size_t, const void *) =
    atomic_forced_read (__memalign_hook);
  if (__builtin_expect (hook != NULL, 0))
    return (*hook)(alignment, bytes, address);

  /* If we need less alignment than we give anyway, just relay to malloc.  */
  if (alignment <= MALLOC_ALIGNMENT)
    return __libc_malloc (bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */
  if (alignment < MINSIZE)
    alignment = MINSIZE;

  /* If the alignment is greater than SIZE_MAX / 2 + 1 it cannot be a
     power of 2 and will cause overflow in the check below.  */
  if (alignment > SIZE_MAX / 2 + 1)
    {
      __set_errno (EINVAL);
      return 0;
    }

  /* Check for overflow.  */
  if (bytes > SIZE_MAX - alignment - MINSIZE)
    {
      __set_errno (ENOMEM);
      return 0;
    }

  /* Make sure alignment is power of 2.  */
  if (!powerof2 (alignment))
    {
      size_t a = MALLOC_ALIGNMENT * 2;
      while (a < alignment)
        a <<= 1;
      alignment = a;
    }

  arena_get (ar_ptr, bytes + alignment + MINSIZE);

  p = _int_memalign (ar_ptr, alignment, bytes);
  if (!p && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_memalign_retry, 2, bytes, alignment);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      p = _int_memalign (ar_ptr, alignment, bytes);
    }

  if (ar_ptr != NULL)
    (void) mutex_unlock (&ar_ptr->mutex);

  assert (!p || chunk_is_mmapped (mem2chunk (p)) ||
          ar_ptr == arena_for_chunk (mem2chunk (p)));
  return p;
}
/* For ISO C11.  */
weak_alias (__libc_memalign, aligned_alloc)
libc_hidden_def (__libc_memalign)
void *
__libc_valloc (size_t bytes)
{
  if (__malloc_initialized < 0)
    ptmalloc_init ();

  void *address = RETURN_ADDRESS (0);
  size_t pagesize = GLRO (dl_pagesize);
  return _mid_memalign (pagesize, bytes, address);
}

void *
__libc_pvalloc (size_t bytes)
{
  if (__malloc_initialized < 0)
    ptmalloc_init ();

  void *address = RETURN_ADDRESS (0);
  size_t pagesize = GLRO (dl_pagesize);
  size_t rounded_bytes = ALIGN_UP (bytes, pagesize);

  /* Check for overflow.  */
  if (bytes > SIZE_MAX - 2 * pagesize - MINSIZE)
    {
      __set_errno (ENOMEM);
      return 0;
    }

  return _mid_memalign (pagesize, rounded_bytes, address);
}
void *
__libc_calloc (size_t n, size_t elem_size)
{
  mstate av;
  mchunkptr oldtop, p;
  INTERNAL_SIZE_T bytes, sz, csz, oldtopsize;
  void *mem;
  unsigned long clearsize;
  unsigned long nclears;
  INTERNAL_SIZE_T *d;

  /* size_t is unsigned so the behavior on overflow is defined.  */
  bytes = n * elem_size;
#define HALF_INTERNAL_SIZE_T \
  (((INTERNAL_SIZE_T) 1) << (8 * sizeof (INTERNAL_SIZE_T) / 2))
  if (__builtin_expect ((n | elem_size) >= HALF_INTERNAL_SIZE_T, 0))
    {
      if (elem_size != 0 && bytes / elem_size != n)
        {
          __set_errno (ENOMEM);
          return 0;
        }
    }

  void *(*hook) (size_t, const void *) =
    atomic_forced_read (__malloc_hook);
  if (__builtin_expect (hook != NULL, 0))
    {
      sz = bytes;
      mem = (*hook)(sz, RETURN_ADDRESS (0));
      if (mem == 0)
        return 0;

      return memset (mem, 0, sz);
    }

  sz = bytes;

  arena_get (av, sz);
  if (av)
    {
      /* Check if we hand out the top chunk, in which case there may be no
         need to clear. */
#if MORECORE_CLEARS
      oldtop = top (av);
      oldtopsize = chunksize (top (av));
# if MORECORE_CLEARS < 2
      /* Only newly allocated memory is guaranteed to be cleared.  */
      if (av == &main_arena &&
          oldtopsize < mp_.sbrk_base + av->max_system_mem - (char *) oldtop)
        oldtopsize = (mp_.sbrk_base + av->max_system_mem - (char *) oldtop);
# endif
      if (av != &main_arena)
        {
          heap_info *heap = heap_for_ptr (oldtop);
          if (oldtopsize < (char *) heap + heap->mprotect_size - (char *) oldtop)
            oldtopsize = (char *) heap + heap->mprotect_size - (char *) oldtop;
        }
#endif
    }
  else
    {
      /* No usable arenas.  */
      oldtop = 0;
      oldtopsize = 0;
    }
  mem = _int_malloc (av, sz);

  assert (!mem || chunk_is_mmapped (mem2chunk (mem)) ||
          av == arena_for_chunk (mem2chunk (mem)));

  if (mem == 0 && av != NULL)
    {
      LIBC_PROBE (memory_calloc_retry, 1, sz);
      av = arena_get_retry (av, sz);
      mem = _int_malloc (av, sz);
    }

  if (av != NULL)
    (void) mutex_unlock (&av->mutex);

  /* Allocation failed even after a retry.  */
  if (mem == 0)
    return 0;

  p = mem2chunk (mem);

  /* Two optional cases in which clearing not necessary */
  if (chunk_is_mmapped (p))
    {
      if (__builtin_expect (perturb_byte, 0))
        return memset (mem, 0, sz);

      return mem;
    }

  csz = chunksize (p);

#if MORECORE_CLEARS
  if (perturb_byte == 0 && (p == oldtop && csz > oldtopsize))
    {
      /* clear only the bytes from non-freshly-sbrked memory */
      csz = oldtopsize;
    }
#endif

  /* Unroll clear of <= 36 bytes (72 if 8byte sizes).  We know that
     contents have an odd number of INTERNAL_SIZE_T-sized words;
     minimally 3.  */
  d = (INTERNAL_SIZE_T *) mem;
  clearsize = csz - SIZE_SZ;
  nclears = clearsize / sizeof (INTERNAL_SIZE_T);
  assert (nclears >= 3);

  if (nclears > 9)
    return memset (d, 0, clearsize);

  else
    {
      *(d + 0) = 0;
      *(d + 1) = 0;
      *(d + 2) = 0;
      if (nclears > 4)
        {
          *(d + 3) = 0;
          *(d + 4) = 0;
          if (nclears > 6)
            {
              *(d + 5) = 0;
              *(d + 6) = 0;
              if (nclears > 8)
                {
                  *(d + 7) = 0;
                  *(d + 8) = 0;
                }
            }
        }
    }

  return mem;
}
/*
   ------------------------------ malloc ------------------------------
 */

static void *
_int_malloc (mstate av, size_t bytes)
{
  INTERNAL_SIZE_T nb;               /* normalized request size */
  unsigned int idx;                 /* associated bin index */
  mbinptr bin;                      /* associated bin */

  mchunkptr victim;                 /* inspected/selected chunk */
  INTERNAL_SIZE_T size;             /* its size */
  int victim_index;                 /* its bin index */

  mchunkptr remainder;              /* remainder from a split */
  unsigned long remainder_size;     /* its size */

  unsigned int block;               /* bit map traverser */
  unsigned int bit;                 /* bit map traverser */
  unsigned int map;                 /* current word of binmap */

  mchunkptr fwd;                    /* misc temp for linking */
  mchunkptr bck;                    /* misc temp for linking */

  const char *errstr = NULL;

  /*
     Convert request size to internal form by adding SIZE_SZ bytes
     overhead plus possibly more to obtain necessary alignment and/or
     to obtain a size of at least MINSIZE, the smallest allocatable
     size.  Also, checked_request2size traps (returning 0) request sizes
     that are so large that they wrap around zero when padded and
     aligned.
   */

  checked_request2size (bytes, nb);

  /* There are no usable arenas.  Fall back to sysmalloc to get a chunk from
     mmap.  */
  if (__glibc_unlikely (av == NULL))
    {
      void *p = sysmalloc (nb, av);
      if (p != NULL)
        alloc_perturb (p, bytes);
      return p;
    }

  /*
     If the size qualifies as a fastbin, first check corresponding bin.
     This code is safe to execute even if av is not yet initialized, so we
     can try it without checking, which saves some time on this fast path.
   */

  if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
    {
      idx = fastbin_index (nb);
      mfastbinptr *fb = &fastbin (av, idx);
      mchunkptr pp = *fb;
      do
        {
          victim = pp;
          if (victim == NULL)
            break;
        }
      while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim))
             != victim);
      if (victim != 0)
        {
          if (__builtin_expect (fastbin_index (chunksize (victim)) != idx, 0))
            {
              errstr = "malloc(): memory corruption (fast)";
            errout:
              malloc_printerr (check_action, errstr, chunk2mem (victim), av);
              return NULL;
            }
          check_remalloced_chunk (av, victim, nb);
          void *p = chunk2mem (victim);
          alloc_perturb (p, bytes);
          return p;
        }
    }
  /*
     If a small request, check regular bin.  Since these "smallbins"
     hold one size each, no searching within bins is necessary.
     (For a large request, we need to wait until unsorted chunks are
     processed to find best fit.  But for small ones, fits are exact
     anyway, so we can check now, which is faster.)
   */

  if (in_smallbin_range (nb))
    {
      idx = smallbin_index (nb);
      bin = bin_at (av, idx);

      if ((victim = last (bin)) != bin)
        {
          if (victim == 0) /* initialization check */
            malloc_consolidate (av);
          else
            {
              bck = victim->bk;
              if (__glibc_unlikely (bck->fd != victim))
                {
                  errstr = "malloc(): smallbin double linked list corrupted";
                  goto errout;
                }
              set_inuse_bit_at_offset (victim, nb);
              bin->bk = bck;
              bck->fd = bin;

              if (av != &main_arena)
                victim->size |= NON_MAIN_ARENA;
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }
    }

  /*
     If this is a large request, consolidate fastbins before continuing.
     While it might look excessive to kill all fastbins before
     even seeing if there is space available, this avoids
     fragmentation problems normally associated with fastbins.
     Also, in practice, programs tend to have runs of either small or
     large requests, but less often mixtures, so consolidation is not
     invoked all that often in most programs.  And the programs that
     it is called frequently in otherwise tend to fragment.
   */

  else
    {
      idx = largebin_index (nb);
      if (have_fastchunks (av))
        malloc_consolidate (av);
    }
  /*
     Process recently freed or remaindered chunks, taking one only if
     it is exact fit, or, if this a small request, the chunk is remainder from
     the most recent non-exact fit.  Place other traversed chunks in
     bins.  Note that this step is the only place in any routine where
     chunks are placed in bins.

     The outer loop here is needed because we might not realize until
     near the end of malloc that we should have consolidated, so must
     do so and retry.  This happens at most once, and only when we would
     otherwise need to expand memory to service a "small" request.
   */

  for (;; )
    {
      int iters = 0;
      while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))
        {
          bck = victim->bk;
          if (__builtin_expect (victim->size <= 2 * SIZE_SZ, 0)
              || __builtin_expect (victim->size > av->system_mem, 0))
            malloc_printerr (check_action, "malloc(): memory corruption",
                             chunk2mem (victim), av);
          size = chunksize (victim);

          /*
             If a small request, try to use last remainder if it is the
             only chunk in unsorted bin.  This helps promote locality for
             runs of consecutive small requests.  This is the only
             exception to best-fit, and applies only when there is
             no exact fit for a small chunk.
           */

          if (in_smallbin_range (nb) &&
              bck == unsorted_chunks (av) &&
              victim == av->last_remainder &&
              (unsigned long) (size) > (unsigned long) (nb + MINSIZE))
            {
              /* split and reattach remainder */
              remainder_size = size - nb;
              remainder = chunk_at_offset (victim, nb);
              unsorted_chunks (av)->bk = unsorted_chunks (av)->fd = remainder;
              av->last_remainder = remainder;
              remainder->bk = remainder->fd = unsorted_chunks (av);
              if (!in_smallbin_range (remainder_size))
                {
                  remainder->fd_nextsize = NULL;
                  remainder->bk_nextsize = NULL;
                }

              set_head (victim, nb | PREV_INUSE |
                        (av != &main_arena ? NON_MAIN_ARENA : 0));
              set_head (remainder, remainder_size | PREV_INUSE);
              set_foot (remainder, remainder_size);

              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }

          /* remove from unsorted list */
          unsorted_chunks (av)->bk = bck;
          bck->fd = unsorted_chunks (av);

          /* Take now instead of binning if exact fit */

          if (size == nb)
            {
              set_inuse_bit_at_offset (victim, size);
              if (av != &main_arena)
                victim->size |= NON_MAIN_ARENA;
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }

          /* place chunk in bin */

          if (in_smallbin_range (size))
            {
              victim_index = smallbin_index (size);
              bck = bin_at (av, victim_index);
              fwd = bck->fd;
            }
          else
            {
              victim_index = largebin_index (size);
              bck = bin_at (av, victim_index);
              fwd = bck->fd;

              /* maintain large bins in sorted order */
              if (fwd != bck)
                {
                  /* Or with inuse bit to speed comparisons */
                  size |= PREV_INUSE;
                  /* if smaller than smallest, bypass loop below */
                  assert ((bck->bk->size & NON_MAIN_ARENA) == 0);
                  if ((unsigned long) (size) < (unsigned long) (bck->bk->size))
                    {
                      fwd = bck;
                      bck = bck->bk;

                      victim->fd_nextsize = fwd->fd;
                      victim->bk_nextsize = fwd->fd->bk_nextsize;
                      fwd->fd->bk_nextsize = victim->bk_nextsize->fd_nextsize = victim;
                    }
                  else
                    {
                      assert ((fwd->size & NON_MAIN_ARENA) == 0);
                      while ((unsigned long) size < fwd->size)
                        {
                          fwd = fwd->fd_nextsize;
                          assert ((fwd->size & NON_MAIN_ARENA) == 0);
                        }

                      if ((unsigned long) size == (unsigned long) fwd->size)
                        /* Always insert in the second position.  */
                        fwd = fwd->fd;
                      else
                        {
                          victim->fd_nextsize = fwd;
                          victim->bk_nextsize = fwd->bk_nextsize;
                          fwd->bk_nextsize = victim;
                          victim->bk_nextsize->fd_nextsize = victim;
                        }
                      bck = fwd->bk;
                    }
                }
              else
                victim->fd_nextsize = victim->bk_nextsize = victim;
            }

          mark_bin (av, victim_index);
          victim->bk = bck;
          victim->fd = fwd;
          fwd->bk = victim;
          bck->fd = victim;

#define MAX_ITERS       10000
          if (++iters >= MAX_ITERS)
            break;
        }
      /*
         If a large request, scan through the chunks of current bin in
         sorted order to find smallest that fits.  Use the skip list for this.
       */

      if (!in_smallbin_range (nb))
        {
          bin = bin_at (av, idx);

          /* skip scan if empty or largest chunk is too small */
          if ((victim = first (bin)) != bin &&
              (unsigned long) (victim->size) >= (unsigned long) (nb))
            {
              victim = victim->bk_nextsize;
              while (((unsigned long) (size = chunksize (victim)) <
                      (unsigned long) (nb)))
                victim = victim->bk_nextsize;

              /* Avoid removing the first entry for a size so that the skip
                 list does not have to be rerouted.  */
              if (victim != last (bin) && victim->size == victim->fd->size)
                victim = victim->fd;

              remainder_size = size - nb;
              unlink (av, victim, bck, fwd);

              /* Exhaust */
              if (remainder_size < MINSIZE)
                {
                  set_inuse_bit_at_offset (victim, size);
                  if (av != &main_arena)
                    victim->size |= NON_MAIN_ARENA;
                }
              /* Split */
              else
                {
                  remainder = chunk_at_offset (victim, nb);
                  /* We cannot assume the unsorted list is empty and therefore
                     have to perform a complete insert here.  */
                  bck = unsorted_chunks (av);
                  fwd = bck->fd;
                  if (__glibc_unlikely (fwd->bk != bck))
                    {
                      errstr = "malloc(): corrupted unsorted chunks";
                      goto errout;
                    }
                  remainder->bk = bck;
                  remainder->fd = fwd;
                  bck->fd = remainder;
                  fwd->bk = remainder;
                  if (!in_smallbin_range (remainder_size))
                    {
                      remainder->fd_nextsize = NULL;
                      remainder->bk_nextsize = NULL;
                    }
                  set_head (victim, nb | PREV_INUSE |
                            (av != &main_arena ? NON_MAIN_ARENA : 0));
                  set_head (remainder, remainder_size | PREV_INUSE);
                  set_foot (remainder, remainder_size);
                }
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }
      /*
         Search for a chunk by scanning bins, starting with next largest
         bin.  This search is strictly by best-fit; i.e., the smallest
         (with ties going to approximately the least recently used) chunk
         that fits is selected.

         The bitmap avoids needing to check that most blocks are nonempty.
         The particular case of skipping all bins during warm-up phases
         when no chunks have been returned yet is faster than it might look.
       */

      ++idx;
      bin = bin_at (av, idx);
      block = idx2block (idx);
      map = av->binmap[block];
      bit = idx2bit (idx);

      for (;; )
        {
          /* Skip rest of block if there are no more set bits in this block.  */
          if (bit > map || bit == 0)
            {
              do
                {
                  if (++block >= BINMAPSIZE) /* out of bins */
                    goto use_top;
                }
              while ((map = av->binmap[block]) == 0);

              bin = bin_at (av, (block << BINMAPSHIFT));
              bit = 1;
            }

          /* Advance to bin with set bit.  There must be one.  */
          while ((bit & map) == 0)
            {
              bin = next_bin (bin);
              bit <<= 1;
              assert (bit != 0);
            }

          /* Inspect the bin.  It is likely to be non-empty */
          victim = last (bin);

          /* If a false alarm (empty bin), clear the bit.  */
          if (victim == bin)
            {
              av->binmap[block] = map &= ~bit; /* Write through */
              bin = next_bin (bin);
              bit <<= 1;
            }

          else
            {
              size = chunksize (victim);

              /* We know the first chunk in this bin is big enough to use.  */
              assert ((unsigned long) (size) >= (unsigned long) (nb));

              remainder_size = size - nb;

              /* unlink */
              unlink (av, victim, bck, fwd);

              /* Exhaust */
              if (remainder_size < MINSIZE)
                {
                  set_inuse_bit_at_offset (victim, size);
                  if (av != &main_arena)
                    victim->size |= NON_MAIN_ARENA;
                }

              /* Split */
              else
                {
                  remainder = chunk_at_offset (victim, nb);

                  /* We cannot assume the unsorted list is empty and therefore
                     have to perform a complete insert here.  */
                  bck = unsorted_chunks (av);
                  fwd = bck->fd;
                  if (__glibc_unlikely (fwd->bk != bck))
                    {
                      errstr = "malloc(): corrupted unsorted chunks 2";
                      goto errout;
                    }
                  remainder->bk = bck;
                  remainder->fd = fwd;
                  bck->fd = remainder;
                  fwd->bk = remainder;

                  /* advertise as last remainder */
                  if (in_smallbin_range (nb))
                    av->last_remainder = remainder;
                  if (!in_smallbin_range (remainder_size))
                    {
                      remainder->fd_nextsize = NULL;
                      remainder->bk_nextsize = NULL;
                    }
                  set_head (victim, nb | PREV_INUSE |
                            (av != &main_arena ? NON_MAIN_ARENA : 0));
                  set_head (remainder, remainder_size | PREV_INUSE);
                  set_foot (remainder, remainder_size);
                }
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }
    use_top:
      /*
         If large enough, split off the chunk bordering the end of memory
         (held in av->top).  Note that this is in accord with the best-fit
         search rule.  In effect, av->top is treated as larger (and thus
         less well fitting) than any other available chunk since it can
         be extended to be as large as necessary (up to system
         limitations).

         We require that av->top always exists (i.e., has size >=
         MINSIZE) after initialization, so if it would otherwise be
         exhausted by current request, it is replenished.  (The main
         reason for ensuring it exists is that we may need MINSIZE space
         to put in fenceposts in sysmalloc.)
       */

      victim = av->top;
      size = chunksize (victim);

      if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
        {
          remainder_size = size - nb;
          remainder = chunk_at_offset (victim, nb);
          av->top = remainder;
          set_head (victim, nb | PREV_INUSE |
                    (av != &main_arena ? NON_MAIN_ARENA : 0));
          set_head (remainder, remainder_size | PREV_INUSE);

          check_malloced_chunk (av, victim, nb);
          void *p = chunk2mem (victim);
          alloc_perturb (p, bytes);
          return p;
        }

      /* When we are using atomic ops to free fast chunks we can get
         here for all block sizes.  */
      else if (have_fastchunks (av))
        {
          malloc_consolidate (av);
          /* restore original bin index */
          if (in_smallbin_range (nb))
            idx = smallbin_index (nb);
          else
            idx = largebin_index (nb);
        }

      /*
         Otherwise, relay to handle system-dependent cases
       */
      else
        {
          void *p = sysmalloc (nb, av);
          if (p != NULL)
            alloc_perturb (p, bytes);
          return p;
        }
    }
}
3871 ------------------------------ free ------------------------------
3874 static void
3875 _int_free (mstate av, mchunkptr p, int have_lock)
3877 INTERNAL_SIZE_T size; /* its size */
3878 mfastbinptr *fb; /* associated fastbin */
3879 mchunkptr nextchunk; /* next contiguous chunk */
3880 INTERNAL_SIZE_T nextsize; /* its size */
3881 int nextinuse; /* true if nextchunk is used */
3882 INTERNAL_SIZE_T prevsize; /* size of previous contiguous chunk */
3883 mchunkptr bck; /* misc temp for linking */
3884 mchunkptr fwd; /* misc temp for linking */
3886 const char *errstr = NULL;
3887 int locked = 0;
3889 size = chunksize (p);
3891 /* Little security check which won't hurt performance: the
3892 allocator never wrapps around at the end of the address space.
3893 Therefore we can exclude some size values which might appear
3894 here by accident or by "design" from some intruder. */
3895 if (__builtin_expect ((uintptr_t) p > (uintptr_t) -size, 0)
3896 || __builtin_expect (misaligned_chunk (p), 0))
3898 errstr = "free(): invalid pointer";
3899 errout:
3900 if (!have_lock && locked)
3901 (void) mutex_unlock (&av->mutex);
3902 malloc_printerr (check_action, errstr, chunk2mem (p), av);
3903 return;
3905 /* We know that each chunk is at least MINSIZE bytes in size or a
3906 multiple of MALLOC_ALIGNMENT. */
3907 if (__glibc_unlikely (size < MINSIZE || !aligned_OK (size)))
3909 errstr = "free(): invalid size";
3910 goto errout;
3913 check_inuse_chunk(av, p);
3916 If eligible, place chunk on a fastbin so it can be found
3917 and used quickly in malloc.
3920 if ((unsigned long)(size) <= (unsigned long)(get_max_fast ())
3922 #if TRIM_FASTBINS
3924 If TRIM_FASTBINS set, don't place chunks
3925 bordering top into fastbins
3927 && (chunk_at_offset(p, size) != av->top)
3928 #endif
3931 if (__builtin_expect (chunk_at_offset (p, size)->size <= 2 * SIZE_SZ, 0)
3932 || __builtin_expect (chunksize (chunk_at_offset (p, size))
3933 >= av->system_mem, 0))
3935 /* We might not have a lock at this point and concurrent modifications
3936 of system_mem might have let to a false positive. Redo the test
3937 after getting the lock. */
3938 if (have_lock
3939 || ({ assert (locked == 0);
3940 mutex_lock(&av->mutex);
3941 locked = 1;
3942 chunk_at_offset (p, size)->size <= 2 * SIZE_SZ
3943 || chunksize (chunk_at_offset (p, size)) >= av->system_mem;
3946 errstr = "free(): invalid next size (fast)";
3947 goto errout;
3949 if (! have_lock)
3951 (void)mutex_unlock(&av->mutex);
3952 locked = 0;
3956 free_perturb (chunk2mem(p), size - 2 * SIZE_SZ);
3958 set_fastchunks(av);
3959 unsigned int idx = fastbin_index(size);
3960 fb = &fastbin (av, idx);
3962 /* Atomically link P to its fastbin: P->FD = *FB; *FB = P; */
3963 mchunkptr old = *fb, old2;
3964 unsigned int old_idx = ~0u;
3967 /* Check that the top of the bin is not the record we are going to add
3968 (i.e., double free). */
3969 if (__builtin_expect (old == p, 0))
3971 errstr = "double free or corruption (fasttop)";
3972 goto errout;
3974 /* Check that size of fastbin chunk at the top is the same as
3975 size of the chunk that we are adding. We can dereference OLD
3976 only if we have the lock, otherwise it might have already been
3977 deallocated. See use of OLD_IDX below for the actual check. */
3978 if (have_lock && old != NULL)
3979 old_idx = fastbin_index(chunksize(old));
3980 p->fd = old2 = old;
3982 while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2)) != old2);
3984 if (have_lock && old != NULL && __builtin_expect (old_idx != idx, 0))
3986 errstr = "invalid fastbin entry (free)";
3987 goto errout;
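The fastbin link step above ("P->FD = *FB; *FB = P") is a lock-free LIFO push: the CAS loop retries until no other thread has raced on the bin head. A minimal standalone sketch of the same pattern, using C11 atomics rather than glibc's internal `catomic_compare_and_exchange_val_rel` (the `node`/`push` names are illustrative, not glibc's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* A chunk on a fastbin only needs its forward pointer. */
struct node { struct node *fd; };

/* Atomically push P onto the singly linked list headed at *FB. */
static void push (struct node *_Atomic *fb, struct node *p)
{
  struct node *old = atomic_load (fb);
  do
    p->fd = old;                      /* link P to the current top */
  while (!atomic_compare_exchange_weak (fb, &old, p));  /* retry on race */
}

/* Exercise: push two nodes; a fastbin is LIFO, so the last push wins. */
static int demo_order (void)
{
  static struct node a, b;
  struct node *_Atomic fb = NULL;
  push (&fb, &a);
  push (&fb, &b);
  struct node *top = atomic_load (&fb);
  return top == &b && top->fd == &a;  /* 1 if LIFO order holds */
}
```

The real code additionally rechecks the old top's size class (`old_idx`) after the exchange, because without the arena lock the old top may have been reallocated concurrently.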
3992 Consolidate other non-mmapped chunks as they arrive.
3995 else if (!chunk_is_mmapped(p)) {
3996 if (! have_lock) {
3997 (void)mutex_lock(&av->mutex);
3998 locked = 1;
4001 nextchunk = chunk_at_offset(p, size);
4003 /* Lightweight tests: check whether the block is already the
4004 top block. */
4005 if (__glibc_unlikely (p == av->top))
4007 errstr = "double free or corruption (top)";
4008 goto errout;
4010 /* Or whether the next chunk is beyond the boundaries of the arena. */
4011 if (__builtin_expect (contiguous (av)
4012 && (char *) nextchunk
4013 >= ((char *) av->top + chunksize(av->top)), 0))
4015 errstr = "double free or corruption (out)";
4016 goto errout;
4018 /* Or whether the block is actually not marked used. */
4019 if (__glibc_unlikely (!prev_inuse(nextchunk)))
4021 errstr = "double free or corruption (!prev)";
4022 goto errout;
4025 nextsize = chunksize(nextchunk);
4026 if (__builtin_expect (nextchunk->size <= 2 * SIZE_SZ, 0)
4027 || __builtin_expect (nextsize >= av->system_mem, 0))
4029 errstr = "free(): invalid next size (normal)";
4030 goto errout;
4033 free_perturb (chunk2mem(p), size - 2 * SIZE_SZ);
4035 /* consolidate backward */
4036 if (!prev_inuse(p)) {
4037 prevsize = p->prev_size;
4038 size += prevsize;
4039 p = chunk_at_offset(p, -((long) prevsize));
4040 unlink(av, p, bck, fwd);
4043 if (nextchunk != av->top) {
4044 /* get and clear inuse bit */
4045 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
4047 /* consolidate forward */
4048 if (!nextinuse) {
4049 unlink(av, nextchunk, bck, fwd);
4050 size += nextsize;
4051 } else
4052 clear_inuse_bit_at_offset(nextchunk, 0);
4055 Place the chunk in unsorted chunk list. Chunks are
4056 not placed into regular bins until after they have
4057 been given one chance to be used in malloc.
4060 bck = unsorted_chunks(av);
4061 fwd = bck->fd;
4062 if (__glibc_unlikely (fwd->bk != bck))
4064 errstr = "free(): corrupted unsorted chunks";
4065 goto errout;
4067 p->fd = fwd;
4068 p->bk = bck;
4069 if (!in_smallbin_range(size))
4071 p->fd_nextsize = NULL;
4072 p->bk_nextsize = NULL;
4074 bck->fd = p;
4075 fwd->bk = p;
4077 set_head(p, size | PREV_INUSE);
4078 set_foot(p, size);
4080 check_free_chunk(av, p);
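The unsorted-bin placement above is a head insertion into a circular doubly linked list, guarded by the `fwd->bk != bck` consistency check. A simplified sketch of that step, with illustrative types (not glibc's actual `mchunkptr` layout):

```c
#include <assert.h>
#include <stddef.h>

struct link { struct link *fd, *bk; };

/* Insert P right after the bin header BCK, after verifying the list is
   self-consistent; -1 mirrors "free(): corrupted unsorted chunks". */
static int insert_at_head (struct link *bck, struct link *p)
{
  struct link *fwd = bck->fd;
  if (fwd->bk != bck)
    return -1;
  p->fd = fwd;
  p->bk = bck;
  bck->fd = p;
  fwd->bk = p;
  return 0;
}

/* An empty circular bin points at itself; after one insert, the new
   chunk sits between the header and itself. */
static int demo (void)
{
  struct link head, a;
  head.fd = head.bk = &head;
  if (insert_at_head (&head, &a) != 0)
    return 0;
  return head.fd == &a && a.fd == &head && a.bk == &head && head.bk == &a;
}

/* A corrupted forward link is detected before any pointer is written. */
static int demo_corrupt (void)
{
  struct link head, a, evil;
  head.bk = &head;
  head.fd = &evil;                    /* simulated corruption */
  evil.bk = NULL;
  return insert_at_head (&head, &a);
}
```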
4084 If the chunk borders the current high end of memory,
4085 consolidate into top
4088 else {
4089 size += nextsize;
4090 set_head(p, size | PREV_INUSE);
4091 av->top = p;
4092 check_chunk(av, p);
4096 If freeing a large space, consolidate possibly-surrounding
4097 chunks. Then, if the total unused topmost memory exceeds trim
4098 threshold, ask malloc_trim to reduce top.
4100 Unless max_fast is 0, we don't know if there are fastbins
4101 bordering top, so we cannot tell for sure whether threshold
4102 has been reached unless fastbins are consolidated. But we
4103 don't want to consolidate on each free. As a compromise,
4104 consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
4105 is reached.
4108 if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
4109 if (have_fastchunks(av))
4110 malloc_consolidate(av);
4112 if (av == &main_arena) {
4113 #ifndef MORECORE_CANNOT_TRIM
4114 if ((unsigned long)(chunksize(av->top)) >=
4115 (unsigned long)(mp_.trim_threshold))
4116 systrim(mp_.top_pad, av);
4117 #endif
4118 } else {
4119 /* Always try heap_trim(), even if the top chunk is not
4120 large, because the corresponding heap might go away. */
4121 heap_info *heap = heap_for_ptr(top(av));
4123 assert(heap->ar_ptr == av);
4124 heap_trim(heap, mp_.top_pad);
4128 if (! have_lock) {
4129 assert (locked);
4130 (void)mutex_unlock(&av->mutex);
4134 If the chunk was allocated via mmap, release via munmap().
4137 else {
4138 munmap_chunk (p);
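The "consolidate backward" branch of _int_free relies on boundary tags: the low bits of the size field carry flags (PREV_INUSE among them), and a free chunk's size is replicated just before its successor, so merging with a free previous neighbour is O(1). A simplified, self-contained sketch of that arithmetic (constants and field names are reduced versions of the real ones):

```c
#include <assert.h>
#include <stddef.h>

#define PREV_INUSE 0x1
#define SIZE_BITS  0x7   /* low bits of size hold flags */

typedef struct
{
  size_t prev_size;      /* size of previous chunk, valid only if it is free */
  size_t size;           /* this chunk's size | flag bits */
} chunk;

static size_t chunksize (const chunk *p) { return p->size & ~(size_t) SIZE_BITS; }
static int prev_inuse (const chunk *p)   { return p->size & PREV_INUSE; }

/* Step back over a free previous neighbour and grow *SIZE, exactly as
   the backward-consolidation branch above does. */
static chunk *coalesce_backward (chunk *p, size_t *size)
{
  if (!prev_inuse (p))
    {
      size_t prevsize = p->prev_size;
      *size += prevsize;
      p = (chunk *) ((char *) p - prevsize);
    }
  return p;
}

/* Lay out two adjacent 32-byte chunks in a buffer; the second has
   PREV_INUSE clear, so freeing it absorbs the first. */
static int demo (void)
{
  char buf[64];
  chunk *prev = (chunk *) buf;
  chunk *p = (chunk *) (buf + 32);
  prev->size = 32;
  p->prev_size = 32;
  p->size = 32;                       /* PREV_INUSE bit clear */
  size_t size = chunksize (p);
  chunk *q = coalesce_backward (p, &size);
  return q == prev && size == 64;
}
```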
4143 ------------------------- malloc_consolidate -------------------------
4145 malloc_consolidate is a specialized version of free() that tears
4146 down chunks held in fastbins. Free itself cannot be used for this
4147 purpose since, among other things, it might place chunks back onto
4148 fastbins. So, instead, we need to use a minor variant of the same
4149 code.
4151 Also, because this routine needs to be called the first time through
4152 malloc anyway, it turns out to be the perfect place to trigger
4153 initialization code.
4156 static void malloc_consolidate(mstate av)
4158 mfastbinptr* fb; /* current fastbin being consolidated */
4159 mfastbinptr* maxfb; /* last fastbin (for loop control) */
4160 mchunkptr p; /* current chunk being consolidated */
4161 mchunkptr nextp; /* next chunk to consolidate */
4162 mchunkptr unsorted_bin; /* bin header */
4163 mchunkptr first_unsorted; /* chunk to link to */
4165 /* These have same use as in free() */
4166 mchunkptr nextchunk;
4167 INTERNAL_SIZE_T size;
4168 INTERNAL_SIZE_T nextsize;
4169 INTERNAL_SIZE_T prevsize;
4170 int nextinuse;
4171 mchunkptr bck;
4172 mchunkptr fwd;
4175 If max_fast is 0, we know that av hasn't
4176 yet been initialized, in which case do so below
4179 if (get_max_fast () != 0) {
4180 clear_fastchunks(av);
4182 unsorted_bin = unsorted_chunks(av);
4185 Remove each chunk from fast bin and consolidate it, placing it
4186 then in unsorted bin. Among other reasons for doing this,
4187 placing in unsorted bin avoids needing to calculate actual bins
4188 until malloc is sure that chunks aren't immediately going to be
4189 reused anyway.
4192 maxfb = &fastbin (av, NFASTBINS - 1);
4193 fb = &fastbin (av, 0);
4194 do {
4195 p = atomic_exchange_acq (fb, NULL);
4196 if (p != 0) {
4197 do {
4198 check_inuse_chunk(av, p);
4199 nextp = p->fd;
4201 /* Slightly streamlined version of consolidation code in free() */
4202 size = p->size & ~(PREV_INUSE|NON_MAIN_ARENA);
4203 nextchunk = chunk_at_offset(p, size);
4204 nextsize = chunksize(nextchunk);
4206 if (!prev_inuse(p)) {
4207 prevsize = p->prev_size;
4208 size += prevsize;
4209 p = chunk_at_offset(p, -((long) prevsize));
4210 unlink(av, p, bck, fwd);
4213 if (nextchunk != av->top) {
4214 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
4216 if (!nextinuse) {
4217 size += nextsize;
4218 unlink(av, nextchunk, bck, fwd);
4219 } else
4220 clear_inuse_bit_at_offset(nextchunk, 0);
4222 first_unsorted = unsorted_bin->fd;
4223 unsorted_bin->fd = p;
4224 first_unsorted->bk = p;
4226 if (!in_smallbin_range (size)) {
4227 p->fd_nextsize = NULL;
4228 p->bk_nextsize = NULL;
4231 set_head(p, size | PREV_INUSE);
4232 p->bk = unsorted_bin;
4233 p->fd = first_unsorted;
4234 set_foot(p, size);
4237 else {
4238 size += nextsize;
4239 set_head(p, size | PREV_INUSE);
4240 av->top = p;
4243 } while ( (p = nextp) != 0);
4246 } while (fb++ != maxfb);
4248 else {
4249 malloc_init_state(av);
4250 check_malloc_state(av);
4255 ------------------------------ realloc ------------------------------
4258 void*
4259 _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
4260 INTERNAL_SIZE_T nb)
4262 mchunkptr newp; /* chunk to return */
4263 INTERNAL_SIZE_T newsize; /* its size */
4264 void* newmem; /* corresponding user mem */
4266 mchunkptr next; /* next contiguous chunk after oldp */
4268 mchunkptr remainder; /* extra space at end of newp */
4269 unsigned long remainder_size; /* its size */
4271 mchunkptr bck; /* misc temp for linking */
4272 mchunkptr fwd; /* misc temp for linking */
4274 unsigned long copysize; /* bytes to copy */
4275 unsigned int ncopies; /* INTERNAL_SIZE_T words to copy */
4276 INTERNAL_SIZE_T* s; /* copy source */
4277 INTERNAL_SIZE_T* d; /* copy destination */
4279 const char *errstr = NULL;
4281 /* oldmem size */
4282 if (__builtin_expect (oldp->size <= 2 * SIZE_SZ, 0)
4283 || __builtin_expect (oldsize >= av->system_mem, 0))
4285 errstr = "realloc(): invalid old size";
4286 errout:
4287 malloc_printerr (check_action, errstr, chunk2mem (oldp), av);
4288 return NULL;
4291 check_inuse_chunk (av, oldp);
4293 /* All callers already filter out mmap'ed chunks. */
4294 assert (!chunk_is_mmapped (oldp));
4296 next = chunk_at_offset (oldp, oldsize);
4297 INTERNAL_SIZE_T nextsize = chunksize (next);
4298 if (__builtin_expect (next->size <= 2 * SIZE_SZ, 0)
4299 || __builtin_expect (nextsize >= av->system_mem, 0))
4301 errstr = "realloc(): invalid next size";
4302 goto errout;
4305 if ((unsigned long) (oldsize) >= (unsigned long) (nb))
4307 /* already big enough; split below */
4308 newp = oldp;
4309 newsize = oldsize;
4312 else
4314 /* Try to expand forward into top */
4315 if (next == av->top &&
4316 (unsigned long) (newsize = oldsize + nextsize) >=
4317 (unsigned long) (nb + MINSIZE))
4319 set_head_size (oldp, nb | (av != &main_arena ? NON_MAIN_ARENA : 0));
4320 av->top = chunk_at_offset (oldp, nb);
4321 set_head (av->top, (newsize - nb) | PREV_INUSE);
4322 check_inuse_chunk (av, oldp);
4323 return chunk2mem (oldp);
4326 /* Try to expand forward into next chunk; split off remainder below */
4327 else if (next != av->top &&
4328 !inuse (next) &&
4329 (unsigned long) (newsize = oldsize + nextsize) >=
4330 (unsigned long) (nb))
4332 newp = oldp;
4333 unlink (av, next, bck, fwd);
4336 /* allocate, copy, free */
4337 else
4339 newmem = _int_malloc (av, nb - MALLOC_ALIGN_MASK);
4340 if (newmem == 0)
4341 return 0; /* propagate failure */
4343 newp = mem2chunk (newmem);
4344 newsize = chunksize (newp);
4347 Avoid copy if newp is next chunk after oldp.
4349 if (newp == next)
4351 newsize += oldsize;
4352 newp = oldp;
4354 else
4357 Unroll copy of <= 36 bytes (72 if 8-byte sizes)
4358 We know that contents have an odd number of
4359 INTERNAL_SIZE_T-sized words; minimally 3.
4362 copysize = oldsize - SIZE_SZ;
4363 s = (INTERNAL_SIZE_T *) (chunk2mem (oldp));
4364 d = (INTERNAL_SIZE_T *) (newmem);
4365 ncopies = copysize / sizeof (INTERNAL_SIZE_T);
4366 assert (ncopies >= 3);
4368 if (ncopies > 9)
4369 memcpy (d, s, copysize);
4371 else
4373 *(d + 0) = *(s + 0);
4374 *(d + 1) = *(s + 1);
4375 *(d + 2) = *(s + 2);
4376 if (ncopies > 4)
4378 *(d + 3) = *(s + 3);
4379 *(d + 4) = *(s + 4);
4380 if (ncopies > 6)
4382 *(d + 5) = *(s + 5);
4383 *(d + 6) = *(s + 6);
4384 if (ncopies > 8)
4386 *(d + 7) = *(s + 7);
4387 *(d + 8) = *(s + 8);
4393 _int_free (av, oldp, 1);
4394 check_inuse_chunk (av, newp);
4395 return chunk2mem (newp);
4400 /* If possible, free extra space in old or extended chunk */
4402 assert ((unsigned long) (newsize) >= (unsigned long) (nb));
4404 remainder_size = newsize - nb;
4406 if (remainder_size < MINSIZE) /* not enough extra to split off */
4408 set_head_size (newp, newsize | (av != &main_arena ? NON_MAIN_ARENA : 0));
4409 set_inuse_bit_at_offset (newp, newsize);
4411 else /* split remainder */
4413 remainder = chunk_at_offset (newp, nb);
4414 set_head_size (newp, nb | (av != &main_arena ? NON_MAIN_ARENA : 0));
4415 set_head (remainder, remainder_size | PREV_INUSE |
4416 (av != &main_arena ? NON_MAIN_ARENA : 0));
4417 /* Mark remainder as inuse so free() won't complain */
4418 set_inuse_bit_at_offset (remainder, remainder_size);
4419 _int_free (av, remainder, 1);
4422 check_inuse_chunk (av, newp);
4423 return chunk2mem (newp);
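The tail of _int_realloc splits off any surplus only when it is large enough to stand alone as a chunk; otherwise the caller simply keeps the extra bytes. A small sketch of that decision (the MINSIZE value here is illustrative; the real one is platform dependent):

```c
#include <assert.h>
#include <stddef.h>

#define MINSIZE 32   /* illustrative; real value depends on the platform */

/* Size of the tail that would be split off and freed, or 0 when the
   leftover is too small to form a valid chunk. */
static size_t remainder_to_free (size_t newsize, size_t nb)
{
  size_t remainder_size = newsize - nb;
  return remainder_size < MINSIZE ? 0 : remainder_size;
}
```

This mirrors the `remainder_size < MINSIZE` test above: a sub-MINSIZE remainder is absorbed into the returned chunk rather than freed.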
4427 ------------------------------ memalign ------------------------------
4430 static void *
4431 _int_memalign (mstate av, size_t alignment, size_t bytes)
4433 INTERNAL_SIZE_T nb; /* padded request size */
4434 char *m; /* memory returned by malloc call */
4435 mchunkptr p; /* corresponding chunk */
4436 char *brk; /* alignment point within p */
4437 mchunkptr newp; /* chunk to return */
4438 INTERNAL_SIZE_T newsize; /* its size */
4439 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
4440 mchunkptr remainder; /* spare room at end to split off */
4441 unsigned long remainder_size; /* its size */
4442 INTERNAL_SIZE_T size;
4446 checked_request2size (bytes, nb);
4449 Strategy: find a spot within that chunk that meets the alignment
4450 request, and then possibly free the leading and trailing space.
4454 /* Call malloc with worst case padding to hit alignment. */
4456 m = (char *) (_int_malloc (av, nb + alignment + MINSIZE));
4458 if (m == 0)
4459 return 0; /* propagate failure */
4461 p = mem2chunk (m);
4463 if ((((unsigned long) (m)) % alignment) != 0) /* misaligned */
4464 {
4465 /*
4466 Find an aligned spot inside chunk. Since we need to give back
4467 leading space in a chunk of at least MINSIZE, if the first
4468 calculation places us at a spot with less than MINSIZE leader,
4469 we can move to the next aligned spot -- we've allocated enough
4470 total room so that this is always possible.
4472 brk = (char *) mem2chunk (((unsigned long) (m + alignment - 1)) &
4473 - ((signed long) alignment));
4474 if ((unsigned long) (brk - (char *) (p)) < MINSIZE)
4475 brk += alignment;
4477 newp = (mchunkptr) brk;
4478 leadsize = brk - (char *) (p);
4479 newsize = chunksize (p) - leadsize;
4481 /* For mmapped chunks, just adjust offset */
4482 if (chunk_is_mmapped (p))
4484 newp->prev_size = p->prev_size + leadsize;
4485 set_head (newp, newsize | IS_MMAPPED);
4486 return chunk2mem (newp);
4489 /* Otherwise, give back leader, use the rest */
4490 set_head (newp, newsize | PREV_INUSE |
4491 (av != &main_arena ? NON_MAIN_ARENA : 0));
4492 set_inuse_bit_at_offset (newp, newsize);
4493 set_head_size (p, leadsize | (av != &main_arena ? NON_MAIN_ARENA : 0));
4494 _int_free (av, p, 1);
4495 p = newp;
4497 assert (newsize >= nb &&
4498 (((unsigned long) (chunk2mem (p))) % alignment) == 0);
4501 /* Also give back spare room at the end */
4502 if (!chunk_is_mmapped (p))
4504 size = chunksize (p);
4505 if ((unsigned long) (size) > (unsigned long) (nb + MINSIZE))
4507 remainder_size = size - nb;
4508 remainder = chunk_at_offset (p, nb);
4509 set_head (remainder, remainder_size | PREV_INUSE |
4510 (av != &main_arena ? NON_MAIN_ARENA : 0));
4511 set_head_size (p, nb);
4512 _int_free (av, remainder, 1);
4516 check_inuse_chunk (av, p);
4517 return chunk2mem (p);
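_int_memalign over-allocates by `alignment + MINSIZE` and then rounds the returned address up to an alignment boundary with the mask expression seen above. The rounding itself, isolated (alignment must be a power of two, as the callers guarantee):

```c
#include <assert.h>
#include <stdint.h>

/* Round ADDR up to the next multiple of ALIGNMENT (a power of two).
   -alignment on an unsigned type equals ~(alignment - 1), giving the
   same mask as the expression in _int_memalign. */
static uintptr_t align_up (uintptr_t addr, uintptr_t alignment)
{
  return (addr + alignment - 1) & -alignment;
}
```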
4522 ------------------------------ malloc_trim ------------------------------
4525 static int
4526 mtrim (mstate av, size_t pad)
4528 /* Don't touch corrupt arenas. */
4529 if (arena_is_corrupt (av))
4530 return 0;
4532 /* Ensure initialization/consolidation */
4533 malloc_consolidate (av);
4535 const size_t ps = GLRO (dl_pagesize);
4536 int psindex = bin_index (ps);
4537 const size_t psm1 = ps - 1;
4539 int result = 0;
4540 for (int i = 1; i < NBINS; ++i)
4541 if (i == 1 || i >= psindex)
4543 mbinptr bin = bin_at (av, i);
4545 for (mchunkptr p = last (bin); p != bin; p = p->bk)
4547 INTERNAL_SIZE_T size = chunksize (p);
4549 if (size > psm1 + sizeof (struct malloc_chunk))
4551 /* See whether the chunk contains at least one unused page. */
4552 char *paligned_mem = (char *) (((uintptr_t) p
4553 + sizeof (struct malloc_chunk)
4554 + psm1) & ~psm1);
4556 assert ((char *) chunk2mem (p) + 4 * SIZE_SZ <= paligned_mem);
4557 assert ((char *) p + size > paligned_mem);
4559 /* This is the size we could potentially free. */
4560 size -= paligned_mem - (char *) p;
4562 if (size > psm1)
4564 #if MALLOC_DEBUG
4565 /* When debugging we simulate destroying the memory
4566 content. */
4567 memset (paligned_mem, 0x89, size & ~psm1);
4568 #endif
4569 __madvise (paligned_mem, size & ~psm1, MADV_DONTNEED);
4571 result = 1;
4577 #ifndef MORECORE_CANNOT_TRIM
4578 return result | (av == &main_arena ? systrim (pad, av) : 0);
4580 #else
4581 return result;
4582 #endif
4586 int
4587 __malloc_trim (size_t s)
4589 int result = 0;
4591 if (__malloc_initialized < 0)
4592 ptmalloc_init ();
4594 mstate ar_ptr = &main_arena;
4597 (void) mutex_lock (&ar_ptr->mutex);
4598 result |= mtrim (ar_ptr, s);
4599 (void) mutex_unlock (&ar_ptr->mutex);
4601 ar_ptr = ar_ptr->next;
4603 while (ar_ptr != &main_arena);
4605 return result;
4610 ------------------------- malloc_usable_size -------------------------
4613 static size_t
4614 musable (void *mem)
4616 mchunkptr p;
4617 if (mem != 0)
4619 p = mem2chunk (mem);
4621 if (__builtin_expect (using_malloc_checking == 1, 0))
4622 return malloc_check_get_size (p);
4624 if (chunk_is_mmapped (p))
4626 if (DUMPED_MAIN_ARENA_CHUNK (p))
4627 return chunksize (p) - SIZE_SZ;
4628 else
4629 return chunksize (p) - 2 * SIZE_SZ;
4631 else if (inuse (p))
4632 return chunksize (p) - SIZE_SZ;
4634 return 0;
4638 size_t
4639 __malloc_usable_size (void *m)
4641 size_t result;
4643 result = musable (m);
4644 return result;
4648 ------------------------------ mallinfo ------------------------------
4649 Accumulate malloc statistics for arena AV into M.
4652 static void
4653 int_mallinfo (mstate av, struct mallinfo *m)
4655 size_t i;
4656 mbinptr b;
4657 mchunkptr p;
4658 INTERNAL_SIZE_T avail;
4659 INTERNAL_SIZE_T fastavail;
4660 int nblocks;
4661 int nfastblocks;
4663 /* Ensure initialization */
4664 if (av->top == 0)
4665 malloc_consolidate (av);
4667 check_malloc_state (av);
4669 /* Account for top */
4670 avail = chunksize (av->top);
4671 nblocks = 1; /* top always exists */
4673 /* traverse fastbins */
4674 nfastblocks = 0;
4675 fastavail = 0;
4677 for (i = 0; i < NFASTBINS; ++i)
4679 for (p = fastbin (av, i); p != 0; p = p->fd)
4681 ++nfastblocks;
4682 fastavail += chunksize (p);
4686 avail += fastavail;
4688 /* traverse regular bins */
4689 for (i = 1; i < NBINS; ++i)
4691 b = bin_at (av, i);
4692 for (p = last (b); p != b; p = p->bk)
4694 ++nblocks;
4695 avail += chunksize (p);
4699 m->smblks += nfastblocks;
4700 m->ordblks += nblocks;
4701 m->fordblks += avail;
4702 m->uordblks += av->system_mem - avail;
4703 m->arena += av->system_mem;
4704 m->fsmblks += fastavail;
4705 if (av == &main_arena)
4707 m->hblks = mp_.n_mmaps;
4708 m->hblkhd = mp_.mmapped_mem;
4709 m->usmblks = 0;
4710 m->keepcost = chunksize (av->top);
4715 struct mallinfo
4716 __libc_mallinfo (void)
4718 struct mallinfo m;
4719 mstate ar_ptr;
4721 if (__malloc_initialized < 0)
4722 ptmalloc_init ();
4724 memset (&m, 0, sizeof (m));
4725 ar_ptr = &main_arena;
4728 (void) mutex_lock (&ar_ptr->mutex);
4729 int_mallinfo (ar_ptr, &m);
4730 (void) mutex_unlock (&ar_ptr->mutex);
4732 ar_ptr = ar_ptr->next;
4734 while (ar_ptr != &main_arena);
4736 return m;
4740 ------------------------------ malloc_stats ------------------------------
4743 void
4744 __malloc_stats (void)
4746 int i;
4747 mstate ar_ptr;
4748 unsigned int in_use_b = mp_.mmapped_mem, system_b = in_use_b;
4750 if (__malloc_initialized < 0)
4751 ptmalloc_init ();
4752 _IO_flockfile (stderr);
4753 int old_flags2 = ((_IO_FILE *) stderr)->_flags2;
4754 ((_IO_FILE *) stderr)->_flags2 |= _IO_FLAGS2_NOTCANCEL;
4755 for (i = 0, ar_ptr = &main_arena;; i++)
4757 struct mallinfo mi;
4759 memset (&mi, 0, sizeof (mi));
4760 (void) mutex_lock (&ar_ptr->mutex);
4761 int_mallinfo (ar_ptr, &mi);
4762 fprintf (stderr, "Arena %d:\n", i);
4763 fprintf (stderr, "system bytes = %10u\n", (unsigned int) mi.arena);
4764 fprintf (stderr, "in use bytes = %10u\n", (unsigned int) mi.uordblks);
4765 #if MALLOC_DEBUG > 1
4766 if (i > 0)
4767 dump_heap (heap_for_ptr (top (ar_ptr)));
4768 #endif
4769 system_b += mi.arena;
4770 in_use_b += mi.uordblks;
4771 (void) mutex_unlock (&ar_ptr->mutex);
4772 ar_ptr = ar_ptr->next;
4773 if (ar_ptr == &main_arena)
4774 break;
4776 fprintf (stderr, "Total (incl. mmap):\n");
4777 fprintf (stderr, "system bytes = %10u\n", system_b);
4778 fprintf (stderr, "in use bytes = %10u\n", in_use_b);
4779 fprintf (stderr, "max mmap regions = %10u\n", (unsigned int) mp_.max_n_mmaps);
4780 fprintf (stderr, "max mmap bytes = %10lu\n",
4781 (unsigned long) mp_.max_mmapped_mem);
4782 ((_IO_FILE *) stderr)->_flags2 |= old_flags2;
4783 _IO_funlockfile (stderr);
4788 ------------------------------ mallopt ------------------------------
4791 int
4792 __libc_mallopt (int param_number, int value)
4794 mstate av = &main_arena;
4795 int res = 1;
4797 if (__malloc_initialized < 0)
4798 ptmalloc_init ();
4799 (void) mutex_lock (&av->mutex);
4800 /* Ensure initialization/consolidation */
4801 malloc_consolidate (av);
4803 LIBC_PROBE (memory_mallopt, 2, param_number, value);
4805 switch (param_number)
4807 case M_MXFAST:
4808 if (value >= 0 && value <= MAX_FAST_SIZE)
4810 LIBC_PROBE (memory_mallopt_mxfast, 2, value, get_max_fast ());
4811 set_max_fast (value);
4813 else
4814 res = 0;
4815 break;
4817 case M_TRIM_THRESHOLD:
4818 LIBC_PROBE (memory_mallopt_trim_threshold, 3, value,
4819 mp_.trim_threshold, mp_.no_dyn_threshold);
4820 mp_.trim_threshold = value;
4821 mp_.no_dyn_threshold = 1;
4822 break;
4824 case M_TOP_PAD:
4825 LIBC_PROBE (memory_mallopt_top_pad, 3, value,
4826 mp_.top_pad, mp_.no_dyn_threshold);
4827 mp_.top_pad = value;
4828 mp_.no_dyn_threshold = 1;
4829 break;
4831 case M_MMAP_THRESHOLD:
4832 /* Forbid setting the threshold too high. */
4833 if ((unsigned long) value > HEAP_MAX_SIZE / 2)
4834 res = 0;
4835 else
4837 LIBC_PROBE (memory_mallopt_mmap_threshold, 3, value,
4838 mp_.mmap_threshold, mp_.no_dyn_threshold);
4839 mp_.mmap_threshold = value;
4840 mp_.no_dyn_threshold = 1;
4842 break;
4844 case M_MMAP_MAX:
4845 LIBC_PROBE (memory_mallopt_mmap_max, 3, value,
4846 mp_.n_mmaps_max, mp_.no_dyn_threshold);
4847 mp_.n_mmaps_max = value;
4848 mp_.no_dyn_threshold = 1;
4849 break;
4851 case M_CHECK_ACTION:
4852 LIBC_PROBE (memory_mallopt_check_action, 2, value, check_action);
4853 check_action = value;
4854 break;
4856 case M_PERTURB:
4857 LIBC_PROBE (memory_mallopt_perturb, 2, value, perturb_byte);
4858 perturb_byte = value;
4859 break;
4861 case M_ARENA_TEST:
4862 if (value > 0)
4864 LIBC_PROBE (memory_mallopt_arena_test, 2, value, mp_.arena_test);
4865 mp_.arena_test = value;
4867 break;
4869 case M_ARENA_MAX:
4870 if (value > 0)
4872 LIBC_PROBE (memory_mallopt_arena_max, 2, value, mp_.arena_max);
4873 mp_.arena_max = value;
4875 break;
4877 (void) mutex_unlock (&av->mutex);
4878 return res;
4880 libc_hidden_def (__libc_mallopt)
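__libc_mallopt is the implementation behind the public mallopt(). From the caller's side the knobs handled above look like this; the return value is 1 on success and 0 on error (for instance an out-of-range M_MXFAST). The specific values chosen here are only examples:

```c
#include <assert.h>
#include <malloc.h>

static int tune (void)
{
  int ok = 1;
  ok &= mallopt (M_TRIM_THRESHOLD, 256 * 1024);  /* trim top when it exceeds 256 KiB */
  ok &= mallopt (M_MMAP_THRESHOLD, 512 * 1024);  /* serve requests >= 512 KiB via mmap */
  ok &= mallopt (M_PERTURB, 0xa5);               /* poison allocated/freed memory */
  return ok;
}
```

Note that, as the code above shows, setting M_TRIM_THRESHOLD, M_TOP_PAD, M_MMAP_THRESHOLD or M_MMAP_MAX also sets `no_dyn_threshold`, disabling the dynamic mmap-threshold adaptation.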
4884 -------------------- Alternative MORECORE functions --------------------
4889 General Requirements for MORECORE.
4891 The MORECORE function must have the following properties:
4893 If MORECORE_CONTIGUOUS is false:
4895 * MORECORE must allocate in multiples of pagesize. It will
4896 only be called with arguments that are multiples of pagesize.
4898 * MORECORE(0) must return an address that is at least
4899 MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
4901 else (i.e. If MORECORE_CONTIGUOUS is true):
4903 * Consecutive calls to MORECORE with positive arguments
4904 return increasing addresses, indicating that space has been
4905 contiguously extended.
4907 * MORECORE need not allocate in multiples of pagesize.
4908 Calls to MORECORE need not have args of multiples of pagesize.
4910 * MORECORE need not page-align.
4912 In either case:
4914 * MORECORE may allocate more memory than requested. (Or even less,
4915 but this will generally result in a malloc failure.)
4917 * MORECORE must not allocate memory when given argument zero, but
4918 instead return one past the end address of memory from previous
4919 nonzero call. This malloc does NOT call MORECORE(0)
4920 until at least one call with positive arguments is made, so
4921 the initial value returned is not important.
4923 * Even though consecutive calls to MORECORE need not return contiguous
4924 addresses, it must be OK for malloc'ed chunks to span multiple
4925 regions in those cases where they do happen to be contiguous.
4927 * MORECORE need not handle negative arguments -- it may instead
4928 just return MORECORE_FAILURE when given negative arguments.
4929 Negative arguments are always multiples of pagesize. MORECORE
4930 must not misinterpret negative args as large positive unsigned
4931 args. You can suppress all such calls from even occurring by defining
4932 MORECORE_CANNOT_TRIM.
4934 There is some variation across systems about the type of the
4935 argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
4936 actually be size_t, because sbrk supports negative args, so it is
4937 normally the signed type of the same width as size_t (sometimes
4938 declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
4939 matter though. Internally, we use "long" as arguments, which should
4940 work across all reasonable possibilities.
4942 Additionally, if MORECORE ever returns failure for a positive
4943 request, then mmap is used as a noncontiguous system allocator. This
4944 is a useful backup strategy for systems with holes in address spaces
4945 -- in this case sbrk cannot contiguously expand the heap, but mmap
4946 may be able to map noncontiguous space.
4948 If you'd like mmap to ALWAYS be used, you can define MORECORE to be
4949 a function that always returns MORECORE_FAILURE.
4951 If you are using this malloc with something other than sbrk (or its
4952 emulation) to supply memory regions, you probably want to set
4953 MORECORE_CONTIGUOUS as false. As an example, here is a custom
4954 allocator kindly contributed for pre-OSX macOS. It uses virtually
4955 but not necessarily physically contiguous non-paged memory (locked
4956 in, present and won't get swapped out). You can use it by
4957 uncommenting this section, adding some #includes, and setting up the
4958 appropriate defines above:
4960 *#define MORECORE osMoreCore
4961 *#define MORECORE_CONTIGUOUS 0
4963 There is also a shutdown routine that should somehow be called for
4964 cleanup upon program exit.
4966 *#define MAX_POOL_ENTRIES 100
4967 *#define MINIMUM_MORECORE_SIZE (64 * 1024)
4968 static int next_os_pool;
4969 void *our_os_pools[MAX_POOL_ENTRIES];
4971 void *osMoreCore(int size)
4973 void *ptr = 0;
4974 static void *sbrk_top = 0;
4976 if (size > 0)
4978 if (size < MINIMUM_MORECORE_SIZE)
4979 size = MINIMUM_MORECORE_SIZE;
4980 if (CurrentExecutionLevel() == kTaskLevel)
4981 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4982 if (ptr == 0)
4984 return (void *) MORECORE_FAILURE;
4986 // save ptrs so they can be freed during cleanup
4987 our_os_pools[next_os_pool] = ptr;
4988 next_os_pool++;
4989 ptr = (void *) ((((unsigned long) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4990 sbrk_top = (char *) ptr + size;
4991 return ptr;
4993 else if (size < 0)
4995 // we don't currently support shrink behavior
4996 return (void *) MORECORE_FAILURE;
4998 else
5000 return sbrk_top;
5004 // cleanup any allocated memory pools
5005 // called as last thing before shutting down driver
5007 void osCleanupMem(void)
5009 void **ptr;
5011 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5012 if (*ptr)
5014 PoolDeallocate(*ptr);
5015 *ptr = 0;
5022 /* Helper code. */
5024 extern char **__libc_argv attribute_hidden;
5026 static void
5027 malloc_printerr (int action, const char *str, void *ptr, mstate ar_ptr)
5029 /* Avoid using this arena in future. We do not attempt to synchronize this
5030 with anything else because we minimally want to ensure that __libc_message
5031 gets its resources safely without stumbling on the current corruption. */
5032 if (ar_ptr)
5033 set_arena_corrupt (ar_ptr);
5035 if ((action & 5) == 5)
5036 __libc_message (action & 2, "%s\n", str);
5037 else if (action & 1)
5039 char buf[2 * sizeof (uintptr_t) + 1];
5041 buf[sizeof (buf) - 1] = '\0';
5042 char *cp = _itoa_word ((uintptr_t) ptr, &buf[sizeof (buf) - 1], 16, 0);
5043 while (cp > buf)
5044 *--cp = '0';
5046 __libc_message (action & 2, "*** Error in `%s': %s: 0x%s ***\n",
5047 __libc_argv[0] ? : "<unknown>", str, cp);
5049 else if (action & 2)
5050 abort ();
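malloc_printerr formats the faulting address by converting it into the tail of a small buffer with `_itoa_word` and then left-padding with '0' to a fixed width. A self-contained sketch of the same formatting (the `format_ptr` name is illustrative; `_itoa_word` is glibc-internal):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Write V as lowercase hex into BUF (of BUFLEN bytes), filling from the
   end and zero-padding on the left so every address prints full width. */
static const char *format_ptr (uintptr_t v, char *buf, size_t buflen)
{
  char *cp = buf + buflen - 1;
  *cp = '\0';
  do
    {
      *--cp = "0123456789abcdef"[v & 0xf];
      v >>= 4;
    }
  while (v != 0);
  while (cp > buf)                    /* left-pad to fixed width */
    *--cp = '0';
  return buf;
}

static int demo (void)
{
  char buf[9];                        /* 8 hex digits + NUL */
  return strcmp (format_ptr (0xbeef, buf, sizeof buf), "0000beef") == 0;
}
```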
5053 /* We need a wrapper function for one of the additions of POSIX. */
5054 int
5055 __posix_memalign (void **memptr, size_t alignment, size_t size)
5057 void *mem;
5059 /* Test whether the SIZE argument is valid. It must be a power of
5060 two multiple of sizeof (void *). */
5061 if (alignment % sizeof (void *) != 0
5062 || !powerof2 (alignment / sizeof (void *))
5063 || alignment == 0)
5064 return EINVAL;
5067 void *address = RETURN_ADDRESS (0);
5068 mem = _mid_memalign (alignment, size, address);
5070 if (mem != NULL)
5072 *memptr = mem;
5073 return 0;
5076 return ENOMEM;
5078 weak_alias (__posix_memalign, posix_memalign)
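From the caller's side, the wrapper above gives posix_memalign its POSIX semantics: the alignment is validated (a power of two and a multiple of `sizeof (void *)`) before any allocation, and errors are reported via the return value, not errno:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

static int demo (void)
{
  void *mem = NULL;

  /* Valid request: 64-byte alignment. */
  if (posix_memalign (&mem, 64, 1000) != 0)
    return 0;
  int aligned = ((uintptr_t) mem % 64) == 0;
  free (mem);

  /* Invalid alignment (not a multiple of sizeof (void *)): EINVAL,
     without touching *memptr. */
  int bad = posix_memalign (&mem, 3, 16);
  return aligned && bad == EINVAL;
}
```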
5081 int
5082 __malloc_info (int options, FILE *fp)
5084 /* For now, at least. */
5085 if (options != 0)
5086 return EINVAL;
5088 int n = 0;
5089 size_t total_nblocks = 0;
5090 size_t total_nfastblocks = 0;
5091 size_t total_avail = 0;
5092 size_t total_fastavail = 0;
5093 size_t total_system = 0;
5094 size_t total_max_system = 0;
5095 size_t total_aspace = 0;
5096 size_t total_aspace_mprotect = 0;
5100 if (__malloc_initialized < 0)
5101 ptmalloc_init ();
5103 fputs ("<malloc version=\"1\">\n", fp);
5105 /* Iterate over all arenas currently in use. */
5106 mstate ar_ptr = &main_arena;
5109 fprintf (fp, "<heap nr=\"%d\">\n<sizes>\n", n++);
5111 size_t nblocks = 0;
5112 size_t nfastblocks = 0;
5113 size_t avail = 0;
5114 size_t fastavail = 0;
5115 struct
5117 size_t from;
5118 size_t to;
5119 size_t total;
5120 size_t count;
5121 } sizes[NFASTBINS + NBINS - 1];
5122 #define nsizes (sizeof (sizes) / sizeof (sizes[0]))
5124 mutex_lock (&ar_ptr->mutex);
5126 for (size_t i = 0; i < NFASTBINS; ++i)
5128 mchunkptr p = fastbin (ar_ptr, i);
5129 if (p != NULL)
5131 size_t nthissize = 0;
5132 size_t thissize = chunksize (p);
5134 while (p != NULL)
5136 ++nthissize;
5137 p = p->fd;
5140 fastavail += nthissize * thissize;
5141 nfastblocks += nthissize;
5142 sizes[i].from = thissize - (MALLOC_ALIGNMENT - 1);
5143 sizes[i].to = thissize;
5144 sizes[i].count = nthissize;
5146 else
5147 sizes[i].from = sizes[i].to = sizes[i].count = 0;
5149 sizes[i].total = sizes[i].count * sizes[i].to;
5153 mbinptr bin;
5154 struct malloc_chunk *r;
5156 for (size_t i = 1; i < NBINS; ++i)
5158 bin = bin_at (ar_ptr, i);
5159 r = bin->fd;
5160 sizes[NFASTBINS - 1 + i].from = ~((size_t) 0);
5161 sizes[NFASTBINS - 1 + i].to = sizes[NFASTBINS - 1 + i].total
5162 = sizes[NFASTBINS - 1 + i].count = 0;
5164 if (r != NULL)
5165 while (r != bin)