Bug 1828638 - Log the number of freed cells in PHC r=glandium
memory/replace/phc/PHC.cpp
1 /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
2 /* vim: set ts=8 sts=2 et sw=2 tw=80: */
3 /* This Source Code Form is subject to the terms of the Mozilla Public
4 * License, v. 2.0. If a copy of the MPL was not distributed with this
5 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
7 // PHC is a probabilistic heap checker. A tiny fraction of randomly chosen heap
8 // allocations are subject to some expensive checking via the use of OS page
9 // access protection. A failed check triggers a crash, whereupon useful
10 // information about the failure is put into the crash report. The cost and
11 // coverage for each user is minimal, but spread over the entire user base the
12 // coverage becomes significant.
14 // The idea comes from Chromium, where it is called GWP-ASAN. (Firefox uses PHC
15 // as the name because GWP-ASAN is long, awkward, and doesn't have any
16 // particular meaning.)
18 // In the current implementation up to kNumAllocPages allocations per process
19 // can become PHC allocations. They must be page-sized or smaller. Each PHC
20 // allocation gets its own page, and when the allocation is freed its page is
21 // marked inaccessible until the page is reused for another allocation. This
22 // means that a use-after-free defect (which includes double-frees) will be
23 // caught if the use occurs before the page is reused for another allocation.
24 // The crash report will contain stack traces for the allocation site, the free
25 // site, and the use-after-free site, which is often enough to diagnose the
26 // defect.
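//
// As an illustrative sketch (not code from this file), if PHC happens to
// sample the allocation below, the marked line faults instead of silently
// corrupting memory:
//
//   char* p = static_cast<char*>(malloc(128));  // sampled by PHC
//   free(p);                                    // the page is now protected
//   p[0] = 'x';  // use-after-free: faults on the protected page
//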
28 // Also, each PHC allocation is followed by a guard page. The PHC allocation is
29 // positioned so that its end abuts the guard page (or as close as possible,
30 // given alignment constraints). This means that a bounds violation at the end
31 // of the allocation (overflow) will be caught. The crash report will contain
32 // stack traces for the allocation site and the bounds violation use site,
33 // which is often enough to diagnose the defect.
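//
// Again as a sketch (not code from this file), an overflow off the end of a
// sampled allocation lands on the guard page:
//
//   char* q = static_cast<char*>(malloc(100));  // sampled by PHC
//   q[malloc_usable_size(q)] = 0;  // first byte past the usable size: faults
//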
35 // (A bounds violation at the start of the allocation (underflow) will not be
36 // caught, unless it is sufficiently large to hit the preceding allocation's
37 // guard page, which is not that likely. It would be possible to look more
38 // assiduously for underflow by randomly placing some allocations at the end of
39 // the page and some at the start of the page, and GWP-ASAN does this. PHC does
40 // not, however, because overflow is likely to be much more common than
41 // underflow in practice.)
43 // We use a simple heuristic to categorize a guard page access as overflow or
44 // underflow: if the address falls in the lower half of the guard page, we
45 // assume it is overflow, otherwise we assume it is underflow. More
46 // sophisticated heuristics are possible, but this one is very simple, and it is
47 // likely that most overflows/underflows in practice are very close to the page
48 // boundary.
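//
// A sketch of that heuristic (the real check lives in
// PHCBridge::IsPHCAllocation() near the end of this file):
//
//   bool probablyOverflow = (uintptr_t(addr) % kPageSize) < (kPageSize / 2);
//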
50 // The design space for the randomization strategy is large. The current
51 // implementation has a large random delay before it starts operating, and a
52 // small random delay between each PHC allocation attempt. Each freed PHC
53 // allocation is quarantined for a medium random delay before being reused, in
54 // order to increase the chance of catching UAFs.
56 // The basic cost of PHC's operation is as follows.
58 // - The physical memory cost is at most kNumAllocPages pages (4096 pages of
59 // 4 KiB, or 1024 pages of 16 KiB on macOS/AArch64; about 16 MiB either way)
60 // plus some metadata (including stack traces) for each page. Only pages
61 // currently backing a live allocation are committed.
63 // - The virtual memory cost is the physical memory cost plus the guard pages:
64 // another kNumAllocPages + 1 pages, i.e. roughly another 16 MiB of address
65 // space per process. PHC is currently only enabled on 64-bit platforms so
66 // the impact of the virtual memory usage is negligible.
69 // - Every allocation requires a size check and a decrement-and-check of an
70 // atomic counter. When the counter reaches zero a PHC allocation can occur,
71 // which involves marking a page as accessible and getting a stack trace for
72 // the allocation site. Otherwise, mozjemalloc performs the allocation.
74 // - Every deallocation requires a range check on the pointer to see if it
75 // involves a PHC allocation. (The choice to only do PHC allocations that are
76 // a page or smaller enables this range check, because the allocation pages are
77 // contiguous. Allowing larger allocations would make this more complicated,
78 // and we definitely don't want something as slow as a hash table lookup on
79 // every deallocation.) PHC deallocations involve marking a page as
80 // inaccessible and getting a stack trace for the deallocation site.
82 // Note that calls to realloc(), free(), and malloc_usable_size() will
83 // immediately crash if the given pointer falls within a page allocation's
84 // page, but does not point to the start of the allocation itself.
86 // void* p = malloc(64);
87 // free(p + 1); // p+1 doesn't point to the allocation start; crash
89 // Such crashes will not have the PHC fields in the crash report.
91 // PHC-specific tests can be run with the following commands:
92 // - gtests: `./mach gtest '*PHC*'`
93 // - xpcshell-tests: `./mach test toolkit/crashreporter/test/unit`
94 // - This runs some non-PHC tests as well.
96 #include "PHC.h"
98 #include <stdlib.h>
99 #include <time.h>
101 #include <algorithm>
103 #ifdef XP_WIN
104 # include <process.h>
105 #else
106 # include <sys/mman.h>
107 # include <sys/types.h>
108 # include <pthread.h>
109 # include <unistd.h>
110 #endif
112 #include "replace_malloc.h"
113 #include "FdPrintf.h"
114 #include "Mutex.h"
115 #include "mozilla/Assertions.h"
116 #include "mozilla/Atomics.h"
117 #include "mozilla/Attributes.h"
118 #include "mozilla/CheckedInt.h"
119 #include "mozilla/Maybe.h"
120 #include "mozilla/StackWalk.h"
121 #include "mozilla/ThreadLocal.h"
122 #include "mozilla/XorShift128PlusRNG.h"
124 using namespace mozilla;
126 //---------------------------------------------------------------------------
127 // Utilities
128 //---------------------------------------------------------------------------
130 #ifdef ANDROID
131 // Android doesn't have pthread_atfork defined in pthread.h.
132 extern "C" MOZ_EXPORT int pthread_atfork(void (*)(void), void (*)(void),
133 void (*)(void));
134 #endif
136 #ifndef DISALLOW_COPY_AND_ASSIGN
137 # define DISALLOW_COPY_AND_ASSIGN(T) \
138 T(const T&); \
139 void operator=(const T&)
140 #endif
142 static malloc_table_t sMallocTable;
144 // This class provides infallible operations for the small number of heap
145 // allocations that PHC does for itself. It would be nice if we could use the
146 // InfallibleAllocPolicy from mozalloc, but PHC cannot use mozalloc.
147 class InfallibleAllocPolicy {
148 public:
149 static void AbortOnFailure(const void* aP) {
150 if (!aP) {
151 MOZ_CRASH("PHC failed to allocate");
155 template <class T>
156 static T* new_() {
157 void* p = sMallocTable.malloc(sizeof(T));
158 AbortOnFailure(p);
159 return new (p) T;
163 //---------------------------------------------------------------------------
164 // Stack traces
165 //---------------------------------------------------------------------------
167 // This code is similar to the equivalent code within DMD.
169 class StackTrace : public phc::StackTrace {
170 public:
171 StackTrace() : phc::StackTrace() {}
173 void Clear() { mLength = 0; }
175 void Fill();
177 private:
178 static void StackWalkCallback(uint32_t aFrameNumber, void* aPc, void* aSp,
179 void* aClosure) {
180 StackTrace* st = (StackTrace*)aClosure;
181 MOZ_ASSERT(st->mLength < kMaxFrames);
182 st->mPcs[st->mLength] = aPc;
183 st->mLength++;
184 MOZ_ASSERT(st->mLength == aFrameNumber);
188 // WARNING WARNING WARNING: this function must only be called when GMut::sMutex
189 // is *not* locked, otherwise we might get deadlocks.
191 // How? On Windows, MozStackWalk() can lock a mutex, M, from the shared library
192 // loader. Another thread might call malloc() while holding M locked (when
193 // loading a shared library) and try to lock GMut::sMutex, causing a deadlock.
194 // So GMut::sMutex can't be locked during the call to MozStackWalk(). (For
195 // details, see https://bugzilla.mozilla.org/show_bug.cgi?id=374829#c8. On
196 // Linux, something similar can happen; see bug 824340. So we just disallow it
197 // on all platforms.)
199 // In DMD, to avoid this problem we temporarily unlock the equivalent mutex for
200 // the MozStackWalk() call. But that's grotty, and things are a bit different
201 // here, so we just require that stack traces be obtained before locking
202 // GMut::sMutex.
204 // Unfortunately, there is no reliable way at compile-time or run-time to ensure
205 // this pre-condition. Hence this large comment.
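//
// As a sketch of the required ordering (this is the pattern MaybePageAlloc()
// and PageFree() below follow):
//
//   StackTrace stack;
//   stack.Fill();                      // walk the stack first...
//   MutexAutoLock lock(GMut::sMutex);  // ...and only then take the lock
//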
207 void StackTrace::Fill() {
208 mLength = 0;
210 #if defined(XP_WIN) && defined(_M_IX86)
211 // This avoids MozStackWalk(), which causes unusably slow startup on Win32
212 // when it is called during static initialization (see bug 1241684).
214 // This code is cribbed from the Gecko Profiler, which also uses
215 // FramePointerStackWalk() on Win32: Registers::SyncPopulate() for the
216 // frame pointer, and GetStackTop() for the stack end.
217 CONTEXT context;
218 RtlCaptureContext(&context);
219 void** fp = reinterpret_cast<void**>(context.Ebp);
221 PNT_TIB pTib = reinterpret_cast<PNT_TIB>(NtCurrentTeb());
222 void* stackEnd = static_cast<void*>(pTib->StackBase);
223 FramePointerStackWalk(StackWalkCallback, kMaxFrames, this, fp, stackEnd);
224 #elif defined(XP_MACOSX)
225 // This avoids MozStackWalk(), which has become unusably slow on Mac due to
226 // changes in libunwind.
228 // This code is cribbed from the Gecko Profiler, which also uses
229 // FramePointerStackWalk() on Mac: Registers::SyncPopulate() for the frame
230 // pointer, and GetStackTop() for the stack end.
231 # pragma GCC diagnostic push
232 # pragma GCC diagnostic ignored "-Wframe-address"
233 void** fp = reinterpret_cast<void**>(__builtin_frame_address(1));
234 # pragma GCC diagnostic pop
235 void* stackEnd = pthread_get_stackaddr_np(pthread_self());
236 FramePointerStackWalk(StackWalkCallback, kMaxFrames, this, fp, stackEnd);
237 #else
238 MozStackWalk(StackWalkCallback, nullptr, kMaxFrames, this);
239 #endif
242 //---------------------------------------------------------------------------
243 // Logging
244 //---------------------------------------------------------------------------
246 // Change this to 1 to enable some PHC logging. Useful for debugging.
247 #define PHC_LOGGING 0
249 #if PHC_LOGGING
251 static size_t GetPid() { return size_t(getpid()); }
253 static size_t GetTid() {
254 # if defined(XP_WIN)
255 return size_t(GetCurrentThreadId());
256 # else
257 return size_t(pthread_self());
258 # endif
261 # if defined(XP_WIN)
262 # define LOG_STDERR \
263 reinterpret_cast<intptr_t>(GetStdHandle(STD_ERROR_HANDLE))
264 # else
265 # define LOG_STDERR 2
266 # endif
267 # define LOG(fmt, ...) \
268 FdPrintf(LOG_STDERR, "PHC[%zu,%zu,~%zu] " fmt, GetPid(), GetTid(), \
269 size_t(GAtomic::Now()), __VA_ARGS__)
271 #else
273 # define LOG(fmt, ...)
275 #endif // PHC_LOGGING
277 //---------------------------------------------------------------------------
278 // Global state
279 //---------------------------------------------------------------------------
281 // Throughout this entire file time is measured as the number of sub-page
282 // allocations performed (by PHC and mozjemalloc combined). `Time` is 64-bit
283 // because we could have more than 2**32 allocations in a long-running session.
284 // `Delay` is 32-bit because the delays used within PHC are always much smaller
285 // than 2**32.
286 using Time = uint64_t; // A moment in time.
287 using Delay = uint32_t; // A time duration.
289 // PHC only runs if the page size is 4 KiB; anything more is uncommon and would
290 // use too much memory. So we hardwire this size for all platforms but macOS
291 // on ARM processors. For the latter we make an exception because the minimum
292 // page size supported is 16 KiB, so there's no way to go below that.
293 static const size_t kPageSize =
294 #if defined(XP_MACOSX) && defined(__aarch64__)
295 16384
296 #else
297 4096
298 #endif
301 // There are two kinds of page.
302 // - Allocation pages, from which allocations are made.
303 // - Guard pages, which are never touched by PHC.
305 // These page kinds are interleaved; each allocation page has a guard page on
306 // either side.
307 static const size_t kNumAllocPages = kPageSize == 4096 ? 4096 : 1024;
308 static const size_t kNumAllPages = kNumAllocPages * 2 + 1;
310 // The total size of the allocation pages and guard pages.
311 static const size_t kAllPagesSize = kNumAllPages * kPageSize;
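// For example, with 4 KiB pages: kNumAllocPages is 4096, kNumAllPages is
// 2 * 4096 + 1 = 8193, and kAllPagesSize is 8193 * 4096 bytes, i.e. ~32 MiB
// of reserved address space laid out as
//
//   G A G A G ... A G
//
// where G is a guard page and A is an allocation page.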
313 // The junk value used to fill new allocations in debug builds. It's the same
314 // as the one used by mozjemalloc. PHC applies it unconditionally in debug
315 // builds. Unlike mozjemalloc, PHC doesn't consult the MALLOC_OPTIONS
316 // environment variable to possibly change that behaviour.
318 // Also note that, unlike mozjemalloc, PHC doesn't have a poison value for freed
319 // allocations because freed allocations are protected by OS page protection.
320 #ifdef DEBUG
321 const uint8_t kAllocJunk = 0xe4;
322 #endif
324 // The maximum time.
325 static const Time kMaxTime = ~(Time(0));
327 // The average delay before doing any page allocations at the start of a
328 // process. Note that roughly 1 million allocations occur in the main process
329 // while starting the browser. The delay range is 1..kAvgFirstAllocDelay*2.
330 static const Delay kAvgFirstAllocDelay = 64 * 1024;
332 // The average delay until the next attempted page allocation, once we get past
333 // the first delay. The delay range is 1..kAvgAllocDelay*2.
334 static const Delay kAvgAllocDelay = 16 * 1024;
336 // The average delay before reusing a freed page. Should be significantly larger
337 // than kAvgAllocDelay, otherwise there's not much point in having it. The delay
338 // range is (kAvgPageReuseDelay / 2)..(kAvgPageReuseDelay / 2 * 3). This is different to
339 // the other delay ranges in not having a minimum of 1, because that's such a
340 // short delay that there is a high likelihood of bad stacks in any crash
341 // report.
342 static const Delay kAvgPageReuseDelay = 256 * 1024;
344 // Truncate aRnd to the range (1 .. AvgDelay*2). If aRnd is random, this
345 // results in an average value of AvgDelay + 0.5, which is close enough to
346 // AvgDelay. AvgDelay must be a power of two (enforced by the static_assert
347 // below) so that the modulo operation is fast.
348 template <Delay AvgDelay>
349 constexpr Delay Rnd64ToDelay(uint64_t aRnd) {
350 static_assert(IsPowerOfTwo(AvgDelay), "must be a power of two");
352 return aRnd % (AvgDelay * 2) + 1;
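// For example (arithmetic only): with AvgDelay = kAvgAllocDelay = 16 * 1024
// the expression is aRnd % 32768 + 1, i.e. a delay in the range 1..32768
// whose average is 16384.5.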
355 // Maps a pointer to a PHC-specific structure:
356 // - Nothing
357 // - A guard page (it is unspecified which one)
358 // - An allocation page (with an index < kNumAllocPages)
360 // The standard way of handling a PtrKind is to check IsNothing(), and if that
361 // fails, to check IsGuardPage(), and if that fails, to call AllocPage().
362 class PtrKind {
363 private:
364 enum class Tag : uint8_t {
365 Nothing,
366 GuardPage,
367 AllocPage,
370 Tag mTag;
371 uintptr_t mIndex; // Only used if mTag == Tag::AllocPage.
373 public:
374 // Detect what a pointer points to. This constructor must be fast because it
375 // is called for every call to free(), realloc(), malloc_usable_size(), and
376 // jemalloc_ptr_info().
377 PtrKind(const void* aPtr, const uint8_t* aPagesStart,
378 const uint8_t* aPagesLimit) {
379 if (!(aPagesStart <= aPtr && aPtr < aPagesLimit)) {
380 mTag = Tag::Nothing;
381 } else {
382 uintptr_t offset = static_cast<const uint8_t*>(aPtr) - aPagesStart;
383 uintptr_t allPageIndex = offset / kPageSize;
384 MOZ_ASSERT(allPageIndex < kNumAllPages);
385 if (allPageIndex & 1) {
386 // Odd-indexed pages are allocation pages.
387 uintptr_t allocPageIndex = allPageIndex / 2;
388 MOZ_ASSERT(allocPageIndex < kNumAllocPages);
389 mTag = Tag::AllocPage;
390 mIndex = allocPageIndex;
391 } else {
392 // Even-indexed pages are guard pages.
393 mTag = Tag::GuardPage;
398 bool IsNothing() const { return mTag == Tag::Nothing; }
399 bool IsGuardPage() const { return mTag == Tag::GuardPage; }
401 // This should only be called after IsNothing() and IsGuardPage() have been
402 // checked and failed.
403 uintptr_t AllocPageIndex() const {
404 MOZ_RELEASE_ASSERT(mTag == Tag::AllocPage);
405 return mIndex;
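//
// A sketch of that standard pattern (the real uses are in PageFree(),
// PageRealloc(), etc. below):
//
//   PtrKind pk = gConst->PtrKind(aPtr);
//   if (pk.IsNothing()) {
//     // not a PHC pointer; hand it to mozjemalloc
//   } else if (pk.IsGuardPage()) {
//     GMut::CrashOnGuardPage(aPtr);
//   } else {
//     uintptr_t index = pk.AllocPageIndex();
//     // ... operate on allocation page `index` ...
//   }
//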
409 // Shared, atomic, mutable global state.
410 class GAtomic {
411 public:
412 static void Init(Delay aFirstDelay) {
413 sAllocDelay = aFirstDelay;
415 LOG("Initial sAllocDelay <- %zu\n", size_t(aFirstDelay));
418 static Time Now() { return sNow; }
420 static void IncrementNow() { sNow++; }
422 // Decrements the delay and returns the decremented value.
423 static int32_t DecrementDelay() { return --sAllocDelay; }
425 static void SetAllocDelay(Delay aAllocDelay) { sAllocDelay = aAllocDelay; }
427 private:
428 // The current time. Relaxed semantics because it's primarily used for
429 // determining if an allocation can be recycled yet and therefore it doesn't
430 // need to be exact.
431 static Atomic<Time, Relaxed> sNow;
433 // Delay until the next attempt at a page allocation. See the comment in
434 // MaybePageAlloc() for an explanation of why it is a signed integer, and why
435 // it uses ReleaseAcquire semantics.
436 static Atomic<Delay, ReleaseAcquire> sAllocDelay;
439 Atomic<Time, Relaxed> GAtomic::sNow;
440 Atomic<Delay, ReleaseAcquire> GAtomic::sAllocDelay;
442 // Shared, immutable global state. Initialized by replace_init() and never
443 // changed after that. replace_init() runs early enough that no synchronization
444 // is needed.
445 class GConst {
446 private:
447 // The bounds of the allocated pages.
448 uint8_t* const mPagesStart;
449 uint8_t* const mPagesLimit;
451 // Allocates the allocation pages and the guard pages, contiguously.
452 uint8_t* AllocAllPages() {
453 // Allocate the pages so that they are inaccessible. They are never freed,
454 // because it would happen at process termination when it would be of little
455 // use.
456 void* pages =
457 #ifdef XP_WIN
458 VirtualAlloc(nullptr, kAllPagesSize, MEM_RESERVE, PAGE_NOACCESS);
459 #else
460 mmap(nullptr, kAllPagesSize, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1,
461      0);
462 #endif
463 if (!pages) {
464 MOZ_CRASH();
467 return static_cast<uint8_t*>(pages);
470 public:
471 GConst()
472 : mPagesStart(AllocAllPages()), mPagesLimit(mPagesStart + kAllPagesSize) {
473 LOG("AllocAllPages at %p..%p\n", mPagesStart, mPagesLimit);
476 class PtrKind PtrKind(const void* aPtr) {
477 class PtrKind pk(aPtr, mPagesStart, mPagesLimit);
478 return pk;
481 bool IsInFirstGuardPage(const void* aPtr) {
482 return mPagesStart <= aPtr && aPtr < mPagesStart + kPageSize;
485 // Get the address of the allocation page referred to via an index. Used when
486 // marking the page as accessible/inaccessible.
487 uint8_t* AllocPagePtr(uintptr_t aIndex) {
488 MOZ_ASSERT(aIndex < kNumAllocPages);
489 // Multiply by two and add one to account for allocation pages *and* guard
490 // pages.
491 return mPagesStart + (2 * aIndex + 1) * kPageSize;
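// For example: aIndex 0 maps to mPagesStart + 1 * kPageSize (the first page
// after the leading guard page), aIndex 1 maps to mPagesStart + 3 * kPageSize,
// and so on, skipping a guard page between each pair of allocation pages.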
495 static GConst* gConst;
497 // On MacOS, the first __thread/thread_local access calls malloc, which leads
498 // to an infinite loop. So we use pthread-based TLS instead, which somehow
499 // doesn't have this problem.
500 #if !defined(XP_DARWIN)
501 # define PHC_THREAD_LOCAL(T) MOZ_THREAD_LOCAL(T)
502 #else
503 # define PHC_THREAD_LOCAL(T) \
504 detail::ThreadLocal<T, detail::ThreadLocalKeyStorage>
505 #endif
507 // Thread-local state.
508 class GTls {
509 GTls(const GTls&) = delete;
511 const GTls& operator=(const GTls&) = delete;
513 // When true, PHC does as little as possible.
515 // (a) It does not allocate any new page allocations.
517 // (b) It avoids doing any operations that might call malloc/free/etc., which
518 // would cause re-entry into PHC. (In practice, MozStackWalk() is the
519 // only such operation.) Note that calls to the functions in sMallocTable
520 // are ok.
522 // For example, replace_malloc() will just fall back to mozjemalloc. However,
523 // operations involving existing allocations are more complex, because those
524 // existing allocations may be page allocations. For example, if
525 // replace_free() is passed a page allocation on a PHC-disabled thread, it
526 // will free the page allocation in the usual way, but it will get a dummy
527 // freeStack in order to avoid calling MozStackWalk(), as per (b) above.
529 // This single disabling mechanism has two distinct uses.
531 // - It's used to prevent re-entry into PHC, which can cause correctness
532 // problems. For example, consider this sequence.
534 // 1. enter replace_free()
535 // 2. which calls PageFree()
536 // 3. which calls MozStackWalk()
537 // 4. which locks a mutex M, and then calls malloc
538 // 5. enter replace_malloc()
539 // 6. which calls MaybePageAlloc()
540 // 7. which calls MozStackWalk()
541 // 8. which (re)locks a mutex M --> deadlock
543 // We avoid this sequence by "disabling" the thread in PageFree() (at step
544 // 2), which causes MaybePageAlloc() to fail, avoiding the call to
545 // MozStackWalk() (at step 7).
547 // In practice, realloc or free of a PHC allocation is unlikely on a thread
548 // that is disabled because of this use: MozStackWalk() will probably only
549 // realloc/free allocations that it allocated itself, but those won't be
550 // page allocations because PHC is disabled before calling MozStackWalk().
552 // (Note that MaybePageAlloc() could safely do a page allocation so long as
553 // it avoided calling MozStackWalk() by getting a dummy allocStack. But it
554 // wouldn't be useful, and it would prevent the second use below.)
556 // - It's used to prevent PHC allocations in some tests that rely on
557 // mozjemalloc's exact allocation behaviour, which PHC does not replicate
558 // exactly. (Note that (b) isn't necessary for this use -- MozStackWalk()
559 // could be safely called -- but it is necessary for the first use above.)
561 static PHC_THREAD_LOCAL(bool) tlsIsDisabled;
563 public:
564 static void Init() {
565 if (!tlsIsDisabled.init()) {
566 MOZ_CRASH();
570 static void DisableOnCurrentThread() {
571 MOZ_ASSERT(!GTls::tlsIsDisabled.get());
572 tlsIsDisabled.set(true);
575 static void EnableOnCurrentThread() {
576 MOZ_ASSERT(GTls::tlsIsDisabled.get());
577 tlsIsDisabled.set(false);
580 static bool IsDisabledOnCurrentThread() { return tlsIsDisabled.get(); }
583 PHC_THREAD_LOCAL(bool) GTls::tlsIsDisabled;
585 class AutoDisableOnCurrentThread {
586 AutoDisableOnCurrentThread(const AutoDisableOnCurrentThread&) = delete;
588 const AutoDisableOnCurrentThread& operator=(
589 const AutoDisableOnCurrentThread&) = delete;
591 public:
592 explicit AutoDisableOnCurrentThread() { GTls::DisableOnCurrentThread(); }
593 ~AutoDisableOnCurrentThread() { GTls::EnableOnCurrentThread(); }
596 // This type is used as a proof-of-lock token, to make it clear which functions
597 // require sMutex to be locked.
598 using GMutLock = const MutexAutoLock&;
600 // Shared, mutable global state. Protected by sMutex; all accessing functions
601 // take a GMutLock as proof that sMutex is held.
602 class GMut {
603 enum class AllocPageState {
604 NeverAllocated = 0,
605 InUse = 1,
606 Freed = 2,
609 // Metadata for each allocation page.
610 class AllocPageInfo {
611 public:
612 AllocPageInfo()
613 : mState(AllocPageState::NeverAllocated),
614 mArenaId(),
615 mBaseAddr(nullptr),
616 mAllocStack(),
617 mFreeStack(),
618 mReuseTime(0) {}
620 // The current allocation page state.
621 AllocPageState mState;
623 // The arena that the allocation is nominally from. This isn't meaningful
624 // within PHC, which has no arenas. But it is necessary for reallocation of
625 // page allocations as normal allocations, such as in this code:
627 // p = moz_arena_malloc(arenaId, 4096);
628 // realloc(p, 8192);
630 // The realloc is more than one page, and thus too large for PHC to handle.
631 // Therefore, if PHC handles the first allocation, it must ask mozjemalloc
632 // to allocate the 8192 bytes in the correct arena, and to do that, it must
633 // call sMallocTable.moz_arena_malloc with the correct arenaId under the
634 // covers. Therefore it must record that arenaId.
636 // This field is also needed for jemalloc_ptr_info() to work, because it
637 // also returns the arena ID (but only in debug builds).
639 // - NeverAllocated: must be 0.
640 // - InUse | Freed: can be any valid arena ID value.
641 Maybe<arena_id_t> mArenaId;
643 // The starting address of the allocation. Will not be the same as the page
644 // address unless the allocation is a full page.
645 // - NeverAllocated: must be 0.
646 // - InUse | Freed: must be within the allocation page.
647 uint8_t* mBaseAddr;
649 // Usable size is computed as the number of bytes between the pointer and
650 // the end of the allocation page. This might be bigger than the requested
651 // size, especially if an outsized alignment is requested.
652 size_t UsableSize() const {
653 return mState == AllocPageState::NeverAllocated
654     ? 0
655     : kPageSize - (reinterpret_cast<uintptr_t>(mBaseAddr) &
656       (kPageSize - 1));
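// For example (illustrative numbers): with kPageSize = 4096, an in-use
// allocation whose mBaseAddr sits at page offset 4032 has a usable size of
// 4096 - 4032 = 64 bytes, regardless of the size originally requested.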
659 // The internal fragmentation for this allocation.
660 size_t FragmentationBytes() const {
661 MOZ_ASSERT(kPageSize >= UsableSize());
662 return mState == AllocPageState::InUse ? kPageSize - UsableSize() : 0;
665 // The allocation stack.
666 // - NeverAllocated: Nothing.
667 // - InUse | Freed: Some.
668 Maybe<StackTrace> mAllocStack;
670 // The free stack.
671 // - NeverAllocated | InUse: Nothing.
672 // - Freed: Some.
673 Maybe<StackTrace> mFreeStack;
675 // The time at which the page is available for reuse, as measured against
676 // GAtomic::sNow. When the page is in use this value will be kMaxTime.
677 // - NeverAllocated: must be 0.
678 // - InUse: must be kMaxTime.
679 // - Freed: must be > 0 and < kMaxTime.
680 Time mReuseTime;
683 public:
684 // The mutex that protects the other members.
685 static Mutex sMutex MOZ_UNANNOTATED;
687 GMut()
688 : mRNG(RandomSeed<0>(), RandomSeed<1>()),
689 mAllocPages(),
690 mPageAllocHits(0),
691 mPageAllocMisses(0) {
692 sMutex.Init();
695 uint64_t Random64(GMutLock) { return mRNG.next(); }
697 bool IsPageInUse(GMutLock, uintptr_t aIndex) {
698 return mAllocPages[aIndex].mState == AllocPageState::InUse;
701 // Is the page free? And if so, has enough time passed that we can use it?
702 bool IsPageAllocatable(GMutLock, uintptr_t aIndex, Time aNow) {
703 const AllocPageInfo& page = mAllocPages[aIndex];
704 return page.mState != AllocPageState::InUse && aNow >= page.mReuseTime;
707 // Get the address of the allocation page referred to via an index. Used
708 // when checking pointers against page boundaries.
709 uint8_t* AllocPageBaseAddr(GMutLock, uintptr_t aIndex) {
710 return mAllocPages[aIndex].mBaseAddr;
713 Maybe<arena_id_t> PageArena(GMutLock aLock, uintptr_t aIndex) {
714 const AllocPageInfo& page = mAllocPages[aIndex];
715 AssertAllocPageInUse(aLock, page);
717 return page.mArenaId;
720 size_t PageUsableSize(GMutLock aLock, uintptr_t aIndex) {
721 const AllocPageInfo& page = mAllocPages[aIndex];
722 AssertAllocPageInUse(aLock, page);
724 return page.UsableSize();
727 // The total fragmentation in PHC
728 size_t FragmentationBytes() const {
729 size_t sum = 0;
730 for (const auto& page : mAllocPages) {
731 sum += page.FragmentationBytes();
733 return sum;
736 void SetPageInUse(GMutLock aLock, uintptr_t aIndex,
737 const Maybe<arena_id_t>& aArenaId, uint8_t* aBaseAddr,
738 const StackTrace& aAllocStack) {
739 AllocPageInfo& page = mAllocPages[aIndex];
740 AssertAllocPageNotInUse(aLock, page);
742 page.mState = AllocPageState::InUse;
743 page.mArenaId = aArenaId;
744 page.mBaseAddr = aBaseAddr;
745 page.mAllocStack = Some(aAllocStack);
746 page.mFreeStack = Nothing();
747 page.mReuseTime = kMaxTime;
750 #if PHC_LOGGING
751 Time GetFreeTime(uintptr_t aIndex) const { return mFreeTime[aIndex]; }
752 #endif
754 void ResizePageInUse(GMutLock aLock, uintptr_t aIndex,
755 const Maybe<arena_id_t>& aArenaId, uint8_t* aNewBaseAddr,
756 const StackTrace& aAllocStack) {
757 AllocPageInfo& page = mAllocPages[aIndex];
758 AssertAllocPageInUse(aLock, page);
760 // page.mState is not changed.
761 if (aArenaId.isSome()) {
762 // Crash if the arenas don't match.
763 MOZ_RELEASE_ASSERT(page.mArenaId == aArenaId);
765 page.mBaseAddr = aNewBaseAddr;
766 // We could just keep the original alloc stack, but the realloc stack is
767 // more recent and therefore seems more useful.
768 page.mAllocStack = Some(aAllocStack);
769 // page.mFreeStack is not changed.
770 // page.mReuseTime is not changed.
773 void SetPageFreed(GMutLock aLock, uintptr_t aIndex,
774 const Maybe<arena_id_t>& aArenaId,
775 const StackTrace& aFreeStack, Delay aReuseDelay) {
776 AllocPageInfo& page = mAllocPages[aIndex];
777 AssertAllocPageInUse(aLock, page);
779 page.mState = AllocPageState::Freed;
781 // page.mArenaId is left unchanged, for jemalloc_ptr_info() calls that
782 // occur after freeing (e.g. in the PtrInfo test in TestJemalloc.cpp).
783 if (aArenaId.isSome()) {
784 // Crash if the arenas don't match.
785 MOZ_RELEASE_ASSERT(page.mArenaId == aArenaId);
788 // The usable size (computed from page.mBaseAddr) is left unchanged, for
789 // reporting on UAF, and for jemalloc_ptr_info() calls that occur after
790 // freeing (e.g. in the PtrInfo test in TestJemalloc.cpp).
792 // page.mAllocStack is left unchanged, for reporting on UAF.
794 page.mFreeStack = Some(aFreeStack);
795 Time now = GAtomic::Now();
796 #if PHC_LOGGING
797 mFreeTime[aIndex] = now;
798 #endif
799 page.mReuseTime = now + aReuseDelay;
802 static void CrashOnGuardPage(void* aPtr) {
803 // An operation on a guard page? This is a bounds violation. Deliberately
804 // touch the page in question, to cause a crash that triggers the usual PHC
805 // machinery.
806 LOG("CrashOnGuardPage(%p), bounds violation\n", aPtr);
807 *static_cast<uint8_t*>(aPtr) = 0;
808 MOZ_CRASH("unreachable");
811 void EnsureValidAndInUse(GMutLock, void* aPtr, uintptr_t aIndex)
812 MOZ_REQUIRES(sMutex) {
813 const AllocPageInfo& page = mAllocPages[aIndex];
815 // The pointer must point to the start of the allocation.
816 MOZ_RELEASE_ASSERT(page.mBaseAddr == aPtr);
818 if (page.mState == AllocPageState::Freed) {
819 LOG("EnsureValidAndInUse(%p), use-after-free\n", aPtr);
820 // An operation on a freed page? This is a particular kind of
821 // use-after-free. Deliberately touch the page in question, in order to
822 // cause a crash that triggers the usual PHC machinery. But unlock sMutex
823 // first, because that self-same PHC machinery needs to re-lock it, and
824 // the crash causes non-local control flow so sMutex won't be unlocked
825 // the normal way in the caller.
826 sMutex.Unlock();
827 *static_cast<uint8_t*>(aPtr) = 0;
828 MOZ_CRASH("unreachable");
832 void FillAddrInfo(GMutLock, uintptr_t aIndex, const void* aBaseAddr,
833 bool isGuardPage, phc::AddrInfo& aOut) {
834 const AllocPageInfo& page = mAllocPages[aIndex];
835 if (isGuardPage) {
836 aOut.mKind = phc::AddrInfo::Kind::GuardPage;
837 } else {
838 switch (page.mState) {
839 case AllocPageState::NeverAllocated:
840 aOut.mKind = phc::AddrInfo::Kind::NeverAllocatedPage;
841 break;
843 case AllocPageState::InUse:
844 aOut.mKind = phc::AddrInfo::Kind::InUsePage;
845 break;
847 case AllocPageState::Freed:
848 aOut.mKind = phc::AddrInfo::Kind::FreedPage;
849 break;
851 default:
852 MOZ_CRASH();
855 aOut.mBaseAddr = page.mBaseAddr;
856 aOut.mUsableSize = page.UsableSize();
857 aOut.mAllocStack = page.mAllocStack;
858 aOut.mFreeStack = page.mFreeStack;
861 void FillJemallocPtrInfo(GMutLock, const void* aPtr, uintptr_t aIndex,
862 jemalloc_ptr_info_t* aInfo) {
863 const AllocPageInfo& page = mAllocPages[aIndex];
864 switch (page.mState) {
865 case AllocPageState::NeverAllocated:
866 break;
868 case AllocPageState::InUse: {
869 // Only return TagLiveAlloc if the pointer is within the bounds of the
870 // allocation's usable size.
871 uint8_t* base = page.mBaseAddr;
872 uint8_t* limit = base + page.UsableSize();
873 if (base <= aPtr && aPtr < limit) {
874 *aInfo = {TagLiveAlloc, page.mBaseAddr, page.UsableSize(),
875 page.mArenaId.valueOr(0)};
876 return;
878 break;
881 case AllocPageState::Freed: {
882 // Only return TagFreedAlloc if the pointer is within the bounds of the
883 // former allocation's usable size.
884 uint8_t* base = page.mBaseAddr;
885 uint8_t* limit = base + page.UsableSize();
886 if (base <= aPtr && aPtr < limit) {
887 *aInfo = {TagFreedAlloc, page.mBaseAddr, page.UsableSize(),
888 page.mArenaId.valueOr(0)};
889 return;
891 break;
894 default:
895 MOZ_CRASH();
898 // Pointers into guard pages will end up here, as will pointers into
899 // allocation pages that aren't within the allocation's bounds.
900 *aInfo = {TagUnknown, nullptr, 0, 0};
903 #ifndef XP_WIN
904 static void prefork() MOZ_NO_THREAD_SAFETY_ANALYSIS { sMutex.Lock(); }
905 static void postfork_parent() MOZ_NO_THREAD_SAFETY_ANALYSIS {
906 sMutex.Unlock();
908 static void postfork_child() { sMutex.Init(); }
909 #endif
911 void IncPageAllocHits(GMutLock) { mPageAllocHits++; }
912 void IncPageAllocMisses(GMutLock) { mPageAllocMisses++; }
914 #if PHC_LOGGING
915 struct PageStats {
916 size_t mNumAlloced = 0;
917 size_t mNumFreed = 0;
920 PageStats GetPageStats(GMutLock) {
921 PageStats stats;
923 for (const auto& page : mAllocPages) {
924 stats.mNumAlloced += page.mState == AllocPageState::InUse ? 1 : 0;
925 stats.mNumFreed += page.mState == AllocPageState::Freed ? 1 : 0;
928 return stats;
930 #endif
932 size_t PageAllocHits(GMutLock) { return mPageAllocHits; }
933 size_t PageAllocAttempts(GMutLock) {
934 return mPageAllocHits + mPageAllocMisses;
937 // This is an integer because FdPrintf only supports integer printing.
938 size_t PageAllocHitRate(GMutLock) {
939 return mPageAllocHits * 100 / (mPageAllocHits + mPageAllocMisses);
942 private:
943 template <int N>
944 uint64_t RandomSeed() {
945 // An older version of this code used RandomUint64() here, but on Mac that
946 // function uses arc4random(), which can allocate, which would cause
947 // re-entry, which would be bad. So we just use time() and a local variable
948 // address. These are mediocre sources of entropy, but good enough for PHC.
949 static_assert(N == 0 || N == 1, "must be 0 or 1");
950 uint64_t seed;
951 if (N == 0) {
952 time_t t = time(nullptr);
953 seed = t ^ (t << 32);
954 } else {
955 seed = uintptr_t(&seed) ^ (uintptr_t(&seed) << 32);
957 return seed;
960 void AssertAllocPageInUse(GMutLock, const AllocPageInfo& aPage) {
961 MOZ_ASSERT(aPage.mState == AllocPageState::InUse);
962 // There is nothing to assert about aPage.mArenaId.
963 MOZ_ASSERT(aPage.mBaseAddr);
964 MOZ_ASSERT(aPage.UsableSize() > 0);
965 MOZ_ASSERT(aPage.mAllocStack.isSome());
966 MOZ_ASSERT(aPage.mFreeStack.isNothing());
967 MOZ_ASSERT(aPage.mReuseTime == kMaxTime);
970 void AssertAllocPageNotInUse(GMutLock, const AllocPageInfo& aPage) {
971 // We can assert a lot about `NeverAllocated` pages, but not much about
972 // `Freed` pages.
973 #ifdef DEBUG
974 bool isFresh = aPage.mState == AllocPageState::NeverAllocated;
975 MOZ_ASSERT(isFresh || aPage.mState == AllocPageState::Freed);
976 MOZ_ASSERT_IF(isFresh, aPage.mArenaId == Nothing());
977 MOZ_ASSERT(isFresh == (aPage.mBaseAddr == nullptr));
978 MOZ_ASSERT(isFresh == (aPage.mAllocStack.isNothing()));
979 MOZ_ASSERT(isFresh == (aPage.mFreeStack.isNothing()));
980 MOZ_ASSERT(aPage.mReuseTime != kMaxTime);
981 #endif
984 // RNG for deciding which allocations to treat specially. It doesn't need to
985 // be high quality.
987 // This is a raw pointer for the reason explained in the comment above
988 // GMut's constructor. Don't change it to UniquePtr or anything like that.
989 non_crypto::XorShift128PlusRNG mRNG;
991 AllocPageInfo mAllocPages[kNumAllocPages];
992 #if PHC_LOGGING
993 Time mFreeTime[kNumAllocPages];
994 #endif
996 // How many allocations that could have been page allocs actually were? As
997 // constrained by kNumAllocPages. If the hit ratio isn't close to 100% it's
998 // likely that the global constants are poorly chosen.
999 size_t mPageAllocHits;
1000 size_t mPageAllocMisses;
1003 Mutex GMut::sMutex;
1005 static GMut* gMut;
1007 //---------------------------------------------------------------------------
1008 // Page allocation operations
1009 //---------------------------------------------------------------------------
1011 // Attempt a page allocation if the time and the size are right. Allocated
1012 // memory is zeroed if aZero is true. On failure, the caller should attempt a
1013 // normal allocation via sMallocTable. Cannot be called in a context where
1014 // GMut::sMutex is locked.
1015 static void* MaybePageAlloc(const Maybe<arena_id_t>& aArenaId, size_t aReqSize,
1016 size_t aAlignment, bool aZero) {
1017 MOZ_ASSERT(IsPowerOfTwo(aAlignment));
1019 if (aReqSize > kPageSize) {
1020 return nullptr;
1023 GAtomic::IncrementNow();
1025 // Decrement the delay. If it's zero, we do a page allocation and reset the
1026 // delay to a random number. Because the assignment to the random number isn't
1027 // atomic w.r.t. the decrement, we might have a sequence like this:
1029 // Thread 1 Thread 2 Thread 3
1030 // -------- -------- --------
1031 // (a) newDelay = --sAllocDelay (-> 0)
1032 // (b) --sAllocDelay (-> -1)
1033 // (c) (newDelay != 0) fails
1034 // (d) --sAllocDelay (-> -2)
1035 // (e) sAllocDelay = new_random_number()
1037 // It's critical that sAllocDelay has ReleaseAcquire semantics, because that
1038 // guarantees that exactly one thread will see sAllocDelay have the value 0.
1039 // (Relaxed semantics wouldn't guarantee that.)
1041 // It's also nice that sAllocDelay is signed, given that we can decrement to
1042 // below zero. (Strictly speaking, an unsigned integer would also work due
1043 // to wrapping, but a signed integer is conceptually cleaner.)
1045 // Finally, note that the decrements that occur between (a) and (e) above are
1046 // effectively ignored, because (e) clobbers them. This shouldn't be a
1047 // problem; it effectively just adds a little more randomness to
1048 // new_random_number(). An early version of this code tried to account for
1049 // these decrements by doing `sAllocDelay += new_random_number()`. However, if
1050 // new_random_number() is small, the number of decrements between (a) and (e)
1051 // can easily exceed it, whereupon sAllocDelay ends up negative after
1052 // `sAllocDelay += new_random_number()`, and the zero-check never succeeds
1053 // again. (At least, not until sAllocDelay wraps around on overflow, which
1054 // would take a very long time indeed.)
1056 int32_t newDelay = GAtomic::DecrementDelay();
1057 if (newDelay != 0) {
1058 return nullptr;
1061 if (GTls::IsDisabledOnCurrentThread()) {
1062 return nullptr;
1065 // Disable on this thread *before* getting the stack trace.
1066 AutoDisableOnCurrentThread disable;
1068 // Get the stack trace *before* locking the mutex. If we return nullptr then
1069 // it was a waste, but it's not so frequent, and doing a stack walk while
1070 // the mutex is locked is problematic (see the big comment on
1071 // StackTrace::Fill() for details).
1072 StackTrace allocStack;
1073 allocStack.Fill();
1075 MutexAutoLock lock(GMut::sMutex);
1077 Time now = GAtomic::Now();
1078 Delay newAllocDelay = Rnd64ToDelay<kAvgAllocDelay>(gMut->Random64(lock));
1080 // We start at a random page alloc and wrap around, to ensure pages get even
1081 // amounts of use.
1082 uint8_t* ptr = nullptr;
1083 uint8_t* pagePtr = nullptr;
1084 for (uintptr_t n = 0, i = size_t(gMut->Random64(lock)) % kNumAllocPages;
1085 n < kNumAllocPages; n++, i = (i + 1) % kNumAllocPages) {
1086 if (!gMut->IsPageAllocatable(lock, i, now)) {
1087 continue;
1090 #if PHC_LOGGING
1091 Time lifetime = 0;
1092 #endif
1093 pagePtr = gConst->AllocPagePtr(i);
1094 MOZ_ASSERT(pagePtr);
1095 bool ok =
1096 #ifdef XP_WIN
1097 !!VirtualAlloc(pagePtr, kPageSize, MEM_COMMIT, PAGE_READWRITE);
1098 #else
1099 mprotect(pagePtr, kPageSize, PROT_READ | PROT_WRITE) == 0;
1100 #endif
1101 size_t usableSize = sMallocTable.malloc_good_size(aReqSize);
1102 if (ok) {
1103 MOZ_ASSERT(usableSize > 0);
1105 // Put the allocation as close to the end of the page as possible,
1106 // allowing for alignment requirements.
1107 ptr = pagePtr + kPageSize - usableSize;
1108 if (aAlignment != 1) {
1109 ptr = reinterpret_cast<uint8_t*>(
1110 (reinterpret_cast<uintptr_t>(ptr) & ~(aAlignment - 1)));
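// For example (illustrative numbers, assuming 4 KiB pages): for a
// malloc(100) the usable size might be 112, so ptr starts at
// pagePtr + 4096 - 112 = pagePtr + 3984; a 64-byte alignment request would
// round that down to pagePtr + 3968, leaving 128 bytes between the start of
// the allocation and the guard page.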
1113 #if PHC_LOGGING
1114 Time then = gMut->GetFreeTime(i);
1115 lifetime = then != 0 ? now - then : 0;
1116 #endif
1118 gMut->SetPageInUse(lock, i, aArenaId, ptr, allocStack);
1120 if (aZero) {
1121 memset(ptr, 0, usableSize);
1122 } else {
1123 #ifdef DEBUG
1124 memset(ptr, kAllocJunk, usableSize);
1125 #endif
1129 gMut->IncPageAllocHits(lock);
1130 #if PHC_LOGGING
1131 GMut::PageStats stats = gMut->GetPageStats(lock);
1132 #endif
1133 LOG("PageAlloc(%zu, %zu) -> %p[%zu]/%p (%zu) (z%zu), sAllocDelay <- %zu, "
1134 "fullness %zu/%zu/%zu, hits %zu/%zu (%zu%%), lifetime %zu\n",
1135 aReqSize, aAlignment, pagePtr, i, ptr, usableSize, size_t(aZero),
1136 size_t(newAllocDelay), stats.mNumAlloced, stats.mNumFreed,
1137 kNumAllocPages, gMut->PageAllocHits(lock),
1138 gMut->PageAllocAttempts(lock), gMut->PageAllocHitRate(lock), lifetime);
1139 break;
1142 if (!pagePtr) {
1143 // No pages are available, or VirtualAlloc/mprotect failed.
1144 gMut->IncPageAllocMisses(lock);
1145 #if PHC_LOGGING
1146 GMut::PageStats stats = gMut->GetPageStats(lock);
1147 #endif
1148 LOG("No PageAlloc(%zu, %zu), sAllocDelay <- %zu, fullness %zu/%zu/%zu, "
1149 "hits %zu/%zu (%zu%%)\n",
1150 aReqSize, aAlignment, size_t(newAllocDelay), stats.mNumAlloced,
1151 stats.mNumFreed, kNumAllocPages, gMut->PageAllocHits(lock),
1152 gMut->PageAllocAttempts(lock), gMut->PageAllocHitRate(lock));
1155 // Set the new alloc delay.
1156 GAtomic::SetAllocDelay(newAllocDelay);
1158 return ptr;
1161 static void FreePage(GMutLock aLock, uintptr_t aIndex,
1162 const Maybe<arena_id_t>& aArenaId,
1163 const StackTrace& aFreeStack, Delay aReuseDelay) {
1164 void* pagePtr = gConst->AllocPagePtr(aIndex);
1165 #ifdef XP_WIN
1166 if (!VirtualFree(pagePtr, kPageSize, MEM_DECOMMIT)) {
1167 return;
1169 #else
1170 if (mmap(pagePtr, kPageSize, PROT_NONE, MAP_FIXED | MAP_PRIVATE | MAP_ANON,
1171 -1, 0) == MAP_FAILED) {
1172 return;
1174 #endif
1176 gMut->SetPageFreed(aLock, aIndex, aArenaId, aFreeStack, aReuseDelay);
1179 //---------------------------------------------------------------------------
1180 // replace-malloc machinery
1181 //---------------------------------------------------------------------------
1183 // This handles malloc, moz_arena_malloc, and realloc-with-a-nullptr.
1184 MOZ_ALWAYS_INLINE static void* PageMalloc(const Maybe<arena_id_t>& aArenaId,
1185 size_t aReqSize) {
1186 void* ptr = MaybePageAlloc(aArenaId, aReqSize, /* aAlignment */ 1,
1187 /* aZero */ false);
1188 return ptr ? ptr
1189 : (aArenaId.isSome()
1190 ? sMallocTable.moz_arena_malloc(*aArenaId, aReqSize)
1191 : sMallocTable.malloc(aReqSize));
1194 static void* replace_malloc(size_t aReqSize) {
1195 return PageMalloc(Nothing(), aReqSize);
1198 static Delay ReuseDelay(GMutLock aLock) {
1199 return (kAvgPageReuseDelay / 2) +
1200 Rnd64ToDelay<kAvgPageReuseDelay / 2>(gMut->Random64(aLock));
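// For example (arithmetic only): with kAvgPageReuseDelay = 256 * 1024 this
// returns 131072 plus a value in 1..262144, i.e. a delay in the range
// 131073..393216 with an average of ~262144, as described in the comment on
// kAvgPageReuseDelay above.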
1203 // This handles both calloc and moz_arena_calloc.
1204 MOZ_ALWAYS_INLINE static void* PageCalloc(const Maybe<arena_id_t>& aArenaId,
1205 size_t aNum, size_t aReqSize) {
1206 CheckedInt<size_t> checkedSize = CheckedInt<size_t>(aNum) * aReqSize;
1207 if (!checkedSize.isValid()) {
1208 return nullptr;
1211 void* ptr = MaybePageAlloc(aArenaId, checkedSize.value(), /* aAlignment */ 1,
1212 /* aZero */ true);
1213 return ptr ? ptr
1214 : (aArenaId.isSome()
1215 ? sMallocTable.moz_arena_calloc(*aArenaId, aNum, aReqSize)
1216 : sMallocTable.calloc(aNum, aReqSize));
1219 static void* replace_calloc(size_t aNum, size_t aReqSize) {
1220 return PageCalloc(Nothing(), aNum, aReqSize);
1223 // This function handles both realloc and moz_arena_realloc.
1225 // As always, realloc is complicated, and doubly so when there are two
1226 // different kinds of allocations in play. Here are the possible transitions,
1227 // and what we do in practice.
1229 // - normal-to-normal: This is straightforward and obviously necessary.
1231 // - normal-to-page: This is disallowed because it would require getting the
1232 // arenaId of the normal allocation, which isn't possible in non-DEBUG builds
1233 // for security reasons.
1235 // - page-to-page: This is done whenever possible, i.e. whenever the new size
1236 // is less than or equal to kPageSize. This choice counterbalances the
1237 // disallowing of normal-to-page allocations, in order to avoid biasing
1238 // towards or away from page allocations. It always occurs in-place.
1240 // - page-to-normal: This is done only when necessary, i.e. only when the new
1241 // size is greater than kPageSize. This choice naturally flows from the
1242 // prior choice on page-to-page transitions.
1244 // In summary: realloc doesn't change the allocation kind unless it must.
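//
// A sketch of the two page-allocation cases (assuming 4 KiB pages and that
// `p` is currently a PHC page allocation):
//
//   p = realloc(p, 3000);  // page-to-page: stays on the same PHC page,
//                          // moved so its end still abuts the guard page
//   p = realloc(p, 8192);  // page-to-normal: too large for one page, so the
//                          // bytes move to a mozjemalloc allocation and the
//                          // PHC page is freed
//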
1246 MOZ_ALWAYS_INLINE static void* PageRealloc(const Maybe<arena_id_t>& aArenaId,
1247 void* aOldPtr, size_t aNewSize) {
1248 if (!aOldPtr) {
1249 // Null pointer. Treat like malloc(aNewSize).
1250 return PageMalloc(aArenaId, aNewSize);
1253 PtrKind pk = gConst->PtrKind(aOldPtr);
1254 if (pk.IsNothing()) {
1255 // A normal-to-normal transition.
1256 return aArenaId.isSome()
1257 ? sMallocTable.moz_arena_realloc(*aArenaId, aOldPtr, aNewSize)
1258 : sMallocTable.realloc(aOldPtr, aNewSize);
1261 if (pk.IsGuardPage()) {
1262 GMut::CrashOnGuardPage(aOldPtr);
1265 // At this point we know we have an allocation page.
1266 uintptr_t index = pk.AllocPageIndex();
1268 // A page-to-something transition.
1270 // Note that `disable` has no effect unless it is emplaced below.
1271 Maybe<AutoDisableOnCurrentThread> disable;
1272 // Get the stack trace *before* locking the mutex.
1273 StackTrace stack;
1274 if (GTls::IsDisabledOnCurrentThread()) {
1275 // PHC is disabled on this thread. Leave the stack empty.
1276 } else {
1277 // Disable on this thread *before* getting the stack trace.
1278 disable.emplace();
1279 stack.Fill();
1282 MutexAutoLock lock(GMut::sMutex);
1284 // Check for realloc() of a freed block.
1285 gMut->EnsureValidAndInUse(lock, aOldPtr, index);
1287 if (aNewSize <= kPageSize) {
1288 // A page-to-page transition. Just keep using the page allocation. We do
1289 // this even if the thread is disabled, because it doesn't create a new
1290 // page allocation. Note that ResizePageInUse() checks aArenaId.
1292 // Move the bytes with memmove(), because the old allocation and the new
1293 // allocation overlap. Move the usable size rather than the requested size,
1294 // because the user might have used malloc_usable_size() and filled up the
1295 // usable size.
1296 size_t oldUsableSize = gMut->PageUsableSize(lock, index);
1297 size_t newUsableSize = sMallocTable.malloc_good_size(aNewSize);
1298 uint8_t* pagePtr = gConst->AllocPagePtr(index);
1299 uint8_t* newPtr = pagePtr + kPageSize - newUsableSize;
1300 memmove(newPtr, aOldPtr, std::min(oldUsableSize, aNewSize));
1301 gMut->ResizePageInUse(lock, index, aArenaId, newPtr, stack);
1302 LOG("PageRealloc-Reuse(%p, %zu) -> %p\n", aOldPtr, aNewSize, newPtr);
1303 return newPtr;
1306 // A page-to-normal transition (with the new size greater than page-sized).
1307 // (Note that aArenaId is checked below.)
1308 void* newPtr;
1309 if (aArenaId.isSome()) {
1310 newPtr = sMallocTable.moz_arena_malloc(*aArenaId, aNewSize);
1311 } else {
1312 Maybe<arena_id_t> oldArenaId = gMut->PageArena(lock, index);
1313 newPtr = (oldArenaId.isSome()
1314 ? sMallocTable.moz_arena_malloc(*oldArenaId, aNewSize)
1315 : sMallocTable.malloc(aNewSize));
1317 if (!newPtr) {
1318 return nullptr;
1321 MOZ_ASSERT(aNewSize > kPageSize);
1323 Delay reuseDelay = ReuseDelay(lock);
1325 // Copy the usable size rather than the requested size, because the user
1326 // might have used malloc_usable_size() and filled up the usable size. Note
1327 // that FreePage() checks aArenaId (via SetPageFreed()).
1328 size_t oldUsableSize = gMut->PageUsableSize(lock, index);
1329 memcpy(newPtr, aOldPtr, std::min(oldUsableSize, aNewSize));
1330 FreePage(lock, index, aArenaId, stack, reuseDelay);
1331 LOG("PageRealloc-Free(%p[%zu], %zu) -> %p, %zu delay, reuse at ~%zu\n",
1332 aOldPtr, index, aNewSize, newPtr, size_t(reuseDelay),
1333 size_t(GAtomic::Now()) + reuseDelay);
1335 return newPtr;
1338 static void* replace_realloc(void* aOldPtr, size_t aNewSize) {
1339 return PageRealloc(Nothing(), aOldPtr, aNewSize);
1342 // This handles both free and moz_arena_free.
1343 MOZ_ALWAYS_INLINE static void PageFree(const Maybe<arena_id_t>& aArenaId,
1344 void* aPtr) {
1345 PtrKind pk = gConst->PtrKind(aPtr);
1346 if (pk.IsNothing()) {
1347 // Not a page allocation.
1348 return aArenaId.isSome() ? sMallocTable.moz_arena_free(*aArenaId, aPtr)
1349 : sMallocTable.free(aPtr);
1352 if (pk.IsGuardPage()) {
1353 GMut::CrashOnGuardPage(aPtr);
1356 // At this point we know we have an allocation page.
1357 uintptr_t index = pk.AllocPageIndex();
1359 // Note that `disable` has no effect unless it is emplaced below.
1360 Maybe<AutoDisableOnCurrentThread> disable;
1361 // Get the stack trace *before* locking the mutex.
1362 StackTrace freeStack;
1363 if (GTls::IsDisabledOnCurrentThread()) {
1364 // PHC is disabled on this thread. Leave the stack empty.
1365 } else {
1366 // Disable on this thread *before* getting the stack trace.
1367 disable.emplace();
1368 freeStack.Fill();
1371 MutexAutoLock lock(GMut::sMutex);
1373 // Check for a double-free.
1374 gMut->EnsureValidAndInUse(lock, aPtr, index);
1376 // Note that FreePage() checks aArenaId (via SetPageFreed()).
1377 Delay reuseDelay = ReuseDelay(lock);
1378 FreePage(lock, index, aArenaId, freeStack, reuseDelay);
1380 #if PHC_LOGGING
1381 GMut::PageStats stats = gMut->GetPageStats(lock);
1382 #endif
1383 LOG("PageFree(%p[%zu]), %zu delay, reuse at ~%zu, fullness %zu/%zu/%zu\n",
1384 aPtr, index, size_t(reuseDelay), size_t(GAtomic::Now()) + reuseDelay,
1385 stats.mNumAlloced, stats.mNumFreed, kNumAllocPages);
1388 static void replace_free(void* aPtr) { return PageFree(Nothing(), aPtr); }
1390 // This handles memalign and moz_arena_memalign.
1391 MOZ_ALWAYS_INLINE static void* PageMemalign(const Maybe<arena_id_t>& aArenaId,
1392 size_t aAlignment,
1393 size_t aReqSize) {
1394 MOZ_RELEASE_ASSERT(IsPowerOfTwo(aAlignment));
1396 // PHC can't satisfy an alignment greater than a page size, so fall back to
1397 // mozjemalloc in that case.
1398 void* ptr = nullptr;
1399 if (aAlignment <= kPageSize) {
1400 ptr = MaybePageAlloc(aArenaId, aReqSize, aAlignment, /* aZero */ false);
1402 return ptr ? ptr
1403 : (aArenaId.isSome()
1404 ? sMallocTable.moz_arena_memalign(*aArenaId, aAlignment,
1405 aReqSize)
1406 : sMallocTable.memalign(aAlignment, aReqSize));
1409 static void* replace_memalign(size_t aAlignment, size_t aReqSize) {
1410 return PageMemalign(Nothing(), aAlignment, aReqSize);
1413 static size_t replace_malloc_usable_size(usable_ptr_t aPtr) {
1414 PtrKind pk = gConst->PtrKind(aPtr);
1415 if (pk.IsNothing()) {
1416 // Not a page allocation. Measure it normally.
1417 return sMallocTable.malloc_usable_size(aPtr);
1420 if (pk.IsGuardPage()) {
1421 GMut::CrashOnGuardPage(const_cast<void*>(aPtr));
1424 // At this point we know aPtr lands within an allocation page, due to the
1425 // math done in the PtrKind constructor. But if aPtr points to memory
1426 // before the base address of the allocation, we return 0.
1427 uintptr_t index = pk.AllocPageIndex();
1429 MutexAutoLock lock(GMut::sMutex);
1431 void* pageBaseAddr = gMut->AllocPageBaseAddr(lock, index);
1433 if (MOZ_UNLIKELY(aPtr < pageBaseAddr)) {
1434 return 0;
1437 return gMut->PageUsableSize(lock, index);
1440 static size_t metadata_size() {
1441 return sMallocTable.malloc_usable_size(gConst) +
1442 sMallocTable.malloc_usable_size(gMut);
1445 void replace_jemalloc_stats(jemalloc_stats_t* aStats,
1446 jemalloc_bin_stats_t* aBinStats) {
1447 sMallocTable.jemalloc_stats_internal(aStats, aBinStats);
1449 // Add all the pages to `mapped`.
1450 size_t mapped = kAllPagesSize;
1451 aStats->mapped += mapped;
1453 size_t allocated = 0;
1455 MutexAutoLock lock(GMut::sMutex);
1457 // Add usable space of in-use allocations to `allocated`.
1458 for (size_t i = 0; i < kNumAllocPages; i++) {
1459 if (gMut->IsPageInUse(lock, i)) {
1460 allocated += gMut->PageUsableSize(lock, i);
1464 aStats->allocated += allocated;
1466 // guards is the gap between `allocated` and `mapped`. In some ways this
1467 // almost fits into aStats->wasted since it feels like wasted memory. However
1468 // wasted should only include committed memory and these guard pages are
1469 // uncommitted. Therefore we don't include it anywhere.
1470 // size_t guards = mapped - allocated;
1472 // aStats.page_cache and aStats.bin_unused are left unchanged because PHC
1473 // doesn't have anything corresponding to those.
1475 // The metadata is stored in normal heap allocations, so they're measured by
1476 // mozjemalloc as `allocated`. Move them into `bookkeeping`.
1477 // They're also reported under explicit/heap-overhead/phc/fragmentation in
1478 // about:memory.
1479 size_t bookkeeping = metadata_size();
1480 aStats->allocated -= bookkeeping;
1481 aStats->bookkeeping += bookkeeping;
1484 void replace_jemalloc_ptr_info(const void* aPtr, jemalloc_ptr_info_t* aInfo) {
1485 // We need to implement this properly, because various code locations do
1486 // things like checking that allocations are in the expected arena.
1487 PtrKind pk = gConst->PtrKind(aPtr);
1488 if (pk.IsNothing()) {
1489 // Not a page allocation.
1490 return sMallocTable.jemalloc_ptr_info(aPtr, aInfo);
1493 if (pk.IsGuardPage()) {
1494 // Treat a guard page as unknown because there's no better alternative.
1495 *aInfo = {TagUnknown, nullptr, 0, 0};
1496 return;
1499 // At this point we know we have an allocation page.
1500 uintptr_t index = pk.AllocPageIndex();
1502 MutexAutoLock lock(GMut::sMutex);
1504 gMut->FillJemallocPtrInfo(lock, aPtr, index, aInfo);
1505 #if DEBUG
1506 LOG("JemallocPtrInfo(%p[%zu]) -> {%zu, %p, %zu, %zu}\n", aPtr, index,
1507 size_t(aInfo->tag), aInfo->addr, aInfo->size, aInfo->arenaId);
1508 #else
1509 LOG("JemallocPtrInfo(%p[%zu]) -> {%zu, %p, %zu}\n", aPtr, index,
1510 size_t(aInfo->tag), aInfo->addr, aInfo->size);
1511 #endif
1514 arena_id_t replace_moz_create_arena_with_params(arena_params_t* aParams) {
1515 // No need to do anything special here.
1516 return sMallocTable.moz_create_arena_with_params(aParams);
1519 void replace_moz_dispose_arena(arena_id_t aArenaId) {
1520 // No need to do anything special here.
1521 return sMallocTable.moz_dispose_arena(aArenaId);
1524 void replace_moz_set_max_dirty_page_modifier(int32_t aModifier) {
1525 // No need to do anything special here.
1526 return sMallocTable.moz_set_max_dirty_page_modifier(aModifier);
1529 void* replace_moz_arena_malloc(arena_id_t aArenaId, size_t aReqSize) {
1530 return PageMalloc(Some(aArenaId), aReqSize);
1533 void* replace_moz_arena_calloc(arena_id_t aArenaId, size_t aNum,
1534 size_t aReqSize) {
1535 return PageCalloc(Some(aArenaId), aNum, aReqSize);
1538 void* replace_moz_arena_realloc(arena_id_t aArenaId, void* aOldPtr,
1539 size_t aNewSize) {
1540 return PageRealloc(Some(aArenaId), aOldPtr, aNewSize);
1543 void replace_moz_arena_free(arena_id_t aArenaId, void* aPtr) {
1544 return PageFree(Some(aArenaId), aPtr);
1547 void* replace_moz_arena_memalign(arena_id_t aArenaId, size_t aAlignment,
1548 size_t aReqSize) {
1549 return PageMemalign(Some(aArenaId), aAlignment, aReqSize);
1552 class PHCBridge : public ReplaceMallocBridge {
1553 virtual bool IsPHCAllocation(const void* aPtr, phc::AddrInfo* aOut) override {
1554 PtrKind pk = gConst->PtrKind(aPtr);
1555 if (pk.IsNothing()) {
1556 return false;
1559 bool isGuardPage = false;
1560 if (pk.IsGuardPage()) {
1561 if ((uintptr_t(aPtr) % kPageSize) < (kPageSize / 2)) {
1562 // The address is in the lower half of a guard page, so it's probably an
1563 // overflow. But first check that it is not on the very first guard
1564 // page, in which case it cannot be an overflow, and we ignore it.
1565 if (gConst->IsInFirstGuardPage(aPtr)) {
1566 return false;
1569 // Get the allocation page preceding this guard page.
1570 pk = gConst->PtrKind(static_cast<const uint8_t*>(aPtr) - kPageSize);
1572 } else {
1573 // The address is in the upper half of a guard page, so it's probably an
1574 // underflow. Get the allocation page following this guard page.
1575 pk = gConst->PtrKind(static_cast<const uint8_t*>(aPtr) + kPageSize);
1578 // Make a note of the fact that we hit a guard page.
1579 isGuardPage = true;
1582 // At this point we know we have an allocation page.
1583 uintptr_t index = pk.AllocPageIndex();
1585 if (aOut) {
1586 MutexAutoLock lock(GMut::sMutex);
1587 gMut->FillAddrInfo(lock, index, aPtr, isGuardPage, *aOut);
1588 LOG("IsPHCAllocation: %zu, %p, %zu, %zu, %zu\n", size_t(aOut->mKind),
1589 aOut->mBaseAddr, aOut->mUsableSize,
1590 aOut->mAllocStack.isSome() ? aOut->mAllocStack->mLength : 0,
1591 aOut->mFreeStack.isSome() ? aOut->mFreeStack->mLength : 0);
1593 return true;
1596 virtual void DisablePHCOnCurrentThread() override {
1597 GTls::DisableOnCurrentThread();
1598 LOG("DisablePHCOnCurrentThread: %zu\n", 0ul);
1601 virtual void ReenablePHCOnCurrentThread() override {
1602 GTls::EnableOnCurrentThread();
1603 LOG("ReenablePHCOnCurrentThread: %zu\n", 0ul);
1606 virtual bool IsPHCEnabledOnCurrentThread() override {
1607 bool enabled = !GTls::IsDisabledOnCurrentThread();
1608 LOG("IsPHCEnabledOnCurrentThread: %zu\n", size_t(enabled));
1609 return enabled;
1612 virtual void PHCMemoryUsage(
1613 mozilla::phc::MemoryUsage& aMemoryUsage) override {
1614 aMemoryUsage.mMetadataBytes = metadata_size();
1615 if (gMut) {
1616 MutexAutoLock lock(GMut::sMutex);
1617 aMemoryUsage.mFragmentationBytes = gMut->FragmentationBytes();
1618 } else {
1619 aMemoryUsage.mFragmentationBytes = 0;
1624 // WARNING: this function runs *very* early -- before all static initializers
1625 // have run. For this reason, non-scalar globals (gConst, gMut) are allocated
1626 // dynamically (so we can guarantee their construction in this function) rather
1627 // than statically. GAtomic and GTls contain simple static data that doesn't
1628 // involve static initializers so they don't need to be allocated dynamically.
1629 void replace_init(malloc_table_t* aMallocTable, ReplaceMallocBridge** aBridge) {
1630 // Don't run PHC if the page size isn't 4 KiB.
1631 jemalloc_stats_t stats;
1632 aMallocTable->jemalloc_stats_internal(&stats, nullptr);
1633 if (stats.page_size != kPageSize) {
1634 return;
1637 sMallocTable = *aMallocTable;
1639 // The choices of which functions to replace are complex enough that we set
1640 // them individually instead of using MALLOC_FUNCS/malloc_decls.h.
1642 aMallocTable->malloc = replace_malloc;
1643 aMallocTable->calloc = replace_calloc;
1644 aMallocTable->realloc = replace_realloc;
1645 aMallocTable->free = replace_free;
1646 aMallocTable->memalign = replace_memalign;
1648 // posix_memalign, aligned_alloc & valloc: unset, which means they fall back
1649 // to replace_memalign.
1650 aMallocTable->malloc_usable_size = replace_malloc_usable_size;
1651 // default malloc_good_size: the default suffices.
1653 aMallocTable->jemalloc_stats_internal = replace_jemalloc_stats;
1654 // jemalloc_purge_freed_pages: the default suffices.
1655 // jemalloc_free_dirty_pages: the default suffices.
1656 // jemalloc_thread_local_arena: the default suffices.
1657 aMallocTable->jemalloc_ptr_info = replace_jemalloc_ptr_info;
1659 aMallocTable->moz_create_arena_with_params =
1660 replace_moz_create_arena_with_params;
1661 aMallocTable->moz_dispose_arena = replace_moz_dispose_arena;
1662 aMallocTable->moz_arena_malloc = replace_moz_arena_malloc;
1663 aMallocTable->moz_arena_calloc = replace_moz_arena_calloc;
1664 aMallocTable->moz_arena_realloc = replace_moz_arena_realloc;
1665 aMallocTable->moz_arena_free = replace_moz_arena_free;
1666 aMallocTable->moz_arena_memalign = replace_moz_arena_memalign;
1668 static PHCBridge bridge;
1669 *aBridge = &bridge;
1671 #ifndef XP_WIN
1672 // Avoid deadlocks when forking by acquiring our state lock prior to forking
1673 // and releasing it after forking. See |LogAlloc|'s |replace_init| for
1674 // in-depth details.
1676 // Note: This must run after attempting an allocation so as to give the
1677 // system malloc a chance to insert its own atfork handler.
1678 sMallocTable.malloc(-1);
1679 pthread_atfork(GMut::prefork, GMut::postfork_parent, GMut::postfork_child);
1680 #endif
1682 // gConst and gMut are never freed. They live for the life of the process.
1683 gConst = InfallibleAllocPolicy::new_<GConst>();
1684 GTls::Init();
1685 gMut = InfallibleAllocPolicy::new_<GMut>();
1687 MutexAutoLock lock(GMut::sMutex);
1688 Delay firstAllocDelay =
1689 Rnd64ToDelay<kAvgFirstAllocDelay>(gMut->Random64(lock));
1690 GAtomic::Init(firstAllocDelay);