/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 * vim: set ts=8 sts=2 et sw=2 tw=80:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
/*
 * [SMDOC] Garbage Collector
 *
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC
 * to become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
 *
 * The last condition is an engine-internal mechanism to ensure that
 * incremental collection is not carried out without the correct barriers
 * being implemented. For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - Prepare    - unmarks GC things, discards JIT code and other setup
 *  - MarkRoots  - marks the stack and other roots
 *  - Mark       - incrementally marks reachable things
 *  - Sweep      - sweeps zones in groups and continues marking unswept zones
 *  - Finalize   - performs background finalization, concurrent with mutator
 *  - Compact    - incrementally compacts by zone
 *  - Decommit   - performs background decommit and chunk removal
 *
 * Roots are marked in the first MarkRoots slice; this is the start of the GC
 * proper. The following states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   Prepare:    Starts background task to unmark GC things
 *
 *          ... JS code runs, background unmarking finishes ...
 *
 * Slice 2:   MarkRoots:  Roots are pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 3:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more
 *                        zones.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root
 * memory location written to by the mutator while the collection is in
 * progress, using write barriers. This is described in gc/Barrier.h.
 *
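 * As an illustrative sketch only (the real barriers live in gc/Barrier.h and
 * are generated by the barriered pointer wrapper classes), a pre-write
 * barrier on a hypothetical object slot conceptually does this:
 *
 *   void preWriteBarrier(Zone* zone, JSObject** slot, JSObject* newValue) {
 *     if (zone->needsIncrementalBarrier()) {
 *       markOld(*slot);  // record the snapshot-at-the-beginning value
 *     }
 *     *slot = newValue;
 *   }
 *
 * Here |preWriteBarrier| and |markOld| are made-up names used for exposition;
 * only the pattern (mark the value being overwritten while an incremental
 * collection is marking) corresponds to what the engine does.
 *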
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code was allowed to run after the start of sweeping, it could
 * observe the state of the cache and create a new reference to an object that
 * was just about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine
 * we sweep B first and then return to the mutator. It's possible that the
 * mutator could cause |a| to become alive through a read barrier (perhaps it
 * was a shape that was accessed via a shape table). Then we would need to mark
 * |b|, which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
 *
 * GC things without finalizers, and things with finalizers that are able to
 * run in the background, are swept on the background thread. This accounts
 * for most of the sweeping work.
 *
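 * As a concrete example of the constraint: if an object in zone A points to
 * an object in zone B, zone B must be placed in the same sweep group as zone
 * A or in a later one. If something in zone B also points back into zone A,
 * the two zones form a cycle (one strongly connected component) and end up in
 * the same sweep group, so they finish marking and start sweeping together.
 *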
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue non-incrementally until we have swept the current sweep
 * group. Following a reset, a new collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things. They are contained in
 * a special zone and things in other zones may have pointers to them that are
 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
 * with the atoms it might be keeping alive, and atoms are only collected if
 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
 * this bitmap is managed.
 */
193 #include "gc/GC-inl.h"
195 #include "mozilla/Range.h"
196 #include "mozilla/ScopeExit.h"
197 #include "mozilla/TextUtils.h"
198 #include "mozilla/TimeStamp.h"
201 #include <initializer_list>
207 #include "jsapi.h" // JS_AbortIfWrongThread
210 #include "debugger/DebugAPI.h"
211 #include "gc/ClearEdgesTracer.h"
212 #include "gc/GCContext.h"
213 #include "gc/GCInternals.h"
214 #include "gc/GCLock.h"
215 #include "gc/GCProbes.h"
216 #include "gc/Memory.h"
217 #include "gc/ParallelMarking.h"
218 #include "gc/ParallelWork.h"
219 #include "gc/WeakMap.h"
220 #include "jit/ExecutableAllocator.h"
221 #include "jit/JitCode.h"
222 #include "jit/JitRealm.h"
223 #include "jit/JitRuntime.h"
224 #include "jit/ProcessExecutableMemory.h"
225 #include "js/HeapAPI.h" // JS::GCCellPtr
226 #include "js/Printer.h"
227 #include "js/SliceBudget.h"
228 #include "util/DifferentialTesting.h"
229 #include "vm/BigIntType.h"
230 #include "vm/EnvironmentObject.h"
231 #include "vm/GetterSetter.h"
232 #include "vm/HelperThreadState.h"
233 #include "vm/JitActivation.h"
234 #include "vm/JSObject.h"
235 #include "vm/JSScript.h"
236 #include "vm/PropMap.h"
237 #include "vm/Realm.h"
238 #include "vm/Shape.h"
239 #include "vm/StringType.h"
240 #include "vm/SymbolType.h"
243 #include "gc/Heap-inl.h"
244 #include "gc/Nursery-inl.h"
245 #include "gc/ObjectKind-inl.h"
246 #include "gc/PrivateIterators-inl.h"
247 #include "vm/GeckoProfiler-inl.h"
248 #include "vm/JSContext-inl.h"
249 #include "vm/Realm-inl.h"
250 #include "vm/Stack-inl.h"
using namespace js::gc;

using mozilla::MakeScopeExit;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::TimeDuration;
using mozilla::TimeStamp;

using JS::AutoGCRooter;
const AllocKind gc::slotsToThingKind[] = {
    /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
    /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
    /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
    /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
    /* 16 */ AllocKind::OBJECT16};

static_assert(std::size(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");
MOZ_THREAD_LOCAL(JS::GCContext*) js::TlsGCContext;

JS::GCContext::GCContext(JSRuntime* runtime) : runtime_(runtime) {}

JS::GCContext::~GCContext() {
  MOZ_ASSERT(!hasJitCodeToPoison());
  MOZ_ASSERT(!isCollecting());
  MOZ_ASSERT(gcUse() == GCUse::None);
  MOZ_ASSERT(!gcSweepZone());
  MOZ_ASSERT(!isTouchingGrayThings());
}

void JS::GCContext::poisonJitCode() {
  if (hasJitCodeToPoison()) {
    jit::ExecutableAllocator::poisonCode(runtime(), jitPoisonRanges);
    jitPoisonRanges.clearAndFree();
  }
}
void GCRuntime::verifyAllChunks() {
  AutoLockGC lock(this);
  fullChunks(lock).verifyChunks();
  availableChunks(lock).verifyChunks();
  emptyChunks(lock).verifyChunks();
}

void GCRuntime::setMinEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
  minEmptyChunkCount_ = value;
  if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
    maxEmptyChunkCount_ = minEmptyChunkCount_;
  }
  MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}

void GCRuntime::setMaxEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
  maxEmptyChunkCount_ = value;
  if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
    minEmptyChunkCount_ = maxEmptyChunkCount_;
  }
  MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}
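// Illustrative behaviour of the two setters above: starting from (min = 1,
// max = 30), setMinEmptyChunkCount(40, lock) also raises the maximum to 40,
// and a subsequent setMaxEmptyChunkCount(0, lock) drags the minimum down to
// 0, so the asserted invariant max >= min always holds after either call.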
inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
  return emptyChunks(lock).count() > minEmptyChunkCount(lock);
}

ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
  MOZ_ASSERT(emptyChunks(lock).verify());
  MOZ_ASSERT(minEmptyChunkCount(lock) <= maxEmptyChunkCount(lock));

  while (tooManyEmptyChunks(lock)) {
    TenuredChunk* chunk = emptyChunks(lock).pop();
    prepareToFreeChunk(chunk->info);
  }

  MOZ_ASSERT(expired.verify());
  MOZ_ASSERT(emptyChunks(lock).verify());
  MOZ_ASSERT(emptyChunks(lock).count() <= maxEmptyChunkCount(lock));
  MOZ_ASSERT(emptyChunks(lock).count() <= minEmptyChunkCount(lock));
static void FreeChunkPool(ChunkPool& pool) {
  for (ChunkPool::Iter iter(pool); !iter.done();) {
    TenuredChunk* chunk = iter.get();
    MOZ_ASSERT(chunk->unused());
    UnmapPages(static_cast<void*>(chunk), ChunkSize);
  }
  MOZ_ASSERT(pool.count() == 0);
}

void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
  FreeChunkPool(emptyChunks(lock));
}

inline void GCRuntime::prepareToFreeChunk(TenuredChunkInfo& info) {
  MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
  numArenasFreeCommitted -= info.numArenasFreeCommitted;
  stats().count(gcstats::COUNT_DESTROY_CHUNK);
  /*
   * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
   * frees the chunk.
   */
  info.numArenasFreeCommitted = 0;
}
void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
  MOZ_ASSERT(arena->allocated());
  MOZ_ASSERT(!arena->onDelayedMarkingList());
  MOZ_ASSERT(TlsGCContext.get()->isFinalizing());

  arena->zone->gcHeapSize.removeGCArena(heapSize);
  arena->release(lock);
  arena->chunk()->releaseArena(this, arena, lock);
}
GCRuntime::GCRuntime(JSRuntime* rt)
      mainThreadContext(rt),
      heapState_(JS::HeapState::Idle),
      fullGCRequested(false),
      helperThreadRatio(TuningDefaults::HelperThreadRatio),
      maxHelperThreads(TuningDefaults::MaxHelperThreads),
      helperThreadCount(1),
      createBudgetCallback(nullptr),
      minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
      maxEmptyChunkCount_(TuningDefaults::MaxEmptyChunkCount),
      nextCellUniqueId_(LargestTaggedNullCellPointer +
                        1),  // Ensure disjoint from null tagged pointers.
      numArenasFreeCommitted(0),
      verifyPreData(nullptr),
      lastGCStartTime_(TimeStamp::Now()),
      lastGCEndTime_(TimeStamp::Now()),
      incrementalGCEnabled(TuningDefaults::IncrementalGCEnabled),
      perZoneGCEnabled(TuningDefaults::PerZoneGCEnabled),
      numActiveZoneIters(0),
      cleanUpEverything(false),
      majorGCTriggerReason(JS::GCReason::NO_REASON),
      incrementalState(gc::State::NotActive),
      initialState(gc::State::NotActive),
      lastMarkSlice(false),
      markOnBackgroundThreadDuringSweeping(false),
      useBackgroundThreads(false),
      hadShutdownGC(false),
      requestSliceAfterBackgroundTask(false),
      lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
      lifoBlocksToFreeAfterMinorGC(
          (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
      sweepGroups(nullptr),
      currentSweepGroup(nullptr),
      abortSweepAfterCurrentGroup(false),
      sweepMarkResult(IncrementalProgress::NotFinished),
      startedCompacting(false),
      relocatedArenasToRelease(nullptr),
      markingValidator(nullptr),
      defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
      incrementalAllowed(true),
      compactingEnabled(TuningDefaults::CompactingEnabled),
      parallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled),
      deterministicOnly(false),
      selectedForMarking(rt),
      fullCompartmentChecks(false),
      alwaysPreserveCode(false),
      lowMemoryState(false),
      lock(mutexid::GCLock),
      delayedMarkingLock(mutexid::GCDelayedMarkingLock),
      allocTask(this, emptyChunks_.ref()),
      storeBuffer_(rt, nursery()),
      lastAllocRateUpdateTime(TimeStamp::Now()) {
using CharRange = mozilla::Range<const char>;
using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;

static bool SplitStringBy(CharRange text, char delimiter,
                          CharRangeVector* result) {
  auto start = text.begin();
  for (auto ptr = start; ptr != text.end(); ptr++) {
    if (*ptr == delimiter) {
      if (!result->emplaceBack(start, ptr)) {
        return false;
      }
      start = ptr + 1;
    }
  }
  return result->emplaceBack(start, text.end());
}

static bool ParseTimeDuration(CharRange text, TimeDuration* durationOut) {
  const char* str = text.begin().get();
  char* end;
  *durationOut = TimeDuration::FromMilliseconds(strtol(str, &end, 10));
  return str != end && end == text.end().get();
}

static void PrintProfileHelpAndExit(const char* envName, const char* helpText) {
  fprintf(stderr, "%s=N[,(main|all)]\n", envName);
  fprintf(stderr, "%s", helpText);
  exit(0);
}
void js::gc::ReadProfileEnv(const char* envName, const char* helpText,
                            bool* enableOut, bool* workersOut,
                            TimeDuration* thresholdOut) {
  *thresholdOut = TimeDuration();

  const char* env = getenv(envName);

  if (strcmp(env, "help") == 0) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  CharRangeVector parts;
  auto text = CharRange(env, strlen(env));
  if (!SplitStringBy(text, ',', &parts)) {
    MOZ_CRASH("OOM parsing environment variable");
  }

  if (parts.length() == 0 || parts.length() > 2) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  if (!ParseTimeDuration(parts[0], thresholdOut)) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  if (parts.length() == 2) {
    const char* threads = parts[1].begin().get();
    if (strcmp(threads, "all") == 0) {
      *workersOut = true;
    } else if (strcmp(threads, "main") != 0) {
      PrintProfileHelpAndExit(envName, helpText);
    }
  }
}
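// Example of the format parsed above (the variable name is whatever |envName|
// the caller passes in): a value of "5,all" enables profiling with a 5 ms
// threshold for worker runtimes as well as the main runtime, "5,main" or just
// "5" restricts it to the main runtime, and "help" prints the help text.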
bool js::gc::ShouldPrintProfile(JSRuntime* runtime, bool enable,
                                bool profileWorkers, TimeDuration threshold,
                                TimeDuration duration) {
  return enable && (runtime->isMainRuntime() || profileWorkers) &&
         duration >= threshold;
}

void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
                            uint32_t* scheduled) {
  *zealBits = zealModeBits;
  *frequency = zealFrequency;
  *scheduled = nextScheduled;
}
const char gc::ZealModeHelpText[] =
    "  Specifies how zealous the garbage collector should be. Some of these "
    "  be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
    "  both modes 2 and 4. Modes can be specified by name or number.\n"
    "  0: (None) Normal amount of collection (resets all modes)\n"
    "  1: (RootsChange) Collect when roots are added or removed\n"
    "  2: (Alloc) Collect when every N allocations (default: 100)\n"
    "  4: (VerifierPre) Verify pre write barriers between instructions\n"
    "  6: (YieldBeforeRootMarking) Incremental GC in two slices that yields "
    "before root marking\n"
    "  7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
    "  8: (YieldBeforeMarking) Incremental GC in two slices that yields "
    "the root marking and marking phases\n"
    "  9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
    "the marking and sweeping phases\n"
    "  10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
    "  11: (IncrementalMarkingValidator) Verify incremental marking\n"
    "  12: (ElementsBarrier) Use the individual element post-write barrier\n"
    "      regardless of elements size\n"
    "  13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
    "  14: (Compact) Perform a shrinking collection every N allocations\n"
    "  15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
    "  17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
    "before sweeping the atoms table\n"
    "  18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
    "  19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
    "before sweeping weak caches\n"
    "  21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
    "before sweeping foreground finalized objects\n"
    "  22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
    "before sweeping non-object GC things\n"
    "  23: (YieldBeforeSweepingPropMapTrees) Incremental GC in two slices "
    "before sweeping shape trees\n"
    "  24: (CheckWeakMapMarking) Check weak map marking invariants after "
    "  25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
    "      during gray marking\n";
// The set of zeal modes that control incremental slices. These modes are
// mutually exclusive.
static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
    ZealMode::YieldBeforeRootMarking,
    ZealMode::YieldBeforeMarking,
    ZealMode::YieldBeforeSweeping,
    ZealMode::IncrementalMultipleSlices,
    ZealMode::YieldBeforeSweepingAtoms,
    ZealMode::YieldBeforeSweepingCaches,
    ZealMode::YieldBeforeSweepingObjects,
    ZealMode::YieldBeforeSweepingNonObjects,
    ZealMode::YieldBeforeSweepingPropMapTrees};
void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  if (hasZealMode(ZealMode::GenerationalGC)) {
    evictNursery(JS::GCReason::DEBUG_GC);
    nursery().leaveZealMode();
  }

  if (isIncrementalGCInProgress()) {
    finishGC(JS::GCReason::DEBUG_GC);
  }

  ZealMode zealMode = ZealMode(zeal);
  if (zealMode == ZealMode::GenerationalGC) {
    evictNursery(JS::GCReason::DEBUG_GC);
    nursery().enterZealMode();
  }

  // Some modes are mutually exclusive. If we're setting one of those, we
  // first reset all of them.
  if (IncrementalSliceZealModes.contains(zealMode)) {
    for (auto mode : IncrementalSliceZealModes) {
      clearZealMode(mode);
    }
  }

  bool schedule = zealMode >= ZealMode::Alloc;
  zealModeBits |= 1 << unsigned(zeal);
  zealFrequency = frequency;
  nextScheduled = schedule ? frequency : 0;
}
void GCRuntime::unsetZeal(uint8_t zeal) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
  ZealMode zealMode = ZealMode(zeal);

  if (!hasZealMode(zealMode)) {
    return;
  }

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  if (zealMode == ZealMode::GenerationalGC) {
    evictNursery(JS::GCReason::DEBUG_GC);
    nursery().leaveZealMode();
  }

  clearZealMode(zealMode);

  if (zealModeBits == 0) {
    if (isIncrementalGCInProgress()) {
      finishGC(JS::GCReason::DEBUG_GC);
    }
  }
}

void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }
static bool ParseZealModeName(CharRange text, uint32_t* modeOut) {
  static const ModeInfo zealModes[] = {{"None", 0},
#define ZEAL_MODE(name, value) {#name, strlen(#name), value},
                                       JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
#undef ZEAL_MODE
  };

  for (auto mode : zealModes) {
    if (text.length() == mode.length &&
        memcmp(text.begin().get(), mode.name, mode.length) == 0) {
      *modeOut = mode.value;
      return true;
    }
  }

  return false;
}

static bool ParseZealModeNumericParam(CharRange text, uint32_t* paramOut) {
  if (text.length() == 0) {
    return false;
  }

  for (auto c : text) {
    if (!mozilla::IsAsciiDigit(c)) {
      return false;
    }
  }

  *paramOut = atoi(text.begin().get());
  return true;
}
static bool PrintZealHelpAndFail() {
  fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
  fputs(ZealModeHelpText, stderr);
  return false;
}

bool GCRuntime::parseAndSetZeal(const char* str) {
  // Set the zeal mode from a string consisting of one or more mode specifiers
  // separated by ';', optionally followed by a ',' and the trigger frequency.
  // The mode specifiers can be a mode name or its number.

  auto text = CharRange(str, strlen(str));

  CharRangeVector parts;
  if (!SplitStringBy(text, ',', &parts)) {
    return false;
  }

  if (parts.length() == 0 || parts.length() > 2) {
    return PrintZealHelpAndFail();
  }

  uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
  if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
    return PrintZealHelpAndFail();
  }

  CharRangeVector modes;
  if (!SplitStringBy(parts[0], ';', &modes)) {
    return false;
  }

  for (const auto& descr : modes) {
    uint32_t mode;
    if (!ParseZealModeName(descr, &mode) &&
        !(ParseZealModeNumericParam(descr, &mode) &&
          mode <= unsigned(ZealMode::Limit))) {
      return PrintZealHelpAndFail();
    }

    setZeal(mode, frequency);
  }

  return true;
}
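// Examples accepted by parseAndSetZeal, matching the format printed by
// PrintZealHelpAndFail ("level(;level)*[,N]"): "2" enables mode 2 (Alloc)
// with the default frequency, "14,100" enables mode 14 (Compact) with a
// trigger frequency of 100, and "2;4" enables modes 2 and 4 together.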
const char* js::gc::AllocKindName(AllocKind kind) {
  static const char* const names[] = {
#define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) #allocKind,
      FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
#undef EXPAND_THING_NAME
  };
  static_assert(std::size(names) == AllocKindCount,
                "names array should have an entry for every AllocKind");

  size_t i = size_t(kind);
  MOZ_ASSERT(i < std::size(names));
  return names[i];
}
void js::gc::DumpArenaInfo() {
  fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);

  fprintf(stderr, "GC thing kinds:\n");
  fprintf(stderr, "%25s %8s %8s %8s\n",
          "AllocKind:", "Size:", "Count:", "Padding:");
  for (auto kind : AllAllocKinds()) {
    fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
            Arena::thingSize(kind), Arena::thingsPerArena(kind),
            Arena::firstThingOffset(kind) - ArenaHeaderSize);
  }
}
bool GCRuntime::init(uint32_t maxbytes) {
  MOZ_ASSERT(!wasInitialized());

  MOZ_ASSERT(SystemPageSize());
  Arena::checkLookupTables();

  if (!TlsGCContext.init()) {
    return false;
  }
  TlsGCContext.set(&mainThreadContext.ref());

  updateHelperThreadCount();

  const char* size = getenv("JSGC_MARK_STACK_LIMIT");
  if (size) {
    maybeMarkStackLimit = atoi(size);
  }

  if (!updateMarkersVector()) {
    return false;
  }

  {
    AutoLockGCBgAlloc lock(this);

    MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes));

    if (!nursery().init(lock)) {
      return false;
    }

    const char* pretenureThresholdStr = getenv("JSGC_PRETENURE_THRESHOLD");
    if (pretenureThresholdStr && pretenureThresholdStr[0]) {
      char* last;
      long pretenureThreshold = strtol(pretenureThresholdStr, &last, 10);
      if (last[0] || !tunables.setParameter(JSGC_PRETENURE_THRESHOLD,
                                            pretenureThreshold)) {
        fprintf(stderr, "Invalid value for JSGC_PRETENURE_THRESHOLD: %s\n",
                pretenureThresholdStr);
      }
    }
  }

  const char* zealSpec = getenv("JS_GC_ZEAL");
  if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
    return false;
  }

  for (auto& marker : markers) {
    if (!marker->init()) {
      return false;
    }
  }

  if (!initSweepActions()) {
    return false;
  }

  UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
  if (!zone || !zone->init()) {
    return false;
  }

  // The atoms zone is stored as the first element of the zones vector.
  MOZ_ASSERT(zone->isAtomsZone());
  MOZ_ASSERT(zones().empty());
  MOZ_ALWAYS_TRUE(zones().reserve(1));  // ZonesVector has inline capacity 4.
  zones().infallibleAppend(zone.release());

  gcprobes::Init(this);

  return true;
}
void GCRuntime::finish() {
  MOZ_ASSERT(inPageLoadCount == 0);
  MOZ_ASSERT(!sharedAtomsZone_);

  // Wait for nursery background free to end and disable it to release memory.
  if (nursery().isEnabled()) {
    nursery().disable();
  }

  // Wait until the background finalization and allocation stops and the
  // helper thread shuts down before we forcefully release any remaining GC
  // memory.
  allocTask.cancelAndWait();
  decommitTask.cancelAndWait();

  // Free memory associated with GC verification.

  // Delete all remaining zones.
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    AutoSetThreadIsSweeping threadIsSweeping(rt->gcContext(), zone);
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
        js_delete(realm.get());
      }
      comp->realms().clear();
      js_delete(comp.get());
    }
    zone->compartments().clear();
    js_delete(zone.get());
  }

  FreeChunkPool(fullChunks_.ref());
  FreeChunkPool(availableChunks_.ref());
  FreeChunkPool(emptyChunks_.ref());

  TlsGCContext.set(nullptr);

  gcprobes::Finish(this);

  nursery().printTotalProfileTimes();
  stats().printTotalProfileTimes();
}
bool GCRuntime::freezeSharedAtomsZone() {
  // This is called just after permanent atoms and well-known symbols have been
  // created. At this point all existing atoms and symbols are permanent.
  //
  // This method makes the current atoms zone into a shared atoms zone and
  // removes it from the zones list. Everything in it is marked black. A new
  // empty atoms zone is created, where all atoms local to this runtime will
  // live.
  //
  // The shared atoms zone will not be collected until shutdown when it is
  // returned to the zone list by restoreSharedAtomsZone().

  MOZ_ASSERT(rt->isMainRuntime());
  MOZ_ASSERT(!sharedAtomsZone_);
  MOZ_ASSERT(zones().length() == 1);
  MOZ_ASSERT(atomsZone());
  MOZ_ASSERT(!atomsZone()->wasGCStarted());
  MOZ_ASSERT(!atomsZone()->needsIncrementalBarrier());

  AutoAssertEmptyNursery nurseryIsEmpty(rt->mainContextFromOwnThread());

  atomsZone()->arenas.clearFreeLists();

  for (auto kind : AllAllocKinds()) {
    for (auto thing =
             atomsZone()->cellIterUnsafe<TenuredCell>(kind, nurseryIsEmpty);
         !thing.done(); thing.next()) {
      TenuredCell* cell = thing.getCell();
      MOZ_ASSERT((cell->is<JSString>() &&
                  cell->as<JSString>()->isPermanentAndMayBeShared()) ||
                 (cell->is<JS::Symbol>() &&
                  cell->as<JS::Symbol>()->isPermanentAndMayBeShared()));
    }
  }

  sharedAtomsZone_ = atomsZone();

  UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
  if (!zone || !zone->init()) {
    return false;
  }

  MOZ_ASSERT(zone->isAtomsZone());
  zones().infallibleAppend(zone.release());

  return true;
}
void GCRuntime::restoreSharedAtomsZone() {
  // Return the shared atoms zone to the zone list. This allows the contents of
  // the shared atoms zone to be collected when the parent runtime is shut
  // down.

  if (!sharedAtomsZone_) {
    return;
  }

  MOZ_ASSERT(rt->isMainRuntime());
  MOZ_ASSERT(rt->childRuntimeCount == 0);

  AutoEnterOOMUnsafeRegion oomUnsafe;
  if (!zones().append(sharedAtomsZone_)) {
    oomUnsafe.crash("restoreSharedAtomsZone");
  }

  sharedAtomsZone_ = nullptr;
}
bool GCRuntime::setParameter(JSContext* cx, JSGCParamKey key, uint32_t value) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  AutoStopVerifyingBarriers pauseVerification(rt, false);

  waitBackgroundSweepEnd();

  AutoLockGC lock(this);
  return setParameter(key, value, lock);
}

static bool IsGCThreadParameter(JSGCParamKey key) {
  return key == JSGC_HELPER_THREAD_RATIO || key == JSGC_MAX_HELPER_THREADS ||
         key == JSGC_MARKING_THREAD_COUNT;
}
bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
                             AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = value;
      break;
    case JSGC_INCREMENTAL_GC_ENABLED:
      setIncrementalGCEnabled(value != 0);
      break;
    case JSGC_PER_ZONE_GC_ENABLED:
      perZoneGCEnabled = value != 0;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = value != 0;
      break;
    case JSGC_PARALLEL_MARKING_ENABLED:
      // Not supported on workers.
      parallelMarkingEnabled = rt->isMainRuntime() && value != 0;
      updateMarkersVector();
      break;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      for (auto& marker : markers) {
        marker->incrementalWeakMapMarkingEnabled = value != 0;
      }
      break;
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      setMinEmptyChunkCount(value, lock);
      break;
    case JSGC_MAX_EMPTY_CHUNK_COUNT:
      setMaxEmptyChunkCount(value, lock);
      break;
    default:
      if (IsGCThreadParameter(key)) {
        return setThreadParameter(key, value, lock);
      }

      if (!tunables.setParameter(key, value)) {
        return false;
      }
      updateAllGCStartThresholds();
  }

  return true;
}
bool GCRuntime::setThreadParameter(JSGCParamKey key, uint32_t value,
                                   AutoLockGC& lock) {
  if (rt->parentRuntime) {
    // Don't allow these to be set for worker runtimes.
    return false;
  }

  switch (key) {
    case JSGC_HELPER_THREAD_RATIO:
      helperThreadRatio = double(value) / 100.0;
      break;
    case JSGC_MAX_HELPER_THREADS:
      maxHelperThreads = value;
      break;
    case JSGC_MARKING_THREAD_COUNT:
      markingThreadCount = std::min(size_t(value), MaxParallelWorkers);
      break;
    default:
      MOZ_CRASH("Unexpected parameter key");
  }

  updateHelperThreadCount();
  updateMarkersVector();

  return true;
}
void GCRuntime::resetParameter(JSContext* cx, JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  AutoStopVerifyingBarriers pauseVerification(rt, false);

  waitBackgroundSweepEnd();

  AutoLockGC lock(this);
  resetParameter(key, lock);
}
void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
      break;
    case JSGC_INCREMENTAL_GC_ENABLED:
      setIncrementalGCEnabled(TuningDefaults::IncrementalGCEnabled);
      break;
    case JSGC_PER_ZONE_GC_ENABLED:
      perZoneGCEnabled = TuningDefaults::PerZoneGCEnabled;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = TuningDefaults::CompactingEnabled;
      break;
    case JSGC_PARALLEL_MARKING_ENABLED:
      parallelMarkingEnabled = TuningDefaults::ParallelMarkingEnabled;
      updateMarkersVector();
      break;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      for (auto& marker : markers) {
        marker->incrementalWeakMapMarkingEnabled =
            TuningDefaults::IncrementalWeakMapMarkingEnabled;
      }
      break;
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount, lock);
      break;
    case JSGC_MAX_EMPTY_CHUNK_COUNT:
      setMaxEmptyChunkCount(TuningDefaults::MaxEmptyChunkCount, lock);
      break;
    default:
      if (IsGCThreadParameter(key)) {
        resetThreadParameter(key, lock);
        return;
      }

      tunables.resetParameter(key);
      updateAllGCStartThresholds();
  }
}
void GCRuntime::resetThreadParameter(JSGCParamKey key, AutoLockGC& lock) {
  if (rt->parentRuntime) {
    return;
  }

  switch (key) {
    case JSGC_HELPER_THREAD_RATIO:
      helperThreadRatio = TuningDefaults::HelperThreadRatio;
      break;
    case JSGC_MAX_HELPER_THREADS:
      maxHelperThreads = TuningDefaults::MaxHelperThreads;
      break;
    case JSGC_MARKING_THREAD_COUNT:
      markingThreadCount = 0;
      break;
    default:
      MOZ_CRASH("Unexpected parameter key");
  }

  updateHelperThreadCount();
  updateMarkersVector();
}
uint32_t GCRuntime::getParameter(JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  AutoLockGC lock(this);
  return getParameter(key, lock);
}
uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
  switch (key) {
    case JSGC_BYTES:
      return uint32_t(heapSize.bytes());
    case JSGC_NURSERY_BYTES:
      return nursery().capacity();
    case JSGC_NUMBER:
      return uint32_t(number);
    case JSGC_MAJOR_GC_NUMBER:
      return uint32_t(majorGCNumber);
    case JSGC_MINOR_GC_NUMBER:
      return uint32_t(minorGCNumber);
    case JSGC_INCREMENTAL_GC_ENABLED:
      return incrementalGCEnabled;
    case JSGC_PER_ZONE_GC_ENABLED:
      return perZoneGCEnabled;
    case JSGC_UNUSED_CHUNKS:
      return uint32_t(emptyChunks(lock).count());
    case JSGC_TOTAL_CHUNKS:
      return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
                      emptyChunks(lock).count());
    case JSGC_SLICE_TIME_BUDGET_MS:
      MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
      MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
      return uint32_t(defaultTimeBudgetMS_);
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      return minEmptyChunkCount(lock);
    case JSGC_MAX_EMPTY_CHUNK_COUNT:
      return maxEmptyChunkCount(lock);
    case JSGC_COMPACTING_ENABLED:
      return compactingEnabled;
    case JSGC_PARALLEL_MARKING_ENABLED:
      return parallelMarkingEnabled;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      return marker().incrementalWeakMapMarkingEnabled;
    case JSGC_CHUNK_BYTES:
    case JSGC_HELPER_THREAD_RATIO:
      MOZ_ASSERT(helperThreadRatio > 0.0);
      return uint32_t(helperThreadRatio * 100.0);
    case JSGC_MAX_HELPER_THREADS:
      MOZ_ASSERT(maxHelperThreads <= UINT32_MAX);
      return maxHelperThreads;
    case JSGC_HELPER_THREAD_COUNT:
      return helperThreadCount;
    case JSGC_MARKING_THREAD_COUNT:
      return markingThreadCount;
    case JSGC_SYSTEM_PAGE_SIZE_KB:
      return SystemPageSize() / 1024;
    default:
      return tunables.getParameter(key);
  }
}
void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
  MOZ_ASSERT(!JS::RuntimeHeapIsBusy());

  maybeMarkStackLimit = limit;

  AutoUnlockGC unlock(lock);
  AutoStopVerifyingBarriers pauseVerification(rt, false);
  for (auto& marker : markers) {
    marker->setMaxCapacity(limit);
  }
}

void GCRuntime::setIncrementalGCEnabled(bool enabled) {
  incrementalGCEnabled = enabled;
}
void GCRuntime::updateHelperThreadCount() {
  if (!CanUseExtraThreads()) {
    // startTask will run the work on the main thread if the count is 1.
    MOZ_ASSERT(helperThreadCount == 1);
    return;
  }

  // Number of extra threads required during parallel marking to ensure we can
  // start the necessary marking tasks. Background free and background
  // allocation may already be running and we want to avoid these tasks
  // blocking marking. In real configurations there will be enough threads
  // that this won't affect anything.
  static constexpr size_t SpareThreadsDuringParallelMarking = 2;

  // The count of helper threads used for GC tasks is process wide. Don't set
  // it for worker JS runtimes.
  if (rt->parentRuntime) {
    helperThreadCount = rt->parentRuntime->gc.helperThreadCount;
    return;
  }

  // Calculate the target thread count for GC parallel tasks.
  double cpuCount = GetHelperThreadCPUCount();
  helperThreadCount = std::clamp(size_t(cpuCount * helperThreadRatio.ref()),
                                 size_t(1), maxHelperThreads.ref());

  // Calculate the overall target thread count taking into account the
  // separate parameter for parallel marking threads. Add spare threads to
  // avoid blocking parallel marking when there is other GC work happening.
  size_t targetCount =
      std::max(helperThreadCount.ref(),
               markingThreadCount.ref() + SpareThreadsDuringParallelMarking);

  // Attempt to create extra threads if possible. This is not supported when
  // using an external thread pool.
  AutoLockHelperThreadState lock;
  (void)HelperThreadState().ensureThreadCount(targetCount, lock);

  // Limit all thread counts based on the number of threads available, which
  // may be fewer than requested.
  size_t availableThreadCount = GetHelperThreadCount();
  MOZ_ASSERT(availableThreadCount != 0);
  targetCount = std::min(targetCount, availableThreadCount);
  helperThreadCount = std::min(helperThreadCount.ref(), availableThreadCount);
  markingThreadCount =
      std::min(markingThreadCount.ref(),
               availableThreadCount - SpareThreadsDuringParallelMarking);

  // Update the maximum number of threads that will be used for GC work.
  HelperThreadState().setGCParallelThreadCount(targetCount, lock);
}
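// Worked example of the calculation above (illustrative numbers only): with 8
// CPUs, helperThreadRatio = 0.5 and maxHelperThreads = 8, helperThreadCount
// becomes clamp(8 * 0.5, 1, 8) = 4; with markingThreadCount = 2 the overall
// targetCount is max(4, 2 + SpareThreadsDuringParallelMarking) = max(4, 4) = 4.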
size_t GCRuntime::markingWorkerCount() const {
  if (!CanUseExtraThreads() || !parallelMarkingEnabled) {
    return 1;
  }

  if (markingThreadCount) {
    return markingThreadCount;
  }

  // Limit parallel marking to use at most two threads initially.
  return 2;
}

void GCRuntime::assertNoMarkingWork() const {
  for (auto& marker : markers) {
    MOZ_ASSERT(marker->isDrained());
  }
  MOZ_ASSERT(!hasDelayedMarking());
}
bool GCRuntime::updateMarkersVector() {
  MOZ_ASSERT(helperThreadCount >= 1,
             "There must always be at least one mark task");
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  assertNoMarkingWork();

  // Limit worker count to number of GC parallel tasks that can run
  // concurrently, otherwise one thread can deadlock waiting on another.
  size_t targetCount = std::min(markingWorkerCount(),
                                HelperThreadState().getGCParallelThreadCount());

  if (markers.length() > targetCount) {
    return markers.resize(targetCount);
  }

  while (markers.length() < targetCount) {
    auto marker = MakeUnique<GCMarker>(rt);
    if (!marker) {
      return false;
    }

    if (maybeMarkStackLimit) {
      marker->setMaxCapacity(maybeMarkStackLimit);
    }

    if (!marker->init()) {
      return false;
    }

    if (!markers.emplaceBack(std::move(marker))) {
      return false;
    }
  }

  return true;
}
, void* data
) {
1375 return !!blackRootTracers
.ref().append(
1376 Callback
<JSTraceDataOp
>(traceOp
, data
));
1379 void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp
, void* data
) {
1380 // Can be called from finalizers
1381 for (size_t i
= 0; i
< blackRootTracers
.ref().length(); i
++) {
1382 Callback
<JSTraceDataOp
>* e
= &blackRootTracers
.ref()[i
];
1383 if (e
->op
== traceOp
&& e
->data
== data
) {
1384 blackRootTracers
.ref().erase(e
);
1390 void GCRuntime::setGrayRootsTracer(JSGrayRootsTracer traceOp
, void* data
) {
1392 grayRootTracer
.ref() = {traceOp
, data
};
1395 void GCRuntime::clearBlackAndGrayRootTracers() {
1396 MOZ_ASSERT(rt
->isBeingDestroyed());
1397 blackRootTracers
.ref().clear();
1398 setGrayRootsTracer(nullptr, nullptr);
void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
  gcCallback.ref() = {callback, data};
}

void GCRuntime::callGCCallback(JSGCStatus status, JS::GCReason reason) const {
  const auto& callback = gcCallback.ref();
  MOZ_ASSERT(callback.op);
  callback.op(rt->mainContextFromOwnThread(), status, reason, callback.data);
}

void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
                                          void* data) {
  tenuredCallback.ref() = {callback, data};
}

void GCRuntime::callObjectsTenuredCallback() {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = tenuredCallback.ref();
  if (callback.op) {
    callback.op(rt->mainContextFromOwnThread(), callback.data);
  }
}
bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
  return finalizeCallbacks.ref().append(
      Callback<JSFinalizeCallback>(callback, data));
}

template <typename F>
static void EraseCallback(CallbackVector<F>& vector, F callback) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback) {
      vector.erase(p);
      return;
    }
  }
}

void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
  EraseCallback(finalizeCallbacks.ref(), callback);
}

void GCRuntime::callFinalizeCallbacks(JS::GCContext* gcx,
                                      JSFinalizeStatus status) const {
  for (auto& p : finalizeCallbacks.ref()) {
    p.op(gcx, status, p.data);
  }
}
void GCRuntime::setHostCleanupFinalizationRegistryCallback(
    JSHostCleanupFinalizationRegistryCallback callback, void* data) {
  hostCleanupFinalizationRegistryCallback.ref() = {callback, data};
}

void GCRuntime::callHostCleanupFinalizationRegistryCallback(
    JSFunction* doCleanup, GlobalObject* incumbentGlobal) {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = hostCleanupFinalizationRegistryCallback.ref();
  if (callback.op) {
    callback.op(doCleanup, incumbentGlobal, callback.data);
  }
}
bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
                                            void* data) {
  return updateWeakPointerZonesCallbacks.ref().append(
      Callback<JSWeakPointerZonesCallback>(callback, data));
}

void GCRuntime::removeWeakPointerZonesCallback(
    JSWeakPointerZonesCallback callback) {
  EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback);
}

void GCRuntime::callWeakPointerZonesCallbacks(JSTracer* trc) const {
  for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
    p.op(trc, p.data);
  }
}

bool GCRuntime::addWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback, void* data) {
  return updateWeakPointerCompartmentCallbacks.ref().append(
      Callback<JSWeakPointerCompartmentCallback>(callback, data));
}

void GCRuntime::removeWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback) {
  EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback);
}

void GCRuntime::callWeakPointerCompartmentCallbacks(
    JSTracer* trc, JS::Compartment* comp) const {
  for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
    p.op(trc, comp, p.data);
  }
}
JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
  return stats().setSliceCallback(callback);
}

JS::GCNurseryCollectionCallback GCRuntime::setNurseryCollectionCallback(
    JS::GCNurseryCollectionCallback callback) {
  return stats().setNurseryCollectionCallback(callback);
}

JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
    JS::DoCycleCollectionCallback callback) {
  const auto prior = gcDoCycleCollectionCallback.ref();
  gcDoCycleCollectionCallback.ref() = {callback, nullptr};
  return prior.op;
}

void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
  const auto& callback = gcDoCycleCollectionCallback.ref();
bool GCRuntime::addRoot(Value* vp, const char* name) {
  /*
   * Sometimes Firefox will hold weak references to objects and then convert
   * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
   * or ModifyBusyCount in workers). We need a read barrier to cover these
   * cases.
   */
  Value value = *vp;
  if (value.isGCThing()) {
    ValuePreWriteBarrier(value);
  }

  return rootsHash.ref().put(vp, name);
}

void GCRuntime::removeRoot(Value* vp) {
  rootsHash.ref().remove(vp);
  notifyRootsRemoved();
}
bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
                                  const TimeStamp& currentTime) {
  // Assume that we're currently animating if js::NotifyAnimationActivity has
  // been called in the last second.
  static const auto oneSecond = TimeDuration::FromSeconds(1);
  return !lastAnimationTime.IsNull() &&
         currentTime < (lastAnimationTime + oneSecond);
}

static bool DiscardedCodeRecently(Zone* zone, const TimeStamp& currentTime) {
  static const auto thirtySeconds = TimeDuration::FromSeconds(30);
  return !zone->lastDiscardedCodeTime().IsNull() &&
         currentTime < (zone->lastDiscardedCodeTime() + thirtySeconds);
}
bool GCRuntime::shouldCompact() {
  // Compact on shrinking GC if enabled. Skip compacting in incremental GCs
  // if we are currently animating, unless the user is inactive or we're
  // responding to memory pressure.

  if (!isShrinkingGC() || !isCompactingGCEnabled()) {
    return false;
  }

  if (initialReason == JS::GCReason::USER_INACTIVE ||
      initialReason == JS::GCReason::MEM_PRESSURE) {
    return true;
  }

  return !isIncremental ||
         !IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
}

bool GCRuntime::isCompactingGCEnabled() const {
  return compactingEnabled &&
         rt->mainContextFromOwnThread()->compactingDisabledCount == 0;
}
JS_PUBLIC_API void JS::SetCreateGCSliceBudgetCallback(
    JSContext* cx, JS::CreateSliceBudgetCallback cb) {
  cx->runtime()->gc.createBudgetCallback = cb;
}

void TimeBudget::setDeadlineFromNow() { deadline = TimeStamp::Now() + budget; }
SliceBudget::SliceBudget(TimeBudget time, InterruptRequestFlag* interrupt)
    : budget(TimeBudget(time)),
      interruptRequested(interrupt),
      counter(StepsPerExpensiveCheck) {
  budget.as<TimeBudget>().setDeadlineFromNow();
}

SliceBudget::SliceBudget(WorkBudget work)
    : budget(work), interruptRequested(nullptr), counter(work.budget) {}

int SliceBudget::describe(char* buffer, size_t maxlen) const {
  if (isUnlimited()) {
    return snprintf(buffer, maxlen, "unlimited");
  } else if (isWorkBudget()) {
    return snprintf(buffer, maxlen, "work(%" PRId64 ")", workBudget());
  } else {
    const char* interruptStr = "";
    if (interruptRequested) {
      interruptStr = interrupted ? "INTERRUPTED " : "interruptible ";
    }
    const char* extra = "";
    if (idle) {
      extra = extended ? " (started idle but extended)" : " (idle)";
    }
    return snprintf(buffer, maxlen, "%s%" PRId64 "ms%s", interruptStr,
                    timeBudget(), extra);
  }
}
bool SliceBudget::checkOverBudget() {
  MOZ_ASSERT(counter <= 0);
  MOZ_ASSERT(!isUnlimited());

  if (isWorkBudget()) {
    return true;
  }

  if (interruptRequested && *interruptRequested) {
    *interruptRequested = false;
    interrupted = true;
  }

  if (TimeStamp::Now() >= budget.as<TimeBudget>().deadline) {
    return true;
  }

  counter = StepsPerExpensiveCheck;
  return false;
}
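// Illustrative use of a SliceBudget by a hypothetical caller (not code from
// this file): bound a chunk of incremental work to roughly 10 ms.
//
//   SliceBudget budget(TimeBudget(10));
//   while (haveMoreWork()) {
//     doSomeWork();
//     budget.step();
//     if (budget.isOverBudget()) {
//       break;  // yield back to the mutator; resume in the next slice
//     }
//   }
//
// |haveMoreWork| and |doSomeWork| are placeholders for whatever incremental
// task is being budgeted.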
void GCRuntime::requestMajorGC(JS::GCReason reason) {
  MOZ_ASSERT_IF(reason != JS::GCReason::BG_TASK_FINISHED,
                !CurrentThreadIsPerformingGC());

  if (majorGCRequested()) {
    return;
  }

  majorGCTriggerReason = reason;
  rt->mainContextFromAnyThread()->requestInterrupt(InterruptReason::GC);
}

void Nursery::requestMinorGC(JS::GCReason reason) const {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));

  if (minorGCRequested()) {
    return;
  }

  minorGCTriggerReason_ = reason;
  runtime()->mainContextFromOwnThread()->requestInterrupt(InterruptReason::GC);
}
bool GCRuntime::triggerGC(JS::GCReason reason) {
  /*
   * Don't trigger GCs if this is being called off the main thread from
   * onTooMuchMalloc().
   */
  if (!CurrentThreadCanAccessRuntime(rt)) {
    return false;
  }

  /* GC is already running. */
  if (JS::RuntimeHeapIsCollecting()) {
    return false;
  }

  JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  requestMajorGC(reason);
  return true;
}
void GCRuntime::maybeTriggerGCAfterAlloc(Zone* zone) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());

  TriggerResult trigger =
      checkHeapThreshold(zone, zone->gcHeapSize, zone->gcHeapThreshold);

  if (trigger.shouldTrigger) {
    // Start or continue an in progress incremental GC. We do this to try to
    // avoid performing non-incremental GCs on zones which allocate a lot of
    // data, even when incremental slices can't be triggered via scheduling in
    // the event loop.
    triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, trigger.usedBytes,
                  trigger.thresholdBytes);
  }
}
void js::gc::MaybeMallocTriggerZoneGC(JSRuntime* rt, ZoneAllocator* zoneAlloc,
                                      const HeapSize& heap,
                                      const HeapThreshold& threshold,
                                      JS::GCReason reason) {
  rt->gc.maybeTriggerGCAfterMalloc(Zone::from(zoneAlloc), heap, threshold,
                                   reason);
}

void GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone) {
  if (maybeTriggerGCAfterMalloc(zone, zone->mallocHeapSize,
                                zone->mallocHeapThreshold,
                                JS::GCReason::TOO_MUCH_MALLOC)) {
    return;
  }

  maybeTriggerGCAfterMalloc(zone, zone->jitHeapSize, zone->jitHeapThreshold,
                            JS::GCReason::TOO_MUCH_JIT_CODE);
}

bool GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone, const HeapSize& heap,
                                          const HeapThreshold& threshold,
                                          JS::GCReason reason) {
  // Ignore malloc during sweeping, for example when we resize hash tables.
  if (heapState() != JS::HeapState::Idle) {
    return false;
  }

  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  TriggerResult trigger = checkHeapThreshold(zone, heap, threshold);
  if (!trigger.shouldTrigger) {
    return false;
  }

  // Trigger a zone GC. budgetIncrementalGC() will work out whether to do an
  // incremental or non-incremental collection.
  triggerZoneGC(zone, reason, trigger.usedBytes, trigger.thresholdBytes);
  return true;
}
TriggerResult GCRuntime::checkHeapThreshold(
    Zone* zone, const HeapSize& heapSize, const HeapThreshold& heapThreshold) {
  MOZ_ASSERT_IF(heapThreshold.hasSliceThreshold(), zone->wasGCStarted());

  size_t usedBytes = heapSize.bytes();
  size_t thresholdBytes = heapThreshold.hasSliceThreshold()
                              ? heapThreshold.sliceBytes()
                              : heapThreshold.startBytes();

  // The incremental limit will be checked if we trigger a GC slice.
  MOZ_ASSERT(thresholdBytes <= heapThreshold.incrementalLimitBytes());

  return TriggerResult{usedBytes >= thresholdBytes, usedBytes, thresholdBytes};
}
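// Worked example (illustrative numbers): if a zone's GC heap uses 120 MB and
// its start threshold is 100 MB with no slice threshold set, the result is
// TriggerResult{true, 120 MB, 100 MB} and the caller goes on to request an
// ALLOC_TRIGGER zone GC; at 80 MB used, shouldTrigger would be false and
// nothing is triggered.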
bool GCRuntime::triggerZoneGC(Zone* zone, JS::GCReason reason, size_t used,
                              size_t threshold) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  /* GC is already running. */
  if (JS::RuntimeHeapIsBusy()) {
    return false;
  }

  if (hasZealMode(ZealMode::Alloc)) {
    MOZ_RELEASE_ASSERT(triggerGC(reason));
    return true;
  }

  if (zone->isAtomsZone()) {
    stats().recordTrigger(used, threshold);
    MOZ_RELEASE_ASSERT(triggerGC(reason));
    return true;
  }

  stats().recordTrigger(used, threshold);
  requestMajorGC(reason);
  return true;
}
void GCRuntime::maybeGC() {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
    return;
  }

  (void)gcIfRequestedImpl(/* eagerOk = */ true);
}
JS::GCReason GCRuntime::wantMajorGC(bool eagerOk) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  if (majorGCRequested()) {
    return majorGCTriggerReason;
  }

  if (isIncrementalGCInProgress() || !eagerOk) {
    return JS::GCReason::NO_REASON;
  }

  JS::GCReason reason = JS::GCReason::NO_REASON;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (checkEagerAllocTrigger(zone->gcHeapSize, zone->gcHeapThreshold) ||
        checkEagerAllocTrigger(zone->mallocHeapSize,
                               zone->mallocHeapThreshold)) {
      reason = JS::GCReason::EAGER_ALLOC_TRIGGER;
    }
  }

  return reason;
}
bool GCRuntime::checkEagerAllocTrigger(const HeapSize& size,
                                       const HeapThreshold& threshold) {
  double thresholdBytes =
      threshold.eagerAllocTrigger(schedulingState.inHighFrequencyGCMode());
  double usedBytes = size.bytes();
  if (usedBytes <= 1024 * 1024 || usedBytes < thresholdBytes) {
    return false;
  }

  stats().recordTrigger(usedBytes, thresholdBytes);
  return true;
}
bool GCRuntime::shouldDecommit() const {
  // If we're doing a shrinking GC we always decommit to release as much memory
  // as possible.
  if (cleanUpEverything) {
    return true;
  }

  // If we are allocating heavily enough to trigger "high frequency" GC then
  // skip decommit so that we do not compete with the mutator.
  return !schedulingState.inHighFrequencyGCMode();
}
void GCRuntime::startDecommit() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::DECOMMIT);

  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  MOZ_ASSERT(decommitTask.isIdle());

  {
    AutoLockGC lock(this);
    MOZ_ASSERT(fullChunks(lock).verify());
    MOZ_ASSERT(availableChunks(lock).verify());
    MOZ_ASSERT(emptyChunks(lock).verify());

    // Verify that all entries in the empty chunks pool are unused.
    for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done();
         chunk.next()) {
      MOZ_ASSERT(chunk->unused());
    }
  }

  if (!shouldDecommit()) {
    return;
  }

  {
    AutoLockGC lock(this);
    if (availableChunks(lock).empty() && !tooManyEmptyChunks(lock) &&
        emptyChunks(lock).empty()) {
      return;  // Nothing to do.
    }
  }

  {
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  }

  if (useBackgroundThreads) {
    decommitTask.start();
    return;
  }

  decommitTask.runFromMainThread();
}
BackgroundDecommitTask::BackgroundDecommitTask(GCRuntime* gc)
    : GCParallelTask(gc, gcstats::PhaseKind::DECOMMIT) {}

void js::gc::BackgroundDecommitTask::run(AutoLockHelperThreadState& lock) {
  {
    AutoUnlockHelperThreadState unlock(lock);

    ChunkPool emptyChunksToFree;
    {
      AutoLockGC gcLock(gc);
      emptyChunksToFree = gc->expireEmptyChunkPool(gcLock);
    }

    FreeChunkPool(emptyChunksToFree);

    {
      AutoLockGC gcLock(gc);

      // To help minimize the total number of chunks needed over time, sort the
      // available chunks list so that we allocate into more-used chunks first.
      gc->availableChunks(gcLock).sort();

      if (DecommitEnabled()) {
        gc->decommitEmptyChunks(cancel_, gcLock);
        gc->decommitFreeArenas(cancel_, gcLock);
      }
    }
  }

  gc->maybeRequestGCAfterBackgroundTask(lock);
}
static inline bool CanDecommitWholeChunk(TenuredChunk* chunk) {
  return chunk->unused() && chunk->info.numArenasFreeCommitted != 0;
}

// Called from a background thread to decommit free arenas. Releases the GC
// lock.
void GCRuntime::decommitEmptyChunks(const bool& cancel, AutoLockGC& lock) {
  Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
  for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next()) {
    if (CanDecommitWholeChunk(chunk) && !chunksToDecommit.append(chunk)) {
      onOutOfMallocMemory(lock);
      return;
    }
  }

  for (TenuredChunk* chunk : chunksToDecommit) {
    if (cancel) {
      break;
    }

    // Check whether something used the chunk while lock was released.
    if (!CanDecommitWholeChunk(chunk)) {
      continue;
    }

    // Temporarily remove the chunk while decommitting its memory so that the
    // mutator doesn't start allocating from it when we drop the lock.
    emptyChunks(lock).remove(chunk);

    {
      AutoUnlockGC unlock(lock);
      chunk->decommitAllArenas();
      MOZ_ASSERT(chunk->info.numArenasFreeCommitted == 0);
    }

    emptyChunks(lock).push(chunk);
  }
}
// Called from a background thread to decommit free arenas. Releases the GC
// lock.
void GCRuntime::decommitFreeArenas(const bool& cancel, AutoLockGC& lock) {
  MOZ_ASSERT(DecommitEnabled());

  // Since we release the GC lock while doing the decommit syscall below,
  // it is dangerous to iterate the available list directly, as the active
  // thread could modify it concurrently. Instead, we build and pass an
  // explicit Vector containing the Chunks we want to visit.
  Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
  for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
       chunk.next()) {
    if (chunk->info.numArenasFreeCommitted != 0 &&
        !chunksToDecommit.append(chunk)) {
      onOutOfMallocMemory(lock);
      return;
    }
  }

  for (TenuredChunk* chunk : chunksToDecommit) {
    chunk->decommitFreeArenas(this, cancel, lock);
  }
}
1992 // Do all possible decommit immediately from the current thread without
1993 // releasing the GC lock or allocating any memory.
1994 void GCRuntime::decommitFreeArenasWithoutUnlocking(const AutoLockGC
& lock
) {
1995 MOZ_ASSERT(DecommitEnabled());
1996 for (ChunkPool::Iter
chunk(availableChunks(lock
)); !chunk
.done();
1998 chunk
->decommitFreeArenasWithoutUnlocking(lock
);
2000 MOZ_ASSERT(availableChunks(lock
).verify());
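// The two entry points above trade safety for latency in opposite ways:
// decommitFreeArenas() may release the GC lock around each chunk and honours
// the |cancel| flag, so it suits the background task, while
// decommitFreeArenasWithoutUnlocking() holds the lock throughout and performs
// no allocation, so it can be used where dropping the lock or allocating
// would be unsafe.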
void GCRuntime::maybeRequestGCAfterBackgroundTask(
    const AutoLockHelperThreadState& lock) {
  if (requestSliceAfterBackgroundTask) {
    // Trigger a slice so the main thread can continue the collection
    // immediately.
    requestSliceAfterBackgroundTask = false;
    requestMajorGC(JS::GCReason::BG_TASK_FINISHED);
  }
}

void GCRuntime::cancelRequestedGCAfterBackgroundTask() {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  {
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  }

  majorGCTriggerReason.compareExchange(JS::GCReason::BG_TASK_FINISHED,
                                       JS::GCReason::NO_REASON);
}

bool GCRuntime::isWaitingOnBackgroundTask() const {
  AutoLockHelperThreadState lock;
  return requestSliceAfterBackgroundTask;
}

void GCRuntime::queueUnusedLifoBlocksForFree(LifoAlloc* lifo) {
  MOZ_ASSERT(JS::RuntimeHeapIsBusy());
  AutoLockHelperThreadState lock;
  lifoBlocksToFree.ref().transferUnusedFrom(lifo);
}

void GCRuntime::queueAllLifoBlocksForFreeAfterMinorGC(LifoAlloc* lifo) {
  lifoBlocksToFreeAfterMinorGC.ref().transferFrom(lifo);
}

void GCRuntime::queueBuffersForFreeAfterMinorGC(Nursery::BufferSet& buffers) {
  AutoLockHelperThreadState lock;

  if (!buffersToFreeAfterMinorGC.ref().empty()) {
    // In the rare case that this hasn't processed the buffers from a previous
    // minor GC we have to wait here.
    MOZ_ASSERT(!freeTask.isIdle(lock));
    freeTask.joinWithLockHeld(lock);
  }

  MOZ_ASSERT(buffersToFreeAfterMinorGC.ref().empty());
  std::swap(buffersToFreeAfterMinorGC.ref(), buffers);
}
void Realm::destroy(JS::GCContext* gcx) {
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyRealmCallback) {
    callback(gcx, this);
  }
  if (principals()) {
    JS_DropPrincipals(rt->mainContextFromOwnThread(), principals());
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
}

void Compartment::destroy(JS::GCContext* gcx) {
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyCompartmentCallback) {
    callback(gcx, this);
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
  rt->gc.stats().sweptCompartment();
}

void Zone::destroy(JS::GCContext* gcx) {
  MOZ_ASSERT(compartments().empty());
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyZoneCallback) {
    callback(gcx, this);
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
  gcx->runtime()->gc.stats().sweptZone();
}
/*
 * It's simpler if we preserve the invariant that every zone (except atoms
 * zones) has at least one compartment, and every compartment has at least one
 * realm. If we know we're deleting the entire zone, then sweepCompartments is
 * allowed to delete all compartments. In this case, |keepAtleastOne| is false.
 * If any cells remain alive in the zone, set |keepAtleastOne| true to prohibit
 * sweepCompartments from deleting every compartment. Instead, it preserves an
 * arbitrary compartment in the zone.
 */
void Zone::sweepCompartments(JS::GCContext* gcx, bool keepAtleastOne,
                             bool destroyingRuntime) {
  MOZ_ASSERT_IF(!isAtomsZone(), !compartments().empty());
  MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

  Compartment** read = compartments().begin();
  Compartment** end = compartments().end();
  Compartment** write = read;
  while (read < end) {
    Compartment* comp = *read++;

    /*
     * Don't delete the last compartment and realm if keepAtleastOne is
     * still true, meaning all the other compartments were deleted.
     */
    bool keepAtleastOneRealm = read == end && keepAtleastOne;
    comp->sweepRealms(gcx, keepAtleastOneRealm, destroyingRuntime);

    if (!comp->realms().empty()) {
      *write++ = comp;
      keepAtleastOne = false;
    } else {
      comp->destroy(gcx);
    }
  }
  compartments().shrinkTo(write - compartments().begin());
  MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
  MOZ_ASSERT_IF(destroyingRuntime, compartments().empty());
}
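// The loop above uses the usual in-place filtering idiom: |read| walks every
// element, |write| trails behind it, and only survivors are copied forward
// before the container is shrunk. A minimal sketch of the same pattern on a
// plain vector (illustrative only, not code used by the GC):
//
//   T** read = vec.begin();
//   T** end = vec.end();
//   T** write = read;
//   while (read < end) {
//     T* item = *read++;
//     if (shouldKeep(item)) {
//       *write++ = item;
//     } else {
//       destroy(item);
//     }
//   }
//   vec.shrinkTo(write - vec.begin());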
void Compartment::sweepRealms(JS::GCContext* gcx, bool keepAtleastOne,
                              bool destroyingRuntime) {
  MOZ_ASSERT(!realms().empty());
  MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

  Realm** read = realms().begin();
  Realm** end = realms().end();
  Realm** write = read;
  while (read < end) {
    Realm* realm = *read++;

    /*
     * Don't delete the last realm if keepAtleastOne is still true, meaning
     * all the other realms were deleted.
     */
    bool dontDelete = read == end && keepAtleastOne;
    if ((realm->marked() || dontDelete) && !destroyingRuntime) {
      *write++ = realm;
      keepAtleastOne = false;
    } else {
      realm->destroy(gcx);
    }
  }
  realms().shrinkTo(write - realms().begin());
  MOZ_ASSERT_IF(keepAtleastOne, !realms().empty());
  MOZ_ASSERT_IF(destroyingRuntime, realms().empty());
}
void GCRuntime::sweepZones(JS::GCContext* gcx, bool destroyingRuntime) {
  MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);

  if (numActiveZoneIters) {
    return;
  }

  assertBackgroundSweepingFinished();

  // Sweep zones following the atoms zone.
  MOZ_ASSERT(zones()[0]->isAtomsZone());
  Zone** read = zones().begin() + 1;
  Zone** end = zones().end();
  Zone** write = read;

  while (read < end) {
    Zone* zone = *read++;

    if (zone->wasGCStarted()) {
      MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
      AutoSetThreadIsSweeping threadIsSweeping(zone);
      const bool zoneIsDead =
          zone->arenas.arenaListsAreEmpty() && !zone->hasMarkedRealms();
      MOZ_ASSERT_IF(destroyingRuntime, zoneIsDead);
      if (zoneIsDead) {
        zone->arenas.checkEmptyFreeLists();
        zone->sweepCompartments(gcx, false, destroyingRuntime);
        MOZ_ASSERT(zone->compartments().empty());
        zone->destroy(gcx);
        continue;
      }
      zone->sweepCompartments(gcx, true, destroyingRuntime);
    }
    *write++ = zone;
  }
  zones().shrinkTo(write - zones().begin());
}

void ArenaLists::checkEmptyArenaList(AllocKind kind) {
  MOZ_ASSERT(arenaList(kind).isEmpty());
}
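// sweepZones() applies the same read/write filtering idiom to the zones list
// itself. Note the ordering it relies on: the atoms zone is always zones()[0]
// and is never destroyed here, and a zone is only destroyed once its arena
// lists are empty and it has no marked realms, which is why background
// sweeping must have finished before this runs.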
void GCRuntime::purgeRuntimeForMinorGC() {
  // If external strings become nursery allocable, remember to call
  // zone->externalStringCache().purge() (and delete this assert.)
  MOZ_ASSERT(!IsNurseryAllocable(AllocKind::EXTERNAL_STRING));

  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    zone->functionToStringCache().purge();
  }
}

void GCRuntime::purgeRuntime() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);

  for (GCRealmsIter realm(rt); !realm.done(); realm.next()) {
    realm->purge();
  }

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->purgeAtomCache();
    zone->externalStringCache().purge();
    zone->functionToStringCache().purge();
    zone->boundPrefixCache().clearAndCompact();
    zone->shapeZone().purgeShapeCaches(rt->gcContext());
  }

  JSContext* cx = rt->mainContextFromOwnThread();
  queueUnusedLifoBlocksForFree(&cx->tempLifoAlloc());
  cx->interpreterStack().purge(rt);
  cx->frontendCollectionPool().purge();

  rt->caches().purge();

  if (rt->isMainRuntime()) {
    SharedImmutableStringsCache::getSingleton().purge();
  }

  MOZ_ASSERT(marker().unmarkGrayStack.empty());
  marker().unmarkGrayStack.clearAndFree();

  // If we're the main runtime, tell helper threads to free their unused
  // memory when they are next idle.
  if (!rt->parentRuntime) {
    HelperThreadState().triggerFreeUnusedMemory();
  }
}
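// purgeRuntime() runs before any marking happens; see the comment about the
// snapshot invariant in endPreparePhase() below. Everything dropped here is a
// cache (atom caches, string caches, shape caches, unused LifoAlloc blocks
// and so on) that can be rebuilt on demand, so purging only affects memory
// use, not behaviour.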
bool GCRuntime::shouldPreserveJITCode(Realm* realm,
                                      const TimeStamp& currentTime,
                                      JS::GCReason reason,
                                      bool canAllocateMoreCode,
                                      bool isActiveCompartment) {
  if (cleanUpEverything) {
    return false;
  }
  if (!canAllocateMoreCode) {
    return false;
  }

  if (isActiveCompartment) {
    return true;
  }
  if (alwaysPreserveCode) {
    return true;
  }
  if (realm->preserveJitCode()) {
    return true;
  }
  if (IsCurrentlyAnimating(realm->lastAnimationTime, currentTime) &&
      DiscardedCodeRecently(realm->zone(), currentTime)) {
    return true;
  }
  if (reason == JS::GCReason::DEBUG_GC) {
    return true;
  }

  return false;
}
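// Roughly, the checks above are ordered from "never preserve" to "preserve":
// a clean-up-everything GC or executable-memory pressure always discards
// code, while the currently active compartment, an explicit
// alwaysPreserveCode or per-realm opt-in, a realm that is animating and
// recently discarded code, or a DEBUG_GC all keep JIT code alive for this
// collection.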
class CompartmentCheckTracer final : public JS::CallbackTracer {
  void onChild(JS::GCCellPtr thing, const char* name) override;
  bool edgeIsInCrossCompartmentMap(JS::GCCellPtr dst);

 public:
  explicit CompartmentCheckTracer(JSRuntime* rt)
      : JS::CallbackTracer(rt, JS::TracerKind::CompartmentCheck,
                           JS::WeakEdgeTraceAction::Skip),
        src(nullptr),
        zone(nullptr),
        compartment(nullptr) {}

  Cell* src;
  JS::TraceKind srcKind;
  Zone* zone;
  Compartment* compartment;
};

static bool InCrossCompartmentMap(JSRuntime* rt, JSObject* src,
                                  JS::GCCellPtr dst) {
  // Cross compartment edges are either in the cross compartment map or in a
  // debugger weakmap.

  Compartment* srccomp = src->compartment();

  if (dst.is<JSObject>()) {
    if (ObjectWrapperMap::Ptr p =
            srccomp->lookupWrapper(&dst.as<JSObject>())) {
      if (*p->value().unsafeGet() == src) {
        return true;
      }
    }
  }

  if (DebugAPI::edgeIsInDebuggerWeakmap(rt, src, dst)) {
    return true;
  }

  return false;
}

void CompartmentCheckTracer::onChild(JS::GCCellPtr thing, const char* name) {
  Compartment* comp =
      MapGCThingTyped(thing, [](auto t) { return t->maybeCompartment(); });
  if (comp && compartment) {
    MOZ_ASSERT(comp == compartment || edgeIsInCrossCompartmentMap(thing));
  } else {
    TenuredCell* tenured = &thing.asCell()->asTenured();
    Zone* thingZone = tenured->zoneFromAnyThread();
    MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
  }
}

bool CompartmentCheckTracer::edgeIsInCrossCompartmentMap(JS::GCCellPtr dst) {
  return srcKind == JS::TraceKind::Object &&
         InCrossCompartmentMap(runtime(), static_cast<JSObject*>(src), dst);
}
static bool IsPartiallyInitializedObject(Cell* cell) {
  if (!cell->is<JSObject>()) {
    return false;
  }

  JSObject* obj = cell->as<JSObject>();
  if (!obj->is<NativeObject>()) {
    return false;
  }

  NativeObject* nobj = &obj->as<NativeObject>();

  // Check for failed allocation of dynamic slots in
  // NativeObject::allocateInitialSlots.
  size_t nDynamicSlots = NativeObject::calculateDynamicSlots(
      nobj->numFixedSlots(), nobj->slotSpan(), nobj->getClass());
  return nDynamicSlots != 0 && !nobj->hasDynamicSlots();
}
void GCRuntime::checkForCompartmentMismatches() {
  JSContext* cx = rt->mainContextFromOwnThread();
  if (cx->disableStrictProxyCheckingCount) {
    return;
  }

  CompartmentCheckTracer trc(rt);
  AutoAssertEmptyNursery empty(cx);
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    trc.zone = zone;
    for (auto thingKind : AllAllocKinds()) {
      for (auto i = zone->cellIterUnsafe<TenuredCell>(thingKind, empty);
           !i.done(); i.next()) {
        // We may encounter partially initialized objects. These are
        // unreachable and it's safe to ignore them.
        if (IsPartiallyInitializedObject(i.getCell())) {
          continue;
        }

        trc.src = i.getCell();
        trc.srcKind = MapAllocToTraceKind(thingKind);
        trc.compartment = MapGCThingTyped(
            trc.src, trc.srcKind,
            [](auto t) { return t->maybeCompartment(); });
        JS::TraceChildren(&trc, JS::GCCellPtr(trc.src, trc.srcKind));
      }
    }
  }
}
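// This check is debug-only sanity tracing: every tenured cell in every
// non-atoms zone is traced with CompartmentCheckTracer, and each edge must
// either stay within the source compartment, go through the cross-compartment
// wrapper map, live in a debugger weakmap, or point into the atoms zone.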
static bool ShouldCleanUpEverything(JS::GCOptions options) {
  // During shutdown, we must clean everything up, for the sake of leak
  // detection. When a runtime has no contexts, or we're doing a GC before a
  // shutdown CC, those are strong indications that we're shutting down.
  return options == JS::GCOptions::Shutdown ||
         options == JS::GCOptions::Shrink;
}

static bool ShouldUseBackgroundThreads(bool isIncremental,
                                       JS::GCReason reason) {
  bool shouldUse = isIncremental && CanUseExtraThreads();
  MOZ_ASSERT_IF(reason == JS::GCReason::DESTROY_RUNTIME, !shouldUse);
  return shouldUse;
}
void GCRuntime::startCollection(JS::GCReason reason) {
  checkGCStateNotInUse();
  MOZ_ASSERT_IF(
      isShuttingDown(),
      isShutdownGC() ||
          reason == JS::GCReason::XPCONNECT_SHUTDOWN /* Bug 1650075 */);

  initialReason = reason;
  cleanUpEverything = ShouldCleanUpEverything(gcOptions());
  isCompacting = shouldCompact();
  rootsRemoved = false;
  sweepGroupIndex = 0;
  lastGCStartTime_ = TimeStamp::Now();

  if (isShutdownGC()) {
    hadShutdownGC = true;
  }

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zone->gcSweepGroupIndex = 0;
  }
}
static void RelazifyFunctions(Zone* zone, AllocKind kind) {
  MOZ_ASSERT(kind == AllocKind::FUNCTION ||
             kind == AllocKind::FUNCTION_EXTENDED);

  JSRuntime* rt = zone->runtimeFromMainThread();
  AutoAssertEmptyNursery empty(rt->mainContextFromOwnThread());

  for (auto i = zone->cellIterUnsafe<JSObject>(kind, empty); !i.done();
       i.next()) {
    JSFunction* fun = &i->as<JSFunction>();
    // When iterating over the GC-heap, we may encounter function objects that
    // are incomplete (missing a BaseScript when we expect one). We must check
    // for this case before we can call JSFunction::hasBytecode().
    if (fun->isIncomplete()) {
      continue;
    }
    if (fun->hasBytecode()) {
      fun->maybeRelazify(rt);
    }
  }
}
static bool ShouldCollectZone(Zone* zone, JS::GCReason reason) {
  // If we are repeating a GC because we noticed dead compartments haven't
  // been collected, then only collect zones containing those compartments.
  if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      if (comp->gcState.scheduledForDestruction) {
        return true;
      }
    }

    return false;
  }

  // Otherwise we only collect scheduled zones.
  return zone->isGCScheduled();
}
bool GCRuntime::prepareZonesForCollection(JS::GCReason reason,
                                          bool* isFullOut) {
#ifdef DEBUG
  /* Assert that zone state is as we expect */
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    MOZ_ASSERT(!zone->isCollecting());
    MOZ_ASSERT_IF(!zone->isAtomsZone(), !zone->compartments().empty());
    for (auto i : AllAllocKinds()) {
      MOZ_ASSERT(zone->arenas.collectingArenaList(i).isEmpty());
    }
  }
#endif

  *isFullOut = true;
  bool any = false;

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    /* Set up which zones will be collected. */
    bool shouldCollect = ShouldCollectZone(zone, reason);
    if (shouldCollect) {
      any = true;
      zone->changeGCState(Zone::NoGC, Zone::Prepare);
    } else {
      *isFullOut = false;
    }

    zone->setWasCollected(shouldCollect);
  }

  /* Check that at least one zone is scheduled for collection. */
  return any;
}
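// For example, if only some zones are scheduled (zone->isGCScheduled()), only
// those move from NoGC to Prepare and the collection is recorded as not full;
// if nothing is scheduled the caller skips the collection entirely, which is
// why the return value is checked in beginPreparePhase() below.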
void GCRuntime::discardJITCodeForGC() {
  size_t nurserySiteResetCount = 0;
  size_t pretenuredSiteResetCount = 0;

  js::CancelOffThreadIonCompile(rt, JS::Zone::Prepare);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);

    // We may need to reset allocation sites and discard JIT code to recover
    // if we find object lifetimes have changed.
    PretenuringZone& pz = zone->pretenuring;
    bool resetNurserySites = pz.shouldResetNurseryAllocSites();
    bool resetPretenuredSites = pz.shouldResetPretenuredAllocSites();

    if (!zone->isPreservingCode()) {
      Zone::DiscardOptions options;
      options.discardBaselineCode = true;
      options.discardJitScripts = true;
      options.resetNurseryAllocSites = resetNurserySites;
      options.resetPretenuredAllocSites = resetPretenuredSites;
      zone->discardJitCode(rt->gcContext(), options);
    } else if (resetNurserySites || resetPretenuredSites) {
      zone->resetAllocSitesAndInvalidate(resetNurserySites,
                                         resetPretenuredSites);
    }

    if (resetNurserySites) {
      nurserySiteResetCount++;
    }
    if (resetPretenuredSites) {
      pretenuredSiteResetCount++;
    }
  }

  if (nursery().reportPretenuring()) {
    if (nurserySiteResetCount) {
      fprintf(
          stderr,
          "GC reset nursery alloc sites and invalidated code in %zu zones\n",
          nurserySiteResetCount);
    }
    if (pretenuredSiteResetCount) {
      fprintf(
          stderr,
          "GC reset pretenured alloc sites and invalidated code in %zu zones\n",
          pretenuredSiteResetCount);
    }
  }
}
void GCRuntime::relazifyFunctionsForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    RelazifyFunctions(zone, AllocKind::FUNCTION);
    RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
  }
}

void GCRuntime::purgePropMapTablesForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_PROP_MAP_TABLES);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (!canRelocateZone(zone) || zone->keepPropMapTables()) {
      continue;
    }

    // Note: CompactPropMaps never have a table.
    for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
         map.next()) {
      if (map->asLinked()->hasTable()) {
        map->asLinked()->purgeTable(rt->gcContext());
      }
    }
    for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
         map.next()) {
      if (map->asLinked()->hasTable()) {
        map->asLinked()->purgeTable(rt->gcContext());
      }
    }
  }
}

// The debugger keeps track of the URLs for the sources of each realm's
// scripts. These URLs are purged on shrinking GCs.
void GCRuntime::purgeSourceURLsForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_SOURCE_URLS);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    // URLs are not tracked for realms in the system zone.
    if (!canRelocateZone(zone) || zone->isSystemZone()) {
      continue;
    }
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
        GlobalObject* global = realm.get()->unsafeUnbarrieredMaybeGlobal();
        if (global) {
          global->clearSourceURLSHolder();
        }
      }
    }
  }
}
void GCRuntime::unmarkWeakMaps() {
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    /* Unmark all weak maps in the zones being collected. */
    WeakMapBase::unmarkZone(zone);
  }
}

bool GCRuntime::beginPreparePhase(JS::GCReason reason, AutoGCSession& session) {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);

  if (!prepareZonesForCollection(reason, &isFull.ref())) {
    return false;
  }

  /*
   * Start a parallel task to clear all mark state for the zones we are
   * collecting. This is linear in the size of the heap we are collecting and
   * so can be slow. This usually happens concurrently with the mutator and GC
   * proper does not start until this is complete.
   */
  unmarkTask.initZones();
  if (useBackgroundThreads) {
    unmarkTask.start();
  } else {
    unmarkTask.runFromMainThread();
  }

  /*
   * Process any queued source compressions during the start of a major
   * GC.
   *
   * Bug 1650075: When we start passing GCOptions::Shutdown for
   * GCReason::XPCONNECT_SHUTDOWN GCs we can remove the extra check.
   */
  if (!isShutdownGC() && reason != JS::GCReason::XPCONNECT_SHUTDOWN) {
    StartHandlingCompressionsOnGC(rt);
  }

  return true;
}
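// The unmark work started here runs concurrently with the mutator: the
// Prepare state in incrementalSlice() below waits for unmarkTask before
// moving on to MarkRoots, so no mark bits are still being cleared once root
// marking has begun.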
BackgroundUnmarkTask::BackgroundUnmarkTask(GCRuntime* gc)
    : GCParallelTask(gc, gcstats::PhaseKind::UNMARK) {}

void BackgroundUnmarkTask::initZones() {
  MOZ_ASSERT(isIdle());
  MOZ_ASSERT(zones.empty());
  MOZ_ASSERT(!isCancelled());

  // We can't safely iterate the zones vector from another thread so we copy
  // the zones to be collected into another vector.
  AutoEnterOOMUnsafeRegion oomUnsafe;
  for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
    if (!zones.append(zone.get())) {
      oomUnsafe.crash("BackgroundUnmarkTask::initZones");
    }

    zone->arenas.clearFreeLists();
    zone->arenas.moveArenasToCollectingLists();
  }
}

void BackgroundUnmarkTask::run(AutoLockHelperThreadState& helperTheadLock) {
  AutoUnlockHelperThreadState unlock(helperTheadLock);

  for (Zone* zone : zones) {
    for (auto kind : AllAllocKinds()) {
      ArenaList& arenas = zone->arenas.collectingArenaList(kind);
      for (ArenaListIter arena(arenas.head()); !arena.done(); arena.next()) {
        arena->unmarkAll();
        if (isCancelled()) {
          return;
        }
      }
    }
  }

  zones.clear();
}
void GCRuntime::endPreparePhase(JS::GCReason reason) {
  MOZ_ASSERT(unmarkTask.isIdle());

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    /*
     * In an incremental GC, clear the arena free lists to ensure that
     * subsequent allocations refill them and end up marking new cells back.
     * See arenaAllocatedDuringGC().
     */
    zone->arenas.clearFreeLists();

    zone->markedStrings = 0;
    zone->finalizedStrings = 0;

    zone->setPreservingCode(false);

#ifdef JS_GC_ZEAL
    if (hasZealMode(ZealMode::YieldBeforeRootMarking)) {
      for (auto kind : AllAllocKinds()) {
        for (ArenaIter arena(zone, kind); !arena.done(); arena.next()) {
          arena->checkNoMarkedCells();
        }
      }
    }
#endif
  }

  // Discard JIT code more aggressively if the process is approaching its
  // executable code limit.
  bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();
  auto currentTime = TimeStamp::Now();

  Compartment* activeCompartment = nullptr;
  jit::JitActivationIterator activation(rt->mainContextFromOwnThread());
  if (!activation.done()) {
    activeCompartment = activation->compartment();
  }

  for (CompartmentsIter c(rt); !c.done(); c.next()) {
    c->gcState.scheduledForDestruction = false;
    c->gcState.maybeAlive = false;
    c->gcState.hasEnteredRealm = false;
    bool isActiveCompartment = c == activeCompartment;
    for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
      if (r->shouldTraceGlobal() || !r->zone()->isGCScheduled()) {
        c->gcState.maybeAlive = true;
      }
      if (shouldPreserveJITCode(r, currentTime, reason, canAllocateMoreCode,
                                isActiveCompartment)) {
        r->zone()->setPreservingCode(true);
      }
      if (r->hasBeenEnteredIgnoringJit()) {
        c->gcState.hasEnteredRealm = true;
      }
    }
  }

  /*
   * Perform remaining preparation work that must take place in the first true
   * GC slice.
   */
  {
    gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::PREPARE);

    AutoLockHelperThreadState helperLock;

    /* Clear mark state for WeakMaps in parallel with other work. */
    AutoRunParallelTask unmarkWeakMaps(this, &GCRuntime::unmarkWeakMaps,
                                       gcstats::PhaseKind::UNMARK_WEAKMAPS,
                                       GCUse::Unspecified, helperLock);

    AutoUnlockHelperThreadState unlock(helperLock);

    // Discard JIT code. For incremental collections, the sweep phase will
    // also discard JIT code.
    discardJITCodeForGC();

    /*
     * Relazify functions after discarding JIT code (we can't relazify
     * functions with JIT code) and before the actual mark phase, so that
     * the current GC can collect the JSScripts we're unlinking here. We do
     * this only when we're performing a shrinking GC, as too much
     * relazification can cause performance issues when we have to reparse
     * the same functions over and over.
     */
    if (isShrinkingGC()) {
      relazifyFunctionsForShrinkingGC();
      purgePropMapTablesForShrinkingGC();
      purgeSourceURLsForShrinkingGC();
    }

    /*
     * We must purge the runtime at the beginning of an incremental GC. The
     * danger if we purge later is that the snapshot invariant of
     * incremental GC will be broken, as follows. If some object is
     * reachable only through some cache (say the dtoaCache) then it will
     * not be part of the snapshot. If we purge after root marking, then
     * the mutator could obtain a pointer to the object and start using
     * it. This object might never be marked, so a GC hazard would exist.
     */
    purgeRuntime();

    startBackgroundFreeAfterMinorGC();

    if (isShutdownGC()) {
      /* Clear any engine roots that may hold external data live. */
      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->clearRootsForShutdownGC();
      }

#ifdef DEBUG
      testMarkQueue.clear();
      queuePos = 0;
#endif
    }
  }

#ifdef DEBUG
  if (fullCompartmentChecks) {
    checkForCompartmentMismatches();
  }
#endif
}
AutoUpdateLiveCompartments::AutoUpdateLiveCompartments(GCRuntime* gc)
    : gc(gc) {
  for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
    c->gcState.hasMarkedCells = false;
  }
}

AutoUpdateLiveCompartments::~AutoUpdateLiveCompartments() {
  for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
    if (c->gcState.hasMarkedCells) {
      c->gcState.maybeAlive = true;
    }
  }
}

Zone::GCState Zone::initialMarkingState() const {
  if (isAtomsZone()) {
    // Don't delay gray marking in the atoms zone like we do in other zones.
    return MarkBlackAndGray;
  }

  return MarkBlackOnly;
}
void GCRuntime::beginMarkPhase(AutoGCSession& session) {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);

  // This is the slice we actually start collecting. The number can be used to
  // check whether a major GC has started so we must not increment it until we
  // get here.
  incMajorGcNumber();

#ifdef DEBUG
  MOZ_ASSERT(!hasDelayedMarking());
  for (auto& marker : markers) {
    MOZ_ASSERT(marker->markColor() == MarkColor::Black);
  }

  queueMarkColor.reset();
#endif

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    // Incremental marking barriers are enabled at this point.
    zone->changeGCState(Zone::Prepare, zone->initialMarkingState());

    // Merge arenas allocated during the prepare phase, then move all arenas
    // to the collecting arena lists.
    zone->arenas.mergeArenasFromCollectingLists();
    zone->arenas.moveArenasToCollectingLists();

    for (RealmsInZoneIter realm(zone); !realm.done(); realm.next()) {
      realm->clearAllocatedDuringGC();
    }
  }

  if (rt->isBeingDestroyed()) {
    checkNoRuntimeRoots(session);
  } else {
    AutoUpdateLiveCompartments updateLive(this);
    marker().setRootMarkingMode(true);
    traceRuntimeForMajorGC(marker().tracer(), session);
    marker().setRootMarkingMode(false);
  }

  updateSchedulingStateOnGCStart();
  stats().measureInitialHeapSize();
}
void GCRuntime::findDeadCompartments() {
  gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::FIND_DEAD_COMPARTMENTS);

  /*
   * This code ensures that if a compartment is "dead", then it will be
   * collected in this GC. A compartment is considered dead if its maybeAlive
   * flag is false. The maybeAlive flag is set if:
   *
   * (1) the compartment has been entered (set in beginMarkPhase() above)
   * (2) the compartment's zone is not being collected (set in
   *     beginMarkPhase() above)
   * (3) an object in the compartment was marked during root marking, either
   *     as a black root or a gray root. This is arranged by
   *     SetCompartmentHasMarkedCells and AutoUpdateLiveCompartments.
   * (4) the compartment has incoming cross-compartment edges from another
   *     compartment that has maybeAlive set (set by this method).
   *
   * If maybeAlive is false, then we set the scheduledForDestruction flag.
   * At the end of the GC, we look for compartments where
   * scheduledForDestruction is true. These are compartments that were somehow
   * "revived" during the incremental GC. If any are found, we do a special,
   * non-incremental GC of those compartments to try to collect them.
   *
   * Compartments can be revived for a variety of reasons. One reason is bug
   * 811587, where a reflector that was dead can be revived by DOM code that
   * still refers to the underlying DOM node.
   *
   * Read barriers and allocations can also cause revival. This might happen
   * during a function like JS_TransplantObject, which iterates over all
   * compartments, live or dead, and operates on their objects. See bug 803376
   * for details on this problem. To avoid the problem, we try to avoid
   * allocation and read barriers during JS_TransplantObject and the like.
   */

  // Propagate the maybeAlive flag via cross-compartment edges.

  Vector<Compartment*, 0, js::SystemAllocPolicy> workList;

  for (CompartmentsIter comp(rt); !comp.done(); comp.next()) {
    if (comp->gcState.maybeAlive) {
      if (!workList.append(comp)) {
        return;
      }
    }
  }

  while (!workList.empty()) {
    Compartment* comp = workList.popCopy();
    for (Compartment::WrappedObjectCompartmentEnum e(comp); !e.empty();
         e.popFront()) {
      Compartment* dest = e.front();
      if (!dest->gcState.maybeAlive) {
        dest->gcState.maybeAlive = true;
        if (!workList.append(dest)) {
          return;
        }
      }
    }
  }

  // Set scheduledForDestruction based on maybeAlive.

  for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
    MOZ_ASSERT(!comp->gcState.scheduledForDestruction);
    if (!comp->gcState.maybeAlive) {
      comp->gcState.scheduledForDestruction = true;
    }
  }
}
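// Worked example of the propagation above (names illustrative): compartment A
// was entered, so A.maybeAlive is true; A has a wrapped object whose target
// lives in compartment B, so B.maybeAlive becomes true via the work list; a
// compartment C with no incoming edges that was never entered and had nothing
// marked during root marking keeps maybeAlive false and is therefore flagged
// scheduledForDestruction for the end-of-GC check.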
void GCRuntime::updateSchedulingStateOnGCStart() {
  heapSize.updateOnGCStart();

  // Update memory counters for the zones we are collecting.
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->updateSchedulingStateOnGCStart();
  }
}

inline bool GCRuntime::canMarkInParallel() const {
#if defined(DEBUG) || defined(JS_OOM_BREAKPOINT)
  // OOM testing limits the engine to using a single helper thread.
  if (oom::simulator.targetThread() == THREAD_TYPE_GCPARALLEL) {
    return false;
  }
#endif

  return markers.length() > 1 &&
         stats().initialCollectedBytes() >=
             tunables.parallelMarkingThresholdBytes();
}
IncrementalProgress GCRuntime::markUntilBudgetExhausted(
    SliceBudget& sliceBudget, ParallelMarking allowParallelMarking,
    ShouldReportMarkTime reportTime) {
  // Run a marking slice and return whether the stack is now empty.

  AutoMajorGCProfilerEntry s(this);

  if (processTestMarkQueue() == QueueYielded) {
    return NotFinished;
  }

  if (allowParallelMarking && canMarkInParallel()) {
    MOZ_ASSERT(parallelMarkingEnabled);
    MOZ_ASSERT(reportTime);
    MOZ_ASSERT(!isBackgroundMarking());

    ParallelMarker pm(this);
    if (!pm.mark(sliceBudget)) {
      return NotFinished;
    }

    assertNoMarkingWork();
    return Finished;
  }

#ifdef DEBUG
  AutoSetThreadIsMarking threadIsMarking;
#endif

  return marker().markUntilBudgetExhausted(sliceBudget, reportTime)
             ? Finished
             : NotFinished;
}

void GCRuntime::drainMarkStack() {
  auto unlimited = SliceBudget::unlimited();
  MOZ_RELEASE_ASSERT(marker().markUntilBudgetExhausted(unlimited));
}
const GCVector<HeapPtr<JS::Value>, 0, SystemAllocPolicy>&
GCRuntime::getTestMarkQueue() const {
  return testMarkQueue.get();
}

bool GCRuntime::appendTestMarkQueue(const JS::Value& value) {
  return testMarkQueue.append(value);
}

void GCRuntime::clearTestMarkQueue() {
  testMarkQueue.clear();
  queuePos = 0;
}

size_t GCRuntime::testMarkQueuePos() const { return queuePos; }
GCRuntime::MarkQueueProgress GCRuntime::processTestMarkQueue() {
#ifdef DEBUG
  if (testMarkQueue.empty()) {
    return QueueComplete;
  }

  if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
      state() != State::Sweep) {
    return QueueSuspended;
  }

  // If the queue wants to be gray marking, but we've pushed a black object
  // since set-color-gray was processed, then we can't switch to gray and must
  // again wait until gray marking is possible.
  //
  // Remove this code if the restriction against marking gray during black is
  // relaxed.
  if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
      marker().hasBlackEntries()) {
    return QueueSuspended;
  }

  // If the queue wants to be marking a particular color, switch to that color.
  // In any case, restore the mark color to whatever it was when we entered
  // this function.
  bool willRevertToGray = marker().markColor() == MarkColor::Gray;
  AutoSetMarkColor autoRevertColor(
      marker(), queueMarkColor.valueOr(marker().markColor()));

  // Process the mark queue by taking each object in turn, pushing it onto the
  // mark stack, and processing just the top element with processMarkStackTop
  // without recursing into reachable objects.
  while (queuePos < testMarkQueue.length()) {
    Value val = testMarkQueue[queuePos++].get();
    if (val.isObject()) {
      JSObject* obj = &val.toObject();
      JS::Zone* zone = obj->zone();
      if (!zone->isGCMarking() || obj->isMarkedAtLeast(marker().markColor())) {
        continue;
      }

      // If we have started sweeping, obey sweep group ordering. But note that
      // we will first be called during the initial sweep slice, when the sweep
      // group indexes have not yet been computed. In that case, we can mark
      // freely.
      if (state() == State::Sweep && initialState != State::Sweep) {
        if (zone->gcSweepGroupIndex < getCurrentSweepGroupIndex()) {
          // Too late. This must have been added after we started collecting,
          // and we've already processed its sweep group. Skip it.
          continue;
        }
        if (zone->gcSweepGroupIndex > getCurrentSweepGroupIndex()) {
          // Not ready yet. Wait until we reach the object's sweep group.
          queuePos--;
          return QueueSuspended;
        }
      }

      if (marker().markColor() == MarkColor::Gray &&
          zone->isGCMarkingBlackOnly()) {
        // Have not yet reached the point where we can mark this object, so
        // continue with the GC.
        queuePos--;
        return QueueSuspended;
      }

      if (marker().markColor() == MarkColor::Black && willRevertToGray) {
        // If we put any black objects on the stack, we wouldn't be able to
        // return to gray marking. So delay the marking until we're back to
        // black marking.
        queuePos--;
        return QueueSuspended;
      }

      // Mark the object and push it onto the stack.
      size_t oldPosition = marker().stack.position();
      marker().markAndTraverse<NormalMarkingOptions>(obj);

      // If we overflow the stack here and delay marking, then we won't be
      // testing what we think we're testing.
      if (marker().stack.position() == oldPosition) {
        MOZ_ASSERT(obj->asTenured().arena()->onDelayedMarkingList());
        AutoEnterOOMUnsafeRegion oomUnsafe;
        oomUnsafe.crash("Overflowed stack while marking test queue");
      }

      SliceBudget unlimited = SliceBudget::unlimited();
      marker().processMarkStackTop<NormalMarkingOptions>(unlimited);
    } else if (val.isString()) {
      JSLinearString* str = &val.toString()->asLinear();
      if (js::StringEqualsLiteral(str, "yield") && isIncrementalGc()) {
        return QueueYielded;
      } else if (js::StringEqualsLiteral(str, "enter-weak-marking-mode") ||
                 js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
        if (marker().isRegularMarking()) {
          // We can't enter weak marking mode at just any time, so instead
          // we'll stop processing the queue and continue on with the GC. Once
          // we enter weak marking mode, we can continue to the rest of the
          // queue. Note that we will also suspend for aborting, and then abort
          // the earliest following weak marking mode.
          queuePos--;
          return QueueSuspended;
        }
        if (js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
          marker().abortLinearWeakMarking();
        }
      } else if (js::StringEqualsLiteral(str, "drain")) {
        auto unlimited = SliceBudget::unlimited();
        MOZ_RELEASE_ASSERT(
            marker().markUntilBudgetExhausted(unlimited, DontReportMarkTime));
      } else if (js::StringEqualsLiteral(str, "set-color-gray")) {
        queueMarkColor = mozilla::Some(MarkColor::Gray);
        if (state() != State::Sweep || marker().hasBlackEntries()) {
          // Cannot mark gray yet, so continue with the GC.
          queuePos--;
          return QueueSuspended;
        }
        marker().setMarkColor(MarkColor::Gray);
      } else if (js::StringEqualsLiteral(str, "set-color-black")) {
        queueMarkColor = mozilla::Some(MarkColor::Black);
        marker().setMarkColor(MarkColor::Black);
      } else if (js::StringEqualsLiteral(str, "unset-color")) {
        queueMarkColor.reset();
      }
    }
  }
#endif

  return QueueComplete;
}
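// The queue is a debug/testing feature: entries are either objects to mark or
// string directives. An illustrative queue such as
//   [obj1, "yield", "set-color-gray", obj2, "unset-color"]
// marks obj1, yields the current slice, switches the test marking colour to
// gray once sweeping allows it, marks obj2, and then restores the default
// colour. QueueSuspended means "try again in a later slice"; the position is
// rewound so the entry is re-processed then.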
static bool IsEmergencyGC(JS::GCReason reason) {
  return reason == JS::GCReason::LAST_DITCH ||
         reason == JS::GCReason::MEM_PRESSURE;
}

void GCRuntime::finishCollection(JS::GCReason reason) {
  assertBackgroundSweepingFinished();

  MOZ_ASSERT(!hasDelayedMarking());
  for (auto& marker : markers) {
    marker->stop();
  }

  maybeStopPretenuring();

  if (IsEmergencyGC(reason)) {
    waitBackgroundFreeEnd();
  }

  TimeStamp currentTime = TimeStamp::Now();

  updateSchedulingStateAfterCollection(currentTime);

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->changeGCState(Zone::Finished, Zone::NoGC);
    zone->notifyObservingDebuggers();
  }

#ifdef JS_GC_ZEAL
  clearSelectedForMarking();
#endif

  schedulingState.updateHighFrequencyMode(lastGCEndTime_, currentTime,
                                          tunables);
  lastGCEndTime_ = currentTime;

  checkGCStateNotInUse();
}
void GCRuntime::checkGCStateNotInUse() {
#ifdef DEBUG
  for (auto& marker : markers) {
    MOZ_ASSERT(!marker->isActive());
    MOZ_ASSERT(marker->isDrained());
  }
  MOZ_ASSERT(!hasDelayedMarking());

  MOZ_ASSERT(!lastMarkSlice);

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (zone->wasCollected()) {
      zone->arenas.checkGCStateNotInUse();
    }
    MOZ_ASSERT(!zone->wasGCStarted());
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
    MOZ_ASSERT(!zone->isOnList());
  }

  MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
  MOZ_ASSERT(cellsToAssertNotGray.ref().empty());

  AutoLockHelperThreadState lock;
  MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  MOZ_ASSERT(unmarkTask.isIdle(lock));
  MOZ_ASSERT(markTask.isIdle(lock));
  MOZ_ASSERT(sweepTask.isIdle(lock));
  MOZ_ASSERT(decommitTask.isIdle(lock));
#endif
}
void GCRuntime::maybeStopPretenuring() {
  nursery().maybeStopPretenuring(this);

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (!zone->nurseryStringsDisabled) {
      continue;
    }

    // Count the number of strings before the major GC.
    size_t numStrings = zone->markedStrings + zone->finalizedStrings;
    double rate = double(zone->finalizedStrings) / double(numStrings);
    if (rate > tunables.stopPretenureStringThreshold()) {
      zone->markedStrings = 0;
      zone->finalizedStrings = 0;
      zone->nurseryStringsDisabled = false;
      nursery().updateAllocFlagsForZone(zone);
    }
  }
}
void GCRuntime::updateSchedulingStateAfterCollection(TimeStamp currentTime) {
  TimeDuration totalGCTime = stats().totalGCTime();
  size_t totalInitialBytes = stats().initialCollectedBytes();

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (tunables.balancedHeapLimitsEnabled() && totalInitialBytes != 0) {
      zone->updateCollectionRate(totalGCTime, totalInitialBytes);
    }
    zone->clearGCSliceThresholds();
    zone->updateGCStartThresholds(*this);
  }
}

void GCRuntime::updateAllGCStartThresholds() {
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zone->updateGCStartThresholds(*this);
  }
}

void GCRuntime::updateAllocationRates() {
  // Calculate mutator time since the last update. This ignores the fact that
  // the zone could have been created since the last update.

  TimeStamp currentTime = TimeStamp::Now();
  TimeDuration totalTime = currentTime - lastAllocRateUpdateTime;
  if (collectorTimeSinceAllocRateUpdate >= totalTime) {
    // It shouldn't happen but occasionally we see collector time being larger
    // than total time. Skip the update in that case.
    return;
  }

  TimeDuration mutatorTime = totalTime - collectorTimeSinceAllocRateUpdate;

  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    zone->updateAllocationRate(mutatorTime);
    zone->updateGCStartThresholds(*this);
  }

  lastAllocRateUpdateTime = currentTime;
  collectorTimeSinceAllocRateUpdate = TimeDuration();
}
static const char* GCHeapStateToLabel(JS::HeapState heapState) {
  switch (heapState) {
    case JS::HeapState::MinorCollecting:
      return "js::Nursery::collect";
    case JS::HeapState::MajorCollecting:
      return "js::GCRuntime::collect";
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }
  MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");
  return nullptr;
}

static JS::ProfilingCategoryPair GCHeapStateToProfilingCategory(
    JS::HeapState heapState) {
  return heapState == JS::HeapState::MinorCollecting
             ? JS::ProfilingCategoryPair::GCCC_MinorGC
             : JS::ProfilingCategoryPair::GCCC_MajorGC;
}

/* Start a new heap session. */
AutoHeapSession::AutoHeapSession(GCRuntime* gc, JS::HeapState heapState)
    : gc(gc), prevState(gc->heapState_) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(gc->rt));
  MOZ_ASSERT(prevState == JS::HeapState::Idle ||
             (prevState == JS::HeapState::MajorCollecting &&
              heapState == JS::HeapState::MinorCollecting));
  MOZ_ASSERT(heapState != JS::HeapState::Idle);

  gc->heapState_ = heapState;

  if (heapState == JS::HeapState::MinorCollecting ||
      heapState == JS::HeapState::MajorCollecting) {
    profilingStackFrame.emplace(gc->rt->mainContextFromOwnThread(),
                                GCHeapStateToLabel(heapState),
                                GCHeapStateToProfilingCategory(heapState));
  }
}

AutoHeapSession::~AutoHeapSession() {
  MOZ_ASSERT(JS::RuntimeHeapIsBusy());
  gc->heapState_ = prevState;
}

static const char* MajorGCStateToLabel(State state) {
  switch (state) {
    case State::Mark:
      return "js::GCRuntime::markUntilBudgetExhausted";
    case State::Sweep:
      return "js::GCRuntime::performSweepActions";
    case State::Compact:
      return "js::GCRuntime::compactPhase";
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }
  MOZ_ASSERT_UNREACHABLE("Should have exhausted every State variant!");
  return nullptr;
}
static JS::ProfilingCategoryPair MajorGCStateToProfilingCategory(State state) {
  switch (state) {
    case State::Mark:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Mark;
    case State::Sweep:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Sweep;
    case State::Compact:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Compact;
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }
}

AutoMajorGCProfilerEntry::AutoMajorGCProfilerEntry(GCRuntime* gc)
    : AutoGeckoProfilerEntry(gc->rt->mainContextFromAnyThread(),
                             MajorGCStateToLabel(gc->state()),
                             MajorGCStateToProfilingCategory(gc->state())) {
  MOZ_ASSERT(gc->heapState() == JS::HeapState::MajorCollecting);
}
GCRuntime::IncrementalResult GCRuntime::resetIncrementalGC(
    GCAbortReason reason) {
  MOZ_ASSERT(reason != GCAbortReason::None);

  // Drop as much work as possible from an ongoing incremental GC so
  // we can start a new GC after it has finished.
  if (incrementalState == State::NotActive) {
    return IncrementalResult::Ok;
  }

  AutoGCSession session(this, JS::HeapState::MajorCollecting);

  switch (incrementalState) {
    case State::NotActive:
    case State::MarkRoots:
    case State::Finish:
      MOZ_CRASH("Unexpected GC state in resetIncrementalGC");
      break;

    case State::Prepare:
      unmarkTask.cancelAndWait();

      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->changeGCState(Zone::Prepare, Zone::NoGC);
        zone->clearGCSliceThresholds();
        zone->arenas.clearFreeLists();
        zone->arenas.mergeArenasFromCollectingLists();
      }

      incrementalState = State::NotActive;
      checkGCStateNotInUse();
      break;

    case State::Mark: {
      // Cancel any ongoing marking.
      for (auto& marker : markers) {
        marker->reset();
      }
      resetDelayedMarking();

      for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
        ResetGrayList(c);
      }

      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->changeGCState(zone->initialMarkingState(), Zone::NoGC);
        zone->clearGCSliceThresholds();
        zone->arenas.unmarkPreMarkedFreeCells();
        zone->arenas.mergeArenasFromCollectingLists();
      }

      {
        AutoLockHelperThreadState lock;
        lifoBlocksToFree.ref().freeAll();
      }

      lastMarkSlice = false;
      incrementalState = State::Finish;

#ifdef DEBUG
      for (auto& marker : markers) {
        MOZ_ASSERT(!marker->shouldCheckCompartments());
      }
#endif

      break;
    }

    case State::Sweep: {
      // Finish sweeping the current sweep group, then abort.
      for (CompartmentsIter c(rt); !c.done(); c.next()) {
        c->gcState.scheduledForDestruction = false;
      }

      abortSweepAfterCurrentGroup = true;
      isCompacting = false;

      break;
    }

    case State::Finalize: {
      isCompacting = false;
      break;
    }

    case State::Compact: {
      // Skip any remaining zones that would have been compacted.
      MOZ_ASSERT(isCompacting);
      startedCompacting = true;
      zonesToMaybeCompact.ref().clear();
      break;
    }

    case State::Decommit: {
      break;
    }
  }

  stats().reset(reason);

  return IncrementalResult::ResetIncremental;
}
AutoDisableBarriers::AutoDisableBarriers(GCRuntime* gc) : gc(gc) {
  /*
   * Clear needsIncrementalBarrier early so we don't do any write barriers
   * during GC.
   */
  for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
    if (zone->isGCMarking()) {
      MOZ_ASSERT(zone->needsIncrementalBarrier());
      zone->setNeedsIncrementalBarrier(false);
    }
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
  }
}

AutoDisableBarriers::~AutoDisableBarriers() {
  for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
    if (zone->isGCMarking()) {
      zone->setNeedsIncrementalBarrier(true);
    }
  }
}

static bool NeedToCollectNursery(GCRuntime* gc) {
  return !gc->nursery().isEmpty() || !gc->storeBuffer().isEmpty();
}

static const char* DescribeBudget(const SliceBudget& budget) {
  MOZ_ASSERT(TlsContext.get()->isMainThreadContext());
  constexpr size_t length = 32;
  static char buffer[length];
  budget.describe(buffer, length);
  return buffer;
}

static bool ShouldPauseMutatorWhileWaiting(const SliceBudget& budget,
                                           JS::GCReason reason,
                                           bool budgetWasIncreased) {
  // When we're nearing the incremental limit at which we will finish the
  // collection synchronously, pause the main thread if there is only
  // background GC work happening. This allows the GC to catch up and avoid
  // hitting the limit.
  return budget.isTimeBudget() &&
         (reason == JS::GCReason::ALLOC_TRIGGER ||
          reason == JS::GCReason::TOO_MUCH_MALLOC) &&
         budgetWasIncreased;
}
void GCRuntime::incrementalSlice(SliceBudget& budget, JS::GCReason reason,
                                 bool budgetWasIncreased) {
  MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);

  AutoSetThreadIsPerformingGC performingGC(rt->gcContext());

  AutoGCSession session(this, JS::HeapState::MajorCollecting);

  bool destroyingRuntime = (reason == JS::GCReason::DESTROY_RUNTIME);

  initialState = incrementalState;
  isIncremental = !budget.isUnlimited();
  useBackgroundThreads = ShouldUseBackgroundThreads(isIncremental, reason);

#ifdef JS_GC_ZEAL
  // Do the incremental collection type specified by zeal mode if the
  // collection was triggered by runDebugGC() and incremental GC has not been
  // cancelled by resetIncrementalGC().
  useZeal = isIncremental && reason == JS::GCReason::DEBUG_GC;
#endif

  stats().log(
      "Incremental: %d, lastMarkSlice: %d, useZeal: %d, budget: %s, "
      "budgetWasIncreased: %d",
      bool(isIncremental), bool(lastMarkSlice), bool(useZeal),
      DescribeBudget(budget), budgetWasIncreased);

  if (useZeal && hasIncrementalTwoSliceZealMode()) {
    // Yields between slices occur at predetermined points in these modes; the
    // budget is not used. |isIncremental| is still true.
    stats().log("Using unlimited budget for two-slice zeal mode");
    budget = SliceBudget::unlimited();
  }

  bool shouldPauseMutator =
      ShouldPauseMutatorWhileWaiting(budget, reason, budgetWasIncreased);

  switch (incrementalState) {
    case State::NotActive:
      startCollection(reason);

      incrementalState = State::Prepare;
      if (!beginPreparePhase(reason, session)) {
        incrementalState = State::NotActive;
        break;
      }

      if (useZeal && hasZealMode(ZealMode::YieldBeforeRootMarking)) {
        break;
      }

      [[fallthrough]];

    case State::Prepare:
      if (waitForBackgroundTask(unmarkTask, budget, shouldPauseMutator,
                                DontTriggerSliceWhenFinished) == NotFinished) {
        break;
      }

      incrementalState = State::MarkRoots;
      [[fallthrough]];

    case State::MarkRoots:
      if (NeedToCollectNursery(this)) {
        collectNurseryFromMajorGC(reason);
      }

      endPreparePhase(reason);
      beginMarkPhase(session);
      incrementalState = State::Mark;

      if (useZeal && hasZealMode(ZealMode::YieldBeforeMarking) &&
          isIncremental) {
        break;
      }

      [[fallthrough]];

    case State::Mark:
      if (mightSweepInThisSlice(budget.isUnlimited())) {
        // Trace wrapper rooters before marking if we might start sweeping in
        // this slice.
        rt->mainContextFromOwnThread()->traceWrapperGCRooters(
            marker().tracer());
      }

      {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
        if (markUntilBudgetExhausted(budget, AllowParallelMarking) ==
            NotFinished) {
          break;
        }
      }

      assertNoMarkingWork();

      /*
       * There are a number of reasons why we break out of collection here,
       * either ending the slice or to run a new iteration of the loop in
       * GCRuntime::collect()
       *
       * In incremental GCs where we have already performed more than one
       * slice we yield after marking with the aim of starting the sweep in
       * the next slice, since the first slice of sweeping can be expensive.
       *
       * This is modified by the various zeal modes. We don't yield in
       * YieldBeforeMarking mode and we always yield in YieldBeforeSweeping
       * mode.
       *
       * We will need to mark anything new on the stack when we resume, so
       * we stay in Mark state.
       */
      if (isIncremental && !lastMarkSlice) {
        if ((initialState == State::Mark &&
             !(useZeal && hasZealMode(ZealMode::YieldBeforeMarking))) ||
            (useZeal && hasZealMode(ZealMode::YieldBeforeSweeping))) {
          lastMarkSlice = true;
          stats().log("Yielding before starting sweeping");
          break;
        }
      }

      incrementalState = State::Sweep;
      lastMarkSlice = false;

      beginSweepPhase(reason, session);

      [[fallthrough]];

    case State::Sweep:
      if (storeBuffer().mayHavePointersToDeadCells()) {
        collectNurseryFromMajorGC(reason);
      }

      if (initialState == State::Sweep) {
        rt->mainContextFromOwnThread()->traceWrapperGCRooters(
            marker().tracer());
      }

      if (performSweepActions(budget) == NotFinished) {
        break;
      }

      endSweepPhase(destroyingRuntime);

      incrementalState = State::Finalize;

      [[fallthrough]];

    case State::Finalize:
      if (waitForBackgroundTask(sweepTask, budget, shouldPauseMutator,
                                TriggerSliceWhenFinished) == NotFinished) {
        break;
      }

      assertBackgroundSweepingFinished();

      {
        // Sweep the zones list now that background finalization is finished
        // to remove and free dead zones, compartments and realms.
        gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP);
        gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::DESTROY);
        sweepZones(rt->gcContext(), destroyingRuntime);
      }

      MOZ_ASSERT(!startedCompacting);
      incrementalState = State::Compact;

      // Always yield before compacting since it is not incremental.
      if (isCompacting && !budget.isUnlimited()) {
        break;
      }

      [[fallthrough]];

    case State::Compact:
      if (isCompacting) {
        if (NeedToCollectNursery(this)) {
          collectNurseryFromMajorGC(reason);
        }

        storeBuffer().checkEmpty();
        if (!startedCompacting) {
          beginCompactPhase();
        }

        if (compactPhase(reason, budget, session) == NotFinished) {
          break;
        }

        endCompactPhase();
      }

      startDecommit();
      incrementalState = State::Decommit;

      [[fallthrough]];

    case State::Decommit:
      if (waitForBackgroundTask(decommitTask, budget, shouldPauseMutator,
                                TriggerSliceWhenFinished) == NotFinished) {
        break;
      }

      incrementalState = State::Finish;

      [[fallthrough]];

    case State::Finish:
      finishCollection(reason);
      incrementalState = State::NotActive;
      break;
  }

#ifdef DEBUG
  MOZ_ASSERT(safeToYield);
  for (auto& marker : markers) {
    MOZ_ASSERT(marker->markColor() == MarkColor::Black);
  }
  MOZ_ASSERT(!rt->gcContext()->hasJitCodeToPoison());
#endif
}
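// The switch above is a fall-through state machine: each |case| either
// completes its work and falls through to the next state within the same
// slice, or hits |break| to end the slice and resume from the saved
// incrementalState next time. A non-incremental collection simply never
// breaks, so the whole sequence Prepare -> MarkRoots -> Mark -> Sweep ->
// Finalize -> Compact -> Decommit -> Finish runs in one call.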
void GCRuntime::collectNurseryFromMajorGC(JS::GCReason reason) {
  collectNursery(gcOptions(), reason,
                 gcstats::PhaseKind::EVICT_NURSERY_FOR_MAJOR_GC);
}

bool GCRuntime::hasForegroundWork() const {
  switch (incrementalState) {
    case State::NotActive:
      // Incremental GC is not running and no work is pending.
      return false;
    case State::Prepare:
      // We yield in the Prepare state after starting unmarking.
      return !unmarkTask.wasStarted();
    case State::Finalize:
      // We yield in the Finalize state to wait for background sweeping.
      return !isBackgroundSweeping();
    case State::Decommit:
      // We yield in the Decommit state to wait for background decommit.
      return !decommitTask.wasStarted();
    default:
      // In all other states there is still work to do.
      return true;
  }
}
IncrementalProgress GCRuntime::waitForBackgroundTask(
    GCParallelTask& task, const SliceBudget& budget, bool shouldPauseMutator,
    ShouldTriggerSliceWhenFinished triggerSlice) {
  // Wait here in non-incremental collections, or if we want to pause the
  // mutator to let the GC catch up.
  if (budget.isUnlimited() || shouldPauseMutator) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    Maybe<TimeStamp> deadline;
    if (budget.isTimeBudget()) {
      deadline.emplace(budget.deadline());
    }
    task.join(deadline);
  }

  // In incremental collections, yield if the task has not finished and
  // optionally request a slice to notify us when this happens.
  if (!budget.isUnlimited()) {
    AutoLockHelperThreadState lock;
    if (task.wasStarted(lock)) {
      if (triggerSlice) {
        requestSliceAfterBackgroundTask = true;
      }
      return NotFinished;
    }

    task.joinWithLockHeld(lock);
  }

  MOZ_ASSERT(task.isIdle());

  if (triggerSlice) {
    cancelRequestedGCAfterBackgroundTask();
  }

  return Finished;
}
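// Typical incremental sequence (illustrative): a time-budgeted slice reaches
// a wait state, finds the task still running, sets
// requestSliceAfterBackgroundTask and returns NotFinished; when the task
// finishes it calls maybeRequestGCAfterBackgroundTask(), which requests a new
// slice with reason BG_TASK_FINISHED so the collection can continue promptly
// instead of waiting for the next allocation trigger.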
GCAbortReason gc::IsIncrementalGCUnsafe(JSRuntime* rt) {
  MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);

  if (!rt->gc.isIncrementalGCAllowed()) {
    return GCAbortReason::IncrementalDisabled;
  }

  return GCAbortReason::None;
}

inline void GCRuntime::checkZoneIsScheduled(Zone* zone, JS::GCReason reason,
                                            const char* trigger) {
#ifdef DEBUG
  if (zone->isGCScheduled()) {
    return;
  }

  fprintf(stderr,
          "checkZoneIsScheduled: Zone %p not scheduled as expected in %s GC "
          "on %s trigger\n",
          zone, JS::ExplainGCReason(reason), trigger);
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    fprintf(stderr, "  Zone %p:%s%s\n", zone.get(),
            zone->isAtomsZone() ? " atoms" : "",
            zone->isGCScheduled() ? " scheduled" : "");
  }
  fflush(stderr);
  MOZ_CRASH("Zone not scheduled");
#endif
}
GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC(
    bool nonincrementalByAPI, JS::GCReason reason, SliceBudget& budget) {
  if (nonincrementalByAPI) {
    stats().nonincremental(GCAbortReason::NonIncrementalRequested);
    budget = SliceBudget::unlimited();

    // Reset any in progress incremental GC if this was triggered via the
    // API. This isn't required for correctness, but sometimes during tests
    // the caller expects this GC to collect certain objects, and we need
    // to make sure to collect everything possible.
    if (reason != JS::GCReason::ALLOC_TRIGGER) {
      return resetIncrementalGC(GCAbortReason::NonIncrementalRequested);
    }

    return IncrementalResult::Ok;
  }

  if (reason == JS::GCReason::ABORT_GC) {
    budget = SliceBudget::unlimited();
    stats().nonincremental(GCAbortReason::AbortRequested);
    return resetIncrementalGC(GCAbortReason::AbortRequested);
  }

  if (!budget.isUnlimited()) {
    GCAbortReason unsafeReason = IsIncrementalGCUnsafe(rt);
    if (unsafeReason == GCAbortReason::None) {
      if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
        unsafeReason = GCAbortReason::CompartmentRevived;
      } else if (!incrementalGCEnabled) {
        unsafeReason = GCAbortReason::ModeChange;
      }
    }

    if (unsafeReason != GCAbortReason::None) {
      budget = SliceBudget::unlimited();
      stats().nonincremental(unsafeReason);
      return resetIncrementalGC(unsafeReason);
    }
  }

  GCAbortReason resetReason = GCAbortReason::None;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (zone->gcHeapSize.bytes() >=
        zone->gcHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "GC bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::GCBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::GCBytesTrigger;
      }
    }

    if (zone->mallocHeapSize.bytes() >=
        zone->mallocHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "malloc bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::MallocBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::MallocBytesTrigger;
      }
    }

    if (zone->jitHeapSize.bytes() >=
        zone->jitHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "JIT code bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::JitCodeBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::JitCodeBytesTrigger;
      }
    }

    if (isIncrementalGCInProgress() &&
        zone->isGCScheduled() != zone->wasGCStarted()) {
      budget = SliceBudget::unlimited();
      resetReason = GCAbortReason::ZoneChange;
    }
  }

  if (resetReason != GCAbortReason::None) {
    return resetIncrementalGC(resetReason);
  }

  return IncrementalResult::Ok;
}
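// Roughly: when any zone exceeds one of its incremental limits (GC heap,
// malloc heap or JIT code bytes), the rest of the collection runs
// non-incrementally (the budget becomes unlimited); the in-progress GC is
// additionally reset if that zone is already past the Sweep state, or if the
// set of scheduled zones no longer matches the zones being collected.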
bool GCRuntime::maybeIncreaseSliceBudget(SliceBudget& budget) {
  if (js::SupportDifferentialTesting()) {
    return false;
  }

  if (!budget.isTimeBudget() || !isIncrementalGCInProgress()) {
    return false;
  }

  bool wasIncreasedForLongCollections =
      maybeIncreaseSliceBudgetForLongCollections(budget);
  bool wasIncreasedForUrgentCollections =
      maybeIncreaseSliceBudgetForUrgentCollections(budget);

  return wasIncreasedForLongCollections || wasIncreasedForUrgentCollections;
}
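// Both extensions above go through ExtendBudget (below), which only ever
// lengthens the budget, so in effect the larger of the two minimums wins and
// the requested budget is never reduced.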
// Return true if the budget is actually extended after rounding.
static bool ExtendBudget(SliceBudget& budget, double newDuration) {
  long newDurationMS = lround(newDuration);
  if (newDurationMS <= budget.timeBudget()) {
    return false;
  }

  bool idleTriggered = budget.idle;
  budget = SliceBudget(TimeBudget(newDuration), nullptr);  // Uninterruptible.
  budget.idle = idleTriggered;
  budget.extended = true;
  return true;
}
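// Illustrative example (hypothetical values): with a current time budget of
// 5 ms, a requested duration of 5.4 ms rounds to 5 and is not an extension,
// so the budget is left alone and false is returned; 6.2 ms rounds to 6,
// which is larger, so the budget is replaced with an uninterruptible 6.2 ms
// budget and true is returned.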
bool GCRuntime::maybeIncreaseSliceBudgetForLongCollections(
    SliceBudget& budget) {
  // For long-running collections, enforce a minimum time budget that increases
  // linearly with time up to a maximum.

  // All times are in milliseconds.
  struct BudgetAtTime {
    double time;
    double budget;
  };
  const BudgetAtTime MinBudgetStart{1500, 0.0};
  const BudgetAtTime MinBudgetEnd{2500, 100.0};

  double totalTime = (TimeStamp::Now() - lastGCStartTime()).ToMilliseconds();

  double minBudget =
      LinearInterpolate(totalTime, MinBudgetStart.time, MinBudgetStart.budget,
                        MinBudgetEnd.time, MinBudgetEnd.budget);

  return ExtendBudget(budget, minBudget);
}
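// Worked example, assuming LinearInterpolate clamps to the endpoint values
// outside the [MinBudgetStart.time, MinBudgetEnd.time] range: a collection
// that has been running for 1000 ms gets no minimum (0 ms); at 2000 ms the
// minimum is about 50 ms; at 3000 ms it is capped at 100 ms. ExtendBudget
// then applies the minimum only if it exceeds the current budget.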
bool GCRuntime::maybeIncreaseSliceBudgetForUrgentCollections(
    SliceBudget& budget) {
  // Enforce a minimum time budget based on how close we are to the incremental
  // limit.

  size_t minBytesRemaining = SIZE_MAX;
  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    if (!zone->wasGCStarted()) {
      continue;
    }
    size_t gcBytesRemaining =
        zone->gcHeapThreshold.incrementalBytesRemaining(zone->gcHeapSize);
    minBytesRemaining = std::min(minBytesRemaining, gcBytesRemaining);
    size_t mallocBytesRemaining =
        zone->mallocHeapThreshold.incrementalBytesRemaining(
            zone->mallocHeapSize);
    minBytesRemaining = std::min(minBytesRemaining, mallocBytesRemaining);
  }

  if (minBytesRemaining < tunables.urgentThresholdBytes() &&
      minBytesRemaining != 0) {
    // Increase budget based on the reciprocal of the fraction remaining.
    double fractionRemaining =
        double(minBytesRemaining) / double(tunables.urgentThresholdBytes());
    double minBudget = double(defaultSliceBudgetMS()) / fractionRemaining;
    return ExtendBudget(budget, minBudget);
  }

  return false;
}
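// Worked example (hypothetical numbers): if the default slice budget is 5 ms,
// the urgent threshold is 16 MB and the closest zone is 4 MB away from its
// incremental limit, then fractionRemaining is 0.25 and the minimum budget is
// 5 / 0.25 = 20 ms, i.e. slices get longer the closer we are to being forced
// non-incremental.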
static void ScheduleZones(GCRuntime* gc, JS::GCReason reason) {
  for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
    // Re-check the heap threshold for alloc-triggered zones that were not
    // previously collected. Now that we have allocation rate data, the heap
    // limit may have been increased beyond the current size.
    if (gc->tunables.balancedHeapLimitsEnabled() && zone->isGCScheduled() &&
        zone->smoothedCollectionRate.ref().isNothing() &&
        reason == JS::GCReason::ALLOC_TRIGGER &&
        zone->gcHeapSize.bytes() < zone->gcHeapThreshold.startBytes()) {
      zone->unscheduleGC();  // May still be re-scheduled below.
    }

    if (gc->isShutdownGC()) {
      zone->scheduleGC();
    }

    if (!gc->isPerZoneGCEnabled()) {
      zone->scheduleGC();
    }

    // To avoid resets, continue to collect any zones that were being
    // collected in a previous slice.
    if (gc->isIncrementalGCInProgress() && zone->wasGCStarted()) {
      zone->scheduleGC();
    }

    // This is a heuristic to reduce the total number of collections.
    bool inHighFrequencyMode = gc->schedulingState.inHighFrequencyGCMode();
    if (zone->gcHeapSize.bytes() >=
            zone->gcHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
        zone->mallocHeapSize.bytes() >=
            zone->mallocHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
        zone->jitHeapSize.bytes() >= zone->jitHeapThreshold.startBytes()) {
      zone->scheduleGC();
    }
  }
}
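// The eager trigger above opportunistically adds zones that are already close
// to their start thresholds, on the assumption that collecting them now is
// cheaper than starting another GC for them shortly afterwards.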
static void UnscheduleZones(GCRuntime* gc) {
  for (ZonesIter zone(gc->rt, WithAtoms); !zone.done(); zone.next()) {
    zone->unscheduleGC();
  }
}
class js::gc::AutoCallGCCallbacks {
  GCRuntime& gc_;
  JS::GCReason reason_;

 public:
  explicit AutoCallGCCallbacks(GCRuntime& gc, JS::GCReason reason)
      : gc_(gc), reason_(reason) {
    gc_.maybeCallGCCallback(JSGC_BEGIN, reason);
  }
  ~AutoCallGCCallbacks() { gc_.maybeCallGCCallback(JSGC_END, reason_); }
};
void GCRuntime::maybeCallGCCallback(JSGCStatus status, JS::GCReason reason) {
  if (!gcCallback.ref().op) {
    return;
  }

  if (isIncrementalGCInProgress()) {
    return;
  }

  if (gcCallbackDepth == 0) {
    // Save scheduled zone information in case the callback clears it.
    for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
      zone->gcScheduledSaved_ = zone->gcScheduled_;
    }
  }

  // Save and clear GC options and state in case the callback reenters GC.
  JS::GCOptions options = gcOptions();
  maybeGcOptions = Nothing();
  bool savedFullGCRequested = fullGCRequested;
  fullGCRequested = false;

  gcCallbackDepth++;

  callGCCallback(status, reason);

  MOZ_ASSERT(gcCallbackDepth != 0);
  gcCallbackDepth--;

  // Restore the original GC options.
  maybeGcOptions = Some(options);

  // At the end of a GC, clear out the fullGCRequested state. At the start,
  // restore the previous setting.
  fullGCRequested = (status == JSGC_END) ? false : savedFullGCRequested;

  if (gcCallbackDepth == 0) {
    // Ensure any zone that was originally scheduled stays scheduled.
    for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
      zone->gcScheduled_ = zone->gcScheduled_ || zone->gcScheduledSaved_;
    }
  }
}
/*
 * We disable inlining to ensure that the bottom of the stack with possible GC
 * roots recorded in MarkRuntime excludes any pointers we use during the
 * marking implementation.
 */
MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle(
    bool nonincrementalByAPI, const SliceBudget& budgetArg,
    JS::GCReason reason) {
  // Assert if this is a GC unsafe region.
  rt->mainContextFromOwnThread()->verifyIsSafeToGC();

  // It's ok if threads other than the main thread have suppressGC set, as
  // they are operating on zones which will not be collected from here.
  MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);

  // This reason is used internally. See below.
  MOZ_ASSERT(reason != JS::GCReason::RESET);

  // Background finalization and decommit are finished by definition before we
  // can start a new major GC. Background allocation may still be running, but
  // that's OK because chunk pools are protected by the GC lock.
  if (!isIncrementalGCInProgress()) {
    assertBackgroundSweepingFinished();
    MOZ_ASSERT(decommitTask.isIdle());
  }

  // Note that GC callbacks are allowed to re-enter GC.
  AutoCallGCCallbacks callCallbacks(*this, reason);

  // Increase the slice budget for long-running collections before it is
  // recorded by AutoGCSlice below.
  SliceBudget budget(budgetArg);
  bool budgetWasIncreased = maybeIncreaseSliceBudget(budget);

  ScheduleZones(this, reason);

  auto updateCollectorTime = MakeScopeExit([&] {
    if (const gcstats::Statistics::SliceData* slice = stats().lastSlice()) {
      collectorTimeSinceAllocRateUpdate += slice->duration();
    }
  });

  gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), gcOptions(), budget,
                           reason, budgetWasIncreased);

  IncrementalResult result =
      budgetIncrementalGC(nonincrementalByAPI, reason, budget);
  if (result == IncrementalResult::ResetIncremental) {
    if (incrementalState == State::NotActive) {
      // The collection was reset and has finished.
      return result;
    }

    // The collection was reset but we must finish up some remaining work.
    reason = JS::GCReason::RESET;
  }

  majorGCTriggerReason = JS::GCReason::NO_REASON;
  MOZ_ASSERT(!stats().hasTrigger());

  gcprobes::MajorGCStart();
  incrementalSlice(budget, reason, budgetWasIncreased);
  gcprobes::MajorGCEnd();

  MOZ_ASSERT_IF(result == IncrementalResult::ResetIncremental,
                !isIncrementalGCInProgress());

  return result;
}
inline bool GCRuntime::mightSweepInThisSlice(bool nonIncremental) {
  MOZ_ASSERT(incrementalState < State::Sweep);
  return nonIncremental || lastMarkSlice || hasIncrementalTwoSliceZealMode();
}
static bool IsDeterministicGCReason(JS::GCReason reason) {
  switch (reason) {
    case JS::GCReason::API:
    case JS::GCReason::DESTROY_RUNTIME:
    case JS::GCReason::LAST_DITCH:
    case JS::GCReason::TOO_MUCH_MALLOC:
    case JS::GCReason::TOO_MUCH_WASM_MEMORY:
    case JS::GCReason::TOO_MUCH_JIT_CODE:
    case JS::GCReason::ALLOC_TRIGGER:
    case JS::GCReason::DEBUG_GC:
    case JS::GCReason::CC_FORCED:
    case JS::GCReason::SHUTDOWN_CC:
    case JS::GCReason::ABORT_GC:
    case JS::GCReason::DISABLE_GENERATIONAL_GC:
    case JS::GCReason::FINISH_GC:
    case JS::GCReason::PREPARE_FOR_TRACING:
      return true;

    default:
      return false;
  }
}
gcstats::ZoneGCStats GCRuntime::scanZonesBeforeGC() {
  gcstats::ZoneGCStats zoneStats;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zoneStats.zoneCount++;
    zoneStats.compartmentCount += zone->compartments().length();
    if (zone->isGCScheduled()) {
      zoneStats.collectedZoneCount++;
      zoneStats.collectedCompartmentCount += zone->compartments().length();
    }
  }

  return zoneStats;
}
// The GC can only clean up scheduledForDestruction realms that were marked
// live by a barrier (e.g. by RemapWrappers from a navigation event). It is
// also common to have realms held live because they are part of a cycle in
// gecko, e.g. involving the HTMLDocument wrapper. In this case, we need to
// run the CycleCollector in order to remove these edges before the realm can
// be freed.
void GCRuntime::maybeDoCycleCollection() {
  const static float ExcessiveGrayRealms = 0.8f;
  const static size_t LimitGrayRealms = 200;

  size_t realmsTotal = 0;
  size_t realmsGray = 0;
  for (RealmsIter realm(rt); !realm.done(); realm.next()) {
    ++realmsTotal;
    GlobalObject* global = realm->unsafeUnbarrieredMaybeGlobal();
    if (global && global->isMarkedGray()) {
      ++realmsGray;
    }
  }
  float grayFraction = float(realmsGray) / float(realmsTotal);
  if (grayFraction > ExcessiveGrayRealms || realmsGray > LimitGrayRealms) {
    callDoCycleCollectionCallback(rt->mainContextFromOwnThread());
  }
}
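// Illustrative example: with 10 realms of which 9 have gray globals, the gray
// fraction is 0.9 > ExcessiveGrayRealms (0.8), so the cycle collector
// callback is requested; independently, more than LimitGrayRealms (200) gray
// realms triggers it regardless of the fraction.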
void GCRuntime::checkCanCallAPI() {
  MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt));

  /* If we attempt to invoke the GC while we are running in the GC, assert. */
  MOZ_RELEASE_ASSERT(!JS::RuntimeHeapIsBusy());
}
bool GCRuntime::checkIfGCAllowedInCurrentState(JS::GCReason reason) {
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return false;
  }

  // Only allow shutdown GCs when we're destroying the runtime. This keeps
  // the GC callback from triggering a nested GC and resetting global state.
  if (rt->isBeingDestroyed() && !isShutdownGC()) {
    return false;
  }

  if (deterministicOnly && !IsDeterministicGCReason(reason)) {
    return false;
  }

  return true;
}
bool GCRuntime::shouldRepeatForDeadZone(JS::GCReason reason) {
  MOZ_ASSERT_IF(reason == JS::GCReason::COMPARTMENT_REVIVED, !isIncremental);
  MOZ_ASSERT(!isIncrementalGCInProgress());

  if (!isIncremental) {
    return false;
  }

  for (CompartmentsIter c(rt); !c.done(); c.next()) {
    if (c->gcState.scheduledForDestruction) {
      return true;
    }
  }

  return false;
}
struct MOZ_RAII AutoSetZoneSliceThresholds {
  explicit AutoSetZoneSliceThresholds(GCRuntime* gc) : gc(gc) {
    // On entry, zones that are already collecting should have a slice
    // threshold set.
    for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
      MOZ_ASSERT(zone->wasGCStarted() ==
                 zone->gcHeapThreshold.hasSliceThreshold());
      MOZ_ASSERT(zone->wasGCStarted() ==
                 zone->mallocHeapThreshold.hasSliceThreshold());
    }
  }

  ~AutoSetZoneSliceThresholds() {
    // On exit, update the thresholds for all collecting zones.
    bool waitingOnBGTask = gc->isWaitingOnBackgroundTask();
    for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
      if (zone->wasGCStarted()) {
        zone->setGCSliceThresholds(*gc, waitingOnBGTask);
      } else {
        MOZ_ASSERT(!zone->gcHeapThreshold.hasSliceThreshold());
        MOZ_ASSERT(!zone->mallocHeapThreshold.hasSliceThreshold());
      }
    }
  }

  GCRuntime* gc;
};
void GCRuntime::collect(bool nonincrementalByAPI, const SliceBudget& budget,
                        JS::GCReason reason) {
  TimeStamp startTime = TimeStamp::Now();
  auto timer = MakeScopeExit([&] {
    if (Realm* realm = rt->mainContextFromOwnThread()->realm()) {
      realm->timers.gcTime += TimeStamp::Now() - startTime;
    }
  });

  auto clearGCOptions = MakeScopeExit([&] {
    if (!isIncrementalGCInProgress()) {
      maybeGcOptions = Nothing();
    }
  });

  MOZ_ASSERT(reason != JS::GCReason::NO_REASON);

  // Checks run for each request, even if we do not actually GC.
  checkCanCallAPI();

  // Check if we are allowed to GC at this time before proceeding.
  if (!checkIfGCAllowedInCurrentState(reason)) {
    return;
  }

  stats().log("GC slice starting in state %s", StateName(incrementalState));

  AutoStopVerifyingBarriers av(rt, isShutdownGC());
  AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());
  AutoSetZoneSliceThresholds sliceThresholds(this);

  schedulingState.updateHighFrequencyModeForReason(reason);

  if (!isIncrementalGCInProgress() && tunables.balancedHeapLimitsEnabled()) {
    updateAllocationRates();
  }

  bool repeat;
  do {
    IncrementalResult cycleResult =
        gcCycle(nonincrementalByAPI, budget, reason);

    if (reason == JS::GCReason::ABORT_GC) {
      MOZ_ASSERT(!isIncrementalGCInProgress());
      stats().log("GC aborted by request");
      break;
    }

    /*
     * Sometimes when we finish a GC we need to immediately start a new one.
     * This happens in the following cases:
     *  - when we reset the current GC
     *  - when finalizers drop roots during shutdown
     *  - when zones that we thought were dead at the start of GC are
     *    not collected (see the large comment in beginMarkPhase)
     */
    repeat = false;
    if (!isIncrementalGCInProgress()) {
      if (cycleResult == ResetIncremental) {
        repeat = true;
      } else if (rootsRemoved && isShutdownGC()) {
        /* Need to re-schedule all zones for GC. */
        JS::PrepareForFullGC(rt->mainContextFromOwnThread());
        repeat = true;
        reason = JS::GCReason::ROOTS_REMOVED;
      } else if (shouldRepeatForDeadZone(reason)) {
        repeat = true;
        reason = JS::GCReason::COMPARTMENT_REVIVED;
      }
    }
  } while (repeat);

  if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
    maybeDoCycleCollection();
  }

  if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::TRACE_HEAP);
    CheckHeapAfterGC(rt);
  }
  if (hasZealMode(ZealMode::CheckGrayMarking) && !isIncrementalGCInProgress()) {
    MOZ_RELEASE_ASSERT(CheckGrayMarkingState(rt));
  }

  stats().log("GC slice ending in state %s", StateName(incrementalState));

  UnscheduleZones(this);
}
SliceBudget GCRuntime::defaultBudget(JS::GCReason reason, int64_t millis) {
  // millis == 0 means use internal GC scheduling logic to come up with
  // a duration for the slice budget. This may end up still being zero
  // based on preferences.
  if (millis == 0) {
    millis = defaultSliceBudgetMS();
  }

  // If the embedding has registered a callback for creating SliceBudgets,
  // then use it.
  if (createBudgetCallback) {
    return createBudgetCallback(reason, millis);
  }

  // Otherwise, the preference can request an unlimited duration slice.
  if (millis == 0) {
    return SliceBudget::unlimited();
  }

  return SliceBudget(TimeBudget(millis));
}
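// Usage sketch: callers typically pass millis == 0 (e.g. defaultBudget(reason,
// 0) in gcIfRequestedImpl below) to let the scheduler pick the slice length;
// an embedding-supplied createBudgetCallback, if any, gets the final say, and
// a zero preference yields an unlimited (non-incremental) budget.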
void GCRuntime::gc(JS::GCOptions options, JS::GCReason reason) {
  if (!isIncrementalGCInProgress()) {
    setGCOptions(options);
  }

  collect(true, SliceBudget::unlimited(), reason);
}

void GCRuntime::startGC(JS::GCOptions options, JS::GCReason reason,
                        const js::SliceBudget& budget) {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  setGCOptions(options);

  if (!JS::IsIncrementalGCEnabled(rt->mainContextFromOwnThread())) {
    collect(true, SliceBudget::unlimited(), reason);
    return;
  }

  collect(false, budget, reason);
}
void GCRuntime::setGCOptions(JS::GCOptions options) {
  MOZ_ASSERT(maybeGcOptions == Nothing());
  maybeGcOptions = Some(options);
}

void GCRuntime::gcSlice(JS::GCReason reason, const js::SliceBudget& budget) {
  MOZ_ASSERT(isIncrementalGCInProgress());
  collect(false, budget, reason);
}
void GCRuntime::finishGC(JS::GCReason reason) {
  MOZ_ASSERT(isIncrementalGCInProgress());

  // If we're not collecting because we're out of memory then skip the
  // compacting phase if we need to finish an ongoing incremental GC
  // non-incrementally to avoid janking the browser.
  if (!IsOOMReason(initialReason)) {
    if (incrementalState == State::Compact) {
      abortGC();
      return;
    }

    isCompacting = false;
  }

  collect(false, SliceBudget::unlimited(), reason);
}
void GCRuntime::abortGC() {
  MOZ_ASSERT(isIncrementalGCInProgress());
  MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);

  collect(false, SliceBudget::unlimited(), JS::GCReason::ABORT_GC);
}
static bool ZonesSelected(GCRuntime* gc) {
  for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
    if (zone->isGCScheduled()) {
      return true;
    }
  }
  return false;
}
void GCRuntime::startDebugGC(JS::GCOptions options, const SliceBudget& budget) {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  setGCOptions(options);

  if (!ZonesSelected(this)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  }

  collect(false, budget, JS::GCReason::DEBUG_GC);
}

void GCRuntime::debugGCSlice(const SliceBudget& budget) {
  MOZ_ASSERT(isIncrementalGCInProgress());

  if (!ZonesSelected(this)) {
    JS::PrepareForIncrementalGC(rt->mainContextFromOwnThread());
  }

  collect(false, budget, JS::GCReason::DEBUG_GC);
}
/* Schedule a full GC unless a zone will already be collected. */
void js::PrepareForDebugGC(JSRuntime* rt) {
  if (!ZonesSelected(&rt->gc)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  }
}
void GCRuntime::onOutOfMallocMemory() {
  // Stop allocating new chunks.
  allocTask.cancelAndWait();

  // Make sure we release anything queued for release.
  decommitTask.join();
  nursery().joinDecommitTask();

  // Wait for background free of nursery huge slots to finish.

  AutoLockGC lock(this);
  onOutOfMallocMemory(lock);
}

void GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock) {
  // Release any relocated arenas we may be holding on to, without releasing
  // the GC lock.
  releaseHeldRelocatedArenasWithoutUnlocking(lock);

  // Throw away any excess chunks we have lying around.
  freeEmptyChunks(lock);

  // Immediately decommit as many arenas as possible in the hopes that this
  // might let the OS scrape together enough pages to satisfy the failing
  // allocation request.
  if (DecommitEnabled()) {
    decommitFreeArenasWithoutUnlocking(lock);
  }
}
void GCRuntime::minorGC(JS::GCReason reason, gcstats::PhaseKind phase) {
  MOZ_ASSERT(!JS::RuntimeHeapIsBusy());

  MOZ_ASSERT_IF(reason == JS::GCReason::EVICT_NURSERY,
                !rt->mainContextFromOwnThread()->suppressGC);
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return;
  }

  collectNursery(JS::GCOptions::Normal, reason, phase);

  if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
    gcstats::AutoPhase ap(stats(), phase);
    CheckHeapAfterGC(rt);
  }

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    maybeTriggerGCAfterAlloc(zone);
    maybeTriggerGCAfterMalloc(zone);
  }
}
void GCRuntime::collectNursery(JS::GCOptions options, JS::GCReason reason,
                               gcstats::PhaseKind phase) {
  AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());

  uint32_t numAllocs = 0;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    numAllocs += zone->getAndResetTenuredAllocsSinceMinorGC();
  }
  stats().setAllocsSinceMinorGCTenured(numAllocs);

  gcstats::AutoPhase ap(stats(), phase);

  nursery().clearMinorGCRequest();
  nursery().collect(options, reason);
  MOZ_ASSERT(nursery().isEmpty());

  startBackgroundFreeAfterMinorGC();
}
void GCRuntime::startBackgroundFreeAfterMinorGC() {
  MOZ_ASSERT(nursery().isEmpty());

  {
    AutoLockHelperThreadState lock;

    lifoBlocksToFree.ref().transferFrom(&lifoBlocksToFreeAfterMinorGC.ref());

    if (lifoBlocksToFree.ref().isEmpty() &&
        buffersToFreeAfterMinorGC.ref().empty()) {
      return;
    }
  }

  startBackgroundFree();
}
bool GCRuntime::gcIfRequestedImpl(bool eagerOk) {
  // This method returns whether a major GC was performed.

  if (nursery().minorGCRequested()) {
    minorGC(nursery().minorGCTriggerReason());
  }

  JS::GCReason reason = wantMajorGC(eagerOk);
  if (reason == JS::GCReason::NO_REASON) {
    return false;
  }

  SliceBudget budget = defaultBudget(reason, 0);
  if (!isIncrementalGCInProgress()) {
    startGC(JS::GCOptions::Normal, reason, budget);
  } else {
    gcSlice(reason, budget);
  }
  return true;
}
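// Note on the return value: a requested minor GC alone returns false; true is
// only returned when a major GC slice actually ran, either by starting a new
// collection or by advancing an in-progress incremental one.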
void js::gc::FinishGC(JSContext* cx, JS::GCReason reason) {
  // Calling this when GC is suppressed won't have any effect.
  MOZ_ASSERT(!cx->suppressGC);

  // GC callbacks may run arbitrary code, including JS. Check this regardless
  // of whether we GC for this invocation.
  MOZ_ASSERT(cx->isNurseryAllocAllowed());

  if (JS::IsIncrementalGCInProgress(cx)) {
    JS::PrepareForIncrementalGC(cx);
    JS::FinishIncrementalGC(cx, reason);
  }
}
void js::gc::WaitForBackgroundTasks(JSContext* cx) {
  cx->runtime()->gc.waitForBackgroundTasks();
}

void GCRuntime::waitForBackgroundTasks() {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  MOZ_ASSERT(sweepTask.isIdle());
  MOZ_ASSERT(decommitTask.isIdle());
  MOZ_ASSERT(markTask.isIdle());

  nursery().joinDecommitTask();
}
Realm* js::NewRealm(JSContext* cx, JSPrincipals* principals,
                    const JS::RealmOptions& options) {
  JSRuntime* rt = cx->runtime();
  JS_AbortIfWrongThread(cx);

  UniquePtr<Zone> zoneHolder;
  UniquePtr<Compartment> compHolder;

  Compartment* comp = nullptr;
  Zone* zone = nullptr;
  JS::CompartmentSpecifier compSpec =
      options.creationOptions().compartmentSpecifier();
  switch (compSpec) {
    case JS::CompartmentSpecifier::NewCompartmentInSystemZone:
      // systemZone might be null here, in which case we'll make a zone and
      // set this field below.
      zone = rt->gc.systemZone;
      break;
    case JS::CompartmentSpecifier::NewCompartmentInExistingZone:
      zone = options.creationOptions().zone();
      break;
    case JS::CompartmentSpecifier::ExistingCompartment:
      comp = options.creationOptions().compartment();
      zone = comp->zone();
      break;
    case JS::CompartmentSpecifier::NewCompartmentAndZone:
      break;
  }

  if (!zone) {
    Zone::Kind kind = Zone::NormalZone;
    const JSPrincipals* trusted = rt->trustedPrincipals();
    if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone ||
        (principals && principals == trusted)) {
      kind = Zone::SystemZone;
    }

    zoneHolder = MakeUnique<Zone>(cx->runtime(), kind);
    if (!zoneHolder || !zoneHolder->init()) {
      ReportOutOfMemory(cx);
      return nullptr;
    }

    zone = zoneHolder.get();
  }

  bool invisibleToDebugger = options.creationOptions().invisibleToDebugger();

  if (comp) {
    // Debugger visibility is per-compartment, not per-realm, so make sure the
    // new realm's visibility matches its compartment's.
    MOZ_ASSERT(comp->invisibleToDebugger() == invisibleToDebugger);
  } else {
    compHolder = cx->make_unique<JS::Compartment>(zone, invisibleToDebugger);
    if (!compHolder) {
      return nullptr;
    }

    comp = compHolder.get();
  }

  UniquePtr<Realm> realm(cx->new_<Realm>(comp, options));
  if (!realm) {
    return nullptr;
  }
  realm->init(cx, principals);

  // Make sure we don't put system and non-system realms in the same
  // compartment.
  MOZ_RELEASE_ASSERT(realm->isSystem() == IsSystemCompartment(comp));

  AutoLockGC lock(rt);

  // Reserve space in the Vectors before we start mutating them.
  if (!comp->realms().reserve(comp->realms().length() + 1) ||
      (compHolder &&
       !zone->compartments().reserve(zone->compartments().length() + 1)) ||
      (zoneHolder && !rt->gc.zones().reserve(rt->gc.zones().length() + 1))) {
    ReportOutOfMemory(cx);
    return nullptr;
  }

  // After this everything must be infallible.

  comp->realms().infallibleAppend(realm.get());

  if (compHolder) {
    zone->compartments().infallibleAppend(compHolder.release());
  }

  if (zoneHolder) {
    rt->gc.zones().infallibleAppend(zoneHolder.release());
  }

  // Lazily set the runtime's system zone.
  if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone) {
    MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
    MOZ_ASSERT(zone->isSystemZone());
    rt->gc.systemZone = zone;
  }

  return realm.release();
}
void GCRuntime::runDebugGC() {
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return;
  }

  if (hasZealMode(ZealMode::GenerationalGC)) {
    return minorGC(JS::GCReason::DEBUG_GC);
  }

  PrepareForDebugGC(rt);

  auto budget = SliceBudget::unlimited();
  if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
    /*
     * Start with a small slice limit and double it every slice. This
     * ensures that we get multiple slices, and collection runs to
     * completion.
     */
    if (!isIncrementalGCInProgress()) {
      zealSliceBudget = zealFrequency / 2;
    } else {
      zealSliceBudget *= 2;
    }
    budget = SliceBudget(WorkBudget(zealSliceBudget));

    js::gc::State initialState = incrementalState;
    if (!isIncrementalGCInProgress()) {
      setGCOptions(JS::GCOptions::Shrink);
    }
    collect(false, budget, JS::GCReason::DEBUG_GC);

    /* Reset the slice size when we get to the sweep or compact phases. */
    if ((initialState == State::Mark && incrementalState == State::Sweep) ||
        (initialState == State::Sweep && incrementalState == State::Compact)) {
      zealSliceBudget = zealFrequency / 2;
    }
  } else if (hasIncrementalTwoSliceZealMode()) {
    // These modes trigger an incremental GC that happens in two slices; the
    // supplied budget is ignored by incrementalSlice.
    budget = SliceBudget(WorkBudget(1));

    if (!isIncrementalGCInProgress()) {
      setGCOptions(JS::GCOptions::Normal);
    }
    collect(false, budget, JS::GCReason::DEBUG_GC);
  } else if (hasZealMode(ZealMode::Compact)) {
    gc(JS::GCOptions::Shrink, JS::GCReason::DEBUG_GC);
  } else {
    gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
  }
}
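// Rough sketch of the IncrementalMultipleSlices zeal behaviour, assuming a
// zealFrequency of 100: the first slice gets a work budget of 50, then 100,
// 200, and so on, doubling each slice until the collection finishes; the
// budget drops back to zealFrequency / 2 when the collection crosses into the
// sweep or compact phases, so those phases are also split across slices.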
void GCRuntime::setFullCompartmentChecks(bool enabled) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  fullCompartmentChecks = enabled;
}
void GCRuntime::notifyRootsRemoved() {
  rootsRemoved = true;

  /* Schedule a GC to happen "soon". */
  if (hasZealMode(ZealMode::RootsChange)) {
    nextScheduled = 1;
  }
}
bool GCRuntime::selectForMarking(JSObject* object) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  return selectedForMarking.ref().get().append(object);
}

void GCRuntime::clearSelectedForMarking() {
  selectedForMarking.ref().get().clearAndFree();
}

void GCRuntime::setDeterministic(bool enabled) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  deterministicOnly = enabled;
}
AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc() {
  TlsContext.get()->disallowNurseryAlloc();
}

AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc() {
  TlsContext.get()->allowNurseryAlloc();
}
#ifdef JSGC_HASH_TABLE_CHECKS
void GCRuntime::checkHashTablesAfterMovingGC() {
  /*
   * Check that internal hash tables no longer have any pointers to things
   * that have been moved.
   */
  rt->geckoProfiler().checkStringsMapAfterMovingGC();
  if (rt->hasJitRuntime() && rt->jitRuntime()->hasInterpreterEntryMap()) {
    rt->jitRuntime()->getInterpreterEntryMap()->checkScriptsAfterMovingGC();
  }
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    zone->checkUniqueIdTableAfterMovingGC();
    zone->shapeZone().checkTablesAfterMovingGC();
    zone->checkAllCrossCompartmentWrappersAfterMovingGC();
    zone->checkScriptMapsAfterMovingGC();

    // Note: CompactPropMaps never have a table.
    JS::AutoCheckCannotGC nogc;
    for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
         map.next()) {
      if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
        table->checkAfterMovingGC();
      }
    }
    for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
         map.next()) {
      if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
        table->checkAfterMovingGC();
      }
    }
  }

  for (CompartmentsIter c(this); !c.done(); c.next()) {
    for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
      r->dtoaCache.checkCacheAfterMovingGC();
      if (r->debugEnvs()) {
        r->debugEnvs()->checkHashTablesAfterMovingGC();
      }
    }
  }
}
#endif
bool GCRuntime::hasZone(Zone* target) {
  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    if (zone == target) {
      return true;
    }
  }
  return false;
}
void AutoAssertEmptyNursery::checkCondition(JSContext* cx) {
  MOZ_ASSERT(cx->nursery().isEmpty());
}

AutoEmptyNursery::AutoEmptyNursery(JSContext* cx) : AutoAssertEmptyNursery() {
  MOZ_ASSERT(!cx->suppressGC);
  cx->runtime()->gc.stats().suspendPhases();
  cx->runtime()->gc.evictNursery(JS::GCReason::EVICT_NURSERY);
  cx->runtime()->gc.stats().resumePhases();
}
// We don't want jsfriendapi.h to depend on GenericPrinter,
// so these functions are declared directly in the cpp.
extern JS_PUBLIC_API void DumpString(JSString* str, js::GenericPrinter& out);

void js::gc::Cell::dump(js::GenericPrinter& out) const {
  switch (getTraceKind()) {
    case JS::TraceKind::Object:
      reinterpret_cast<const JSObject*>(this)->dump(out);
      break;

    case JS::TraceKind::String:
      js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), out);
      break;

    case JS::TraceKind::Shape:
      reinterpret_cast<const Shape*>(this)->dump(out);
      break;

    default:
      out.printf("%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()),
                 (void*)this);
      break;
  }
}

// For use in a debugger.
void js::gc::Cell::dump() const {
  js::Fprinter out(stderr);
  dump(out);
}
JS_PUBLIC_API bool js::gc::detail::CanCheckGrayBits(const TenuredCell* cell) {
  // We do not check the gray marking state of cells in the following cases:
  //
  // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
  //
  // 2) When we are in an incremental GC and examine a cell that is in a zone
  // that is not being collected. Gray targets of CCWs that are marked black
  // by a barrier will eventually be marked black in a later GC slice.
  //
  // 3) When mark bits are being cleared concurrently by a helper thread.

  auto runtime = cell->runtimeFromAnyThread();
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime));

  if (!runtime->gc.areGrayBitsValid()) {
    return false;
  }

  JS::Zone* zone = cell->zone();

  if (runtime->gc.isIncrementalGCInProgress() && !zone->wasGCStarted()) {
    return false;
  }

  return !zone->isGCPreparing();
}
JS_PUBLIC_API bool js::gc::detail::CellIsMarkedGrayIfKnown(
    const TenuredCell* cell) {
  MOZ_ASSERT_IF(cell->isPermanentAndMayBeShared(), cell->isMarkedBlack());
  if (!cell->isMarkedGray()) {
    return false;
  }

  return CanCheckGrayBits(cell);
}
JS_PUBLIC_API void js::gc::detail::AssertCellIsNotGray(const Cell* cell) {
  if (!cell->isTenured()) {
    return;
  }

  // Check that a cell is not marked gray.
  //
  // Since this is a debug-only check, take account of the eventual mark state
  // of cells that will be marked black by the next GC slice in an incremental
  // GC. For performance reasons we don't do this in CellIsMarkedGrayIfKnown.

  auto tc = &cell->asTenured();
  if (!tc->isMarkedGray() || !CanCheckGrayBits(tc)) {
    return;
  }

  // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
  // called during GC and while iterating the heap for memory reporting.
  MOZ_ASSERT(!JS::RuntimeHeapIsCycleCollecting());

  if (tc->zone()->isGCMarkingBlackAndGray()) {
    // We are doing gray marking in the cell's zone. Even if the cell is
    // currently marked gray it may eventually be marked black. Delay checking
    // non-black cells until we finish gray marking.

    if (!tc->isMarkedBlack()) {
      JSRuntime* rt = tc->zone()->runtimeFromMainThread();
      AutoEnterOOMUnsafeRegion oomUnsafe;
      if (!rt->gc.cellsToAssertNotGray.ref().append(cell)) {
        oomUnsafe.crash("Can't append to delayed gray checks list");
      }
    }
    return;
  }

  MOZ_ASSERT(!tc->isMarkedGray());
}
extern JS_PUBLIC_API bool js::gc::detail::ObjectIsMarkedBlack(
    const JSObject* obj) {
  return obj->isMarkedBlack();
}

js::gc::ClearEdgesTracer::ClearEdgesTracer(JSRuntime* rt)
    : GenericTracerImpl(rt, JS::TracerKind::ClearEdges,
                        JS::WeakMapTraceAction::TraceKeysAndValues) {}
template <typename T>
void js::gc::ClearEdgesTracer::onEdge(T** thingp, const char* name) {
  // We don't handle removing pointers to nursery edges from the store buffer
  // with this tracer. Check that this doesn't happen.
  T* thing = *thingp;
  MOZ_ASSERT(!IsInsideNursery(thing));

  // Fire the pre-barrier since we're removing an edge from the graph.
  InternalBarrierMethods<T*>::preBarrier(thing);

  // Clear the edge.
  *thingp = nullptr;
}
void GCRuntime::setPerformanceHint(PerformanceHint hint) {
  if (hint == PerformanceHint::InPageLoad) {
    inPageLoadCount++;
  } else {
    MOZ_ASSERT(inPageLoadCount);