/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 * vim: set ts=8 sts=2 et sw=2 tw=80:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/*
 * [SMDOC] Garbage Collector
 *
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC
 * to become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
 *
 * The last condition is an engine-internal mechanism to ensure that incremental
 * collection is not carried out without the correct barriers being implemented.
 * For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - Prepare    - unmarks GC things, discards JIT code and other setup
 *  - MarkRoots  - marks the stack and other roots
 *  - Mark       - incrementally marks reachable things
 *  - Sweep      - sweeps zones in groups and continues marking unswept zones
 *  - Finalize   - performs background finalization, concurrent with mutator
 *  - Compact    - incrementally compacts by zone
 *  - Decommit   - performs background decommit and chunk removal
 *
 * Roots are marked in the first MarkRoots slice; this is the start of the GC
 * proper. The following states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   Prepare:    Starts background task to unmark GC things
 *
 *          ... JS code runs, background unmarking finishes ...
 *
 * Slice 2:   MarkRoots:  Roots are pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 3:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more zone
 *                        sweep groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root memory
 * location written to by the mutator while the collection is in progress, using
 * write barriers. This is described in gc/Barrier.h.
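 *
 * As a minimal illustrative sketch of a snapshot-at-the-beginning pre-write
 * barrier (the real barriers live in gc/Barrier.h and are more involved; the
 * names used here are hypothetical):
 *
 *   void PreWriteBarrier(JSObject** slot, JSObject* newValue) {
 *     if (IsIncrementalBarrierNeeded() && *slot) {
 *       MarkUnbarriered(*slot);  // mark the old value before overwriting it
 *     }
 *     *slot = newValue;
 *   }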
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code was allowed to run after the start of sweeping, it could observe
 * the state of the cache and create a new reference to an object that was just
 * about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine we
 * sweep B first and then return to the mutator. It's possible that the mutator
 * could cause |a| to become alive through a read barrier (perhaps it was a
 * shape that was accessed via a shape table). Then we would need to mark |b|,
 * which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
 *
 * GC things without finalizers, and things with finalizers that are able to run
 * in the background, are swept on the background thread. This accounts for most
 * of the sweeping work.
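 *
 * To illustrate the constraint (a sketch, not actual code): an edge from zone
 * A to zone B means B may only be swept in the same sweep group as A or a
 * later one; a cycle forces both zones into one strongly connected component
 * and hence one group:
 *
 *   A -> B           =>  valid groupings: {A} then {B}, or {A, B}
 *   A -> B, B -> A   =>  only valid grouping: {A, B}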
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue non-incrementally until we have swept the current sweep
 * group. Following a reset, a new collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things. They are contained in
 * a special zone and things in other zones may have pointers to them that are
 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
 * with the atoms it might be keeping alive, and atoms are only collected if
 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
 * this bitmap is managed.
 */
193 #include "gc/GC-inl.h"
195 #include "mozilla/Range.h"
196 #include "mozilla/ScopeExit.h"
197 #include "mozilla/TextUtils.h"
198 #include "mozilla/TimeStamp.h"
201 #include <initializer_list>
207 #include "jsapi.h" // JS_AbortIfWrongThread
210 #include "debugger/DebugAPI.h"
211 #include "gc/ClearEdgesTracer.h"
212 #include "gc/GCContext.h"
213 #include "gc/GCInternals.h"
214 #include "gc/GCLock.h"
215 #include "gc/GCProbes.h"
216 #include "gc/Memory.h"
217 #include "gc/ParallelMarking.h"
218 #include "gc/ParallelWork.h"
219 #include "gc/WeakMap.h"
220 #include "jit/ExecutableAllocator.h"
221 #include "jit/JitCode.h"
222 #include "jit/JitRuntime.h"
223 #include "jit/ProcessExecutableMemory.h"
224 #include "js/HeapAPI.h" // JS::GCCellPtr
225 #include "js/Printer.h"
226 #include "js/SliceBudget.h"
227 #include "util/DifferentialTesting.h"
228 #include "vm/BigIntType.h"
229 #include "vm/EnvironmentObject.h"
230 #include "vm/GetterSetter.h"
231 #include "vm/HelperThreadState.h"
232 #include "vm/JitActivation.h"
233 #include "vm/JSObject.h"
234 #include "vm/JSScript.h"
235 #include "vm/PropMap.h"
236 #include "vm/Realm.h"
237 #include "vm/Shape.h"
238 #include "vm/StringType.h"
239 #include "vm/SymbolType.h"
242 #include "gc/Heap-inl.h"
243 #include "gc/Nursery-inl.h"
244 #include "gc/ObjectKind-inl.h"
245 #include "gc/PrivateIterators-inl.h"
246 #include "vm/GeckoProfiler-inl.h"
247 #include "vm/JSContext-inl.h"
248 #include "vm/Realm-inl.h"
249 #include "vm/Stack-inl.h"

using namespace js;
using namespace js::gc;

using mozilla::MakeScopeExit;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::TimeDuration;
using mozilla::TimeStamp;

using JS::AutoGCRooter;

const AllocKind gc::slotsToThingKind[] = {
    /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
    /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
    /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
    /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
    /* 16 */ AllocKind::OBJECT16};

static_assert(std::size(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");

// A table converting an object size in "slots" (increments of
// sizeof(js::Value)) to the total number of bytes in the corresponding
// AllocKind. See gc::slotsToThingKind. This primarily allows wasm jit code to
// remain compliant with the AllocKind system.
//
// To use this table, subtract sizeof(NativeObject) from your desired allocation
// size, divide by sizeof(js::Value) to get the number of "slots", and then
// index into this table. See gc::GetGCObjectKindForBytes.
const constexpr uint32_t gc::slotsToAllocKindBytes[] = {
    // These entries correspond exactly to gc::slotsToThingKind. The numeric
    // comments therefore indicate the number of slots that the "bytes" would
    // accommodate.
    /*  0 */ sizeof(JSObject_Slots0),  sizeof(JSObject_Slots2),  sizeof(JSObject_Slots2),  sizeof(JSObject_Slots4),
    /*  4 */ sizeof(JSObject_Slots4),  sizeof(JSObject_Slots8),  sizeof(JSObject_Slots8),  sizeof(JSObject_Slots8),
    /*  8 */ sizeof(JSObject_Slots8),  sizeof(JSObject_Slots12), sizeof(JSObject_Slots12), sizeof(JSObject_Slots12),
    /* 12 */ sizeof(JSObject_Slots12), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16),
    /* 16 */ sizeof(JSObject_Slots16)};

static_assert(std::size(slotsToAllocKindBytes) == SLOTS_TO_THING_KIND_LIMIT);
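
// Illustrative use of the table above, following the recipe in the comment (a
// sketch; see gc::GetGCObjectKindForBytes for the real lookup):
//
//   size_t nbytes = ...;  // desired size, nbytes >= sizeof(NativeObject)
//   size_t slots = (nbytes - sizeof(NativeObject)) / sizeof(js::Value);
//   MOZ_ASSERT(slots < SLOTS_TO_THING_KIND_LIMIT);
//   uint32_t totalBytes = gc::slotsToAllocKindBytes[slots];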

MOZ_THREAD_LOCAL(JS::GCContext*) js::TlsGCContext;

JS::GCContext::GCContext(JSRuntime* runtime) : runtime_(runtime) {}

JS::GCContext::~GCContext() {
  MOZ_ASSERT(!hasJitCodeToPoison());
  MOZ_ASSERT(!isCollecting());
  MOZ_ASSERT(gcUse() == GCUse::None);
  MOZ_ASSERT(!gcSweepZone());
  MOZ_ASSERT(!isTouchingGrayThings());
}

void JS::GCContext::poisonJitCode() {
  if (hasJitCodeToPoison()) {
    jit::ExecutableAllocator::poisonCode(runtime(), jitPoisonRanges);
    jitPoisonRanges.clearAndFree();
  }
}

void GCRuntime::verifyAllChunks() {
  AutoLockGC lock(this);
  fullChunks(lock).verifyChunks();
  availableChunks(lock).verifyChunks();
  emptyChunks(lock).verifyChunks();
}

void GCRuntime::setMinEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
  minEmptyChunkCount_ = value;
  if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
    maxEmptyChunkCount_ = minEmptyChunkCount_;
  }
  MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}

void GCRuntime::setMaxEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
  maxEmptyChunkCount_ = value;
  if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
    minEmptyChunkCount_ = maxEmptyChunkCount_;
  }
  MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}

inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
  return emptyChunks(lock).count() > minEmptyChunkCount(lock);
}

ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
  MOZ_ASSERT(emptyChunks(lock).verify());
  MOZ_ASSERT(minEmptyChunkCount(lock) <= maxEmptyChunkCount(lock));

  ChunkPool expired;
  while (tooManyEmptyChunks(lock)) {
    TenuredChunk* chunk = emptyChunks(lock).pop();
    prepareToFreeChunk(chunk->info);
    expired.push(chunk);
  }

  MOZ_ASSERT(expired.verify());
  MOZ_ASSERT(emptyChunks(lock).verify());
  MOZ_ASSERT(emptyChunks(lock).count() <= maxEmptyChunkCount(lock));
  MOZ_ASSERT(emptyChunks(lock).count() <= minEmptyChunkCount(lock));
  return expired;
}

static void FreeChunkPool(ChunkPool& pool) {
  for (ChunkPool::Iter iter(pool); !iter.done();) {
    TenuredChunk* chunk = iter.get();
    iter.next();
    pool.remove(chunk);
    MOZ_ASSERT(chunk->unused());
    UnmapPages(static_cast<void*>(chunk), ChunkSize);
  }
  MOZ_ASSERT(pool.count() == 0);
}

void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
  FreeChunkPool(emptyChunks(lock));
}

inline void GCRuntime::prepareToFreeChunk(TenuredChunkInfo& info) {
  MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
  numArenasFreeCommitted -= info.numArenasFreeCommitted;
  stats().count(gcstats::COUNT_DESTROY_CHUNK);
  /*
   * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
   * frees the chunk.
   */
  info.numArenasFreeCommitted = 0;
}

void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
  MOZ_ASSERT(arena->allocated());
  MOZ_ASSERT(!arena->onDelayedMarkingList());
  MOZ_ASSERT(TlsGCContext.get()->isFinalizing());

  arena->zone->gcHeapSize.removeGCArena(heapSize);
  arena->release(lock);
  arena->chunk()->releaseArena(this, arena, lock);
}

GCRuntime::GCRuntime(JSRuntime* rt)
    : mainThreadContext(rt),
      heapState_(JS::HeapState::Idle),
      fullGCRequested(false),
      helperThreadRatio(TuningDefaults::HelperThreadRatio),
      maxHelperThreads(TuningDefaults::MaxHelperThreads),
      helperThreadCount(1),
      maxMarkingThreads(TuningDefaults::MaxMarkingThreads),
      markingThreadCount(1),
      createBudgetCallback(nullptr),
      minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
      maxEmptyChunkCount_(TuningDefaults::MaxEmptyChunkCount),
      nextCellUniqueId_(LargestTaggedNullCellPointer +
                        1),  // Ensure disjoint from null tagged pointers.
      numArenasFreeCommitted(0),
      verifyPreData(nullptr),
      lastGCStartTime_(TimeStamp::Now()),
      lastGCEndTime_(TimeStamp::Now()),
      incrementalGCEnabled(TuningDefaults::IncrementalGCEnabled),
      perZoneGCEnabled(TuningDefaults::PerZoneGCEnabled),
      numActiveZoneIters(0),
      cleanUpEverything(false),
      majorGCTriggerReason(JS::GCReason::NO_REASON),
      incrementalState(gc::State::NotActive),
      initialState(gc::State::NotActive),
      lastMarkSlice(false),
      markOnBackgroundThreadDuringSweeping(false),
      useBackgroundThreads(false),
      hadShutdownGC(false),
      requestSliceAfterBackgroundTask(false),
      lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
      lifoBlocksToFreeAfterFullMinorGC(
          (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
      lifoBlocksToFreeAfterNextMinorGC(
          (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
      sweepGroups(nullptr),
      currentSweepGroup(nullptr),
      abortSweepAfterCurrentGroup(false),
      sweepMarkResult(IncrementalProgress::NotFinished),
      startedCompacting(false),
      relocatedArenasToRelease(nullptr),
      markingValidator(nullptr),
      defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
      incrementalAllowed(true),
      compactingEnabled(TuningDefaults::CompactingEnabled),
      parallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled),
      deterministicOnly(false),
      selectedForMarking(rt),
      fullCompartmentChecks(false),
      alwaysPreserveCode(false),
      lowMemoryState(false),
      lock(mutexid::GCLock),
      storeBufferLock(mutexid::StoreBuffer),
      delayedMarkingLock(mutexid::GCDelayedMarkingLock),
      allocTask(this, emptyChunks_.ref()),
      lastAllocRateUpdateTime(TimeStamp::Now()) {}

using CharRange = mozilla::Range<const char>;
using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;

static bool SplitStringBy(const CharRange& text, char delimiter,
                          CharRangeVector* result) {
  auto start = text.begin();
  for (auto ptr = start; ptr != text.end(); ptr++) {
    if (*ptr == delimiter) {
      if (!result->emplaceBack(start, ptr)) {
        return false;
      }
      start = ptr + 1;
    }
  }
  return result->emplaceBack(start, text.end());
}
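
// For example (a sketch): splitting "10,all" on ',' yields two ranges:
//
//   CharRangeVector parts;
//   auto text = CharRange("10,all", strlen("10,all"));
//   if (SplitStringBy(text, ',', &parts)) {
//     MOZ_ASSERT(parts.length() == 2);  // "10" and "all"
//   }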

static bool ParseTimeDuration(const CharRange& text,
                              TimeDuration* durationOut) {
  const char* str = text.begin().get();
  char* end;
  long millis = strtol(str, &end, 10);
  *durationOut = TimeDuration::FromMilliseconds(double(millis));
  return str != end && end == text.end().get();
}

static void PrintProfileHelpAndExit(const char* envName, const char* helpText) {
  fprintf(stderr, "%s=N[,(main|all)]\n", envName);
  fprintf(stderr, "%s", helpText);
  exit(0);
}

void js::gc::ReadProfileEnv(const char* envName, const char* helpText,
                            bool* enableOut, bool* workersOut,
                            TimeDuration* thresholdOut) {
  *enableOut = false;
  *workersOut = false;
  *thresholdOut = TimeDuration::Zero();

  const char* env = getenv(envName);
  if (!env) {
    return;
  }

  if (strcmp(env, "help") == 0) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  *enableOut = true;

  CharRangeVector parts;
  auto text = CharRange(env, strlen(env));
  if (!SplitStringBy(text, ',', &parts)) {
    MOZ_CRASH("OOM parsing environment variable");
  }

  if (parts.length() == 0 || parts.length() > 2) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  if (!ParseTimeDuration(parts[0], thresholdOut)) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  if (parts.length() == 2) {
    const char* threads = parts[1].begin().get();
    if (strcmp(threads, "all") == 0) {
      *workersOut = true;
    } else if (strcmp(threads, "main") != 0) {
      PrintProfileHelpAndExit(envName, helpText);
    }
  }
}
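
// For example (illustrative; JS_GC_PROFILE is one caller-supplied envName): a
// setting like JS_GC_PROFILE=10,all enables profiling of collections longer
// than 10 ms on all runtimes, while JS_GC_PROFILE=10 covers only the main
// runtime.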

bool js::gc::ShouldPrintProfile(JSRuntime* runtime, bool enable,
                                bool profileWorkers, TimeDuration threshold,
                                TimeDuration duration) {
  return enable && (runtime->isMainRuntime() || profileWorkers) &&
         duration >= threshold;
}

void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
                            uint32_t* scheduled) {
  *zealBits = zealModeBits;
  *frequency = zealFrequency;
  *scheduled = nextScheduled;
}

const char gc::ZealModeHelpText[] =
    " Specifies how zealous the garbage collector should be. Some of these "
    "modes can\n"
    " be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
    "will activate\n"
    " both modes 2 and 4. Modes can be specified by name or number.\n"
    " \n"
    " Values:\n"
    " 0: (None) Normal amount of collection (resets all modes)\n"
    " 1: (RootsChange) Collect when roots are added or removed\n"
    " 2: (Alloc) Collect when every N allocations (default: 100)\n"
    " 4: (VerifierPre) Verify pre write barriers between instructions\n"
    " 6: (YieldBeforeRootMarking) Incremental GC in two slices that yields "
    "before root marking\n"
    " 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
    " 8: (YieldBeforeMarking) Incremental GC in two slices that yields "
    "between\n"
    " the root marking and marking phases\n"
    " 9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
    "between\n"
    " the marking and sweeping phases\n"
    " 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
    " 11: (IncrementalMarkingValidator) Verify incremental marking\n"
    " 12: (ElementsBarrier) Use the individual element post-write barrier\n"
    " regardless of elements size\n"
    " 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
    " 14: (Compact) Perform a shrinking collection every N allocations\n"
    " 15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
    "every GC\n"
    " 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
    "yields\n"
    " before sweeping the atoms table\n"
    " 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
    " 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
    "yields\n"
    " before sweeping weak caches\n"
    " 21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
    "yields\n"
    " before sweeping foreground finalized objects\n"
    " 22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
    "yields\n"
    " before sweeping non-object GC things\n"
    " 23: (YieldBeforeSweepingPropMapTrees) Incremental GC in two slices "
    "that yields\n"
    " before sweeping shape trees\n"
    " 24: (CheckWeakMapMarking) Check weak map marking invariants after "
    "every GC\n"
    " 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
    " during gray marking\n";

// The set of zeal modes that control incremental slices. These modes are
// mutually exclusive.
static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
    ZealMode::YieldBeforeRootMarking,
    ZealMode::YieldBeforeMarking,
    ZealMode::YieldBeforeSweeping,
    ZealMode::IncrementalMultipleSlices,
    ZealMode::YieldBeforeSweepingAtoms,
    ZealMode::YieldBeforeSweepingCaches,
    ZealMode::YieldBeforeSweepingObjects,
    ZealMode::YieldBeforeSweepingNonObjects,
    ZealMode::YieldBeforeSweepingPropMapTrees};

void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  if (zeal == 0) {
    if (hasZealMode(ZealMode::GenerationalGC)) {
      evictNursery(JS::GCReason::DEBUG_GC);
      nursery().leaveZealMode();
    }

    if (isIncrementalGCInProgress()) {
      finishGC(JS::GCReason::DEBUG_GC);
    }
  }

  ZealMode zealMode = ZealMode(zeal);
  if (zealMode == ZealMode::GenerationalGC) {
    evictNursery(JS::GCReason::EVICT_NURSERY);
    nursery().enterZealMode();
  }

  // Some modes are mutually exclusive. If we're setting one of those, we
  // first reset all of them.
  if (IncrementalSliceZealModes.contains(zealMode)) {
    for (auto mode : IncrementalSliceZealModes) {
      clearZealMode(mode);
    }
  }

  bool schedule = zealMode >= ZealMode::Alloc;
  zealModeBits |= 1 << unsigned(zeal);
  zealFrequency = frequency;
  nextScheduled = schedule ? frequency : 0;
}

void GCRuntime::unsetZeal(uint8_t zeal) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
  ZealMode zealMode = ZealMode(zeal);

  if (!hasZealMode(zealMode)) {
    return;
  }

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  if (zealMode == ZealMode::GenerationalGC) {
    evictNursery(JS::GCReason::EVICT_NURSERY);
    nursery().leaveZealMode();
  }

  clearZealMode(zealMode);

  if (zealModeBits == 0) {
    if (isIncrementalGCInProgress()) {
      finishGC(JS::GCReason::DEBUG_GC);
    }

    zealFrequency = 0;
    nextScheduled = 0;
  }
}

void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }

static bool ParseZealModeName(const CharRange& text, uint32_t* modeOut) {
  struct ModeInfo {
    const char* name;
    size_t length;
    uint32_t value;
  };

  static const ModeInfo zealModes[] = {{"None", strlen("None"), 0},
#  define ZEAL_MODE(name, value) {#name, strlen(#name), value},
                                       JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
#  undef ZEAL_MODE
  };

  for (auto mode : zealModes) {
    if (text.length() == mode.length &&
        memcmp(text.begin().get(), mode.name, mode.length) == 0) {
      *modeOut = mode.value;
      return true;
    }
  }

  return false;
}

static bool ParseZealModeNumericParam(const CharRange& text,
                                      uint32_t* paramOut) {
  if (text.length() == 0) {
    return false;
  }

  for (auto c : text) {
    if (!mozilla::IsAsciiDigit(c)) {
      return false;
    }
  }

  *paramOut = atoi(text.begin().get());
  return true;
}

static bool PrintZealHelpAndFail() {
  fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
  fputs(ZealModeHelpText, stderr);
  return false;
}

bool GCRuntime::parseAndSetZeal(const char* str) {
  // Set the zeal mode from a string consisting of one or more mode specifiers
  // separated by ';', optionally followed by a ',' and the trigger frequency.
  // The mode specifiers can be a mode name or its number.

  auto text = CharRange(str, strlen(str));

  CharRangeVector parts;
  if (!SplitStringBy(text, ',', &parts)) {
    return false;
  }

  if (parts.length() == 0 || parts.length() > 2) {
    return PrintZealHelpAndFail();
  }

  uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
  if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
    return PrintZealHelpAndFail();
  }

  CharRangeVector modes;
  if (!SplitStringBy(parts[0], ';', &modes)) {
    return false;
  }

  for (const auto& descr : modes) {
    uint32_t mode;
    if (!ParseZealModeName(descr, &mode) &&
        !(ParseZealModeNumericParam(descr, &mode) &&
          mode <= unsigned(ZealMode::Limit))) {
      return PrintZealHelpAndFail();
    }

    setZeal(mode, frequency);
  }

  return true;
}
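
// For example, a plausible setting (mode names and numbers are equivalent):
//
//   JS_GC_ZEAL=IncrementalMultipleSlices;CheckHeapAfterGC,100
//   JS_GC_ZEAL=10;15,100
//
// Both enable zeal modes 10 and 15 with a trigger frequency of 100.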

const char* js::gc::AllocKindName(AllocKind kind) {
  static const char* const names[] = {
#  define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) #allocKind,
      FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
#  undef EXPAND_THING_NAME
  };
  static_assert(std::size(names) == AllocKindCount,
                "names array should have an entry for every AllocKind");

  size_t i = size_t(kind);
  MOZ_ASSERT(i < std::size(names));
  return names[i];
}

void js::gc::DumpArenaInfo() {
  fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);

  fprintf(stderr, "GC thing kinds:\n");
  fprintf(stderr, "%25s %8s %8s %8s\n",
          "AllocKind:", "Size:", "Count:", "Padding:");
  for (auto kind : AllAllocKinds()) {
    fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
            Arena::thingSize(kind), Arena::thingsPerArena(kind),
            Arena::firstThingOffset(kind) - ArenaHeaderSize);
  }
}

bool GCRuntime::init(uint32_t maxbytes) {
  MOZ_ASSERT(!wasInitialized());

  MOZ_ASSERT(SystemPageSize());
  Arena::checkLookupTables();

  if (!TlsGCContext.init()) {
    return false;
  }
  TlsGCContext.set(&mainThreadContext.ref());

  updateHelperThreadCount();

  const char* size = getenv("JSGC_MARK_STACK_LIMIT");
  if (size) {
    maybeMarkStackLimit = atoi(size);
  }

  if (!updateMarkersVector()) {
    return false;
  }

  {
    AutoLockGCBgAlloc lock(this);

    MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes));

    if (!nursery().init(lock)) {
      return false;
    }
  }

  const char* zealSpec = getenv("JS_GC_ZEAL");
  if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
    return false;
  }

  for (auto& marker : markers) {
    if (!marker->init()) {
      return false;
    }
  }

  if (!initSweepActions()) {
    return false;
  }

  UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
  if (!zone || !zone->init()) {
    return false;
  }

  // The atoms zone is stored as the first element of the zones vector.
  MOZ_ASSERT(zone->isAtomsZone());
  MOZ_ASSERT(zones().empty());
  MOZ_ALWAYS_TRUE(zones().reserve(1));  // ZonesVector has inline capacity 4.
  zones().infallibleAppend(zone.release());

  gcprobes::Init(this);

  return true;
}

void GCRuntime::finish() {
  MOZ_ASSERT(inPageLoadCount == 0);
  MOZ_ASSERT(!sharedAtomsZone_);

  // Wait for nursery background free to end and disable it to release memory.
  if (nursery().isEnabled()) {
    nursery().disable();
  }

  // Wait until the background finalization and allocation stops and the
  // helper thread shuts down before we forcefully release any remaining GC
  // memory.
  allocTask.cancelAndWait();
  decommitTask.cancelAndWait();

  {
    MOZ_ASSERT(dispatchedParallelTasks == 0);
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(queuedParallelTasks.ref().isEmpty(lock));
  }

  releaseMarkingThreads();

  // Free memory associated with GC verification.
  finishVerifier();

  // Delete all remaining zones.
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    AutoSetThreadIsSweeping threadIsSweeping(rt->gcContext(), zone);
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
        js_delete(realm.get());
      }
      comp->realms().clear();
      js_delete(comp.get());
    }
    zone->compartments().clear();
    js_delete(zone.get());
  }

  zones().clear();

  FreeChunkPool(fullChunks_.ref());
  FreeChunkPool(availableChunks_.ref());
  FreeChunkPool(emptyChunks_.ref());

  TlsGCContext.set(nullptr);

  gcprobes::Finish(this);

  nursery().printTotalProfileTimes();
  stats().printTotalProfileTimes();
}

bool GCRuntime::freezeSharedAtomsZone() {
  // This is called just after permanent atoms and well-known symbols have been
  // created. At this point all existing atoms and symbols are permanent.
  //
  // This method makes the current atoms zone into a shared atoms zone and
  // removes it from the zones list. Everything in it is marked black. A new
  // empty atoms zone is created, where all atoms local to this runtime will
  // live.
  //
  // The shared atoms zone will not be collected until shutdown when it is
  // returned to the zone list by restoreSharedAtomsZone().

  MOZ_ASSERT(rt->isMainRuntime());
  MOZ_ASSERT(!sharedAtomsZone_);
  MOZ_ASSERT(zones().length() == 1);
  MOZ_ASSERT(atomsZone());
  MOZ_ASSERT(!atomsZone()->wasGCStarted());
  MOZ_ASSERT(!atomsZone()->needsIncrementalBarrier());

  AutoAssertEmptyNursery nurseryIsEmpty(rt->mainContextFromOwnThread());

  atomsZone()->arenas.clearFreeLists();

  for (auto kind : AllAllocKinds()) {
    for (auto thing =
             atomsZone()->cellIterUnsafe<TenuredCell>(kind, nurseryIsEmpty);
         !thing.done(); thing.next()) {
      TenuredCell* cell = thing.getCell();
      MOZ_ASSERT((cell->is<JSString>() &&
                  cell->as<JSString>()->isPermanentAndMayBeShared()) ||
                 (cell->is<JS::Symbol>() &&
                  cell->as<JS::Symbol>()->isPermanentAndMayBeShared()));
      cell->markBlack();
    }
  }

  sharedAtomsZone_ = atomsZone();
  zones().clear();

  UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
  if (!zone || !zone->init()) {
    return false;
  }

  MOZ_ASSERT(zone->isAtomsZone());
  zones().infallibleAppend(zone.release());

  return true;
}

void GCRuntime::restoreSharedAtomsZone() {
  // Return the shared atoms zone to the zone list. This allows the contents of
  // the shared atoms zone to be collected when the parent runtime is shut
  // down.

  if (!sharedAtomsZone_) {
    return;
  }

  MOZ_ASSERT(rt->isMainRuntime());
  MOZ_ASSERT(rt->childRuntimeCount == 0);

  AutoEnterOOMUnsafeRegion oomUnsafe;
  if (!zones().append(sharedAtomsZone_)) {
    oomUnsafe.crash("restoreSharedAtomsZone");
  }

  sharedAtomsZone_ = nullptr;
}

bool GCRuntime::setParameter(JSContext* cx, JSGCParamKey key, uint32_t value) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  AutoStopVerifyingBarriers pauseVerification(rt, false);
  FinishGC(cx);
  waitBackgroundSweepEnd();

  AutoLockGC lock(this);
  return setParameter(key, value, lock);
}

static bool IsGCThreadParameter(JSGCParamKey key) {
  return key == JSGC_HELPER_THREAD_RATIO || key == JSGC_MAX_HELPER_THREADS ||
         key == JSGC_MAX_MARKING_THREADS;
}

bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
                             AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = value;
      break;
    case JSGC_INCREMENTAL_GC_ENABLED:
      setIncrementalGCEnabled(value != 0);
      break;
    case JSGC_PER_ZONE_GC_ENABLED:
      perZoneGCEnabled = value != 0;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = value != 0;
      break;
    case JSGC_PARALLEL_MARKING_ENABLED:
      setParallelMarkingEnabled(value != 0);
      break;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      for (auto& marker : markers) {
        marker->incrementalWeakMapMarkingEnabled = value != 0;
      }
      break;
    case JSGC_SEMISPACE_NURSERY_ENABLED: {
      AutoUnlockGC unlock(lock);
      nursery().setSemispaceEnabled(value);
      break;
    }
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      setMinEmptyChunkCount(value, lock);
      break;
    case JSGC_MAX_EMPTY_CHUNK_COUNT:
      setMaxEmptyChunkCount(value, lock);
      break;
    default:
      if (IsGCThreadParameter(key)) {
        return setThreadParameter(key, value, lock);
      }

      if (!tunables.setParameter(key, value)) {
        return false;
      }
      updateAllGCStartThresholds();
  }

  return true;
}

bool GCRuntime::setThreadParameter(JSGCParamKey key, uint32_t value,
                                   AutoLockGC& lock) {
  if (rt->parentRuntime) {
    // Don't allow these to be set for worker runtimes.
    return false;
  }

  switch (key) {
    case JSGC_HELPER_THREAD_RATIO:
      if (value == 0) {
        return false;
      }
      helperThreadRatio = double(value) / 100.0;
      break;
    case JSGC_MAX_HELPER_THREADS:
      if (value == 0) {
        return false;
      }
      maxHelperThreads = value;
      break;
    case JSGC_MAX_MARKING_THREADS:
      maxMarkingThreads = std::min(size_t(value), MaxParallelWorkers);
      break;
    default:
      MOZ_CRASH("Unexpected parameter key");
  }

  updateHelperThreadCount();
  initOrDisableParallelMarking();

  return true;
}

void GCRuntime::resetParameter(JSContext* cx, JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  AutoStopVerifyingBarriers pauseVerification(rt, false);
  FinishGC(cx);
  waitBackgroundSweepEnd();

  AutoLockGC lock(this);
  resetParameter(key, lock);
}

void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
      break;
    case JSGC_INCREMENTAL_GC_ENABLED:
      setIncrementalGCEnabled(TuningDefaults::IncrementalGCEnabled);
      break;
    case JSGC_PER_ZONE_GC_ENABLED:
      perZoneGCEnabled = TuningDefaults::PerZoneGCEnabled;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = TuningDefaults::CompactingEnabled;
      break;
    case JSGC_PARALLEL_MARKING_ENABLED:
      setParallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled);
      break;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      for (auto& marker : markers) {
        marker->incrementalWeakMapMarkingEnabled =
            TuningDefaults::IncrementalWeakMapMarkingEnabled;
      }
      break;
    case JSGC_SEMISPACE_NURSERY_ENABLED: {
      AutoUnlockGC unlock(lock);
      nursery().setSemispaceEnabled(TuningDefaults::SemispaceNurseryEnabled);
      break;
    }
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount, lock);
      break;
    case JSGC_MAX_EMPTY_CHUNK_COUNT:
      setMaxEmptyChunkCount(TuningDefaults::MaxEmptyChunkCount, lock);
      break;
    default:
      if (IsGCThreadParameter(key)) {
        resetThreadParameter(key, lock);
        return;
      }

      tunables.resetParameter(key);
      updateAllGCStartThresholds();
  }
}

void GCRuntime::resetThreadParameter(JSGCParamKey key, AutoLockGC& lock) {
  if (rt->parentRuntime) {
    return;
  }

  switch (key) {
    case JSGC_HELPER_THREAD_RATIO:
      helperThreadRatio = TuningDefaults::HelperThreadRatio;
      break;
    case JSGC_MAX_HELPER_THREADS:
      maxHelperThreads = TuningDefaults::MaxHelperThreads;
      break;
    case JSGC_MAX_MARKING_THREADS:
      maxMarkingThreads = TuningDefaults::MaxMarkingThreads;
      break;
    default:
      MOZ_CRASH("Unexpected parameter key");
  }

  updateHelperThreadCount();
  initOrDisableParallelMarking();
}

uint32_t GCRuntime::getParameter(JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  AutoLockGC lock(this);
  return getParameter(key, lock);
}

uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
  switch (key) {
    case JSGC_BYTES:
      return uint32_t(heapSize.bytes());
    case JSGC_NURSERY_BYTES:
      return nursery().capacity();
    case JSGC_NUMBER:
      return uint32_t(number);
    case JSGC_MAJOR_GC_NUMBER:
      return uint32_t(majorGCNumber);
    case JSGC_MINOR_GC_NUMBER:
      return uint32_t(minorGCNumber);
    case JSGC_INCREMENTAL_GC_ENABLED:
      return incrementalGCEnabled;
    case JSGC_PER_ZONE_GC_ENABLED:
      return perZoneGCEnabled;
    case JSGC_UNUSED_CHUNKS:
      return uint32_t(emptyChunks(lock).count());
    case JSGC_TOTAL_CHUNKS:
      return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
                      emptyChunks(lock).count());
    case JSGC_SLICE_TIME_BUDGET_MS:
      MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
      MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
      return uint32_t(defaultTimeBudgetMS_);
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      return minEmptyChunkCount(lock);
    case JSGC_MAX_EMPTY_CHUNK_COUNT:
      return maxEmptyChunkCount(lock);
    case JSGC_COMPACTING_ENABLED:
      return compactingEnabled;
    case JSGC_PARALLEL_MARKING_ENABLED:
      return parallelMarkingEnabled;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      return marker().incrementalWeakMapMarkingEnabled;
    case JSGC_SEMISPACE_NURSERY_ENABLED:
      return nursery().semispaceEnabled();
    case JSGC_CHUNK_BYTES:
      return ChunkSize;
    case JSGC_HELPER_THREAD_RATIO:
      MOZ_ASSERT(helperThreadRatio > 0.0);
      return uint32_t(helperThreadRatio * 100.0);
    case JSGC_MAX_HELPER_THREADS:
      MOZ_ASSERT(maxHelperThreads <= UINT32_MAX);
      return maxHelperThreads;
    case JSGC_HELPER_THREAD_COUNT:
      return helperThreadCount;
    case JSGC_MAX_MARKING_THREADS:
      return maxMarkingThreads;
    case JSGC_MARKING_THREAD_COUNT:
      return markingThreadCount;
    case JSGC_SYSTEM_PAGE_SIZE_KB:
      return SystemPageSize() / 1024;
    default:
      return tunables.getParameter(key);
  }
}

void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
  MOZ_ASSERT(!JS::RuntimeHeapIsBusy());

  maybeMarkStackLimit = limit;

  AutoUnlockGC unlock(lock);
  AutoStopVerifyingBarriers pauseVerification(rt, false);
  for (auto& marker : markers) {
    marker->setMaxCapacity(limit);
  }
}

void GCRuntime::setIncrementalGCEnabled(bool enabled) {
  incrementalGCEnabled = enabled;
}

void GCRuntime::updateHelperThreadCount() {
  if (!CanUseExtraThreads()) {
    // startTask will run the work on the main thread if the count is 1.
    MOZ_ASSERT(helperThreadCount == 1);
    markingThreadCount = 1;

    AutoLockHelperThreadState lock;
    maxParallelThreads = 1;
    return;
  }

  // Number of extra threads required during parallel marking to ensure we can
  // start the necessary marking tasks. Background free and background
  // allocation may already be running and we want to avoid these tasks blocking
  // marking. In real configurations there will be enough threads that this
  // won't affect anything.
  static constexpr size_t SpareThreadsDuringParallelMarking = 2;

  // Calculate the target thread count for GC parallel tasks.
  size_t cpuCount = GetHelperThreadCPUCount();
  helperThreadCount =
      std::clamp(size_t(double(cpuCount) * helperThreadRatio.ref()), size_t(1),
                 maxHelperThreads.ref());

  // Calculate the target thread count for parallel marking, which uses separate
  // parameters to let us adjust this independently.
  markingThreadCount = std::min(cpuCount / 2, maxMarkingThreads.ref());

  // Calculate the overall target thread count taking into account the separate
  // target for parallel marking threads. Add spare threads to avoid blocking
  // parallel marking when there is other GC work happening.
  size_t targetCount =
      std::max(helperThreadCount.ref(),
               markingThreadCount.ref() + SpareThreadsDuringParallelMarking);

  // Attempt to create extra threads if possible. This is not supported when
  // using an external thread pool.
  AutoLockHelperThreadState lock;
  (void)HelperThreadState().ensureThreadCount(targetCount, lock);

  // Limit all thread counts based on the number of threads available, which may
  // be fewer than requested.
  size_t availableThreadCount = GetHelperThreadCount();
  MOZ_ASSERT(availableThreadCount != 0);
  targetCount = std::min(targetCount, availableThreadCount);
  helperThreadCount = std::min(helperThreadCount.ref(), availableThreadCount);
  if (availableThreadCount < SpareThreadsDuringParallelMarking) {
    markingThreadCount = 1;
  } else {
    markingThreadCount =
        std::min(markingThreadCount.ref(),
                 availableThreadCount - SpareThreadsDuringParallelMarking);
  }

  // Update the maximum number of threads that will be used for GC work.
  maxParallelThreads = targetCount;
}
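
// Worked example (illustrative numbers only, not the actual defaults): on an
// 8-core machine with helperThreadRatio = 0.5 and maxMarkingThreads = 2, the
// targets come out as helperThreadCount = clamp(8 * 0.5) = 4,
// markingThreadCount = min(8 / 2, 2) = 2, and
// targetCount = max(4, 2 + SpareThreadsDuringParallelMarking) = 4.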

size_t GCRuntime::markingWorkerCount() const {
  if (!CanUseExtraThreads() || !parallelMarkingEnabled) {
    return 1;
  }

  if (markingThreadCount) {
    return markingThreadCount;
  }

  // Limit parallel marking to use at most two threads initially.
  return 2;
}

void GCRuntime::assertNoMarkingWork() const {
  for (const auto& marker : markers) {
    MOZ_ASSERT(marker->isDrained());
  }
  MOZ_ASSERT(!hasDelayedMarking());
}

bool GCRuntime::setParallelMarkingEnabled(bool enabled) {
  if (enabled == parallelMarkingEnabled) {
    return true;
  }

  parallelMarkingEnabled = enabled;
  return initOrDisableParallelMarking();
}

bool GCRuntime::initOrDisableParallelMarking() {
  // Attempt to initialize parallel marking state or disable it on failure. This
  // is called when parallel marking is enabled or disabled.

  MOZ_ASSERT(markers.length() != 0);

  if (updateMarkersVector()) {
    return true;
  }

  // Failed to initialize parallel marking so disable it instead.
  MOZ_ASSERT(parallelMarkingEnabled);
  parallelMarkingEnabled = false;
  MOZ_ALWAYS_TRUE(updateMarkersVector());
  return false;
}

void GCRuntime::releaseMarkingThreads() {
  MOZ_ALWAYS_TRUE(reserveMarkingThreads(0));
}
) {
1410 if (reservedMarkingThreads
== newCount
) {
1414 // Update the helper thread system's global count by subtracting this
1415 // runtime's current contribution |reservedMarkingThreads| and adding the new
1416 // contribution |newCount|.
1418 AutoLockHelperThreadState lock
;
1419 auto& globalCount
= HelperThreadState().gcParallelMarkingThreads
;
1420 MOZ_ASSERT(globalCount
>= reservedMarkingThreads
);
1421 size_t newGlobalCount
= globalCount
- reservedMarkingThreads
+ newCount
;
1422 if (newGlobalCount
> HelperThreadState().threadCount
) {
1423 // Not enough total threads.
1427 globalCount
= newGlobalCount
;
1428 reservedMarkingThreads
= newCount
;

size_t GCRuntime::getMaxParallelThreads() const {
  AutoLockHelperThreadState lock;
  return maxParallelThreads.ref();
}

bool GCRuntime::updateMarkersVector() {
  MOZ_ASSERT(helperThreadCount >= 1,
             "There must always be at least one mark task");
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  assertNoMarkingWork();

  // Limit worker count to number of GC parallel tasks that can run
  // concurrently, otherwise one thread can deadlock waiting on another.
  size_t targetCount = std::min(markingWorkerCount(), getMaxParallelThreads());

  if (rt->isMainRuntime()) {
    // For the main runtime, reserve helper threads as long as parallel marking
    // is enabled. Worker runtimes may not mark in parallel if there are
    // insufficient threads available at the time.
    size_t threadsToReserve = targetCount > 1 ? targetCount : 0;
    if (!reserveMarkingThreads(threadsToReserve)) {
      return false;
    }
  }

  if (markers.length() > targetCount) {
    return markers.resize(targetCount);
  }

  while (markers.length() < targetCount) {
    auto marker = MakeUnique<GCMarker>(rt);
    if (!marker) {
      return false;
    }

    if (maybeMarkStackLimit) {
      marker->setMaxCapacity(maybeMarkStackLimit);
    }

    if (!marker->init()) {
      return false;
    }

    if (!markers.emplaceBack(std::move(marker))) {
      return false;
    }
  }

  return true;
}

template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback) {
      vector.erase(p);
      return true;
    }
  }

  return false;
}

template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback, void* data) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback && p->data == data) {
      vector.erase(p);
      return true;
    }
  }

  return false;
}

bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
  return blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
}

void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
  // Can be called from finalizers
  MOZ_ALWAYS_TRUE(EraseCallback(blackRootTracers.ref(), traceOp));
}

void GCRuntime::setGrayRootsTracer(JSGrayRootsTracer traceOp, void* data) {
  grayRootTracer.ref() = {traceOp, data};
}

void GCRuntime::clearBlackAndGrayRootTracers() {
  MOZ_ASSERT(rt->isBeingDestroyed());
  blackRootTracers.ref().clear();
  setGrayRootsTracer(nullptr, nullptr);
}

void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
  gcCallback.ref() = {callback, data};
}

void GCRuntime::callGCCallback(JSGCStatus status, JS::GCReason reason) const {
  const auto& callback = gcCallback.ref();
  MOZ_ASSERT(callback.op);
  callback.op(rt->mainContextFromOwnThread(), status, reason, callback.data);
}

void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
                                          void* data) {
  tenuredCallback.ref() = {callback, data};
}

void GCRuntime::callObjectsTenuredCallback() {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = tenuredCallback.ref();
  if (callback.op) {
    callback.op(rt->mainContextFromOwnThread(), callback.data);
  }
}

bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
  return finalizeCallbacks.ref().append(
      Callback<JSFinalizeCallback>(callback, data));
}

void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
  MOZ_ALWAYS_TRUE(EraseCallback(finalizeCallbacks.ref(), callback));
}

void GCRuntime::callFinalizeCallbacks(JS::GCContext* gcx,
                                      JSFinalizeStatus status) const {
  for (const auto& p : finalizeCallbacks.ref()) {
    p.op(gcx, status, p.data);
  }
}

void GCRuntime::setHostCleanupFinalizationRegistryCallback(
    JSHostCleanupFinalizationRegistryCallback callback, void* data) {
  hostCleanupFinalizationRegistryCallback.ref() = {callback, data};
}

void GCRuntime::callHostCleanupFinalizationRegistryCallback(
    JSFunction* doCleanup, GlobalObject* incumbentGlobal) {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = hostCleanupFinalizationRegistryCallback.ref();
  if (callback.op) {
    callback.op(doCleanup, incumbentGlobal, callback.data);
  }
}

bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
                                            void* data) {
  return updateWeakPointerZonesCallbacks.ref().append(
      Callback<JSWeakPointerZonesCallback>(callback, data));
}

void GCRuntime::removeWeakPointerZonesCallback(
    JSWeakPointerZonesCallback callback) {
  MOZ_ALWAYS_TRUE(
      EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback));
}

void GCRuntime::callWeakPointerZonesCallbacks(JSTracer* trc) const {
  for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
    p.op(trc, p.data);
  }
}

bool GCRuntime::addWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback, void* data) {
  return updateWeakPointerCompartmentCallbacks.ref().append(
      Callback<JSWeakPointerCompartmentCallback>(callback, data));
}

void GCRuntime::removeWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback) {
  MOZ_ALWAYS_TRUE(
      EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback));
}

void GCRuntime::callWeakPointerCompartmentCallbacks(
    JSTracer* trc, JS::Compartment* comp) const {
  for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
    p.op(trc, comp, p.data);
  }
}
GCRuntime::setSliceCallback(JS::GCSliceCallback callback
) {
1621 return stats().setSliceCallback(callback
);

bool GCRuntime::addNurseryCollectionCallback(
    JS::GCNurseryCollectionCallback callback, void* data) {
  return nurseryCollectionCallbacks.ref().append(
      Callback<JS::GCNurseryCollectionCallback>(callback, data));
}

void GCRuntime::removeNurseryCollectionCallback(
    JS::GCNurseryCollectionCallback callback, void* data) {
  MOZ_ALWAYS_TRUE(
      EraseCallback(nurseryCollectionCallbacks.ref(), callback, data));
}

void GCRuntime::callNurseryCollectionCallbacks(JS::GCNurseryProgress progress,
                                               JS::GCReason reason) {
  for (auto const& p : nurseryCollectionCallbacks.ref()) {
    p.op(rt->mainContextFromOwnThread(), progress, reason, p.data);
  }
}

JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
    JS::DoCycleCollectionCallback callback) {
  const auto prior = gcDoCycleCollectionCallback.ref();
  gcDoCycleCollectionCallback.ref() = {callback, nullptr};
  return prior.op;
}

void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
  const auto& callback = gcDoCycleCollectionCallback.ref();
  if (callback.op) {
    callback.op(cx);
  }
}

bool GCRuntime::addRoot(Value* vp, const char* name) {
  /*
   * Sometimes Firefox will hold weak references to objects and then convert
   * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
   * or ModifyBusyCount in workers). We need a read barrier to cover these
   * cases.
   */
  Value value = *vp;
  if (value.isGCThing()) {
    ValuePreWriteBarrier(value);
  }

  return rootsHash.ref().put(vp, name);
}

void GCRuntime::removeRoot(Value* vp) {
  rootsHash.ref().remove(vp);
  notifyRootsRemoved();
}

bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
                                  const TimeStamp& currentTime) {
  // Assume that we're currently animating if js::NotifyAnimationActivity has
  // been called in the last second.
  static const auto oneSecond = TimeDuration::FromSeconds(1);
  return !lastAnimationTime.IsNull() &&
         currentTime < (lastAnimationTime + oneSecond);
}

static bool DiscardedCodeRecently(Zone* zone, const TimeStamp& currentTime) {
  static const auto thirtySeconds = TimeDuration::FromSeconds(30);
  return !zone->lastDiscardedCodeTime().IsNull() &&
         currentTime < (zone->lastDiscardedCodeTime() + thirtySeconds);
}

bool GCRuntime::shouldCompact() {
  // Compact on shrinking GC if enabled. Skip compacting in incremental GCs
  // if we are currently animating, unless the user is inactive or we're
  // responding to memory pressure.

  if (!isShrinkingGC() || !isCompactingGCEnabled()) {
    return false;
  }

  if (initialReason == JS::GCReason::USER_INACTIVE ||
      initialReason == JS::GCReason::MEM_PRESSURE) {
    return true;
  }

  return !isIncremental ||
         !IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
}

bool GCRuntime::isCompactingGCEnabled() const {
  return compactingEnabled &&
         rt->mainContextFromOwnThread()->compactingDisabledCount == 0;
}

JS_PUBLIC_API void JS::SetCreateGCSliceBudgetCallback(
    JSContext* cx, JS::CreateSliceBudgetCallback cb) {
  cx->runtime()->gc.createBudgetCallback = cb;
}

void TimeBudget::setDeadlineFromNow() { deadline = TimeStamp::Now() + budget; }

SliceBudget::SliceBudget(TimeBudget time, InterruptRequestFlag* interrupt)
    : counter(StepsPerExpensiveCheck),
      interruptRequested(interrupt),
      budget(TimeBudget(time)) {
  budget.as<TimeBudget>().setDeadlineFromNow();
}

SliceBudget::SliceBudget(WorkBudget work)
    : counter(work.budget), interruptRequested(nullptr), budget(work) {}
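
// Illustrative usage (a sketch; see js/SliceBudget.h for the real API):
//
//   SliceBudget timeLimited(TimeBudget(10));     // finish ~10 ms after start
//   SliceBudget workLimited(WorkBudget(10000));  // finish after 10000 steps
//
// A time budget only re-checks the clock every StepsPerExpensiveCheck steps to
// keep the common path cheap; a work budget simply counts |counter| down.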

int SliceBudget::describe(char* buffer, size_t maxlen) const {
  if (isUnlimited()) {
    return snprintf(buffer, maxlen, "unlimited");
  }

  if (isWorkBudget()) {
    return snprintf(buffer, maxlen, "work(%" PRId64 ")", workBudget());
  }

  const char* interruptStr = "";
  if (interruptRequested) {
    interruptStr = interrupted ? "INTERRUPTED " : "interruptible ";
  }
  const char* extra = "";
  if (idle) {
    extra = extended ? " (started idle but extended)" : " (idle)";
  }
  return snprintf(buffer, maxlen, "%s%" PRId64 "ms%s", interruptStr,
                  timeBudget(), extra);
}

bool SliceBudget::checkOverBudget() {
  MOZ_ASSERT(counter <= 0);
  MOZ_ASSERT(!isUnlimited());

  if (isWorkBudget()) {
    return true;
  }

  if (interruptRequested && *interruptRequested) {
    *interruptRequested = false;
    interrupted = true;
  }

  if (interrupted) {
    return true;
  }

  if (TimeStamp::Now() >= budget.as<TimeBudget>().deadline) {
    return true;
  }

  counter = StepsPerExpensiveCheck;
  return false;
}

void GCRuntime::requestMajorGC(JS::GCReason reason) {
  MOZ_ASSERT_IF(reason != JS::GCReason::BG_TASK_FINISHED,
                !CurrentThreadIsPerformingGC());

  if (majorGCRequested()) {
    return;
  }

  majorGCTriggerReason = reason;
  rt->mainContextFromAnyThread()->requestInterrupt(InterruptReason::MajorGC);
}

bool GCRuntime::triggerGC(JS::GCReason reason) {
  /*
   * Don't trigger GCs if this is being called off the main thread from
   * onTooMuchMalloc().
   */
  if (!CurrentThreadCanAccessRuntime(rt)) {
    return false;
  }

  /* GC is already running. */
  if (JS::RuntimeHeapIsCollecting()) {
    return false;
  }

  JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  requestMajorGC(reason);
  return true;
}

void GCRuntime::maybeTriggerGCAfterAlloc(Zone* zone) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());

  TriggerResult trigger =
      checkHeapThreshold(zone, zone->gcHeapSize, zone->gcHeapThreshold);

  if (trigger.shouldTrigger) {
    // Start or continue an in progress incremental GC. We do this to try to
    // avoid performing non-incremental GCs on zones which allocate a lot of
    // data, even when incremental slices can't be triggered via scheduling in
    // the event loop.
    triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, trigger.usedBytes,
                  trigger.thresholdBytes);
  }
}
* rt
, ZoneAllocator
* zoneAlloc
,
1829 const HeapSize
& heap
,
1830 const HeapThreshold
& threshold
,
1831 JS::GCReason reason
) {
1832 rt
->gc
.maybeTriggerGCAfterMalloc(Zone::from(zoneAlloc
), heap
, threshold
,

void GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone) {
  if (maybeTriggerGCAfterMalloc(zone, zone->mallocHeapSize,
                                zone->mallocHeapThreshold,
                                JS::GCReason::TOO_MUCH_MALLOC)) {
    return;
  }

  maybeTriggerGCAfterMalloc(zone, zone->jitHeapSize, zone->jitHeapThreshold,
                            JS::GCReason::TOO_MUCH_JIT_CODE);
}

bool GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone, const HeapSize& heap,
                                          const HeapThreshold& threshold,
                                          JS::GCReason reason) {
  // Ignore malloc during sweeping, for example when we resize hash tables.
  if (heapState() != JS::HeapState::Idle) {
    return false;
  }

  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  TriggerResult trigger = checkHeapThreshold(zone, heap, threshold);
  if (!trigger.shouldTrigger) {
    return false;
  }

  // Trigger a zone GC. budgetIncrementalGC() will work out whether to do an
  // incremental or non-incremental collection.
  triggerZoneGC(zone, reason, trigger.usedBytes, trigger.thresholdBytes);
  return true;
}

TriggerResult GCRuntime::checkHeapThreshold(
    Zone* zone, const HeapSize& heapSize, const HeapThreshold& heapThreshold) {
  MOZ_ASSERT_IF(heapThreshold.hasSliceThreshold(), zone->wasGCStarted());

  size_t usedBytes = heapSize.bytes();
  size_t thresholdBytes = heapThreshold.hasSliceThreshold()
                              ? heapThreshold.sliceBytes()
                              : heapThreshold.startBytes();

  // The incremental limit will be checked if we trigger a GC slice.
  MOZ_ASSERT(thresholdBytes <= heapThreshold.incrementalLimitBytes());

  return TriggerResult{usedBytes >= thresholdBytes, usedBytes, thresholdBytes};
}
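
// For example (illustrative numbers): if a zone's start threshold is 30 MB and
// no GC is in progress, a zone using 31 MB yields
// TriggerResult{true, 31 MB, 30 MB}; once a slice threshold has been set
// during an incremental GC, that slice threshold is consulted instead.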

bool GCRuntime::triggerZoneGC(Zone* zone, JS::GCReason reason, size_t used,
                              size_t threshold) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  /* GC is already running. */
  if (JS::RuntimeHeapIsBusy()) {
    return false;
  }

  if (hasZealMode(ZealMode::Alloc)) {
    MOZ_RELEASE_ASSERT(triggerGC(reason));
    return true;
  }

  if (zone->isAtomsZone()) {
    stats().recordTrigger(used, threshold);
    MOZ_RELEASE_ASSERT(triggerGC(reason));
    return true;
  }

  stats().recordTrigger(used, threshold);
  zone->scheduleGC();
  requestMajorGC(reason);
  return true;
}

void GCRuntime::maybeGC() {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
    return;
  }

  (void)gcIfRequestedImpl(/* eagerOk = */ true);
}

JS::GCReason GCRuntime::wantMajorGC(bool eagerOk) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  if (majorGCRequested()) {
    return majorGCTriggerReason;
  }

  if (isIncrementalGCInProgress() || !eagerOk) {
    return JS::GCReason::NO_REASON;
  }

  JS::GCReason reason = JS::GCReason::NO_REASON;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (checkEagerAllocTrigger(zone->gcHeapSize, zone->gcHeapThreshold) ||
        checkEagerAllocTrigger(zone->mallocHeapSize,
                               zone->mallocHeapThreshold)) {
      zone->scheduleGC();
      reason = JS::GCReason::EAGER_ALLOC_TRIGGER;
    }
  }

  return reason;
}
bool GCRuntime::checkEagerAllocTrigger(const HeapSize& size,
                                       const HeapThreshold& threshold) {
  size_t thresholdBytes =
      threshold.eagerAllocTrigger(schedulingState.inHighFrequencyGCMode());
  size_t usedBytes = size.bytes();
  if (usedBytes <= 1024 * 1024 || usedBytes < thresholdBytes) {
    return false;
  }

  stats().recordTrigger(usedBytes, thresholdBytes);
bool GCRuntime::shouldDecommit() const {
  // If we're doing a shrinking GC we always decommit to release as much memory
  // as possible.
  if (cleanUpEverything) {
    return true;
  }

  // If we are allocating heavily enough to trigger "high frequency" GC then
  // skip decommit so that we do not compete with the mutator.
  return !schedulingState.inHighFrequencyGCMode();
}
void GCRuntime::startDecommit() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::DECOMMIT);

  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  MOZ_ASSERT(decommitTask.isIdle());

  {
    AutoLockGC lock(this);
    MOZ_ASSERT(fullChunks(lock).verify());
    MOZ_ASSERT(availableChunks(lock).verify());
    MOZ_ASSERT(emptyChunks(lock).verify());

    // Verify that all entries in the empty chunks pool are unused.
    for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done();
         chunk.next()) {
      MOZ_ASSERT(chunk->unused());
    }
  }

  if (!shouldDecommit()) {
    return;
  }

  {
    AutoLockGC lock(this);
    if (availableChunks(lock).empty() && !tooManyEmptyChunks(lock) &&
        emptyChunks(lock).empty()) {
      return;  // Nothing to do.
    }
  }

  {
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  }

  if (useBackgroundThreads) {
    decommitTask.start();
    return;
  }

  decommitTask.runFromMainThread();
}
BackgroundDecommitTask::BackgroundDecommitTask(GCRuntime* gc)
    : GCParallelTask(gc, gcstats::PhaseKind::DECOMMIT) {}

void js::gc::BackgroundDecommitTask::run(AutoLockHelperThreadState& lock) {
  {
    AutoUnlockHelperThreadState unlock(lock);

    ChunkPool emptyChunksToFree;
    {
      AutoLockGC gcLock(gc);
      emptyChunksToFree = gc->expireEmptyChunkPool(gcLock);
    }

    FreeChunkPool(emptyChunksToFree);

    {
      AutoLockGC gcLock(gc);

      // To help minimize the total number of chunks needed over time, sort the
      // available chunks list so that we allocate into more-used chunks first.
      gc->availableChunks(gcLock).sort();

      if (DecommitEnabled()) {
        gc->decommitEmptyChunks(cancel_, gcLock);
        gc->decommitFreeArenas(cancel_, gcLock);
      }
    }
  }

  gc->maybeRequestGCAfterBackgroundTask(lock);
}
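// Note the locking protocol above: the task body releases the helper-thread
// lock and takes the GC lock in two separate scopes to manipulate the chunk
// pools, then re-acquires the helper-thread lock when the unlock scope ends
// so that maybeRequestGCAfterBackgroundTask() runs with it held.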
static inline bool CanDecommitWholeChunk(TenuredChunk* chunk) {
  return chunk->unused() && chunk->info.numArenasFreeCommitted != 0;
}
// Called from a background thread to decommit free arenas. Releases the GC
// lock while doing the decommit.
void GCRuntime::decommitEmptyChunks(const bool& cancel, AutoLockGC& lock) {
  Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
  for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next()) {
    if (CanDecommitWholeChunk(chunk) && !chunksToDecommit.append(chunk)) {
      onOutOfMallocMemory(lock);
      return;
    }
  }

  for (TenuredChunk* chunk : chunksToDecommit) {
    if (cancel) {
      break;
    }

    // Check whether something used the chunk while lock was released.
    if (!CanDecommitWholeChunk(chunk)) {
      continue;
    }

    // Temporarily remove the chunk while decommitting its memory so that the
    // mutator doesn't start allocating from it when we drop the lock.
    emptyChunks(lock).remove(chunk);

    {
      AutoUnlockGC unlock(lock);
      chunk->decommitAllArenas();
      MOZ_ASSERT(chunk->info.numArenasFreeCommitted == 0);
    }

    emptyChunks(lock).push(chunk);
  }
}
// Called from a background thread to decommit free arenas. Releases the GC
// lock while doing the decommit.
void GCRuntime::decommitFreeArenas(const bool& cancel, AutoLockGC& lock) {
  MOZ_ASSERT(DecommitEnabled());

  // Since we release the GC lock while doing the decommit syscall below,
  // it is dangerous to iterate the available list directly, as the active
  // thread could modify it concurrently. Instead, we build and pass an
  // explicit Vector containing the Chunks we want to visit.
  Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
  for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
       chunk.next()) {
    if (chunk->info.numArenasFreeCommitted != 0 &&
        !chunksToDecommit.append(chunk)) {
      onOutOfMallocMemory(lock);
      return;
    }
  }

  for (TenuredChunk* chunk : chunksToDecommit) {
    chunk->decommitFreeArenas(this, cancel, lock);
  }
}
// Do all possible decommit immediately from the current thread without
// releasing the GC lock or allocating any memory.
void GCRuntime::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
  MOZ_ASSERT(DecommitEnabled());
  for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
       chunk.next()) {
    chunk->decommitFreeArenasWithoutUnlocking(lock);
  }
  MOZ_ASSERT(availableChunks(lock).verify());
}
void GCRuntime::maybeRequestGCAfterBackgroundTask(
    const AutoLockHelperThreadState& lock) {
  if (requestSliceAfterBackgroundTask) {
    // Trigger a slice so the main thread can continue the collection
    // immediately.
    requestSliceAfterBackgroundTask = false;
    requestMajorGC(JS::GCReason::BG_TASK_FINISHED);
  }
}

void GCRuntime::cancelRequestedGCAfterBackgroundTask() {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  {
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  }

  majorGCTriggerReason.compareExchange(JS::GCReason::BG_TASK_FINISHED,
                                       JS::GCReason::NO_REASON);
}

bool GCRuntime::isWaitingOnBackgroundTask() const {
  AutoLockHelperThreadState lock;
  return requestSliceAfterBackgroundTask;
}
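// These three methods implement a simple handshake with background tasks: a
// yielding slice sets requestSliceAfterBackgroundTask, a finishing task
// converts that flag into a BG_TASK_FINISHED major-GC request, and if the
// main thread resumes some other way first, the stale request is cleared with
// a compare-exchange so that no spurious slice runs later.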
void GCRuntime::queueUnusedLifoBlocksForFree(LifoAlloc* lifo) {
  MOZ_ASSERT(JS::RuntimeHeapIsBusy());
  AutoLockHelperThreadState lock;
  lifoBlocksToFree.ref().transferUnusedFrom(lifo);
}

void GCRuntime::queueAllLifoBlocksForFreeAfterMinorGC(LifoAlloc* lifo) {
  lifoBlocksToFreeAfterFullMinorGC.ref().transferFrom(lifo);
}

void GCRuntime::queueBuffersForFreeAfterMinorGC(Nursery::BufferSet& buffers) {
  AutoLockHelperThreadState lock;

  if (!buffersToFreeAfterMinorGC.ref().empty()) {
    // In the rare case that this hasn't processed the buffers from a previous
    // minor GC we have to wait here.
    MOZ_ASSERT(!freeTask.isIdle(lock));
    freeTask.joinWithLockHeld(lock);
  }

  MOZ_ASSERT(buffersToFreeAfterMinorGC.ref().empty());
  std::swap(buffersToFreeAfterMinorGC.ref(), buffers);
}
void Realm::destroy(JS::GCContext* gcx) {
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyRealmCallback) {
    callback(gcx, this);
  }
  if (principals()) {
    JS_DropPrincipals(rt->mainContextFromOwnThread(), principals());
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
}

void Compartment::destroy(JS::GCContext* gcx) {
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyCompartmentCallback) {
    callback(gcx, this);
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
  rt->gc.stats().sweptCompartment();
}

void Zone::destroy(JS::GCContext* gcx) {
  MOZ_ASSERT(compartments().empty());
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyZoneCallback) {
    callback(gcx, this);
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
  gcx->runtime()->gc.stats().sweptZone();
}
/*
 * It's simpler if we preserve the invariant that every zone (except atoms
 * zones) has at least one compartment, and every compartment has at least one
 * realm. If we know we're deleting the entire zone, then sweepCompartments is
 * allowed to delete all compartments. In this case, |keepAtleastOne| is false.
 * If any cells remain alive in the zone, set |keepAtleastOne| true to prohibit
 * sweepCompartments from deleting every compartment. Instead, it preserves an
 * arbitrary compartment in the zone.
 */
void Zone::sweepCompartments(JS::GCContext* gcx, bool keepAtleastOne,
                             bool destroyingRuntime) {
  MOZ_ASSERT_IF(!isAtomsZone(), !compartments().empty());
  MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

  Compartment** read = compartments().begin();
  Compartment** end = compartments().end();
  Compartment** write = read;
  while (read < end) {
    Compartment* comp = *read++;

    /*
     * Don't delete the last compartment and realm if keepAtleastOne is
     * still true, meaning all the other compartments were deleted.
     */
    bool keepAtleastOneRealm = read == end && keepAtleastOne;
    comp->sweepRealms(gcx, keepAtleastOneRealm, destroyingRuntime);

    if (!comp->realms().empty()) {
      *write++ = comp;
      keepAtleastOne = false;
    } else {
      comp->destroy(gcx);
    }
  }
  compartments().shrinkTo(write - compartments().begin());
  MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
  MOZ_ASSERT_IF(destroyingRuntime, compartments().empty());
}
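// sweepCompartments() above and sweepRealms() below share an in-place
// filtering idiom: |read| scans the vector while |write| trails behind it,
// survivors are copied down over entries that have been destroyed, and a
// final shrinkTo() truncates the tail, avoiding any temporary allocation
// while sweeping.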
void Compartment::sweepRealms(JS::GCContext* gcx, bool keepAtleastOne,
                              bool destroyingRuntime) {
  MOZ_ASSERT(!realms().empty());
  MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

  Realm** read = realms().begin();
  Realm** end = realms().end();
  Realm** write = read;
  while (read < end) {
    Realm* realm = *read++;

    /*
     * Don't delete the last realm if keepAtleastOne is still true, meaning
     * all the other realms were deleted.
     */
    bool dontDelete = read == end && keepAtleastOne;
    if ((realm->marked() || dontDelete) && !destroyingRuntime) {
      *write++ = realm;
      keepAtleastOne = false;
    } else {
      realm->destroy(gcx);
    }
  }
  realms().shrinkTo(write - realms().begin());
  MOZ_ASSERT_IF(keepAtleastOne, !realms().empty());
  MOZ_ASSERT_IF(destroyingRuntime, realms().empty());
}
void GCRuntime::sweepZones(JS::GCContext* gcx, bool destroyingRuntime) {
  MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);
  MOZ_ASSERT(foregroundFinalizedArenas.ref().isNothing());

  if (numActiveZoneIters) {
    return;
  }

  assertBackgroundSweepingFinished();

  // Sweep zones following the atoms zone.
  MOZ_ASSERT(zones()[0]->isAtomsZone());
  Zone** read = zones().begin() + 1;
  Zone** end = zones().end();
  Zone** write = read;

  while (read < end) {
    Zone* zone = *read++;

    if (zone->wasGCStarted()) {
      MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
      AutoSetThreadIsSweeping threadIsSweeping(zone);
      const bool zoneIsDead =
          zone->arenas.arenaListsAreEmpty() && !zone->hasMarkedRealms();
      MOZ_ASSERT_IF(destroyingRuntime, zoneIsDead);
      if (zoneIsDead) {
        zone->arenas.checkEmptyFreeLists();
        zone->sweepCompartments(gcx, false, destroyingRuntime);
        MOZ_ASSERT(zone->compartments().empty());
        zone->destroy(gcx);
        continue;
      }
      zone->sweepCompartments(gcx, true, destroyingRuntime);
    }
    *write++ = zone;
  }
  zones().shrinkTo(write - zones().begin());
}
void ArenaLists::checkEmptyArenaList(AllocKind kind) {
  MOZ_ASSERT(arenaList(kind).isEmpty());
}
void GCRuntime::purgeRuntimeForMinorGC() {
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    zone->externalStringCache().purge();
    zone->functionToStringCache().purge();
  }
}
void GCRuntime::purgeRuntime() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);

  for (GCRealmsIter realm(rt); !realm.done(); realm.next()) {
    realm->purge();
  }

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->purgeAtomCache();
    zone->externalStringCache().purge();
    zone->functionToStringCache().purge();
    zone->boundPrefixCache().clearAndCompact();
    zone->shapeZone().purgeShapeCaches(rt->gcContext());
  }

  JSContext* cx = rt->mainContextFromOwnThread();
  queueUnusedLifoBlocksForFree(&cx->tempLifoAlloc());
  cx->interpreterStack().purge(rt);
  cx->frontendCollectionPool().purge();

  rt->caches().purge();

  if (rt->isMainRuntime()) {
    SharedImmutableStringsCache::getSingleton().purge();
  }

  MOZ_ASSERT(marker().unmarkGrayStack.empty());
  marker().unmarkGrayStack.clearAndFree();
}
bool GCRuntime::shouldPreserveJITCode(Realm* realm,
                                      const TimeStamp& currentTime,
                                      JS::GCReason reason,
                                      bool canAllocateMoreCode,
                                      bool isActiveCompartment) {
  if (cleanUpEverything) {
    return false;
  }
  if (!canAllocateMoreCode) {
    return false;
  }

  if (isActiveCompartment) {
    return true;
  }
  if (alwaysPreserveCode) {
    return true;
  }
  if (realm->preserveJitCode()) {
    return true;
  }
  if (IsCurrentlyAnimating(realm->lastAnimationTime, currentTime) &&
      DiscardedCodeRecently(realm->zone(), currentTime)) {
    return true;
  }
  if (reason == JS::GCReason::DEBUG_GC) {
    return true;
  }

  return false;
}
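// Summary of the policy above: shutdown-style GCs and executable-memory
// pressure always allow discarding; otherwise JIT code survives for the
// compartment currently on the JIT stack, for realms that explicitly opt in,
// for realms that are animating and recently had code discarded, and during
// debug-triggered GCs.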
class CompartmentCheckTracer final : public JS::CallbackTracer {
  void onChild(JS::GCCellPtr thing, const char* name) override;
  bool edgeIsInCrossCompartmentMap(JS::GCCellPtr dst);

 public:
  explicit CompartmentCheckTracer(JSRuntime* rt)
      : JS::CallbackTracer(rt, JS::TracerKind::CompartmentCheck,
                           JS::WeakEdgeTraceAction::Skip) {}

  Cell* src = nullptr;
  JS::TraceKind srcKind = JS::TraceKind::Null;
  Zone* zone = nullptr;
  Compartment* compartment = nullptr;
};
static bool InCrossCompartmentMap(JSRuntime* rt, JSObject* src,
                                  JS::GCCellPtr dst) {
  // Cross compartment edges are either in the cross compartment map or in a
  // debugger weakmap.

  Compartment* srccomp = src->compartment();

  if (dst.is<JSObject>()) {
    if (ObjectWrapperMap::Ptr p = srccomp->lookupWrapper(&dst.as<JSObject>())) {
      if (*p->value().unsafeGet() == src) {
        return true;
      }
    }
  }

  if (DebugAPI::edgeIsInDebuggerWeakmap(rt, src, dst)) {
    return true;
  }

  return false;
}
void CompartmentCheckTracer::onChild(JS::GCCellPtr thing, const char* name) {
  Compartment* comp =
      MapGCThingTyped(thing, [](auto t) { return t->maybeCompartment(); });
  if (comp && compartment) {
    MOZ_ASSERT(comp == compartment || edgeIsInCrossCompartmentMap(thing));
  } else {
    TenuredCell* tenured = &thing.asCell()->asTenured();
    Zone* thingZone = tenured->zoneFromAnyThread();
    MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
  }
}

bool CompartmentCheckTracer::edgeIsInCrossCompartmentMap(JS::GCCellPtr dst) {
  return srcKind == JS::TraceKind::Object &&
         InCrossCompartmentMap(runtime(), static_cast<JSObject*>(src), dst);
}
void GCRuntime::checkForCompartmentMismatches() {
  JSContext* cx = rt->mainContextFromOwnThread();
  if (cx->disableStrictProxyCheckingCount) {
    return;
  }

  CompartmentCheckTracer trc(rt);
  AutoAssertEmptyNursery empty(cx);
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    trc.zone = zone;
    for (auto thingKind : AllAllocKinds()) {
      for (auto i = zone->cellIterUnsafe<TenuredCell>(thingKind, empty);
           !i.done(); i.next()) {
        trc.src = i.getCell();
        trc.srcKind = MapAllocToTraceKind(thingKind);
        trc.compartment = MapGCThingTyped(
            trc.src, trc.srcKind, [](auto t) { return t->maybeCompartment(); });
        JS::TraceChildren(&trc, JS::GCCellPtr(trc.src, trc.srcKind));
      }
    }
  }
}
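// This checker re-traces the children of every tenured cell and asserts that
// each edge either stays in the expected compartment and zone or is accounted
// for by the cross-compartment wrapper map or a debugger weakmap. It is
// expensive, so it only runs when fullCompartmentChecks is set (see
// endPreparePhase()).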
static bool ShouldCleanUpEverything(JS::GCOptions options) {
  // During shutdown, we must clean everything up, for the sake of leak
  // detection. When a runtime has no contexts, or we're doing a GC before a
  // shutdown CC, those are strong indications that we're shutting down.
  return options == JS::GCOptions::Shutdown ||
         options == JS::GCOptions::Shrink;
}

static bool ShouldUseBackgroundThreads(bool isIncremental,
                                       JS::GCReason reason) {
  bool shouldUse = isIncremental && CanUseExtraThreads();
  MOZ_ASSERT_IF(reason == JS::GCReason::DESTROY_RUNTIME, !shouldUse);
  return shouldUse;
}
void GCRuntime::startCollection(JS::GCReason reason) {
  checkGCStateNotInUse();
  MOZ_ASSERT_IF(
      isShuttingDown(),
      isShutdownGC() ||
          reason == JS::GCReason::XPCONNECT_SHUTDOWN /* Bug 1650075 */);

  initialReason = reason;
  cleanUpEverything = ShouldCleanUpEverything(gcOptions());
  isCompacting = shouldCompact();
  rootsRemoved = false;
  sweepGroupIndex = 0;
  lastGCStartTime_ = TimeStamp::Now();

  if (isShutdownGC()) {
    hadShutdownGC = true;
  }

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zone->gcSweepGroupIndex = 0;
  }
}
static void RelazifyFunctions(Zone* zone, AllocKind kind) {
  MOZ_ASSERT(kind == AllocKind::FUNCTION ||
             kind == AllocKind::FUNCTION_EXTENDED);

  JSRuntime* rt = zone->runtimeFromMainThread();
  AutoAssertEmptyNursery empty(rt->mainContextFromOwnThread());

  for (auto i = zone->cellIterUnsafe<JSObject>(kind, empty); !i.done();
       i.next()) {
    JSFunction* fun = &i->as<JSFunction>();
    // When iterating over the GC-heap, we may encounter function objects that
    // are incomplete (missing a BaseScript when we expect one). We must check
    // for this case before we can call JSFunction::hasBytecode().
    if (fun->isIncomplete()) {
      continue;
    }

    if (fun->hasBytecode()) {
      fun->maybeRelazify(rt);
    }
  }
}
static bool ShouldCollectZone(Zone* zone, JS::GCReason reason) {
  // If we are repeating a GC because we noticed dead compartments haven't
  // been collected, then only collect zones containing those compartments.
  if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      if (comp->gcState.scheduledForDestruction) {
        return true;
      }
    }

    return false;
  }

  // Otherwise we only collect scheduled zones.
  return zone->isGCScheduled();
}
bool GCRuntime::prepareZonesForCollection(JS::GCReason reason,
                                          bool* isFullOut) {
  /* Assert that zone state is as we expect */
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    MOZ_ASSERT(!zone->isCollecting());
    MOZ_ASSERT_IF(!zone->isAtomsZone(), !zone->compartments().empty());
    for (auto i : AllAllocKinds()) {
      MOZ_ASSERT(zone->arenas.collectingArenaList(i).isEmpty());
    }
  }

  *isFullOut = true;
  bool any = false;

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    /* Set up which zones will be collected. */
    bool shouldCollect = ShouldCollectZone(zone, reason);
    if (shouldCollect) {
      any = true;
      zone->changeGCState(Zone::NoGC, Zone::Prepare);
    } else {
      *isFullOut = false;
    }

    zone->setWasCollected(shouldCollect);
  }

  /* Check that at least one zone is scheduled for collection. */
  return any;
}
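// For example, if only two of five zones are scheduled, the collection
// proceeds as a zone GC with *isFullOut set to false; scheduling every zone
// yields a full GC. A false return here aborts the collection before any
// zone state has been changed.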
void GCRuntime::discardJITCodeForGC() {
  size_t nurserySiteResetCount = 0;
  size_t pretenuredSiteResetCount = 0;

  js::CancelOffThreadIonCompile(rt, JS::Zone::Prepare);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);

    // We may need to reset allocation sites and discard JIT code to recover if
    // we find object lifetimes have changed.
    PretenuringZone& pz = zone->pretenuring;
    bool resetNurserySites = pz.shouldResetNurseryAllocSites();
    bool resetPretenuredSites = pz.shouldResetPretenuredAllocSites();

    if (!zone->isPreservingCode()) {
      Zone::DiscardOptions options;
      options.discardJitScripts = true;
      options.resetNurseryAllocSites = resetNurserySites;
      options.resetPretenuredAllocSites = resetPretenuredSites;
      zone->discardJitCode(rt->gcContext(), options);
    } else if (resetNurserySites || resetPretenuredSites) {
      zone->resetAllocSitesAndInvalidate(resetNurserySites,
                                         resetPretenuredSites);
    }

    if (resetNurserySites) {
      nurserySiteResetCount++;
    }
    if (resetPretenuredSites) {
      pretenuredSiteResetCount++;
    }
  }

  if (nursery().reportPretenuring()) {
    if (nurserySiteResetCount) {
      fprintf(
          stderr,
          "GC reset nursery alloc sites and invalidated code in %zu zones\n",
          nurserySiteResetCount);
    }
    if (pretenuredSiteResetCount) {
      fprintf(
          stderr,
          "GC reset pretenured alloc sites and invalidated code in %zu zones\n",
          pretenuredSiteResetCount);
    }
  }
}
void GCRuntime::relazifyFunctionsForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    RelazifyFunctions(zone, AllocKind::FUNCTION);
    RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
  }
}
void GCRuntime::purgePropMapTablesForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_PROP_MAP_TABLES);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (!canRelocateZone(zone) || zone->keepPropMapTables()) {
      continue;
    }

    // Note: CompactPropMaps never have a table.
    for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
         map.next()) {
      if (map->asLinked()->hasTable()) {
        map->asLinked()->purgeTable(rt->gcContext());
      }
    }
    for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
         map.next()) {
      if (map->asLinked()->hasTable()) {
        map->asLinked()->purgeTable(rt->gcContext());
      }
    }
  }
}
// The debugger keeps track of the URLs for the sources of each realm's
// scripts. These URLs are purged on shrinking GCs.
void GCRuntime::purgeSourceURLsForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_SOURCE_URLS);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    // URLs are not tracked for realms in the system zone.
    if (!canRelocateZone(zone) || zone->isSystemZone()) {
      continue;
    }
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
        GlobalObject* global = realm.get()->unsafeUnbarrieredMaybeGlobal();
        if (global) {
          global->clearSourceURLSHolder();
        }
      }
    }
  }
}

void GCRuntime::unmarkWeakMaps() {
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    /* Unmark all weak maps in the zones being collected. */
    WeakMapBase::unmarkZone(zone);
  }
}
bool GCRuntime::beginPreparePhase(JS::GCReason reason, AutoGCSession& session) {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);

  if (!prepareZonesForCollection(reason, &isFull.ref())) {
    return false;
  }

  /*
   * Start a parallel task to clear all mark state for the zones we are
   * collecting. This is linear in the size of the heap we are collecting and so
   * can be slow. This usually happens concurrently with the mutator and GC
   * proper does not start until this is complete.
   */
  unmarkTask.initZones();
  if (useBackgroundThreads) {
    unmarkTask.start();
  } else {
    unmarkTask.runFromMainThread();
  }

  /*
   * Process any queued source compressions during the start of a major
   * GC.
   *
   * Bug 1650075: When we start passing GCOptions::Shutdown for
   * GCReason::XPCONNECT_SHUTDOWN GCs we can remove the extra check.
   */
  if (!isShutdownGC() && reason != JS::GCReason::XPCONNECT_SHUTDOWN) {
    StartHandlingCompressionsOnGC(rt);
  }

  return true;
}
BackgroundUnmarkTask::BackgroundUnmarkTask(GCRuntime* gc)
    : GCParallelTask(gc, gcstats::PhaseKind::UNMARK) {}

void BackgroundUnmarkTask::initZones() {
  MOZ_ASSERT(isIdle());
  MOZ_ASSERT(zones.empty());
  MOZ_ASSERT(!isCancelled());

  // We can't safely iterate the zones vector from another thread so we copy the
  // zones to be collected into another vector.
  AutoEnterOOMUnsafeRegion oomUnsafe;
  for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
    if (!zones.append(zone.get())) {
      oomUnsafe.crash("BackgroundUnmarkTask::initZones");
    }

    zone->arenas.clearFreeLists();
    zone->arenas.moveArenasToCollectingLists();
  }
}
void BackgroundUnmarkTask::run(AutoLockHelperThreadState& helperThreadLock) {
  AutoUnlockHelperThreadState unlock(helperThreadLock);

  for (Zone* zone : zones) {
    for (auto kind : AllAllocKinds()) {
      ArenaList& arenas = zone->arenas.collectingArenaList(kind);
      for (ArenaListIter arena(arenas.head()); !arena.done(); arena.next()) {
        arena->unmarkAll();
        if (isCancelled()) {
          break;
        }
      }
    }
  }

  zones.clear();
}
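// Unmarking runs concurrently with the mutator: initZones() snapshots the
// collecting zones on the main thread (the zones vector can't be iterated
// off-thread), and run() then clears mark state arena by arena, polling
// isCancelled() so that resetIncrementalGC() can stop the task promptly.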
void GCRuntime::endPreparePhase(JS::GCReason reason) {
  MOZ_ASSERT(unmarkTask.isIdle());

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->setPreservingCode(false);
  }

  // Discard JIT code more aggressively if the process is approaching its
  // executable code limit.
  bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();
  auto currentTime = TimeStamp::Now();

  Compartment* activeCompartment = nullptr;
  jit::JitActivationIterator activation(rt->mainContextFromOwnThread());
  if (!activation.done()) {
    activeCompartment = activation->compartment();
  }

  for (CompartmentsIter c(rt); !c.done(); c.next()) {
    c->gcState.scheduledForDestruction = false;
    c->gcState.maybeAlive = false;
    c->gcState.hasEnteredRealm = false;
    if (c->invisibleToDebugger()) {
      c->gcState.maybeAlive = true;  // Presumed to be a system compartment.
    }
    bool isActiveCompartment = c == activeCompartment;
    for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
      if (r->shouldTraceGlobal() || !r->zone()->isGCScheduled()) {
        c->gcState.maybeAlive = true;
      }
      if (shouldPreserveJITCode(r, currentTime, reason, canAllocateMoreCode,
                                isActiveCompartment)) {
        r->zone()->setPreservingCode(true);
      }
      if (r->hasBeenEnteredIgnoringJit()) {
        c->gcState.hasEnteredRealm = true;
      }
    }
  }

  /*
   * Perform remaining preparation work that must take place in the first true
   * GC slice.
   */

  {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);

    AutoLockHelperThreadState helperLock;

    /* Clear mark state for WeakMaps in parallel with other work. */
    AutoRunParallelTask unmarkWeakMaps(this, &GCRuntime::unmarkWeakMaps,
                                       gcstats::PhaseKind::UNMARK_WEAKMAPS,
                                       GCUse::Unspecified, helperLock);

    AutoUnlockHelperThreadState unlock(helperLock);

    // Discard JIT code. For incremental collections, the sweep phase may
    // also discard JIT code.
    discardJITCodeForGC();
    haveDiscardedJITCodeThisSlice = true;

    /*
     * We must purge the runtime at the beginning of an incremental GC. The
     * danger if we purge later is that the snapshot invariant of
     * incremental GC will be broken, as follows. If some object is
     * reachable only through some cache (say the dtoaCache) then it will
     * not be part of the snapshot. If we purge after root marking, then
     * the mutator could obtain a pointer to the object and start using
     * it. This object might never be marked, so a GC hazard would exist.
     */
    purgeRuntime();
  }

  // This will start background free for lifo blocks queued by purgeRuntime,
  // even if there's nothing in the nursery.
  collectNurseryFromMajorGC(reason);

  {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);
    // Relazify functions after discarding JIT code (we can't relazify
    // functions with JIT code) and before the actual mark phase, so that the
    // current GC can collect the JSScripts we're unlinking here. We do this
    // only when we're performing a shrinking GC, as too much relazification
    // can cause performance issues when we have to reparse the same functions
    // over and over.
    if (isShrinkingGC()) {
      relazifyFunctionsForShrinkingGC();
      purgePropMapTablesForShrinkingGC();
      purgeSourceURLsForShrinkingGC();
    }

    if (isShutdownGC()) {
      /* Clear any engine roots that may hold external data live. */
      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->clearRootsForShutdownGC();
      }

      testMarkQueue.clear();
      queuePos = 0;
    }
  }

  if (fullCompartmentChecks) {
    checkForCompartmentMismatches();
  }
}
AutoUpdateLiveCompartments::AutoUpdateLiveCompartments(GCRuntime* gc) : gc(gc) {
  for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
    c->gcState.hasMarkedCells = false;
  }
}

AutoUpdateLiveCompartments::~AutoUpdateLiveCompartments() {
  for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
    if (c->gcState.hasMarkedCells) {
      c->gcState.maybeAlive = true;
    }
  }
}

Zone::GCState Zone::initialMarkingState() const {
  if (isAtomsZone()) {
    // Don't delay gray marking in the atoms zone like we do in other zones.
    return MarkBlackAndGray;
  }

  return MarkBlackOnly;
}
void GCRuntime::beginMarkPhase(AutoGCSession& session) {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);

  // This is the slice we actually start collecting. The number can be used to
  // check whether a major GC has started so we must not increment it until we
  // get here.
  incMajorGcNumber();

  queueMarkColor.reset();

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    // In an incremental GC, clear the arena free lists to ensure that
    // subsequent allocations refill them and end up marking new cells black.
    // See arenaAllocatedDuringGC().
    zone->arenas.clearFreeLists();

    if (hasZealMode(ZealMode::YieldBeforeRootMarking)) {
      for (auto kind : AllAllocKinds()) {
        for (ArenaIter arena(zone, kind); !arena.done(); arena.next()) {
          arena->checkNoMarkedCells();
        }
      }
    }

    // Incremental marking barriers are enabled at this point.
    zone->changeGCState(Zone::Prepare, zone->initialMarkingState());

    // Merge arenas allocated during the prepare phase, then move all arenas to
    // the collecting arena lists.
    zone->arenas.mergeArenasFromCollectingLists();
    zone->arenas.moveArenasToCollectingLists();

    for (RealmsInZoneIter realm(zone); !realm.done(); realm.next()) {
      realm->clearAllocatedDuringGC();
    }
  }

  updateSchedulingStateOnGCStart();
  stats().measureInitialHeapSize();

  useParallelMarking = SingleThreadedMarking;
  if (canMarkInParallel() && initParallelMarking()) {
    useParallelMarking = AllowParallelMarking;
  }

  MOZ_ASSERT(!hasDelayedMarking());
  for (auto& marker : markers) {
    marker->start();
  }

  if (rt->isBeingDestroyed()) {
    checkNoRuntimeRoots(session);
  } else {
    AutoUpdateLiveCompartments updateLive(this);
    marker().setRootMarkingMode(true);
    traceRuntimeForMajorGC(marker().tracer(), session);
    marker().setRootMarkingMode(false);
  }
}
void GCRuntime::findDeadCompartments() {
  gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::FIND_DEAD_COMPARTMENTS);

  /*
   * This code ensures that if a compartment is "dead", then it will be
   * collected in this GC. A compartment is considered dead if its maybeAlive
   * flag is false. The maybeAlive flag is set if:
   *
   *   (1) the compartment has been entered (set in beginMarkPhase() above)
   *   (2) the compartment's zone is not being collected (set in
   *       endPreparePhase() above)
   *   (3) an object in the compartment was marked during root marking, either
   *       as a black root or a gray root. This is arranged by
   *       SetCompartmentHasMarkedCells and AutoUpdateLiveCompartments.
   *   (4) the compartment has incoming cross-compartment edges from another
   *       compartment that has maybeAlive set (set by this method).
   *   (5) the compartment has the invisibleToDebugger flag set, as it is
   *       presumed to be a system compartment (set in endPreparePhase() above)
   *
   * If the maybeAlive is false, then we set the scheduledForDestruction flag.
   * At the end of the GC, we look for compartments where
   * scheduledForDestruction is true. These are compartments that were somehow
   * "revived" during the incremental GC. If any are found, we do a special,
   * non-incremental GC of those compartments to try to collect them.
   *
   * Compartments can be revived for a variety of reasons, including:
   *
   *   (1) A dead reflector can be revived by DOM code that still refers to the
   *       underlying DOM node (see bug 811587).
   *   (2) JS_TransplantObject iterates over all compartments, live or dead,
   *       and operates on their objects. This can trigger read barriers and
   *       mark unreachable objects. See bug 803376 for details on this
   *       problem. To avoid the problem, we try to avoid allocation and read
   *       barriers during JS_TransplantObject and the like.
   *   (3) Read barriers. A compartment may only have weak roots and reading
   *       one of these will cause the compartment to stay alive even though
   *       the GC thought it should die. An example of this is Gecko's
   *       unprivileged junk scope, which is handled by ignoring system
   *       compartments (see bug
   */

  // Propagate the maybeAlive flag via cross-compartment edges.

  Vector<Compartment*, 0, js::SystemAllocPolicy> workList;

  for (CompartmentsIter comp(rt); !comp.done(); comp.next()) {
    if (comp->gcState.maybeAlive) {
      if (!workList.append(comp)) {
        return;
      }
    }
  }

  while (!workList.empty()) {
    Compartment* comp = workList.popCopy();
    for (Compartment::WrappedObjectCompartmentEnum e(comp); !e.empty();
         e.popFront()) {
      Compartment* dest = e.front();
      if (!dest->gcState.maybeAlive) {
        dest->gcState.maybeAlive = true;
        if (!workList.append(dest)) {
          return;
        }
      }
    }
  }

  // Set scheduledForDestruction based on maybeAlive.

  for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
    MOZ_ASSERT(!comp->gcState.scheduledForDestruction);
    if (!comp->gcState.maybeAlive) {
      comp->gcState.scheduledForDestruction = true;
    }
  }
}
void GCRuntime::updateSchedulingStateOnGCStart() {
  heapSize.updateOnGCStart();

  // Update memory counters for the zones we are collecting.
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->updateSchedulingStateOnGCStart();
  }
}

inline bool GCRuntime::canMarkInParallel() const {
  MOZ_ASSERT(state() >= gc::State::MarkRoots);

#if defined(DEBUG) || defined(JS_OOM_BREAKPOINT)
  // OOM testing limits the engine to using a single helper thread.
  if (oom::simulator.targetThread() == THREAD_TYPE_GCPARALLEL) {
    return false;
  }
#endif

  return markers.length() > 1 && stats().initialCollectedBytes() >=
                                     tunables.parallelMarkingThresholdBytes();
}
bool GCRuntime::initParallelMarking() {
  // This is called at the start of collection.

  MOZ_ASSERT(canMarkInParallel());

  // Reserve/release helper threads for worker runtimes. These are released at
  // the end of sweeping. If there are not enough helper threads because
  // other runtimes are marking in parallel then parallel marking will not be
  // used.
  if (!rt->isMainRuntime() && !reserveMarkingThreads(markers.length())) {
    return false;
  }

  // Allocate stack for parallel markers. The first marker always has stack
  // allocated. Other markers have their stack freed in
  // GCRuntime::finishCollection.
  for (size_t i = 1; i < markers.length(); i++) {
    if (!markers[i]->initStack()) {
      return false;
    }
  }

  return true;
}
IncrementalProgress GCRuntime::markUntilBudgetExhausted(
    SliceBudget& sliceBudget, ParallelMarking allowParallelMarking,
    ShouldReportMarkTime reportTime) {
  // Run a marking slice and return whether the stack is now empty.

  AutoMajorGCProfilerEntry s(this);

  if (initialState != State::Mark) {
    sliceBudget.forceCheck();
    if (sliceBudget.isOverBudget()) {
      return NotFinished;
    }
  }

  if (processTestMarkQueue() == QueueYielded) {
    return NotFinished;
  }

  if (allowParallelMarking) {
    MOZ_ASSERT(canMarkInParallel());
    MOZ_ASSERT(parallelMarkingEnabled);
    MOZ_ASSERT(reportTime);
    MOZ_ASSERT(!isBackgroundMarking());

    ParallelMarker pm(this);
    if (!pm.mark(sliceBudget)) {
      return NotFinished;
    }

    assertNoMarkingWork();
    return Finished;
  }

  AutoSetThreadIsMarking threadIsMarking;

  return marker().markUntilBudgetExhausted(sliceBudget, reportTime)
             ? Finished
             : NotFinished;
}

void GCRuntime::drainMarkStack() {
  auto unlimited = SliceBudget::unlimited();
  MOZ_RELEASE_ASSERT(marker().markUntilBudgetExhausted(unlimited));
}
const GCVector<HeapPtr<JS::Value>, 0, SystemAllocPolicy>&
GCRuntime::getTestMarkQueue() const {
  return testMarkQueue.get();
}

bool GCRuntime::appendTestMarkQueue(const JS::Value& value) {
  return testMarkQueue.append(value);
}

void GCRuntime::clearTestMarkQueue() {
  testMarkQueue.clear();
  queuePos = 0;
}

size_t GCRuntime::testMarkQueuePos() const { return queuePos; }
GCRuntime::MarkQueueProgress GCRuntime::processTestMarkQueue() {
  if (testMarkQueue.empty()) {
    return QueueComplete;
  }

  if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
      state() != State::Sweep) {
    return QueueSuspended;
  }

  // If the queue wants to be gray marking, but we've pushed a black object
  // since set-color-gray was processed, then we can't switch to gray and must
  // again wait until gray marking is possible.
  //
  // Remove this code if the restriction against marking gray during black is
  // relaxed.
  if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
      marker().hasBlackEntries()) {
    return QueueSuspended;
  }

  // If the queue wants to be marking a particular color, switch to that color.
  // In any case, restore the mark color to whatever it was when we entered
  // this function.
  bool willRevertToGray = marker().markColor() == MarkColor::Gray;
  AutoSetMarkColor autoRevertColor(
      marker(), queueMarkColor.valueOr(marker().markColor()));

  // Process the mark queue by taking each object in turn, pushing it onto the
  // mark stack, and processing just the top element with processMarkStackTop
  // without recursing into reachable objects.
  while (queuePos < testMarkQueue.length()) {
    Value val = testMarkQueue[queuePos++].get();
    if (val.isObject()) {
      JSObject* obj = &val.toObject();
      JS::Zone* zone = obj->zone();
      if (!zone->isGCMarking() || obj->isMarkedAtLeast(marker().markColor())) {
        continue;
      }

      // If we have started sweeping, obey sweep group ordering. But note that
      // we will first be called during the initial sweep slice, when the sweep
      // group indexes have not yet been computed. In that case, we can mark
      // freely.
      if (state() == State::Sweep && initialState != State::Sweep) {
        if (zone->gcSweepGroupIndex < getCurrentSweepGroupIndex()) {
          // Too late. This must have been added after we started collecting,
          // and we've already processed its sweep group. Skip it.
          continue;
        }
        if (zone->gcSweepGroupIndex > getCurrentSweepGroupIndex()) {
          // Not ready yet. Wait until we reach the object's sweep group.
          queuePos--;
          return QueueSuspended;
        }
      }

      if (marker().markColor() == MarkColor::Gray &&
          zone->isGCMarkingBlackOnly()) {
        // Have not yet reached the point where we can mark this object, so
        // continue with the GC.
        queuePos--;
        return QueueSuspended;
      }

      if (marker().markColor() == MarkColor::Black && willRevertToGray) {
        // If we put any black objects on the stack, we wouldn't be able to
        // return to gray marking. So delay the marking until we're back to
        // gray marking.
        queuePos--;
        return QueueSuspended;
      }

      AutoEnterOOMUnsafeRegion oomUnsafe;
      if (!marker().markOneObjectForTest(obj)) {
        // If we overflowed the stack here and delayed marking, then we won't be
        // testing what we think we're testing.
        MOZ_ASSERT(obj->asTenured().arena()->onDelayedMarkingList());
        oomUnsafe.crash("Overflowed stack while marking test queue");
      }
    } else if (val.isString()) {
      JSLinearString* str = &val.toString()->asLinear();
      if (js::StringEqualsLiteral(str, "yield") && isIncrementalGc()) {
        return QueueYielded;
      }

      if (js::StringEqualsLiteral(str, "enter-weak-marking-mode") ||
          js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
        if (marker().isRegularMarking()) {
          // We can't enter weak marking mode at just any time, so instead
          // we'll stop processing the queue and continue on with the GC. Once
          // we enter weak marking mode, we can continue to the rest of the
          // queue. Note that we will also suspend for aborting, and then abort
          // the earliest following weak marking mode.
          queuePos--;
          return QueueSuspended;
        }
        if (js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
          marker().abortLinearWeakMarking();
        }
      } else if (js::StringEqualsLiteral(str, "drain")) {
        auto unlimited = SliceBudget::unlimited();
        MOZ_RELEASE_ASSERT(
            marker().markUntilBudgetExhausted(unlimited, DontReportMarkTime));
      } else if (js::StringEqualsLiteral(str, "set-color-gray")) {
        queueMarkColor = mozilla::Some(MarkColor::Gray);
        if (state() != State::Sweep || marker().hasBlackEntries()) {
          // Cannot mark gray yet, so continue with the GC.
          queuePos--;
          return QueueSuspended;
        }
        marker().setMarkColor(MarkColor::Gray);
      } else if (js::StringEqualsLiteral(str, "set-color-black")) {
        queueMarkColor = mozilla::Some(MarkColor::Black);
        marker().setMarkColor(MarkColor::Black);
      } else if (js::StringEqualsLiteral(str, "unset-color")) {
        queueMarkColor.reset();
      }
    }
  }

  return QueueComplete;
}
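// Illustrative test input (hypothetical): a queue of
//   [obj1, "yield", "set-color-gray", obj2]
// marks obj1 black, yields the slice back to the caller, waits until sweeping
// permits gray marking, then marks obj2 gray, suspending at any step whose
// preconditions (sweep group order, mark color, weak marking mode) are not
// yet satisfied.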
static bool IsEmergencyGC(JS::GCReason reason) {
  return reason == JS::GCReason::LAST_DITCH ||
         reason == JS::GCReason::MEM_PRESSURE;
}
void GCRuntime::finishCollection(JS::GCReason reason) {
  assertBackgroundSweepingFinished();

  MOZ_ASSERT(!hasDelayedMarking());
  for (size_t i = 0; i < markers.length(); i++) {
    const auto& marker = markers[i];
    marker->stop();
    if (i == 0) {
      marker->resetStackCapacity();
    } else {
      marker->freeStack();
    }
  }

  maybeStopPretenuring();

  if (IsEmergencyGC(reason)) {
    waitBackgroundFreeEnd();
  }

  TimeStamp currentTime = TimeStamp::Now();

  updateSchedulingStateAfterCollection(currentTime);

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->changeGCState(Zone::Finished, Zone::NoGC);
    zone->notifyObservingDebuggers();
  }

  clearSelectedForMarking();

  schedulingState.updateHighFrequencyMode(lastGCEndTime_, currentTime,
                                          tunables);
  lastGCEndTime_ = currentTime;

  checkGCStateNotInUse();
}
void GCRuntime::checkGCStateNotInUse() {
#ifdef DEBUG
  for (auto& marker : markers) {
    MOZ_ASSERT(!marker->isActive());
    MOZ_ASSERT(marker->isDrained());
  }
  MOZ_ASSERT(!hasDelayedMarking());

  MOZ_ASSERT(!lastMarkSlice);

  MOZ_ASSERT(foregroundFinalizedArenas.ref().isNothing());

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (zone->wasCollected()) {
      zone->arenas.checkGCStateNotInUse();
    }
    MOZ_ASSERT(!zone->wasGCStarted());
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
    MOZ_ASSERT(!zone->isOnList());
  }

  MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
  MOZ_ASSERT(cellsToAssertNotGray.ref().empty());

  AutoLockHelperThreadState lock;
  MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  MOZ_ASSERT(unmarkTask.isIdle(lock));
  MOZ_ASSERT(markTask.isIdle(lock));
  MOZ_ASSERT(sweepTask.isIdle(lock));
  MOZ_ASSERT(decommitTask.isIdle(lock));
#endif
}
void GCRuntime::maybeStopPretenuring() {
  nursery().maybeStopPretenuring(this);

  size_t zonesWhereStringsEnabled = 0;
  size_t zonesWhereBigIntsEnabled = 0;

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (zone->nurseryStringsDisabled || zone->nurseryBigIntsDisabled) {
      // We may need to reset allocation sites and discard JIT code to recover
      // if we find object lifetimes have changed.
      if (zone->pretenuring.shouldResetPretenuredAllocSites()) {
        zone->unknownAllocSite(JS::TraceKind::String)->maybeResetState();
        zone->unknownAllocSite(JS::TraceKind::BigInt)->maybeResetState();
        if (zone->nurseryStringsDisabled) {
          zone->nurseryStringsDisabled = false;
          zonesWhereStringsEnabled++;
        }
        if (zone->nurseryBigIntsDisabled) {
          zone->nurseryBigIntsDisabled = false;
          zonesWhereBigIntsEnabled++;
        }
        nursery().updateAllocFlagsForZone(zone);
      }
    }
  }

  if (nursery().reportPretenuring()) {
    if (zonesWhereStringsEnabled) {
      fprintf(stderr, "GC re-enabled nursery string allocation in %zu zones\n",
              zonesWhereStringsEnabled);
    }
    if (zonesWhereBigIntsEnabled) {
      fprintf(stderr, "GC re-enabled nursery big int allocation in %zu zones\n",
              zonesWhereBigIntsEnabled);
    }
  }
}
void GCRuntime::updateSchedulingStateAfterCollection(TimeStamp currentTime) {
  TimeDuration totalGCTime = stats().totalGCTime();
  size_t totalInitialBytes = stats().initialCollectedBytes();

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (tunables.balancedHeapLimitsEnabled() && totalInitialBytes != 0) {
      zone->updateCollectionRate(totalGCTime, totalInitialBytes);
    }
    zone->clearGCSliceThresholds();
    zone->updateGCStartThresholds(*this);
  }
}

void GCRuntime::updateAllGCStartThresholds() {
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zone->updateGCStartThresholds(*this);
  }
}
void GCRuntime::updateAllocationRates() {
  // Calculate mutator time since the last update. This ignores the fact that
  // the zone could have been created since the last update.

  TimeStamp currentTime = TimeStamp::Now();
  TimeDuration totalTime = currentTime - lastAllocRateUpdateTime;
  if (collectorTimeSinceAllocRateUpdate >= totalTime) {
    // It shouldn't happen but occasionally we see collector time being larger
    // than total time. Skip the update in that case.
    return;
  }

  TimeDuration mutatorTime = totalTime - collectorTimeSinceAllocRateUpdate;

  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    zone->updateAllocationRate(mutatorTime);
    zone->updateGCStartThresholds(*this);
  }

  lastAllocRateUpdateTime = currentTime;
  collectorTimeSinceAllocRateUpdate = TimeDuration::Zero();
}
) {
3447 switch (heapState
) {
3448 case JS::HeapState::MinorCollecting
:
3450 case JS::HeapState::MajorCollecting
:
3453 MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
3455 MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");
3459 static JS::ProfilingCategoryPair
GCHeapStateToProfilingCategory(
3460 JS::HeapState heapState
) {
3461 return heapState
== JS::HeapState::MinorCollecting
3462 ? JS::ProfilingCategoryPair::GCCC_MinorGC
3463 : JS::ProfilingCategoryPair::GCCC_MajorGC
;
/* Start a new heap session. */
AutoHeapSession::AutoHeapSession(GCRuntime* gc, JS::HeapState heapState)
    : gc(gc), prevState(gc->heapState_) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(gc->rt));
  MOZ_ASSERT(prevState == JS::HeapState::Idle ||
             (prevState == JS::HeapState::MajorCollecting &&
              heapState == JS::HeapState::MinorCollecting));
  MOZ_ASSERT(heapState != JS::HeapState::Idle);

  gc->heapState_ = heapState;

  if (heapState == JS::HeapState::MinorCollecting ||
      heapState == JS::HeapState::MajorCollecting) {
    profilingStackFrame.emplace(
        gc->rt->mainContextFromOwnThread(), GCHeapStateToLabel(heapState),
        GCHeapStateToProfilingCategory(heapState),
        uint32_t(ProfilingStackFrame::Flags::RELEVANT_FOR_JS));
  }
}

AutoHeapSession::~AutoHeapSession() {
  MOZ_ASSERT(JS::RuntimeHeapIsBusy());
  gc->heapState_ = prevState;
}
static const char* MajorGCStateToLabel(State state) {
  switch (state) {
    case State::Mark:
      return "js::GCRuntime::markUntilBudgetExhausted";
    case State::Sweep:
      return "js::GCRuntime::performSweepActions";
    case State::Compact:
      return "js::GCRuntime::compactPhase";
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }
  MOZ_ASSERT_UNREACHABLE("Should have exhausted every State variant!");
  return nullptr;
}

static JS::ProfilingCategoryPair MajorGCStateToProfilingCategory(State state) {
  switch (state) {
    case State::Mark:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Mark;
    case State::Sweep:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Sweep;
    case State::Compact:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Compact;
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }
}

AutoMajorGCProfilerEntry::AutoMajorGCProfilerEntry(GCRuntime* gc)
    : AutoGeckoProfilerEntry(gc->rt->mainContextFromAnyThread(),
                             MajorGCStateToLabel(gc->state()),
                             MajorGCStateToProfilingCategory(gc->state())) {
  MOZ_ASSERT(gc->heapState() == JS::HeapState::MajorCollecting);
}
GCRuntime::IncrementalResult GCRuntime::resetIncrementalGC(
    GCAbortReason reason) {
  MOZ_ASSERT(reason != GCAbortReason::None);

  // Drop as much work as possible from an ongoing incremental GC so
  // we can start a new GC after it has finished.
  if (incrementalState == State::NotActive) {
    return IncrementalResult::Ok;
  }

  AutoGCSession session(this, JS::HeapState::MajorCollecting);

  switch (incrementalState) {
    case State::NotActive:
    case State::MarkRoots:
    case State::Finish:
      MOZ_CRASH("Unexpected GC state in resetIncrementalGC");
      break;

    case State::Prepare:
      unmarkTask.cancelAndWait();

      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->changeGCState(Zone::Prepare, Zone::NoGC);
        zone->clearGCSliceThresholds();
        zone->arenas.clearFreeLists();
        zone->arenas.mergeArenasFromCollectingLists();
      }

      incrementalState = State::NotActive;
      checkGCStateNotInUse();
      break;

    case State::Mark: {
      // Cancel any ongoing marking.
      for (auto& marker : markers) {
        marker->reset();
      }
      resetDelayedMarking();

      for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
        ResetGrayList(c);
      }

      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->changeGCState(zone->initialMarkingState(), Zone::NoGC);
        zone->clearGCSliceThresholds();
        zone->arenas.unmarkPreMarkedFreeCells();
        zone->arenas.mergeArenasFromCollectingLists();
      }

      {
        AutoLockHelperThreadState lock;
        lifoBlocksToFree.ref().freeAll();
      }

      lastMarkSlice = false;
      incrementalState = State::Finish;

      for (auto& marker : markers) {
        MOZ_ASSERT(!marker->shouldCheckCompartments());
      }

      break;
    }

    case State::Sweep: {
      // Finish sweeping the current sweep group, then abort.
      for (CompartmentsIter c(rt); !c.done(); c.next()) {
        c->gcState.scheduledForDestruction = false;
      }

      abortSweepAfterCurrentGroup = true;
      isCompacting = false;

      break;
    }

    case State::Finalize: {
      isCompacting = false;
      break;
    }

    case State::Compact: {
      // Skip any remaining zones that would have been compacted.
      MOZ_ASSERT(isCompacting);
      startedCompacting = true;
      zonesToMaybeCompact.ref().clear();
      break;
    }

    case State::Decommit: {
      break;
    }
  }

  stats().reset(reason);

  return IncrementalResult::ResetIncremental;
}
AutoDisableBarriers::AutoDisableBarriers(GCRuntime* gc) : gc(gc) {
  /*
   * Clear needsIncrementalBarrier early so we don't do any write barriers
   * during sweeping.
   */
  for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
    if (zone->isGCMarking()) {
      MOZ_ASSERT(zone->needsIncrementalBarrier());
      zone->setNeedsIncrementalBarrier(false);
    }
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
  }
}

AutoDisableBarriers::~AutoDisableBarriers() {
  for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
    if (zone->isGCMarking()) {
      zone->setNeedsIncrementalBarrier(true);
    }
  }
}
static bool NeedToCollectNursery(GCRuntime* gc) {
  return !gc->nursery().isEmpty() || !gc->storeBuffer().isEmpty();
}

static const char* DescribeBudget(const SliceBudget& budget) {
  constexpr size_t length = 32;
  static char buffer[length];
  budget.describe(buffer, length);
  return buffer;
}
static bool ShouldPauseMutatorWhileWaiting(const SliceBudget& budget,
                                           JS::GCReason reason,
                                           bool budgetWasIncreased) {
  // When we're nearing the incremental limit at which we will finish the
  // collection synchronously, pause the main thread if there is only background
  // GC work happening. This allows the GC to catch up and avoid hitting the
  // limit.
  return budget.isTimeBudget() &&
         (reason == JS::GCReason::ALLOC_TRIGGER ||
          reason == JS::GCReason::TOO_MUCH_MALLOC) &&
         budgetWasIncreased;
}
void GCRuntime::incrementalSlice(SliceBudget& budget, JS::GCReason reason,
                                 bool budgetWasIncreased) {
  MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);

  AutoSetThreadIsPerformingGC performingGC(rt->gcContext());

  AutoGCSession session(this, JS::HeapState::MajorCollecting);

  bool destroyingRuntime = (reason == JS::GCReason::DESTROY_RUNTIME);

  initialState = incrementalState;
  isIncremental = !budget.isUnlimited();
  useBackgroundThreads = ShouldUseBackgroundThreads(isIncremental, reason);
  haveDiscardedJITCodeThisSlice = false;

  // Do the incremental collection type specified by zeal mode if the collection
  // was triggered by runDebugGC() and incremental GC has not been cancelled by
  // resetIncrementalGC().
  useZeal = isIncremental && reason == JS::GCReason::DEBUG_GC;

  stats().log(
      "Incremental: %d, lastMarkSlice: %d, useZeal: %d, budget: %s, "
      "budgetWasIncreased: %d",
      bool(isIncremental), bool(lastMarkSlice), bool(useZeal),
      DescribeBudget(budget), budgetWasIncreased);

  if (useZeal && hasIncrementalTwoSliceZealMode()) {
    // Yields between slices occurs at predetermined points in these modes; the
    // budget is not used. |isIncremental| is still true.
    stats().log("Using unlimited budget for two-slice zeal mode");
    budget = SliceBudget::unlimited();
  }

  bool shouldPauseMutator =
      ShouldPauseMutatorWhileWaiting(budget, reason, budgetWasIncreased);

  switch (incrementalState) {
    case State::NotActive:
      startCollection(reason);

      incrementalState = State::Prepare;
      if (!beginPreparePhase(reason, session)) {
        incrementalState = State::NotActive;
        break;
      }

      if (useZeal && hasZealMode(ZealMode::YieldBeforeRootMarking)) {
        break;
      }

      [[fallthrough]];

    case State::Prepare:
      if (waitForBackgroundTask(unmarkTask, budget, shouldPauseMutator,
                                DontTriggerSliceWhenFinished) == NotFinished) {
        break;
      }

      incrementalState = State::MarkRoots;
      [[fallthrough]];

    case State::MarkRoots:
      endPreparePhase(reason);

      beginMarkPhase(session);
      incrementalState = State::Mark;

      if (useZeal && hasZealMode(ZealMode::YieldBeforeMarking) &&
          isIncremental) {
        break;
      }

      [[fallthrough]];

    case State::Mark:
      if (mightSweepInThisSlice(budget.isUnlimited())) {
        // Trace wrapper rooters before marking if we might start sweeping in
        // this slice.
        rt->mainContextFromOwnThread()->traceWrapperGCRooters(
            marker().tracer());
      }

      {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
        if (markUntilBudgetExhausted(budget, useParallelMarking) ==
            NotFinished) {
          break;
        }
      }

      assertNoMarkingWork();

      /*
       * There are a number of reasons why we break out of collection here,
       * either ending the slice or to run a new iteration of the loop in
       * GCRuntime::collect()
       */

      /*
       * In incremental GCs where we have already performed more than one
       * slice we yield after marking with the aim of starting the sweep in
       * the next slice, since the first slice of sweeping can be expensive.
       *
       * This is modified by the various zeal modes. We don't yield in
       * YieldBeforeMarking mode and we always yield in YieldBeforeSweeping
       * mode.
       *
       * We will need to mark anything new on the stack when we resume, so
       * we stay in Mark state.
       */
      if (isIncremental && !lastMarkSlice) {
        if ((initialState == State::Mark &&
             !(useZeal && hasZealMode(ZealMode::YieldBeforeMarking))) ||
            (useZeal && hasZealMode(ZealMode::YieldBeforeSweeping))) {
          lastMarkSlice = true;
          stats().log("Yielding before starting sweeping");
          break;
        }
      }

      incrementalState = State::Sweep;
      lastMarkSlice = false;

      beginSweepPhase(reason, session);

      [[fallthrough]];

    case State::Sweep:
      if (storeBuffer().mayHavePointersToDeadCells()) {
        collectNurseryFromMajorGC(reason);
      }

      if (initialState == State::Sweep) {
        rt->mainContextFromOwnThread()->traceWrapperGCRooters(
            marker().tracer());
      }

      if (performSweepActions(budget) == NotFinished) {
        break;
      }

      endSweepPhase(destroyingRuntime);

      incrementalState = State::Finalize;

      [[fallthrough]];

    case State::Finalize:
      if (waitForBackgroundTask(sweepTask, budget, shouldPauseMutator,
                                TriggerSliceWhenFinished) == NotFinished) {
        break;
      }

      assertBackgroundSweepingFinished();

      {
        // Sweep the zones list now that background finalization is finished to
        // remove and free dead zones, compartments and realms.
        gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP);
        gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::DESTROY);
        sweepZones(rt->gcContext(), destroyingRuntime);
      }

      MOZ_ASSERT(!startedCompacting);
      incrementalState = State::Compact;

      // Always yield before compacting since it is not incremental.
      if (isCompacting && !budget.isUnlimited()) {
        break;
      }

      [[fallthrough]];

    case State::Compact:
      if (isCompacting) {
        if (NeedToCollectNursery(this)) {
          collectNurseryFromMajorGC(reason);
        }

        storeBuffer().checkEmpty();
        if (!startedCompacting) {
          beginCompactPhase();
        }

        if (compactPhase(reason, budget, session) == NotFinished) {
          break;
        }
      }

      startDecommit();
      incrementalState = State::Decommit;

      [[fallthrough]];

    case State::Decommit:
      if (waitForBackgroundTask(decommitTask, budget, shouldPauseMutator,
                                TriggerSliceWhenFinished) == NotFinished) {
        break;
      }

      incrementalState = State::Finish;

      [[fallthrough]];

    case State::Finish:
      finishCollection(reason);
      incrementalState = State::NotActive;
      break;
  }

  MOZ_ASSERT(safeToYield);
  for (auto& marker : markers) {
    MOZ_ASSERT(marker->markColor() == MarkColor::Black);
  }
  MOZ_ASSERT(!rt->gcContext()->hasJitCodeToPoison());
}
void GCRuntime::collectNurseryFromMajorGC(JS::GCReason reason) {
  collectNursery(gcOptions(), JS::GCReason::EVICT_NURSERY,
                 gcstats::PhaseKind::EVICT_NURSERY_FOR_MAJOR_GC);

  MOZ_ASSERT(nursery().isEmpty());
  MOZ_ASSERT(storeBuffer().isEmpty());
}
bool GCRuntime::hasForegroundWork() const {
  switch (incrementalState) {
    case State::NotActive:
      // Incremental GC is not running and no work is pending.
      return false;
    case State::Prepare:
      // We yield in the Prepare state after starting unmarking.
      return !unmarkTask.wasStarted();
    case State::Finalize:
      // We yield in the Finalize state to wait for background sweeping.
      return !isBackgroundSweeping();
    case State::Decommit:
      // We yield in the Decommit state to wait for background decommit.
      return !decommitTask.wasStarted();
    default:
      // In all other states there is still work to do.
      return true;
  }
}
IncrementalProgress GCRuntime::waitForBackgroundTask(
    GCParallelTask& task, const SliceBudget& budget, bool shouldPauseMutator,
    ShouldTriggerSliceWhenFinished triggerSlice) {
  // Wait here in non-incremental collections, or if we want to pause the
  // mutator to let the GC catch up.
  if (budget.isUnlimited() || shouldPauseMutator) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    Maybe<TimeStamp> deadline;
    if (budget.isTimeBudget()) {
      deadline.emplace(budget.deadline());
    }
    task.join(deadline);
  }

  // In incremental collections, yield if the task has not finished and
  // optionally request a slice to notify us when this happens.
  if (!budget.isUnlimited()) {
    AutoLockHelperThreadState lock;
    if (task.wasStarted(lock)) {
      if (triggerSlice) {
        requestSliceAfterBackgroundTask = true;
      }
      return NotFinished;
    }

    task.joinWithLockHeld(lock);
  }

  MOZ_ASSERT(task.isIdle());

  if (triggerSlice) {
    cancelRequestedGCAfterBackgroundTask();
  }

  return Finished;
}
GCAbortReason gc::IsIncrementalGCUnsafe(JSRuntime* rt) {
  MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);

  if (!rt->gc.isIncrementalGCAllowed()) {
    return GCAbortReason::IncrementalDisabled;
  }

  return GCAbortReason::None;
}
inline void GCRuntime::checkZoneIsScheduled(Zone* zone, JS::GCReason reason,
                                            const char* trigger) {
#ifdef DEBUG
  if (zone->isGCScheduled()) {
    return;
  }

  fprintf(stderr,
          "checkZoneIsScheduled: Zone %p not scheduled as expected in %s GC "
          "for %s trigger\n",
          zone, JS::ExplainGCReason(reason), trigger);
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    fprintf(stderr, "  Zone %p:%s%s\n", zone.get(),
            zone->isAtomsZone() ? " atoms" : "",
            zone->isGCScheduled() ? " scheduled" : "");
  }
  fflush(stderr);
  MOZ_CRASH("Zone not scheduled");
#endif
}
GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC(
    bool nonincrementalByAPI, JS::GCReason reason, SliceBudget& budget) {
  if (nonincrementalByAPI) {
    stats().nonincremental(GCAbortReason::NonIncrementalRequested);
    budget = SliceBudget::unlimited();

    // Reset any in progress incremental GC if this was triggered via the
    // API. This isn't required for correctness, but sometimes during tests
    // the caller expects this GC to collect certain objects, and we need
    // to make sure to collect everything possible.
    if (reason != JS::GCReason::ALLOC_TRIGGER) {
      return resetIncrementalGC(GCAbortReason::NonIncrementalRequested);
    }

    return IncrementalResult::Ok;
  }

  if (reason == JS::GCReason::ABORT_GC) {
    budget = SliceBudget::unlimited();
    stats().nonincremental(GCAbortReason::AbortRequested);
    return resetIncrementalGC(GCAbortReason::AbortRequested);
  }

  if (!budget.isUnlimited()) {
    GCAbortReason unsafeReason = IsIncrementalGCUnsafe(rt);
    if (unsafeReason == GCAbortReason::None) {
      if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
        unsafeReason = GCAbortReason::CompartmentRevived;
      } else if (!incrementalGCEnabled) {
        unsafeReason = GCAbortReason::ModeChange;
      }
    }

    if (unsafeReason != GCAbortReason::None) {
      budget = SliceBudget::unlimited();
      stats().nonincremental(unsafeReason);
      return resetIncrementalGC(unsafeReason);
    }
  }

  GCAbortReason resetReason = GCAbortReason::None;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (zone->gcHeapSize.bytes() >=
        zone->gcHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "GC bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::GCBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::GCBytesTrigger;
      }
    }

    if (zone->mallocHeapSize.bytes() >=
        zone->mallocHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "malloc bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::MallocBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::MallocBytesTrigger;
      }
    }

    if (zone->jitHeapSize.bytes() >=
        zone->jitHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "JIT code bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::JitCodeBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::JitCodeBytesTrigger;
      }
    }

    if (isIncrementalGCInProgress() &&
        zone->isGCScheduled() != zone->wasGCStarted()) {
      budget = SliceBudget::unlimited();
      resetReason = GCAbortReason::ZoneChange;
    }
  }

  if (resetReason != GCAbortReason::None) {
    return resetIncrementalGC(resetReason);
  }

  return IncrementalResult::Ok;
}
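// Worked example (illustrative sizes, not constants from this file): if a
// zone's gcHeapSize is 110 MB but its gcHeapThreshold's
// incrementalLimitBytes() is only 100 MB, the loop above replaces the time
// budget with an unlimited one, so the rest of the collection runs
// non-incrementally; if that zone had also progressed past sweeping
// (gcState() > Zone::Sweep), the in-progress incremental GC is reset as well.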
bool GCRuntime::maybeIncreaseSliceBudget(SliceBudget& budget) {
  if (js::SupportDifferentialTesting()) {
    return false;
  }

  if (!budget.isTimeBudget() || !isIncrementalGCInProgress()) {
    return false;
  }

  bool wasIncreasedForLongCollections =
      maybeIncreaseSliceBudgetForLongCollections(budget);
  bool wasIncreasedForUrgentCollections =
      maybeIncreaseSliceBudgetForUrgentCollections(budget);

  return wasIncreasedForLongCollections || wasIncreasedForUrgentCollections;
}
// Return true if the budget is actually extended after rounding.
static bool ExtendBudget(SliceBudget& budget, double newDuration) {
  long millis = lround(newDuration);
  if (millis <= budget.timeBudget()) {
    return false;
  }

  bool idleTriggered = budget.idle;
  budget = SliceBudget(TimeBudget(millis), nullptr);  // Uninterruptible.
  budget.idle = idleTriggered;
  budget.extended = true;
  return true;
}
bool GCRuntime::maybeIncreaseSliceBudgetForLongCollections(
    SliceBudget& budget) {
  // For long-running collections, enforce a minimum time budget that
  // increases linearly with time up to a maximum.

  // All times are in milliseconds.
  struct BudgetAtTime {
    double time;
    double budget;
  };
  const BudgetAtTime MinBudgetStart{1500, 0.0};
  const BudgetAtTime MinBudgetEnd{2500, 100.0};

  double totalTime = (TimeStamp::Now() - lastGCStartTime()).ToMilliseconds();

  double minBudget =
      LinearInterpolate(totalTime, MinBudgetStart.time, MinBudgetStart.budget,
                        MinBudgetEnd.time, MinBudgetEnd.budget);

  return ExtendBudget(budget, minBudget);
}
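// Worked example, assuming LinearInterpolate clamps to its endpoints: with
// MinBudgetStart{1500, 0.0} and MinBudgetEnd{2500, 100.0}, a collection that
// has run for 1500 ms or less keeps its requested budget, one that has run
// for 2000 ms is raised to at least 0.0 + ((2000 - 1500) / (2500 - 1500)) *
// 100.0 = 50 ms, and one past 2500 ms gets the full 100 ms minimum.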
bool GCRuntime::maybeIncreaseSliceBudgetForUrgentCollections(
    SliceBudget& budget) {
  // Enforce a minimum time budget based on how close we are to the
  // incremental limit.

  size_t minBytesRemaining = SIZE_MAX;
  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    if (!zone->wasGCStarted()) {
      continue;
    }
    size_t gcBytesRemaining =
        zone->gcHeapThreshold.incrementalBytesRemaining(zone->gcHeapSize);
    minBytesRemaining = std::min(minBytesRemaining, gcBytesRemaining);
    size_t mallocBytesRemaining =
        zone->mallocHeapThreshold.incrementalBytesRemaining(
            zone->mallocHeapSize);
    minBytesRemaining = std::min(minBytesRemaining, mallocBytesRemaining);
  }

  if (minBytesRemaining < tunables.urgentThresholdBytes() &&
      minBytesRemaining != 0) {
    // Increase budget based on the reciprocal of the fraction remaining.
    double fractionRemaining =
        double(minBytesRemaining) / double(tunables.urgentThresholdBytes());
    double minBudget = double(defaultSliceBudgetMS()) / fractionRemaining;
    return ExtendBudget(budget, minBudget);
  }

  return false;
}
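// Worked example (illustrative threshold): if urgentThresholdBytes() is
// 16 MB and the closest collecting zone is 4 MB away from its incremental
// limit, fractionRemaining is 4 / 16 = 0.25 and the minimum budget becomes
// defaultSliceBudgetMS() / 0.25, i.e. four times the default slice budget.
// Slices therefore grow smoothly as zones approach the hard limit at which
// budgetIncrementalGC() would abandon incrementality altogether.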
static void ScheduleZones(GCRuntime* gc, JS::GCReason reason) {
  for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
    // Re-check heap threshold for alloc-triggered zones that were not
    // previously collected. Now we have allocation rate data, the heap limit
    // may have been increased beyond the current size.
    if (gc->tunables.balancedHeapLimitsEnabled() && zone->isGCScheduled() &&
        zone->smoothedCollectionRate.ref().isNothing() &&
        reason == JS::GCReason::ALLOC_TRIGGER &&
        zone->gcHeapSize.bytes() < zone->gcHeapThreshold.startBytes()) {
      zone->unscheduleGC();  // May still be re-scheduled below.
    }

    if (gc->isShutdownGC()) {
      zone->scheduleGC();
      continue;
    }

    if (!gc->isPerZoneGCEnabled()) {
      zone->scheduleGC();
      continue;
    }

    // To avoid resets, continue to collect any zones that were being
    // collected in a previous slice.
    if (gc->isIncrementalGCInProgress() && zone->wasGCStarted()) {
      zone->scheduleGC();
    }

    // This is a heuristic to reduce the total number of collections.
    bool inHighFrequencyMode = gc->schedulingState.inHighFrequencyGCMode();
    if (zone->gcHeapSize.bytes() >=
            zone->gcHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
        zone->mallocHeapSize.bytes() >=
            zone->mallocHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
        zone->jitHeapSize.bytes() >= zone->jitHeapThreshold.startBytes()) {
      zone->scheduleGC();
    }
  }
}
static void UnscheduleZones(GCRuntime* gc) {
  for (ZonesIter zone(gc->rt, WithAtoms); !zone.done(); zone.next()) {
    zone->unscheduleGC();
  }
}
class js::gc::AutoCallGCCallbacks {
  GCRuntime& gc_;
  JS::GCReason reason_;

 public:
  explicit AutoCallGCCallbacks(GCRuntime& gc, JS::GCReason reason)
      : gc_(gc), reason_(reason) {
    gc_.maybeCallGCCallback(JSGC_BEGIN, reason);
  }
  ~AutoCallGCCallbacks() { gc_.maybeCallGCCallback(JSGC_END, reason_); }
};
void GCRuntime::maybeCallGCCallback(JSGCStatus status, JS::GCReason reason) {
  if (!gcCallback.ref().op) {
    return;
  }

  if (isIncrementalGCInProgress()) {
    return;
  }

  if (gcCallbackDepth == 0) {
    // Save scheduled zone information in case the callback clears it.
    for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
      zone->gcScheduledSaved_ = zone->gcScheduled_;
    }
  }

  // Save and clear GC options and state in case the callback reenters GC.
  JS::GCOptions options = gcOptions();
  maybeGcOptions = Nothing();
  bool savedFullGCRequested = fullGCRequested;
  fullGCRequested = false;

  gcCallbackDepth++;

  callGCCallback(status, reason);

  MOZ_ASSERT(gcCallbackDepth != 0);
  gcCallbackDepth--;

  // Restore the original GC options.
  maybeGcOptions = Some(options);

  // At the end of a GC, clear out the fullGCRequested state. At the start,
  // restore the previous setting.
  fullGCRequested = (status == JSGC_END) ? false : savedFullGCRequested;

  if (gcCallbackDepth == 0) {
    // Ensure any zone that was originally scheduled stays scheduled.
    for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
      zone->gcScheduled_ = zone->gcScheduled_ || zone->gcScheduledSaved_;
    }
  }
}
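// This is the internal half of the JSGC_BEGIN/JSGC_END callback mechanism.
// A minimal sketch of the embedder side, using the public JS_SetGCCallback
// API (hypothetical embedder code, not part of this file):
//
//   static void OnMajorGC(JSContext* cx, JSGCStatus status,
//                         JS::GCReason reason, void* data) {
//     if (status == JSGC_BEGIN) {
//       fprintf(stderr, "GC start: %s\n", JS::ExplainGCReason(reason));
//     }
//   }
//
//   JS_SetGCCallback(cx, OnMajorGC, nullptr);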
/*
 * We disable inlining to ensure that the bottom of the stack with possible GC
 * roots recorded in MarkRuntime excludes any pointers we use during the
 * marking implementation.
 */
MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle(
    bool nonincrementalByAPI, const SliceBudget& budgetArg,
    JS::GCReason reason) {
  // Assert if this is a GC unsafe region.
  rt->mainContextFromOwnThread()->verifyIsSafeToGC();

  // It's ok if threads other than the main thread have suppressGC set, as
  // they are operating on zones which will not be collected from here.
  MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);

  // This reason is used internally. See below.
  MOZ_ASSERT(reason != JS::GCReason::RESET);

  // Background finalization and decommit are finished by definition before we
  // can start a new major GC. Background allocation may still be running, but
  // that's OK because chunk pools are protected by the GC lock.
  if (!isIncrementalGCInProgress()) {
    assertBackgroundSweepingFinished();
    MOZ_ASSERT(decommitTask.isIdle());
  }

  // Note that GC callbacks are allowed to re-enter GC.
  AutoCallGCCallbacks callCallbacks(*this, reason);

  // Increase slice budget for long running collections before it is recorded
  // by AutoGCSlice.
  SliceBudget budget(budgetArg);
  bool budgetWasIncreased = maybeIncreaseSliceBudget(budget);

  ScheduleZones(this, reason);

  auto updateCollectorTime = MakeScopeExit([&] {
    if (const gcstats::Statistics::SliceData* slice = stats().lastSlice()) {
      collectorTimeSinceAllocRateUpdate += slice->duration();
    }
  });

  gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), gcOptions(), budget,
                           reason, budgetWasIncreased);

  IncrementalResult result =
      budgetIncrementalGC(nonincrementalByAPI, reason, budget);
  if (result == IncrementalResult::ResetIncremental) {
    if (incrementalState == State::NotActive) {
      // The collection was reset and has finished.
      return result;
    }

    // The collection was reset but we must finish up some remaining work.
    reason = JS::GCReason::RESET;
  }

  majorGCTriggerReason = JS::GCReason::NO_REASON;
  MOZ_ASSERT(!stats().hasTrigger());

  gcprobes::MajorGCStart();
  incrementalSlice(budget, reason, budgetWasIncreased);
  gcprobes::MajorGCEnd();

  MOZ_ASSERT_IF(result == IncrementalResult::ResetIncremental,
                !isIncrementalGCInProgress());
  return result;
}
inline bool GCRuntime::mightSweepInThisSlice(bool nonIncremental) {
  MOZ_ASSERT(incrementalState < State::Sweep);
  return nonIncremental || lastMarkSlice || hasIncrementalTwoSliceZealMode();
}
#ifdef JS_GC_ZEAL
static bool IsDeterministicGCReason(JS::GCReason reason) {
  switch (reason) {
    case JS::GCReason::API:
    case JS::GCReason::DESTROY_RUNTIME:
    case JS::GCReason::LAST_DITCH:
    case JS::GCReason::TOO_MUCH_MALLOC:
    case JS::GCReason::TOO_MUCH_WASM_MEMORY:
    case JS::GCReason::TOO_MUCH_JIT_CODE:
    case JS::GCReason::ALLOC_TRIGGER:
    case JS::GCReason::DEBUG_GC:
    case JS::GCReason::CC_FORCED:
    case JS::GCReason::SHUTDOWN_CC:
    case JS::GCReason::ABORT_GC:
    case JS::GCReason::DISABLE_GENERATIONAL_GC:
    case JS::GCReason::FINISH_GC:
    case JS::GCReason::PREPARE_FOR_TRACING:
      return true;

    default:
      return false;
  }
}
#endif
gcstats::ZoneGCStats GCRuntime::scanZonesBeforeGC() {
  gcstats::ZoneGCStats zoneStats;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zoneStats.zoneCount++;
    zoneStats.compartmentCount += zone->compartments().length();
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      zoneStats.realmCount += comp->realms().length();
    }
    if (zone->isGCScheduled()) {
      zoneStats.collectedZoneCount++;
      zoneStats.collectedCompartmentCount += zone->compartments().length();
    }
  }

  return zoneStats;
}
// The GC can only clean up scheduledForDestruction realms that were marked
// live by a barrier (e.g. by RemapWrappers from a navigation event). It is
// also common to have realms held live because they are part of a cycle in
// Gecko, e.g. involving the HTMLDocument wrapper. In this case, we need to
// run the CycleCollector in order to remove these edges before the realm can
// be freed.
void GCRuntime::maybeDoCycleCollection() {
  const static float ExcessiveGrayRealms = 0.8f;
  const static size_t LimitGrayRealms = 200;

  size_t realmsTotal = 0;
  size_t realmsGray = 0;
  for (RealmsIter realm(rt); !realm.done(); realm.next()) {
    ++realmsTotal;
    GlobalObject* global = realm->unsafeUnbarrieredMaybeGlobal();
    if (global && global->isMarkedGray()) {
      ++realmsGray;
    }
  }
  float grayFraction = float(realmsGray) / float(realmsTotal);
  if (grayFraction > ExcessiveGrayRealms || realmsGray > LimitGrayRealms) {
    callDoCycleCollectionCallback(rt->mainContextFromOwnThread());
  }
}
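// Worked example: with 100 realms of which 85 have gray globals,
// grayFraction is 0.85 > ExcessiveGrayRealms (0.8), so the cycle collection
// callback fires; it also fires whenever more than LimitGrayRealms (200)
// realms are gray, whatever the fraction.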
void GCRuntime::checkCanCallAPI() {
  MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt));

  /* If we attempt to invoke the GC while we are running in the GC, assert. */
  MOZ_RELEASE_ASSERT(!JS::RuntimeHeapIsBusy());
}
bool GCRuntime::checkIfGCAllowedInCurrentState(JS::GCReason reason) {
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return false;
  }

  // Only allow shutdown GCs when we're destroying the runtime. This keeps
  // the GC callback from triggering a nested GC and resetting global state.
  if (rt->isBeingDestroyed() && !isShutdownGC()) {
    return false;
  }

#ifdef JS_GC_ZEAL
  if (deterministicOnly && !IsDeterministicGCReason(reason)) {
    return false;
  }
#endif

  return true;
}
bool GCRuntime::shouldRepeatForDeadZone(JS::GCReason reason) {
  MOZ_ASSERT_IF(reason == JS::GCReason::COMPARTMENT_REVIVED, !isIncremental);
  MOZ_ASSERT(!isIncrementalGCInProgress());

  if (!isIncremental) {
    return false;
  }

  for (CompartmentsIter c(rt); !c.done(); c.next()) {
    if (c->gcState.scheduledForDestruction) {
      return true;
    }
  }

  return false;
}
struct MOZ_RAII AutoSetZoneSliceThresholds {
  explicit AutoSetZoneSliceThresholds(GCRuntime* gc) : gc(gc) {
    // On entry, zones that are already collecting should have a slice
    // threshold set.
    for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
      MOZ_ASSERT(zone->wasGCStarted() ==
                 zone->gcHeapThreshold.hasSliceThreshold());
      MOZ_ASSERT(zone->wasGCStarted() ==
                 zone->mallocHeapThreshold.hasSliceThreshold());
    }
  }

  ~AutoSetZoneSliceThresholds() {
    // On exit, update the thresholds for all collecting zones.
    bool waitingOnBGTask = gc->isWaitingOnBackgroundTask();
    for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
      if (zone->wasGCStarted()) {
        zone->setGCSliceThresholds(*gc, waitingOnBGTask);
      } else {
        MOZ_ASSERT(!zone->gcHeapThreshold.hasSliceThreshold());
        MOZ_ASSERT(!zone->mallocHeapThreshold.hasSliceThreshold());
      }
    }
  }

  GCRuntime* gc;
};
void GCRuntime::collect(bool nonincrementalByAPI, const SliceBudget& budget,
                        JS::GCReason reason) {
  TimeStamp startTime = TimeStamp::Now();
  auto timer = MakeScopeExit([&] {
    if (Realm* realm = rt->mainContextFromOwnThread()->realm()) {
      realm->timers.gcTime += TimeStamp::Now() - startTime;
    }
  });

  auto clearGCOptions = MakeScopeExit([&] {
    if (!isIncrementalGCInProgress()) {
      maybeGcOptions = Nothing();
    }
  });

  MOZ_ASSERT(reason != JS::GCReason::NO_REASON);

  // Checks run for each request, even if we do not actually GC.
  checkCanCallAPI();

  // Check if we are allowed to GC at this time before proceeding.
  if (!checkIfGCAllowedInCurrentState(reason)) {
    return;
  }

  stats().log("GC slice starting in state %s", StateName(incrementalState));

  AutoStopVerifyingBarriers av(rt, isShutdownGC());
  AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());
  AutoSetZoneSliceThresholds sliceThresholds(this);

  schedulingState.updateHighFrequencyModeForReason(reason);

  if (!isIncrementalGCInProgress() && tunables.balancedHeapLimitsEnabled()) {
    updateAllocationRates();
  }

  bool repeat = false;
  do {
    IncrementalResult cycleResult =
        gcCycle(nonincrementalByAPI, budget, reason);

    if (reason == JS::GCReason::ABORT_GC) {
      MOZ_ASSERT(!isIncrementalGCInProgress());
      stats().log("GC aborted by request");
      break;
    }

    /*
     * Sometimes when we finish a GC we need to immediately start a new one.
     * This happens in the following cases:
     *  - when we reset the current GC
     *  - when finalizers drop roots during shutdown
     *  - when zones that we thought were dead at the start of GC are
     *    not collected (see the large comment in beginMarkPhase)
     */
    repeat = false;
    if (!isIncrementalGCInProgress()) {
      if (cycleResult == ResetIncremental) {
        repeat = true;
      } else if (rootsRemoved && isShutdownGC()) {
        /* Need to re-schedule all zones for GC. */
        JS::PrepareForFullGC(rt->mainContextFromOwnThread());
        repeat = true;
        reason = JS::GCReason::ROOTS_REMOVED;
      } else if (shouldRepeatForDeadZone(reason)) {
        repeat = true;
        reason = JS::GCReason::COMPARTMENT_REVIVED;
      }
    }
  } while (repeat);

  if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
    maybeDoCycleCollection();
  }

#ifdef JS_GC_ZEAL
  if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::TRACE_HEAP);
    CheckHeapAfterGC(rt);
  }
  if (hasZealMode(ZealMode::CheckGrayMarking) && !isIncrementalGCInProgress()) {
    MOZ_RELEASE_ASSERT(CheckGrayMarkingState(rt));
  }
#endif
  stats().log("GC slice ending in state %s", StateName(incrementalState));

  UnscheduleZones(this);
}
SliceBudget GCRuntime::defaultBudget(JS::GCReason reason, int64_t millis) {
  // millis == 0 means use internal GC scheduling logic to come up with
  // a duration for the slice budget. This may end up still being zero
  // based on preferences.
  if (millis == 0) {
    millis = defaultSliceBudgetMS();
  }

  // If the embedding has registered a callback for creating SliceBudgets,
  // use it.
  if (createBudgetCallback) {
    return createBudgetCallback(reason, millis);
  }

  // Otherwise, the preference can request an unlimited duration slice.
  if (millis == 0) {
    return SliceBudget::unlimited();
  }

  return SliceBudget(TimeBudget(millis));
}
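// Worked example (the actual slice duration comes from a preference; the
// number here is illustrative): defaultBudget(reason, 0) with
// defaultSliceBudgetMS() == 10 produces a 10 ms TimeBudget unless the
// embedding's createBudgetCallback overrides it; if the preference itself is
// zero, the budget is unlimited and the slice runs to completion.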
void GCRuntime::gc(JS::GCOptions options, JS::GCReason reason) {
  if (!isIncrementalGCInProgress()) {
    setGCOptions(options);
  }

  collect(true, SliceBudget::unlimited(), reason);
}
void GCRuntime::startGC(JS::GCOptions options, JS::GCReason reason,
                        const js::SliceBudget& budget) {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  setGCOptions(options);

  if (!JS::IsIncrementalGCEnabled(rt->mainContextFromOwnThread())) {
    collect(true, SliceBudget::unlimited(), reason);
    return;
  }

  collect(false, budget, reason);
}
void GCRuntime::setGCOptions(JS::GCOptions options) {
  MOZ_ASSERT(maybeGcOptions == Nothing());
  maybeGcOptions = Some(options);
}
void GCRuntime::gcSlice(JS::GCReason reason, const js::SliceBudget& budget) {
  MOZ_ASSERT(isIncrementalGCInProgress());
  collect(false, budget, reason);
}
void GCRuntime::finishGC(JS::GCReason reason) {
  MOZ_ASSERT(isIncrementalGCInProgress());

  // If we're not collecting because we're out of memory then skip the
  // compacting phase if we need to finish an ongoing incremental GC
  // non-incrementally to avoid janking the browser.
  if (!IsOOMReason(initialReason)) {
    if (incrementalState == State::Compact) {
      abortGC();
      return;
    }

    isCompacting = false;
  }

  collect(false, SliceBudget::unlimited(), reason);
}
void GCRuntime::abortGC() {
  MOZ_ASSERT(isIncrementalGCInProgress());
  checkCanCallAPI();
  MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);

  collect(false, SliceBudget::unlimited(), JS::GCReason::ABORT_GC);
}
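// gc(), startGC(), gcSlice(), finishGC() and abortGC() are the primitives
// the public incremental GC API is built on. A minimal sketch of an embedder
// driving a collection through the JSAPI wrappers (hypothetical embedder
// code, not part of this file; the 10 ms budgets are chosen arbitrarily):
//
//   JS::PrepareForFullGC(cx);
//   JS::StartIncrementalGC(cx, JS::GCOptions::Normal, JS::GCReason::API,
//                          js::SliceBudget(js::TimeBudget(10)));
//   while (JS::IsIncrementalGCInProgress(cx)) {
//     // ... let the mutator run between slices ...
//     JS::PrepareForIncrementalGC(cx);
//     JS::IncrementalGCSlice(cx, JS::GCReason::API,
//                            js::SliceBudget(js::TimeBudget(10)));
//   }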
static bool ZonesSelected(GCRuntime* gc) {
  for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
    if (zone->isGCScheduled()) {
      return true;
    }
  }
  return false;
}
void GCRuntime::startDebugGC(JS::GCOptions options,
                             const SliceBudget& budget) {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  setGCOptions(options);

  if (!ZonesSelected(this)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  }

  collect(false, budget, JS::GCReason::DEBUG_GC);
}
void GCRuntime::debugGCSlice(const SliceBudget& budget) {
  MOZ_ASSERT(isIncrementalGCInProgress());

  if (!ZonesSelected(this)) {
    JS::PrepareForIncrementalGC(rt->mainContextFromOwnThread());
  }

  collect(false, budget, JS::GCReason::DEBUG_GC);
}
/* Schedule a full GC unless a zone will already be collected. */
void js::PrepareForDebugGC(JSRuntime* rt) {
  if (!ZonesSelected(&rt->gc)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  }
}
void GCRuntime::onOutOfMallocMemory() {
  // Stop allocating new chunks.
  allocTask.cancelAndWait();

  // Make sure we release anything queued for release.
  decommitTask.join();
  nursery().joinDecommitTask();

  // Wait for background free of nursery huge slots to finish.
  freeTask.join();

  AutoLockGC lock(this);
  onOutOfMallocMemory(lock);
}
void GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock) {
#ifdef DEBUG
  // Release any relocated arenas we may be holding on to, without releasing
  // the GC lock.
  releaseHeldRelocatedArenasWithoutUnlocking(lock);
#endif

  // Throw away any excess chunks we have lying around.
  freeEmptyChunks(lock);

  // Immediately decommit as many arenas as possible in the hopes that this
  // might let the OS scrape together enough pages to satisfy the failing
  // malloc request.
  if (DecommitEnabled()) {
    decommitFreeArenasWithoutUnlocking(lock);
  }
}
void GCRuntime::minorGC(JS::GCReason reason, gcstats::PhaseKind phase) {
  MOZ_ASSERT(!JS::RuntimeHeapIsBusy());

  MOZ_ASSERT_IF(reason == JS::GCReason::EVICT_NURSERY,
                !rt->mainContextFromOwnThread()->suppressGC);
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return;
  }

  collectNursery(JS::GCOptions::Normal, reason, phase);

#ifdef JS_GC_ZEAL
  if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
    gcstats::AutoPhase ap(stats(), phase);
    CheckHeapAfterGC(rt);
  }
#endif

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    maybeTriggerGCAfterAlloc(zone);
    maybeTriggerGCAfterMalloc(zone);
  }
}
void GCRuntime::collectNursery(JS::GCOptions options, JS::GCReason reason,
                               gcstats::PhaseKind phase) {
  AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());

  uint32_t numAllocs = 0;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    numAllocs += zone->getAndResetTenuredAllocsSinceMinorGC();
  }
  stats().setAllocsSinceMinorGCTenured(numAllocs);

  gcstats::AutoPhase ap(stats(), phase);

  nursery().collect(options, reason);

  startBackgroundFreeAfterMinorGC();

  // We ignore gcMaxBytes when allocating for minor collection. However, if we
  // overflowed, we disable the nursery. The next time we allocate, we'll fail
  // because bytes >= gcMaxBytes.
  if (heapSize.bytes() >= tunables.gcMaxBytes()) {
    if (!nursery().isEmpty()) {
      nursery().collect(options, JS::GCReason::DISABLE_GENERATIONAL_GC);
      MOZ_ASSERT(nursery().isEmpty());
      startBackgroundFreeAfterMinorGC();
    }
    nursery().disable();
  }
}
void GCRuntime::startBackgroundFreeAfterMinorGC() {
  // Called after nursery collection. Free whatever blocks are safe to free
  // now.

  AutoLockHelperThreadState lock;

  lifoBlocksToFree.ref().transferFrom(&lifoBlocksToFreeAfterNextMinorGC.ref());

  if (nursery().tenuredEverything) {
    lifoBlocksToFree.ref().transferFrom(
        &lifoBlocksToFreeAfterFullMinorGC.ref());
  } else {
    lifoBlocksToFreeAfterNextMinorGC.ref().transferFrom(
        &lifoBlocksToFreeAfterFullMinorGC.ref());
  }

  if (lifoBlocksToFree.ref().isEmpty() &&
      buffersToFreeAfterMinorGC.ref().empty()) {
    return;
  }

  freeTask.startOrRunIfIdle(lock);
}
bool GCRuntime::gcIfRequestedImpl(bool eagerOk) {
  // This method returns whether a major GC was performed.

  if (nursery().minorGCRequested()) {
    minorGC(nursery().minorGCTriggerReason());
  }

  JS::GCReason reason = wantMajorGC(eagerOk);
  if (reason == JS::GCReason::NO_REASON) {
    return false;
  }

  SliceBudget budget = defaultBudget(reason, 0);
  if (!isIncrementalGCInProgress()) {
    startGC(JS::GCOptions::Normal, reason, budget);
  } else {
    gcSlice(reason, budget);
  }
  return true;
}
void js::gc::FinishGC(JSContext* cx, JS::GCReason reason) {
  // Calling this when GC is suppressed won't have any effect.
  MOZ_ASSERT(!cx->suppressGC);

  // GC callbacks may run arbitrary code, including JS. Check this regardless
  // of whether we GC for this invocation.
  MOZ_ASSERT(cx->isNurseryAllocAllowed());

  if (JS::IsIncrementalGCInProgress(cx)) {
    JS::PrepareForIncrementalGC(cx);
    JS::FinishIncrementalGC(cx, reason);
  }
}
void js::gc::WaitForBackgroundTasks(JSContext* cx) {
  cx->runtime()->gc.waitForBackgroundTasks();
}
void GCRuntime::waitForBackgroundTasks() {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  MOZ_ASSERT(sweepTask.isIdle());
  MOZ_ASSERT(decommitTask.isIdle());
  MOZ_ASSERT(markTask.isIdle());

  allocTask.join();
  freeTask.join();
  nursery().joinDecommitTask();
}
Realm* js::NewRealm(JSContext* cx, JSPrincipals* principals,
                    const JS::RealmOptions& options) {
  JSRuntime* rt = cx->runtime();
  JS_AbortIfWrongThread(cx);

  UniquePtr<Zone> zoneHolder;
  UniquePtr<Compartment> compHolder;

  Compartment* comp = nullptr;
  Zone* zone = nullptr;
  JS::CompartmentSpecifier compSpec =
      options.creationOptions().compartmentSpecifier();
  switch (compSpec) {
    case JS::CompartmentSpecifier::NewCompartmentInSystemZone:
      // systemZone might be null here, in which case we'll make a zone and
      // set this field below.
      zone = rt->gc.systemZone;
      break;
    case JS::CompartmentSpecifier::NewCompartmentInExistingZone:
      zone = options.creationOptions().zone();
      MOZ_ASSERT(zone);
      break;
    case JS::CompartmentSpecifier::ExistingCompartment:
      comp = options.creationOptions().compartment();
      zone = comp->zone();
      break;
    case JS::CompartmentSpecifier::NewCompartmentAndZone:
      break;
  }

  if (!zone) {
    Zone::Kind kind = Zone::NormalZone;
    const JSPrincipals* trusted = rt->trustedPrincipals();
    if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone ||
        (principals && principals == trusted)) {
      kind = Zone::SystemZone;
    }

    zoneHolder = MakeUnique<Zone>(cx->runtime(), kind);
    if (!zoneHolder || !zoneHolder->init()) {
      ReportOutOfMemory(cx);
      return nullptr;
    }

    zone = zoneHolder.get();
  }

  bool invisibleToDebugger = options.creationOptions().invisibleToDebugger();
  if (comp) {
    // Debugger visibility is per-compartment, not per-realm, so make sure the
    // new realm's visibility matches its compartment's.
    MOZ_ASSERT(comp->invisibleToDebugger() == invisibleToDebugger);
  } else {
    compHolder = cx->make_unique<JS::Compartment>(zone, invisibleToDebugger);
    if (!compHolder) {
      return nullptr;
    }

    comp = compHolder.get();
  }

  UniquePtr<Realm> realm(cx->new_<Realm>(comp, options));
  if (!realm) {
    return nullptr;
  }
  realm->init(cx, principals);

  // Make sure we don't put system and non-system realms in the same
  // compartment.
  if (!compHolder) {
    MOZ_RELEASE_ASSERT(realm->isSystem() == IsSystemCompartment(comp));
  }

  AutoLockGC lock(rt);

  // Reserve space in the Vectors before we start mutating them.
  if (!comp->realms().reserve(comp->realms().length() + 1) ||
      (compHolder &&
       !zone->compartments().reserve(zone->compartments().length() + 1)) ||
      (zoneHolder && !rt->gc.zones().reserve(rt->gc.zones().length() + 1))) {
    ReportOutOfMemory(cx);
    return nullptr;
  }

  // After this everything must be infallible.

  comp->realms().infallibleAppend(realm.get());

  if (compHolder) {
    zone->compartments().infallibleAppend(compHolder.release());
  }

  if (zoneHolder) {
    rt->gc.zones().infallibleAppend(zoneHolder.release());
  }

  // Lazily set the runtime's system zone.
  if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone) {
    MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
    MOZ_ASSERT(zone->isSystemZone());
    rt->gc.systemZone = zone;
  }

  return realm.release();
}
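// js::NewRealm is normally reached from the public API. A minimal sketch
// (hypothetical embedder code, not part of this file; MyGlobalClass is an
// assumed JSClass with JSCLASS_GLOBAL_FLAGS):
//
//   JS::RealmOptions options;  // default: a fresh compartment and zone
//   JSObject* global = JS_NewGlobalObject(cx, &MyGlobalClass, nullptr,
//                                         JS::FireOnNewGlobalHook, options);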
void GCRuntime::runDebugGC() {
#ifdef JS_GC_ZEAL
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return;
  }

  if (hasZealMode(ZealMode::GenerationalGC)) {
    return minorGC(JS::GCReason::DEBUG_GC);
  }

  PrepareForDebugGC(rt);

  auto budget = SliceBudget::unlimited();
  if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
    /*
     * Start with a small slice limit and double it every slice. This
     * ensures that we get multiple slices, and that collection runs to
     * completion.
     */
    if (!isIncrementalGCInProgress()) {
      zealSliceBudget = zealFrequency / 2;
    } else {
      zealSliceBudget *= 2;
    }
    budget = SliceBudget(WorkBudget(zealSliceBudget));

    js::gc::State initialState = incrementalState;
    if (!isIncrementalGCInProgress()) {
      setGCOptions(JS::GCOptions::Shrink);
    }
    collect(false, budget, JS::GCReason::DEBUG_GC);

    /* Reset the slice size when we get to the sweep or compact phases. */
    if ((initialState == State::Mark && incrementalState == State::Sweep) ||
        (initialState == State::Sweep && incrementalState == State::Compact)) {
      zealSliceBudget = zealFrequency / 2;
    }
  } else if (hasIncrementalTwoSliceZealMode()) {
    // These modes trigger incremental GC that happens in two slices and the
    // supplied budget is ignored by incrementalSlice.
    budget = SliceBudget(WorkBudget(1));

    if (!isIncrementalGCInProgress()) {
      setGCOptions(JS::GCOptions::Normal);
    }
    collect(false, budget, JS::GCReason::DEBUG_GC);
  } else if (hasZealMode(ZealMode::Compact)) {
    gc(JS::GCOptions::Shrink, JS::GCReason::DEBUG_GC);
  } else {
    gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
  }
#endif
}
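// Worked example: with zealFrequency == 100, an IncrementalMultipleSlices
// collection starts with a work budget of 50, then 100, 200, and so on,
// snapping back to zealFrequency / 2 when the collector crosses into the
// sweep or compact phases above.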
void GCRuntime::setFullCompartmentChecks(bool enabled) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  fullCompartmentChecks = enabled;
}
void GCRuntime::notifyRootsRemoved() {
  rootsRemoved = true;

#ifdef JS_GC_ZEAL
  /* Schedule a GC to happen "soon". */
  if (hasZealMode(ZealMode::RootsChange)) {
    nextScheduled = 1;
  }
#endif
}
#ifdef JS_GC_ZEAL
bool GCRuntime::selectForMarking(JSObject* object) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  return selectedForMarking.ref().get().append(object);
}

void GCRuntime::clearSelectedForMarking() {
  selectedForMarking.ref().get().clearAndFree();
}

void GCRuntime::setDeterministic(bool enabled) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  deterministicOnly = enabled;
}
#endif
AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc() {
  TlsContext.get()->disallowNurseryAlloc();
}

AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc() {
  TlsContext.get()->allowNurseryAlloc();
}
#ifdef JSGC_HASH_TABLE_CHECKS
void GCRuntime::checkHashTablesAfterMovingGC() {
  /*
   * Check that internal hash tables no longer have any pointers to things
   * that have been moved.
   */
  rt->geckoProfiler().checkStringsMapAfterMovingGC();
  if (rt->hasJitRuntime() && rt->jitRuntime()->hasInterpreterEntryMap()) {
    rt->jitRuntime()->getInterpreterEntryMap()->checkScriptsAfterMovingGC();
  }
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    zone->checkUniqueIdTableAfterMovingGC();
    zone->shapeZone().checkTablesAfterMovingGC();
    zone->checkAllCrossCompartmentWrappersAfterMovingGC();
    zone->checkScriptMapsAfterMovingGC();

    // Note: CompactPropMaps never have a table.
    JS::AutoCheckCannotGC nogc;
    for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
         map.next()) {
      if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
        table->checkAfterMovingGC();
      }
    }
    for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
         map.next()) {
      if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
        table->checkAfterMovingGC();
      }
    }
  }

  for (CompartmentsIter c(this); !c.done(); c.next()) {
    for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
      r->dtoaCache.checkCacheAfterMovingGC();
      if (r->debugEnvs()) {
        r->debugEnvs()->checkHashTablesAfterMovingGC();
      }
    }
  }
}
#endif
bool GCRuntime::hasZone(Zone* target) {
  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    if (zone == target) {
      return true;
    }
  }
  return false;
}
void AutoAssertEmptyNursery::checkCondition(JSContext* cx) {
  if (!noAlloc) {
    noAlloc.emplace();
  }
  this->cx = cx;
  MOZ_ASSERT(cx->nursery().isEmpty());
}
AutoEmptyNursery::AutoEmptyNursery(JSContext* cx) {
  MOZ_ASSERT(!cx->suppressGC);
  cx->runtime()->gc.stats().suspendPhases();
  cx->runtime()->gc.evictNursery(JS::GCReason::EVICT_NURSERY);
  cx->runtime()->gc.stats().resumePhases();
  checkCondition(cx);
}
namespace js {

// We don't want jsfriendapi.h to depend on GenericPrinter,
// so these functions are declared directly in the cpp.

extern JS_PUBLIC_API void DumpString(JSString* str, js::GenericPrinter& out);

}  // namespace js
void js::gc::Cell::dump(js::GenericPrinter& out) const {
  switch (getTraceKind()) {
    case JS::TraceKind::Object:
      reinterpret_cast<const JSObject*>(this)->dump(out);
      break;

    case JS::TraceKind::String:
      js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), out);
      break;

    case JS::TraceKind::Shape:
      reinterpret_cast<const Shape*>(this)->dump(out);
      break;

    default:
      out.printf("%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()),
                 (void*)this);
      break;
  }
}
// For use in a debugger.
void js::gc::Cell::dump() const {
  js::Fprinter out(stderr);
  dump(out);
}
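// For example, from gdb (illustrative session): `call ptr->dump()`, where
// ptr is any live js::gc::Cell*, prints the cell via the overloads above to
// stderr.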
JS_PUBLIC_API bool js::gc::detail::CanCheckGrayBits(const TenuredCell* cell) {
  // We do not check the gray marking state of cells in the following cases:
  //
  // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
  //
  // 2) When we are in an incremental GC and examine a cell that is in a zone
  // that is not being collected. Gray targets of CCWs that are marked black
  // by a barrier will eventually be marked black in a later GC slice.
  //
  // 3) When mark bits are being cleared concurrently by a helper thread.

  auto* runtime = cell->runtimeFromAnyThread();
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime));

  if (!runtime->gc.areGrayBitsValid()) {
    return false;
  }

  JS::Zone* zone = cell->zone();

  if (runtime->gc.isIncrementalGCInProgress() && !zone->wasGCStarted()) {
    return false;
  }

  return !zone->isGCPreparing();
}
JS_PUBLIC_API bool js::gc::detail::CellIsMarkedGrayIfKnown(
    const TenuredCell* cell) {
  MOZ_ASSERT_IF(cell->isPermanentAndMayBeShared(), cell->isMarkedBlack());
  if (!cell->isMarkedGray()) {
    return false;
  }

  return CanCheckGrayBits(cell);
}
#ifdef DEBUG

JS_PUBLIC_API void js::gc::detail::AssertCellIsNotGray(const Cell* cell) {
  if (!cell->isTenured()) {
    return;
  }

  // Check that a cell is not marked gray.
  //
  // Since this is a debug-only check, take account of the eventual mark state
  // of cells that will be marked black by the next GC slice in an incremental
  // GC. For performance reasons we don't do this in CellIsMarkedGrayIfKnown.

  const auto* tc = &cell->asTenured();
  if (!tc->isMarkedGray() || !CanCheckGrayBits(tc)) {
    return;
  }

  // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
  // called during GC and while iterating the heap for memory reporting.
  MOZ_ASSERT(!JS::RuntimeHeapIsCycleCollecting());

  if (tc->zone()->isGCMarkingBlackAndGray()) {
    // We are doing gray marking in the cell's zone. Even if the cell is
    // currently marked gray it may eventually be marked black. Delay checking
    // non-black cells until we finish gray marking.

    if (!tc->isMarkedBlack()) {
      JSRuntime* rt = tc->zone()->runtimeFromMainThread();
      AutoEnterOOMUnsafeRegion oomUnsafe;
      if (!rt->gc.cellsToAssertNotGray.ref().append(cell)) {
        oomUnsafe.crash("Can't append to delayed gray checks list");
      }
    }
    return;
  }

  MOZ_ASSERT(!tc->isMarkedGray());
}
extern JS_PUBLIC_API bool js::gc::detail::ObjectIsMarkedBlack(
    const JSObject* obj) {
  return obj->isMarkedBlack();
}

#endif
js::gc::ClearEdgesTracer::ClearEdgesTracer(JSRuntime* rt)
    : GenericTracerImpl(rt, JS::TracerKind::ClearEdges,
                        JS::WeakMapTraceAction::TraceKeysAndValues) {}
template <typename T>
void js::gc::ClearEdgesTracer::onEdge(T** thingp, const char* name) {
  // We don't handle removing pointers to nursery edges from the store buffer
  // with this tracer. Check that this doesn't happen.
  T* thing = *thingp;
  MOZ_ASSERT(!IsInsideNursery(thing));

  // Fire the pre-barrier since we're removing an edge from the graph.
  InternalBarrierMethods<T*>::preBarrier(thing);

  // Clear the edge.
  *thingp = nullptr;
}
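// Sketch of how this tracer behaves when driven over an edge (hypothetical
// caller, not part of this file; semantics as implemented by onEdge above):
//
//   js::gc::ClearEdgesTracer trc(rt);
//   JS::TraceEdge(&trc, &heapObjPtr, "edge");  // fires the pre-barrier and
//                                              // leaves heapObjPtr nullptr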
void GCRuntime::setPerformanceHint(PerformanceHint hint) {
  if (hint == PerformanceHint::InPageLoad) {
    inPageLoadCount++;
  } else {
    MOZ_ASSERT(inPageLoadCount);
    inPageLoadCount--;
  }