Bug 1865597 - Add error checking when initializing parallel marking and disable on...
[gecko.git] js/src/gc/GC.cpp
1 /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
2 * vim: set ts=8 sts=2 et sw=2 tw=80:
3 * This Source Code Form is subject to the terms of the Mozilla Public
4 * License, v. 2.0. If a copy of the MPL was not distributed with this
5 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
7 /*
8 * [SMDOC] Garbage Collector
10 * This code implements an incremental mark-and-sweep garbage collector, with
11 * most sweeping carried out in the background on a parallel thread.
13 * Full vs. zone GC
14 * ----------------
16 * The collector can collect all zones at once, or a subset. These types of
17 * collection are referred to as a full GC and a zone GC respectively.
19 * It is possible for an incremental collection that started out as a full GC to
20 * become a zone GC if new zones are created during the course of the
21 * collection.
23 * Incremental collection
24 * ----------------------
26 * For a collection to be carried out incrementally the following conditions
27 * must be met:
28 * - the collection must be run by calling js::GCSlice() rather than js::GC()
29 * - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
31 * The last condition is an engine-internal mechanism to ensure that incremental
32 * collection is not carried out without the correct barriers being implemented.
33 * For more information see 'Incremental marking' below.
35 * If the collection is not incremental, all foreground activity happens inside
36 * a single call to GC() or GCSlice(). However the collection is not complete
37 * until the background sweeping activity has finished.
39 * An incremental collection proceeds as a series of slices, interleaved with
40 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
41 * budget. The slice finishes as soon as possible after the requested time has
42 * passed.
44 * Collector states
45 * ----------------
47 * The collector proceeds through the following states, the current state being
48 * held in JSRuntime::gcIncrementalState:
50 * - Prepare - unmarks GC things, discards JIT code and other setup
51 * - MarkRoots - marks the stack and other roots
52 * - Mark - incrementally marks reachable things
53 * - Sweep - sweeps zones in groups and continues marking unswept zones
54 * - Finalize - performs background finalization, concurrent with mutator
55 * - Compact - incrementally compacts by zone
56 * - Decommit - performs background decommit and chunk removal
58 * Roots are marked in the first MarkRoots slice; this is the start of the GC
59 * proper. The following states can take place over one or more slices.
61 * In other words an incremental collection proceeds like this:
63 * Slice 1: Prepare: Starts background task to unmark GC things
65 * ... JS code runs, background unmarking finishes ...
67 * Slice 2: MarkRoots: Roots are pushed onto the mark stack.
68 * Mark: The mark stack is processed by popping an element,
69 * marking it, and pushing its children.
71 * ... JS code runs ...
73 * Slice 3: Mark: More mark stack processing.
75 * ... JS code runs ...
77 * Slice n-1: Mark: More mark stack processing.
79 * ... JS code runs ...
81 * Slice n: Mark: Mark stack is completely drained.
82 * Sweep: Select first group of zones to sweep and sweep them.
84 * ... JS code runs ...
86 * Slice n+1: Sweep: Mark objects in unswept zones that were newly
87 * identified as alive (see below). Then sweep more zone
88 * sweep groups.
90 * ... JS code runs ...
92 * Slice n+2: Sweep: Mark objects in unswept zones that were newly
93 * identified as alive. Then sweep more zones.
95 * ... JS code runs ...
97 * Slice m: Sweep: Sweeping is finished, and background sweeping
98 * started on the helper thread.
100 * ... JS code runs, remaining sweeping done on background thread ...
102 * When background sweeping finishes the GC is complete.
104 * Incremental marking
105 * -------------------
107 * Incremental collection requires close collaboration with the mutator (i.e.,
108 * JS code) to guarantee correctness.
110 * - During an incremental GC, if a memory location (except a root) is written
111 * to, then the value it previously held must be marked. Write barriers
112 * ensure this.
114 * - Any object that is allocated during incremental GC must start out marked.
116 * - Roots are marked in the first slice and hence don't need write barriers.
117 * Roots are things like the C stack and the VM stack.
119 * The problem that write barriers solve is that between slices the mutator can
120 * change the object graph. We must ensure that it cannot do this in such a way
121 * that makes us fail to mark a reachable object (marking an unreachable object
122 * is tolerable).
124 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
125 * promise to mark at least everything that is reachable at the beginning of
126 * collection. To implement it we mark the old contents of every non-root memory
127 * location written to by the mutator while the collection is in progress, using
128 * write barriers. This is described in gc/Barrier.h.
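 *
 * As an illustrative sketch only (the real barriers live in gc/Barrier.h; the
 * helper names below are placeholders, not actual engine APIs), a pre-write
 * barrier conceptually does:
 *
 *   void PreWriteBarrier(Value* slot, const Value& newValue) {
 *     if (IsIncrementalGCInProgress() && slot->isGCThing()) {
 *       MarkValue(*slot);  // record the old contents before they are lost
 *     }
 *     *slot = newValue;
 *   }
 *
 * Combined with allocating new objects marked (see above), this preserves the
 * snapshot-at-the-beginning invariant.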
130 * Incremental sweeping
131 * --------------------
133 * Sweeping is difficult to do incrementally because object finalizers must be
134 * run at the start of sweeping, before any mutator code runs. The reason is
135 * that some objects use their finalizers to remove themselves from caches. If
136 * mutator code were allowed to run after the start of sweeping, it could observe
137 * the state of the cache and create a new reference to an object that was just
138 * about to be destroyed.
140 * Sweeping all finalizable objects in one go would introduce long pauses, so
141 * instead sweeping is broken up into groups of zones. Zones which are not yet
142 * being swept are still marked, so the issue above does not apply.
144 * The order of sweeping is restricted by cross compartment pointers - for
145 * example say that object |a| from zone A points to object |b| in zone B and
146 * neither object was marked when we transitioned to the Sweep phase. Imagine we
147 * sweep B first and then return to the mutator. It's possible that the mutator
148 * could cause |a| to become alive through a read barrier (perhaps it was a
149 * shape that was accessed via a shape table). Then we would need to mark |b|,
150 * which |a| points to, but |b| has already been swept.
152 * So if there is such a pointer then marking of zone B must not finish before
153 * marking of zone A. Pointers which form a cycle between zones therefore
154 * restrict those zones to being swept at the same time, and these are found
155 * using Tarjan's algorithm for finding the strongly connected components of a
156 * graph.
158 * GC things without finalizers, and things with finalizers that are able to run
159 * in the background, are swept on the background thread. This accounts for most
160 * of the sweeping work.
162 * Reset
163 * -----
165 * During incremental collection it is possible, although unlikely, for
166 * conditions to change such that incremental collection is no longer safe. In
167 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
168 * the mark state, this just stops marking, but if we have started sweeping
169 * already, we continue non-incrementally until we have swept the current sweep
170 * group. Following a reset, a new collection is started.
172 * Compacting GC
173 * -------------
175 * Compacting GC happens at the end of a major GC as part of the last slice.
176 * There are three parts:
178 * - Arenas are selected for compaction.
179 * - The contents of those arenas are moved to new arenas.
180 * - All references to moved things are updated.
182 * Collecting Atoms
183 * ----------------
185 * Atoms are collected differently from other GC things. They are contained in
186 * a special zone and things in other zones may have pointers to them that are
187 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
188 * with the atoms it might be keeping alive, and atoms are only collected if
189 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
190 * this bitmap is managed.
193 #include "gc/GC-inl.h"
195 #include "mozilla/Range.h"
196 #include "mozilla/ScopeExit.h"
197 #include "mozilla/TextUtils.h"
198 #include "mozilla/TimeStamp.h"
200 #include <algorithm>
201 #include <initializer_list>
202 #include <iterator>
203 #include <stdlib.h>
204 #include <string.h>
205 #include <utility>
207 #include "jsapi.h" // JS_AbortIfWrongThread
208 #include "jstypes.h"
210 #include "debugger/DebugAPI.h"
211 #include "gc/ClearEdgesTracer.h"
212 #include "gc/GCContext.h"
213 #include "gc/GCInternals.h"
214 #include "gc/GCLock.h"
215 #include "gc/GCProbes.h"
216 #include "gc/Memory.h"
217 #include "gc/ParallelMarking.h"
218 #include "gc/ParallelWork.h"
219 #include "gc/WeakMap.h"
220 #include "jit/ExecutableAllocator.h"
221 #include "jit/JitCode.h"
222 #include "jit/JitRuntime.h"
223 #include "jit/ProcessExecutableMemory.h"
224 #include "js/HeapAPI.h" // JS::GCCellPtr
225 #include "js/Printer.h"
226 #include "js/SliceBudget.h"
227 #include "util/DifferentialTesting.h"
228 #include "vm/BigIntType.h"
229 #include "vm/EnvironmentObject.h"
230 #include "vm/GetterSetter.h"
231 #include "vm/HelperThreadState.h"
232 #include "vm/JitActivation.h"
233 #include "vm/JSObject.h"
234 #include "vm/JSScript.h"
235 #include "vm/PropMap.h"
236 #include "vm/Realm.h"
237 #include "vm/Shape.h"
238 #include "vm/StringType.h"
239 #include "vm/SymbolType.h"
240 #include "vm/Time.h"
242 #include "gc/Heap-inl.h"
243 #include "gc/Nursery-inl.h"
244 #include "gc/ObjectKind-inl.h"
245 #include "gc/PrivateIterators-inl.h"
246 #include "vm/GeckoProfiler-inl.h"
247 #include "vm/JSContext-inl.h"
248 #include "vm/Realm-inl.h"
249 #include "vm/Stack-inl.h"
251 using namespace js;
252 using namespace js::gc;
254 using mozilla::MakeScopeExit;
255 using mozilla::Maybe;
256 using mozilla::Nothing;
257 using mozilla::Some;
258 using mozilla::TimeDuration;
259 using mozilla::TimeStamp;
261 using JS::AutoGCRooter;
263 const AllocKind gc::slotsToThingKind[] = {
264 // clang-format off
265 /* 0 */ AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT2, AllocKind::OBJECT4,
266 /* 4 */ AllocKind::OBJECT4, AllocKind::OBJECT8, AllocKind::OBJECT8, AllocKind::OBJECT8,
267 /* 8 */ AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
268 /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
269 /* 16 */ AllocKind::OBJECT16
270 // clang-format on
273 static_assert(std::size(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
274 "We have defined a slot count for each kind.");
276 MOZ_THREAD_LOCAL(JS::GCContext*) js::TlsGCContext;
278 JS::GCContext::GCContext(JSRuntime* runtime) : runtime_(runtime) {}
280 JS::GCContext::~GCContext() {
281 MOZ_ASSERT(!hasJitCodeToPoison());
282 MOZ_ASSERT(!isCollecting());
283 MOZ_ASSERT(gcUse() == GCUse::None);
284 MOZ_ASSERT(!gcSweepZone());
285 MOZ_ASSERT(!isTouchingGrayThings());
288 void JS::GCContext::poisonJitCode() {
289 if (hasJitCodeToPoison()) {
290 jit::ExecutableAllocator::poisonCode(runtime(), jitPoisonRanges);
291 jitPoisonRanges.clearAndFree();
295 #ifdef DEBUG
296 void GCRuntime::verifyAllChunks() {
297 AutoLockGC lock(this);
298 fullChunks(lock).verifyChunks();
299 availableChunks(lock).verifyChunks();
300 emptyChunks(lock).verifyChunks();
302 #endif
304 void GCRuntime::setMinEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
305 minEmptyChunkCount_ = value;
306 if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
307 maxEmptyChunkCount_ = minEmptyChunkCount_;
309 MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
312 void GCRuntime::setMaxEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
313 maxEmptyChunkCount_ = value;
314 if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
315 minEmptyChunkCount_ = maxEmptyChunkCount_;
317 MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
320 inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
321 return emptyChunks(lock).count() > minEmptyChunkCount(lock);
324 ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
325 MOZ_ASSERT(emptyChunks(lock).verify());
326 MOZ_ASSERT(minEmptyChunkCount(lock) <= maxEmptyChunkCount(lock));
328 ChunkPool expired;
329 while (tooManyEmptyChunks(lock)) {
330 TenuredChunk* chunk = emptyChunks(lock).pop();
331 prepareToFreeChunk(chunk->info);
332 expired.push(chunk);
335 MOZ_ASSERT(expired.verify());
336 MOZ_ASSERT(emptyChunks(lock).verify());
337 MOZ_ASSERT(emptyChunks(lock).count() <= maxEmptyChunkCount(lock));
338 MOZ_ASSERT(emptyChunks(lock).count() <= minEmptyChunkCount(lock));
339 return expired;
342 static void FreeChunkPool(ChunkPool& pool) {
343 for (ChunkPool::Iter iter(pool); !iter.done();) {
344 TenuredChunk* chunk = iter.get();
345 iter.next();
346 pool.remove(chunk);
347 MOZ_ASSERT(chunk->unused());
348 UnmapPages(static_cast<void*>(chunk), ChunkSize);
350 MOZ_ASSERT(pool.count() == 0);
353 void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
354 FreeChunkPool(emptyChunks(lock));
357 inline void GCRuntime::prepareToFreeChunk(TenuredChunkInfo& info) {
358 MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
359 numArenasFreeCommitted -= info.numArenasFreeCommitted;
360 stats().count(gcstats::COUNT_DESTROY_CHUNK);
361 #ifdef DEBUG
363 * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
364 * frees the chunk.
366 info.numArenasFreeCommitted = 0;
367 #endif
370 void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
371 MOZ_ASSERT(arena->allocated());
372 MOZ_ASSERT(!arena->onDelayedMarkingList());
373 MOZ_ASSERT(TlsGCContext.get()->isFinalizing());
375 arena->zone->gcHeapSize.removeGCArena(heapSize);
376 arena->release(lock);
377 arena->chunk()->releaseArena(this, arena, lock);
380 GCRuntime::GCRuntime(JSRuntime* rt)
381 : rt(rt),
382 systemZone(nullptr),
383 mainThreadContext(rt),
384 heapState_(JS::HeapState::Idle),
385 stats_(this),
386 sweepingTracer(rt),
387 fullGCRequested(false),
388 helperThreadRatio(TuningDefaults::HelperThreadRatio),
389 maxHelperThreads(TuningDefaults::MaxHelperThreads),
390 helperThreadCount(1),
391 createBudgetCallback(nullptr),
392 minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
393 maxEmptyChunkCount_(TuningDefaults::MaxEmptyChunkCount),
394 rootsHash(256),
395 nextCellUniqueId_(LargestTaggedNullCellPointer +
396 1), // Ensure disjoint from null tagged pointers.
397 numArenasFreeCommitted(0),
398 verifyPreData(nullptr),
399 lastGCStartTime_(TimeStamp::Now()),
400 lastGCEndTime_(TimeStamp::Now()),
401 incrementalGCEnabled(TuningDefaults::IncrementalGCEnabled),
402 perZoneGCEnabled(TuningDefaults::PerZoneGCEnabled),
403 numActiveZoneIters(0),
404 cleanUpEverything(false),
405 grayBitsValid(true),
406 majorGCTriggerReason(JS::GCReason::NO_REASON),
407 minorGCNumber(0),
408 majorGCNumber(0),
409 number(0),
410 sliceNumber(0),
411 isFull(false),
412 incrementalState(gc::State::NotActive),
413 initialState(gc::State::NotActive),
414 useZeal(false),
415 lastMarkSlice(false),
416 safeToYield(true),
417 markOnBackgroundThreadDuringSweeping(false),
418 useBackgroundThreads(false),
419 #ifdef DEBUG
420 hadShutdownGC(false),
421 #endif
422 requestSliceAfterBackgroundTask(false),
423 lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
424 lifoBlocksToFreeAfterMinorGC(
425 (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
426 sweepGroupIndex(0),
427 sweepGroups(nullptr),
428 currentSweepGroup(nullptr),
429 sweepZone(nullptr),
430 abortSweepAfterCurrentGroup(false),
431 sweepMarkResult(IncrementalProgress::NotFinished),
432 #ifdef DEBUG
433 testMarkQueue(rt),
434 #endif
435 startedCompacting(false),
436 zonesCompacted(0),
437 #ifdef DEBUG
438 relocatedArenasToRelease(nullptr),
439 #endif
440 #ifdef JS_GC_ZEAL
441 markingValidator(nullptr),
442 #endif
443 defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
444 incrementalAllowed(true),
445 compactingEnabled(TuningDefaults::CompactingEnabled),
446 parallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled),
447 rootsRemoved(false),
448 #ifdef JS_GC_ZEAL
449 zealModeBits(0),
450 zealFrequency(0),
451 nextScheduled(0),
452 deterministicOnly(false),
453 zealSliceBudget(0),
454 selectedForMarking(rt),
455 #endif
456 fullCompartmentChecks(false),
457 gcCallbackDepth(0),
458 alwaysPreserveCode(false),
459 lowMemoryState(false),
460 lock(mutexid::GCLock),
461 delayedMarkingLock(mutexid::GCDelayedMarkingLock),
462 allocTask(this, emptyChunks_.ref()),
463 unmarkTask(this),
464 markTask(this),
465 sweepTask(this),
466 freeTask(this),
467 decommitTask(this),
468 nursery_(this),
469 storeBuffer_(rt, nursery()),
470 lastAllocRateUpdateTime(TimeStamp::Now()) {
473 using CharRange = mozilla::Range<const char>;
474 using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;
476 static bool SplitStringBy(const CharRange& text, char delimiter,
477 CharRangeVector* result) {
478 auto start = text.begin();
479 for (auto ptr = start; ptr != text.end(); ptr++) {
480 if (*ptr == delimiter) {
481 if (!result->emplaceBack(start, ptr)) {
482 return false;
484 start = ptr + 1;
488 return result->emplaceBack(start, text.end());
491 static bool ParseTimeDuration(const CharRange& text,
492 TimeDuration* durationOut) {
493 const char* str = text.begin().get();
494 char* end;
495 long millis = strtol(str, &end, 10);
496 *durationOut = TimeDuration::FromMilliseconds(double(millis));
497 return str != end && end == text.end().get();
500 static void PrintProfileHelpAndExit(const char* envName, const char* helpText) {
501 fprintf(stderr, "%s=N[,(main|all)]\n", envName);
502 fprintf(stderr, "%s", helpText);
503 exit(0);
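// For example, given a profiling environment variable such as JS_GC_PROFILE
// (illustrative; the exact name is supplied by the caller via envName):
//   JS_GC_PROFILE=10      - report main-runtime GC events taking >= 10ms
//   JS_GC_PROFILE=10,all  - also report events from worker runtimes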
506 void js::gc::ReadProfileEnv(const char* envName, const char* helpText,
507 bool* enableOut, bool* workersOut,
508 TimeDuration* thresholdOut) {
509 *enableOut = false;
510 *workersOut = false;
511 *thresholdOut = TimeDuration::Zero();
513 const char* env = getenv(envName);
514 if (!env) {
515 return;
518 if (strcmp(env, "help") == 0) {
519 PrintProfileHelpAndExit(envName, helpText);
522 CharRangeVector parts;
523 auto text = CharRange(env, strlen(env));
524 if (!SplitStringBy(text, ',', &parts)) {
525 MOZ_CRASH("OOM parsing environment variable");
528 if (parts.length() == 0 || parts.length() > 2) {
529 PrintProfileHelpAndExit(envName, helpText);
532 *enableOut = true;
534 if (!ParseTimeDuration(parts[0], thresholdOut)) {
535 PrintProfileHelpAndExit(envName, helpText);
538 if (parts.length() == 2) {
539 const char* threads = parts[1].begin().get();
540 if (strcmp(threads, "all") == 0) {
541 *workersOut = true;
542 } else if (strcmp(threads, "main") != 0) {
543 PrintProfileHelpAndExit(envName, helpText);
548 bool js::gc::ShouldPrintProfile(JSRuntime* runtime, bool enable,
549 bool profileWorkers, TimeDuration threshold,
550 TimeDuration duration) {
551 return enable && (runtime->isMainRuntime() || profileWorkers) &&
552 duration >= threshold;
555 #ifdef JS_GC_ZEAL
557 void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
558 uint32_t* scheduled) {
559 *zealBits = zealModeBits;
560 *frequency = zealFrequency;
561 *scheduled = nextScheduled;
564 const char gc::ZealModeHelpText[] =
565 " Specifies how zealous the garbage collector should be. Some of these "
566 "modes can\n"
567 " be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
568 "will activate\n"
569 " both modes 2 and 4. Modes can be specified by name or number.\n"
570 " \n"
571 " Values:\n"
572 " 0: (None) Normal amount of collection (resets all modes)\n"
573 " 1: (RootsChange) Collect when roots are added or removed\n"
574 " 2: (Alloc) Collect every N allocations (default: 100)\n"
575 " 4: (VerifierPre) Verify pre write barriers between instructions\n"
576 " 6: (YieldBeforeRootMarking) Incremental GC in two slices that yields "
577 "before root marking\n"
578 " 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
579 " 8: (YieldBeforeMarking) Incremental GC in two slices that yields "
580 "between\n"
581 " the root marking and marking phases\n"
582 " 9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
583 "between\n"
584 " the marking and sweeping phases\n"
585 " 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
586 " 11: (IncrementalMarkingValidator) Verify incremental marking\n"
587 " 12: (ElementsBarrier) Use the individual element post-write barrier\n"
588 " regardless of elements size\n"
589 " 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
590 " 14: (Compact) Perform a shrinking collection every N allocations\n"
591 " 15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
592 "every GC\n"
593 " 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
594 "yields\n"
595 " before sweeping the atoms table\n"
596 " 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
597 " 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
598 "yields\n"
599 " before sweeping weak caches\n"
600 " 21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
601 "yields\n"
602 " before sweeping foreground finalized objects\n"
603 " 22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
604 "yields\n"
605 " before sweeping non-object GC things\n"
606 " 23: (YieldBeforeSweepingPropMapTrees) Incremental GC in two slices "
607 "that "
608 "yields\n"
609 " before sweeping shape trees\n"
610 " 24: (CheckWeakMapMarking) Check weak map marking invariants after "
611 "every GC\n"
612 " 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
613 " during gray marking\n";
615 // The set of zeal modes that control incremental slices. These modes are
616 // mutually exclusive.
617 static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
618 ZealMode::YieldBeforeRootMarking,
619 ZealMode::YieldBeforeMarking,
620 ZealMode::YieldBeforeSweeping,
621 ZealMode::IncrementalMultipleSlices,
622 ZealMode::YieldBeforeSweepingAtoms,
623 ZealMode::YieldBeforeSweepingCaches,
624 ZealMode::YieldBeforeSweepingObjects,
625 ZealMode::YieldBeforeSweepingNonObjects,
626 ZealMode::YieldBeforeSweepingPropMapTrees};
628 void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
629 MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
631 if (verifyPreData) {
632 VerifyBarriers(rt, PreBarrierVerifier);
635 if (zeal == 0) {
636 if (hasZealMode(ZealMode::GenerationalGC)) {
637 evictNursery(JS::GCReason::DEBUG_GC);
638 nursery().leaveZealMode();
641 if (isIncrementalGCInProgress()) {
642 finishGC(JS::GCReason::DEBUG_GC);
646 ZealMode zealMode = ZealMode(zeal);
647 if (zealMode == ZealMode::GenerationalGC) {
648 evictNursery(JS::GCReason::DEBUG_GC);
649 nursery().enterZealMode();
652 // Some modes are mutually exclusive. If we're setting one of those, we
653 // first reset all of them.
654 if (IncrementalSliceZealModes.contains(zealMode)) {
655 for (auto mode : IncrementalSliceZealModes) {
656 clearZealMode(mode);
660 bool schedule = zealMode >= ZealMode::Alloc;
661 if (zeal != 0) {
662 zealModeBits |= 1 << unsigned(zeal);
663 } else {
664 zealModeBits = 0;
666 zealFrequency = frequency;
667 nextScheduled = schedule ? frequency : 0;
670 void GCRuntime::unsetZeal(uint8_t zeal) {
671 MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
672 ZealMode zealMode = ZealMode(zeal);
674 if (!hasZealMode(zealMode)) {
675 return;
678 if (verifyPreData) {
679 VerifyBarriers(rt, PreBarrierVerifier);
682 if (zealMode == ZealMode::GenerationalGC) {
683 evictNursery(JS::GCReason::DEBUG_GC);
684 nursery().leaveZealMode();
687 clearZealMode(zealMode);
689 if (zealModeBits == 0) {
690 if (isIncrementalGCInProgress()) {
691 finishGC(JS::GCReason::DEBUG_GC);
694 zealFrequency = 0;
695 nextScheduled = 0;
699 void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }
701 static bool ParseZealModeName(const CharRange& text, uint32_t* modeOut) {
702 struct ModeInfo {
703 const char* name;
704 size_t length;
705 uint32_t value;
708 static const ModeInfo zealModes[] = {{"None", 0},
709 # define ZEAL_MODE(name, value) {#name, strlen(#name), value},
710 JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
711 # undef ZEAL_MODE
714 for (auto mode : zealModes) {
715 if (text.length() == mode.length &&
716 memcmp(text.begin().get(), mode.name, mode.length) == 0) {
717 *modeOut = mode.value;
718 return true;
722 return false;
725 static bool ParseZealModeNumericParam(const CharRange& text,
726 uint32_t* paramOut) {
727 if (text.length() == 0) {
728 return false;
731 for (auto c : text) {
732 if (!mozilla::IsAsciiDigit(c)) {
733 return false;
737 *paramOut = atoi(text.begin().get());
738 return true;
741 static bool PrintZealHelpAndFail() {
742 fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
743 fputs(ZealModeHelpText, stderr);
744 return false;
747 bool GCRuntime::parseAndSetZeal(const char* str) {
748 // Set the zeal mode from a string consisting of one or more mode specifiers
749 // separated by ';', optionally followed by a ',' and the trigger frequency.
750 // The mode specifiers can be a mode name or its number.
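  // Illustrative examples (mode numbers/names per ZealModeHelpText above):
  //   JS_GC_ZEAL=2,100                      - mode 2 (Alloc), frequency 100
  //   JS_GC_ZEAL=YieldBeforeMarking;14,500  - modes 8 and 14, frequency 500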
752 auto text = CharRange(str, strlen(str));
754 CharRangeVector parts;
755 if (!SplitStringBy(text, ',', &parts)) {
756 return false;
759 if (parts.length() == 0 || parts.length() > 2) {
760 return PrintZealHelpAndFail();
763 uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
764 if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
765 return PrintZealHelpAndFail();
768 CharRangeVector modes;
769 if (!SplitStringBy(parts[0], ';', &modes)) {
770 return false;
773 for (const auto& descr : modes) {
774 uint32_t mode;
775 if (!ParseZealModeName(descr, &mode) &&
776 !(ParseZealModeNumericParam(descr, &mode) &&
777 mode <= unsigned(ZealMode::Limit))) {
778 return PrintZealHelpAndFail();
781 setZeal(mode, frequency);
784 return true;
787 const char* js::gc::AllocKindName(AllocKind kind) {
788 static const char* const names[] = {
789 # define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) #allocKind,
790 FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
791 # undef EXPAND_THING_NAME
793 static_assert(std::size(names) == AllocKindCount,
794 "names array should have an entry for every AllocKind");
796 size_t i = size_t(kind);
797 MOZ_ASSERT(i < std::size(names));
798 return names[i];
801 void js::gc::DumpArenaInfo() {
802 fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);
804 fprintf(stderr, "GC thing kinds:\n");
805 fprintf(stderr, "%25s %8s %8s %8s\n",
806 "AllocKind:", "Size:", "Count:", "Padding:");
807 for (auto kind : AllAllocKinds()) {
808 fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
809 Arena::thingSize(kind), Arena::thingsPerArena(kind),
810 Arena::firstThingOffset(kind) - ArenaHeaderSize);
814 #endif // JS_GC_ZEAL
816 bool GCRuntime::init(uint32_t maxbytes) {
817 MOZ_ASSERT(!wasInitialized());
819 MOZ_ASSERT(SystemPageSize());
820 Arena::checkLookupTables();
822 if (!TlsGCContext.init()) {
823 return false;
825 TlsGCContext.set(&mainThreadContext.ref());
827 updateHelperThreadCount();
829 #ifdef JS_GC_ZEAL
830 const char* size = getenv("JSGC_MARK_STACK_LIMIT");
831 if (size) {
832 maybeMarkStackLimit = atoi(size);
834 #endif
836 initOrDisableParallelMarking();
839 AutoLockGCBgAlloc lock(this);
841 MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes));
843 if (!nursery().init(lock)) {
844 return false;
847 const char* pretenureThresholdStr = getenv("JSGC_PRETENURE_THRESHOLD");
848 if (pretenureThresholdStr && pretenureThresholdStr[0]) {
849 char* last;
850 long pretenureThreshold = strtol(pretenureThresholdStr, &last, 10);
851 if (last[0] || !tunables.setParameter(JSGC_PRETENURE_THRESHOLD,
852 pretenureThreshold)) {
853 fprintf(stderr, "Invalid value for JSGC_PRETENURE_THRESHOLD: %s\n",
854 pretenureThresholdStr);
859 #ifdef JS_GC_ZEAL
860 const char* zealSpec = getenv("JS_GC_ZEAL");
861 if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
862 return false;
864 #endif
866 for (auto& marker : markers) {
867 if (!marker->init()) {
868 return false;
872 if (!initSweepActions()) {
873 return false;
876 UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
877 if (!zone || !zone->init()) {
878 return false;
881 // The atoms zone is stored as the first element of the zones vector.
882 MOZ_ASSERT(zone->isAtomsZone());
883 MOZ_ASSERT(zones().empty());
884 MOZ_ALWAYS_TRUE(zones().reserve(1)); // ZonesVector has inline capacity 4.
885 zones().infallibleAppend(zone.release());
887 gcprobes::Init(this);
889 initialized = true;
890 return true;
893 void GCRuntime::finish() {
894 MOZ_ASSERT(inPageLoadCount == 0);
895 MOZ_ASSERT(!sharedAtomsZone_);
897 // Wait for nursery background free to end and disable it to release memory.
898 if (nursery().isEnabled()) {
899 nursery().disable();
902 // Wait until the background finalization and allocation stops and the
903 // helper thread shuts down before we forcefully release any remaining GC
904 // memory.
905 sweepTask.join();
906 markTask.join();
907 freeTask.join();
908 allocTask.cancelAndWait();
909 decommitTask.cancelAndWait();
911 #ifdef JS_GC_ZEAL
912 // Free memory associated with GC verification.
913 finishVerifier();
914 #endif
916 // Delete all remaining zones.
917 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
918 AutoSetThreadIsSweeping threadIsSweeping(rt->gcContext(), zone);
919 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
920 for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
921 js_delete(realm.get());
923 comp->realms().clear();
924 js_delete(comp.get());
926 zone->compartments().clear();
927 js_delete(zone.get());
930 zones().clear();
932 FreeChunkPool(fullChunks_.ref());
933 FreeChunkPool(availableChunks_.ref());
934 FreeChunkPool(emptyChunks_.ref());
936 TlsGCContext.set(nullptr);
938 gcprobes::Finish(this);
940 nursery().printTotalProfileTimes();
941 stats().printTotalProfileTimes();
944 bool GCRuntime::freezeSharedAtomsZone() {
945 // This is called just after permanent atoms and well-known symbols have been
946 // created. At this point all existing atoms and symbols are permanent.
948 // This method makes the current atoms zone into a shared atoms zone and
949 // removes it from the zones list. Everything in it is marked black. A new
950 // empty atoms zone is created, where all atoms local to this runtime will
951 // live.
953 // The shared atoms zone will not be collected until shutdown when it is
954 // returned to the zone list by restoreSharedAtomsZone().
956 MOZ_ASSERT(rt->isMainRuntime());
957 MOZ_ASSERT(!sharedAtomsZone_);
958 MOZ_ASSERT(zones().length() == 1);
959 MOZ_ASSERT(atomsZone());
960 MOZ_ASSERT(!atomsZone()->wasGCStarted());
961 MOZ_ASSERT(!atomsZone()->needsIncrementalBarrier());
963 AutoAssertEmptyNursery nurseryIsEmpty(rt->mainContextFromOwnThread());
965 atomsZone()->arenas.clearFreeLists();
967 for (auto kind : AllAllocKinds()) {
968 for (auto thing =
969 atomsZone()->cellIterUnsafe<TenuredCell>(kind, nurseryIsEmpty);
970 !thing.done(); thing.next()) {
971 TenuredCell* cell = thing.getCell();
972 MOZ_ASSERT((cell->is<JSString>() &&
973 cell->as<JSString>()->isPermanentAndMayBeShared()) ||
974 (cell->is<JS::Symbol>() &&
975 cell->as<JS::Symbol>()->isPermanentAndMayBeShared()));
976 cell->markBlack();
980 sharedAtomsZone_ = atomsZone();
981 zones().clear();
983 UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
984 if (!zone || !zone->init()) {
985 return false;
988 MOZ_ASSERT(zone->isAtomsZone());
989 zones().infallibleAppend(zone.release());
991 return true;
994 void GCRuntime::restoreSharedAtomsZone() {
995 // Return the shared atoms zone to the zone list. This allows the contents of
996 // the shared atoms zone to be collected when the parent runtime is shut down.
998 if (!sharedAtomsZone_) {
999 return;
1002 MOZ_ASSERT(rt->isMainRuntime());
1003 MOZ_ASSERT(rt->childRuntimeCount == 0);
1005 AutoEnterOOMUnsafeRegion oomUnsafe;
1006 if (!zones().append(sharedAtomsZone_)) {
1007 oomUnsafe.crash("restoreSharedAtomsZone");
1010 sharedAtomsZone_ = nullptr;
1013 bool GCRuntime::setParameter(JSContext* cx, JSGCParamKey key, uint32_t value) {
1014 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1016 AutoStopVerifyingBarriers pauseVerification(rt, false);
1017 FinishGC(cx);
1018 waitBackgroundSweepEnd();
1020 AutoLockGC lock(this);
1021 return setParameter(key, value, lock);
1024 static bool IsGCThreadParameter(JSGCParamKey key) {
1025 return key == JSGC_HELPER_THREAD_RATIO || key == JSGC_MAX_HELPER_THREADS ||
1026 key == JSGC_MARKING_THREAD_COUNT;
1029 bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
1030 AutoLockGC& lock) {
1031 switch (key) {
1032 case JSGC_SLICE_TIME_BUDGET_MS:
1033 defaultTimeBudgetMS_ = value;
1034 break;
1035 case JSGC_INCREMENTAL_GC_ENABLED:
1036 setIncrementalGCEnabled(value != 0);
1037 break;
1038 case JSGC_PER_ZONE_GC_ENABLED:
1039 perZoneGCEnabled = value != 0;
1040 break;
1041 case JSGC_COMPACTING_ENABLED:
1042 compactingEnabled = value != 0;
1043 break;
1044 case JSGC_PARALLEL_MARKING_ENABLED:
1045 // Not supported on workers.
1046 parallelMarkingEnabled = rt->isMainRuntime() && value != 0;
1047 return initOrDisableParallelMarking();
1048 case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
1049 for (auto& marker : markers) {
1050 marker->incrementalWeakMapMarkingEnabled = value != 0;
1052 break;
1053 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1054 setMinEmptyChunkCount(value, lock);
1055 break;
1056 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1057 setMaxEmptyChunkCount(value, lock);
1058 break;
1059 default:
1060 if (IsGCThreadParameter(key)) {
1061 return setThreadParameter(key, value, lock);
1064 if (!tunables.setParameter(key, value)) {
1065 return false;
1067 updateAllGCStartThresholds();
1070 return true;
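// Embedders typically reach this through the public JS_SetGCParameter API,
// e.g. (illustrative): JS_SetGCParameter(cx, JSGC_INCREMENTAL_GC_ENABLED, 1)
// to allow incremental collections (see the [SMDOC] comment above).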
1073 bool GCRuntime::setThreadParameter(JSGCParamKey key, uint32_t value,
1074 AutoLockGC& lock) {
1075 if (rt->parentRuntime) {
1076 // Don't allow these to be set for worker runtimes.
1077 return false;
1080 switch (key) {
1081 case JSGC_HELPER_THREAD_RATIO:
1082 if (value == 0) {
1083 return false;
1085 helperThreadRatio = double(value) / 100.0;
1086 break;
1087 case JSGC_MAX_HELPER_THREADS:
1088 if (value == 0) {
1089 return false;
1091 maxHelperThreads = value;
1092 break;
1093 case JSGC_MARKING_THREAD_COUNT:
1094 markingThreadCount = std::min(size_t(value), MaxParallelWorkers);
1095 break;
1096 default:
1097 MOZ_CRASH("Unexpected parameter key");
1100 updateHelperThreadCount();
1101 initOrDisableParallelMarking();
1103 return true;
1106 void GCRuntime::resetParameter(JSContext* cx, JSGCParamKey key) {
1107 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1109 AutoStopVerifyingBarriers pauseVerification(rt, false);
1110 FinishGC(cx);
1111 waitBackgroundSweepEnd();
1113 AutoLockGC lock(this);
1114 resetParameter(key, lock);
1117 void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
1118 switch (key) {
1119 case JSGC_SLICE_TIME_BUDGET_MS:
1120 defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
1121 break;
1122 case JSGC_INCREMENTAL_GC_ENABLED:
1123 setIncrementalGCEnabled(TuningDefaults::IncrementalGCEnabled);
1124 break;
1125 case JSGC_PER_ZONE_GC_ENABLED:
1126 perZoneGCEnabled = TuningDefaults::PerZoneGCEnabled;
1127 break;
1128 case JSGC_COMPACTING_ENABLED:
1129 compactingEnabled = TuningDefaults::CompactingEnabled;
1130 break;
1131 case JSGC_PARALLEL_MARKING_ENABLED:
1132 parallelMarkingEnabled = TuningDefaults::ParallelMarkingEnabled;
1133 initOrDisableParallelMarking();
1134 break;
1135 case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
1136 for (auto& marker : markers) {
1137 marker->incrementalWeakMapMarkingEnabled =
1138 TuningDefaults::IncrementalWeakMapMarkingEnabled;
1140 break;
1141 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1142 setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount, lock);
1143 break;
1144 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1145 setMaxEmptyChunkCount(TuningDefaults::MaxEmptyChunkCount, lock);
1146 break;
1147 default:
1148 if (IsGCThreadParameter(key)) {
1149 resetThreadParameter(key, lock);
1150 return;
1153 tunables.resetParameter(key);
1154 updateAllGCStartThresholds();
1158 void GCRuntime::resetThreadParameter(JSGCParamKey key, AutoLockGC& lock) {
1159 if (rt->parentRuntime) {
1160 return;
1163 switch (key) {
1164 case JSGC_HELPER_THREAD_RATIO:
1165 helperThreadRatio = TuningDefaults::HelperThreadRatio;
1166 break;
1167 case JSGC_MAX_HELPER_THREADS:
1168 maxHelperThreads = TuningDefaults::MaxHelperThreads;
1169 break;
1170 case JSGC_MARKING_THREAD_COUNT:
1171 markingThreadCount = 0;
1172 break;
1173 default:
1174 MOZ_CRASH("Unexpected parameter key");
1177 updateHelperThreadCount();
1178 initOrDisableParallelMarking();
1181 uint32_t GCRuntime::getParameter(JSGCParamKey key) {
1182 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1183 AutoLockGC lock(this);
1184 return getParameter(key, lock);
1187 uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
1188 switch (key) {
1189 case JSGC_BYTES:
1190 return uint32_t(heapSize.bytes());
1191 case JSGC_NURSERY_BYTES:
1192 return nursery().capacity();
1193 case JSGC_NUMBER:
1194 return uint32_t(number);
1195 case JSGC_MAJOR_GC_NUMBER:
1196 return uint32_t(majorGCNumber);
1197 case JSGC_MINOR_GC_NUMBER:
1198 return uint32_t(minorGCNumber);
1199 case JSGC_INCREMENTAL_GC_ENABLED:
1200 return incrementalGCEnabled;
1201 case JSGC_PER_ZONE_GC_ENABLED:
1202 return perZoneGCEnabled;
1203 case JSGC_UNUSED_CHUNKS:
1204 return uint32_t(emptyChunks(lock).count());
1205 case JSGC_TOTAL_CHUNKS:
1206 return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
1207 emptyChunks(lock).count());
1208 case JSGC_SLICE_TIME_BUDGET_MS:
1209 MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
1210 MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
1211 return uint32_t(defaultTimeBudgetMS_);
1212 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1213 return minEmptyChunkCount(lock);
1214 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1215 return maxEmptyChunkCount(lock);
1216 case JSGC_COMPACTING_ENABLED:
1217 return compactingEnabled;
1218 case JSGC_PARALLEL_MARKING_ENABLED:
1219 return parallelMarkingEnabled;
1220 case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
1221 return marker().incrementalWeakMapMarkingEnabled;
1222 case JSGC_CHUNK_BYTES:
1223 return ChunkSize;
1224 case JSGC_HELPER_THREAD_RATIO:
1225 MOZ_ASSERT(helperThreadRatio > 0.0);
1226 return uint32_t(helperThreadRatio * 100.0);
1227 case JSGC_MAX_HELPER_THREADS:
1228 MOZ_ASSERT(maxHelperThreads <= UINT32_MAX);
1229 return maxHelperThreads;
1230 case JSGC_HELPER_THREAD_COUNT:
1231 return helperThreadCount;
1232 case JSGC_MARKING_THREAD_COUNT:
1233 return markingThreadCount;
1234 case JSGC_SYSTEM_PAGE_SIZE_KB:
1235 return SystemPageSize() / 1024;
1236 default:
1237 return tunables.getParameter(key);
1241 #ifdef JS_GC_ZEAL
1242 void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
1243 MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
1245 maybeMarkStackLimit = limit;
1247 AutoUnlockGC unlock(lock);
1248 AutoStopVerifyingBarriers pauseVerification(rt, false);
1249 for (auto& marker : markers) {
1250 marker->setMaxCapacity(limit);
1253 #endif
1255 void GCRuntime::setIncrementalGCEnabled(bool enabled) {
1256 incrementalGCEnabled = enabled;
1259 void GCRuntime::updateHelperThreadCount() {
1260 if (!CanUseExtraThreads()) {
1261 // startTask will run the work on the main thread if the count is 1.
1262 MOZ_ASSERT(helperThreadCount == 1);
1263 return;
1266 // Number of extra threads required during parallel marking to ensure we can
1267 // start the necessary marking tasks. Background free and background
1268 // allocation may already be running and we want to avoid these tasks blocking
1269 // marking. In real configurations there will be enough threads that this
1270 // won't affect anything.
1271 static constexpr size_t SpareThreadsDuringParallelMarking = 2;
1273 // The count of helper threads used for GC tasks is process wide. Don't set it
1274 // for worker JS runtimes.
1275 if (rt->parentRuntime) {
1276 helperThreadCount = rt->parentRuntime->gc.helperThreadCount;
1277 return;
1280 // Calculate the target thread count for GC parallel tasks.
1281 size_t cpuCount = GetHelperThreadCPUCount();
1282 helperThreadCount =
1283 std::clamp(size_t(double(cpuCount) * helperThreadRatio.ref()), size_t(1),
1284 maxHelperThreads.ref());
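// Purely illustrative numbers: with 8 CPUs, a helperThreadRatio of 0.5 and
// maxHelperThreads of 8, this yields clamp(4, 1, 8) = 4 helper threads.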
1286 // Calculate the overall target thread count taking into account the separate
1287 // parameter for parallel marking threads. Add spare threads to avoid blocking
1288 // parallel marking when there is other GC work happening.
1289 size_t targetCount =
1290 std::max(helperThreadCount.ref(),
1291 markingThreadCount.ref() + SpareThreadsDuringParallelMarking);
1293 // Attempt to create extra threads if possible. This is not supported when
1294 // using an external thread pool.
1295 AutoLockHelperThreadState lock;
1296 (void)HelperThreadState().ensureThreadCount(targetCount, lock);
1298 // Limit all thread counts based on the number of threads available, which may
1299 // be fewer than requested.
1300 size_t availableThreadCount = GetHelperThreadCount();
1301 MOZ_ASSERT(availableThreadCount != 0);
1302 targetCount = std::min(targetCount, availableThreadCount);
1303 helperThreadCount = std::min(helperThreadCount.ref(), availableThreadCount);
1304 markingThreadCount =
1305 std::min(markingThreadCount.ref(),
1306 availableThreadCount - SpareThreadsDuringParallelMarking);
1308 // Update the maximum number of threads that will be used for GC work.
1309 HelperThreadState().setGCParallelThreadCount(targetCount, lock);
1312 size_t GCRuntime::markingWorkerCount() const {
1313 if (!CanUseExtraThreads() || !parallelMarkingEnabled) {
1314 return 1;
1317 if (markingThreadCount) {
1318 return markingThreadCount;
1321 // Limit parallel marking to use at most two threads initially.
1322 return 2;
1325 #ifdef DEBUG
1326 void GCRuntime::assertNoMarkingWork() const {
1327 for (const auto& marker : markers) {
1328 MOZ_ASSERT(marker->isDrained());
1330 MOZ_ASSERT(!hasDelayedMarking());
1332 #endif
1334 bool GCRuntime::initOrDisableParallelMarking() {
1335 // Attempt to initialize parallel marking state or disable it on failure.
1337 if (!updateMarkersVector()) {
1338 parallelMarkingEnabled = false;
1339 return false;
1342 return true;
1345 static size_t GetGCParallelThreadCount() {
1346 AutoLockHelperThreadState lock;
1347 return HelperThreadState().getGCParallelThreadCount(lock);
1350 bool GCRuntime::updateMarkersVector() {
1351 MOZ_ASSERT(helperThreadCount >= 1,
1352 "There must always be at least one mark task");
1353 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1354 assertNoMarkingWork();
1356 // Limit worker count to the number of GC parallel tasks that can run
1357 // concurrently, otherwise one thread can deadlock waiting on another.
1358 size_t targetCount =
1359 std::min(markingWorkerCount(), GetGCParallelThreadCount());
1361 if (markers.length() > targetCount) {
1362 return markers.resize(targetCount);
1365 while (markers.length() < targetCount) {
1366 auto marker = MakeUnique<GCMarker>(rt);
1367 if (!marker) {
1368 return false;
1371 #ifdef JS_GC_ZEAL
1372 if (maybeMarkStackLimit) {
1373 marker->setMaxCapacity(maybeMarkStackLimit);
1375 #endif
1377 if (!marker->init()) {
1378 return false;
1381 if (!markers.emplaceBack(std::move(marker))) {
1382 return false;
1386 return true;
1389 template <typename F>
1390 static bool EraseCallback(CallbackVector<F>& vector, F callback) {
1391 for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
1392 if (p->op == callback) {
1393 vector.erase(p);
1394 return true;
1398 return false;
1401 template <typename F>
1402 static bool EraseCallback(CallbackVector<F>& vector, F callback, void* data) {
1403 for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
1404 if (p->op == callback && p->data == data) {
1405 vector.erase(p);
1406 return true;
1410 return false;
1413 bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
1414 AssertHeapIsIdle();
1415 return blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
1418 void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
1419 // Can be called from finalizers
1420 MOZ_ALWAYS_TRUE(EraseCallback(blackRootTracers.ref(), traceOp));
1423 void GCRuntime::setGrayRootsTracer(JSGrayRootsTracer traceOp, void* data) {
1424 AssertHeapIsIdle();
1425 grayRootTracer.ref() = {traceOp, data};
1428 void GCRuntime::clearBlackAndGrayRootTracers() {
1429 MOZ_ASSERT(rt->isBeingDestroyed());
1430 blackRootTracers.ref().clear();
1431 setGrayRootsTracer(nullptr, nullptr);
1434 void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
1435 gcCallback.ref() = {callback, data};
1438 void GCRuntime::callGCCallback(JSGCStatus status, JS::GCReason reason) const {
1439 const auto& callback = gcCallback.ref();
1440 MOZ_ASSERT(callback.op);
1441 callback.op(rt->mainContextFromOwnThread(), status, reason, callback.data);
1444 void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
1445 void* data) {
1446 tenuredCallback.ref() = {callback, data};
1449 void GCRuntime::callObjectsTenuredCallback() {
1450 JS::AutoSuppressGCAnalysis nogc;
1451 const auto& callback = tenuredCallback.ref();
1452 if (callback.op) {
1453 callback.op(rt->mainContextFromOwnThread(), callback.data);
1457 bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
1458 return finalizeCallbacks.ref().append(
1459 Callback<JSFinalizeCallback>(callback, data));
1462 void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
1463 MOZ_ALWAYS_TRUE(EraseCallback(finalizeCallbacks.ref(), callback));
1466 void GCRuntime::callFinalizeCallbacks(JS::GCContext* gcx,
1467 JSFinalizeStatus status) const {
1468 for (const auto& p : finalizeCallbacks.ref()) {
1469 p.op(gcx, status, p.data);
1473 void GCRuntime::setHostCleanupFinalizationRegistryCallback(
1474 JSHostCleanupFinalizationRegistryCallback callback, void* data) {
1475 hostCleanupFinalizationRegistryCallback.ref() = {callback, data};
1478 void GCRuntime::callHostCleanupFinalizationRegistryCallback(
1479 JSFunction* doCleanup, GlobalObject* incumbentGlobal) {
1480 JS::AutoSuppressGCAnalysis nogc;
1481 const auto& callback = hostCleanupFinalizationRegistryCallback.ref();
1482 if (callback.op) {
1483 callback.op(doCleanup, incumbentGlobal, callback.data);
1487 bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
1488 void* data) {
1489 return updateWeakPointerZonesCallbacks.ref().append(
1490 Callback<JSWeakPointerZonesCallback>(callback, data));
1493 void GCRuntime::removeWeakPointerZonesCallback(
1494 JSWeakPointerZonesCallback callback) {
1495 MOZ_ALWAYS_TRUE(
1496 EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback));
1499 void GCRuntime::callWeakPointerZonesCallbacks(JSTracer* trc) const {
1500 for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
1501 p.op(trc, p.data);
1505 bool GCRuntime::addWeakPointerCompartmentCallback(
1506 JSWeakPointerCompartmentCallback callback, void* data) {
1507 return updateWeakPointerCompartmentCallbacks.ref().append(
1508 Callback<JSWeakPointerCompartmentCallback>(callback, data));
1511 void GCRuntime::removeWeakPointerCompartmentCallback(
1512 JSWeakPointerCompartmentCallback callback) {
1513 MOZ_ALWAYS_TRUE(
1514 EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback));
1517 void GCRuntime::callWeakPointerCompartmentCallbacks(
1518 JSTracer* trc, JS::Compartment* comp) const {
1519 for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
1520 p.op(trc, comp, p.data);
1524 JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
1525 return stats().setSliceCallback(callback);
1528 bool GCRuntime::addNurseryCollectionCallback(
1529 JS::GCNurseryCollectionCallback callback, void* data) {
1530 return nurseryCollectionCallbacks.ref().append(
1531 Callback<JS::GCNurseryCollectionCallback>(callback, data));
1534 void GCRuntime::removeNurseryCollectionCallback(
1535 JS::GCNurseryCollectionCallback callback, void* data) {
1536 MOZ_ALWAYS_TRUE(
1537 EraseCallback(nurseryCollectionCallbacks.ref(), callback, data));
1540 void GCRuntime::callNurseryCollectionCallbacks(JS::GCNurseryProgress progress,
1541 JS::GCReason reason) {
1542 for (auto const& p : nurseryCollectionCallbacks.ref()) {
1543 p.op(rt->mainContextFromOwnThread(), progress, reason, p.data);
1547 JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
1548 JS::DoCycleCollectionCallback callback) {
1549 const auto prior = gcDoCycleCollectionCallback.ref();
1550 gcDoCycleCollectionCallback.ref() = {callback, nullptr};
1551 return prior.op;
1554 void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
1555 const auto& callback = gcDoCycleCollectionCallback.ref();
1556 if (callback.op) {
1557 callback.op(cx);
1561 bool GCRuntime::addRoot(Value* vp, const char* name) {
1563 * Sometimes Firefox will hold weak references to objects and then convert
1564 * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
1565 * or ModifyBusyCount in workers). We need a read barrier to cover these
1566 * cases.
1568 MOZ_ASSERT(vp);
1569 Value value = *vp;
1570 if (value.isGCThing()) {
1571 ValuePreWriteBarrier(value);
1574 return rootsHash.ref().put(vp, name);
1577 void GCRuntime::removeRoot(Value* vp) {
1578 rootsHash.ref().remove(vp);
1579 notifyRootsRemoved();
1582 /* Compacting GC */
1584 bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
1585 const TimeStamp& currentTime) {
1586 // Assume that we're currently animating if js::NotifyAnimationActivity has
1587 // been called in the last second.
1588 static const auto oneSecond = TimeDuration::FromSeconds(1);
1589 return !lastAnimationTime.IsNull() &&
1590 currentTime < (lastAnimationTime + oneSecond);
1593 static bool DiscardedCodeRecently(Zone* zone, const TimeStamp& currentTime) {
1594 static const auto thirtySeconds = TimeDuration::FromSeconds(30);
1595 return !zone->lastDiscardedCodeTime().IsNull() &&
1596 currentTime < (zone->lastDiscardedCodeTime() + thirtySeconds);
1599 bool GCRuntime::shouldCompact() {
1600 // Compact on shrinking GC if enabled. Skip compacting in incremental GCs
1601 // if we are currently animating, unless the user is inactive or we're
1602 // responding to memory pressure.
1604 if (!isShrinkingGC() || !isCompactingGCEnabled()) {
1605 return false;
1608 if (initialReason == JS::GCReason::USER_INACTIVE ||
1609 initialReason == JS::GCReason::MEM_PRESSURE) {
1610 return true;
1613 return !isIncremental ||
1614 !IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
1617 bool GCRuntime::isCompactingGCEnabled() const {
1618 return compactingEnabled &&
1619 rt->mainContextFromOwnThread()->compactingDisabledCount == 0;
1622 JS_PUBLIC_API void JS::SetCreateGCSliceBudgetCallback(
1623 JSContext* cx, JS::CreateSliceBudgetCallback cb) {
1624 cx->runtime()->gc.createBudgetCallback = cb;
1627 void TimeBudget::setDeadlineFromNow() { deadline = TimeStamp::Now() + budget; }
1629 SliceBudget::SliceBudget(TimeBudget time, InterruptRequestFlag* interrupt)
1630 : counter(StepsPerExpensiveCheck),
1631 interruptRequested(interrupt),
1632 budget(TimeBudget(time)) {
1633 budget.as<TimeBudget>().setDeadlineFromNow();
1636 SliceBudget::SliceBudget(WorkBudget work)
1637 : counter(work.budget), interruptRequested(nullptr), budget(work) {}
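// Illustrative usage (a sketch, not taken from a caller in this file): a
// ~10ms budget can be expressed as SliceBudget(TimeBudget(10)), and a
// work-count budget as SliceBudget(WorkBudget(1000)).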
1639 int SliceBudget::describe(char* buffer, size_t maxlen) const {
1640 if (isUnlimited()) {
1641 return snprintf(buffer, maxlen, "unlimited");
1644 if (isWorkBudget()) {
1645 return snprintf(buffer, maxlen, "work(%" PRId64 ")", workBudget());
1648 const char* interruptStr = "";
1649 if (interruptRequested) {
1650 interruptStr = interrupted ? "INTERRUPTED " : "interruptible ";
1652 const char* extra = "";
1653 if (idle) {
1654 extra = extended ? " (started idle but extended)" : " (idle)";
1656 return snprintf(buffer, maxlen, "%s%" PRId64 "ms%s", interruptStr,
1657 timeBudget(), extra);
1660 bool SliceBudget::checkOverBudget() {
1661 MOZ_ASSERT(counter <= 0);
1662 MOZ_ASSERT(!isUnlimited());
1664 if (isWorkBudget()) {
1665 return true;
1668 if (interruptRequested && *interruptRequested) {
1669 interrupted = true;
1672 if (interrupted) {
1673 return true;
1676 if (TimeStamp::Now() >= budget.as<TimeBudget>().deadline) {
1677 return true;
1680 counter = StepsPerExpensiveCheck;
1681 return false;
1684 void GCRuntime::requestMajorGC(JS::GCReason reason) {
1685 MOZ_ASSERT_IF(reason != JS::GCReason::BG_TASK_FINISHED,
1686 !CurrentThreadIsPerformingGC());
1688 if (majorGCRequested()) {
1689 return;
1692 majorGCTriggerReason = reason;
1693 rt->mainContextFromAnyThread()->requestInterrupt(InterruptReason::MajorGC);
1696 bool GCRuntime::triggerGC(JS::GCReason reason) {
1698 * Don't trigger GCs if this is being called off the main thread from
1699 * onTooMuchMalloc().
1701 if (!CurrentThreadCanAccessRuntime(rt)) {
1702 return false;
1705 /* GC is already running. */
1706 if (JS::RuntimeHeapIsCollecting()) {
1707 return false;
1710 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
1711 requestMajorGC(reason);
1712 return true;
1715 void GCRuntime::maybeTriggerGCAfterAlloc(Zone* zone) {
1716 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1717 MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());
1719 TriggerResult trigger =
1720 checkHeapThreshold(zone, zone->gcHeapSize, zone->gcHeapThreshold);
1722 if (trigger.shouldTrigger) {
1723 // Start or continue an in progress incremental GC. We do this to try to
1724 // avoid performing non-incremental GCs on zones which allocate a lot of
1725 // data, even when incremental slices can't be triggered via scheduling in
1726 // the event loop.
1727 triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, trigger.usedBytes,
1728 trigger.thresholdBytes);
1732 void js::gc::MaybeMallocTriggerZoneGC(JSRuntime* rt, ZoneAllocator* zoneAlloc,
1733 const HeapSize& heap,
1734 const HeapThreshold& threshold,
1735 JS::GCReason reason) {
1736 rt->gc.maybeTriggerGCAfterMalloc(Zone::from(zoneAlloc), heap, threshold,
1737 reason);
1740 void GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone) {
1741 if (maybeTriggerGCAfterMalloc(zone, zone->mallocHeapSize,
1742 zone->mallocHeapThreshold,
1743 JS::GCReason::TOO_MUCH_MALLOC)) {
1744 return;
1747 maybeTriggerGCAfterMalloc(zone, zone->jitHeapSize, zone->jitHeapThreshold,
1748 JS::GCReason::TOO_MUCH_JIT_CODE);
1751 bool GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone, const HeapSize& heap,
1752 const HeapThreshold& threshold,
1753 JS::GCReason reason) {
1754 // Ignore malloc during sweeping, for example when we resize hash tables.
1755 if (heapState() != JS::HeapState::Idle) {
1756 return false;
1759 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1761 TriggerResult trigger = checkHeapThreshold(zone, heap, threshold);
1762 if (!trigger.shouldTrigger) {
1763 return false;
1766 // Trigger a zone GC. budgetIncrementalGC() will work out whether to do an
1767 // incremental or non-incremental collection.
1768 triggerZoneGC(zone, reason, trigger.usedBytes, trigger.thresholdBytes);
1769 return true;
1772 TriggerResult GCRuntime::checkHeapThreshold(
1773 Zone* zone, const HeapSize& heapSize, const HeapThreshold& heapThreshold) {
1774 MOZ_ASSERT_IF(heapThreshold.hasSliceThreshold(), zone->wasGCStarted());
1776 size_t usedBytes = heapSize.bytes();
1777 size_t thresholdBytes = heapThreshold.hasSliceThreshold()
1778 ? heapThreshold.sliceBytes()
1779 : heapThreshold.startBytes();
1781 // The incremental limit will be checked if we trigger a GC slice.
1782 MOZ_ASSERT(thresholdBytes <= heapThreshold.incrementalLimitBytes());
1784 return TriggerResult{usedBytes >= thresholdBytes, usedBytes, thresholdBytes};
1787 bool GCRuntime::triggerZoneGC(Zone* zone, JS::GCReason reason, size_t used,
1788 size_t threshold) {
1789 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1791 /* GC is already running. */
1792 if (JS::RuntimeHeapIsBusy()) {
1793 return false;
1796 #ifdef JS_GC_ZEAL
1797 if (hasZealMode(ZealMode::Alloc)) {
1798 MOZ_RELEASE_ASSERT(triggerGC(reason));
1799 return true;
1801 #endif
1803 if (zone->isAtomsZone()) {
1804 stats().recordTrigger(used, threshold);
1805 MOZ_RELEASE_ASSERT(triggerGC(reason));
1806 return true;
1809 stats().recordTrigger(used, threshold);
1810 zone->scheduleGC();
1811 requestMajorGC(reason);
1812 return true;
1815 void GCRuntime::maybeGC() {
1816 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1818 #ifdef JS_GC_ZEAL
1819 if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
1820 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
1821 gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
1822 return;
1824 #endif
1826 (void)gcIfRequestedImpl(/* eagerOk = */ true);
1829 JS::GCReason GCRuntime::wantMajorGC(bool eagerOk) {
1830 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1832 if (majorGCRequested()) {
1833 return majorGCTriggerReason;
1836 if (isIncrementalGCInProgress() || !eagerOk) {
1837 return JS::GCReason::NO_REASON;
1840 JS::GCReason reason = JS::GCReason::NO_REASON;
1841 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
1842 if (checkEagerAllocTrigger(zone->gcHeapSize, zone->gcHeapThreshold) ||
1843 checkEagerAllocTrigger(zone->mallocHeapSize,
1844 zone->mallocHeapThreshold)) {
1845 zone->scheduleGC();
1846 reason = JS::GCReason::EAGER_ALLOC_TRIGGER;
1850 return reason;
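// Eager triggers fire below the normal start threshold (see eagerAllocTrigger)
// so a collection can begin early; zones using no more than 1 MiB are never
// triggered eagerly.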
1853 bool GCRuntime::checkEagerAllocTrigger(const HeapSize& size,
1854 const HeapThreshold& threshold) {
1855 size_t thresholdBytes =
1856 threshold.eagerAllocTrigger(schedulingState.inHighFrequencyGCMode());
1857 size_t usedBytes = size.bytes();
1858 if (usedBytes <= 1024 * 1024 || usedBytes < thresholdBytes) {
1859 return false;
1862 stats().recordTrigger(usedBytes, thresholdBytes);
1863 return true;
1866 bool GCRuntime::shouldDecommit() const {
1867 // If we're doing a shrinking GC we always decommit to release as much memory
1868 // as possible.
1869 if (cleanUpEverything) {
1870 return true;
1873 // If we are allocating heavily enough to trigger "high frequency" GC then
1874 // skip decommit so that we do not compete with the mutator.
1875 return !schedulingState.inHighFrequencyGCMode();
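// Start decommitting unused chunk memory, either on the background decommit
// task or, when background threads are not in use, synchronously on the main
// thread.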
1878 void GCRuntime::startDecommit() {
1879 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::DECOMMIT);
1881 #ifdef DEBUG
1882 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1883 MOZ_ASSERT(decommitTask.isIdle());
1886 AutoLockGC lock(this);
1887 MOZ_ASSERT(fullChunks(lock).verify());
1888 MOZ_ASSERT(availableChunks(lock).verify());
1889 MOZ_ASSERT(emptyChunks(lock).verify());
1891 // Verify that all entries in the empty chunks pool are unused.
1892 for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done();
1893 chunk.next()) {
1894 MOZ_ASSERT(chunk->unused());
1897 #endif
1899 if (!shouldDecommit()) {
1900 return;
1904 AutoLockGC lock(this);
1905 if (availableChunks(lock).empty() && !tooManyEmptyChunks(lock) &&
1906 emptyChunks(lock).empty()) {
1907 return; // Nothing to do.
1911 #ifdef DEBUG
1913 AutoLockHelperThreadState lock;
1914 MOZ_ASSERT(!requestSliceAfterBackgroundTask);
1916 #endif
1918 if (useBackgroundThreads) {
1919 decommitTask.start();
1920 return;
1923 decommitTask.runFromMainThread();
1926 BackgroundDecommitTask::BackgroundDecommitTask(GCRuntime* gc)
1927 : GCParallelTask(gc, gcstats::PhaseKind::DECOMMIT) {}
1929 void js::gc::BackgroundDecommitTask::run(AutoLockHelperThreadState& lock) {
1931 AutoUnlockHelperThreadState unlock(lock);
1933 ChunkPool emptyChunksToFree;
1935 AutoLockGC gcLock(gc);
1936 emptyChunksToFree = gc->expireEmptyChunkPool(gcLock);
1939 FreeChunkPool(emptyChunksToFree);
1942 AutoLockGC gcLock(gc);
1944 // To help minimize the total number of chunks needed over time, sort the
1945 // available chunks list so that we allocate into more-used chunks first.
1946 gc->availableChunks(gcLock).sort();
1948 if (DecommitEnabled()) {
1949 gc->decommitEmptyChunks(cancel_, gcLock);
1950 gc->decommitFreeArenas(cancel_, gcLock);
1955 gc->maybeRequestGCAfterBackgroundTask(lock);
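// A chunk can be decommitted wholesale only if it is completely unused and
// still has committed arenas left to decommit.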
1958 static inline bool CanDecommitWholeChunk(TenuredChunk* chunk) {
1959 return chunk->unused() && chunk->info.numArenasFreeCommitted != 0;
1962 // Called from a background thread to decommit empty chunks. Temporarily
1963 // releases the GC lock while decommitting.
1964 void GCRuntime::decommitEmptyChunks(const bool& cancel, AutoLockGC& lock) {
1965 Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
1966 for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next()) {
1967 if (CanDecommitWholeChunk(chunk) && !chunksToDecommit.append(chunk)) {
1968 onOutOfMallocMemory(lock);
1969 return;
1973 for (TenuredChunk* chunk : chunksToDecommit) {
1974 if (cancel) {
1975 break;
1978 // Check whether something used the chunk while the lock was released.
1979 if (!CanDecommitWholeChunk(chunk)) {
1980 continue;
1983 // Temporarily remove the chunk while decommitting its memory so that the
1984 // mutator doesn't start allocating from it when we drop the lock.
1985 emptyChunks(lock).remove(chunk);
1988 AutoUnlockGC unlock(lock);
1989 chunk->decommitAllArenas();
1990 MOZ_ASSERT(chunk->info.numArenasFreeCommitted == 0);
1993 emptyChunks(lock).push(chunk);
1997 // Called from a background thread to decommit free arenas. Releases the GC
1998 // lock.
1999 void GCRuntime::decommitFreeArenas(const bool& cancel, AutoLockGC& lock) {
2000 MOZ_ASSERT(DecommitEnabled());
2002 // Since we release the GC lock while doing the decommit syscall below,
2003 // it is dangerous to iterate the available list directly, as the active
2004 // thread could modify it concurrently. Instead, we build and pass an
2005 // explicit Vector containing the Chunks we want to visit.
2006 Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
2007 for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
2008 chunk.next()) {
2009 if (chunk->info.numArenasFreeCommitted != 0 &&
2010 !chunksToDecommit.append(chunk)) {
2011 onOutOfMallocMemory(lock);
2012 return;
2016 for (TenuredChunk* chunk : chunksToDecommit) {
2017 chunk->decommitFreeArenas(this, cancel, lock);
2021 // Do all possible decommit immediately from the current thread without
2022 // releasing the GC lock or allocating any memory.
2023 void GCRuntime::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
2024 MOZ_ASSERT(DecommitEnabled());
2025 for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
2026 chunk.next()) {
2027 chunk->decommitFreeArenasWithoutUnlocking(lock);
2029 MOZ_ASSERT(availableChunks(lock).verify());
2032 void GCRuntime::maybeRequestGCAfterBackgroundTask(
2033 const AutoLockHelperThreadState& lock) {
2034 if (requestSliceAfterBackgroundTask) {
2035 // Trigger a slice so the main thread can continue the collection
2036 // immediately.
2037 requestSliceAfterBackgroundTask = false;
2038 requestMajorGC(JS::GCReason::BG_TASK_FINISHED);
2042 void GCRuntime::cancelRequestedGCAfterBackgroundTask() {
2043 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
2045 #ifdef DEBUG
2047 AutoLockHelperThreadState lock;
2048 MOZ_ASSERT(!requestSliceAfterBackgroundTask);
2050 #endif
2052 majorGCTriggerReason.compareExchange(JS::GCReason::BG_TASK_FINISHED,
2053 JS::GCReason::NO_REASON);
2056 bool GCRuntime::isWaitingOnBackgroundTask() const {
2057 AutoLockHelperThreadState lock;
2058 return requestSliceAfterBackgroundTask;
2061 void GCRuntime::queueUnusedLifoBlocksForFree(LifoAlloc* lifo) {
2062 MOZ_ASSERT(JS::RuntimeHeapIsBusy());
2063 AutoLockHelperThreadState lock;
2064 lifoBlocksToFree.ref().transferUnusedFrom(lifo);
2067 void GCRuntime::queueAllLifoBlocksForFreeAfterMinorGC(LifoAlloc* lifo) {
2068 lifoBlocksToFreeAfterMinorGC.ref().transferFrom(lifo);
2071 void GCRuntime::queueBuffersForFreeAfterMinorGC(Nursery::BufferSet& buffers) {
2072 AutoLockHelperThreadState lock;
2074 if (!buffersToFreeAfterMinorGC.ref().empty()) {
2075 // In the rare case that the free task hasn't yet processed the buffers from
2076 // a previous minor GC, we have to wait for it here.
2077 MOZ_ASSERT(!freeTask.isIdle(lock));
2078 freeTask.joinWithLockHeld(lock);
2081 MOZ_ASSERT(buffersToFreeAfterMinorGC.ref().empty());
2082 std::swap(buffersToFreeAfterMinorGC.ref(), buffers);
2085 void Realm::destroy(JS::GCContext* gcx) {
2086 JSRuntime* rt = gcx->runtime();
2087 if (auto callback = rt->destroyRealmCallback) {
2088 callback(gcx, this);
2090 if (principals()) {
2091 JS_DropPrincipals(rt->mainContextFromOwnThread(), principals());
2093 // Bug 1560019: Malloc memory associated with a zone but not with a specific
2094 // GC thing is not currently tracked.
2095 gcx->deleteUntracked(this);
2098 void Compartment::destroy(JS::GCContext* gcx) {
2099 JSRuntime* rt = gcx->runtime();
2100 if (auto callback = rt->destroyCompartmentCallback) {
2101 callback(gcx, this);
2103 // Bug 1560019: Malloc memory associated with a zone but not with a specific
2104 // GC thing is not currently tracked.
2105 gcx->deleteUntracked(this);
2106 rt->gc.stats().sweptCompartment();
2109 void Zone::destroy(JS::GCContext* gcx) {
2110 MOZ_ASSERT(compartments().empty());
2111 JSRuntime* rt = gcx->runtime();
2112 if (auto callback = rt->destroyZoneCallback) {
2113 callback(gcx, this);
2115 // Bug 1560019: Malloc memory associated with a zone but not with a specific
2116 // GC thing is not currently tracked.
2117 gcx->deleteUntracked(this);
2118 gcx->runtime()->gc.stats().sweptZone();
2122 * It's simpler if we preserve the invariant that every zone (except atoms
2123 * zones) has at least one compartment, and every compartment has at least one
2124 * realm. If we know we're deleting the entire zone, then sweepCompartments is
2125 * allowed to delete all compartments. In this case, |keepAtleastOne| is false.
2126 * If any cells remain alive in the zone, set |keepAtleastOne| true to prohibit
2127 * sweepCompartments from deleting every compartment. Instead, it preserves an
2128 * arbitrary compartment in the zone.
2130 void Zone::sweepCompartments(JS::GCContext* gcx, bool keepAtleastOne,
2131 bool destroyingRuntime) {
2132 MOZ_ASSERT_IF(!isAtomsZone(), !compartments().empty());
2133 MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);
2135 Compartment** read = compartments().begin();
2136 Compartment** end = compartments().end();
2137 Compartment** write = read;
2138 while (read < end) {
2139 Compartment* comp = *read++;
2142 * Don't delete the last compartment and realm if keepAtleastOne is
2143 * still true, meaning all the other compartments were deleted.
2145 bool keepAtleastOneRealm = read == end && keepAtleastOne;
2146 comp->sweepRealms(gcx, keepAtleastOneRealm, destroyingRuntime);
2148 if (!comp->realms().empty()) {
2149 *write++ = comp;
2150 keepAtleastOne = false;
2151 } else {
2152 comp->destroy(gcx);
2155 compartments().shrinkTo(write - compartments().begin());
2156 MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
2157 MOZ_ASSERT_IF(destroyingRuntime, compartments().empty());
2160 void Compartment::sweepRealms(JS::GCContext* gcx, bool keepAtleastOne,
2161 bool destroyingRuntime) {
2162 MOZ_ASSERT(!realms().empty());
2163 MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);
2165 Realm** read = realms().begin();
2166 Realm** end = realms().end();
2167 Realm** write = read;
2168 while (read < end) {
2169 Realm* realm = *read++;
2172 * Don't delete the last realm if keepAtleastOne is still true, meaning
2173 * all the other realms were deleted.
2175 bool dontDelete = read == end && keepAtleastOne;
2176 if ((realm->marked() || dontDelete) && !destroyingRuntime) {
2177 *write++ = realm;
2178 keepAtleastOne = false;
2179 } else {
2180 realm->destroy(gcx);
2183 realms().shrinkTo(write - realms().begin());
2184 MOZ_ASSERT_IF(keepAtleastOne, !realms().empty());
2185 MOZ_ASSERT_IF(destroyingRuntime, realms().empty());
2188 void GCRuntime::sweepZones(JS::GCContext* gcx, bool destroyingRuntime) {
2189 MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);
2191 if (numActiveZoneIters) {
2192 return;
2195 assertBackgroundSweepingFinished();
2197 // Sweep zones following the atoms zone.
2198 MOZ_ASSERT(zones()[0]->isAtomsZone());
2199 Zone** read = zones().begin() + 1;
2200 Zone** end = zones().end();
2201 Zone** write = read;
2203 while (read < end) {
2204 Zone* zone = *read++;
2206 if (zone->wasGCStarted()) {
2207 MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
2208 AutoSetThreadIsSweeping threadIsSweeping(zone);
2209 const bool zoneIsDead =
2210 zone->arenas.arenaListsAreEmpty() && !zone->hasMarkedRealms();
2211 MOZ_ASSERT_IF(destroyingRuntime, zoneIsDead);
2212 if (zoneIsDead) {
2213 zone->arenas.checkEmptyFreeLists();
2214 zone->sweepCompartments(gcx, false, destroyingRuntime);
2215 MOZ_ASSERT(zone->compartments().empty());
2216 zone->destroy(gcx);
2217 continue;
2219 zone->sweepCompartments(gcx, true, destroyingRuntime);
2221 *write++ = zone;
2223 zones().shrinkTo(write - zones().begin());
2226 void ArenaLists::checkEmptyArenaList(AllocKind kind) {
2227 MOZ_ASSERT(arenaList(kind).isEmpty());
2230 void GCRuntime::purgeRuntimeForMinorGC() {
2231 for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
2232 zone->externalStringCache().purge();
2233 zone->functionToStringCache().purge();
2237 void GCRuntime::purgeRuntime() {
2238 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);
2240 for (GCRealmsIter realm(rt); !realm.done(); realm.next()) {
2241 realm->purge();
2244 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2245 zone->purgeAtomCache();
2246 zone->externalStringCache().purge();
2247 zone->functionToStringCache().purge();
2248 zone->boundPrefixCache().clearAndCompact();
2249 zone->shapeZone().purgeShapeCaches(rt->gcContext());
2252 JSContext* cx = rt->mainContextFromOwnThread();
2253 queueUnusedLifoBlocksForFree(&cx->tempLifoAlloc());
2254 cx->interpreterStack().purge(rt);
2255 cx->frontendCollectionPool().purge();
2257 rt->caches().purge();
2259 if (rt->isMainRuntime()) {
2260 SharedImmutableStringsCache::getSingleton().purge();
2263 MOZ_ASSERT(marker().unmarkGrayStack.empty());
2264 marker().unmarkGrayStack.clearAndFree();
2267 bool GCRuntime::shouldPreserveJITCode(Realm* realm,
2268 const TimeStamp& currentTime,
2269 JS::GCReason reason,
2270 bool canAllocateMoreCode,
2271 bool isActiveCompartment) {
2272 if (cleanUpEverything) {
2273 return false;
2275 if (!canAllocateMoreCode) {
2276 return false;
2279 if (isActiveCompartment) {
2280 return true;
2282 if (alwaysPreserveCode) {
2283 return true;
2285 if (realm->preserveJitCode()) {
2286 return true;
2288 if (IsCurrentlyAnimating(realm->lastAnimationTime, currentTime) &&
2289 DiscardedCodeRecently(realm->zone(), currentTime)) {
2290 return true;
2292 if (reason == JS::GCReason::DEBUG_GC) {
2293 return true;
2296 return false;
2299 #ifdef DEBUG
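// Debug-only tracer that checks that edges from a cell stay within its
// compartment (or at least its zone), unless they go through the
// cross-compartment wrapper map or a debugger weakmap.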
2300 class CompartmentCheckTracer final : public JS::CallbackTracer {
2301 void onChild(JS::GCCellPtr thing, const char* name) override;
2302 bool edgeIsInCrossCompartmentMap(JS::GCCellPtr dst);
2304 public:
2305 explicit CompartmentCheckTracer(JSRuntime* rt)
2306 : JS::CallbackTracer(rt, JS::TracerKind::CompartmentCheck,
2307 JS::WeakEdgeTraceAction::Skip) {}
2309 Cell* src = nullptr;
2310 JS::TraceKind srcKind = JS::TraceKind::Null;
2311 Zone* zone = nullptr;
2312 Compartment* compartment = nullptr;
2315 static bool InCrossCompartmentMap(JSRuntime* rt, JSObject* src,
2316 JS::GCCellPtr dst) {
2317 // Cross compartment edges are either in the cross compartment map or in a
2318 // debugger weakmap.
2320 Compartment* srccomp = src->compartment();
2322 if (dst.is<JSObject>()) {
2323 if (ObjectWrapperMap::Ptr p = srccomp->lookupWrapper(&dst.as<JSObject>())) {
2324 if (*p->value().unsafeGet() == src) {
2325 return true;
2330 if (DebugAPI::edgeIsInDebuggerWeakmap(rt, src, dst)) {
2331 return true;
2334 return false;
2337 void CompartmentCheckTracer::onChild(JS::GCCellPtr thing, const char* name) {
2338 Compartment* comp =
2339 MapGCThingTyped(thing, [](auto t) { return t->maybeCompartment(); });
2340 if (comp && compartment) {
2341 MOZ_ASSERT(comp == compartment || edgeIsInCrossCompartmentMap(thing));
2342 } else {
2343 TenuredCell* tenured = &thing.asCell()->asTenured();
2344 Zone* thingZone = tenured->zoneFromAnyThread();
2345 MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
2349 bool CompartmentCheckTracer::edgeIsInCrossCompartmentMap(JS::GCCellPtr dst) {
2350 return srcKind == JS::TraceKind::Object &&
2351 InCrossCompartmentMap(runtime(), static_cast<JSObject*>(src), dst);
2354 static bool IsPartiallyInitializedObject(Cell* cell) {
2355 if (!cell->is<JSObject>()) {
2356 return false;
2359 JSObject* obj = cell->as<JSObject>();
2360 if (!obj->is<NativeObject>()) {
2361 return false;
2364 NativeObject* nobj = &obj->as<NativeObject>();
2366 // Check for failed allocation of dynamic slots in
2367 // NativeObject::allocateInitialSlots.
2368 size_t nDynamicSlots = NativeObject::calculateDynamicSlots(
2369 nobj->numFixedSlots(), nobj->slotSpan(), nobj->getClass());
2370 return nDynamicSlots != 0 && !nobj->hasDynamicSlots();
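// Debug check: trace the children of every tenured cell in the collected zones
// with CompartmentCheckTracer to catch cross-compartment pointers that bypass
// wrappers.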
2373 void GCRuntime::checkForCompartmentMismatches() {
2374 JSContext* cx = rt->mainContextFromOwnThread();
2375 if (cx->disableStrictProxyCheckingCount) {
2376 return;
2379 CompartmentCheckTracer trc(rt);
2380 AutoAssertEmptyNursery empty(cx);
2381 for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
2382 trc.zone = zone;
2383 for (auto thingKind : AllAllocKinds()) {
2384 for (auto i = zone->cellIterUnsafe<TenuredCell>(thingKind, empty);
2385 !i.done(); i.next()) {
2386 // We may encounter partially initialized objects. These are unreachable
2387 // and it's safe to ignore them.
2388 if (IsPartiallyInitializedObject(i.getCell())) {
2389 continue;
2392 trc.src = i.getCell();
2393 trc.srcKind = MapAllocToTraceKind(thingKind);
2394 trc.compartment = MapGCThingTyped(
2395 trc.src, trc.srcKind, [](auto t) { return t->maybeCompartment(); });
2396 JS::TraceChildren(&trc, JS::GCCellPtr(trc.src, trc.srcKind));
2401 #endif
2403 static bool ShouldCleanUpEverything(JS::GCOptions options) {
2404 // During shutdown, we must clean everything up, for the sake of leak
2405 // detection. When a runtime has no contexts, or we're doing a GC before a
2406 // shutdown CC, those are strong indications that we're shutting down.
2407 return options == JS::GCOptions::Shutdown || options == JS::GCOptions::Shrink;
2410 static bool ShouldUseBackgroundThreads(bool isIncremental,
2411 JS::GCReason reason) {
2412 bool shouldUse = isIncremental && CanUseExtraThreads();
2413 MOZ_ASSERT_IF(reason == JS::GCReason::DESTROY_RUNTIME, !shouldUse);
2414 return shouldUse;
2417 void GCRuntime::startCollection(JS::GCReason reason) {
2418 checkGCStateNotInUse();
2419 MOZ_ASSERT_IF(
2420 isShuttingDown(),
2421 isShutdownGC() ||
2422 reason == JS::GCReason::XPCONNECT_SHUTDOWN /* Bug 1650075 */);
2424 initialReason = reason;
2425 cleanUpEverything = ShouldCleanUpEverything(gcOptions());
2426 isCompacting = shouldCompact();
2427 rootsRemoved = false;
2428 sweepGroupIndex = 0;
2429 lastGCStartTime_ = TimeStamp::Now();
2431 #ifdef DEBUG
2432 if (isShutdownGC()) {
2433 hadShutdownGC = true;
2436 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
2437 zone->gcSweepGroupIndex = 0;
2439 #endif
2442 static void RelazifyFunctions(Zone* zone, AllocKind kind) {
2443 MOZ_ASSERT(kind == AllocKind::FUNCTION ||
2444 kind == AllocKind::FUNCTION_EXTENDED);
2446 JSRuntime* rt = zone->runtimeFromMainThread();
2447 AutoAssertEmptyNursery empty(rt->mainContextFromOwnThread());
2449 for (auto i = zone->cellIterUnsafe<JSObject>(kind, empty); !i.done();
2450 i.next()) {
2451 JSFunction* fun = &i->as<JSFunction>();
2452 // When iterating over the GC-heap, we may encounter function objects that
2453 // are incomplete (missing a BaseScript when we expect one). We must check
2454 // for this case before we can call JSFunction::hasBytecode().
2455 if (fun->isIncomplete()) {
2456 continue;
2458 if (fun->hasBytecode()) {
2459 fun->maybeRelazify(rt);
2464 static bool ShouldCollectZone(Zone* zone, JS::GCReason reason) {
2465 // If we are repeating a GC because we noticed dead compartments haven't
2466 // been collected, then only collect zones containing those compartments.
2467 if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
2468 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
2469 if (comp->gcState.scheduledForDestruction) {
2470 return true;
2474 return false;
2477 // Otherwise we only collect scheduled zones.
2478 return zone->isGCScheduled();
2481 bool GCRuntime::prepareZonesForCollection(JS::GCReason reason,
2482 bool* isFullOut) {
2483 #ifdef DEBUG
2484 /* Assert that zone state is as we expect */
2485 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
2486 MOZ_ASSERT(!zone->isCollecting());
2487 MOZ_ASSERT_IF(!zone->isAtomsZone(), !zone->compartments().empty());
2488 for (auto i : AllAllocKinds()) {
2489 MOZ_ASSERT(zone->arenas.collectingArenaList(i).isEmpty());
2492 #endif
2494 *isFullOut = true;
2495 bool any = false;
2497 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
2498 /* Set up which zones will be collected. */
2499 bool shouldCollect = ShouldCollectZone(zone, reason);
2500 if (shouldCollect) {
2501 any = true;
2502 zone->changeGCState(Zone::NoGC, Zone::Prepare);
2503 } else {
2504 *isFullOut = false;
2507 zone->setWasCollected(shouldCollect);
2510 /* Check that at least one zone is scheduled for collection. */
2511 return any;
2514 void GCRuntime::discardJITCodeForGC() {
2515 size_t nurserySiteResetCount = 0;
2516 size_t pretenuredSiteResetCount = 0;
2518 js::CancelOffThreadIonCompile(rt, JS::Zone::Prepare);
2519 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2520 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);
2522 // We may need to reset allocation sites and discard JIT code to recover if
2523 // we find object lifetimes have changed.
2524 PretenuringZone& pz = zone->pretenuring;
2525 bool resetNurserySites = pz.shouldResetNurseryAllocSites();
2526 bool resetPretenuredSites = pz.shouldResetPretenuredAllocSites();
2528 if (!zone->isPreservingCode()) {
2529 Zone::DiscardOptions options;
2530 options.discardJitScripts = true;
2531 options.resetNurseryAllocSites = resetNurserySites;
2532 options.resetPretenuredAllocSites = resetPretenuredSites;
2533 zone->discardJitCode(rt->gcContext(), options);
2534 } else if (resetNurserySites || resetPretenuredSites) {
2535 zone->resetAllocSitesAndInvalidate(resetNurserySites,
2536 resetPretenuredSites);
2539 if (resetNurserySites) {
2540 nurserySiteResetCount++;
2542 if (resetPretenuredSites) {
2543 pretenuredSiteResetCount++;
2547 if (nursery().reportPretenuring()) {
2548 if (nurserySiteResetCount) {
2549 fprintf(
2550 stderr,
2551 "GC reset nursery alloc sites and invalidated code in %zu zones\n",
2552 nurserySiteResetCount);
2554 if (pretenuredSiteResetCount) {
2555 fprintf(
2556 stderr,
2557 "GC reset pretenured alloc sites and invalidated code in %zu zones\n",
2558 pretenuredSiteResetCount);
2563 void GCRuntime::relazifyFunctionsForShrinkingGC() {
2564 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
2565 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2566 RelazifyFunctions(zone, AllocKind::FUNCTION);
2567 RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
2571 void GCRuntime::purgePropMapTablesForShrinkingGC() {
2572 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_PROP_MAP_TABLES);
2573 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2574 if (!canRelocateZone(zone) || zone->keepPropMapTables()) {
2575 continue;
2578 // Note: CompactPropMaps never have a table.
2579 for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
2580 map.next()) {
2581 if (map->asLinked()->hasTable()) {
2582 map->asLinked()->purgeTable(rt->gcContext());
2585 for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
2586 map.next()) {
2587 if (map->asLinked()->hasTable()) {
2588 map->asLinked()->purgeTable(rt->gcContext());
2594 // The debugger keeps track of the URLs for the sources of each realm's scripts.
2595 // These URLs are purged on shrinking GCs.
2596 void GCRuntime::purgeSourceURLsForShrinkingGC() {
2597 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_SOURCE_URLS);
2598 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2599 // URLs are not tracked for realms in the system zone.
2600 if (!canRelocateZone(zone) || zone->isSystemZone()) {
2601 continue;
2603 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
2604 for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
2605 GlobalObject* global = realm.get()->unsafeUnbarrieredMaybeGlobal();
2606 if (global) {
2607 global->clearSourceURLSHolder();
2614 void GCRuntime::unmarkWeakMaps() {
2615 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2616 /* Unmark all weak maps in the zones being collected. */
2617 WeakMapBase::unmarkZone(zone);
2621 bool GCRuntime::beginPreparePhase(JS::GCReason reason, AutoGCSession& session) {
2622 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);
2624 if (!prepareZonesForCollection(reason, &isFull.ref())) {
2625 return false;
2629 * Start a parallel task to clear all mark state for the zones we are
2630 * collecting. This is linear in the size of the heap we are collecting and so
2631 * can be slow. This usually happens concurrently with the mutator and GC
2632 * proper does not start until this is complete.
2634 unmarkTask.initZones();
2635 if (useBackgroundThreads) {
2636 unmarkTask.start();
2637 } else {
2638 unmarkTask.runFromMainThread();
2642 * Process any queued source compressions during the start of a major
2643 * GC.
2645 * Bug 1650075: When we start passing GCOptions::Shutdown for
2646 * GCReason::XPCONNECT_SHUTDOWN GCs we can remove the extra check.
2648 if (!isShutdownGC() && reason != JS::GCReason::XPCONNECT_SHUTDOWN) {
2649 StartHandlingCompressionsOnGC(rt);
2652 return true;
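// The background unmark task clears the mark state of every arena in the zones
// being collected, off the main thread when helper threads are available.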
2655 BackgroundUnmarkTask::BackgroundUnmarkTask(GCRuntime* gc)
2656 : GCParallelTask(gc, gcstats::PhaseKind::UNMARK) {}
2658 void BackgroundUnmarkTask::initZones() {
2659 MOZ_ASSERT(isIdle());
2660 MOZ_ASSERT(zones.empty());
2661 MOZ_ASSERT(!isCancelled());
2663 // We can't safely iterate the zones vector from another thread so we copy the
2664 // zones to be collected into another vector.
2665 AutoEnterOOMUnsafeRegion oomUnsafe;
2666 for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
2667 if (!zones.append(zone.get())) {
2668 oomUnsafe.crash("BackgroundUnmarkTask::initZones");
2671 zone->arenas.clearFreeLists();
2672 zone->arenas.moveArenasToCollectingLists();
2676 void BackgroundUnmarkTask::run(AutoLockHelperThreadState& helperThreadLock) {
2677 AutoUnlockHelperThreadState unlock(helperThreadLock);
2679 for (Zone* zone : zones) {
2680 for (auto kind : AllAllocKinds()) {
2681 ArenaList& arenas = zone->arenas.collectingArenaList(kind);
2682 for (ArenaListIter arena(arenas.head()); !arena.done(); arena.next()) {
2683 arena->unmarkAll();
2684 if (isCancelled()) {
2685 break;
2691 zones.clear();
2694 void GCRuntime::endPreparePhase(JS::GCReason reason) {
2695 MOZ_ASSERT(unmarkTask.isIdle());
2697 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2699 * In an incremental GC, clear the arena free lists to ensure that subsequent
2700 * allocations refill them and the newly allocated cells end up marked. See
2701 * arenaAllocatedDuringGC().
2703 zone->arenas.clearFreeLists();
2705 zone->markedStrings = 0;
2706 zone->finalizedStrings = 0;
2708 zone->setPreservingCode(false);
2710 #ifdef JS_GC_ZEAL
2711 if (hasZealMode(ZealMode::YieldBeforeRootMarking)) {
2712 for (auto kind : AllAllocKinds()) {
2713 for (ArenaIter arena(zone, kind); !arena.done(); arena.next()) {
2714 arena->checkNoMarkedCells();
2718 #endif
2721 // Discard JIT code more aggressively if the process is approaching its
2722 // executable code limit.
2723 bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();
2724 auto currentTime = TimeStamp::Now();
2726 Compartment* activeCompartment = nullptr;
2727 jit::JitActivationIterator activation(rt->mainContextFromOwnThread());
2728 if (!activation.done()) {
2729 activeCompartment = activation->compartment();
2732 for (CompartmentsIter c(rt); !c.done(); c.next()) {
2733 c->gcState.scheduledForDestruction = false;
2734 c->gcState.maybeAlive = false;
2735 c->gcState.hasEnteredRealm = false;
2736 bool isActiveCompartment = c == activeCompartment;
2737 for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
2738 if (r->shouldTraceGlobal() || !r->zone()->isGCScheduled()) {
2739 c->gcState.maybeAlive = true;
2741 if (shouldPreserveJITCode(r, currentTime, reason, canAllocateMoreCode,
2742 isActiveCompartment)) {
2743 r->zone()->setPreservingCode(true);
2745 if (r->hasBeenEnteredIgnoringJit()) {
2746 c->gcState.hasEnteredRealm = true;
2752 * Perform remaining preparation work that must take place in the first true
2753 * GC slice.
2757 gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::PREPARE);
2759 AutoLockHelperThreadState helperLock;
2761 /* Clear mark state for WeakMaps in parallel with other work. */
2762 AutoRunParallelTask unmarkWeakMaps(this, &GCRuntime::unmarkWeakMaps,
2763 gcstats::PhaseKind::UNMARK_WEAKMAPS,
2764 GCUse::Unspecified, helperLock);
2766 AutoUnlockHelperThreadState unlock(helperLock);
2768 // Discard JIT code. For incremental collections, the sweep phase may
2769 // also discard JIT code.
2770 discardJITCodeForGC();
2771 haveDiscardedJITCodeThisSlice = true;
2774 * Relazify functions after discarding JIT code (we can't relazify
2775 * functions with JIT code) and before the actual mark phase, so that
2776 * the current GC can collect the JSScripts we're unlinking here. We do
2777 * this only when we're performing a shrinking GC, as too much
2778 * relazification can cause performance issues when we have to reparse
2779 * the same functions over and over.
2781 if (isShrinkingGC()) {
2782 relazifyFunctionsForShrinkingGC();
2783 purgePropMapTablesForShrinkingGC();
2784 purgeSourceURLsForShrinkingGC();
2788 * We must purge the runtime at the beginning of an incremental GC. The
2789 * danger if we purge later is that the snapshot invariant of
2790 * incremental GC will be broken, as follows. If some object is
2791 * reachable only through some cache (say the dtoaCache) then it will
2792 * not be part of the snapshot. If we purge after root marking, then
2793 * the mutator could obtain a pointer to the object and start using
2794 * it. This object might never be marked, so a GC hazard would exist.
2796 purgeRuntime();
2798 startBackgroundFreeAfterMinorGC();
2800 if (isShutdownGC()) {
2801 /* Clear any engine roots that may hold external data live. */
2802 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2803 zone->clearRootsForShutdownGC();
2806 #ifdef DEBUG
2807 testMarkQueue.clear();
2808 queuePos = 0;
2809 #endif
2813 #ifdef DEBUG
2814 if (fullCompartmentChecks) {
2815 checkForCompartmentMismatches();
2817 #endif
2820 AutoUpdateLiveCompartments::AutoUpdateLiveCompartments(GCRuntime* gc) : gc(gc) {
2821 for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
2822 c->gcState.hasMarkedCells = false;
2826 AutoUpdateLiveCompartments::~AutoUpdateLiveCompartments() {
2827 for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
2828 if (c->gcState.hasMarkedCells) {
2829 c->gcState.maybeAlive = true;
2834 Zone::GCState Zone::initialMarkingState() const {
2835 if (isAtomsZone()) {
2836 // Don't delay gray marking in the atoms zone like we do in other zones.
2837 return MarkBlackAndGray;
2840 return MarkBlackOnly;
2843 void GCRuntime::beginMarkPhase(AutoGCSession& session) {
2845 * Mark phase.
2847 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
2849 // This is the slice we actually start collecting. The number can be used to
2850 // check whether a major GC has started so we must not increment it until we
2851 // get here.
2852 incMajorGcNumber();
2854 #ifdef DEBUG
2855 queuePos = 0;
2856 queueMarkColor.reset();
2857 #endif
2859 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2860 // Incremental marking barriers are enabled at this point.
2861 zone->changeGCState(Zone::Prepare, zone->initialMarkingState());
2863 // Merge arenas allocated during the prepare phase, then move all arenas to
2864 // the collecting arena lists.
2865 zone->arenas.mergeArenasFromCollectingLists();
2866 zone->arenas.moveArenasToCollectingLists();
2868 for (RealmsInZoneIter realm(zone); !realm.done(); realm.next()) {
2869 realm->clearAllocatedDuringGC();
2873 updateSchedulingStateOnGCStart();
2874 stats().measureInitialHeapSize();
2876 useParallelMarking = SingleThreadedMarking;
2877 if (canMarkInParallel() && initParallelMarkers()) {
2878 useParallelMarking = AllowParallelMarking;
2881 MOZ_ASSERT(!hasDelayedMarking());
2882 for (auto& marker : markers) {
2883 marker->start();
2886 if (rt->isBeingDestroyed()) {
2887 checkNoRuntimeRoots(session);
2888 } else {
2889 AutoUpdateLiveCompartments updateLive(this);
2890 marker().setRootMarkingMode(true);
2891 traceRuntimeForMajorGC(marker().tracer(), session);
2892 marker().setRootMarkingMode(false);
2896 void GCRuntime::findDeadCompartments() {
2897 gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::FIND_DEAD_COMPARTMENTS);
2900 * This code ensures that if a compartment is "dead", then it will be
2901 * collected in this GC. A compartment is considered dead if its maybeAlive
2902 * flag is false. The maybeAlive flag is set if:
2904 * (1) the compartment has been entered (set in beginMarkPhase() above)
2905 * (2) the compartment's zone is not being collected (set in
2906 * beginMarkPhase() above)
2907 * (3) an object in the compartment was marked during root marking, either
2908 * as a black root or a gray root. This is arranged by
2909 * SetCompartmentHasMarkedCells and AutoUpdateLiveCompartments.
2910 * (4) the compartment has incoming cross-compartment edges from another
2911 * compartment that has maybeAlive set (set by this method).
2913 * If maybeAlive is false, then we set the scheduledForDestruction flag.
2914 * At the end of the GC, we look for compartments where
2915 * scheduledForDestruction is true. These are compartments that were somehow
2916 * "revived" during the incremental GC. If any are found, we do a special,
2917 * non-incremental GC of those compartments to try to collect them.
2919 * Compartments can be revived for a variety of reasons. One reason is bug
2920 * 811587, where a reflector that was dead can be revived by DOM code that
2921 * still refers to the underlying DOM node.
2923 * Read barriers and allocations can also cause revival. This might happen
2924 * during a function like JS_TransplantObject, which iterates over all
2925 * compartments, live or dead, and operates on their objects. See bug 803376
2926 * for details on this problem. To avoid the problem, we try to avoid
2927 * allocation and read barriers during JS_TransplantObject and the like.
2930 // Propagate the maybeAlive flag via cross-compartment edges.
2932 Vector<Compartment*, 0, js::SystemAllocPolicy> workList;
2934 for (CompartmentsIter comp(rt); !comp.done(); comp.next()) {
2935 if (comp->gcState.maybeAlive) {
2936 if (!workList.append(comp)) {
2937 return;
2942 while (!workList.empty()) {
2943 Compartment* comp = workList.popCopy();
2944 for (Compartment::WrappedObjectCompartmentEnum e(comp); !e.empty();
2945 e.popFront()) {
2946 Compartment* dest = e.front();
2947 if (!dest->gcState.maybeAlive) {
2948 dest->gcState.maybeAlive = true;
2949 if (!workList.append(dest)) {
2950 return;
2956 // Set scheduledForDestruction based on maybeAlive.
2958 for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
2959 MOZ_ASSERT(!comp->gcState.scheduledForDestruction);
2960 if (!comp->gcState.maybeAlive) {
2961 comp->gcState.scheduledForDestruction = true;
2966 void GCRuntime::updateSchedulingStateOnGCStart() {
2967 heapSize.updateOnGCStart();
2969 // Update memory counters for the zones we are collecting.
2970 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2971 zone->updateSchedulingStateOnGCStart();
2975 inline bool GCRuntime::canMarkInParallel() const {
2976 MOZ_ASSERT(state() >= gc::State::MarkRoots);
2978 #if defined(DEBUG) || defined(JS_OOM_BREAKPOINT)
2979 // OOM testing limits the engine to using a single helper thread.
2980 if (oom::simulator.targetThread() == THREAD_TYPE_GCPARALLEL) {
2981 return false;
2983 #endif
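// Parallel marking needs more than one marker and a heap large enough (as
// measured at the start of this collection) to be worth the extra threads.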
2985 return markers.length() > 1 && stats().initialCollectedBytes() >=
2986 tunables.parallelMarkingThresholdBytes();
2989 bool GCRuntime::initParallelMarkers() {
2990 MOZ_ASSERT(canMarkInParallel());
2992 // Allocate stack for parallel markers. The first marker always has stack
2993 // allocated. Other markers have their stack freed in
2994 // GCRuntime::finishCollection.
2995 for (size_t i = 1; i < markers.length(); i++) {
2996 if (!markers[i]->initStack()) {
2997 return false;
3001 return true;
3004 IncrementalProgress GCRuntime::markUntilBudgetExhausted(
3005 SliceBudget& sliceBudget, ParallelMarking allowParallelMarking,
3006 ShouldReportMarkTime reportTime) {
3007 // Run a marking slice and return whether the stack is now empty.
3009 AutoMajorGCProfilerEntry s(this);
3011 if (initialState != State::Mark) {
3012 sliceBudget.forceCheck();
3013 if (sliceBudget.isOverBudget()) {
3014 return NotFinished;
3018 if (processTestMarkQueue() == QueueYielded) {
3019 return NotFinished;
3022 if (allowParallelMarking) {
3023 MOZ_ASSERT(canMarkInParallel());
3024 MOZ_ASSERT(parallelMarkingEnabled);
3025 MOZ_ASSERT(reportTime);
3026 MOZ_ASSERT(!isBackgroundMarking());
3028 ParallelMarker pm(this);
3029 if (!pm.mark(sliceBudget)) {
3030 return NotFinished;
3033 assertNoMarkingWork();
3034 return Finished;
3037 #ifdef DEBUG
3038 AutoSetThreadIsMarking threadIsMarking;
3039 #endif // DEBUG
3041 return marker().markUntilBudgetExhausted(sliceBudget, reportTime)
3042 ? Finished
3043 : NotFinished;
3046 void GCRuntime::drainMarkStack() {
3047 auto unlimited = SliceBudget::unlimited();
3048 MOZ_RELEASE_ASSERT(marker().markUntilBudgetExhausted(unlimited));
3051 #ifdef DEBUG
3053 const GCVector<HeapPtr<JS::Value>, 0, SystemAllocPolicy>&
3054 GCRuntime::getTestMarkQueue() const {
3055 return testMarkQueue.get();
3058 bool GCRuntime::appendTestMarkQueue(const JS::Value& value) {
3059 return testMarkQueue.append(value);
3062 void GCRuntime::clearTestMarkQueue() {
3063 testMarkQueue.clear();
3064 queuePos = 0;
3067 size_t GCRuntime::testMarkQueuePos() const { return queuePos; }
3069 #endif
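// Process the debug-only test mark queue. Each entry is either an object to
// mark immediately or a string command ("yield", "drain", "set-color-gray",
// "set-color-black", "unset-color", "enter-weak-marking-mode",
// "abort-weak-marking-mode") that steers the marker. This is used by GC tests.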
3071 GCRuntime::MarkQueueProgress GCRuntime::processTestMarkQueue() {
3072 #ifdef DEBUG
3073 if (testMarkQueue.empty()) {
3074 return QueueComplete;
3077 if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
3078 state() != State::Sweep) {
3079 return QueueSuspended;
3082 // If the queue wants to be gray marking, but we've pushed a black object
3083 // since set-color-gray was processed, then we can't switch to gray and must
3084 // again wait until gray marking is possible.
3086 // Remove this code if the restriction against marking gray during black is
3087 // relaxed.
3088 if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
3089 marker().hasBlackEntries()) {
3090 return QueueSuspended;
3093 // If the queue wants to be marking a particular color, switch to that color.
3094 // In any case, restore the mark color to whatever it was when we entered
3095 // this function.
3096 bool willRevertToGray = marker().markColor() == MarkColor::Gray;
3097 AutoSetMarkColor autoRevertColor(
3098 marker(), queueMarkColor.valueOr(marker().markColor()));
3100 // Process the mark queue by taking each object in turn, pushing it onto the
3101 // mark stack, and processing just the top element with processMarkStackTop
3102 // without recursing into reachable objects.
3103 while (queuePos < testMarkQueue.length()) {
3104 Value val = testMarkQueue[queuePos++].get();
3105 if (val.isObject()) {
3106 JSObject* obj = &val.toObject();
3107 JS::Zone* zone = obj->zone();
3108 if (!zone->isGCMarking() || obj->isMarkedAtLeast(marker().markColor())) {
3109 continue;
3112 // If we have started sweeping, obey sweep group ordering. But note that
3113 // we will first be called during the initial sweep slice, when the sweep
3114 // group indexes have not yet been computed. In that case, we can mark
3115 // freely.
3116 if (state() == State::Sweep && initialState != State::Sweep) {
3117 if (zone->gcSweepGroupIndex < getCurrentSweepGroupIndex()) {
3118 // Too late. This must have been added after we started collecting,
3119 // and we've already processed its sweep group. Skip it.
3120 continue;
3122 if (zone->gcSweepGroupIndex > getCurrentSweepGroupIndex()) {
3123 // Not ready yet. Wait until we reach the object's sweep group.
3124 queuePos--;
3125 return QueueSuspended;
3129 if (marker().markColor() == MarkColor::Gray &&
3130 zone->isGCMarkingBlackOnly()) {
3131 // Have not yet reached the point where we can mark this object, so
3132 // continue with the GC.
3133 queuePos--;
3134 return QueueSuspended;
3137 if (marker().markColor() == MarkColor::Black && willRevertToGray) {
3138 // If we put any black objects on the stack, we wouldn't be able to
3139 // return to gray marking. So delay the marking until we're back to
3140 // black marking.
3141 queuePos--;
3142 return QueueSuspended;
3145 // Mark the object and push it onto the stack.
3146 size_t oldPosition = marker().stack.position();
3147 marker().markAndTraverse<NormalMarkingOptions>(obj);
3149 // If we overflow the stack here and delay marking, then we won't be
3150 // testing what we think we're testing.
3151 if (marker().stack.position() == oldPosition) {
3152 MOZ_ASSERT(obj->asTenured().arena()->onDelayedMarkingList());
3153 AutoEnterOOMUnsafeRegion oomUnsafe;
3154 oomUnsafe.crash("Overflowed stack while marking test queue");
3157 SliceBudget unlimited = SliceBudget::unlimited();
3158 marker().processMarkStackTop<NormalMarkingOptions>(unlimited);
3159 } else if (val.isString()) {
3160 JSLinearString* str = &val.toString()->asLinear();
3161 if (js::StringEqualsLiteral(str, "yield") && isIncrementalGc()) {
3162 return QueueYielded;
3165 if (js::StringEqualsLiteral(str, "enter-weak-marking-mode") ||
3166 js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
3167 if (marker().isRegularMarking()) {
3168 // We can't enter weak marking mode at just any time, so instead
3169 // we'll stop processing the queue and continue on with the GC. Once
3170 // we enter weak marking mode, we can continue to the rest of the
3171 // queue. Note that we will also suspend for aborting, and then abort
3172 // the earliest following weak marking mode.
3173 queuePos--;
3174 return QueueSuspended;
3176 if (js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
3177 marker().abortLinearWeakMarking();
3179 } else if (js::StringEqualsLiteral(str, "drain")) {
3180 auto unlimited = SliceBudget::unlimited();
3181 MOZ_RELEASE_ASSERT(
3182 marker().markUntilBudgetExhausted(unlimited, DontReportMarkTime));
3183 } else if (js::StringEqualsLiteral(str, "set-color-gray")) {
3184 queueMarkColor = mozilla::Some(MarkColor::Gray);
3185 if (state() != State::Sweep || marker().hasBlackEntries()) {
3186 // Cannot mark gray yet, so continue with the GC.
3187 queuePos--;
3188 return QueueSuspended;
3190 marker().setMarkColor(MarkColor::Gray);
3191 } else if (js::StringEqualsLiteral(str, "set-color-black")) {
3192 queueMarkColor = mozilla::Some(MarkColor::Black);
3193 marker().setMarkColor(MarkColor::Black);
3194 } else if (js::StringEqualsLiteral(str, "unset-color")) {
3195 queueMarkColor.reset();
3199 #endif
3201 return QueueComplete;
3204 static bool IsEmergencyGC(JS::GCReason reason) {
3205 return reason == JS::GCReason::LAST_DITCH ||
3206 reason == JS::GCReason::MEM_PRESSURE;
3209 void GCRuntime::finishCollection(JS::GCReason reason) {
3210 assertBackgroundSweepingFinished();
3212 MOZ_ASSERT(!hasDelayedMarking());
3213 for (size_t i = 0; i < markers.length(); i++) {
3214 const auto& marker = markers[i];
3215 marker->stop();
3216 if (i == 0) {
3217 marker->resetStackCapacity();
3218 } else {
3219 marker->freeStack();
3223 maybeStopPretenuring();
3225 if (IsEmergencyGC(reason)) {
3226 waitBackgroundFreeEnd();
3229 TimeStamp currentTime = TimeStamp::Now();
3231 updateSchedulingStateAfterCollection(currentTime);
3233 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3234 zone->changeGCState(Zone::Finished, Zone::NoGC);
3235 zone->notifyObservingDebuggers();
3238 #ifdef JS_GC_ZEAL
3239 clearSelectedForMarking();
3240 #endif
3242 schedulingState.updateHighFrequencyMode(lastGCEndTime_, currentTime,
3243 tunables);
3244 lastGCEndTime_ = currentTime;
3246 checkGCStateNotInUse();
3249 void GCRuntime::checkGCStateNotInUse() {
3250 #ifdef DEBUG
3251 for (auto& marker : markers) {
3252 MOZ_ASSERT(!marker->isActive());
3253 MOZ_ASSERT(marker->isDrained());
3255 MOZ_ASSERT(!hasDelayedMarking());
3257 MOZ_ASSERT(!lastMarkSlice);
3259 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
3260 if (zone->wasCollected()) {
3261 zone->arenas.checkGCStateNotInUse();
3263 MOZ_ASSERT(!zone->wasGCStarted());
3264 MOZ_ASSERT(!zone->needsIncrementalBarrier());
3265 MOZ_ASSERT(!zone->isOnList());
3268 MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
3269 MOZ_ASSERT(cellsToAssertNotGray.ref().empty());
3271 AutoLockHelperThreadState lock;
3272 MOZ_ASSERT(!requestSliceAfterBackgroundTask);
3273 MOZ_ASSERT(unmarkTask.isIdle(lock));
3274 MOZ_ASSERT(markTask.isIdle(lock));
3275 MOZ_ASSERT(sweepTask.isIdle(lock));
3276 MOZ_ASSERT(decommitTask.isIdle(lock));
3277 #endif
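// Re-enable nursery string allocation for zones where a large fraction of
// tenured strings died in this collection, since pretenuring strings there is
// no longer paying off.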
3280 void GCRuntime::maybeStopPretenuring() {
3281 nursery().maybeStopPretenuring(this);
3283 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3284 if (!zone->nurseryStringsDisabled) {
3285 continue;
3288 // Count the number of strings before the major GC.
3289 size_t numStrings = zone->markedStrings + zone->finalizedStrings;
3290 double rate = double(zone->finalizedStrings) / double(numStrings);
3291 if (rate > tunables.stopPretenureStringThreshold()) {
3292 zone->markedStrings = 0;
3293 zone->finalizedStrings = 0;
3294 zone->nurseryStringsDisabled = false;
3295 nursery().updateAllocFlagsForZone(zone);
3300 void GCRuntime::updateSchedulingStateAfterCollection(TimeStamp currentTime) {
3301 TimeDuration totalGCTime = stats().totalGCTime();
3302 size_t totalInitialBytes = stats().initialCollectedBytes();
3304 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3305 if (tunables.balancedHeapLimitsEnabled() && totalInitialBytes != 0) {
3306 zone->updateCollectionRate(totalGCTime, totalInitialBytes);
3308 zone->clearGCSliceThresholds();
3309 zone->updateGCStartThresholds(*this);
3313 void GCRuntime::updateAllGCStartThresholds() {
3314 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
3315 zone->updateGCStartThresholds(*this);
3319 void GCRuntime::updateAllocationRates() {
3320 // Calculate mutator time since the last update. This ignores the fact that
3321 // the zone could have been created since the last update.
3323 TimeStamp currentTime = TimeStamp::Now();
3324 TimeDuration totalTime = currentTime - lastAllocRateUpdateTime;
3325 if (collectorTimeSinceAllocRateUpdate >= totalTime) {
3326 // It shouldn't happen but occasionally we see collector time being larger
3327 // than total time. Skip the update in that case.
3328 return;
3331 TimeDuration mutatorTime = totalTime - collectorTimeSinceAllocRateUpdate;
3333 for (AllZonesIter zone(this); !zone.done(); zone.next()) {
3334 zone->updateAllocationRate(mutatorTime);
3335 zone->updateGCStartThresholds(*this);
3338 lastAllocRateUpdateTime = currentTime;
3339 collectorTimeSinceAllocRateUpdate = TimeDuration::Zero();
3342 static const char* GCHeapStateToLabel(JS::HeapState heapState) {
3343 switch (heapState) {
3344 case JS::HeapState::MinorCollecting:
3345 return "js::Nursery::collect";
3346 case JS::HeapState::MajorCollecting:
3347 return "js::GCRuntime::collect";
3348 default:
3349 MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
3351 MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");
3352 return nullptr;
3355 static JS::ProfilingCategoryPair GCHeapStateToProfilingCategory(
3356 JS::HeapState heapState) {
3357 return heapState == JS::HeapState::MinorCollecting
3358 ? JS::ProfilingCategoryPair::GCCC_MinorGC
3359 : JS::ProfilingCategoryPair::GCCC_MajorGC;
3362 /* Start a new heap session. */
3363 AutoHeapSession::AutoHeapSession(GCRuntime* gc, JS::HeapState heapState)
3364 : gc(gc), prevState(gc->heapState_) {
3365 MOZ_ASSERT(CurrentThreadCanAccessRuntime(gc->rt));
3366 MOZ_ASSERT(prevState == JS::HeapState::Idle ||
3367 (prevState == JS::HeapState::MajorCollecting &&
3368 heapState == JS::HeapState::MinorCollecting));
3369 MOZ_ASSERT(heapState != JS::HeapState::Idle);
3371 gc->heapState_ = heapState;
3373 if (heapState == JS::HeapState::MinorCollecting ||
3374 heapState == JS::HeapState::MajorCollecting) {
3375 profilingStackFrame.emplace(gc->rt->mainContextFromOwnThread(),
3376 GCHeapStateToLabel(heapState),
3377 GCHeapStateToProfilingCategory(heapState));
3381 AutoHeapSession::~AutoHeapSession() {
3382 MOZ_ASSERT(JS::RuntimeHeapIsBusy());
3383 gc->heapState_ = prevState;
3386 static const char* MajorGCStateToLabel(State state) {
3387 switch (state) {
3388 case State::Mark:
3389 return "js::GCRuntime::markUntilBudgetExhausted";
3390 case State::Sweep:
3391 return "js::GCRuntime::performSweepActions";
3392 case State::Compact:
3393 return "js::GCRuntime::compactPhase";
3394 default:
3395 MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
3398 MOZ_ASSERT_UNREACHABLE("Should have exhausted every State variant!");
3399 return nullptr;
3402 static JS::ProfilingCategoryPair MajorGCStateToProfilingCategory(State state) {
3403 switch (state) {
3404 case State::Mark:
3405 return JS::ProfilingCategoryPair::GCCC_MajorGC_Mark;
3406 case State::Sweep:
3407 return JS::ProfilingCategoryPair::GCCC_MajorGC_Sweep;
3408 case State::Compact:
3409 return JS::ProfilingCategoryPair::GCCC_MajorGC_Compact;
3410 default:
3411 MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
3415 AutoMajorGCProfilerEntry::AutoMajorGCProfilerEntry(GCRuntime* gc)
3416 : AutoGeckoProfilerEntry(gc->rt->mainContextFromAnyThread(),
3417 MajorGCStateToLabel(gc->state()),
3418 MajorGCStateToProfilingCategory(gc->state())) {
3419 MOZ_ASSERT(gc->heapState() == JS::HeapState::MajorCollecting);
3422 GCRuntime::IncrementalResult GCRuntime::resetIncrementalGC(
3423 GCAbortReason reason) {
3424 MOZ_ASSERT(reason != GCAbortReason::None);
3426 // Drop as much work as possible from an ongoing incremental GC so
3427 // we can start a new GC after it has finished.
3428 if (incrementalState == State::NotActive) {
3429 return IncrementalResult::Ok;
3432 AutoGCSession session(this, JS::HeapState::MajorCollecting);
3434 switch (incrementalState) {
3435 case State::NotActive:
3436 case State::MarkRoots:
3437 case State::Finish:
3438 MOZ_CRASH("Unexpected GC state in resetIncrementalGC");
3439 break;
3441 case State::Prepare:
3442 unmarkTask.cancelAndWait();
3444 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3445 zone->changeGCState(Zone::Prepare, Zone::NoGC);
3446 zone->clearGCSliceThresholds();
3447 zone->arenas.clearFreeLists();
3448 zone->arenas.mergeArenasFromCollectingLists();
3451 incrementalState = State::NotActive;
3452 checkGCStateNotInUse();
3453 break;
3455 case State::Mark: {
3456 // Cancel any ongoing marking.
3457 for (auto& marker : markers) {
3458 marker->reset();
3460 resetDelayedMarking();
3462 for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
3463 resetGrayList(c);
3466 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3467 zone->changeGCState(zone->initialMarkingState(), Zone::NoGC);
3468 zone->clearGCSliceThresholds();
3469 zone->arenas.unmarkPreMarkedFreeCells();
3470 zone->arenas.mergeArenasFromCollectingLists();
3474 AutoLockHelperThreadState lock;
3475 lifoBlocksToFree.ref().freeAll();
3478 lastMarkSlice = false;
3479 incrementalState = State::Finish;
3481 #ifdef DEBUG
3482 for (auto& marker : markers) {
3483 MOZ_ASSERT(!marker->shouldCheckCompartments());
3485 #endif
3487 break;
3490 case State::Sweep: {
3491 // Finish sweeping the current sweep group, then abort.
3492 for (CompartmentsIter c(rt); !c.done(); c.next()) {
3493 c->gcState.scheduledForDestruction = false;
3496 abortSweepAfterCurrentGroup = true;
3497 isCompacting = false;
3499 break;
3502 case State::Finalize: {
3503 isCompacting = false;
3504 break;
3507 case State::Compact: {
3508 // Skip any remaining zones that would have been compacted.
3509 MOZ_ASSERT(isCompacting);
3510 startedCompacting = true;
3511 zonesToMaybeCompact.ref().clear();
3512 break;
3515 case State::Decommit: {
3516 break;
3520 stats().reset(reason);
3522 return IncrementalResult::ResetIncremental;
3525 AutoDisableBarriers::AutoDisableBarriers(GCRuntime* gc) : gc(gc) {
3527 * Clear needsIncrementalBarrier early so we don't do any write barriers
3528 * during sweeping.
3530 for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
3531 if (zone->isGCMarking()) {
3532 MOZ_ASSERT(zone->needsIncrementalBarrier());
3533 zone->setNeedsIncrementalBarrier(false);
3535 MOZ_ASSERT(!zone->needsIncrementalBarrier());
3539 AutoDisableBarriers::~AutoDisableBarriers() {
3540 for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
3541 MOZ_ASSERT(!zone->needsIncrementalBarrier());
3542 if (zone->isGCMarking()) {
3543 zone->setNeedsIncrementalBarrier(true);
3548 static bool NeedToCollectNursery(GCRuntime* gc) {
3549 return !gc->nursery().isEmpty() || !gc->storeBuffer().isEmpty();
3552 #ifdef DEBUG
3553 static const char* DescribeBudget(const SliceBudget& budget) {
3554 constexpr size_t length = 32;
3555 static char buffer[length];
3556 budget.describe(buffer, length);
3557 return buffer;
3559 #endif
3561 static bool ShouldPauseMutatorWhileWaiting(const SliceBudget& budget,
3562 JS::GCReason reason,
3563 bool budgetWasIncreased) {
3564 // When we're nearing the incremental limit at which we will finish the
3565 // collection synchronously, pause the main thread if there is only background
3566 // GC work happening. This allows the GC to catch up and avoid hitting the
3567 // limit.
3568 return budget.isTimeBudget() &&
3569 (reason == JS::GCReason::ALLOC_TRIGGER ||
3570 reason == JS::GCReason::TOO_MUCH_MALLOC) &&
3571 budgetWasIncreased;
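// Advance the collection by one slice: run the state machine below from the
// current incrementalState until the budget is exhausted or the collection
// finishes.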
3574 void GCRuntime::incrementalSlice(SliceBudget& budget, JS::GCReason reason,
3575 bool budgetWasIncreased) {
3576 MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);
3578 AutoSetThreadIsPerformingGC performingGC(rt->gcContext());
3580 AutoGCSession session(this, JS::HeapState::MajorCollecting);
3582 bool destroyingRuntime = (reason == JS::GCReason::DESTROY_RUNTIME);
3584 initialState = incrementalState;
3585 isIncremental = !budget.isUnlimited();
3586 useBackgroundThreads = ShouldUseBackgroundThreads(isIncremental, reason);
3587 haveDiscardedJITCodeThisSlice = false;
3589 #ifdef JS_GC_ZEAL
3590 // Do the incremental collection type specified by zeal mode if the collection
3591 // was triggered by runDebugGC() and incremental GC has not been cancelled by
3592 // resetIncrementalGC().
3593 useZeal = isIncremental && reason == JS::GCReason::DEBUG_GC;
3594 #endif
3596 #ifdef DEBUG
3597 stats().log(
3598 "Incremental: %d, lastMarkSlice: %d, useZeal: %d, budget: %s, "
3599 "budgetWasIncreased: %d",
3600 bool(isIncremental), bool(lastMarkSlice), bool(useZeal),
3601 DescribeBudget(budget), budgetWasIncreased);
3602 #endif
3604 if (useZeal && hasIncrementalTwoSliceZealMode()) {
3605 // Yielding between slices occurs at predetermined points in these modes; the
3606 // budget is not used. |isIncremental| is still true.
3607 stats().log("Using unlimited budget for two-slice zeal mode");
3608 budget = SliceBudget::unlimited();
3611 bool shouldPauseMutator =
3612 ShouldPauseMutatorWhileWaiting(budget, reason, budgetWasIncreased);
3614 switch (incrementalState) {
3615 case State::NotActive:
3616 startCollection(reason);
3618 incrementalState = State::Prepare;
3619 if (!beginPreparePhase(reason, session)) {
3620 incrementalState = State::NotActive;
3621 break;
3624 if (useZeal && hasZealMode(ZealMode::YieldBeforeRootMarking)) {
3625 break;
3628 [[fallthrough]];
3630 case State::Prepare:
3631 if (waitForBackgroundTask(unmarkTask, budget, shouldPauseMutator,
3632 DontTriggerSliceWhenFinished) == NotFinished) {
3633 break;
3636 incrementalState = State::MarkRoots;
3637 [[fallthrough]];
3639 case State::MarkRoots:
3640 if (NeedToCollectNursery(this)) {
3641 collectNurseryFromMajorGC(reason);
3644 endPreparePhase(reason);
3645 beginMarkPhase(session);
3646 incrementalState = State::Mark;
3648 if (useZeal && hasZealMode(ZealMode::YieldBeforeMarking) &&
3649 isIncremental) {
3650 break;
3653 [[fallthrough]];
3655 case State::Mark:
3656 if (mightSweepInThisSlice(budget.isUnlimited())) {
3657 // Trace wrapper rooters before marking if we might start sweeping in
3658 // this slice.
3659 rt->mainContextFromOwnThread()->traceWrapperGCRooters(
3660 marker().tracer());
3664 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
3665 if (markUntilBudgetExhausted(budget, useParallelMarking) ==
3666 NotFinished) {
3667 break;
3671 assertNoMarkingWork();
3674 * There are a number of reasons why we break out of collection here,
3675        * either ending the slice or running a new iteration of the loop in
3676        * GCRuntime::collect().
3680 * In incremental GCs where we have already performed more than one
3681 * slice we yield after marking with the aim of starting the sweep in
3682 * the next slice, since the first slice of sweeping can be expensive.
3684 * This is modified by the various zeal modes. We don't yield in
3685 * YieldBeforeMarking mode and we always yield in YieldBeforeSweeping
3686 * mode.
3688 * We will need to mark anything new on the stack when we resume, so
3689 * we stay in Mark state.
3691 if (isIncremental && !lastMarkSlice) {
3692 if ((initialState == State::Mark &&
3693 !(useZeal && hasZealMode(ZealMode::YieldBeforeMarking))) ||
3694 (useZeal && hasZealMode(ZealMode::YieldBeforeSweeping))) {
3695 lastMarkSlice = true;
3696 stats().log("Yielding before starting sweeping");
3697 break;
3701 incrementalState = State::Sweep;
3702 lastMarkSlice = false;
3704 beginSweepPhase(reason, session);
3706 [[fallthrough]];
3708 case State::Sweep:
3709 if (storeBuffer().mayHavePointersToDeadCells()) {
3710 collectNurseryFromMajorGC(reason);
3713 if (initialState == State::Sweep) {
3714 rt->mainContextFromOwnThread()->traceWrapperGCRooters(
3715 marker().tracer());
3718 if (performSweepActions(budget) == NotFinished) {
3719 break;
3722 endSweepPhase(destroyingRuntime);
3724 incrementalState = State::Finalize;
3726 [[fallthrough]];
3728 case State::Finalize:
3729 if (waitForBackgroundTask(sweepTask, budget, shouldPauseMutator,
3730 TriggerSliceWhenFinished) == NotFinished) {
3731 break;
3734 assertBackgroundSweepingFinished();
3737 // Sweep the zones list now that background finalization is finished to
3738 // remove and free dead zones, compartments and realms.
3739 gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP);
3740 gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::DESTROY);
3741 sweepZones(rt->gcContext(), destroyingRuntime);
3744 MOZ_ASSERT(!startedCompacting);
3745 incrementalState = State::Compact;
3747 // Always yield before compacting since it is not incremental.
3748 if (isCompacting && !budget.isUnlimited()) {
3749 break;
3752 [[fallthrough]];
3754 case State::Compact:
3755 if (isCompacting) {
3756 if (NeedToCollectNursery(this)) {
3757 collectNurseryFromMajorGC(reason);
3760 storeBuffer().checkEmpty();
3761 if (!startedCompacting) {
3762 beginCompactPhase();
3765 if (compactPhase(reason, budget, session) == NotFinished) {
3766 break;
3769 endCompactPhase();
3772 startDecommit();
3773 incrementalState = State::Decommit;
3775 [[fallthrough]];
3777 case State::Decommit:
3778 if (waitForBackgroundTask(decommitTask, budget, shouldPauseMutator,
3779 TriggerSliceWhenFinished) == NotFinished) {
3780 break;
3783 incrementalState = State::Finish;
3785 [[fallthrough]];
3787 case State::Finish:
3788 finishCollection(reason);
3789 incrementalState = State::NotActive;
3790 break;
3793 #ifdef DEBUG
3794 MOZ_ASSERT(safeToYield);
3795 for (auto& marker : markers) {
3796 MOZ_ASSERT(marker->markColor() == MarkColor::Black);
3798 MOZ_ASSERT(!rt->gcContext()->hasJitCodeToPoison());
3799 #endif
3802 void GCRuntime::collectNurseryFromMajorGC(JS::GCReason reason) {
3803 collectNursery(gcOptions(), reason,
3804 gcstats::PhaseKind::EVICT_NURSERY_FOR_MAJOR_GC);
3807 bool GCRuntime::hasForegroundWork() const {
3808 switch (incrementalState) {
3809 case State::NotActive:
3810 // Incremental GC is not running and no work is pending.
3811 return false;
3812 case State::Prepare:
3813 // We yield in the Prepare state after starting unmarking.
3814 return !unmarkTask.wasStarted();
3815 case State::Finalize:
3816 // We yield in the Finalize state to wait for background sweeping.
3817 return !isBackgroundSweeping();
3818 case State::Decommit:
3819 // We yield in the Decommit state to wait for background decommit.
3820 return !decommitTask.wasStarted();
3821 default:
3822 // In all other states there is still work to do.
3823 return true;
3827 IncrementalProgress GCRuntime::waitForBackgroundTask(
3828 GCParallelTask& task, const SliceBudget& budget, bool shouldPauseMutator,
3829 ShouldTriggerSliceWhenFinished triggerSlice) {
3830 // Wait here in non-incremental collections, or if we want to pause the
3831 // mutator to let the GC catch up.
3832 if (budget.isUnlimited() || shouldPauseMutator) {
3833 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
3834 Maybe<TimeStamp> deadline;
3835 if (budget.isTimeBudget()) {
3836 deadline.emplace(budget.deadline());
3838 task.join(deadline);
3841 // In incremental collections, yield if the task has not finished and
3842 // optionally request a slice to notify us when this happens.
3843 if (!budget.isUnlimited()) {
3844 AutoLockHelperThreadState lock;
3845 if (task.wasStarted(lock)) {
3846 if (triggerSlice) {
3847 requestSliceAfterBackgroundTask = true;
3849 return NotFinished;
3852 task.joinWithLockHeld(lock);
3855 MOZ_ASSERT(task.isIdle());
3857 if (triggerSlice) {
3858 cancelRequestedGCAfterBackgroundTask();
3861 return Finished;
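// To summarize waitForBackgroundTask: non-incremental collections, and slices
// that choose to pause the mutator, join the task above (bounded by the slice
// deadline for time budgets); other incremental slices never block on a
// running task but may request that another slice be scheduled once it
// finishes.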
3864 GCAbortReason gc::IsIncrementalGCUnsafe(JSRuntime* rt) {
3865 MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
3867 if (!rt->gc.isIncrementalGCAllowed()) {
3868 return GCAbortReason::IncrementalDisabled;
3871 return GCAbortReason::None;
3874 inline void GCRuntime::checkZoneIsScheduled(Zone* zone, JS::GCReason reason,
3875 const char* trigger) {
3876 #ifdef DEBUG
3877 if (zone->isGCScheduled()) {
3878 return;
3881 fprintf(stderr,
3882 "checkZoneIsScheduled: Zone %p not scheduled as expected in %s GC "
3883 "for %s trigger\n",
3884 zone, JS::ExplainGCReason(reason), trigger);
3885 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
3886 fprintf(stderr, " Zone %p:%s%s\n", zone.get(),
3887 zone->isAtomsZone() ? " atoms" : "",
3888 zone->isGCScheduled() ? " scheduled" : "");
3890 fflush(stderr);
3891 MOZ_CRASH("Zone not scheduled");
3892 #endif
3895 GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC(
3896 bool nonincrementalByAPI, JS::GCReason reason, SliceBudget& budget) {
3897 if (nonincrementalByAPI) {
3898 stats().nonincremental(GCAbortReason::NonIncrementalRequested);
3899 budget = SliceBudget::unlimited();
3901     // Reset any in-progress incremental GC if this was triggered via the
3902 // API. This isn't required for correctness, but sometimes during tests
3903 // the caller expects this GC to collect certain objects, and we need
3904 // to make sure to collect everything possible.
3905 if (reason != JS::GCReason::ALLOC_TRIGGER) {
3906 return resetIncrementalGC(GCAbortReason::NonIncrementalRequested);
3909 return IncrementalResult::Ok;
3912 if (reason == JS::GCReason::ABORT_GC) {
3913 budget = SliceBudget::unlimited();
3914 stats().nonincremental(GCAbortReason::AbortRequested);
3915 return resetIncrementalGC(GCAbortReason::AbortRequested);
3918 if (!budget.isUnlimited()) {
3919 GCAbortReason unsafeReason = IsIncrementalGCUnsafe(rt);
3920 if (unsafeReason == GCAbortReason::None) {
3921 if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
3922 unsafeReason = GCAbortReason::CompartmentRevived;
3923 } else if (!incrementalGCEnabled) {
3924 unsafeReason = GCAbortReason::ModeChange;
3928 if (unsafeReason != GCAbortReason::None) {
3929 budget = SliceBudget::unlimited();
3930 stats().nonincremental(unsafeReason);
3931 return resetIncrementalGC(unsafeReason);
3935 GCAbortReason resetReason = GCAbortReason::None;
3936 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
3937 if (zone->gcHeapSize.bytes() >=
3938 zone->gcHeapThreshold.incrementalLimitBytes()) {
3939 checkZoneIsScheduled(zone, reason, "GC bytes");
3940 budget = SliceBudget::unlimited();
3941 stats().nonincremental(GCAbortReason::GCBytesTrigger);
3942 if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
3943 resetReason = GCAbortReason::GCBytesTrigger;
3947 if (zone->mallocHeapSize.bytes() >=
3948 zone->mallocHeapThreshold.incrementalLimitBytes()) {
3949 checkZoneIsScheduled(zone, reason, "malloc bytes");
3950 budget = SliceBudget::unlimited();
3951 stats().nonincremental(GCAbortReason::MallocBytesTrigger);
3952 if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
3953 resetReason = GCAbortReason::MallocBytesTrigger;
3957 if (zone->jitHeapSize.bytes() >=
3958 zone->jitHeapThreshold.incrementalLimitBytes()) {
3959 checkZoneIsScheduled(zone, reason, "JIT code bytes");
3960 budget = SliceBudget::unlimited();
3961 stats().nonincremental(GCAbortReason::JitCodeBytesTrigger);
3962 if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
3963 resetReason = GCAbortReason::JitCodeBytesTrigger;
3967 if (isIncrementalGCInProgress() &&
3968 zone->isGCScheduled() != zone->wasGCStarted()) {
3969 budget = SliceBudget::unlimited();
3970 resetReason = GCAbortReason::ZoneChange;
3974 if (resetReason != GCAbortReason::None) {
3975 return resetIncrementalGC(resetReason);
3978 return IncrementalResult::Ok;
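// In summary, budgetIncrementalGC either leaves the budget alone (the slice
// proceeds incrementally), widens it to unlimited and records why the
// collection went non-incremental, or additionally resets an in-progress
// incremental GC when carrying on would be unsafe, for example when a zone
// reaches its incremental limit after it has already been swept or when the
// set of scheduled zones changes mid-collection.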
3981 bool GCRuntime::maybeIncreaseSliceBudget(SliceBudget& budget) {
3982 if (js::SupportDifferentialTesting()) {
3983 return false;
3986 if (!budget.isTimeBudget() || !isIncrementalGCInProgress()) {
3987 return false;
3990 bool wasIncreasedForLongCollections =
3991 maybeIncreaseSliceBudgetForLongCollections(budget);
3992   bool wasIncreasedForUrgentCollections =
3993 maybeIncreaseSliceBudgetForUrgentCollections(budget);
3995   return wasIncreasedForLongCollections || wasIncreasedForUrgentCollections;
3998 // Return true if the budget is actually extended after rounding.
3999 static bool ExtendBudget(SliceBudget& budget, double newDuration) {
4000 long millis = lround(newDuration);
4001 if (millis <= budget.timeBudget()) {
4002 return false;
4005 bool idleTriggered = budget.idle;
4006 budget = SliceBudget(TimeBudget(millis), nullptr); // Uninterruptible.
4007 budget.idle = idleTriggered;
4008 budget.extended = true;
4009 return true;
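// Informal example for ExtendBudget: extending a 5 ms time budget to 12.4 ms
// produces an uninterruptible 12 ms budget (lround rounds to the nearest
// millisecond) and returns true, while a request of 4.9 ms rounds to 5 ms,
// does not exceed the existing budget, and returns false.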
4012 bool GCRuntime::maybeIncreaseSliceBudgetForLongCollections(
4013 SliceBudget& budget) {
4014 // For long-running collections, enforce a minimum time budget that increases
4015 // linearly with time up to a maximum.
4017 // All times are in milliseconds.
4018 struct BudgetAtTime {
4019 double time;
4020 double budget;
4022 const BudgetAtTime MinBudgetStart{1500, 0.0};
4023 const BudgetAtTime MinBudgetEnd{2500, 100.0};
4025 double totalTime = (TimeStamp::Now() - lastGCStartTime()).ToMilliseconds();
4027 double minBudget =
4028 LinearInterpolate(totalTime, MinBudgetStart.time, MinBudgetStart.budget,
4029 MinBudgetEnd.time, MinBudgetEnd.budget);
4031 return ExtendBudget(budget, minBudget);
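// Informal example, assuming LinearInterpolate performs ordinary linear
// interpolation clamped to its endpoints: 2000 ms into a collection the
// minimum budget is 50 ms, halfway between the 1500 ms / 0 ms and
// 2500 ms / 100 ms points; below 1500 ms it is 0 ms and has no effect, and
// beyond 2500 ms it stays at 100 ms.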
4034 bool GCRuntime::maybeIncreaseSliceBudgetForUrgentCollections(
4035 SliceBudget& budget) {
4036 // Enforce a minimum time budget based on how close we are to the incremental
4037 // limit.
4039 size_t minBytesRemaining = SIZE_MAX;
4040 for (AllZonesIter zone(this); !zone.done(); zone.next()) {
4041 if (!zone->wasGCStarted()) {
4042 continue;
4044 size_t gcBytesRemaining =
4045 zone->gcHeapThreshold.incrementalBytesRemaining(zone->gcHeapSize);
4046 minBytesRemaining = std::min(minBytesRemaining, gcBytesRemaining);
4047 size_t mallocBytesRemaining =
4048 zone->mallocHeapThreshold.incrementalBytesRemaining(
4049 zone->mallocHeapSize);
4050 minBytesRemaining = std::min(minBytesRemaining, mallocBytesRemaining);
4053 if (minBytesRemaining < tunables.urgentThresholdBytes() &&
4054 minBytesRemaining != 0) {
4055 // Increase budget based on the reciprocal of the fraction remaining.
4056 double fractionRemaining =
4057 double(minBytesRemaining) / double(tunables.urgentThresholdBytes());
4058 double minBudget = double(defaultSliceBudgetMS()) / fractionRemaining;
4059 return ExtendBudget(budget, minBudget);
4062 return false;
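// Informal example: if the zone closest to its limit is halfway through the
// urgent threshold (fractionRemaining == 0.5), the minimum budget is twice
// defaultSliceBudgetMS(); the closer minBytesRemaining gets to zero, the
// larger the requested minimum becomes.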
4065 static void ScheduleZones(GCRuntime* gc, JS::GCReason reason) {
4066 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4067 // Re-check heap threshold for alloc-triggered zones that were not
4068     // previously collected. Now that we have allocation rate data, the heap limit
4069 // may have been increased beyond the current size.
4070 if (gc->tunables.balancedHeapLimitsEnabled() && zone->isGCScheduled() &&
4071 zone->smoothedCollectionRate.ref().isNothing() &&
4072 reason == JS::GCReason::ALLOC_TRIGGER &&
4073 zone->gcHeapSize.bytes() < zone->gcHeapThreshold.startBytes()) {
4074 zone->unscheduleGC(); // May still be re-scheduled below.
4077 if (gc->isShutdownGC()) {
4078 zone->scheduleGC();
4081 if (!gc->isPerZoneGCEnabled()) {
4082 zone->scheduleGC();
4085 // To avoid resets, continue to collect any zones that were being
4086 // collected in a previous slice.
4087 if (gc->isIncrementalGCInProgress() && zone->wasGCStarted()) {
4088 zone->scheduleGC();
4091 // This is a heuristic to reduce the total number of collections.
4092 bool inHighFrequencyMode = gc->schedulingState.inHighFrequencyGCMode();
4093 if (zone->gcHeapSize.bytes() >=
4094 zone->gcHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
4095 zone->mallocHeapSize.bytes() >=
4096 zone->mallocHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
4097 zone->jitHeapSize.bytes() >= zone->jitHeapThreshold.startBytes()) {
4098 zone->scheduleGC();
4103 static void UnscheduleZones(GCRuntime* gc) {
4104 for (ZonesIter zone(gc->rt, WithAtoms); !zone.done(); zone.next()) {
4105 zone->unscheduleGC();
4109 class js::gc::AutoCallGCCallbacks {
4110 GCRuntime& gc_;
4111 JS::GCReason reason_;
4113 public:
4114 explicit AutoCallGCCallbacks(GCRuntime& gc, JS::GCReason reason)
4115 : gc_(gc), reason_(reason) {
4116 gc_.maybeCallGCCallback(JSGC_BEGIN, reason);
4118 ~AutoCallGCCallbacks() { gc_.maybeCallGCCallback(JSGC_END, reason_); }
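// The GC callback may itself trigger a GC or change which zones are
// scheduled, so maybeCallGCCallback below saves the GC options, the
// fullGCRequested flag and each zone's scheduled state around the call and
// restores them afterwards; gcCallbackDepth guards against reentrant
// callbacks clobbering the saved zone state.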
4121 void GCRuntime::maybeCallGCCallback(JSGCStatus status, JS::GCReason reason) {
4122 if (!gcCallback.ref().op) {
4123 return;
4126 if (isIncrementalGCInProgress()) {
4127 return;
4130 if (gcCallbackDepth == 0) {
4131 // Save scheduled zone information in case the callback clears it.
4132 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4133 zone->gcScheduledSaved_ = zone->gcScheduled_;
4137 // Save and clear GC options and state in case the callback reenters GC.
4138 JS::GCOptions options = gcOptions();
4139 maybeGcOptions = Nothing();
4140 bool savedFullGCRequested = fullGCRequested;
4141 fullGCRequested = false;
4143 gcCallbackDepth++;
4145 callGCCallback(status, reason);
4147 MOZ_ASSERT(gcCallbackDepth != 0);
4148 gcCallbackDepth--;
4150 // Restore the original GC options.
4151 maybeGcOptions = Some(options);
4153 // At the end of a GC, clear out the fullGCRequested state. At the start,
4154 // restore the previous setting.
4155 fullGCRequested = (status == JSGC_END) ? false : savedFullGCRequested;
4157 if (gcCallbackDepth == 0) {
4158 // Ensure any zone that was originally scheduled stays scheduled.
4159 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4160 zone->gcScheduled_ = zone->gcScheduled_ || zone->gcScheduledSaved_;
4166 * We disable inlining to ensure that the bottom of the stack with possible GC
4167 * roots recorded in MarkRuntime excludes any pointers we use during the marking
4168 * implementation.
4170 MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle(
4171 bool nonincrementalByAPI, const SliceBudget& budgetArg,
4172 JS::GCReason reason) {
4173 // Assert if this is a GC unsafe region.
4174 rt->mainContextFromOwnThread()->verifyIsSafeToGC();
4176 // It's ok if threads other than the main thread have suppressGC set, as
4177 // they are operating on zones which will not be collected from here.
4178 MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
4180 // This reason is used internally. See below.
4181 MOZ_ASSERT(reason != JS::GCReason::RESET);
4183 // Background finalization and decommit are finished by definition before we
4184 // can start a new major GC. Background allocation may still be running, but
4185 // that's OK because chunk pools are protected by the GC lock.
4186 if (!isIncrementalGCInProgress()) {
4187 assertBackgroundSweepingFinished();
4188 MOZ_ASSERT(decommitTask.isIdle());
4191 // Note that GC callbacks are allowed to re-enter GC.
4192 AutoCallGCCallbacks callCallbacks(*this, reason);
4194   // Increase the slice budget for long-running collections before it is recorded by
4195 // AutoGCSlice.
4196 SliceBudget budget(budgetArg);
4197 bool budgetWasIncreased = maybeIncreaseSliceBudget(budget);
4199 ScheduleZones(this, reason);
4201 auto updateCollectorTime = MakeScopeExit([&] {
4202 if (const gcstats::Statistics::SliceData* slice = stats().lastSlice()) {
4203 collectorTimeSinceAllocRateUpdate += slice->duration();
4207 gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), gcOptions(), budget,
4208 reason, budgetWasIncreased);
4210 IncrementalResult result =
4211 budgetIncrementalGC(nonincrementalByAPI, reason, budget);
4212 if (result == IncrementalResult::ResetIncremental) {
4213 if (incrementalState == State::NotActive) {
4214 // The collection was reset and has finished.
4215 return result;
4218 // The collection was reset but we must finish up some remaining work.
4219 reason = JS::GCReason::RESET;
4222 majorGCTriggerReason = JS::GCReason::NO_REASON;
4223 MOZ_ASSERT(!stats().hasTrigger());
4225 incGcNumber();
4226 incGcSliceNumber();
4228 gcprobes::MajorGCStart();
4229 incrementalSlice(budget, reason, budgetWasIncreased);
4230 gcprobes::MajorGCEnd();
4232 MOZ_ASSERT_IF(result == IncrementalResult::ResetIncremental,
4233 !isIncrementalGCInProgress());
4234 return result;
4237 inline bool GCRuntime::mightSweepInThisSlice(bool nonIncremental) {
4238 MOZ_ASSERT(incrementalState < State::Sweep);
4239 return nonIncremental || lastMarkSlice || hasIncrementalTwoSliceZealMode();
4242 #ifdef JS_GC_ZEAL
4243 static bool IsDeterministicGCReason(JS::GCReason reason) {
4244 switch (reason) {
4245 case JS::GCReason::API:
4246 case JS::GCReason::DESTROY_RUNTIME:
4247 case JS::GCReason::LAST_DITCH:
4248 case JS::GCReason::TOO_MUCH_MALLOC:
4249 case JS::GCReason::TOO_MUCH_WASM_MEMORY:
4250 case JS::GCReason::TOO_MUCH_JIT_CODE:
4251 case JS::GCReason::ALLOC_TRIGGER:
4252 case JS::GCReason::DEBUG_GC:
4253 case JS::GCReason::CC_FORCED:
4254 case JS::GCReason::SHUTDOWN_CC:
4255 case JS::GCReason::ABORT_GC:
4256 case JS::GCReason::DISABLE_GENERATIONAL_GC:
4257 case JS::GCReason::FINISH_GC:
4258 case JS::GCReason::PREPARE_FOR_TRACING:
4259 return true;
4261 default:
4262 return false;
4265 #endif
4267 gcstats::ZoneGCStats GCRuntime::scanZonesBeforeGC() {
4268 gcstats::ZoneGCStats zoneStats;
4269 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4270 zoneStats.zoneCount++;
4271 zoneStats.compartmentCount += zone->compartments().length();
4272 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
4273 zoneStats.realmCount += comp->realms().length();
4275 if (zone->isGCScheduled()) {
4276 zoneStats.collectedZoneCount++;
4277 zoneStats.collectedCompartmentCount += zone->compartments().length();
4281 return zoneStats;
4284 // The GC can only clean up scheduledForDestruction realms that were marked live
4285 // by a barrier (e.g. by RemapWrappers from a navigation event). It is also
4286 // common to have realms held live because they are part of a cycle in gecko,
4287 // e.g. involving the HTMLDocument wrapper. In this case, we need to run the
4288 // CycleCollector in order to remove these edges before the realm can be freed.
4289 void GCRuntime::maybeDoCycleCollection() {
4290 const static float ExcessiveGrayRealms = 0.8f;
4291 const static size_t LimitGrayRealms = 200;
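  // That is, ask the embedder for a cycle collection when more than 80% of
  // realms have gray globals, or when more than 200 do in absolute terms (see
  // the check at the end of this function).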
4293 size_t realmsTotal = 0;
4294 size_t realmsGray = 0;
4295 for (RealmsIter realm(rt); !realm.done(); realm.next()) {
4296 ++realmsTotal;
4297 GlobalObject* global = realm->unsafeUnbarrieredMaybeGlobal();
4298 if (global && global->isMarkedGray()) {
4299 ++realmsGray;
4302 float grayFraction = float(realmsGray) / float(realmsTotal);
4303 if (grayFraction > ExcessiveGrayRealms || realmsGray > LimitGrayRealms) {
4304 callDoCycleCollectionCallback(rt->mainContextFromOwnThread());
4308 void GCRuntime::checkCanCallAPI() {
4309 MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt));
4311 /* If we attempt to invoke the GC while we are running in the GC, assert. */
4312 MOZ_RELEASE_ASSERT(!JS::RuntimeHeapIsBusy());
4315 bool GCRuntime::checkIfGCAllowedInCurrentState(JS::GCReason reason) {
4316 if (rt->mainContextFromOwnThread()->suppressGC) {
4317 return false;
4320 // Only allow shutdown GCs when we're destroying the runtime. This keeps
4321 // the GC callback from triggering a nested GC and resetting global state.
4322 if (rt->isBeingDestroyed() && !isShutdownGC()) {
4323 return false;
4326 #ifdef JS_GC_ZEAL
4327 if (deterministicOnly && !IsDeterministicGCReason(reason)) {
4328 return false;
4330 #endif
4332 return true;
4335 bool GCRuntime::shouldRepeatForDeadZone(JS::GCReason reason) {
4336 MOZ_ASSERT_IF(reason == JS::GCReason::COMPARTMENT_REVIVED, !isIncremental);
4337 MOZ_ASSERT(!isIncrementalGCInProgress());
4339 if (!isIncremental) {
4340 return false;
4343 for (CompartmentsIter c(rt); !c.done(); c.next()) {
4344 if (c->gcState.scheduledForDestruction) {
4345 return true;
4349 return false;
4352 struct MOZ_RAII AutoSetZoneSliceThresholds {
4353 explicit AutoSetZoneSliceThresholds(GCRuntime* gc) : gc(gc) {
4354 // On entry, zones that are already collecting should have a slice threshold
4355 // set.
4356 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4357 MOZ_ASSERT(zone->wasGCStarted() ==
4358 zone->gcHeapThreshold.hasSliceThreshold());
4359 MOZ_ASSERT(zone->wasGCStarted() ==
4360 zone->mallocHeapThreshold.hasSliceThreshold());
4364 ~AutoSetZoneSliceThresholds() {
4365 // On exit, update the thresholds for all collecting zones.
4366 bool waitingOnBGTask = gc->isWaitingOnBackgroundTask();
4367 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4368 if (zone->wasGCStarted()) {
4369 zone->setGCSliceThresholds(*gc, waitingOnBGTask);
4370 } else {
4371 MOZ_ASSERT(!zone->gcHeapThreshold.hasSliceThreshold());
4372 MOZ_ASSERT(!zone->mallocHeapThreshold.hasSliceThreshold());
4377 GCRuntime* gc;
4380 void GCRuntime::collect(bool nonincrementalByAPI, const SliceBudget& budget,
4381 JS::GCReason reason) {
4382 TimeStamp startTime = TimeStamp::Now();
4383 auto timer = MakeScopeExit([&] {
4384 if (Realm* realm = rt->mainContextFromOwnThread()->realm()) {
4385 realm->timers.gcTime += TimeStamp::Now() - startTime;
4389 auto clearGCOptions = MakeScopeExit([&] {
4390 if (!isIncrementalGCInProgress()) {
4391 maybeGcOptions = Nothing();
4395 MOZ_ASSERT(reason != JS::GCReason::NO_REASON);
4397 // Checks run for each request, even if we do not actually GC.
4398 checkCanCallAPI();
4400 // Check if we are allowed to GC at this time before proceeding.
4401 if (!checkIfGCAllowedInCurrentState(reason)) {
4402 return;
4405 stats().log("GC slice starting in state %s", StateName(incrementalState));
4407 AutoStopVerifyingBarriers av(rt, isShutdownGC());
4408 AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());
4409 AutoSetZoneSliceThresholds sliceThresholds(this);
4411 schedulingState.updateHighFrequencyModeForReason(reason);
4413 if (!isIncrementalGCInProgress() && tunables.balancedHeapLimitsEnabled()) {
4414 updateAllocationRates();
4417 bool repeat;
4418 do {
4419 IncrementalResult cycleResult =
4420 gcCycle(nonincrementalByAPI, budget, reason);
4422 if (reason == JS::GCReason::ABORT_GC) {
4423 MOZ_ASSERT(!isIncrementalGCInProgress());
4424 stats().log("GC aborted by request");
4425 break;
4429 * Sometimes when we finish a GC we need to immediately start a new one.
4430 * This happens in the following cases:
4431 * - when we reset the current GC
4432 * - when finalizers drop roots during shutdown
4433 * - when zones that we thought were dead at the start of GC are
4434 * not collected (see the large comment in beginMarkPhase)
4436 repeat = false;
4437 if (!isIncrementalGCInProgress()) {
4438 if (cycleResult == ResetIncremental) {
4439 repeat = true;
4440 } else if (rootsRemoved && isShutdownGC()) {
4441 /* Need to re-schedule all zones for GC. */
4442 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
4443 repeat = true;
4444 reason = JS::GCReason::ROOTS_REMOVED;
4445 } else if (shouldRepeatForDeadZone(reason)) {
4446 repeat = true;
4447 reason = JS::GCReason::COMPARTMENT_REVIVED;
4450 } while (repeat);
4452 if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
4453 maybeDoCycleCollection();
4456 #ifdef JS_GC_ZEAL
4457 if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
4458 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::TRACE_HEAP);
4459 CheckHeapAfterGC(rt);
4461 if (hasZealMode(ZealMode::CheckGrayMarking) && !isIncrementalGCInProgress()) {
4462 MOZ_RELEASE_ASSERT(CheckGrayMarkingState(rt));
4464 #endif
4465 stats().log("GC slice ending in state %s", StateName(incrementalState));
4467 UnscheduleZones(this);
4470 SliceBudget GCRuntime::defaultBudget(JS::GCReason reason, int64_t millis) {
4471 // millis == 0 means use internal GC scheduling logic to come up with
4472 // a duration for the slice budget. This may end up still being zero
4473 // based on preferences.
4474 if (millis == 0) {
4475 millis = defaultSliceBudgetMS();
4478 // If the embedding has registered a callback for creating SliceBudgets,
4479 // then use it.
4480 if (createBudgetCallback) {
4481 return createBudgetCallback(reason, millis);
4484 // Otherwise, the preference can request an unlimited duration slice.
4485 if (millis == 0) {
4486 return SliceBudget::unlimited();
4489 return SliceBudget(TimeBudget(millis));
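// Informal walk-through of the above: a request of 0 ms is first replaced by
// defaultSliceBudgetMS(); if the embedding registered a budget callback it
// makes the final decision; otherwise a value that is still 0 means an
// unlimited slice, and anything else becomes a plain time budget.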
4492 void GCRuntime::gc(JS::GCOptions options, JS::GCReason reason) {
4493 if (!isIncrementalGCInProgress()) {
4494 setGCOptions(options);
4497 collect(true, SliceBudget::unlimited(), reason);
4500 void GCRuntime::startGC(JS::GCOptions options, JS::GCReason reason,
4501 const js::SliceBudget& budget) {
4502 MOZ_ASSERT(!isIncrementalGCInProgress());
4503 setGCOptions(options);
4505 if (!JS::IsIncrementalGCEnabled(rt->mainContextFromOwnThread())) {
4506 collect(true, SliceBudget::unlimited(), reason);
4507 return;
4510 collect(false, budget, reason);
4513 void GCRuntime::setGCOptions(JS::GCOptions options) {
4514 MOZ_ASSERT(maybeGcOptions == Nothing());
4515 maybeGcOptions = Some(options);
4518 void GCRuntime::gcSlice(JS::GCReason reason, const js::SliceBudget& budget) {
4519 MOZ_ASSERT(isIncrementalGCInProgress());
4520 collect(false, budget, reason);
4523 void GCRuntime::finishGC(JS::GCReason reason) {
4524 MOZ_ASSERT(isIncrementalGCInProgress());
4526   // We are finishing an ongoing incremental GC non-incrementally here. Unless
4527   // we're collecting because we're out of memory, skip the compacting phase to
4528   // avoid janking the browser.
4529 if (!IsOOMReason(initialReason)) {
4530 if (incrementalState == State::Compact) {
4531 abortGC();
4532 return;
4535 isCompacting = false;
4538 collect(false, SliceBudget::unlimited(), reason);
4541 void GCRuntime::abortGC() {
4542 MOZ_ASSERT(isIncrementalGCInProgress());
4543 checkCanCallAPI();
4544 MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
4546 collect(false, SliceBudget::unlimited(), JS::GCReason::ABORT_GC);
4549 static bool ZonesSelected(GCRuntime* gc) {
4550 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4551 if (zone->isGCScheduled()) {
4552 return true;
4555 return false;
4558 void GCRuntime::startDebugGC(JS::GCOptions options, const SliceBudget& budget) {
4559 MOZ_ASSERT(!isIncrementalGCInProgress());
4560 setGCOptions(options);
4562 if (!ZonesSelected(this)) {
4563 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
4566 collect(false, budget, JS::GCReason::DEBUG_GC);
4569 void GCRuntime::debugGCSlice(const SliceBudget& budget) {
4570 MOZ_ASSERT(isIncrementalGCInProgress());
4572 if (!ZonesSelected(this)) {
4573 JS::PrepareForIncrementalGC(rt->mainContextFromOwnThread());
4576 collect(false, budget, JS::GCReason::DEBUG_GC);
4579 /* Schedule a full GC unless a zone will already be collected. */
4580 void js::PrepareForDebugGC(JSRuntime* rt) {
4581 if (!ZonesSelected(&rt->gc)) {
4582 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
4586 void GCRuntime::onOutOfMallocMemory() {
4587 // Stop allocating new chunks.
4588 allocTask.cancelAndWait();
4590 // Make sure we release anything queued for release.
4591 decommitTask.join();
4592 nursery().joinDecommitTask();
4594 // Wait for background free of nursery huge slots to finish.
4595 sweepTask.join();
4597 AutoLockGC lock(this);
4598 onOutOfMallocMemory(lock);
4601 void GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock) {
4602 #ifdef DEBUG
4603 // Release any relocated arenas we may be holding on to, without releasing
4604 // the GC lock.
4605 releaseHeldRelocatedArenasWithoutUnlocking(lock);
4606 #endif
4608 // Throw away any excess chunks we have lying around.
4609 freeEmptyChunks(lock);
4611 // Immediately decommit as many arenas as possible in the hopes that this
4612 // might let the OS scrape together enough pages to satisfy the failing
4613 // malloc request.
4614 if (DecommitEnabled()) {
4615 decommitFreeArenasWithoutUnlocking(lock);
4619 void GCRuntime::minorGC(JS::GCReason reason, gcstats::PhaseKind phase) {
4620 MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
4622 MOZ_ASSERT_IF(reason == JS::GCReason::EVICT_NURSERY,
4623 !rt->mainContextFromOwnThread()->suppressGC);
4624 if (rt->mainContextFromOwnThread()->suppressGC) {
4625 return;
4628 incGcNumber();
4630 collectNursery(JS::GCOptions::Normal, reason, phase);
4632 #ifdef JS_GC_ZEAL
4633 if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
4634 gcstats::AutoPhase ap(stats(), phase);
4635 CheckHeapAfterGC(rt);
4637 #endif
4639 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4640 maybeTriggerGCAfterAlloc(zone);
4641 maybeTriggerGCAfterMalloc(zone);
4645 void GCRuntime::collectNursery(JS::GCOptions options, JS::GCReason reason,
4646 gcstats::PhaseKind phase) {
4647 AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());
4649 uint32_t numAllocs = 0;
4650 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4651 numAllocs += zone->getAndResetTenuredAllocsSinceMinorGC();
4653 stats().setAllocsSinceMinorGCTenured(numAllocs);
4655 gcstats::AutoPhase ap(stats(), phase);
4657 nursery().collect(options, reason);
4658 MOZ_ASSERT(nursery().isEmpty());
4660 startBackgroundFreeAfterMinorGC();
4663 void GCRuntime::startBackgroundFreeAfterMinorGC() {
4664 MOZ_ASSERT(nursery().isEmpty());
4667 AutoLockHelperThreadState lock;
4669 lifoBlocksToFree.ref().transferFrom(&lifoBlocksToFreeAfterMinorGC.ref());
4671 if (lifoBlocksToFree.ref().isEmpty() &&
4672 buffersToFreeAfterMinorGC.ref().empty()) {
4673 return;
4677 startBackgroundFree();
4680 bool GCRuntime::gcIfRequestedImpl(bool eagerOk) {
4681 // This method returns whether a major GC was performed.
4683 if (nursery().minorGCRequested()) {
4684 minorGC(nursery().minorGCTriggerReason());
4687 JS::GCReason reason = wantMajorGC(eagerOk);
4688 if (reason == JS::GCReason::NO_REASON) {
4689 return false;
4692 SliceBudget budget = defaultBudget(reason, 0);
4693 if (!isIncrementalGCInProgress()) {
4694 startGC(JS::GCOptions::Normal, reason, budget);
4695 } else {
4696 gcSlice(reason, budget);
4698 return true;
4701 void js::gc::FinishGC(JSContext* cx, JS::GCReason reason) {
4702 // Calling this when GC is suppressed won't have any effect.
4703 MOZ_ASSERT(!cx->suppressGC);
4705 // GC callbacks may run arbitrary code, including JS. Check this regardless of
4706 // whether we GC for this invocation.
4707 MOZ_ASSERT(cx->isNurseryAllocAllowed());
4709 if (JS::IsIncrementalGCInProgress(cx)) {
4710 JS::PrepareForIncrementalGC(cx);
4711 JS::FinishIncrementalGC(cx, reason);
4715 void js::gc::WaitForBackgroundTasks(JSContext* cx) {
4716 cx->runtime()->gc.waitForBackgroundTasks();
4719 void GCRuntime::waitForBackgroundTasks() {
4720 MOZ_ASSERT(!isIncrementalGCInProgress());
4721 MOZ_ASSERT(sweepTask.isIdle());
4722 MOZ_ASSERT(decommitTask.isIdle());
4723 MOZ_ASSERT(markTask.isIdle());
4725 allocTask.join();
4726 freeTask.join();
4727 nursery().joinDecommitTask();
4730 Realm* js::NewRealm(JSContext* cx, JSPrincipals* principals,
4731 const JS::RealmOptions& options) {
4732 JSRuntime* rt = cx->runtime();
4733 JS_AbortIfWrongThread(cx);
4735 UniquePtr<Zone> zoneHolder;
4736 UniquePtr<Compartment> compHolder;
4738 Compartment* comp = nullptr;
4739 Zone* zone = nullptr;
4740 JS::CompartmentSpecifier compSpec =
4741 options.creationOptions().compartmentSpecifier();
4742 switch (compSpec) {
4743 case JS::CompartmentSpecifier::NewCompartmentInSystemZone:
4744 // systemZone might be null here, in which case we'll make a zone and
4745 // set this field below.
4746 zone = rt->gc.systemZone;
4747 break;
4748 case JS::CompartmentSpecifier::NewCompartmentInExistingZone:
4749 zone = options.creationOptions().zone();
4750 MOZ_ASSERT(zone);
4751 break;
4752 case JS::CompartmentSpecifier::ExistingCompartment:
4753 comp = options.creationOptions().compartment();
4754 zone = comp->zone();
4755 break;
4756 case JS::CompartmentSpecifier::NewCompartmentAndZone:
4757 break;
4760 if (!zone) {
4761 Zone::Kind kind = Zone::NormalZone;
4762 const JSPrincipals* trusted = rt->trustedPrincipals();
4763 if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone ||
4764 (principals && principals == trusted)) {
4765 kind = Zone::SystemZone;
4768 zoneHolder = MakeUnique<Zone>(cx->runtime(), kind);
4769 if (!zoneHolder || !zoneHolder->init()) {
4770 ReportOutOfMemory(cx);
4771 return nullptr;
4774 zone = zoneHolder.get();
4777 bool invisibleToDebugger = options.creationOptions().invisibleToDebugger();
4778 if (comp) {
4779 // Debugger visibility is per-compartment, not per-realm, so make sure the
4780 // new realm's visibility matches its compartment's.
4781 MOZ_ASSERT(comp->invisibleToDebugger() == invisibleToDebugger);
4782 } else {
4783 compHolder = cx->make_unique<JS::Compartment>(zone, invisibleToDebugger);
4784 if (!compHolder) {
4785 return nullptr;
4788 comp = compHolder.get();
4791 UniquePtr<Realm> realm(cx->new_<Realm>(comp, options));
4792 if (!realm) {
4793 return nullptr;
4795 realm->init(cx, principals);
4797 // Make sure we don't put system and non-system realms in the same
4798 // compartment.
4799 if (!compHolder) {
4800 MOZ_RELEASE_ASSERT(realm->isSystem() == IsSystemCompartment(comp));
4803 AutoLockGC lock(rt);
4805 // Reserve space in the Vectors before we start mutating them.
4806 if (!comp->realms().reserve(comp->realms().length() + 1) ||
4807 (compHolder &&
4808 !zone->compartments().reserve(zone->compartments().length() + 1)) ||
4809 (zoneHolder && !rt->gc.zones().reserve(rt->gc.zones().length() + 1))) {
4810 ReportOutOfMemory(cx);
4811 return nullptr;
4814 // After this everything must be infallible.
4816 comp->realms().infallibleAppend(realm.get());
4818 if (compHolder) {
4819 zone->compartments().infallibleAppend(compHolder.release());
4822 if (zoneHolder) {
4823 rt->gc.zones().infallibleAppend(zoneHolder.release());
4825 // Lazily set the runtime's system zone.
4826 if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone) {
4827 MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
4828 MOZ_ASSERT(zone->isSystemZone());
4829 rt->gc.systemZone = zone;
4833 return realm.release();
4836 void GCRuntime::runDebugGC() {
4837 #ifdef JS_GC_ZEAL
4838 if (rt->mainContextFromOwnThread()->suppressGC) {
4839 return;
4842 if (hasZealMode(ZealMode::GenerationalGC)) {
4843 return minorGC(JS::GCReason::DEBUG_GC);
4846 PrepareForDebugGC(rt);
4848 auto budget = SliceBudget::unlimited();
4849 if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
4851 * Start with a small slice limit and double it every slice. This
4852      * ensures that we get multiple slices and that the collection runs to
4853      * completion.
4855 if (!isIncrementalGCInProgress()) {
4856 zealSliceBudget = zealFrequency / 2;
4857 } else {
4858 zealSliceBudget *= 2;
4860 budget = SliceBudget(WorkBudget(zealSliceBudget));
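    // Informal example: with zealFrequency == 100, successive slices get work
    // budgets of 50, 100, 200, ... until the collection reaches the sweep or
    // compact phase, at which point the budget is reset below.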
4862 js::gc::State initialState = incrementalState;
4863 if (!isIncrementalGCInProgress()) {
4864 setGCOptions(JS::GCOptions::Shrink);
4866 collect(false, budget, JS::GCReason::DEBUG_GC);
4868 /* Reset the slice size when we get to the sweep or compact phases. */
4869 if ((initialState == State::Mark && incrementalState == State::Sweep) ||
4870 (initialState == State::Sweep && incrementalState == State::Compact)) {
4871 zealSliceBudget = zealFrequency / 2;
4873 } else if (hasIncrementalTwoSliceZealMode()) {
4874 // These modes trigger incremental GC that happens in two slices and the
4875 // supplied budget is ignored by incrementalSlice.
4876 budget = SliceBudget(WorkBudget(1));
4878 if (!isIncrementalGCInProgress()) {
4879 setGCOptions(JS::GCOptions::Normal);
4881 collect(false, budget, JS::GCReason::DEBUG_GC);
4882 } else if (hasZealMode(ZealMode::Compact)) {
4883 gc(JS::GCOptions::Shrink, JS::GCReason::DEBUG_GC);
4884 } else {
4885 gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
4888 #endif
4891 void GCRuntime::setFullCompartmentChecks(bool enabled) {
4892 MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
4893 fullCompartmentChecks = enabled;
4896 void GCRuntime::notifyRootsRemoved() {
4897 rootsRemoved = true;
4899 #ifdef JS_GC_ZEAL
4900 /* Schedule a GC to happen "soon". */
4901 if (hasZealMode(ZealMode::RootsChange)) {
4902 nextScheduled = 1;
4904 #endif
4907 #ifdef JS_GC_ZEAL
4908 bool GCRuntime::selectForMarking(JSObject* object) {
4909 MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
4910 return selectedForMarking.ref().get().append(object);
4913 void GCRuntime::clearSelectedForMarking() {
4914 selectedForMarking.ref().get().clearAndFree();
4917 void GCRuntime::setDeterministic(bool enabled) {
4918 MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
4919 deterministicOnly = enabled;
4921 #endif
4923 #ifdef DEBUG
4925 AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc() {
4926 TlsContext.get()->disallowNurseryAlloc();
4929 AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc() {
4930 TlsContext.get()->allowNurseryAlloc();
4933 #endif // DEBUG
4935 #ifdef JSGC_HASH_TABLE_CHECKS
4936 void GCRuntime::checkHashTablesAfterMovingGC() {
4938 * Check that internal hash tables no longer have any pointers to things
4939 * that have been moved.
4941 rt->geckoProfiler().checkStringsMapAfterMovingGC();
4942 if (rt->hasJitRuntime() && rt->jitRuntime()->hasInterpreterEntryMap()) {
4943 rt->jitRuntime()->getInterpreterEntryMap()->checkScriptsAfterMovingGC();
4945 for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
4946 zone->checkUniqueIdTableAfterMovingGC();
4947 zone->shapeZone().checkTablesAfterMovingGC();
4948 zone->checkAllCrossCompartmentWrappersAfterMovingGC();
4949 zone->checkScriptMapsAfterMovingGC();
4951 // Note: CompactPropMaps never have a table.
4952 JS::AutoCheckCannotGC nogc;
4953 for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
4954 map.next()) {
4955 if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
4956 table->checkAfterMovingGC();
4959 for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
4960 map.next()) {
4961 if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
4962 table->checkAfterMovingGC();
4967 for (CompartmentsIter c(this); !c.done(); c.next()) {
4968 for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
4969 r->dtoaCache.checkCacheAfterMovingGC();
4970 if (r->debugEnvs()) {
4971 r->debugEnvs()->checkHashTablesAfterMovingGC();
4976 #endif
4978 #ifdef DEBUG
4979 bool GCRuntime::hasZone(Zone* target) {
4980 for (AllZonesIter zone(this); !zone.done(); zone.next()) {
4981 if (zone == target) {
4982 return true;
4985 return false;
4987 #endif
4989 void AutoAssertEmptyNursery::checkCondition(JSContext* cx) {
4990 if (!noAlloc) {
4991 noAlloc.emplace();
4993 this->cx = cx;
4994 MOZ_ASSERT(cx->nursery().isEmpty());
4997 AutoEmptyNursery::AutoEmptyNursery(JSContext* cx) {
4998 MOZ_ASSERT(!cx->suppressGC);
4999 cx->runtime()->gc.stats().suspendPhases();
5000 cx->runtime()->gc.evictNursery(JS::GCReason::EVICT_NURSERY);
5001 cx->runtime()->gc.stats().resumePhases();
5002 checkCondition(cx);
5005 #ifdef DEBUG
5007 namespace js {
5009 // We don't want jsfriendapi.h to depend on GenericPrinter,
5010 // so these functions are declared directly in the cpp.
5012 extern JS_PUBLIC_API void DumpString(JSString* str, js::GenericPrinter& out);
5014 } // namespace js
5016 void js::gc::Cell::dump(js::GenericPrinter& out) const {
5017 switch (getTraceKind()) {
5018 case JS::TraceKind::Object:
5019 reinterpret_cast<const JSObject*>(this)->dump(out);
5020 break;
5022 case JS::TraceKind::String:
5023 js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), out);
5024 break;
5026 case JS::TraceKind::Shape:
5027 reinterpret_cast<const Shape*>(this)->dump(out);
5028 break;
5030 default:
5031 out.printf("%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()),
5032 (void*)this);
5036 // For use in a debugger.
5037 void js::gc::Cell::dump() const {
5038 js::Fprinter out(stderr);
5039 dump(out);
5041 #endif
5043 JS_PUBLIC_API bool js::gc::detail::CanCheckGrayBits(const TenuredCell* cell) {
5044 // We do not check the gray marking state of cells in the following cases:
5046 // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
5048 // 2) When we are in an incremental GC and examine a cell that is in a zone
5049 // that is not being collected. Gray targets of CCWs that are marked black
5050 // by a barrier will eventually be marked black in a later GC slice.
5052 // 3) When mark bits are being cleared concurrently by a helper thread.
5054 MOZ_ASSERT(cell);
5056 auto* runtime = cell->runtimeFromAnyThread();
5057 MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime));
5059 if (!runtime->gc.areGrayBitsValid()) {
5060 return false;
5063 JS::Zone* zone = cell->zone();
5065 if (runtime->gc.isIncrementalGCInProgress() && !zone->wasGCStarted()) {
5066 return false;
5069 return !zone->isGCPreparing();
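// The !zone->isGCPreparing() check above corresponds to case 3: zones in the
// Prepare state may still have their mark bits cleared concurrently by the
// background unmarking task, so their gray bits cannot be trusted yet.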
5072 JS_PUBLIC_API bool js::gc::detail::CellIsMarkedGrayIfKnown(
5073 const TenuredCell* cell) {
5074 MOZ_ASSERT_IF(cell->isPermanentAndMayBeShared(), cell->isMarkedBlack());
5075 if (!cell->isMarkedGray()) {
5076 return false;
5079 return CanCheckGrayBits(cell);
5082 #ifdef DEBUG
5084 JS_PUBLIC_API void js::gc::detail::AssertCellIsNotGray(const Cell* cell) {
5085 if (!cell->isTenured()) {
5086 return;
5089 // Check that a cell is not marked gray.
5091 // Since this is a debug-only check, take account of the eventual mark state
5092 // of cells that will be marked black by the next GC slice in an incremental
5093 // GC. For performance reasons we don't do this in CellIsMarkedGrayIfKnown.
5095 const auto* tc = &cell->asTenured();
5096 if (!tc->isMarkedGray() || !CanCheckGrayBits(tc)) {
5097 return;
5100 // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
5101 // called during GC and while iterating the heap for memory reporting.
5102 MOZ_ASSERT(!JS::RuntimeHeapIsCycleCollecting());
5104 if (tc->zone()->isGCMarkingBlackAndGray()) {
5105 // We are doing gray marking in the cell's zone. Even if the cell is
5106 // currently marked gray it may eventually be marked black. Delay checking
5107 // non-black cells until we finish gray marking.
5109 if (!tc->isMarkedBlack()) {
5110 JSRuntime* rt = tc->zone()->runtimeFromMainThread();
5111 AutoEnterOOMUnsafeRegion oomUnsafe;
5112 if (!rt->gc.cellsToAssertNotGray.ref().append(cell)) {
5113 oomUnsafe.crash("Can't append to delayed gray checks list");
5116 return;
5119 MOZ_ASSERT(!tc->isMarkedGray());
5122 extern JS_PUBLIC_API bool js::gc::detail::ObjectIsMarkedBlack(
5123 const JSObject* obj) {
5124 return obj->isMarkedBlack();
5127 #endif
5129 js::gc::ClearEdgesTracer::ClearEdgesTracer(JSRuntime* rt)
5130 : GenericTracerImpl(rt, JS::TracerKind::ClearEdges,
5131 JS::WeakMapTraceAction::TraceKeysAndValues) {}
5133 template <typename T>
5134 void js::gc::ClearEdgesTracer::onEdge(T** thingp, const char* name) {
5135 // We don't handle removing pointers to nursery edges from the store buffer
5136 // with this tracer. Check that this doesn't happen.
5137 T* thing = *thingp;
5138 MOZ_ASSERT(!IsInsideNursery(thing));
5140 // Fire the pre-barrier since we're removing an edge from the graph.
5141 InternalBarrierMethods<T*>::preBarrier(thing);
5143 *thingp = nullptr;
5146 void GCRuntime::setPerformanceHint(PerformanceHint hint) {
5147 if (hint == PerformanceHint::InPageLoad) {
5148 inPageLoadCount++;
5149 } else {
5150 MOZ_ASSERT(inPageLoadCount);
5151 inPageLoadCount--;