1 /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
2 * vim: set ts=8 sts=2 et sw=2 tw=80:
3 * This Source Code Form is subject to the terms of the Mozilla Public
4 * License, v. 2.0. If a copy of the MPL was not distributed with this
5 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
7 /*
8 * [SMDOC] Garbage Collector
10 * This code implements an incremental mark-and-sweep garbage collector, with
11 * most sweeping carried out in the background on a parallel thread.
13 * Full vs. zone GC
14 * ----------------
16 * The collector can collect all zones at once, or a subset. These types of
17 * collection are referred to as a full GC and a zone GC respectively.
19 * It is possible for an incremental collection that started out as a full GC to
20 * become a zone GC if new zones are created during the course of the
21 * collection.
23 * Incremental collection
24 * ----------------------
26 * For a collection to be carried out incrementally the following conditions
27 * must be met:
28 * - the collection must be run by calling js::GCSlice() rather than js::GC()
29 * - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
31 * The last condition is an engine-internal mechanism to ensure that incremental
32 * collection is not carried out without the correct barriers being implemented.
33 * For more information see 'Incremental marking' below.
35 * If the collection is not incremental, all foreground activity happens inside
36 * a single call to GC() or GCSlice(). However, the collection is not complete
37 * until the background sweeping activity has finished.
39 * An incremental collection proceeds as a series of slices, interleaved with
40 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
41 * budget. The slice finishes as soon as possible after the requested time has
42 * passed.
44 * Collector states
45 * ----------------
47 * The collector proceeds through the following states, the current state being
48 * held in JSRuntime::gcIncrementalState:
50 * - Prepare - unmarks GC things, discards JIT code and other setup
51 * - MarkRoots - marks the stack and other roots
52 * - Mark - incrementally marks reachable things
53 * - Sweep - sweeps zones in groups and continues marking unswept zones
54 * - Finalize - performs background finalization, concurrent with mutator
55 * - Compact - incrementally compacts by zone
56 * - Decommit - performs background decommit and chunk removal
58 * Roots are marked in the first MarkRoots slice; this is the start of the GC
59 * proper. The following states can take place over one or more slices.
61 * In other words, an incremental collection proceeds like this:
63 * Slice 1: Prepare: Starts background task to unmark GC things
65 * ... JS code runs, background unmarking finishes ...
67 * Slice 2: MarkRoots: Roots are pushed onto the mark stack.
68 * Mark: The mark stack is processed by popping an element,
69 * marking it, and pushing its children.
71 * ... JS code runs ...
73 * Slice 3: Mark: More mark stack processing.
75 * ... JS code runs ...
77 * Slice n-1: Mark: More mark stack processing.
79 * ... JS code runs ...
81 * Slice n: Mark: Mark stack is completely drained.
82 * Sweep: Select first group of zones to sweep and sweep them.
84 * ... JS code runs ...
86 * Slice n+1: Sweep: Mark objects in unswept zones that were newly
87 * identified as alive (see below). Then sweep more zone
88 * sweep groups.
90 * ... JS code runs ...
92 * Slice n+2: Sweep: Mark objects in unswept zones that were newly
93 * identified as alive. Then sweep more zones.
95 * ... JS code runs ...
97 * Slice m: Sweep: Sweeping is finished, and background sweeping
98 * started on the helper thread.
100 * ... JS code runs, remaining sweeping done on background thread ...
102 * When background sweeping finishes the GC is complete.
104 * Incremental marking
105 * -------------------
107 * Incremental collection requires close collaboration with the mutator (i.e.,
108 * JS code) to guarantee correctness.
110 * - During an incremental GC, if a memory location (except a root) is written
111 * to, then the value it previously held must be marked. Write barriers
112 * ensure this.
114 * - Any object that is allocated during incremental GC must start out marked.
116 * - Roots are marked in the first slice and hence don't need write barriers.
117 * Roots are things like the C stack and the VM stack.
119 * The problem that write barriers solve is that between slices the mutator can
120 * change the object graph. We must ensure that it cannot do this in such a way
121 * that makes us fail to mark a reachable object (marking an unreachable object
122 * is tolerable).
124 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
125 * promise to mark at least everything that is reachable at the beginning of
126 * collection. To implement it we mark the old contents of every non-root memory
127 * location written to by the mutator while the collection is in progress, using
128 * write barriers. This is described in gc/Barrier.h.
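* As a minimal illustrative sketch (hypothetical helper names; the real
* barriers are implemented by the GCPtr/HeapPtr wrapper types declared in
* gc/Barrier.h):
*
*   // Before overwriting a traced slot during an incremental collection,
*   // mark the value the slot previously held so that everything reachable
*   // in the start-of-collection snapshot still gets marked.
*   void PreBarrieredSet(JSObject*& slot, JSObject* newValue) {
*     if (IsIncrementalGCInProgress() && slot) {
*       MarkObjectBlack(slot);  // hypothetical marking entry point
*     }
*     slot = newValue;
*   }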
130 * Incremental sweeping
131 * --------------------
133 * Sweeping is difficult to do incrementally because object finalizers must be
134 * run at the start of sweeping, before any mutator code runs. The reason is
135 * that some objects use their finalizers to remove themselves from caches. If
136 * mutator code were allowed to run after the start of sweeping, it could observe
137 * the state of the cache and create a new reference to an object that was just
138 * about to be destroyed.
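* For instance (an illustrative sketch with hypothetical names), a finalizer
* like this one relies on the mutator not running between the start of
* sweeping and the finalizer call:
*
*   void CachedThing::finalize(JS::GCContext* gcx) {
*     globalCache.remove(this);  // a mutator running first could have looked
*   }                            // |this| up here and kept a stale reference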
140 * Sweeping all finalizable objects in one go would introduce long pauses, so
141 * instead sweeping is broken up into groups of zones. Zones which are not yet
142 * being swept are still marked, so the issue above does not apply.
144 * The order of sweeping is restricted by cross compartment pointers - for
145 * example say that object |a| from zone A points to object |b| in zone B and
146 * neither object was marked when we transitioned to the Sweep phase. Imagine we
147 * sweep B first and then return to the mutator. It's possible that the mutator
148 * could cause |a| to become alive through a read barrier (perhaps it was a
149 * shape that was accessed via a shape table). Then we would need to mark |b|,
150 * which |a| points to, but |b| has already been swept.
152 * So if there is such a pointer then marking of zone B must not finish before
153 * marking of zone A. Pointers which form a cycle between zones therefore
154 * restrict those zones to being swept at the same time, and these are found
155 * using Tarjan's algorithm for finding the strongly connected components of a
156 * graph.
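* A small worked example: suppose the only cross-zone edges are A -> B,
* B -> A and B -> C. Zones A and B form a strongly connected component and
* must be placed in the same sweep group, while zone C only needs to be swept
* no earlier than B, so a valid grouping and order is {A, B} followed by {C}.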
158 * GC things without finalizers, and things with finalizers that are able to run
159 * in the background, are swept on the background thread. This accounts for most
160 * of the sweeping work.
162 * Reset
163 * -----
165 * During incremental collection it is possible, although unlikely, for
166 * conditions to change such that incremental collection is no longer safe. In
167 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
168 * the mark state, this just stops marking, but if we have started sweeping
169 * already, we continue non-incrementally until we have swept the current sweep
170 * group. Following a reset, a new collection is started.
172 * Compacting GC
173 * -------------
175 * Compacting GC happens at the end of a major GC as part of the last slice.
176 * There are three parts:
178 * - Arenas are selected for compaction.
179 * - The contents of those arenas are moved to new arenas.
180 * - All references to moved things are updated.
182 * Collecting Atoms
183 * ----------------
185 * Atoms are collected differently from other GC things. They are contained in
186 * a special zone and things in other zones may have pointers to them that are
187 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
188 * with the atoms it might be keeping alive, and atoms are only collected if
189 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
190 * this bitmap is managed.
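* As an illustrative sketch (hypothetical names), the liveness test amounts
* to:
*
*   // An atom may only be finalized if no zone's bitmap claims it.
*   bool AtomIsKeptAlive(JSAtom* atom, const ZoneVector& zones) {
*     for (Zone* zone : zones) {
*       if (zone->markedAtoms().contains(atom)) {
*         return true;   // some zone may still reference this atom
*       }
*     }
*     return false;      // unreferenced everywhere; safe to sweep
*   }
*/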
193 #include "gc/GC-inl.h"
195 #include "mozilla/Range.h"
196 #include "mozilla/ScopeExit.h"
197 #include "mozilla/TextUtils.h"
198 #include "mozilla/TimeStamp.h"
200 #include <algorithm>
201 #include <initializer_list>
202 #include <iterator>
203 #include <stdlib.h>
204 #include <string.h>
205 #include <utility>
207 #include "jsapi.h" // JS_AbortIfWrongThread
208 #include "jstypes.h"
210 #include "debugger/DebugAPI.h"
211 #include "gc/ClearEdgesTracer.h"
212 #include "gc/GCContext.h"
213 #include "gc/GCInternals.h"
214 #include "gc/GCLock.h"
215 #include "gc/GCProbes.h"
216 #include "gc/Memory.h"
217 #include "gc/ParallelMarking.h"
218 #include "gc/ParallelWork.h"
219 #include "gc/WeakMap.h"
220 #include "jit/ExecutableAllocator.h"
221 #include "jit/JitCode.h"
222 #include "jit/JitRuntime.h"
223 #include "jit/ProcessExecutableMemory.h"
224 #include "js/HeapAPI.h" // JS::GCCellPtr
225 #include "js/Printer.h"
226 #include "js/SliceBudget.h"
227 #include "util/DifferentialTesting.h"
228 #include "vm/BigIntType.h"
229 #include "vm/EnvironmentObject.h"
230 #include "vm/GetterSetter.h"
231 #include "vm/HelperThreadState.h"
232 #include "vm/JitActivation.h"
233 #include "vm/JSObject.h"
234 #include "vm/JSScript.h"
235 #include "vm/PropMap.h"
236 #include "vm/Realm.h"
237 #include "vm/Shape.h"
238 #include "vm/StringType.h"
239 #include "vm/SymbolType.h"
240 #include "vm/Time.h"
242 #include "gc/Heap-inl.h"
243 #include "gc/Nursery-inl.h"
244 #include "gc/ObjectKind-inl.h"
245 #include "gc/PrivateIterators-inl.h"
246 #include "vm/GeckoProfiler-inl.h"
247 #include "vm/JSContext-inl.h"
248 #include "vm/Realm-inl.h"
249 #include "vm/Stack-inl.h"
251 using namespace js;
252 using namespace js::gc;
254 using mozilla::MakeScopeExit;
255 using mozilla::Maybe;
256 using mozilla::Nothing;
257 using mozilla::Some;
258 using mozilla::TimeDuration;
259 using mozilla::TimeStamp;
261 using JS::AutoGCRooter;
263 const AllocKind gc::slotsToThingKind[] = {
264 // clang-format off
265 /* 0 */ AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT2, AllocKind::OBJECT4,
266 /* 4 */ AllocKind::OBJECT4, AllocKind::OBJECT8, AllocKind::OBJECT8, AllocKind::OBJECT8,
267 /* 8 */ AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
268 /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
269 /* 16 */ AllocKind::OBJECT16
270 // clang-format on
273 static_assert(std::size(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
274 "We have defined a slot count for each kind.");
276 // A table converting an object size in "slots" (increments of
277 // sizeof(js::Value)) to the total number of bytes in the corresponding
278 // AllocKind. See gc::slotsToThingKind. This primarily allows wasm jit code to
279 // remain compliant with the AllocKind system.
281 // To use this table, subtract sizeof(NativeObject) from your desired allocation
282 // size, divide by sizeof(js::Value) to get the number of "slots", and then
283 // index into this table. See gc::GetGCObjectKindForBytes.
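// For example, an allocation request of
// sizeof(NativeObject) + 5 * sizeof(js::Value) bytes works out to 5 "slots",
// and slotsToAllocKindBytes[5] is sizeof(JSObject_Slots8), matching
// AllocKind::OBJECT8 in gc::slotsToThingKind.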
284 const constexpr uint32_t gc::slotsToAllocKindBytes[] = {
285 // These entries correspond exactly to gc::slotsToThingKind. The numeric
286 // comments therefore indicate the number of slots that the "bytes" would
287 // correspond to.
288 // clang-format off
289 /* 0 */ sizeof(JSObject_Slots0), sizeof(JSObject_Slots2), sizeof(JSObject_Slots2), sizeof(JSObject_Slots4),
290 /* 4 */ sizeof(JSObject_Slots4), sizeof(JSObject_Slots8), sizeof(JSObject_Slots8), sizeof(JSObject_Slots8),
291 /* 8 */ sizeof(JSObject_Slots8), sizeof(JSObject_Slots12), sizeof(JSObject_Slots12), sizeof(JSObject_Slots12),
292 /* 12 */ sizeof(JSObject_Slots12), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16),
293 /* 16 */ sizeof(JSObject_Slots16)
294 // clang-format on
297 static_assert(std::size(slotsToAllocKindBytes) == SLOTS_TO_THING_KIND_LIMIT);
299 MOZ_THREAD_LOCAL(JS::GCContext*) js::TlsGCContext;
301 JS::GCContext::GCContext(JSRuntime* runtime) : runtime_(runtime) {}
303 JS::GCContext::~GCContext() {
304 MOZ_ASSERT(!hasJitCodeToPoison());
305 MOZ_ASSERT(!isCollecting());
306 MOZ_ASSERT(gcUse() == GCUse::None);
307 MOZ_ASSERT(!gcSweepZone());
308 MOZ_ASSERT(!isTouchingGrayThings());
311 void JS::GCContext::poisonJitCode() {
312 if (hasJitCodeToPoison()) {
313 jit::ExecutableAllocator::poisonCode(runtime(), jitPoisonRanges);
314 jitPoisonRanges.clearAndFree();
318 #ifdef DEBUG
319 void GCRuntime::verifyAllChunks() {
320 AutoLockGC lock(this);
321 fullChunks(lock).verifyChunks();
322 availableChunks(lock).verifyChunks();
323 emptyChunks(lock).verifyChunks();
325 #endif
327 void GCRuntime::setMinEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
328 minEmptyChunkCount_ = value;
329 if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
330 maxEmptyChunkCount_ = minEmptyChunkCount_;
332 MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
335 void GCRuntime::setMaxEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
336 maxEmptyChunkCount_ = value;
337 if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
338 minEmptyChunkCount_ = maxEmptyChunkCount_;
340 MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
343 inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
344 return emptyChunks(lock).count() > minEmptyChunkCount(lock);
347 ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
348 MOZ_ASSERT(emptyChunks(lock).verify());
349 MOZ_ASSERT(minEmptyChunkCount(lock) <= maxEmptyChunkCount(lock));
351 ChunkPool expired;
352 while (tooManyEmptyChunks(lock)) {
353 TenuredChunk* chunk = emptyChunks(lock).pop();
354 prepareToFreeChunk(chunk->info);
355 expired.push(chunk);
358 MOZ_ASSERT(expired.verify());
359 MOZ_ASSERT(emptyChunks(lock).verify());
360 MOZ_ASSERT(emptyChunks(lock).count() <= maxEmptyChunkCount(lock));
361 MOZ_ASSERT(emptyChunks(lock).count() <= minEmptyChunkCount(lock));
362 return expired;
365 static void FreeChunkPool(ChunkPool& pool) {
366 for (ChunkPool::Iter iter(pool); !iter.done();) {
367 TenuredChunk* chunk = iter.get();
368 iter.next();
369 pool.remove(chunk);
370 MOZ_ASSERT(chunk->unused());
371 UnmapPages(static_cast<void*>(chunk), ChunkSize);
373 MOZ_ASSERT(pool.count() == 0);
376 void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
377 FreeChunkPool(emptyChunks(lock));
380 inline void GCRuntime::prepareToFreeChunk(TenuredChunkInfo& info) {
381 MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
382 numArenasFreeCommitted -= info.numArenasFreeCommitted;
383 stats().count(gcstats::COUNT_DESTROY_CHUNK);
384 #ifdef DEBUG
386 * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
387 * frees the chunk.
389 info.numArenasFreeCommitted = 0;
390 #endif
393 void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
394 MOZ_ASSERT(arena->allocated());
395 MOZ_ASSERT(!arena->onDelayedMarkingList());
396 MOZ_ASSERT(TlsGCContext.get()->isFinalizing());
398 arena->zone->gcHeapSize.removeGCArena(heapSize);
399 arena->release(lock);
400 arena->chunk()->releaseArena(this, arena, lock);
403 GCRuntime::GCRuntime(JSRuntime* rt)
404 : rt(rt),
405 systemZone(nullptr),
406 mainThreadContext(rt),
407 heapState_(JS::HeapState::Idle),
408 stats_(this),
409 sweepingTracer(rt),
410 fullGCRequested(false),
411 helperThreadRatio(TuningDefaults::HelperThreadRatio),
412 maxHelperThreads(TuningDefaults::MaxHelperThreads),
413 helperThreadCount(1),
414 createBudgetCallback(nullptr),
415 minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
416 maxEmptyChunkCount_(TuningDefaults::MaxEmptyChunkCount),
417 rootsHash(256),
418 nextCellUniqueId_(LargestTaggedNullCellPointer +
419 1), // Ensure disjoint from null tagged pointers.
420 numArenasFreeCommitted(0),
421 verifyPreData(nullptr),
422 lastGCStartTime_(TimeStamp::Now()),
423 lastGCEndTime_(TimeStamp::Now()),
424 incrementalGCEnabled(TuningDefaults::IncrementalGCEnabled),
425 perZoneGCEnabled(TuningDefaults::PerZoneGCEnabled),
426 numActiveZoneIters(0),
427 cleanUpEverything(false),
428 grayBitsValid(true),
429 majorGCTriggerReason(JS::GCReason::NO_REASON),
430 minorGCNumber(0),
431 majorGCNumber(0),
432 number(0),
433 sliceNumber(0),
434 isFull(false),
435 incrementalState(gc::State::NotActive),
436 initialState(gc::State::NotActive),
437 useZeal(false),
438 lastMarkSlice(false),
439 safeToYield(true),
440 markOnBackgroundThreadDuringSweeping(false),
441 useBackgroundThreads(false),
442 #ifdef DEBUG
443 hadShutdownGC(false),
444 #endif
445 requestSliceAfterBackgroundTask(false),
446 lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
447 lifoBlocksToFreeAfterFullMinorGC(
448 (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
449 lifoBlocksToFreeAfterNextMinorGC(
450 (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
451 sweepGroupIndex(0),
452 sweepGroups(nullptr),
453 currentSweepGroup(nullptr),
454 sweepZone(nullptr),
455 abortSweepAfterCurrentGroup(false),
456 sweepMarkResult(IncrementalProgress::NotFinished),
457 #ifdef DEBUG
458 testMarkQueue(rt),
459 #endif
460 startedCompacting(false),
461 zonesCompacted(0),
462 #ifdef DEBUG
463 relocatedArenasToRelease(nullptr),
464 #endif
465 #ifdef JS_GC_ZEAL
466 markingValidator(nullptr),
467 #endif
468 defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
469 incrementalAllowed(true),
470 compactingEnabled(TuningDefaults::CompactingEnabled),
471 parallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled),
472 rootsRemoved(false),
473 #ifdef JS_GC_ZEAL
474 zealModeBits(0),
475 zealFrequency(0),
476 nextScheduled(0),
477 deterministicOnly(false),
478 zealSliceBudget(0),
479 selectedForMarking(rt),
480 #endif
481 fullCompartmentChecks(false),
482 gcCallbackDepth(0),
483 alwaysPreserveCode(false),
484 lowMemoryState(false),
485 lock(mutexid::GCLock),
486 storeBufferLock(mutexid::StoreBuffer),
487 delayedMarkingLock(mutexid::GCDelayedMarkingLock),
488 allocTask(this, emptyChunks_.ref()),
489 unmarkTask(this),
490 markTask(this),
491 sweepTask(this),
492 freeTask(this),
493 decommitTask(this),
494 nursery_(this),
495 storeBuffer_(rt),
496 lastAllocRateUpdateTime(TimeStamp::Now()) {
499 using CharRange = mozilla::Range<const char>;
500 using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;
502 static bool SplitStringBy(const CharRange& text, char delimiter,
503 CharRangeVector* result) {
504 auto start = text.begin();
505 for (auto ptr = start; ptr != text.end(); ptr++) {
506 if (*ptr == delimiter) {
507 if (!result->emplaceBack(start, ptr)) {
508 return false;
510 start = ptr + 1;
514 return result->emplaceBack(start, text.end());
517 static bool ParseTimeDuration(const CharRange& text,
518 TimeDuration* durationOut) {
519 const char* str = text.begin().get();
520 char* end;
521 long millis = strtol(str, &end, 10);
522 *durationOut = TimeDuration::FromMilliseconds(double(millis));
523 return str != end && end == text.end().get();
526 static void PrintProfileHelpAndExit(const char* envName, const char* helpText) {
527 fprintf(stderr, "%s=N[,(main|all)]\n", envName);
528 fprintf(stderr, "%s", helpText);
529 exit(0);
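// The accepted format is <env var>=N[,(main|all)], where N is a threshold in
// milliseconds. For example (illustrative), an environment setting such as
// JS_GC_PROFILE=10,all asks for profile output for any collection taking at
// least 10ms, on worker runtimes as well as the main runtime.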
532 void js::gc::ReadProfileEnv(const char* envName, const char* helpText,
533 bool* enableOut, bool* workersOut,
534 TimeDuration* thresholdOut) {
535 *enableOut = false;
536 *workersOut = false;
537 *thresholdOut = TimeDuration::Zero();
539 const char* env = getenv(envName);
540 if (!env) {
541 return;
544 if (strcmp(env, "help") == 0) {
545 PrintProfileHelpAndExit(envName, helpText);
548 CharRangeVector parts;
549 auto text = CharRange(env, strlen(env));
550 if (!SplitStringBy(text, ',', &parts)) {
551 MOZ_CRASH("OOM parsing environment variable");
554 if (parts.length() == 0 || parts.length() > 2) {
555 PrintProfileHelpAndExit(envName, helpText);
558 *enableOut = true;
560 if (!ParseTimeDuration(parts[0], thresholdOut)) {
561 PrintProfileHelpAndExit(envName, helpText);
564 if (parts.length() == 2) {
565 const char* threads = parts[1].begin().get();
566 if (strcmp(threads, "all") == 0) {
567 *workersOut = true;
568 } else if (strcmp(threads, "main") != 0) {
569 PrintProfileHelpAndExit(envName, helpText);
574 bool js::gc::ShouldPrintProfile(JSRuntime* runtime, bool enable,
575 bool profileWorkers, TimeDuration threshold,
576 TimeDuration duration) {
577 return enable && (runtime->isMainRuntime() || profileWorkers) &&
578 duration >= threshold;
581 #ifdef JS_GC_ZEAL
583 void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
584 uint32_t* scheduled) {
585 *zealBits = zealModeBits;
586 *frequency = zealFrequency;
587 *scheduled = nextScheduled;
590 const char gc::ZealModeHelpText[] =
591 " Specifies how zealous the garbage collector should be. Some of these "
592 "modes can\n"
593 " be set simultaneously, by passing multiple level options, e.g. \"2;4\" "
594 "will activate\n"
595 " both modes 2 and 4. Modes can be specified by name or number.\n"
596 " \n"
597 " Values:\n"
598 " 0: (None) Normal amount of collection (resets all modes)\n"
599 " 1: (RootsChange) Collect when roots are added or removed\n"
600 " 2: (Alloc) Collect when every N allocations (default: 100)\n"
601 " 4: (VerifierPre) Verify pre write barriers between instructions\n"
602 " 6: (YieldBeforeRootMarking) Incremental GC in two slices that yields "
603 "before root marking\n"
604 " 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"
605 " 8: (YieldBeforeMarking) Incremental GC in two slices that yields "
606 "between\n"
607 " the root marking and marking phases\n"
608 " 9: (YieldBeforeSweeping) Incremental GC in two slices that yields "
609 "between\n"
610 " the marking and sweeping phases\n"
611 " 10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
612 " 11: (IncrementalMarkingValidator) Verify incremental marking\n"
613 " 12: (ElementsBarrier) Use the individual element post-write barrier\n"
614 " regardless of elements size\n"
615 " 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
616 " 14: (Compact) Perform a shrinking collection every N allocations\n"
617 " 15: (CheckHeapAfterGC) Walk the heap to check its integrity after "
618 "every GC\n"
619 " 17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that "
620 "yields\n"
621 " before sweeping the atoms table\n"
622 " 18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
623 " 19: (YieldBeforeSweepingCaches) Incremental GC in two slices that "
624 "yields\n"
625 " before sweeping weak caches\n"
626 " 21: (YieldBeforeSweepingObjects) Incremental GC in two slices that "
627 "yields\n"
628 " before sweeping foreground finalized objects\n"
629 " 22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that "
630 "yields\n"
631 " before sweeping non-object GC things\n"
632 " 23: (YieldBeforeSweepingPropMapTrees) Incremental GC in two slices "
633 "that "
634 "yields\n"
635 " before sweeping shape trees\n"
636 " 24: (CheckWeakMapMarking) Check weak map marking invariants after "
637 "every GC\n"
638 " 25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
639 " during gray marking\n";
641 // The set of zeal modes that control incremental slices. These modes are
642 // mutually exclusive.
643 static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
644 ZealMode::YieldBeforeRootMarking,
645 ZealMode::YieldBeforeMarking,
646 ZealMode::YieldBeforeSweeping,
647 ZealMode::IncrementalMultipleSlices,
648 ZealMode::YieldBeforeSweepingAtoms,
649 ZealMode::YieldBeforeSweepingCaches,
650 ZealMode::YieldBeforeSweepingObjects,
651 ZealMode::YieldBeforeSweepingNonObjects,
652 ZealMode::YieldBeforeSweepingPropMapTrees};
654 void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
655 MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
657 if (verifyPreData) {
658 VerifyBarriers(rt, PreBarrierVerifier);
661 if (zeal == 0) {
662 if (hasZealMode(ZealMode::GenerationalGC)) {
663 evictNursery();
664 nursery().leaveZealMode();
667 if (isIncrementalGCInProgress()) {
668 finishGC(JS::GCReason::DEBUG_GC);
672 ZealMode zealMode = ZealMode(zeal);
673 if (zealMode == ZealMode::GenerationalGC) {
674 evictNursery(JS::GCReason::EVICT_NURSERY);
675 nursery().enterZealMode();
678 // Some modes are mutually exclusive. If we're setting one of those, we
679 // first reset all of them.
680 if (IncrementalSliceZealModes.contains(zealMode)) {
681 for (auto mode : IncrementalSliceZealModes) {
682 clearZealMode(mode);
686 bool schedule = zealMode >= ZealMode::Alloc;
687 if (zeal != 0) {
688 zealModeBits |= 1 << unsigned(zeal);
689 } else {
690 zealModeBits = 0;
692 zealFrequency = frequency;
693 nextScheduled = schedule ? frequency : 0;
696 void GCRuntime::unsetZeal(uint8_t zeal) {
697 MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
698 ZealMode zealMode = ZealMode(zeal);
700 if (!hasZealMode(zealMode)) {
701 return;
704 if (verifyPreData) {
705 VerifyBarriers(rt, PreBarrierVerifier);
708 if (zealMode == ZealMode::GenerationalGC) {
709 evictNursery();
710 nursery().leaveZealMode();
713 clearZealMode(zealMode);
715 if (zealModeBits == 0) {
716 if (isIncrementalGCInProgress()) {
717 finishGC(JS::GCReason::DEBUG_GC);
720 zealFrequency = 0;
721 nextScheduled = 0;
725 void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }
727 static bool ParseZealModeName(const CharRange& text, uint32_t* modeOut) {
728 struct ModeInfo {
729 const char* name;
730 size_t length;
731 uint32_t value;
734 static const ModeInfo zealModes[] = {{"None", 0},
735 # define ZEAL_MODE(name, value) {#name, strlen(#name), value},
736 JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
737 # undef ZEAL_MODE
740 for (auto mode : zealModes) {
741 if (text.length() == mode.length &&
742 memcmp(text.begin().get(), mode.name, mode.length) == 0) {
743 *modeOut = mode.value;
744 return true;
748 return false;
751 static bool ParseZealModeNumericParam(const CharRange& text,
752 uint32_t* paramOut) {
753 if (text.length() == 0) {
754 return false;
757 for (auto c : text) {
758 if (!mozilla::IsAsciiDigit(c)) {
759 return false;
763 *paramOut = atoi(text.begin().get());
764 return true;
767 static bool PrintZealHelpAndFail() {
768 fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
769 fputs(ZealModeHelpText, stderr);
770 return false;
773 bool GCRuntime::parseAndSetZeal(const char* str) {
774 // Set the zeal mode from a string consisting of one or more mode specifiers
775 // separated by ';', optionally followed by a ',' and the trigger frequency.
776 // The mode specifiers can be a mode name or its number.
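// For example (illustrative): both "10;15,100" and
// "IncrementalMultipleSlices;CheckHeapAfterGC,100" enable zeal modes 10 and 15
// with a trigger frequency of 100.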
778 auto text = CharRange(str, strlen(str));
780 CharRangeVector parts;
781 if (!SplitStringBy(text, ',', &parts)) {
782 return false;
785 if (parts.length() == 0 || parts.length() > 2) {
786 return PrintZealHelpAndFail();
789 uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
790 if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
791 return PrintZealHelpAndFail();
794 CharRangeVector modes;
795 if (!SplitStringBy(parts[0], ';', &modes)) {
796 return false;
799 for (const auto& descr : modes) {
800 uint32_t mode;
801 if (!ParseZealModeName(descr, &mode) &&
802 !(ParseZealModeNumericParam(descr, &mode) &&
803 mode <= unsigned(ZealMode::Limit))) {
804 return PrintZealHelpAndFail();
807 setZeal(mode, frequency);
810 return true;
813 const char* js::gc::AllocKindName(AllocKind kind) {
814 static const char* const names[] = {
815 # define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) #allocKind,
816 FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
817 # undef EXPAND_THING_NAME
819 static_assert(std::size(names) == AllocKindCount,
820 "names array should have an entry for every AllocKind");
822 size_t i = size_t(kind);
823 MOZ_ASSERT(i < std::size(names));
824 return names[i];
827 void js::gc::DumpArenaInfo() {
828 fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);
830 fprintf(stderr, "GC thing kinds:\n");
831 fprintf(stderr, "%25s %8s %8s %8s\n",
832 "AllocKind:", "Size:", "Count:", "Padding:");
833 for (auto kind : AllAllocKinds()) {
834 fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
835 Arena::thingSize(kind), Arena::thingsPerArena(kind),
836 Arena::firstThingOffset(kind) - ArenaHeaderSize);
840 #endif // JS_GC_ZEAL
842 bool GCRuntime::init(uint32_t maxbytes) {
843 MOZ_ASSERT(!wasInitialized());
845 MOZ_ASSERT(SystemPageSize());
846 Arena::checkLookupTables();
848 if (!TlsGCContext.init()) {
849 return false;
851 TlsGCContext.set(&mainThreadContext.ref());
853 updateHelperThreadCount();
855 #ifdef JS_GC_ZEAL
856 const char* size = getenv("JSGC_MARK_STACK_LIMIT");
857 if (size) {
858 maybeMarkStackLimit = atoi(size);
860 #endif
862 if (!updateMarkersVector()) {
863 return false;
867 AutoLockGCBgAlloc lock(this);
869 MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes));
871 if (!nursery().init(lock)) {
872 return false;
876 #ifdef JS_GC_ZEAL
877 const char* zealSpec = getenv("JS_GC_ZEAL");
878 if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
879 return false;
881 #endif
883 for (auto& marker : markers) {
884 if (!marker->init()) {
885 return false;
889 if (!initSweepActions()) {
890 return false;
893 UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
894 if (!zone || !zone->init()) {
895 return false;
898 // The atoms zone is stored as the first element of the zones vector.
899 MOZ_ASSERT(zone->isAtomsZone());
900 MOZ_ASSERT(zones().empty());
901 MOZ_ALWAYS_TRUE(zones().reserve(1)); // ZonesVector has inline capacity 4.
902 zones().infallibleAppend(zone.release());
904 gcprobes::Init(this);
906 initialized = true;
907 return true;
910 void GCRuntime::finish() {
911 MOZ_ASSERT(inPageLoadCount == 0);
912 MOZ_ASSERT(!sharedAtomsZone_);
914 // Wait for nursery background free to end and disable it to release memory.
915 if (nursery().isEnabled()) {
916 nursery().disable();
919 // Wait until the background finalization and allocation stops and the
920 // helper thread shuts down before we forcefully release any remaining GC
921 // memory.
922 sweepTask.join();
923 markTask.join();
924 freeTask.join();
925 allocTask.cancelAndWait();
926 decommitTask.cancelAndWait();
927 #ifdef DEBUG
929 MOZ_ASSERT(dispatchedParallelTasks == 0);
930 AutoLockHelperThreadState lock;
931 MOZ_ASSERT(queuedParallelTasks.ref().isEmpty(lock));
933 #endif
935 releaseMarkingThreads();
937 #ifdef JS_GC_ZEAL
938 // Free memory associated with GC verification.
939 finishVerifier();
940 #endif
942 // Delete all remaining zones.
943 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
944 AutoSetThreadIsSweeping threadIsSweeping(rt->gcContext(), zone);
945 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
946 for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
947 js_delete(realm.get());
949 comp->realms().clear();
950 js_delete(comp.get());
952 zone->compartments().clear();
953 js_delete(zone.get());
956 zones().clear();
958 FreeChunkPool(fullChunks_.ref());
959 FreeChunkPool(availableChunks_.ref());
960 FreeChunkPool(emptyChunks_.ref());
962 TlsGCContext.set(nullptr);
964 gcprobes::Finish(this);
966 nursery().printTotalProfileTimes();
967 stats().printTotalProfileTimes();
970 bool GCRuntime::freezeSharedAtomsZone() {
971 // This is called just after permanent atoms and well-known symbols have been
972 // created. At this point all existing atoms and symbols are permanent.
974 // This method makes the current atoms zone into a shared atoms zone and
975 // removes it from the zones list. Everything in it is marked black. A new
976 // empty atoms zone is created, where all atoms local to this runtime will
977 // live.
979 // The shared atoms zone will not be collected until shutdown when it is
980 // returned to the zone list by restoreSharedAtomsZone().
982 MOZ_ASSERT(rt->isMainRuntime());
983 MOZ_ASSERT(!sharedAtomsZone_);
984 MOZ_ASSERT(zones().length() == 1);
985 MOZ_ASSERT(atomsZone());
986 MOZ_ASSERT(!atomsZone()->wasGCStarted());
987 MOZ_ASSERT(!atomsZone()->needsIncrementalBarrier());
989 AutoAssertEmptyNursery nurseryIsEmpty(rt->mainContextFromOwnThread());
991 atomsZone()->arenas.clearFreeLists();
993 for (auto kind : AllAllocKinds()) {
994 for (auto thing =
995 atomsZone()->cellIterUnsafe<TenuredCell>(kind, nurseryIsEmpty);
996 !thing.done(); thing.next()) {
997 TenuredCell* cell = thing.getCell();
998 MOZ_ASSERT((cell->is<JSString>() &&
999 cell->as<JSString>()->isPermanentAndMayBeShared()) ||
1000 (cell->is<JS::Symbol>() &&
1001 cell->as<JS::Symbol>()->isPermanentAndMayBeShared()));
1002 cell->markBlack();
1006 sharedAtomsZone_ = atomsZone();
1007 zones().clear();
1009 UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
1010 if (!zone || !zone->init()) {
1011 return false;
1014 MOZ_ASSERT(zone->isAtomsZone());
1015 zones().infallibleAppend(zone.release());
1017 return true;
1020 void GCRuntime::restoreSharedAtomsZone() {
1021 // Return the shared atoms zone to the zone list. This allows the contents of
1022 // the shared atoms zone to be collected when the parent runtime is shut down.
1024 if (!sharedAtomsZone_) {
1025 return;
1028 MOZ_ASSERT(rt->isMainRuntime());
1029 MOZ_ASSERT(rt->childRuntimeCount == 0);
1031 AutoEnterOOMUnsafeRegion oomUnsafe;
1032 if (!zones().append(sharedAtomsZone_)) {
1033 oomUnsafe.crash("restoreSharedAtomsZone");
1036 sharedAtomsZone_ = nullptr;
1039 bool GCRuntime::setParameter(JSContext* cx, JSGCParamKey key, uint32_t value) {
1040 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1042 AutoStopVerifyingBarriers pauseVerification(rt, false);
1043 FinishGC(cx);
1044 waitBackgroundSweepEnd();
1046 AutoLockGC lock(this);
1047 return setParameter(key, value, lock);
1050 static bool IsGCThreadParameter(JSGCParamKey key) {
1051 return key == JSGC_HELPER_THREAD_RATIO || key == JSGC_MAX_HELPER_THREADS ||
1052 key == JSGC_MARKING_THREAD_COUNT;
1055 bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
1056 AutoLockGC& lock) {
1057 switch (key) {
1058 case JSGC_SLICE_TIME_BUDGET_MS:
1059 defaultTimeBudgetMS_ = value;
1060 break;
1061 case JSGC_INCREMENTAL_GC_ENABLED:
1062 setIncrementalGCEnabled(value != 0);
1063 break;
1064 case JSGC_PER_ZONE_GC_ENABLED:
1065 perZoneGCEnabled = value != 0;
1066 break;
1067 case JSGC_COMPACTING_ENABLED:
1068 compactingEnabled = value != 0;
1069 break;
1070 case JSGC_PARALLEL_MARKING_ENABLED:
1071 setParallelMarkingEnabled(value != 0);
1072 break;
1073 case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
1074 for (auto& marker : markers) {
1075 marker->incrementalWeakMapMarkingEnabled = value != 0;
1077 break;
1078 case JSGC_SEMISPACE_NURSERY_ENABLED: {
1079 AutoUnlockGC unlock(lock);
1080 nursery().setSemispaceEnabled(value);
1081 break;
1083 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1084 setMinEmptyChunkCount(value, lock);
1085 break;
1086 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1087 setMaxEmptyChunkCount(value, lock);
1088 break;
1089 default:
1090 if (IsGCThreadParameter(key)) {
1091 return setThreadParameter(key, value, lock);
1094 if (!tunables.setParameter(key, value)) {
1095 return false;
1097 updateAllGCStartThresholds();
1100 return true;
1103 bool GCRuntime::setThreadParameter(JSGCParamKey key, uint32_t value,
1104 AutoLockGC& lock) {
1105 if (rt->parentRuntime) {
1106 // Don't allow these to be set for worker runtimes.
1107 return false;
1110 switch (key) {
1111 case JSGC_HELPER_THREAD_RATIO:
1112 if (value == 0) {
1113 return false;
1115 helperThreadRatio = double(value) / 100.0;
1116 break;
1117 case JSGC_MAX_HELPER_THREADS:
1118 if (value == 0) {
1119 return false;
1121 maxHelperThreads = value;
1122 break;
1123 case JSGC_MARKING_THREAD_COUNT:
1124 markingThreadCount = std::min(size_t(value), MaxParallelWorkers);
1125 break;
1126 default:
1127 MOZ_CRASH("Unexpected parameter key");
1130 updateHelperThreadCount();
1131 initOrDisableParallelMarking();
1133 return true;
1136 void GCRuntime::resetParameter(JSContext* cx, JSGCParamKey key) {
1137 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1139 AutoStopVerifyingBarriers pauseVerification(rt, false);
1140 FinishGC(cx);
1141 waitBackgroundSweepEnd();
1143 AutoLockGC lock(this);
1144 resetParameter(key, lock);
1147 void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
1148 switch (key) {
1149 case JSGC_SLICE_TIME_BUDGET_MS:
1150 defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
1151 break;
1152 case JSGC_INCREMENTAL_GC_ENABLED:
1153 setIncrementalGCEnabled(TuningDefaults::IncrementalGCEnabled);
1154 break;
1155 case JSGC_PER_ZONE_GC_ENABLED:
1156 perZoneGCEnabled = TuningDefaults::PerZoneGCEnabled;
1157 break;
1158 case JSGC_COMPACTING_ENABLED:
1159 compactingEnabled = TuningDefaults::CompactingEnabled;
1160 break;
1161 case JSGC_PARALLEL_MARKING_ENABLED:
1162 setParallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled);
1163 break;
1164 case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
1165 for (auto& marker : markers) {
1166 marker->incrementalWeakMapMarkingEnabled =
1167 TuningDefaults::IncrementalWeakMapMarkingEnabled;
1169 break;
1170 case JSGC_SEMISPACE_NURSERY_ENABLED: {
1171 AutoUnlockGC unlock(lock);
1172 nursery().setSemispaceEnabled(TuningDefaults::SemispaceNurseryEnabled);
1173 break;
1175 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1176 setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount, lock);
1177 break;
1178 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1179 setMaxEmptyChunkCount(TuningDefaults::MaxEmptyChunkCount, lock);
1180 break;
1181 default:
1182 if (IsGCThreadParameter(key)) {
1183 resetThreadParameter(key, lock);
1184 return;
1187 tunables.resetParameter(key);
1188 updateAllGCStartThresholds();
1192 void GCRuntime::resetThreadParameter(JSGCParamKey key, AutoLockGC& lock) {
1193 if (rt->parentRuntime) {
1194 return;
1197 switch (key) {
1198 case JSGC_HELPER_THREAD_RATIO:
1199 helperThreadRatio = TuningDefaults::HelperThreadRatio;
1200 break;
1201 case JSGC_MAX_HELPER_THREADS:
1202 maxHelperThreads = TuningDefaults::MaxHelperThreads;
1203 break;
1204 case JSGC_MARKING_THREAD_COUNT:
1205 markingThreadCount = 0;
1206 break;
1207 default:
1208 MOZ_CRASH("Unexpected parameter key");
1211 updateHelperThreadCount();
1212 initOrDisableParallelMarking();
1215 uint32_t GCRuntime::getParameter(JSGCParamKey key) {
1216 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1217 AutoLockGC lock(this);
1218 return getParameter(key, lock);
1221 uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
1222 switch (key) {
1223 case JSGC_BYTES:
1224 return uint32_t(heapSize.bytes());
1225 case JSGC_NURSERY_BYTES:
1226 return nursery().capacity();
1227 case JSGC_NUMBER:
1228 return uint32_t(number);
1229 case JSGC_MAJOR_GC_NUMBER:
1230 return uint32_t(majorGCNumber);
1231 case JSGC_MINOR_GC_NUMBER:
1232 return uint32_t(minorGCNumber);
1233 case JSGC_INCREMENTAL_GC_ENABLED:
1234 return incrementalGCEnabled;
1235 case JSGC_PER_ZONE_GC_ENABLED:
1236 return perZoneGCEnabled;
1237 case JSGC_UNUSED_CHUNKS:
1238 return uint32_t(emptyChunks(lock).count());
1239 case JSGC_TOTAL_CHUNKS:
1240 return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
1241 emptyChunks(lock).count());
1242 case JSGC_SLICE_TIME_BUDGET_MS:
1243 MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
1244 MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
1245 return uint32_t(defaultTimeBudgetMS_);
1246 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1247 return minEmptyChunkCount(lock);
1248 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1249 return maxEmptyChunkCount(lock);
1250 case JSGC_COMPACTING_ENABLED:
1251 return compactingEnabled;
1252 case JSGC_PARALLEL_MARKING_ENABLED:
1253 return parallelMarkingEnabled;
1254 case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
1255 return marker().incrementalWeakMapMarkingEnabled;
1256 case JSGC_SEMISPACE_NURSERY_ENABLED:
1257 return nursery().semispaceEnabled();
1258 case JSGC_CHUNK_BYTES:
1259 return ChunkSize;
1260 case JSGC_HELPER_THREAD_RATIO:
1261 MOZ_ASSERT(helperThreadRatio > 0.0);
1262 return uint32_t(helperThreadRatio * 100.0);
1263 case JSGC_MAX_HELPER_THREADS:
1264 MOZ_ASSERT(maxHelperThreads <= UINT32_MAX);
1265 return maxHelperThreads;
1266 case JSGC_HELPER_THREAD_COUNT:
1267 return helperThreadCount;
1268 case JSGC_MARKING_THREAD_COUNT:
1269 return markingThreadCount;
1270 case JSGC_SYSTEM_PAGE_SIZE_KB:
1271 return SystemPageSize() / 1024;
1272 default:
1273 return tunables.getParameter(key);
1277 #ifdef JS_GC_ZEAL
1278 void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
1279 MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
1281 maybeMarkStackLimit = limit;
1283 AutoUnlockGC unlock(lock);
1284 AutoStopVerifyingBarriers pauseVerification(rt, false);
1285 for (auto& marker : markers) {
1286 marker->setMaxCapacity(limit);
1289 #endif
1291 void GCRuntime::setIncrementalGCEnabled(bool enabled) {
1292 incrementalGCEnabled = enabled;
1295 void GCRuntime::updateHelperThreadCount() {
1296 if (!CanUseExtraThreads()) {
1297 // startTask will run the work on the main thread if the count is 1.
1298 MOZ_ASSERT(helperThreadCount == 1);
1299 markingThreadCount = 1;
1301 AutoLockHelperThreadState lock;
1302 maxParallelThreads = 1;
1303 return;
1306 // Number of extra threads required during parallel marking to ensure we can
1307 // start the necessary marking tasks. Background free and background
1308 // allocation may already be running and we want to avoid these tasks blocking
1309 // marking. In real configurations there will be enough threads that this
1310 // won't affect anything.
1311 static constexpr size_t SpareThreadsDuringParallelMarking = 2;
1313 // Calculate the target thread count for GC parallel tasks.
1314 size_t cpuCount = GetHelperThreadCPUCount();
1315 helperThreadCount =
1316 std::clamp(size_t(double(cpuCount) * helperThreadRatio.ref()), size_t(1),
1317 maxHelperThreads.ref());
1319 // Calculate the overall target thread count taking into account the separate
1320 // parameter for parallel marking threads. Add spare threads to avoid blocking
1321 // parallel marking when there is other GC work happening.
1322 size_t targetCount =
1323 std::max(helperThreadCount.ref(),
1324 markingThreadCount.ref() + SpareThreadsDuringParallelMarking);
1326 // Attempt to create extra threads if possible. This is not supported when
1327 // using an external thread pool.
1328 AutoLockHelperThreadState lock;
1329 (void)HelperThreadState().ensureThreadCount(targetCount, lock);
1331 // Limit all thread counts based on the number of threads available, which may
1332 // be fewer than requested.
1333 size_t availableThreadCount = GetHelperThreadCount();
1334 MOZ_ASSERT(availableThreadCount != 0);
1335 targetCount = std::min(targetCount, availableThreadCount);
1336 helperThreadCount = std::min(helperThreadCount.ref(), availableThreadCount);
1337 markingThreadCount =
1338 std::min(markingThreadCount.ref(),
1339 availableThreadCount - SpareThreadsDuringParallelMarking);
1341 // Update the maximum number of threads that will be used for GC work.
1342 maxParallelThreads = targetCount;
1345 size_t GCRuntime::markingWorkerCount() const {
1346 if (!CanUseExtraThreads() || !parallelMarkingEnabled) {
1347 return 1;
1350 if (markingThreadCount) {
1351 return markingThreadCount;
1354 // Limit parallel marking to use at most two threads initially.
1355 return 2;
1358 #ifdef DEBUG
1359 void GCRuntime::assertNoMarkingWork() const {
1360 for (const auto& marker : markers) {
1361 MOZ_ASSERT(marker->isDrained());
1363 MOZ_ASSERT(!hasDelayedMarking());
1365 #endif
1367 bool GCRuntime::setParallelMarkingEnabled(bool enabled) {
1368 if (enabled == parallelMarkingEnabled) {
1369 return true;
1372 parallelMarkingEnabled = enabled;
1373 return initOrDisableParallelMarking();
1376 bool GCRuntime::initOrDisableParallelMarking() {
1377 // Attempt to initialize parallel marking state or disable it on failure. This
1378 // is called when parallel marking is enabled or disabled.
1380 MOZ_ASSERT(markers.length() != 0);
1382 if (updateMarkersVector()) {
1383 return true;
1386 // Failed to initialize parallel marking so disable it instead.
1387 MOZ_ASSERT(parallelMarkingEnabled);
1388 parallelMarkingEnabled = false;
1389 MOZ_ALWAYS_TRUE(updateMarkersVector());
1390 return false;
1393 void GCRuntime::releaseMarkingThreads() {
1394 MOZ_ALWAYS_TRUE(reserveMarkingThreads(0));
1397 bool GCRuntime::reserveMarkingThreads(size_t newCount) {
1398 if (reservedMarkingThreads == newCount) {
1399 return true;
1402 // Update the helper thread system's global count by subtracting this
1403 // runtime's current contribution |reservedMarkingThreads| and adding the new
1404 // contribution |newCount|.
1406 AutoLockHelperThreadState lock;
1407 auto& globalCount = HelperThreadState().gcParallelMarkingThreads;
1408 MOZ_ASSERT(globalCount >= reservedMarkingThreads);
1409 size_t newGlobalCount = globalCount - reservedMarkingThreads + newCount;
1410 if (newGlobalCount > HelperThreadState().threadCount) {
1411 // Not enough total threads.
1412 return false;
1415 globalCount = newGlobalCount;
1416 reservedMarkingThreads = newCount;
1417 return true;
1420 size_t GCRuntime::getMaxParallelThreads() const {
1421 AutoLockHelperThreadState lock;
1422 return maxParallelThreads.ref();
1425 bool GCRuntime::updateMarkersVector() {
1426 MOZ_ASSERT(helperThreadCount >= 1,
1427 "There must always be at least one mark task");
1428 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1429 assertNoMarkingWork();
1431 // Limit worker count to the number of GC parallel tasks that can run
1432 // concurrently; otherwise one thread can deadlock waiting on another.
1433 size_t targetCount = std::min(markingWorkerCount(), getMaxParallelThreads());
1435 if (rt->isMainRuntime()) {
1436 // For the main runtime, reserve helper threads as long as parallel marking
1437 // is enabled. Worker runtimes may not mark in parallel if there are
1438 // insufficient threads available at the time.
1439 size_t threadsToReserve = targetCount > 1 ? targetCount : 0;
1440 if (!reserveMarkingThreads(threadsToReserve)) {
1441 return false;
1445 if (markers.length() > targetCount) {
1446 return markers.resize(targetCount);
1449 while (markers.length() < targetCount) {
1450 auto marker = MakeUnique<GCMarker>(rt);
1451 if (!marker) {
1452 return false;
1455 #ifdef JS_GC_ZEAL
1456 if (maybeMarkStackLimit) {
1457 marker->setMaxCapacity(maybeMarkStackLimit);
1459 #endif
1461 if (!marker->init()) {
1462 return false;
1465 if (!markers.emplaceBack(std::move(marker))) {
1466 return false;
1470 return true;
1473 template <typename F>
1474 static bool EraseCallback(CallbackVector<F>& vector, F callback) {
1475 for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
1476 if (p->op == callback) {
1477 vector.erase(p);
1478 return true;
1482 return false;
1485 template <typename F>
1486 static bool EraseCallback(CallbackVector<F>& vector, F callback, void* data) {
1487 for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
1488 if (p->op == callback && p->data == data) {
1489 vector.erase(p);
1490 return true;
1494 return false;
1497 bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
1498 AssertHeapIsIdle();
1499 return blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
1502 void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
1503 // Can be called from finalizers
1504 MOZ_ALWAYS_TRUE(EraseCallback(blackRootTracers.ref(), traceOp));
1507 void GCRuntime::setGrayRootsTracer(JSGrayRootsTracer traceOp, void* data) {
1508 AssertHeapIsIdle();
1509 grayRootTracer.ref() = {traceOp, data};
1512 void GCRuntime::clearBlackAndGrayRootTracers() {
1513 MOZ_ASSERT(rt->isBeingDestroyed());
1514 blackRootTracers.ref().clear();
1515 setGrayRootsTracer(nullptr, nullptr);
1518 void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
1519 gcCallback.ref() = {callback, data};
1522 void GCRuntime::callGCCallback(JSGCStatus status, JS::GCReason reason) const {
1523 const auto& callback = gcCallback.ref();
1524 MOZ_ASSERT(callback.op);
1525 callback.op(rt->mainContextFromOwnThread(), status, reason, callback.data);
1528 void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
1529 void* data) {
1530 tenuredCallback.ref() = {callback, data};
1533 void GCRuntime::callObjectsTenuredCallback() {
1534 JS::AutoSuppressGCAnalysis nogc;
1535 const auto& callback = tenuredCallback.ref();
1536 if (callback.op) {
1537 callback.op(rt->mainContextFromOwnThread(), callback.data);
1541 bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
1542 return finalizeCallbacks.ref().append(
1543 Callback<JSFinalizeCallback>(callback, data));
1546 void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
1547 MOZ_ALWAYS_TRUE(EraseCallback(finalizeCallbacks.ref(), callback));
1550 void GCRuntime::callFinalizeCallbacks(JS::GCContext* gcx,
1551 JSFinalizeStatus status) const {
1552 for (const auto& p : finalizeCallbacks.ref()) {
1553 p.op(gcx, status, p.data);
1557 void GCRuntime::setHostCleanupFinalizationRegistryCallback(
1558 JSHostCleanupFinalizationRegistryCallback callback, void* data) {
1559 hostCleanupFinalizationRegistryCallback.ref() = {callback, data};
1562 void GCRuntime::callHostCleanupFinalizationRegistryCallback(
1563 JSFunction* doCleanup, GlobalObject* incumbentGlobal) {
1564 JS::AutoSuppressGCAnalysis nogc;
1565 const auto& callback = hostCleanupFinalizationRegistryCallback.ref();
1566 if (callback.op) {
1567 callback.op(doCleanup, incumbentGlobal, callback.data);
1571 bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
1572 void* data) {
1573 return updateWeakPointerZonesCallbacks.ref().append(
1574 Callback<JSWeakPointerZonesCallback>(callback, data));
1577 void GCRuntime::removeWeakPointerZonesCallback(
1578 JSWeakPointerZonesCallback callback) {
1579 MOZ_ALWAYS_TRUE(
1580 EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback));
1583 void GCRuntime::callWeakPointerZonesCallbacks(JSTracer* trc) const {
1584 for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
1585 p.op(trc, p.data);
1589 bool GCRuntime::addWeakPointerCompartmentCallback(
1590 JSWeakPointerCompartmentCallback callback, void* data) {
1591 return updateWeakPointerCompartmentCallbacks.ref().append(
1592 Callback<JSWeakPointerCompartmentCallback>(callback, data));
1595 void GCRuntime::removeWeakPointerCompartmentCallback(
1596 JSWeakPointerCompartmentCallback callback) {
1597 MOZ_ALWAYS_TRUE(
1598 EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback));
1601 void GCRuntime::callWeakPointerCompartmentCallbacks(
1602 JSTracer* trc, JS::Compartment* comp) const {
1603 for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
1604 p.op(trc, comp, p.data);
1608 JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
1609 return stats().setSliceCallback(callback);
1612 bool GCRuntime::addNurseryCollectionCallback(
1613 JS::GCNurseryCollectionCallback callback, void* data) {
1614 return nurseryCollectionCallbacks.ref().append(
1615 Callback<JS::GCNurseryCollectionCallback>(callback, data));
1618 void GCRuntime::removeNurseryCollectionCallback(
1619 JS::GCNurseryCollectionCallback callback, void* data) {
1620 MOZ_ALWAYS_TRUE(
1621 EraseCallback(nurseryCollectionCallbacks.ref(), callback, data));
1624 void GCRuntime::callNurseryCollectionCallbacks(JS::GCNurseryProgress progress,
1625 JS::GCReason reason) {
1626 for (auto const& p : nurseryCollectionCallbacks.ref()) {
1627 p.op(rt->mainContextFromOwnThread(), progress, reason, p.data);
1631 JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
1632 JS::DoCycleCollectionCallback callback) {
1633 const auto prior = gcDoCycleCollectionCallback.ref();
1634 gcDoCycleCollectionCallback.ref() = {callback, nullptr};
1635 return prior.op;
1638 void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
1639 const auto& callback = gcDoCycleCollectionCallback.ref();
1640 if (callback.op) {
1641 callback.op(cx);
1645 bool GCRuntime::addRoot(Value* vp, const char* name) {
1647 * Sometimes Firefox will hold weak references to objects and then convert
1648 * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
1649 * or ModifyBusyCount in workers). We need a read barrier to cover these
1650 * cases.
1652 MOZ_ASSERT(vp);
1653 Value value = *vp;
1654 if (value.isGCThing()) {
1655 ValuePreWriteBarrier(value);
1658 return rootsHash.ref().put(vp, name);
1661 void GCRuntime::removeRoot(Value* vp) {
1662 rootsHash.ref().remove(vp);
1663 notifyRootsRemoved();
1666 /* Compacting GC */
1668 bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
1669 const TimeStamp& currentTime) {
1670 // Assume that we're currently animating if js::NotifyAnimationActivity has
1671 // been called in the last second.
1672 static const auto oneSecond = TimeDuration::FromSeconds(1);
1673 return !lastAnimationTime.IsNull() &&
1674 currentTime < (lastAnimationTime + oneSecond);
1677 static bool DiscardedCodeRecently(Zone* zone, const TimeStamp& currentTime) {
1678 static const auto thirtySeconds = TimeDuration::FromSeconds(30);
1679 return !zone->lastDiscardedCodeTime().IsNull() &&
1680 currentTime < (zone->lastDiscardedCodeTime() + thirtySeconds);
1683 bool GCRuntime::shouldCompact() {
1684 // Compact on shrinking GC if enabled. Skip compacting in incremental GCs
1685 // if we are currently animating, unless the user is inactive or we're
1686 // responding to memory pressure.
1688 if (!isShrinkingGC() || !isCompactingGCEnabled()) {
1689 return false;
1692 if (initialReason == JS::GCReason::USER_INACTIVE ||
1693 initialReason == JS::GCReason::MEM_PRESSURE) {
1694 return true;
1697 return !isIncremental ||
1698 !IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
1701 bool GCRuntime::isCompactingGCEnabled() const {
1702 return compactingEnabled &&
1703 rt->mainContextFromOwnThread()->compactingDisabledCount == 0;
1706 JS_PUBLIC_API void JS::SetCreateGCSliceBudgetCallback(
1707 JSContext* cx, JS::CreateSliceBudgetCallback cb) {
1708 cx->runtime()->gc.createBudgetCallback = cb;
1711 void TimeBudget::setDeadlineFromNow() { deadline = TimeStamp::Now() + budget; }
1713 SliceBudget::SliceBudget(TimeBudget time, InterruptRequestFlag* interrupt)
1714 : counter(StepsPerExpensiveCheck),
1715 interruptRequested(interrupt),
1716 budget(TimeBudget(time)) {
1717 budget.as<TimeBudget>().setDeadlineFromNow();
1720 SliceBudget::SliceBudget(WorkBudget work)
1721 : counter(work.budget), interruptRequested(nullptr), budget(work) {}
1723 int SliceBudget::describe(char* buffer, size_t maxlen) const {
1724 if (isUnlimited()) {
1725 return snprintf(buffer, maxlen, "unlimited");
1728 if (isWorkBudget()) {
1729 return snprintf(buffer, maxlen, "work(%" PRId64 ")", workBudget());
1732 const char* interruptStr = "";
1733 if (interruptRequested) {
1734 interruptStr = interrupted ? "INTERRUPTED " : "interruptible ";
1736 const char* extra = "";
1737 if (idle) {
1738 extra = extended ? " (started idle but extended)" : " (idle)";
1740 return snprintf(buffer, maxlen, "%s%" PRId64 "ms%s", interruptStr,
1741 timeBudget(), extra);
1744 bool SliceBudget::checkOverBudget() {
1745 MOZ_ASSERT(counter <= 0);
1746 MOZ_ASSERT(!isUnlimited());
1748 if (isWorkBudget()) {
1749 return true;
1752 if (interruptRequested && *interruptRequested) {
1753 interrupted = true;
1756 if (interrupted) {
1757 return true;
1760 if (TimeStamp::Now() >= budget.as<TimeBudget>().deadline) {
1761 return true;
1764 counter = StepsPerExpensiveCheck;
1765 return false;
1768 void GCRuntime::requestMajorGC(JS::GCReason reason) {
1769 MOZ_ASSERT_IF(reason != JS::GCReason::BG_TASK_FINISHED,
1770 !CurrentThreadIsPerformingGC());
1772 if (majorGCRequested()) {
1773 return;
1776 majorGCTriggerReason = reason;
1777 rt->mainContextFromAnyThread()->requestInterrupt(InterruptReason::MajorGC);
1780 bool GCRuntime::triggerGC(JS::GCReason reason) {
1782 * Don't trigger GCs if this is being called off the main thread from
1783 * onTooMuchMalloc().
1785 if (!CurrentThreadCanAccessRuntime(rt)) {
1786 return false;
1789 /* GC is already running. */
1790 if (JS::RuntimeHeapIsCollecting()) {
1791 return false;
1794 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
1795 requestMajorGC(reason);
1796 return true;
1799 void GCRuntime::maybeTriggerGCAfterAlloc(Zone* zone) {
1800 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1801 MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());
1803 TriggerResult trigger =
1804 checkHeapThreshold(zone, zone->gcHeapSize, zone->gcHeapThreshold);
1806 if (trigger.shouldTrigger) {
1807 // Start or continue an in progress incremental GC. We do this to try to
1808 // avoid performing non-incremental GCs on zones which allocate a lot of
1809 // data, even when incremental slices can't be triggered via scheduling in
1810 // the event loop.
1811 triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, trigger.usedBytes,
1812 trigger.thresholdBytes);
1816 void js::gc::MaybeMallocTriggerZoneGC(JSRuntime* rt, ZoneAllocator* zoneAlloc,
1817 const HeapSize& heap,
1818 const HeapThreshold& threshold,
1819 JS::GCReason reason) {
1820 rt->gc.maybeTriggerGCAfterMalloc(Zone::from(zoneAlloc), heap, threshold,
1821 reason);
1824 void GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone) {
1825 if (maybeTriggerGCAfterMalloc(zone, zone->mallocHeapSize,
1826 zone->mallocHeapThreshold,
1827 JS::GCReason::TOO_MUCH_MALLOC)) {
1828 return;
1831 maybeTriggerGCAfterMalloc(zone, zone->jitHeapSize, zone->jitHeapThreshold,
1832 JS::GCReason::TOO_MUCH_JIT_CODE);
1835 bool GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone, const HeapSize& heap,
1836 const HeapThreshold& threshold,
1837 JS::GCReason reason) {
1838 // Ignore malloc during sweeping, for example when we resize hash tables.
1839 if (heapState() != JS::HeapState::Idle) {
1840 return false;
1843 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1845 TriggerResult trigger = checkHeapThreshold(zone, heap, threshold);
1846 if (!trigger.shouldTrigger) {
1847 return false;
1850 // Trigger a zone GC. budgetIncrementalGC() will work out whether to do an
1851 // incremental or non-incremental collection.
1852 triggerZoneGC(zone, reason, trigger.usedBytes, trigger.thresholdBytes);
1853 return true;
1856 TriggerResult GCRuntime::checkHeapThreshold(
1857 Zone* zone, const HeapSize& heapSize, const HeapThreshold& heapThreshold) {
1858 MOZ_ASSERT_IF(heapThreshold.hasSliceThreshold(), zone->wasGCStarted());
1860 size_t usedBytes = heapSize.bytes();
1861 size_t thresholdBytes = heapThreshold.hasSliceThreshold()
1862 ? heapThreshold.sliceBytes()
1863 : heapThreshold.startBytes();
1865 // The incremental limit will be checked if we trigger a GC slice.
1866 MOZ_ASSERT(thresholdBytes <= heapThreshold.incrementalLimitBytes());
1868 return TriggerResult{usedBytes >= thresholdBytes, usedBytes, thresholdBytes};
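// Worked example: if a zone has 60 MB of GC heap in use and its start
// threshold is 50 MB (with no slice threshold set yet), the result is
// {true, 60 MB, 50 MB} and the caller triggers a zone GC with those values
// recorded in the stats.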
1871 bool GCRuntime::triggerZoneGC(Zone* zone, JS::GCReason reason, size_t used,
1872 size_t threshold) {
1873 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1875 /* GC is already running. */
1876 if (JS::RuntimeHeapIsBusy()) {
1877 return false;
1880 #ifdef JS_GC_ZEAL
1881 if (hasZealMode(ZealMode::Alloc)) {
1882 MOZ_RELEASE_ASSERT(triggerGC(reason));
1883 return true;
1885 #endif
1887 if (zone->isAtomsZone()) {
1888 stats().recordTrigger(used, threshold);
1889 MOZ_RELEASE_ASSERT(triggerGC(reason));
1890 return true;
1893 stats().recordTrigger(used, threshold);
1894 zone->scheduleGC();
1895 requestMajorGC(reason);
1896 return true;
1899 void GCRuntime::maybeGC() {
1900 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1902 #ifdef JS_GC_ZEAL
1903 if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
1904 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
1905 gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
1906 return;
1908 #endif
1910 (void)gcIfRequestedImpl(/* eagerOk = */ true);
1913 JS::GCReason GCRuntime::wantMajorGC(bool eagerOk) {
1914 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1916 if (majorGCRequested()) {
1917 return majorGCTriggerReason;
1920 if (isIncrementalGCInProgress() || !eagerOk) {
1921 return JS::GCReason::NO_REASON;
1924 JS::GCReason reason = JS::GCReason::NO_REASON;
1925 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
1926 if (checkEagerAllocTrigger(zone->gcHeapSize, zone->gcHeapThreshold) ||
1927 checkEagerAllocTrigger(zone->mallocHeapSize,
1928 zone->mallocHeapThreshold)) {
1929 zone->scheduleGC();
1930 reason = JS::GCReason::EAGER_ALLOC_TRIGGER;
1934 return reason;
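// Returns true (and records the trigger) if this heap has grown past its eager
// allocation trigger. Heaps of 1 MiB or less never trigger eagerly.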
1937 bool GCRuntime::checkEagerAllocTrigger(const HeapSize& size,
1938 const HeapThreshold& threshold) {
1939 size_t thresholdBytes =
1940 threshold.eagerAllocTrigger(schedulingState.inHighFrequencyGCMode());
1941 size_t usedBytes = size.bytes();
1942 if (usedBytes <= 1024 * 1024 || usedBytes < thresholdBytes) {
1943 return false;
1946 stats().recordTrigger(usedBytes, thresholdBytes);
1947 return true;
1950 bool GCRuntime::shouldDecommit() const {
1951 // If we're doing a shrinking GC we always decommit to release as much memory
1952 // as possible.
1953 if (cleanUpEverything) {
1954 return true;
1957 // If we are allocating heavily enough to trigger "high frequency" GC then
1958 // skip decommit so that we do not compete with the mutator.
1959 return !schedulingState.inHighFrequencyGCMode();
1962 void GCRuntime::startDecommit() {
1963 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::DECOMMIT);
1965 #ifdef DEBUG
1966 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
1967 MOZ_ASSERT(decommitTask.isIdle());
1970 AutoLockGC lock(this);
1971 MOZ_ASSERT(fullChunks(lock).verify());
1972 MOZ_ASSERT(availableChunks(lock).verify());
1973 MOZ_ASSERT(emptyChunks(lock).verify());
1975 // Verify that all entries in the empty chunks pool are unused.
1976 for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done();
1977 chunk.next()) {
1978 MOZ_ASSERT(chunk->unused());
1981 #endif
1983 if (!shouldDecommit()) {
1984 return;
1988 AutoLockGC lock(this);
1989 if (availableChunks(lock).empty() && !tooManyEmptyChunks(lock) &&
1990 emptyChunks(lock).empty()) {
1991 return; // Nothing to do.
1995 #ifdef DEBUG
1997 AutoLockHelperThreadState lock;
1998 MOZ_ASSERT(!requestSliceAfterBackgroundTask);
2000 #endif
2002 if (useBackgroundThreads) {
2003 decommitTask.start();
2004 return;
2007 decommitTask.runFromMainThread();
2010 BackgroundDecommitTask::BackgroundDecommitTask(GCRuntime* gc)
2011 : GCParallelTask(gc, gcstats::PhaseKind::DECOMMIT) {}
2013 void js::gc::BackgroundDecommitTask::run(AutoLockHelperThreadState& lock) {
2015 AutoUnlockHelperThreadState unlock(lock);
2017 ChunkPool emptyChunksToFree;
2019 AutoLockGC gcLock(gc);
2020 emptyChunksToFree = gc->expireEmptyChunkPool(gcLock);
2023 FreeChunkPool(emptyChunksToFree);
2026 AutoLockGC gcLock(gc);
2028 // To help minimize the total number of chunks needed over time, sort the
2029 // available chunks list so that we allocate into more-used chunks first.
2030 gc->availableChunks(gcLock).sort();
2032 if (DecommitEnabled()) {
2033 gc->decommitEmptyChunks(cancel_, gcLock);
2034 gc->decommitFreeArenas(cancel_, gcLock);
2039 gc->maybeRequestGCAfterBackgroundTask(lock);
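// A whole chunk can be decommitted only if nothing is allocated in it and at
// least one of its arenas is still committed, i.e. there is something left to
// decommit.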
2042 static inline bool CanDecommitWholeChunk(TenuredChunk* chunk) {
2043 return chunk->unused() && chunk->info.numArenasFreeCommitted != 0;
2046 // Called from a background thread to decommit unused empty chunks. Temporarily
2047 // releases the GC lock while decommitting.
2048 void GCRuntime::decommitEmptyChunks(const bool& cancel, AutoLockGC& lock) {
2049 Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
2050 for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next()) {
2051 if (CanDecommitWholeChunk(chunk) && !chunksToDecommit.append(chunk)) {
2052 onOutOfMallocMemory(lock);
2053 return;
2057 for (TenuredChunk* chunk : chunksToDecommit) {
2058 if (cancel) {
2059 break;
2062 // Check whether something used the chunk while lock was released.
2063 if (!CanDecommitWholeChunk(chunk)) {
2064 continue;
2067 // Temporarily remove the chunk while decommitting its memory so that the
2068 // mutator doesn't start allocating from it when we drop the lock.
2069 emptyChunks(lock).remove(chunk);
2072 AutoUnlockGC unlock(lock);
2073 chunk->decommitAllArenas();
2074 MOZ_ASSERT(chunk->info.numArenasFreeCommitted == 0);
2077 emptyChunks(lock).push(chunk);
2081 // Called from a background thread to decommit free arenas. Releases the GC
2082 // lock.
2083 void GCRuntime::decommitFreeArenas(const bool& cancel, AutoLockGC& lock) {
2084 MOZ_ASSERT(DecommitEnabled());
2086 // Since we release the GC lock while doing the decommit syscall below,
2087 // it is dangerous to iterate the available list directly, as the active
2088 // thread could modify it concurrently. Instead, we build and pass an
2089 // explicit Vector containing the Chunks we want to visit.
2090 Vector<TenuredChunk*, 0, SystemAllocPolicy> chunksToDecommit;
2091 for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
2092 chunk.next()) {
2093 if (chunk->info.numArenasFreeCommitted != 0 &&
2094 !chunksToDecommit.append(chunk)) {
2095 onOutOfMallocMemory(lock);
2096 return;
2100 for (TenuredChunk* chunk : chunksToDecommit) {
2101 chunk->decommitFreeArenas(this, cancel, lock);
2105 // Do all possible decommit immediately from the current thread without
2106 // releasing the GC lock or allocating any memory.
2107 void GCRuntime::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
2108 MOZ_ASSERT(DecommitEnabled());
2109 for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
2110 chunk.next()) {
2111 chunk->decommitFreeArenasWithoutUnlocking(lock);
2113 MOZ_ASSERT(availableChunks(lock).verify());
2116 void GCRuntime::maybeRequestGCAfterBackgroundTask(
2117 const AutoLockHelperThreadState& lock) {
2118 if (requestSliceAfterBackgroundTask) {
2119 // Trigger a slice so the main thread can continue the collection
2120 // immediately.
2121 requestSliceAfterBackgroundTask = false;
2122 requestMajorGC(JS::GCReason::BG_TASK_FINISHED);
2126 void GCRuntime::cancelRequestedGCAfterBackgroundTask() {
2127 MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
2129 #ifdef DEBUG
2131 AutoLockHelperThreadState lock;
2132 MOZ_ASSERT(!requestSliceAfterBackgroundTask);
2134 #endif
2136 majorGCTriggerReason.compareExchange(JS::GCReason::BG_TASK_FINISHED,
2137 JS::GCReason::NO_REASON);
2140 bool GCRuntime::isWaitingOnBackgroundTask() const {
2141 AutoLockHelperThreadState lock;
2142 return requestSliceAfterBackgroundTask;
2145 void GCRuntime::queueUnusedLifoBlocksForFree(LifoAlloc* lifo) {
2146 MOZ_ASSERT(JS::RuntimeHeapIsBusy());
2147 AutoLockHelperThreadState lock;
2148 lifoBlocksToFree.ref().transferUnusedFrom(lifo);
2151 void GCRuntime::queueAllLifoBlocksForFreeAfterMinorGC(LifoAlloc* lifo) {
2152 lifoBlocksToFreeAfterFullMinorGC.ref().transferFrom(lifo);
2155 void GCRuntime::queueBuffersForFreeAfterMinorGC(Nursery::BufferSet& buffers) {
2156 AutoLockHelperThreadState lock;
2158 if (!buffersToFreeAfterMinorGC.ref().empty()) {
2159 // In the rare case that the free task hasn't yet processed the buffers from a
2160 // previous minor GC, we have to wait for it here.
2161 MOZ_ASSERT(!freeTask.isIdle(lock));
2162 freeTask.joinWithLockHeld(lock);
2165 MOZ_ASSERT(buffersToFreeAfterMinorGC.ref().empty());
2166 std::swap(buffersToFreeAfterMinorGC.ref(), buffers);
2169 void Realm::destroy(JS::GCContext* gcx) {
2170 JSRuntime* rt = gcx->runtime();
2171 if (auto callback = rt->destroyRealmCallback) {
2172 callback(gcx, this);
2174 if (principals()) {
2175 JS_DropPrincipals(rt->mainContextFromOwnThread(), principals());
2177 // Bug 1560019: Malloc memory associated with a zone but not with a specific
2178 // GC thing is not currently tracked.
2179 gcx->deleteUntracked(this);
2182 void Compartment::destroy(JS::GCContext* gcx) {
2183 JSRuntime* rt = gcx->runtime();
2184 if (auto callback = rt->destroyCompartmentCallback) {
2185 callback(gcx, this);
2187 // Bug 1560019: Malloc memory associated with a zone but not with a specific
2188 // GC thing is not currently tracked.
2189 gcx->deleteUntracked(this);
2190 rt->gc.stats().sweptCompartment();
2193 void Zone::destroy(JS::GCContext* gcx) {
2194 MOZ_ASSERT(compartments().empty());
2195 JSRuntime* rt = gcx->runtime();
2196 if (auto callback = rt->destroyZoneCallback) {
2197 callback(gcx, this);
2199 // Bug 1560019: Malloc memory associated with a zone but not with a specific
2200 // GC thing is not currently tracked.
2201 gcx->deleteUntracked(this);
2202 gcx->runtime()->gc.stats().sweptZone();
2206 * It's simpler if we preserve the invariant that every zone (except atoms
2207 * zones) has at least one compartment, and every compartment has at least one
2208 * realm. If we know we're deleting the entire zone, then sweepCompartments is
2209 * allowed to delete all compartments. In this case, |keepAtleastOne| is false.
2210 * If any cells remain alive in the zone, set |keepAtleastOne| true to prohibit
2211 * sweepCompartments from deleting every compartment. Instead, it preserves an
2212 * arbitrary compartment in the zone.
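 *
 * Note that the loop below filters the compartments vector in place: |read|
 * visits every compartment while |write| keeps only the survivors, and the
 * vector is then shrunk to the surviving prefix.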
2214 void Zone::sweepCompartments(JS::GCContext* gcx, bool keepAtleastOne,
2215 bool destroyingRuntime) {
2216 MOZ_ASSERT_IF(!isAtomsZone(), !compartments().empty());
2217 MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);
2219 Compartment** read = compartments().begin();
2220 Compartment** end = compartments().end();
2221 Compartment** write = read;
2222 while (read < end) {
2223 Compartment* comp = *read++;
2226 * Don't delete the last compartment and realm if keepAtleastOne is
2227 * still true, meaning all the other compartments were deleted.
2229 bool keepAtleastOneRealm = read == end && keepAtleastOne;
2230 comp->sweepRealms(gcx, keepAtleastOneRealm, destroyingRuntime);
2232 if (!comp->realms().empty()) {
2233 *write++ = comp;
2234 keepAtleastOne = false;
2235 } else {
2236 comp->destroy(gcx);
2239 compartments().shrinkTo(write - compartments().begin());
2240 MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
2241 MOZ_ASSERT_IF(destroyingRuntime, compartments().empty());
2244 void Compartment::sweepRealms(JS::GCContext* gcx, bool keepAtleastOne,
2245 bool destroyingRuntime) {
2246 MOZ_ASSERT(!realms().empty());
2247 MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);
2249 Realm** read = realms().begin();
2250 Realm** end = realms().end();
2251 Realm** write = read;
2252 while (read < end) {
2253 Realm* realm = *read++;
2256 * Don't delete the last realm if keepAtleastOne is still true, meaning
2257 * all the other realms were deleted.
2259 bool dontDelete = read == end && keepAtleastOne;
2260 if ((realm->marked() || dontDelete) && !destroyingRuntime) {
2261 *write++ = realm;
2262 keepAtleastOne = false;
2263 } else {
2264 realm->destroy(gcx);
2267 realms().shrinkTo(write - realms().begin());
2268 MOZ_ASSERT_IF(keepAtleastOne, !realms().empty());
2269 MOZ_ASSERT_IF(destroyingRuntime, realms().empty());
2272 void GCRuntime::sweepZones(JS::GCContext* gcx, bool destroyingRuntime) {
2273 MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);
2274 MOZ_ASSERT(foregroundFinalizedArenas.ref().isNothing());
2276 if (numActiveZoneIters) {
2277 return;
2280 assertBackgroundSweepingFinished();
2282 // Sweep zones following the atoms zone.
2283 MOZ_ASSERT(zones()[0]->isAtomsZone());
2284 Zone** read = zones().begin() + 1;
2285 Zone** end = zones().end();
2286 Zone** write = read;
2288 while (read < end) {
2289 Zone* zone = *read++;
2291 if (zone->wasGCStarted()) {
2292 MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
2293 AutoSetThreadIsSweeping threadIsSweeping(zone);
2294 const bool zoneIsDead =
2295 zone->arenas.arenaListsAreEmpty() && !zone->hasMarkedRealms();
2296 MOZ_ASSERT_IF(destroyingRuntime, zoneIsDead);
2297 if (zoneIsDead) {
2298 zone->arenas.checkEmptyFreeLists();
2299 zone->sweepCompartments(gcx, false, destroyingRuntime);
2300 MOZ_ASSERT(zone->compartments().empty());
2301 zone->destroy(gcx);
2302 continue;
2304 zone->sweepCompartments(gcx, true, destroyingRuntime);
2306 *write++ = zone;
2308 zones().shrinkTo(write - zones().begin());
2311 void ArenaLists::checkEmptyArenaList(AllocKind kind) {
2312 MOZ_ASSERT(arenaList(kind).isEmpty());
2315 void GCRuntime::purgeRuntimeForMinorGC() {
2316 for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
2317 zone->externalStringCache().purge();
2318 zone->functionToStringCache().purge();
2322 void GCRuntime::purgeRuntime() {
2323 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);
2325 for (GCRealmsIter realm(rt); !realm.done(); realm.next()) {
2326 realm->purge();
2329 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2330 zone->purgeAtomCache();
2331 zone->externalStringCache().purge();
2332 zone->functionToStringCache().purge();
2333 zone->boundPrefixCache().clearAndCompact();
2334 zone->shapeZone().purgeShapeCaches(rt->gcContext());
2337 JSContext* cx = rt->mainContextFromOwnThread();
2338 queueUnusedLifoBlocksForFree(&cx->tempLifoAlloc());
2339 cx->interpreterStack().purge(rt);
2340 cx->frontendCollectionPool().purge();
2342 rt->caches().purge();
2344 if (rt->isMainRuntime()) {
2345 SharedImmutableStringsCache::getSingleton().purge();
2348 MOZ_ASSERT(marker().unmarkGrayStack.empty());
2349 marker().unmarkGrayStack.clearAndFree();
2352 bool GCRuntime::shouldPreserveJITCode(Realm* realm,
2353 const TimeStamp& currentTime,
2354 JS::GCReason reason,
2355 bool canAllocateMoreCode,
2356 bool isActiveCompartment) {
2357 if (cleanUpEverything) {
2358 return false;
2360 if (!canAllocateMoreCode) {
2361 return false;
2364 if (isActiveCompartment) {
2365 return true;
2367 if (alwaysPreserveCode) {
2368 return true;
2370 if (realm->preserveJitCode()) {
2371 return true;
2373 if (IsCurrentlyAnimating(realm->lastAnimationTime, currentTime) &&
2374 DiscardedCodeRecently(realm->zone(), currentTime)) {
2375 return true;
2377 if (reason == JS::GCReason::DEBUG_GC) {
2378 return true;
2381 return false;
2384 #ifdef DEBUG
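// Debug-only tracer used by checkForCompartmentMismatches() below. For each
// edge it checks that the target cell is in the expected compartment (or zone,
// for things without a compartment), allowing cross-compartment edges that are
// recorded in the wrapper map or in a debugger weakmap.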
2385 class CompartmentCheckTracer final : public JS::CallbackTracer {
2386 void onChild(JS::GCCellPtr thing, const char* name) override;
2387 bool edgeIsInCrossCompartmentMap(JS::GCCellPtr dst);
2389 public:
2390 explicit CompartmentCheckTracer(JSRuntime* rt)
2391 : JS::CallbackTracer(rt, JS::TracerKind::CompartmentCheck,
2392 JS::WeakEdgeTraceAction::Skip) {}
2394 Cell* src = nullptr;
2395 JS::TraceKind srcKind = JS::TraceKind::Null;
2396 Zone* zone = nullptr;
2397 Compartment* compartment = nullptr;
2400 static bool InCrossCompartmentMap(JSRuntime* rt, JSObject* src,
2401 JS::GCCellPtr dst) {
2402 // Cross compartment edges are either in the cross compartment map or in a
2403 // debugger weakmap.
2405 Compartment* srccomp = src->compartment();
2407 if (dst.is<JSObject>()) {
2408 if (ObjectWrapperMap::Ptr p = srccomp->lookupWrapper(&dst.as<JSObject>())) {
2409 if (*p->value().unsafeGet() == src) {
2410 return true;
2415 if (DebugAPI::edgeIsInDebuggerWeakmap(rt, src, dst)) {
2416 return true;
2419 return false;
2422 void CompartmentCheckTracer::onChild(JS::GCCellPtr thing, const char* name) {
2423 Compartment* comp =
2424 MapGCThingTyped(thing, [](auto t) { return t->maybeCompartment(); });
2425 if (comp && compartment) {
2426 MOZ_ASSERT(comp == compartment || edgeIsInCrossCompartmentMap(thing));
2427 } else {
2428 TenuredCell* tenured = &thing.asCell()->asTenured();
2429 Zone* thingZone = tenured->zoneFromAnyThread();
2430 MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
2434 bool CompartmentCheckTracer::edgeIsInCrossCompartmentMap(JS::GCCellPtr dst) {
2435 return srcKind == JS::TraceKind::Object &&
2436 InCrossCompartmentMap(runtime(), static_cast<JSObject*>(src), dst);
2439 void GCRuntime::checkForCompartmentMismatches() {
2440 JSContext* cx = rt->mainContextFromOwnThread();
2441 if (cx->disableStrictProxyCheckingCount) {
2442 return;
2445 CompartmentCheckTracer trc(rt);
2446 AutoAssertEmptyNursery empty(cx);
2447 for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
2448 trc.zone = zone;
2449 for (auto thingKind : AllAllocKinds()) {
2450 for (auto i = zone->cellIterUnsafe<TenuredCell>(thingKind, empty);
2451 !i.done(); i.next()) {
2452 trc.src = i.getCell();
2453 trc.srcKind = MapAllocToTraceKind(thingKind);
2454 trc.compartment = MapGCThingTyped(
2455 trc.src, trc.srcKind, [](auto t) { return t->maybeCompartment(); });
2456 JS::TraceChildren(&trc, JS::GCCellPtr(trc.src, trc.srcKind));
2461 #endif
2463 static bool ShouldCleanUpEverything(JS::GCOptions options) {
2464 // During shutdown, we must clean everything up, for the sake of leak
2465 // detection. When a runtime has no contexts, or we're doing a GC before a
2466 // shutdown CC, those are strong indications that we're shutting down.
2467 return options == JS::GCOptions::Shutdown || options == JS::GCOptions::Shrink;
2470 static bool ShouldUseBackgroundThreads(bool isIncremental,
2471 JS::GCReason reason) {
2472 bool shouldUse = isIncremental && CanUseExtraThreads();
2473 MOZ_ASSERT_IF(reason == JS::GCReason::DESTROY_RUNTIME, !shouldUse);
2474 return shouldUse;
2477 void GCRuntime::startCollection(JS::GCReason reason) {
2478 checkGCStateNotInUse();
2479 MOZ_ASSERT_IF(
2480 isShuttingDown(),
2481 isShutdownGC() ||
2482 reason == JS::GCReason::XPCONNECT_SHUTDOWN /* Bug 1650075 */);
2484 initialReason = reason;
2485 cleanUpEverything = ShouldCleanUpEverything(gcOptions());
2486 isCompacting = shouldCompact();
2487 rootsRemoved = false;
2488 sweepGroupIndex = 0;
2489 lastGCStartTime_ = TimeStamp::Now();
2491 #ifdef DEBUG
2492 if (isShutdownGC()) {
2493 hadShutdownGC = true;
2496 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
2497 zone->gcSweepGroupIndex = 0;
2499 #endif
2502 static void RelazifyFunctions(Zone* zone, AllocKind kind) {
2503 MOZ_ASSERT(kind == AllocKind::FUNCTION ||
2504 kind == AllocKind::FUNCTION_EXTENDED);
2506 JSRuntime* rt = zone->runtimeFromMainThread();
2507 AutoAssertEmptyNursery empty(rt->mainContextFromOwnThread());
2509 for (auto i = zone->cellIterUnsafe<JSObject>(kind, empty); !i.done();
2510 i.next()) {
2511 JSFunction* fun = &i->as<JSFunction>();
2512 // When iterating over the GC-heap, we may encounter function objects that
2513 // are incomplete (missing a BaseScript when we expect one). We must check
2514 // for this case before we can call JSFunction::hasBytecode().
2515 if (fun->isIncomplete()) {
2516 continue;
2518 if (fun->hasBytecode()) {
2519 fun->maybeRelazify(rt);
2524 static bool ShouldCollectZone(Zone* zone, JS::GCReason reason) {
2525 // If we are repeating a GC because we noticed dead compartments haven't
2526 // been collected, then only collect zones containing those compartments.
2527 if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
2528 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
2529 if (comp->gcState.scheduledForDestruction) {
2530 return true;
2534 return false;
2537 // Otherwise we only collect scheduled zones.
2538 return zone->isGCScheduled();
2541 bool GCRuntime::prepareZonesForCollection(JS::GCReason reason,
2542 bool* isFullOut) {
2543 #ifdef DEBUG
2544 /* Assert that zone state is as we expect */
2545 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
2546 MOZ_ASSERT(!zone->isCollecting());
2547 MOZ_ASSERT_IF(!zone->isAtomsZone(), !zone->compartments().empty());
2548 for (auto i : AllAllocKinds()) {
2549 MOZ_ASSERT(zone->arenas.collectingArenaList(i).isEmpty());
2552 #endif
2554 *isFullOut = true;
2555 bool any = false;
2557 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
2558 /* Set up which zones will be collected. */
2559 bool shouldCollect = ShouldCollectZone(zone, reason);
2560 if (shouldCollect) {
2561 any = true;
2562 zone->changeGCState(Zone::NoGC, Zone::Prepare);
2563 } else {
2564 *isFullOut = false;
2567 zone->setWasCollected(shouldCollect);
2570 /* Check that at least one zone is scheduled for collection. */
2571 return any;
2574 void GCRuntime::discardJITCodeForGC() {
2575 size_t nurserySiteResetCount = 0;
2576 size_t pretenuredSiteResetCount = 0;
2578 js::CancelOffThreadIonCompile(rt, JS::Zone::Prepare);
2579 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2580 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);
2582 // We may need to reset allocation sites and discard JIT code to recover if
2583 // we find object lifetimes have changed.
2584 PretenuringZone& pz = zone->pretenuring;
2585 bool resetNurserySites = pz.shouldResetNurseryAllocSites();
2586 bool resetPretenuredSites = pz.shouldResetPretenuredAllocSites();
2588 if (!zone->isPreservingCode()) {
2589 Zone::DiscardOptions options;
2590 options.discardJitScripts = true;
2591 options.resetNurseryAllocSites = resetNurserySites;
2592 options.resetPretenuredAllocSites = resetPretenuredSites;
2593 zone->discardJitCode(rt->gcContext(), options);
2594 } else if (resetNurserySites || resetPretenuredSites) {
2595 zone->resetAllocSitesAndInvalidate(resetNurserySites,
2596 resetPretenuredSites);
2599 if (resetNurserySites) {
2600 nurserySiteResetCount++;
2602 if (resetPretenuredSites) {
2603 pretenuredSiteResetCount++;
2607 if (nursery().reportPretenuring()) {
2608 if (nurserySiteResetCount) {
2609 fprintf(
2610 stderr,
2611 "GC reset nursery alloc sites and invalidated code in %zu zones\n",
2612 nurserySiteResetCount);
2614 if (pretenuredSiteResetCount) {
2615 fprintf(
2616 stderr,
2617 "GC reset pretenured alloc sites and invalidated code in %zu zones\n",
2618 pretenuredSiteResetCount);
2623 void GCRuntime::relazifyFunctionsForShrinkingGC() {
2624 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
2625 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2626 RelazifyFunctions(zone, AllocKind::FUNCTION);
2627 RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
2631 void GCRuntime::purgePropMapTablesForShrinkingGC() {
2632 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_PROP_MAP_TABLES);
2633 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2634 if (!canRelocateZone(zone) || zone->keepPropMapTables()) {
2635 continue;
2638 // Note: CompactPropMaps never have a table.
2639 for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
2640 map.next()) {
2641 if (map->asLinked()->hasTable()) {
2642 map->asLinked()->purgeTable(rt->gcContext());
2645 for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
2646 map.next()) {
2647 if (map->asLinked()->hasTable()) {
2648 map->asLinked()->purgeTable(rt->gcContext());
2654 // The debugger keeps track of the URLs for the sources of each realm's scripts.
2655 // These URLs are purged on shrinking GCs.
2656 void GCRuntime::purgeSourceURLsForShrinkingGC() {
2657 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_SOURCE_URLS);
2658 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2659 // URLs are not tracked for realms in the system zone.
2660 if (!canRelocateZone(zone) || zone->isSystemZone()) {
2661 continue;
2663 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
2664 for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
2665 GlobalObject* global = realm.get()->unsafeUnbarrieredMaybeGlobal();
2666 if (global) {
2667 global->clearSourceURLSHolder();
2674 void GCRuntime::unmarkWeakMaps() {
2675 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2676 /* Unmark all weak maps in the zones being collected. */
2677 WeakMapBase::unmarkZone(zone);
2681 bool GCRuntime::beginPreparePhase(JS::GCReason reason, AutoGCSession& session) {
2682 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);
2684 if (!prepareZonesForCollection(reason, &isFull.ref())) {
2685 return false;
2689 * Start a parallel task to clear all mark state for the zones we are
2690 * collecting. This is linear in the size of the heap we are collecting and so
2691 * can be slow. This usually happens concurrently with the mutator; the GC
2692 * proper does not start until the unmarking is complete.
2694 unmarkTask.initZones();
2695 if (useBackgroundThreads) {
2696 unmarkTask.start();
2697 } else {
2698 unmarkTask.runFromMainThread();
2702 * Process any queued source compressions during the start of a major
2703 * GC.
2705 * Bug 1650075: When we start passing GCOptions::Shutdown for
2706 * GCReason::XPCONNECT_SHUTDOWN GCs we can remove the extra check.
2708 if (!isShutdownGC() && reason != JS::GCReason::XPCONNECT_SHUTDOWN) {
2709 StartHandlingCompressionsOnGC(rt);
2712 return true;
2715 BackgroundUnmarkTask::BackgroundUnmarkTask(GCRuntime* gc)
2716 : GCParallelTask(gc, gcstats::PhaseKind::UNMARK) {}
2718 void BackgroundUnmarkTask::initZones() {
2719 MOZ_ASSERT(isIdle());
2720 MOZ_ASSERT(zones.empty());
2721 MOZ_ASSERT(!isCancelled());
2723 // We can't safely iterate the zones vector from another thread so we copy the
2724 // zones to be collected into another vector.
2725 AutoEnterOOMUnsafeRegion oomUnsafe;
2726 for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
2727 if (!zones.append(zone.get())) {
2728 oomUnsafe.crash("BackgroundUnmarkTask::initZones");
2731 zone->arenas.clearFreeLists();
2732 zone->arenas.moveArenasToCollectingLists();
2736 void BackgroundUnmarkTask::run(AutoLockHelperThreadState& helperThreadLock) {
2737 AutoUnlockHelperThreadState unlock(helperThreadLock);
2739 for (Zone* zone : zones) {
2740 for (auto kind : AllAllocKinds()) {
2741 ArenaList& arenas = zone->arenas.collectingArenaList(kind);
2742 for (ArenaListIter arena(arenas.head()); !arena.done(); arena.next()) {
2743 arena->unmarkAll();
2744 if (isCancelled()) {
2745 break;
2751 zones.clear();
2754 void GCRuntime::endPreparePhase(JS::GCReason reason) {
2755 MOZ_ASSERT(unmarkTask.isIdle());
2757 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2758 zone->setPreservingCode(false);
2761 // Discard JIT code more aggressively if the process is approaching its
2762 // executable code limit.
2763 bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();
2764 auto currentTime = TimeStamp::Now();
2766 Compartment* activeCompartment = nullptr;
2767 jit::JitActivationIterator activation(rt->mainContextFromOwnThread());
2768 if (!activation.done()) {
2769 activeCompartment = activation->compartment();
2772 for (CompartmentsIter c(rt); !c.done(); c.next()) {
2773 c->gcState.scheduledForDestruction = false;
2774 c->gcState.maybeAlive = false;
2775 c->gcState.hasEnteredRealm = false;
2776 if (c->invisibleToDebugger()) {
2777 c->gcState.maybeAlive = true; // Presumed to be a system compartment.
2779 bool isActiveCompartment = c == activeCompartment;
2780 for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
2781 if (r->shouldTraceGlobal() || !r->zone()->isGCScheduled()) {
2782 c->gcState.maybeAlive = true;
2784 if (shouldPreserveJITCode(r, currentTime, reason, canAllocateMoreCode,
2785 isActiveCompartment)) {
2786 r->zone()->setPreservingCode(true);
2788 if (r->hasBeenEnteredIgnoringJit()) {
2789 c->gcState.hasEnteredRealm = true;
2795 * Perform remaining preparation work that must take place in the first true
2796 * GC slice.
2800 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);
2802 AutoLockHelperThreadState helperLock;
2804 /* Clear mark state for WeakMaps in parallel with other work. */
2805 AutoRunParallelTask unmarkWeakMaps(this, &GCRuntime::unmarkWeakMaps,
2806 gcstats::PhaseKind::UNMARK_WEAKMAPS,
2807 GCUse::Unspecified, helperLock);
2809 AutoUnlockHelperThreadState unlock(helperLock);
2811 // Discard JIT code. For incremental collections, the sweep phase may
2812 // also discard JIT code.
2813 discardJITCodeForGC();
2814 haveDiscardedJITCodeThisSlice = true;
2817 * We must purge the runtime at the beginning of an incremental GC. The
2818 * danger if we purge later is that the snapshot invariant of
2819 * incremental GC will be broken, as follows. If some object is
2820 * reachable only through some cache (say the dtoaCache) then it will
2821 * not be part of the snapshot. If we purge after root marking, then
2822 * the mutator could obtain a pointer to the object and start using
2823 * it. This object might never be marked, so a GC hazard would exist.
2825 purgeRuntime();
2828 // This will start background free for lifo blocks queued by purgeRuntime,
2829 // even if there's nothing in the nursery.
2830 collectNurseryFromMajorGC(reason);
2833 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);
2834 // Relazify functions after discarding JIT code (we can't relazify functions
2835 // with JIT code) and before the actual mark phase, so that the current GC
2836 // can collect the JSScripts we're unlinking here. We do this only when
2837 // we're performing a shrinking GC, as too much relazification can cause
2838 // performance issues when we have to reparse the same functions over and
2839 // over.
2840 if (isShrinkingGC()) {
2841 relazifyFunctionsForShrinkingGC();
2842 purgePropMapTablesForShrinkingGC();
2843 purgeSourceURLsForShrinkingGC();
2846 if (isShutdownGC()) {
2847 /* Clear any engine roots that may hold external data live. */
2848 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2849 zone->clearRootsForShutdownGC();
2852 #ifdef DEBUG
2853 testMarkQueue.clear();
2854 queuePos = 0;
2855 #endif
2859 #ifdef DEBUG
2860 if (fullCompartmentChecks) {
2861 checkForCompartmentMismatches();
2863 #endif
2866 AutoUpdateLiveCompartments::AutoUpdateLiveCompartments(GCRuntime* gc) : gc(gc) {
2867 for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
2868 c->gcState.hasMarkedCells = false;
2872 AutoUpdateLiveCompartments::~AutoUpdateLiveCompartments() {
2873 for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
2874 if (c->gcState.hasMarkedCells) {
2875 c->gcState.maybeAlive = true;
2880 Zone::GCState Zone::initialMarkingState() const {
2881 if (isAtomsZone()) {
2882 // Don't delay gray marking in the atoms zone like we do in other zones.
2883 return MarkBlackAndGray;
2886 return MarkBlackOnly;
2889 void GCRuntime::beginMarkPhase(AutoGCSession& session) {
2891 * Mark phase.
2893 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
2895 // This is the slice we actually start collecting. The number can be used to
2896 // check whether a major GC has started so we must not increment it until we
2897 // get here.
2898 incMajorGcNumber();
2900 #ifdef DEBUG
2901 queuePos = 0;
2902 queueMarkColor.reset();
2903 #endif
2905 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
2906 // In an incremental GC, clear the arena free lists to ensure that
2907 // subsequent allocations refill them and end up marking new cells black.
2908 // See arenaAllocatedDuringGC().
2909 zone->arenas.clearFreeLists();
2911 #ifdef JS_GC_ZEAL
2912 if (hasZealMode(ZealMode::YieldBeforeRootMarking)) {
2913 for (auto kind : AllAllocKinds()) {
2914 for (ArenaIter arena(zone, kind); !arena.done(); arena.next()) {
2915 arena->checkNoMarkedCells();
2919 #endif
2921 // Incremental marking barriers are enabled at this point.
2922 zone->changeGCState(Zone::Prepare, zone->initialMarkingState());
2924 // Merge arenas allocated during the prepare phase, then move all arenas to
2925 // the collecting arena lists.
2926 zone->arenas.mergeArenasFromCollectingLists();
2927 zone->arenas.moveArenasToCollectingLists();
2929 for (RealmsInZoneIter realm(zone); !realm.done(); realm.next()) {
2930 realm->clearAllocatedDuringGC();
2934 updateSchedulingStateOnGCStart();
2935 stats().measureInitialHeapSize();
2937 useParallelMarking = SingleThreadedMarking;
2938 if (canMarkInParallel() && initParallelMarking()) {
2939 useParallelMarking = AllowParallelMarking;
2942 MOZ_ASSERT(!hasDelayedMarking());
2943 for (auto& marker : markers) {
2944 marker->start();
2947 if (rt->isBeingDestroyed()) {
2948 checkNoRuntimeRoots(session);
2949 } else {
2950 AutoUpdateLiveCompartments updateLive(this);
2951 marker().setRootMarkingMode(true);
2952 traceRuntimeForMajorGC(marker().tracer(), session);
2953 marker().setRootMarkingMode(false);
2957 void GCRuntime::findDeadCompartments() {
2958 gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::FIND_DEAD_COMPARTMENTS);
2961 * This code ensures that if a compartment is "dead", then it will be
2962 * collected in this GC. A compartment is considered dead if its maybeAlive
2963 * flag is false. The maybeAlive flag is set if:
2965 * (1) the compartment has been entered (set in beginMarkPhase() above)
2966 * (2) the compartment's zone is not being collected (set in
2967 * endPreparePhase() above)
2968 * (3) an object in the compartment was marked during root marking, either
2969 * as a black root or a gray root. This is arranged by
2970 * SetCompartmentHasMarkedCells and AutoUpdateLiveCompartments.
2971 * (4) the compartment has incoming cross-compartment edges from another
2972 * compartment that has maybeAlive set (set by this method).
2973 * (5) the compartment has the invisibleToDebugger flag set, as it is
2974 * presumed to be a system compartment (set in endPreparePhase() above)
2976 * If the maybeAlive is false, then we set the scheduledForDestruction flag.
2977 * At the end of the GC, we look for compartments where
2978 * scheduledForDestruction is true. These are compartments that were somehow
2979 * "revived" during the incremental GC. If any are found, we do a special,
2980 * non-incremental GC of those compartments to try to collect them.
2982 * Compartments can be revived for a variety of reasons, including:
2984 * (1) A dead reflector can be revived by DOM code that still refers to the
2985 * underlying DOM node (see bug 811587).
2986 * (2) JS_TransplantObject iterates over all compartments, live or dead, and
2987 * operates on their objects. This can trigger read barriers and mark
2988 * unreachable objects. See bug 803376 for details on this problem. To
2989 * avoid the problem, we try to avoid allocation and read barriers
2990 * during JS_TransplantObject and the like.
2991 * (3) Read barriers. A compartment may only have weak roots and reading one
2992 * of these will cause the compartment to stay alive even though the GC
2993 * thought it should die. An example of this is Gecko's unprivileged
2994 * junk scope, which is handled by ignoring system compartments (see bug
2995 * 1868437).
2998 // Propagate the maybeAlive flag via cross-compartment edges.
3000 Vector<Compartment*, 0, js::SystemAllocPolicy> workList;
3002 for (CompartmentsIter comp(rt); !comp.done(); comp.next()) {
3003 if (comp->gcState.maybeAlive) {
3004 if (!workList.append(comp)) {
3005 return;
3010 while (!workList.empty()) {
3011 Compartment* comp = workList.popCopy();
3012 for (Compartment::WrappedObjectCompartmentEnum e(comp); !e.empty();
3013 e.popFront()) {
3014 Compartment* dest = e.front();
3015 if (!dest->gcState.maybeAlive) {
3016 dest->gcState.maybeAlive = true;
3017 if (!workList.append(dest)) {
3018 return;
3024 // Set scheduledForDestruction based on maybeAlive.
3026 for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
3027 MOZ_ASSERT(!comp->gcState.scheduledForDestruction);
3028 if (!comp->gcState.maybeAlive) {
3029 comp->gcState.scheduledForDestruction = true;
3034 void GCRuntime::updateSchedulingStateOnGCStart() {
3035 heapSize.updateOnGCStart();
3037 // Update memory counters for the zones we are collecting.
3038 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3039 zone->updateSchedulingStateOnGCStart();
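// Parallel marking is only worthwhile for larger collections: it requires more
// than one marker to be configured and an initial collected-heap size at or
// above tunables.parallelMarkingThresholdBytes().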
3043 inline bool GCRuntime::canMarkInParallel() const {
3044 MOZ_ASSERT(state() >= gc::State::MarkRoots);
3046 #if defined(DEBUG) || defined(JS_OOM_BREAKPOINT)
3047 // OOM testing limits the engine to using a single helper thread.
3048 if (oom::simulator.targetThread() == THREAD_TYPE_GCPARALLEL) {
3049 return false;
3051 #endif
3053 return markers.length() > 1 && stats().initialCollectedBytes() >=
3054 tunables.parallelMarkingThresholdBytes();
3057 bool GCRuntime::initParallelMarking() {
3058 // This is called at the start of collection.
3060 MOZ_ASSERT(canMarkInParallel());
3062 // Reserve/release helper threads for worker runtimes. These are released at
3063 // the end of sweeping. If there are not enough helper threads because
3064 // other runtimes are marking in parallel then parallel marking will not be
3065 // used.
3066 if (!rt->isMainRuntime() && !reserveMarkingThreads(markers.length())) {
3067 return false;
3070 // Allocate stack for parallel markers. The first marker always has stack
3071 // allocated. Other markers have their stack freed in
3072 // GCRuntime::finishCollection.
3073 for (size_t i = 1; i < markers.length(); i++) {
3074 if (!markers[i]->initStack()) {
3075 return false;
3079 return true;
3082 IncrementalProgress GCRuntime::markUntilBudgetExhausted(
3083 SliceBudget& sliceBudget, ParallelMarking allowParallelMarking,
3084 ShouldReportMarkTime reportTime) {
3085 // Run a marking slice and return whether the stack is now empty.
3087 AutoMajorGCProfilerEntry s(this);
3089 if (initialState != State::Mark) {
3090 sliceBudget.forceCheck();
3091 if (sliceBudget.isOverBudget()) {
3092 return NotFinished;
3096 if (processTestMarkQueue() == QueueYielded) {
3097 return NotFinished;
3100 if (allowParallelMarking) {
3101 MOZ_ASSERT(canMarkInParallel());
3102 MOZ_ASSERT(parallelMarkingEnabled);
3103 MOZ_ASSERT(reportTime);
3104 MOZ_ASSERT(!isBackgroundMarking());
3106 ParallelMarker pm(this);
3107 if (!pm.mark(sliceBudget)) {
3108 return NotFinished;
3111 assertNoMarkingWork();
3112 return Finished;
3115 #ifdef DEBUG
3116 AutoSetThreadIsMarking threadIsMarking;
3117 #endif // DEBUG
3119 return marker().markUntilBudgetExhausted(sliceBudget, reportTime)
3120 ? Finished
3121 : NotFinished;
3124 void GCRuntime::drainMarkStack() {
3125 auto unlimited = SliceBudget::unlimited();
3126 MOZ_RELEASE_ASSERT(marker().markUntilBudgetExhausted(unlimited));
3129 #ifdef DEBUG
3131 const GCVector<HeapPtr<JS::Value>, 0, SystemAllocPolicy>&
3132 GCRuntime::getTestMarkQueue() const {
3133 return testMarkQueue.get();
3136 bool GCRuntime::appendTestMarkQueue(const JS::Value& value) {
3137 return testMarkQueue.append(value);
3140 void GCRuntime::clearTestMarkQueue() {
3141 testMarkQueue.clear();
3142 queuePos = 0;
3145 size_t GCRuntime::testMarkQueuePos() const { return queuePos; }
3147 #endif
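// Process the test mark queue, which tests populate via appendTestMarkQueue().
// Entries are either objects to mark or directive strings handled below:
// "yield", "enter-weak-marking-mode", "abort-weak-marking-mode", "drain",
// "set-color-gray", "set-color-black" and "unset-color". For example, a test
// might append an object followed by "yield" to force an incremental yield
// once that object has been marked.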
3149 GCRuntime::MarkQueueProgress GCRuntime::processTestMarkQueue() {
3150 #ifdef DEBUG
3151 if (testMarkQueue.empty()) {
3152 return QueueComplete;
3155 if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
3156 state() != State::Sweep) {
3157 return QueueSuspended;
3160 // If the queue wants to be gray marking, but we've pushed a black object
3161 // since set-color-gray was processed, then we can't switch to gray and must
3162 // again wait until gray marking is possible.
3164 // Remove this code if the restriction against marking gray during black is
3165 // relaxed.
3166 if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
3167 marker().hasBlackEntries()) {
3168 return QueueSuspended;
3171 // If the queue wants to be marking a particular color, switch to that color.
3172 // In any case, restore the mark color to whatever it was when we entered
3173 // this function.
3174 bool willRevertToGray = marker().markColor() == MarkColor::Gray;
3175 AutoSetMarkColor autoRevertColor(
3176 marker(), queueMarkColor.valueOr(marker().markColor()));
3178 // Process the mark queue by taking each object in turn, pushing it onto the
3179 // mark stack, and processing just the top element with processMarkStackTop
3180 // without recursing into reachable objects.
3181 while (queuePos < testMarkQueue.length()) {
3182 Value val = testMarkQueue[queuePos++].get();
3183 if (val.isObject()) {
3184 JSObject* obj = &val.toObject();
3185 JS::Zone* zone = obj->zone();
3186 if (!zone->isGCMarking() || obj->isMarkedAtLeast(marker().markColor())) {
3187 continue;
3190 // If we have started sweeping, obey sweep group ordering. But note that
3191 // we will first be called during the initial sweep slice, when the sweep
3192 // group indexes have not yet been computed. In that case, we can mark
3193 // freely.
3194 if (state() == State::Sweep && initialState != State::Sweep) {
3195 if (zone->gcSweepGroupIndex < getCurrentSweepGroupIndex()) {
3196 // Too late. This must have been added after we started collecting,
3197 // and we've already processed its sweep group. Skip it.
3198 continue;
3200 if (zone->gcSweepGroupIndex > getCurrentSweepGroupIndex()) {
3201 // Not ready yet. Wait until we reach the object's sweep group.
3202 queuePos--;
3203 return QueueSuspended;
3207 if (marker().markColor() == MarkColor::Gray &&
3208 zone->isGCMarkingBlackOnly()) {
3209 // Have not yet reached the point where we can mark this object, so
3210 // continue with the GC.
3211 queuePos--;
3212 return QueueSuspended;
3215 if (marker().markColor() == MarkColor::Black && willRevertToGray) {
3216 // If we put any black objects on the stack, we wouldn't be able to
3217 // return to gray marking. So delay the marking until we're back to
3218 // black marking.
3219 queuePos--;
3220 return QueueSuspended;
3223 // Mark the object.
3224 AutoEnterOOMUnsafeRegion oomUnsafe;
3225 if (!marker().markOneObjectForTest(obj)) {
3226 // If we overflowed the stack here and delayed marking, then we won't be
3227 // testing what we think we're testing.
3228 MOZ_ASSERT(obj->asTenured().arena()->onDelayedMarkingList());
3229 oomUnsafe.crash("Overflowed stack while marking test queue");
3231 } else if (val.isString()) {
3232 JSLinearString* str = &val.toString()->asLinear();
3233 if (js::StringEqualsLiteral(str, "yield") && isIncrementalGc()) {
3234 return QueueYielded;
3237 if (js::StringEqualsLiteral(str, "enter-weak-marking-mode") ||
3238 js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
3239 if (marker().isRegularMarking()) {
3240 // We can't enter weak marking mode at just any time, so instead
3241 // we'll stop processing the queue and continue on with the GC. Once
3242 // we enter weak marking mode, we can continue to the rest of the
3243 // queue. Note that we will also suspend for aborting, and then abort
3244 // the earliest following weak marking mode.
3245 queuePos--;
3246 return QueueSuspended;
3248 if (js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
3249 marker().abortLinearWeakMarking();
3251 } else if (js::StringEqualsLiteral(str, "drain")) {
3252 auto unlimited = SliceBudget::unlimited();
3253 MOZ_RELEASE_ASSERT(
3254 marker().markUntilBudgetExhausted(unlimited, DontReportMarkTime));
3255 } else if (js::StringEqualsLiteral(str, "set-color-gray")) {
3256 queueMarkColor = mozilla::Some(MarkColor::Gray);
3257 if (state() != State::Sweep || marker().hasBlackEntries()) {
3258 // Cannot mark gray yet, so continue with the GC.
3259 queuePos--;
3260 return QueueSuspended;
3262 marker().setMarkColor(MarkColor::Gray);
3263 } else if (js::StringEqualsLiteral(str, "set-color-black")) {
3264 queueMarkColor = mozilla::Some(MarkColor::Black);
3265 marker().setMarkColor(MarkColor::Black);
3266 } else if (js::StringEqualsLiteral(str, "unset-color")) {
3267 queueMarkColor.reset();
3271 #endif
3273 return QueueComplete;
3276 static bool IsEmergencyGC(JS::GCReason reason) {
3277 return reason == JS::GCReason::LAST_DITCH ||
3278 reason == JS::GCReason::MEM_PRESSURE;
3281 void GCRuntime::finishCollection(JS::GCReason reason) {
3282 assertBackgroundSweepingFinished();
3284 MOZ_ASSERT(!hasDelayedMarking());
3285 for (size_t i = 0; i < markers.length(); i++) {
3286 const auto& marker = markers[i];
3287 marker->stop();
3288 if (i == 0) {
3289 marker->resetStackCapacity();
3290 } else {
3291 marker->freeStack();
3295 maybeStopPretenuring();
3297 if (IsEmergencyGC(reason)) {
3298 waitBackgroundFreeEnd();
3301 TimeStamp currentTime = TimeStamp::Now();
3303 updateSchedulingStateAfterCollection(currentTime);
3305 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3306 zone->changeGCState(Zone::Finished, Zone::NoGC);
3307 zone->notifyObservingDebuggers();
3310 #ifdef JS_GC_ZEAL
3311 clearSelectedForMarking();
3312 #endif
3314 schedulingState.updateHighFrequencyMode(lastGCEndTime_, currentTime,
3315 tunables);
3316 lastGCEndTime_ = currentTime;
3318 checkGCStateNotInUse();
3321 void GCRuntime::checkGCStateNotInUse() {
3322 #ifdef DEBUG
3323 for (auto& marker : markers) {
3324 MOZ_ASSERT(!marker->isActive());
3325 MOZ_ASSERT(marker->isDrained());
3327 MOZ_ASSERT(!hasDelayedMarking());
3329 MOZ_ASSERT(!lastMarkSlice);
3331 MOZ_ASSERT(foregroundFinalizedArenas.ref().isNothing());
3333 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
3334 if (zone->wasCollected()) {
3335 zone->arenas.checkGCStateNotInUse();
3337 MOZ_ASSERT(!zone->wasGCStarted());
3338 MOZ_ASSERT(!zone->needsIncrementalBarrier());
3339 MOZ_ASSERT(!zone->isOnList());
3342 MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
3343 MOZ_ASSERT(cellsToAssertNotGray.ref().empty());
3345 AutoLockHelperThreadState lock;
3346 MOZ_ASSERT(!requestSliceAfterBackgroundTask);
3347 MOZ_ASSERT(unmarkTask.isIdle(lock));
3348 MOZ_ASSERT(markTask.isIdle(lock));
3349 MOZ_ASSERT(sweepTask.isIdle(lock));
3350 MOZ_ASSERT(decommitTask.isIdle(lock));
3351 #endif
3354 void GCRuntime::maybeStopPretenuring() {
3355 nursery().maybeStopPretenuring(this);
3357 size_t zonesWhereStringsEnabled = 0;
3358 size_t zonesWhereBigIntsEnabled = 0;
3360 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3361 if (zone->nurseryStringsDisabled || zone->nurseryBigIntsDisabled) {
3362 // We may need to reset allocation sites and discard JIT code to recover
3363 // if we find object lifetimes have changed.
3364 if (zone->pretenuring.shouldResetPretenuredAllocSites()) {
3365 zone->unknownAllocSite(JS::TraceKind::String)->maybeResetState();
3366 zone->unknownAllocSite(JS::TraceKind::BigInt)->maybeResetState();
3367 if (zone->nurseryStringsDisabled) {
3368 zone->nurseryStringsDisabled = false;
3369 zonesWhereStringsEnabled++;
3371 if (zone->nurseryBigIntsDisabled) {
3372 zone->nurseryBigIntsDisabled = false;
3373 zonesWhereBigIntsEnabled++;
3375 nursery().updateAllocFlagsForZone(zone);
3380 if (nursery().reportPretenuring()) {
3381 if (zonesWhereStringsEnabled) {
3382 fprintf(stderr, "GC re-enabled nursery string allocation in %zu zones\n",
3383 zonesWhereStringsEnabled);
3385 if (zonesWhereBigIntsEnabled) {
3386 fprintf(stderr, "GC re-enabled nursery big int allocation in %zu zones\n",
3387 zonesWhereBigIntsEnabled);
3392 void GCRuntime::updateSchedulingStateAfterCollection(TimeStamp currentTime) {
3393 TimeDuration totalGCTime = stats().totalGCTime();
3394 size_t totalInitialBytes = stats().initialCollectedBytes();
3396 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3397 if (tunables.balancedHeapLimitsEnabled() && totalInitialBytes != 0) {
3398 zone->updateCollectionRate(totalGCTime, totalInitialBytes);
3400 zone->clearGCSliceThresholds();
3401 zone->updateGCStartThresholds(*this);
3405 void GCRuntime::updateAllGCStartThresholds() {
3406 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
3407 zone->updateGCStartThresholds(*this);
3411 void GCRuntime::updateAllocationRates() {
3412 // Calculate mutator time since the last update. This ignores the fact that
3413 // a zone could have been created since the last update.
3415 TimeStamp currentTime = TimeStamp::Now();
3416 TimeDuration totalTime = currentTime - lastAllocRateUpdateTime;
3417 if (collectorTimeSinceAllocRateUpdate >= totalTime) {
3418 // It shouldn't happen, but occasionally we see collector time larger than
3419 // total time. Skip the update in that case.
3420 return;
3423 TimeDuration mutatorTime = totalTime - collectorTimeSinceAllocRateUpdate;
3425 for (AllZonesIter zone(this); !zone.done(); zone.next()) {
3426 zone->updateAllocationRate(mutatorTime);
3427 zone->updateGCStartThresholds(*this);
3430 lastAllocRateUpdateTime = currentTime;
3431 collectorTimeSinceAllocRateUpdate = TimeDuration::Zero();
3434 static const char* GCHeapStateToLabel(JS::HeapState heapState) {
3435 switch (heapState) {
3436 case JS::HeapState::MinorCollecting:
3437 return "Minor GC";
3438 case JS::HeapState::MajorCollecting:
3439 return "Major GC";
3440 default:
3441 MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
3443 MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");
3444 return nullptr;
3447 static JS::ProfilingCategoryPair GCHeapStateToProfilingCategory(
3448 JS::HeapState heapState) {
3449 return heapState == JS::HeapState::MinorCollecting
3450 ? JS::ProfilingCategoryPair::GCCC_MinorGC
3451 : JS::ProfilingCategoryPair::GCCC_MajorGC;
3454 /* Start a new heap session. */
3455 AutoHeapSession::AutoHeapSession(GCRuntime* gc, JS::HeapState heapState)
3456 : gc(gc), prevState(gc->heapState_) {
3457 MOZ_ASSERT(CurrentThreadCanAccessRuntime(gc->rt));
3458 MOZ_ASSERT(prevState == JS::HeapState::Idle ||
3459 (prevState == JS::HeapState::MajorCollecting &&
3460 heapState == JS::HeapState::MinorCollecting));
3461 MOZ_ASSERT(heapState != JS::HeapState::Idle);
3463 gc->heapState_ = heapState;
3465 if (heapState == JS::HeapState::MinorCollecting ||
3466 heapState == JS::HeapState::MajorCollecting) {
3467 profilingStackFrame.emplace(
3468 gc->rt->mainContextFromOwnThread(), GCHeapStateToLabel(heapState),
3469 GCHeapStateToProfilingCategory(heapState),
3470 uint32_t(ProfilingStackFrame::Flags::RELEVANT_FOR_JS));
3474 AutoHeapSession::~AutoHeapSession() {
3475 MOZ_ASSERT(JS::RuntimeHeapIsBusy());
3476 gc->heapState_ = prevState;
3479 static const char* MajorGCStateToLabel(State state) {
3480 switch (state) {
3481 case State::Mark:
3482 return "js::GCRuntime::markUntilBudgetExhausted";
3483 case State::Sweep:
3484 return "js::GCRuntime::performSweepActions";
3485 case State::Compact:
3486 return "js::GCRuntime::compactPhase";
3487 default:
3488 MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
3491 MOZ_ASSERT_UNREACHABLE("Should have exhausted every State variant!");
3492 return nullptr;
3495 static JS::ProfilingCategoryPair MajorGCStateToProfilingCategory(State state) {
3496 switch (state) {
3497 case State::Mark:
3498 return JS::ProfilingCategoryPair::GCCC_MajorGC_Mark;
3499 case State::Sweep:
3500 return JS::ProfilingCategoryPair::GCCC_MajorGC_Sweep;
3501 case State::Compact:
3502 return JS::ProfilingCategoryPair::GCCC_MajorGC_Compact;
3503 default:
3504 MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
3508 AutoMajorGCProfilerEntry::AutoMajorGCProfilerEntry(GCRuntime* gc)
3509 : AutoGeckoProfilerEntry(gc->rt->mainContextFromAnyThread(),
3510 MajorGCStateToLabel(gc->state()),
3511 MajorGCStateToProfilingCategory(gc->state())) {
3512 MOZ_ASSERT(gc->heapState() == JS::HeapState::MajorCollecting);
3515 GCRuntime::IncrementalResult GCRuntime::resetIncrementalGC(
3516 GCAbortReason reason) {
3517 MOZ_ASSERT(reason != GCAbortReason::None);
3519 // Drop as much work as possible from an ongoing incremental GC so
3520 // we can start a new GC after it has finished.
3521 if (incrementalState == State::NotActive) {
3522 return IncrementalResult::Ok;
3525 AutoGCSession session(this, JS::HeapState::MajorCollecting);
3527 switch (incrementalState) {
3528 case State::NotActive:
3529 case State::MarkRoots:
3530 case State::Finish:
3531 MOZ_CRASH("Unexpected GC state in resetIncrementalGC");
3532 break;
3534 case State::Prepare:
3535 unmarkTask.cancelAndWait();
3537 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3538 zone->changeGCState(Zone::Prepare, Zone::NoGC);
3539 zone->clearGCSliceThresholds();
3540 zone->arenas.clearFreeLists();
3541 zone->arenas.mergeArenasFromCollectingLists();
3544 incrementalState = State::NotActive;
3545 checkGCStateNotInUse();
3546 break;
3548 case State::Mark: {
3549 // Cancel any ongoing marking.
3550 for (auto& marker : markers) {
3551 marker->reset();
3553 resetDelayedMarking();
3555 for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
3556 resetGrayList(c);
3559 for (GCZonesIter zone(this); !zone.done(); zone.next()) {
3560 zone->changeGCState(zone->initialMarkingState(), Zone::NoGC);
3561 zone->clearGCSliceThresholds();
3562 zone->arenas.unmarkPreMarkedFreeCells();
3563 zone->arenas.mergeArenasFromCollectingLists();
3567 AutoLockHelperThreadState lock;
3568 lifoBlocksToFree.ref().freeAll();
3571 lastMarkSlice = false;
3572 incrementalState = State::Finish;
3574 #ifdef DEBUG
3575 for (auto& marker : markers) {
3576 MOZ_ASSERT(!marker->shouldCheckCompartments());
3578 #endif
3580 break;
3583 case State::Sweep: {
3584 // Finish sweeping the current sweep group, then abort.
3585 for (CompartmentsIter c(rt); !c.done(); c.next()) {
3586 c->gcState.scheduledForDestruction = false;
3589 abortSweepAfterCurrentGroup = true;
3590 isCompacting = false;
3592 break;
3595 case State::Finalize: {
3596 isCompacting = false;
3597 break;
3600 case State::Compact: {
3601 // Skip any remaining zones that would have been compacted.
3602 MOZ_ASSERT(isCompacting);
3603 startedCompacting = true;
3604 zonesToMaybeCompact.ref().clear();
3605 break;
3608 case State::Decommit: {
3609 break;
3613 stats().reset(reason);
3615 return IncrementalResult::ResetIncremental;
3618 AutoDisableBarriers::AutoDisableBarriers(GCRuntime* gc) : gc(gc) {
3620 * Clear needsIncrementalBarrier early so we don't do any write barriers
3621 * during sweeping.
3623 for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
3624 if (zone->isGCMarking()) {
3625 MOZ_ASSERT(zone->needsIncrementalBarrier());
3626 zone->setNeedsIncrementalBarrier(false);
3628 MOZ_ASSERT(!zone->needsIncrementalBarrier());
3632 AutoDisableBarriers::~AutoDisableBarriers() {
3633 for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
3634 MOZ_ASSERT(!zone->needsIncrementalBarrier());
3635 if (zone->isGCMarking()) {
3636 zone->setNeedsIncrementalBarrier(true);
3641 static bool NeedToCollectNursery(GCRuntime* gc) {
3642 return !gc->nursery().isEmpty() || !gc->storeBuffer().isEmpty();
3645 #ifdef DEBUG
3646 static const char* DescribeBudget(const SliceBudget& budget) {
3647 constexpr size_t length = 32;
3648 static char buffer[length];
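// Note (editorial): this returns a pointer to a static buffer, so it is only
// suitable for the DEBUG-only logging that uses it and assumes no concurrent
// callers.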
3649 budget.describe(buffer, length);
3650 return buffer;
3652 #endif
3654 static bool ShouldPauseMutatorWhileWaiting(const SliceBudget& budget,
3655 JS::GCReason reason,
3656 bool budgetWasIncreased) {
3657 // When we're nearing the incremental limit at which we will finish the
3658 // collection synchronously, pause the main thread if there is only background
3659 // GC work happening. This allows the GC to catch up and avoid hitting the
3660 // limit.
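// (Illustrative: only an ALLOC_TRIGGER or TOO_MUCH_MALLOC slice with a time
// budget that was already increased counts as "nearing the limit" here; other
// reasons, or non-time budgets, do not trigger a pause via this heuristic.)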
3661 return budget.isTimeBudget() &&
3662 (reason == JS::GCReason::ALLOC_TRIGGER ||
3663 reason == JS::GCReason::TOO_MUCH_MALLOC) &&
3664 budgetWasIncreased;
3667 void GCRuntime::incrementalSlice(SliceBudget& budget, JS::GCReason reason,
3668 bool budgetWasIncreased) {
3669 MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);
3671 AutoSetThreadIsPerformingGC performingGC(rt->gcContext());
3673 AutoGCSession session(this, JS::HeapState::MajorCollecting);
3675 bool destroyingRuntime = (reason == JS::GCReason::DESTROY_RUNTIME);
3677 initialState = incrementalState;
3678 isIncremental = !budget.isUnlimited();
3679 useBackgroundThreads = ShouldUseBackgroundThreads(isIncremental, reason);
3680 haveDiscardedJITCodeThisSlice = false;
3682 #ifdef JS_GC_ZEAL
3683 // Do the incremental collection type specified by zeal mode if the collection
3684 // was triggered by runDebugGC() and incremental GC has not been cancelled by
3685 // resetIncrementalGC().
3686 useZeal = isIncremental && reason == JS::GCReason::DEBUG_GC;
3687 #endif
3689 #ifdef DEBUG
3690 stats().log(
3691 "Incremental: %d, lastMarkSlice: %d, useZeal: %d, budget: %s, "
3692 "budgetWasIncreased: %d",
3693 bool(isIncremental), bool(lastMarkSlice), bool(useZeal),
3694 DescribeBudget(budget), budgetWasIncreased);
3695 #endif
3697 if (useZeal && hasIncrementalTwoSliceZealMode()) {
3698 // Yielding between slices occurs at predetermined points in these modes; the
3699 // budget is not used. |isIncremental| is still true.
3700 stats().log("Using unlimited budget for two-slice zeal mode");
3701 budget = SliceBudget::unlimited();
3704 bool shouldPauseMutator =
3705 ShouldPauseMutatorWhileWaiting(budget, reason, budgetWasIncreased);
3707 switch (incrementalState) {
3708 case State::NotActive:
3709 startCollection(reason);
3711 incrementalState = State::Prepare;
3712 if (!beginPreparePhase(reason, session)) {
3713 incrementalState = State::NotActive;
3714 break;
3717 if (useZeal && hasZealMode(ZealMode::YieldBeforeRootMarking)) {
3718 break;
3721 [[fallthrough]];
3723 case State::Prepare:
3724 if (waitForBackgroundTask(unmarkTask, budget, shouldPauseMutator,
3725 DontTriggerSliceWhenFinished) == NotFinished) {
3726 break;
3729 incrementalState = State::MarkRoots;
3730 [[fallthrough]];
3732 case State::MarkRoots:
3733 endPreparePhase(reason);
3735 beginMarkPhase(session);
3736 incrementalState = State::Mark;
3738 if (useZeal && hasZealMode(ZealMode::YieldBeforeMarking) &&
3739 isIncremental) {
3740 break;
3743 [[fallthrough]];
3745 case State::Mark:
3746 if (mightSweepInThisSlice(budget.isUnlimited())) {
3747 // Trace wrapper rooters before marking if we might start sweeping in
3748 // this slice.
3749 rt->mainContextFromOwnThread()->traceWrapperGCRooters(
3750 marker().tracer());
3754 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
3755 if (markUntilBudgetExhausted(budget, useParallelMarking) ==
3756 NotFinished) {
3757 break;
3761 assertNoMarkingWork();
3764 * There are a number of reasons why we break out of collection here,
3765 * either to end the slice or to run a new iteration of the loop in
3766 * GCRuntime::collect()
3770 * In incremental GCs where we have already performed more than one
3771 * slice we yield after marking with the aim of starting the sweep in
3772 * the next slice, since the first slice of sweeping can be expensive.
3774 * This is modified by the various zeal modes. We don't yield in
3775 * YieldBeforeMarking mode and we always yield in YieldBeforeSweeping
3776 * mode.
3778 * We will need to mark anything new on the stack when we resume, so
3779 * we stay in Mark state.
3781 if (isIncremental && !lastMarkSlice) {
3782 if ((initialState == State::Mark &&
3783 !(useZeal && hasZealMode(ZealMode::YieldBeforeMarking))) ||
3784 (useZeal && hasZealMode(ZealMode::YieldBeforeSweeping))) {
3785 lastMarkSlice = true;
3786 stats().log("Yielding before starting sweeping");
3787 break;
3791 incrementalState = State::Sweep;
3792 lastMarkSlice = false;
3794 beginSweepPhase(reason, session);
3796 [[fallthrough]];
3798 case State::Sweep:
3799 if (storeBuffer().mayHavePointersToDeadCells()) {
3800 collectNurseryFromMajorGC(reason);
3803 if (initialState == State::Sweep) {
3804 rt->mainContextFromOwnThread()->traceWrapperGCRooters(
3805 marker().tracer());
3808 if (performSweepActions(budget) == NotFinished) {
3809 break;
3812 endSweepPhase(destroyingRuntime);
3814 incrementalState = State::Finalize;
3816 [[fallthrough]];
3818 case State::Finalize:
3819 if (waitForBackgroundTask(sweepTask, budget, shouldPauseMutator,
3820 TriggerSliceWhenFinished) == NotFinished) {
3821 break;
3824 assertBackgroundSweepingFinished();
3827 // Sweep the zones list now that background finalization is finished to
3828 // remove and free dead zones, compartments and realms.
3829 gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP);
3830 gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::DESTROY);
3831 sweepZones(rt->gcContext(), destroyingRuntime);
3834 MOZ_ASSERT(!startedCompacting);
3835 incrementalState = State::Compact;
3837 // Always yield before compacting since it is not incremental.
3838 if (isCompacting && !budget.isUnlimited()) {
3839 break;
3842 [[fallthrough]];
3844 case State::Compact:
3845 if (isCompacting) {
3846 if (NeedToCollectNursery(this)) {
3847 collectNurseryFromMajorGC(reason);
3850 storeBuffer().checkEmpty();
3851 if (!startedCompacting) {
3852 beginCompactPhase();
3855 if (compactPhase(reason, budget, session) == NotFinished) {
3856 break;
3859 endCompactPhase();
3862 startDecommit();
3863 incrementalState = State::Decommit;
3865 [[fallthrough]];
3867 case State::Decommit:
3868 if (waitForBackgroundTask(decommitTask, budget, shouldPauseMutator,
3869 TriggerSliceWhenFinished) == NotFinished) {
3870 break;
3873 incrementalState = State::Finish;
3875 [[fallthrough]];
3877 case State::Finish:
3878 finishCollection(reason);
3879 incrementalState = State::NotActive;
3880 break;
3883 #ifdef DEBUG
3884 MOZ_ASSERT(safeToYield);
3885 for (auto& marker : markers) {
3886 MOZ_ASSERT(marker->markColor() == MarkColor::Black);
3888 MOZ_ASSERT(!rt->gcContext()->hasJitCodeToPoison());
3889 #endif
3892 void GCRuntime::collectNurseryFromMajorGC(JS::GCReason reason) {
3893 collectNursery(gcOptions(), JS::GCReason::EVICT_NURSERY,
3894 gcstats::PhaseKind::EVICT_NURSERY_FOR_MAJOR_GC);
3896 MOZ_ASSERT(nursery().isEmpty());
3897 MOZ_ASSERT(storeBuffer().isEmpty());
3900 bool GCRuntime::hasForegroundWork() const {
3901 switch (incrementalState) {
3902 case State::NotActive:
3903 // Incremental GC is not running and no work is pending.
3904 return false;
3905 case State::Prepare:
3906 // We yield in the Prepare state after starting unmarking.
3907 return !unmarkTask.wasStarted();
3908 case State::Finalize:
3909 // We yield in the Finalize state to wait for background sweeping.
3910 return !isBackgroundSweeping();
3911 case State::Decommit:
3912 // We yield in the Decommit state to wait for background decommit.
3913 return !decommitTask.wasStarted();
3914 default:
3915 // In all other states there is still work to do.
3916 return true;
3920 IncrementalProgress GCRuntime::waitForBackgroundTask(
3921 GCParallelTask& task, const SliceBudget& budget, bool shouldPauseMutator,
3922 ShouldTriggerSliceWhenFinished triggerSlice) {
3923 // Wait here in non-incremental collections, or if we want to pause the
3924 // mutator to let the GC catch up.
3925 if (budget.isUnlimited() || shouldPauseMutator) {
3926 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
3927 Maybe<TimeStamp> deadline;
3928 if (budget.isTimeBudget()) {
3929 deadline.emplace(budget.deadline());
3931 task.join(deadline);
3934 // In incremental collections, yield if the task has not finished and
3935 // optionally request a slice to notify us when this happens.
3936 if (!budget.isUnlimited()) {
3937 AutoLockHelperThreadState lock;
3938 if (task.wasStarted(lock)) {
3939 if (triggerSlice) {
3940 requestSliceAfterBackgroundTask = true;
3942 return NotFinished;
3945 task.joinWithLockHeld(lock);
3948 MOZ_ASSERT(task.isIdle());
3950 if (triggerSlice) {
3951 cancelRequestedGCAfterBackgroundTask();
3954 return Finished;
3957 GCAbortReason gc::IsIncrementalGCUnsafe(JSRuntime* rt) {
3958 MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
3960 if (!rt->gc.isIncrementalGCAllowed()) {
3961 return GCAbortReason::IncrementalDisabled;
3964 return GCAbortReason::None;
3967 inline void GCRuntime::checkZoneIsScheduled(Zone* zone, JS::GCReason reason,
3968 const char* trigger) {
3969 #ifdef DEBUG
3970 if (zone->isGCScheduled()) {
3971 return;
3974 fprintf(stderr,
3975 "checkZoneIsScheduled: Zone %p not scheduled as expected in %s GC "
3976 "for %s trigger\n",
3977 zone, JS::ExplainGCReason(reason), trigger);
3978 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
3979 fprintf(stderr, " Zone %p:%s%s\n", zone.get(),
3980 zone->isAtomsZone() ? " atoms" : "",
3981 zone->isGCScheduled() ? " scheduled" : "");
3983 fflush(stderr);
3984 MOZ_CRASH("Zone not scheduled");
3985 #endif
3988 GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC(
3989 bool nonincrementalByAPI, JS::GCReason reason, SliceBudget& budget) {
3990 if (nonincrementalByAPI) {
3991 stats().nonincremental(GCAbortReason::NonIncrementalRequested);
3992 budget = SliceBudget::unlimited();
3994 // Reset any in progress incremental GC if this was triggered via the
3995 // API. This isn't required for correctness, but sometimes during tests
3996 // the caller expects this GC to collect certain objects, and we need
3997 // to make sure to collect everything possible.
3998 if (reason != JS::GCReason::ALLOC_TRIGGER) {
3999 return resetIncrementalGC(GCAbortReason::NonIncrementalRequested);
4002 return IncrementalResult::Ok;
4005 if (reason == JS::GCReason::ABORT_GC) {
4006 budget = SliceBudget::unlimited();
4007 stats().nonincremental(GCAbortReason::AbortRequested);
4008 return resetIncrementalGC(GCAbortReason::AbortRequested);
4011 if (!budget.isUnlimited()) {
4012 GCAbortReason unsafeReason = IsIncrementalGCUnsafe(rt);
4013 if (unsafeReason == GCAbortReason::None) {
4014 if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
4015 unsafeReason = GCAbortReason::CompartmentRevived;
4016 } else if (!incrementalGCEnabled) {
4017 unsafeReason = GCAbortReason::ModeChange;
4021 if (unsafeReason != GCAbortReason::None) {
4022 budget = SliceBudget::unlimited();
4023 stats().nonincremental(unsafeReason);
4024 return resetIncrementalGC(unsafeReason);
4028 GCAbortReason resetReason = GCAbortReason::None;
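// Check whether any zone has grown past one of its incremental limits. If so,
// make this slice non-incremental; and if that zone has already progressed
// beyond the Sweep state, additionally reset the ongoing incremental GC.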
4029 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4030 if (zone->gcHeapSize.bytes() >=
4031 zone->gcHeapThreshold.incrementalLimitBytes()) {
4032 checkZoneIsScheduled(zone, reason, "GC bytes");
4033 budget = SliceBudget::unlimited();
4034 stats().nonincremental(GCAbortReason::GCBytesTrigger);
4035 if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
4036 resetReason = GCAbortReason::GCBytesTrigger;
4040 if (zone->mallocHeapSize.bytes() >=
4041 zone->mallocHeapThreshold.incrementalLimitBytes()) {
4042 checkZoneIsScheduled(zone, reason, "malloc bytes");
4043 budget = SliceBudget::unlimited();
4044 stats().nonincremental(GCAbortReason::MallocBytesTrigger);
4045 if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
4046 resetReason = GCAbortReason::MallocBytesTrigger;
4050 if (zone->jitHeapSize.bytes() >=
4051 zone->jitHeapThreshold.incrementalLimitBytes()) {
4052 checkZoneIsScheduled(zone, reason, "JIT code bytes");
4053 budget = SliceBudget::unlimited();
4054 stats().nonincremental(GCAbortReason::JitCodeBytesTrigger);
4055 if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
4056 resetReason = GCAbortReason::JitCodeBytesTrigger;
4060 if (isIncrementalGCInProgress() &&
4061 zone->isGCScheduled() != zone->wasGCStarted()) {
4062 budget = SliceBudget::unlimited();
4063 resetReason = GCAbortReason::ZoneChange;
4067 if (resetReason != GCAbortReason::None) {
4068 return resetIncrementalGC(resetReason);
4071 return IncrementalResult::Ok;
4074 bool GCRuntime::maybeIncreaseSliceBudget(SliceBudget& budget) {
4075 if (js::SupportDifferentialTesting()) {
4076 return false;
4079 if (!budget.isTimeBudget() || !isIncrementalGCInProgress()) {
4080 return false;
4083 bool wasIncreasedForLongCollections =
4084 maybeIncreaseSliceBudgetForLongCollections(budget);
4085 bool wasIncreasedForUrgentCollections =
4086 maybeIncreaseSliceBudgetForUrgentCollections(budget);
4088 return wasIncreasedForLongCollections || wasIncreasedForUrgentCollections;
4091 // Return true if the budget is actually extended after rounding.
4092 static bool ExtendBudget(SliceBudget& budget, double newDuration) {
4093 long millis = lround(newDuration);
4094 if (millis <= budget.timeBudget()) {
4095 return false;
4098 bool idleTriggered = budget.idle;
4099 budget = SliceBudget(TimeBudget(millis), nullptr); // Uninterruptible.
4100 budget.idle = idleTriggered;
4101 budget.extended = true;
4102 return true;
4105 bool GCRuntime::maybeIncreaseSliceBudgetForLongCollections(
4106 SliceBudget& budget) {
4107 // For long-running collections, enforce a minimum time budget that increases
4108 // linearly with time up to a maximum.
4110 // All times are in milliseconds.
4111 struct BudgetAtTime {
4112 double time;
4113 double budget;
4115 const BudgetAtTime MinBudgetStart{1500, 0.0};
4116 const BudgetAtTime MinBudgetEnd{2500, 100.0};
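// Illustrative worked example (assuming LinearInterpolate clamps to the
// endpoints): a collection that has been running for 2000 ms, halfway between
// the two points above, gets a minimum budget of 50 ms; below 1500 ms no
// minimum is enforced, and beyond 2500 ms it is capped at 100 ms.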
4118 double totalTime = (TimeStamp::Now() - lastGCStartTime()).ToMilliseconds();
4120 double minBudget =
4121 LinearInterpolate(totalTime, MinBudgetStart.time, MinBudgetStart.budget,
4122 MinBudgetEnd.time, MinBudgetEnd.budget);
4124 return ExtendBudget(budget, minBudget);
4127 bool GCRuntime::maybeIncreaseSliceBudgetForUrgentCollections(
4128 SliceBudget& budget) {
4129 // Enforce a minimum time budget based on how close we are to the incremental
4130 // limit.
4132 size_t minBytesRemaining = SIZE_MAX;
4133 for (AllZonesIter zone(this); !zone.done(); zone.next()) {
4134 if (!zone->wasGCStarted()) {
4135 continue;
4137 size_t gcBytesRemaining =
4138 zone->gcHeapThreshold.incrementalBytesRemaining(zone->gcHeapSize);
4139 minBytesRemaining = std::min(minBytesRemaining, gcBytesRemaining);
4140 size_t mallocBytesRemaining =
4141 zone->mallocHeapThreshold.incrementalBytesRemaining(
4142 zone->mallocHeapSize);
4143 minBytesRemaining = std::min(minBytesRemaining, mallocBytesRemaining);
4146 if (minBytesRemaining < tunables.urgentThresholdBytes() &&
4147 minBytesRemaining != 0) {
4148 // Increase budget based on the reciprocal of the fraction remaining.
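// Illustrative: with half of the urgent threshold remaining
// (fractionRemaining = 0.5) the enforced minimum budget is twice
// defaultSliceBudgetMS(); with a tenth remaining it is ten times that value.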
4149 double fractionRemaining =
4150 double(minBytesRemaining) / double(tunables.urgentThresholdBytes());
4151 double minBudget = double(defaultSliceBudgetMS()) / fractionRemaining;
4152 return ExtendBudget(budget, minBudget);
4155 return false;
4158 static void ScheduleZones(GCRuntime* gc, JS::GCReason reason) {
4159 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4160 // Re-check heap threshold for alloc-triggered zones that were not
4161 // previously collected. Now that we have allocation rate data, the heap limit
4162 // may have been increased beyond the current size.
4163 if (gc->tunables.balancedHeapLimitsEnabled() && zone->isGCScheduled() &&
4164 zone->smoothedCollectionRate.ref().isNothing() &&
4165 reason == JS::GCReason::ALLOC_TRIGGER &&
4166 zone->gcHeapSize.bytes() < zone->gcHeapThreshold.startBytes()) {
4167 zone->unscheduleGC(); // May still be re-scheduled below.
4170 if (gc->isShutdownGC()) {
4171 zone->scheduleGC();
4174 if (!gc->isPerZoneGCEnabled()) {
4175 zone->scheduleGC();
4178 // To avoid resets, continue to collect any zones that were being
4179 // collected in a previous slice.
4180 if (gc->isIncrementalGCInProgress() && zone->wasGCStarted()) {
4181 zone->scheduleGC();
4184 // This is a heuristic to reduce the total number of collections.
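// Zones already past their eager allocation trigger (presumably a threshold
// somewhat below the start threshold) are collected now rather than being
// left to trigger another GC of their own shortly afterwards.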
4185 bool inHighFrequencyMode = gc->schedulingState.inHighFrequencyGCMode();
4186 if (zone->gcHeapSize.bytes() >=
4187 zone->gcHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
4188 zone->mallocHeapSize.bytes() >=
4189 zone->mallocHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) ||
4190 zone->jitHeapSize.bytes() >= zone->jitHeapThreshold.startBytes()) {
4191 zone->scheduleGC();
4196 static void UnscheduleZones(GCRuntime* gc) {
4197 for (ZonesIter zone(gc->rt, WithAtoms); !zone.done(); zone.next()) {
4198 zone->unscheduleGC();
4202 class js::gc::AutoCallGCCallbacks {
4203 GCRuntime& gc_;
4204 JS::GCReason reason_;
4206 public:
4207 explicit AutoCallGCCallbacks(GCRuntime& gc, JS::GCReason reason)
4208 : gc_(gc), reason_(reason) {
4209 gc_.maybeCallGCCallback(JSGC_BEGIN, reason);
4211 ~AutoCallGCCallbacks() { gc_.maybeCallGCCallback(JSGC_END, reason_); }
4214 void GCRuntime::maybeCallGCCallback(JSGCStatus status, JS::GCReason reason) {
4215 if (!gcCallback.ref().op) {
4216 return;
4219 if (isIncrementalGCInProgress()) {
4220 return;
4223 if (gcCallbackDepth == 0) {
4224 // Save scheduled zone information in case the callback clears it.
4225 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4226 zone->gcScheduledSaved_ = zone->gcScheduled_;
4230 // Save and clear GC options and state in case the callback reenters GC.
4231 JS::GCOptions options = gcOptions();
4232 maybeGcOptions = Nothing();
4233 bool savedFullGCRequested = fullGCRequested;
4234 fullGCRequested = false;
4236 gcCallbackDepth++;
4238 callGCCallback(status, reason);
4240 MOZ_ASSERT(gcCallbackDepth != 0);
4241 gcCallbackDepth--;
4243 // Restore the original GC options.
4244 maybeGcOptions = Some(options);
4246 // At the end of a GC, clear out the fullGCRequested state. At the start,
4247 // restore the previous setting.
4248 fullGCRequested = (status == JSGC_END) ? false : savedFullGCRequested;
4250 if (gcCallbackDepth == 0) {
4251 // Ensure any zone that was originally scheduled stays scheduled.
4252 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4253 zone->gcScheduled_ = zone->gcScheduled_ || zone->gcScheduledSaved_;
4259 * We disable inlining to ensure that the bottom of the stack with possible GC
4260 * roots recorded in MarkRuntime excludes any pointers we use during the marking
4261 * implementation.
4263 MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle(
4264 bool nonincrementalByAPI, const SliceBudget& budgetArg,
4265 JS::GCReason reason) {
4266 // Assert if this is a GC unsafe region.
4267 rt->mainContextFromOwnThread()->verifyIsSafeToGC();
4269 // It's ok if threads other than the main thread have suppressGC set, as
4270 // they are operating on zones which will not be collected from here.
4271 MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
4273 // This reason is used internally. See below.
4274 MOZ_ASSERT(reason != JS::GCReason::RESET);
4276 // Background finalization and decommit are finished by definition before we
4277 // can start a new major GC. Background allocation may still be running, but
4278 // that's OK because chunk pools are protected by the GC lock.
4279 if (!isIncrementalGCInProgress()) {
4280 assertBackgroundSweepingFinished();
4281 MOZ_ASSERT(decommitTask.isIdle());
4284 // Note that GC callbacks are allowed to re-enter GC.
4285 AutoCallGCCallbacks callCallbacks(*this, reason);
4287 // Increase the slice budget for long-running collections before it is recorded by
4288 // AutoGCSlice.
4289 SliceBudget budget(budgetArg);
4290 bool budgetWasIncreased = maybeIncreaseSliceBudget(budget);
4292 ScheduleZones(this, reason);
4294 auto updateCollectorTime = MakeScopeExit([&] {
4295 if (const gcstats::Statistics::SliceData* slice = stats().lastSlice()) {
4296 collectorTimeSinceAllocRateUpdate += slice->duration();
4300 gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), gcOptions(), budget,
4301 reason, budgetWasIncreased);
4303 IncrementalResult result =
4304 budgetIncrementalGC(nonincrementalByAPI, reason, budget);
4305 if (result == IncrementalResult::ResetIncremental) {
4306 if (incrementalState == State::NotActive) {
4307 // The collection was reset and has finished.
4308 return result;
4311 // The collection was reset but we must finish up some remaining work.
4312 reason = JS::GCReason::RESET;
4315 majorGCTriggerReason = JS::GCReason::NO_REASON;
4316 MOZ_ASSERT(!stats().hasTrigger());
4318 incGcNumber();
4319 incGcSliceNumber();
4321 gcprobes::MajorGCStart();
4322 incrementalSlice(budget, reason, budgetWasIncreased);
4323 gcprobes::MajorGCEnd();
4325 MOZ_ASSERT_IF(result == IncrementalResult::ResetIncremental,
4326 !isIncrementalGCInProgress());
4327 return result;
4330 inline bool GCRuntime::mightSweepInThisSlice(bool nonIncremental) {
4331 MOZ_ASSERT(incrementalState < State::Sweep);
4332 return nonIncremental || lastMarkSlice || hasIncrementalTwoSliceZealMode();
4335 #ifdef JS_GC_ZEAL
4336 static bool IsDeterministicGCReason(JS::GCReason reason) {
4337 switch (reason) {
4338 case JS::GCReason::API:
4339 case JS::GCReason::DESTROY_RUNTIME:
4340 case JS::GCReason::LAST_DITCH:
4341 case JS::GCReason::TOO_MUCH_MALLOC:
4342 case JS::GCReason::TOO_MUCH_WASM_MEMORY:
4343 case JS::GCReason::TOO_MUCH_JIT_CODE:
4344 case JS::GCReason::ALLOC_TRIGGER:
4345 case JS::GCReason::DEBUG_GC:
4346 case JS::GCReason::CC_FORCED:
4347 case JS::GCReason::SHUTDOWN_CC:
4348 case JS::GCReason::ABORT_GC:
4349 case JS::GCReason::DISABLE_GENERATIONAL_GC:
4350 case JS::GCReason::FINISH_GC:
4351 case JS::GCReason::PREPARE_FOR_TRACING:
4352 return true;
4354 default:
4355 return false;
4358 #endif
4360 gcstats::ZoneGCStats GCRuntime::scanZonesBeforeGC() {
4361 gcstats::ZoneGCStats zoneStats;
4362 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4363 zoneStats.zoneCount++;
4364 zoneStats.compartmentCount += zone->compartments().length();
4365 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
4366 zoneStats.realmCount += comp->realms().length();
4368 if (zone->isGCScheduled()) {
4369 zoneStats.collectedZoneCount++;
4370 zoneStats.collectedCompartmentCount += zone->compartments().length();
4374 return zoneStats;
4377 // The GC can only clean up scheduledForDestruction realms that were marked live
4378 // by a barrier (e.g. by RemapWrappers from a navigation event). It is also
4379 // common to have realms held live because they are part of a cycle in gecko,
4380 // e.g. involving the HTMLDocument wrapper. In this case, we need to run the
4381 // CycleCollector in order to remove these edges before the realm can be freed.
4382 void GCRuntime::maybeDoCycleCollection() {
4383 const static float ExcessiveGrayRealms = 0.8f;
4384 const static size_t LimitGrayRealms = 200;
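// Illustrative: with 100 realms of which 85 have gray globals (a fraction of
// 0.85 > 0.8), or with more than 200 gray realms in total, the cycle
// collection callback below is invoked.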
4386 size_t realmsTotal = 0;
4387 size_t realmsGray = 0;
4388 for (RealmsIter realm(rt); !realm.done(); realm.next()) {
4389 ++realmsTotal;
4390 GlobalObject* global = realm->unsafeUnbarrieredMaybeGlobal();
4391 if (global && global->isMarkedGray()) {
4392 ++realmsGray;
4395 float grayFraction = float(realmsGray) / float(realmsTotal);
4396 if (grayFraction > ExcessiveGrayRealms || realmsGray > LimitGrayRealms) {
4397 callDoCycleCollectionCallback(rt->mainContextFromOwnThread());
4401 void GCRuntime::checkCanCallAPI() {
4402 MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt));
4404 /* If we attempt to invoke the GC while we are running in the GC, assert. */
4405 MOZ_RELEASE_ASSERT(!JS::RuntimeHeapIsBusy());
4408 bool GCRuntime::checkIfGCAllowedInCurrentState(JS::GCReason reason) {
4409 if (rt->mainContextFromOwnThread()->suppressGC) {
4410 return false;
4413 // Only allow shutdown GCs when we're destroying the runtime. This keeps
4414 // the GC callback from triggering a nested GC and resetting global state.
4415 if (rt->isBeingDestroyed() && !isShutdownGC()) {
4416 return false;
4419 #ifdef JS_GC_ZEAL
4420 if (deterministicOnly && !IsDeterministicGCReason(reason)) {
4421 return false;
4423 #endif
4425 return true;
4428 bool GCRuntime::shouldRepeatForDeadZone(JS::GCReason reason) {
4429 MOZ_ASSERT_IF(reason == JS::GCReason::COMPARTMENT_REVIVED, !isIncremental);
4430 MOZ_ASSERT(!isIncrementalGCInProgress());
4432 if (!isIncremental) {
4433 return false;
4436 for (CompartmentsIter c(rt); !c.done(); c.next()) {
4437 if (c->gcState.scheduledForDestruction) {
4438 return true;
4442 return false;
4445 struct MOZ_RAII AutoSetZoneSliceThresholds {
4446 explicit AutoSetZoneSliceThresholds(GCRuntime* gc) : gc(gc) {
4447 // On entry, zones that are already collecting should have a slice threshold
4448 // set.
4449 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4450 MOZ_ASSERT(zone->wasGCStarted() ==
4451 zone->gcHeapThreshold.hasSliceThreshold());
4452 MOZ_ASSERT(zone->wasGCStarted() ==
4453 zone->mallocHeapThreshold.hasSliceThreshold());
4457 ~AutoSetZoneSliceThresholds() {
4458 // On exit, update the thresholds for all collecting zones.
4459 bool waitingOnBGTask = gc->isWaitingOnBackgroundTask();
4460 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4461 if (zone->wasGCStarted()) {
4462 zone->setGCSliceThresholds(*gc, waitingOnBGTask);
4463 } else {
4464 MOZ_ASSERT(!zone->gcHeapThreshold.hasSliceThreshold());
4465 MOZ_ASSERT(!zone->mallocHeapThreshold.hasSliceThreshold());
4470 GCRuntime* gc;
4473 void GCRuntime::collect(bool nonincrementalByAPI, const SliceBudget& budget,
4474 JS::GCReason reason) {
4475 TimeStamp startTime = TimeStamp::Now();
4476 auto timer = MakeScopeExit([&] {
4477 if (Realm* realm = rt->mainContextFromOwnThread()->realm()) {
4478 realm->timers.gcTime += TimeStamp::Now() - startTime;
4482 auto clearGCOptions = MakeScopeExit([&] {
4483 if (!isIncrementalGCInProgress()) {
4484 maybeGcOptions = Nothing();
4488 MOZ_ASSERT(reason != JS::GCReason::NO_REASON);
4490 // Checks run for each request, even if we do not actually GC.
4491 checkCanCallAPI();
4493 // Check if we are allowed to GC at this time before proceeding.
4494 if (!checkIfGCAllowedInCurrentState(reason)) {
4495 return;
4498 stats().log("GC slice starting in state %s", StateName(incrementalState));
4500 AutoStopVerifyingBarriers av(rt, isShutdownGC());
4501 AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());
4502 AutoSetZoneSliceThresholds sliceThresholds(this);
4504 schedulingState.updateHighFrequencyModeForReason(reason);
4506 if (!isIncrementalGCInProgress() && tunables.balancedHeapLimitsEnabled()) {
4507 updateAllocationRates();
4510 bool repeat;
4511 do {
4512 IncrementalResult cycleResult =
4513 gcCycle(nonincrementalByAPI, budget, reason);
4515 if (reason == JS::GCReason::ABORT_GC) {
4516 MOZ_ASSERT(!isIncrementalGCInProgress());
4517 stats().log("GC aborted by request");
4518 break;
4522 * Sometimes when we finish a GC we need to immediately start a new one.
4523 * This happens in the following cases:
4524 * - when we reset the current GC
4525 * - when finalizers drop roots during shutdown
4526 * - when zones that we thought were dead at the start of GC are
4527 * not collected (see the large comment in beginMarkPhase)
4529 repeat = false;
4530 if (!isIncrementalGCInProgress()) {
4531 if (cycleResult == ResetIncremental) {
4532 repeat = true;
4533 } else if (rootsRemoved && isShutdownGC()) {
4534 /* Need to re-schedule all zones for GC. */
4535 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
4536 repeat = true;
4537 reason = JS::GCReason::ROOTS_REMOVED;
4538 } else if (shouldRepeatForDeadZone(reason)) {
4539 repeat = true;
4540 reason = JS::GCReason::COMPARTMENT_REVIVED;
4543 } while (repeat);
4545 if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
4546 maybeDoCycleCollection();
4549 #ifdef JS_GC_ZEAL
4550 if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
4551 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::TRACE_HEAP);
4552 CheckHeapAfterGC(rt);
4554 if (hasZealMode(ZealMode::CheckGrayMarking) && !isIncrementalGCInProgress()) {
4555 MOZ_RELEASE_ASSERT(CheckGrayMarkingState(rt));
4557 #endif
4558 stats().log("GC slice ending in state %s", StateName(incrementalState));
4560 UnscheduleZones(this);
4563 SliceBudget GCRuntime::defaultBudget(JS::GCReason reason, int64_t millis) {
4564 // millis == 0 means use internal GC scheduling logic to come up with
4565 // a duration for the slice budget. This may end up still being zero
4566 // based on preferences.
4567 if (millis == 0) {
4568 millis = defaultSliceBudgetMS();
4571 // If the embedding has registered a callback for creating SliceBudgets,
4572 // then use it.
4573 if (createBudgetCallback) {
4574 return createBudgetCallback(reason, millis);
4577 // Otherwise, the preference can request an unlimited duration slice.
4578 if (millis == 0) {
4579 return SliceBudget::unlimited();
4582 return SliceBudget(TimeBudget(millis));
4585 void GCRuntime::gc(JS::GCOptions options, JS::GCReason reason) {
4586 if (!isIncrementalGCInProgress()) {
4587 setGCOptions(options);
4590 collect(true, SliceBudget::unlimited(), reason);
4593 void GCRuntime::startGC(JS::GCOptions options, JS::GCReason reason,
4594 const js::SliceBudget& budget) {
4595 MOZ_ASSERT(!isIncrementalGCInProgress());
4596 setGCOptions(options);
4598 if (!JS::IsIncrementalGCEnabled(rt->mainContextFromOwnThread())) {
4599 collect(true, SliceBudget::unlimited(), reason);
4600 return;
4603 collect(false, budget, reason);
4606 void GCRuntime::setGCOptions(JS::GCOptions options) {
4607 MOZ_ASSERT(maybeGcOptions == Nothing());
4608 maybeGcOptions = Some(options);
4611 void GCRuntime::gcSlice(JS::GCReason reason, const js::SliceBudget& budget) {
4612 MOZ_ASSERT(isIncrementalGCInProgress());
4613 collect(false, budget, reason);
4616 void GCRuntime::finishGC(JS::GCReason reason) {
4617 MOZ_ASSERT(isIncrementalGCInProgress());
4619 // If this collection was not triggered by running out of memory, skip the
4620 // compacting phase when finishing an ongoing incremental GC
4621 // non-incrementally, to avoid janking the browser.
4622 if (!IsOOMReason(initialReason)) {
4623 if (incrementalState == State::Compact) {
4624 abortGC();
4625 return;
4628 isCompacting = false;
4631 collect(false, SliceBudget::unlimited(), reason);
4634 void GCRuntime::abortGC() {
4635 MOZ_ASSERT(isIncrementalGCInProgress());
4636 checkCanCallAPI();
4637 MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
4639 collect(false, SliceBudget::unlimited(), JS::GCReason::ABORT_GC);
4642 static bool ZonesSelected(GCRuntime* gc) {
4643 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
4644 if (zone->isGCScheduled()) {
4645 return true;
4648 return false;
4651 void GCRuntime::startDebugGC(JS::GCOptions options, const SliceBudget& budget) {
4652 MOZ_ASSERT(!isIncrementalGCInProgress());
4653 setGCOptions(options);
4655 if (!ZonesSelected(this)) {
4656 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
4659 collect(false, budget, JS::GCReason::DEBUG_GC);
4662 void GCRuntime::debugGCSlice(const SliceBudget& budget) {
4663 MOZ_ASSERT(isIncrementalGCInProgress());
4665 if (!ZonesSelected(this)) {
4666 JS::PrepareForIncrementalGC(rt->mainContextFromOwnThread());
4669 collect(false, budget, JS::GCReason::DEBUG_GC);
4672 /* Schedule a full GC unless a zone will already be collected. */
4673 void js::PrepareForDebugGC(JSRuntime* rt) {
4674 if (!ZonesSelected(&rt->gc)) {
4675 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
4679 void GCRuntime::onOutOfMallocMemory() {
4680 // Stop allocating new chunks.
4681 allocTask.cancelAndWait();
4683 // Make sure we release anything queued for release.
4684 decommitTask.join();
4685 nursery().joinDecommitTask();
4687 // Wait for background free of nursery huge slots to finish.
4688 sweepTask.join();
4690 AutoLockGC lock(this);
4691 onOutOfMallocMemory(lock);
4694 void GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock) {
4695 #ifdef DEBUG
4696 // Release any relocated arenas we may be holding on to, without releasing
4697 // the GC lock.
4698 releaseHeldRelocatedArenasWithoutUnlocking(lock);
4699 #endif
4701 // Throw away any excess chunks we have lying around.
4702 freeEmptyChunks(lock);
4704 // Immediately decommit as many arenas as possible in the hopes that this
4705 // might let the OS scrape together enough pages to satisfy the failing
4706 // malloc request.
4707 if (DecommitEnabled()) {
4708 decommitFreeArenasWithoutUnlocking(lock);
4712 void GCRuntime::minorGC(JS::GCReason reason, gcstats::PhaseKind phase) {
4713 MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
4715 MOZ_ASSERT_IF(reason == JS::GCReason::EVICT_NURSERY,
4716 !rt->mainContextFromOwnThread()->suppressGC);
4717 if (rt->mainContextFromOwnThread()->suppressGC) {
4718 return;
4721 incGcNumber();
4723 collectNursery(JS::GCOptions::Normal, reason, phase);
4725 #ifdef JS_GC_ZEAL
4726 if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
4727 gcstats::AutoPhase ap(stats(), phase);
4728 CheckHeapAfterGC(rt);
4730 #endif
4732 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4733 maybeTriggerGCAfterAlloc(zone);
4734 maybeTriggerGCAfterMalloc(zone);
4738 void GCRuntime::collectNursery(JS::GCOptions options, JS::GCReason reason,
4739 gcstats::PhaseKind phase) {
4740 AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());
4742 uint32_t numAllocs = 0;
4743 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
4744 numAllocs += zone->getAndResetTenuredAllocsSinceMinorGC();
4746 stats().setAllocsSinceMinorGCTenured(numAllocs);
4748 gcstats::AutoPhase ap(stats(), phase);
4750 nursery().collect(options, reason);
4752 startBackgroundFreeAfterMinorGC();
4754 // We ignore gcMaxBytes when allocating for minor collection. However, if we
4755 // overflowed, we disable the nursery. The next time we allocate, we'll fail
4756 // because bytes >= gcMaxBytes.
4757 if (heapSize.bytes() >= tunables.gcMaxBytes()) {
4758 if (!nursery().isEmpty()) {
4759 nursery().collect(options, JS::GCReason::DISABLE_GENERATIONAL_GC);
4760 MOZ_ASSERT(nursery().isEmpty());
4761 startBackgroundFreeAfterMinorGC();
4763 nursery().disable();
4767 void GCRuntime::startBackgroundFreeAfterMinorGC() {
4768 // Called after nursery collection. Free whatever blocks are safe to free now.
4770 AutoLockHelperThreadState lock;
4772 lifoBlocksToFree.ref().transferFrom(&lifoBlocksToFreeAfterNextMinorGC.ref());
4774 if (nursery().tenuredEverything) {
4775 lifoBlocksToFree.ref().transferFrom(
4776 &lifoBlocksToFreeAfterFullMinorGC.ref());
4777 } else {
4778 lifoBlocksToFreeAfterNextMinorGC.ref().transferFrom(
4779 &lifoBlocksToFreeAfterFullMinorGC.ref());
4782 if (lifoBlocksToFree.ref().isEmpty() &&
4783 buffersToFreeAfterMinorGC.ref().empty()) {
4784 return;
4787 freeTask.startOrRunIfIdle(lock);
4790 bool GCRuntime::gcIfRequestedImpl(bool eagerOk) {
4791 // This method returns whether a major GC was performed.
4793 if (nursery().minorGCRequested()) {
4794 minorGC(nursery().minorGCTriggerReason());
4797 JS::GCReason reason = wantMajorGC(eagerOk);
4798 if (reason == JS::GCReason::NO_REASON) {
4799 return false;
4802 SliceBudget budget = defaultBudget(reason, 0);
4803 if (!isIncrementalGCInProgress()) {
4804 startGC(JS::GCOptions::Normal, reason, budget);
4805 } else {
4806 gcSlice(reason, budget);
4808 return true;
4811 void js::gc::FinishGC(JSContext* cx, JS::GCReason reason) {
4812 // Calling this when GC is suppressed won't have any effect.
4813 MOZ_ASSERT(!cx->suppressGC);
4815 // GC callbacks may run arbitrary code, including JS. Check this regardless of
4816 // whether we GC for this invocation.
4817 MOZ_ASSERT(cx->isNurseryAllocAllowed());
4819 if (JS::IsIncrementalGCInProgress(cx)) {
4820 JS::PrepareForIncrementalGC(cx);
4821 JS::FinishIncrementalGC(cx, reason);
4825 void js::gc::WaitForBackgroundTasks(JSContext* cx) {
4826 cx->runtime()->gc.waitForBackgroundTasks();
4829 void GCRuntime::waitForBackgroundTasks() {
4830 MOZ_ASSERT(!isIncrementalGCInProgress());
4831 MOZ_ASSERT(sweepTask.isIdle());
4832 MOZ_ASSERT(decommitTask.isIdle());
4833 MOZ_ASSERT(markTask.isIdle());
4835 allocTask.join();
4836 freeTask.join();
4837 nursery().joinDecommitTask();
4840 Realm* js::NewRealm(JSContext* cx, JSPrincipals* principals,
4841 const JS::RealmOptions& options) {
4842 JSRuntime* rt = cx->runtime();
4843 JS_AbortIfWrongThread(cx);
4845 UniquePtr<Zone> zoneHolder;
4846 UniquePtr<Compartment> compHolder;
4848 Compartment* comp = nullptr;
4849 Zone* zone = nullptr;
4850 JS::CompartmentSpecifier compSpec =
4851 options.creationOptions().compartmentSpecifier();
4852 switch (compSpec) {
4853 case JS::CompartmentSpecifier::NewCompartmentInSystemZone:
4854 // systemZone might be null here, in which case we'll make a zone and
4855 // set this field below.
4856 zone = rt->gc.systemZone;
4857 break;
4858 case JS::CompartmentSpecifier::NewCompartmentInExistingZone:
4859 zone = options.creationOptions().zone();
4860 MOZ_ASSERT(zone);
4861 break;
4862 case JS::CompartmentSpecifier::ExistingCompartment:
4863 comp = options.creationOptions().compartment();
4864 zone = comp->zone();
4865 break;
4866 case JS::CompartmentSpecifier::NewCompartmentAndZone:
4867 break;
4870 if (!zone) {
4871 Zone::Kind kind = Zone::NormalZone;
4872 const JSPrincipals* trusted = rt->trustedPrincipals();
4873 if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone ||
4874 (principals && principals == trusted)) {
4875 kind = Zone::SystemZone;
4878 zoneHolder = MakeUnique<Zone>(cx->runtime(), kind);
4879 if (!zoneHolder || !zoneHolder->init()) {
4880 ReportOutOfMemory(cx);
4881 return nullptr;
4884 zone = zoneHolder.get();
4887 bool invisibleToDebugger = options.creationOptions().invisibleToDebugger();
4888 if (comp) {
4889 // Debugger visibility is per-compartment, not per-realm, so make sure the
4890 // new realm's visibility matches its compartment's.
4891 MOZ_ASSERT(comp->invisibleToDebugger() == invisibleToDebugger);
4892 } else {
4893 compHolder = cx->make_unique<JS::Compartment>(zone, invisibleToDebugger);
4894 if (!compHolder) {
4895 return nullptr;
4898 comp = compHolder.get();
4901 UniquePtr<Realm> realm(cx->new_<Realm>(comp, options));
4902 if (!realm) {
4903 return nullptr;
4905 realm->init(cx, principals);
4907 // Make sure we don't put system and non-system realms in the same
4908 // compartment.
4909 if (!compHolder) {
4910 MOZ_RELEASE_ASSERT(realm->isSystem() == IsSystemCompartment(comp));
4913 AutoLockGC lock(rt);
4915 // Reserve space in the Vectors before we start mutating them.
4916 if (!comp->realms().reserve(comp->realms().length() + 1) ||
4917 (compHolder &&
4918 !zone->compartments().reserve(zone->compartments().length() + 1)) ||
4919 (zoneHolder && !rt->gc.zones().reserve(rt->gc.zones().length() + 1))) {
4920 ReportOutOfMemory(cx);
4921 return nullptr;
4924 // After this everything must be infallible.
4926 comp->realms().infallibleAppend(realm.get());
4928 if (compHolder) {
4929 zone->compartments().infallibleAppend(compHolder.release());
4932 if (zoneHolder) {
4933 rt->gc.zones().infallibleAppend(zoneHolder.release());
4935 // Lazily set the runtime's system zone.
4936 if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone) {
4937 MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
4938 MOZ_ASSERT(zone->isSystemZone());
4939 rt->gc.systemZone = zone;
4943 return realm.release();
4946 void GCRuntime::runDebugGC() {
4947 #ifdef JS_GC_ZEAL
4948 if (rt->mainContextFromOwnThread()->suppressGC) {
4949 return;
4952 if (hasZealMode(ZealMode::GenerationalGC)) {
4953 return minorGC(JS::GCReason::DEBUG_GC);
4956 PrepareForDebugGC(rt);
4958 auto budget = SliceBudget::unlimited();
4959 if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
4961 * Start with a small slice limit and double it every slice. This
4962 * ensures that we get multiple slices, and that the collection runs to
4963 * completion.
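* Illustrative: with a zealFrequency of 100 the work budgets are 50, 100,
* 200, ... units on successive slices, resetting to 50 again when the
* collection reaches the sweep and compact phases (see below).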
4965 if (!isIncrementalGCInProgress()) {
4966 zealSliceBudget = zealFrequency / 2;
4967 } else {
4968 zealSliceBudget *= 2;
4970 budget = SliceBudget(WorkBudget(zealSliceBudget));
4972 js::gc::State initialState = incrementalState;
4973 if (!isIncrementalGCInProgress()) {
4974 setGCOptions(JS::GCOptions::Shrink);
4976 collect(false, budget, JS::GCReason::DEBUG_GC);
4978 /* Reset the slice size when we get to the sweep or compact phases. */
4979 if ((initialState == State::Mark && incrementalState == State::Sweep) ||
4980 (initialState == State::Sweep && incrementalState == State::Compact)) {
4981 zealSliceBudget = zealFrequency / 2;
4983 } else if (hasIncrementalTwoSliceZealMode()) {
4984 // These modes trigger incremental GC that happens in two slices and the
4985 // supplied budget is ignored by incrementalSlice.
4986 budget = SliceBudget(WorkBudget(1));
4988 if (!isIncrementalGCInProgress()) {
4989 setGCOptions(JS::GCOptions::Normal);
4991 collect(false, budget, JS::GCReason::DEBUG_GC);
4992 } else if (hasZealMode(ZealMode::Compact)) {
4993 gc(JS::GCOptions::Shrink, JS::GCReason::DEBUG_GC);
4994 } else {
4995 gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
4998 #endif
5001 void GCRuntime::setFullCompartmentChecks(bool enabled) {
5002 MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
5003 fullCompartmentChecks = enabled;
5006 void GCRuntime::notifyRootsRemoved() {
5007 rootsRemoved = true;
5009 #ifdef JS_GC_ZEAL
5010 /* Schedule a GC to happen "soon". */
5011 if (hasZealMode(ZealMode::RootsChange)) {
5012 nextScheduled = 1;
5014 #endif
5017 #ifdef JS_GC_ZEAL
5018 bool GCRuntime::selectForMarking(JSObject* object) {
5019 MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
5020 return selectedForMarking.ref().get().append(object);
5023 void GCRuntime::clearSelectedForMarking() {
5024 selectedForMarking.ref().get().clearAndFree();
5027 void GCRuntime::setDeterministic(bool enabled) {
5028 MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
5029 deterministicOnly = enabled;
5031 #endif
5033 #ifdef DEBUG
5035 AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc() {
5036 TlsContext.get()->disallowNurseryAlloc();
5039 AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc() {
5040 TlsContext.get()->allowNurseryAlloc();
5043 #endif // DEBUG
5045 #ifdef JSGC_HASH_TABLE_CHECKS
5046 void GCRuntime::checkHashTablesAfterMovingGC() {
5048 * Check that internal hash tables no longer have any pointers to things
5049 * that have been moved.
5051 rt->geckoProfiler().checkStringsMapAfterMovingGC();
5052 if (rt->hasJitRuntime() && rt->jitRuntime()->hasInterpreterEntryMap()) {
5053 rt->jitRuntime()->getInterpreterEntryMap()->checkScriptsAfterMovingGC();
5055 for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
5056 zone->checkUniqueIdTableAfterMovingGC();
5057 zone->shapeZone().checkTablesAfterMovingGC();
5058 zone->checkAllCrossCompartmentWrappersAfterMovingGC();
5059 zone->checkScriptMapsAfterMovingGC();
5061 // Note: CompactPropMaps never have a table.
5062 JS::AutoCheckCannotGC nogc;
5063 for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
5064 map.next()) {
5065 if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
5066 table->checkAfterMovingGC();
5069 for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
5070 map.next()) {
5071 if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
5072 table->checkAfterMovingGC();
5077 for (CompartmentsIter c(this); !c.done(); c.next()) {
5078 for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
5079 r->dtoaCache.checkCacheAfterMovingGC();
5080 if (r->debugEnvs()) {
5081 r->debugEnvs()->checkHashTablesAfterMovingGC();
5086 #endif
5088 #ifdef DEBUG
5089 bool GCRuntime::hasZone(Zone* target) {
5090 for (AllZonesIter zone(this); !zone.done(); zone.next()) {
5091 if (zone == target) {
5092 return true;
5095 return false;
5097 #endif
5099 void AutoAssertEmptyNursery::checkCondition(JSContext* cx) {
5100 if (!noAlloc) {
5101 noAlloc.emplace();
5103 this->cx = cx;
5104 MOZ_ASSERT(cx->nursery().isEmpty());
5107 AutoEmptyNursery::AutoEmptyNursery(JSContext* cx) {
5108 MOZ_ASSERT(!cx->suppressGC);
5109 cx->runtime()->gc.stats().suspendPhases();
5110 cx->runtime()->gc.evictNursery(JS::GCReason::EVICT_NURSERY);
5111 cx->runtime()->gc.stats().resumePhases();
5112 checkCondition(cx);
5115 #ifdef DEBUG
5117 namespace js {
5119 // We don't want jsfriendapi.h to depend on GenericPrinter,
5120 // so these functions are declared directly in the cpp.
5122 extern JS_PUBLIC_API void DumpString(JSString* str, js::GenericPrinter& out);
5124 } // namespace js
5126 void js::gc::Cell::dump(js::GenericPrinter& out) const {
5127 switch (getTraceKind()) {
5128 case JS::TraceKind::Object:
5129 reinterpret_cast<const JSObject*>(this)->dump(out);
5130 break;
5132 case JS::TraceKind::String:
5133 js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), out);
5134 break;
5136 case JS::TraceKind::Shape:
5137 reinterpret_cast<const Shape*>(this)->dump(out);
5138 break;
5140 default:
5141 out.printf("%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()),
5142 (void*)this);
5146 // For use in a debugger.
5147 void js::gc::Cell::dump() const {
5148 js::Fprinter out(stderr);
5149 dump(out);
5151 #endif
5153 JS_PUBLIC_API bool js::gc::detail::CanCheckGrayBits(const TenuredCell* cell) {
5154 // We do not check the gray marking state of cells in the following cases:
5156 // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
5158 // 2) When we are in an incremental GC and examine a cell that is in a zone
5159 // that is not being collected. Gray targets of CCWs that are marked black
5160 // by a barrier will eventually be marked black in a later GC slice.
5162 // 3) When mark bits are being cleared concurrently by a helper thread.
5164 MOZ_ASSERT(cell);
5166 auto* runtime = cell->runtimeFromAnyThread();
5167 MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime));
5169 if (!runtime->gc.areGrayBitsValid()) {
5170 return false;
5173 JS::Zone* zone = cell->zone();
5175 if (runtime->gc.isIncrementalGCInProgress() && !zone->wasGCStarted()) {
5176 return false;
5179 return !zone->isGCPreparing();
5182 JS_PUBLIC_API bool js::gc::detail::CellIsMarkedGrayIfKnown(
5183 const TenuredCell* cell) {
5184 MOZ_ASSERT_IF(cell->isPermanentAndMayBeShared(), cell->isMarkedBlack());
5185 if (!cell->isMarkedGray()) {
5186 return false;
5189 return CanCheckGrayBits(cell);
5192 #ifdef DEBUG
5194 JS_PUBLIC_API void js::gc::detail::AssertCellIsNotGray(const Cell* cell) {
5195 if (!cell->isTenured()) {
5196 return;
5199 // Check that a cell is not marked gray.
5201 // Since this is a debug-only check, take account of the eventual mark state
5202 // of cells that will be marked black by the next GC slice in an incremental
5203 // GC. For performance reasons we don't do this in CellIsMarkedGrayIfKnown.
5205 const auto* tc = &cell->asTenured();
5206 if (!tc->isMarkedGray() || !CanCheckGrayBits(tc)) {
5207 return;
5210 // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
5211 // called during GC and while iterating the heap for memory reporting.
5212 MOZ_ASSERT(!JS::RuntimeHeapIsCycleCollecting());
5214 if (tc->zone()->isGCMarkingBlackAndGray()) {
5215 // We are doing gray marking in the cell's zone. Even if the cell is
5216 // currently marked gray it may eventually be marked black. Delay checking
5217 // non-black cells until we finish gray marking.
5219 if (!tc->isMarkedBlack()) {
5220 JSRuntime* rt = tc->zone()->runtimeFromMainThread();
5221 AutoEnterOOMUnsafeRegion oomUnsafe;
5222 if (!rt->gc.cellsToAssertNotGray.ref().append(cell)) {
5223 oomUnsafe.crash("Can't append to delayed gray checks list");
5226 return;
5229 MOZ_ASSERT(!tc->isMarkedGray());
5232 extern JS_PUBLIC_API bool js::gc::detail::ObjectIsMarkedBlack(
5233 const JSObject* obj) {
5234 return obj->isMarkedBlack();
5237 #endif
5239 js::gc::ClearEdgesTracer::ClearEdgesTracer(JSRuntime* rt)
5240 : GenericTracerImpl(rt, JS::TracerKind::ClearEdges,
5241 JS::WeakMapTraceAction::TraceKeysAndValues) {}
5243 template <typename T>
5244 void js::gc::ClearEdgesTracer::onEdge(T** thingp, const char* name) {
5245 // We don't handle removing pointers to nursery edges from the store buffer
5246 // with this tracer. Check that this doesn't happen.
5247 T* thing = *thingp;
5248 MOZ_ASSERT(!IsInsideNursery(thing));
5250 // Fire the pre-barrier since we're removing an edge from the graph.
5251 InternalBarrierMethods<T*>::preBarrier(thing);
5253 *thingp = nullptr;
5256 void GCRuntime::setPerformanceHint(PerformanceHint hint) {
5257 if (hint == PerformanceHint::InPageLoad) {
5258 inPageLoadCount++;
5259 } else {
5260 MOZ_ASSERT(inPageLoadCount);
5261 inPageLoadCount--;