1 /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
2 * vim: set ts=8 sts=4 et sw=4 tw=99:
3 * This Source Code Form is subject to the terms of the Mozilla Public
4 * License, v. 2.0. If a copy of the MPL was not distributed with this
5 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
7 /*
8 * This code implements an incremental mark-and-sweep garbage collector, with
9 * most sweeping carried out in the background on a parallel thread.
11 * Full vs. zone GC
12 * ----------------
14 * The collector can collect all zones at once, or a subset. These types of
15 * collection are referred to as a full GC and a zone GC respectively.
17 * The atoms zone is only collected in a full GC since objects in any zone may
18 * have pointers to atoms, and these are not recorded in the cross compartment
19 * pointer map. Also, the atoms zone is not collected if any thread has an
20 * AutoKeepAtoms instance on the stack, or there are any exclusive threads using
21 * the runtime.
23 * It is possible for an incremental collection that started out as a full GC to
24 * become a zone GC if new zones are created during the course of the
25 * collection.
27 * Incremental collection
28 * ----------------------
30 * For a collection to be carried out incrementally the following conditions
31 * must be met:
32 * - the collection must be run by calling js::GCSlice() rather than js::GC()
33 * - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
34 * JS_SetGCParameter()
35 * - no thread may have an AutoKeepAtoms instance on the stack
36 * - all native objects that have their own trace hook must indicate that they
37 * implement read and write barriers with the JSCLASS_IMPLEMENTS_BARRIERS
38 * flag
40 * The last condition is an engine-internal mechanism to ensure that incremental
41 * collection is not carried out without the correct barriers being implemented.
42 * For more information see 'Incremental marking' below.
44 * If the collection is not incremental, all foreground activity happens inside
45 * a single call to GC() or GCSlice(). However the collection is not complete
46 * until the background sweeping activity has finished.
48 * An incremental collection proceeds as a series of slices, interleaved with
49 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
50 * budget. The slice finishes as soon as possible after the requested time has
51 * passed.
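 *
 * For example, an embedder might drive an incremental collection roughly as
 * follows (an illustrative sketch only; the names follow the description
 * above, and exact signatures vary between engine revisions):
 *
 *     JS_SetGCParameter(rt, JSGC_MODE, JSGC_MODE_INCREMENTAL);
 *     JS_SetGCParameter(rt, JSGC_SLICE_TIME_BUDGET, 10);   // 10 ms per slice
 *     do {
 *         js::GCSlice(rt, GC_NORMAL, JS::gcreason::API);   // run one slice
 *         // ... let the mutator run between slices ...
 *     } while (rt->gc.incrementalState != js::gc::NO_INCREMENTAL);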
53 * Collector states
54 * ----------------
56 * The collector proceeds through the following states, the current state being
 57  * held in GCRuntime::incrementalState:
59 * - MARK_ROOTS - marks the stack and other roots
60 * - MARK - incrementally marks reachable things
61 * - SWEEP - sweeps zones in groups and continues marking unswept zones
63 * The MARK_ROOTS activity always takes place in the first slice. The next two
64 * states can take place over one or more slices.
66 * In other words an incremental collection proceeds like this:
68 * Slice 1: MARK_ROOTS: Roots pushed onto the mark stack.
69 * MARK: The mark stack is processed by popping an element,
70 * marking it, and pushing its children.
72 * ... JS code runs ...
74 * Slice 2: MARK: More mark stack processing.
76 * ... JS code runs ...
78 * Slice n-1: MARK: More mark stack processing.
80 * ... JS code runs ...
82 * Slice n: MARK: Mark stack is completely drained.
83 * SWEEP: Select first group of zones to sweep and sweep them.
85 * ... JS code runs ...
87 * Slice n+1: SWEEP: Mark objects in unswept zones that were newly
88 * identified as alive (see below). Then sweep more zone
89 * groups.
91 * ... JS code runs ...
93 * Slice n+2: SWEEP: Mark objects in unswept zones that were newly
94 * identified as alive. Then sweep more zone groups.
96 * ... JS code runs ...
98 * Slice m: SWEEP: Sweeping is finished, and background sweeping
99 * started on the helper thread.
101 * ... JS code runs, remaining sweeping done on background thread ...
103 * When background sweeping finishes the GC is complete.
105 * Incremental marking
106 * -------------------
108 * Incremental collection requires close collaboration with the mutator (i.e.,
109 * JS code) to guarantee correctness.
111 * - During an incremental GC, if a memory location (except a root) is written
112 * to, then the value it previously held must be marked. Write barriers
113 * ensure this.
115 * - Any object that is allocated during incremental GC must start out marked.
117 * - Roots are marked in the first slice and hence don't need write barriers.
118 * Roots are things like the C stack and the VM stack.
120 * The problem that write barriers solve is that between slices the mutator can
121 * change the object graph. We must ensure that it cannot do this in such a way
122 * that makes us fail to mark a reachable object (marking an unreachable object
123 * is tolerable).
125 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
126 * promise to mark at least everything that is reachable at the beginning of
127 * collection. To implement it we mark the old contents of every non-root memory
128 * location written to by the mutator while the collection is in progress, using
129 * write barriers. This is described in gc/Barrier.h.
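 *
 * Conceptually, the pre-write barrier for an object-valued location looks
 * like the following sketch (illustrative only; the real, templated barrier
 * classes live in gc/Barrier.h, and MarkObjectConceptually is a hypothetical
 * stand-in for the marker entry point):
 *
 *     void preWriteBarrier(JS::Zone* zone, JSObject** slot) {
 *         if (zone->needsIncrementalBarrier() && *slot)
 *             MarkObjectConceptually(*slot);  // mark the value being overwritten
 *     }
 *
 * The mutator runs the barrier first and only then stores the new value.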
131 * Incremental sweeping
132 * --------------------
134 * Sweeping is difficult to do incrementally because object finalizers must be
135 * run at the start of sweeping, before any mutator code runs. The reason is
136 * that some objects use their finalizers to remove themselves from caches. If
137 * mutator code was allowed to run after the start of sweeping, it could observe
138 * the state of the cache and create a new reference to an object that was just
139 * about to be destroyed.
141 * Sweeping all finalizable objects in one go would introduce long pauses, so
 142  * instead sweeping is broken up into groups of zones. Zones which are not yet
143 * being swept are still marked, so the issue above does not apply.
145 * The order of sweeping is restricted by cross compartment pointers - for
146 * example say that object |a| from zone A points to object |b| in zone B and
147 * neither object was marked when we transitioned to the SWEEP phase. Imagine we
148 * sweep B first and then return to the mutator. It's possible that the mutator
149 * could cause |a| to become alive through a read barrier (perhaps it was a
150 * shape that was accessed via a shape table). Then we would need to mark |b|,
151 * which |a| points to, but |b| has already been swept.
153 * So if there is such a pointer then marking of zone B must not finish before
154 * marking of zone A. Pointers which form a cycle between zones therefore
155 * restrict those zones to being swept at the same time, and these are found
156 * using Tarjan's algorithm for finding the strongly connected components of a
157 * graph.
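 *
 * For example, if |a| in zone A points to |b| in zone B but nothing in B
 * points back into A, then B may be swept in the same group as A or in a
 * later one. If B also contains a pointer back into A, the two zones form a
 * strongly connected component and must be swept in the same group.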
159 * GC things without finalizers, and things with finalizers that are able to run
160 * in the background, are swept on the background thread. This accounts for most
161 * of the sweeping work.
163 * Reset
164 * -----
166 * During incremental collection it is possible, although unlikely, for
167 * conditions to change such that incremental collection is no longer safe. In
168 * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
169 * the mark state, this just stops marking, but if we have started sweeping
170 * already, we continue until we have swept the current zone group. Following a
171 * reset, a new non-incremental collection is started.
173 * Compacting GC
174 * -------------
176 * Compacting GC happens at the end of a major GC as part of the last slice.
177 * There are three parts:
179 * - Arenas are selected for compaction.
180 * - The contents of those arenas are moved to new arenas.
181 * - All references to moved things are updated.
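 *
 * For example, if an object is moved out of a compacted arena, every slot,
 * shape or other GC thing that held its old address must be updated to point
 * at the new copy before the mutator runs again.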
184 #include "jsgcinlines.h"
186 #include "mozilla/ArrayUtils.h"
187 #include "mozilla/DebugOnly.h"
188 #include "mozilla/MacroForEach.h"
189 #include "mozilla/MemoryReporting.h"
190 #include "mozilla/Move.h"
192 #include <string.h> /* for memset used when DEBUG */
193 #ifndef XP_WIN
194 # include <unistd.h>
195 #endif
197 #include "jsapi.h"
198 #include "jsatom.h"
199 #include "jscntxt.h"
200 #include "jscompartment.h"
201 #include "jsobj.h"
202 #include "jsprf.h"
203 #include "jsscript.h"
204 #include "jstypes.h"
205 #include "jsutil.h"
206 #include "jswatchpoint.h"
207 #include "jsweakmap.h"
208 #ifdef XP_WIN
209 # include "jswin.h"
210 #endif
211 #include "prmjtime.h"
213 #include "gc/FindSCCs.h"
214 #include "gc/GCInternals.h"
215 #include "gc/GCTrace.h"
216 #include "gc/Marking.h"
217 #include "gc/Memory.h"
218 #include "jit/BaselineJIT.h"
219 #include "jit/IonCode.h"
220 #include "js/SliceBudget.h"
221 #include "vm/Debugger.h"
222 #include "vm/ForkJoin.h"
223 #include "vm/ProxyObject.h"
224 #include "vm/Shape.h"
225 #include "vm/String.h"
226 #include "vm/Symbol.h"
227 #include "vm/TraceLogging.h"
228 #include "vm/WrapperObject.h"
230 #include "jsobjinlines.h"
231 #include "jsscriptinlines.h"
233 #include "vm/Stack-inl.h"
234 #include "vm/String-inl.h"
236 using namespace js;
237 using namespace js::gc;
239 using mozilla::Maybe;
240 using mozilla::Swap;
242 using JS::AutoGCRooter;
244 /* Perform a Full GC every 20 seconds if MaybeGC is called */
245 static const uint64_t GC_IDLE_FULL_SPAN = 20 * 1000 * 1000;
247 /* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
248 static const int IGC_MARK_SLICE_MULTIPLIER = 2;
250 const AllocKind gc::slotsToThingKind[] = {
251 /* 0 */ FINALIZE_OBJECT0, FINALIZE_OBJECT2, FINALIZE_OBJECT2, FINALIZE_OBJECT4,
252 /* 4 */ FINALIZE_OBJECT4, FINALIZE_OBJECT8, FINALIZE_OBJECT8, FINALIZE_OBJECT8,
253 /* 8 */ FINALIZE_OBJECT8, FINALIZE_OBJECT12, FINALIZE_OBJECT12, FINALIZE_OBJECT12,
254 /* 12 */ FINALIZE_OBJECT12, FINALIZE_OBJECT16, FINALIZE_OBJECT16, FINALIZE_OBJECT16,
255 /* 16 */ FINALIZE_OBJECT16
258 static_assert(JS_ARRAY_LENGTH(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
259 "We have defined a slot count for each kind.");
261 // Assert that SortedArenaList::MinThingSize is <= the real minimum thing size.
262 #define CHECK_MIN_THING_SIZE_INNER(x_) \
263 static_assert(x_ >= SortedArenaList::MinThingSize, \
264 #x_ " is less than SortedArenaList::MinThingSize!");
265 #define CHECK_MIN_THING_SIZE(...) { __VA_ARGS__ }; /* Define the array. */ \
266 MOZ_FOR_EACH(CHECK_MIN_THING_SIZE_INNER, (), (__VA_ARGS__ UINT32_MAX))
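// For illustration, CHECK_MIN_THING_SIZE(sizeof(A), sizeof(B),) expands to
// roughly:
//     { sizeof(A), sizeof(B), };  // the array initializer
//     static_assert(sizeof(A) >= SortedArenaList::MinThingSize, ...);
//     static_assert(sizeof(B) >= SortedArenaList::MinThingSize, ...);
//     static_assert(UINT32_MAX >= SortedArenaList::MinThingSize, ...);
// so the same argument list both initializes the array and is checked entry
// by entry (the trailing UINT32_MAX is a sentinel that trivially passes).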
268 const uint32_t Arena::ThingSizes[] = CHECK_MIN_THING_SIZE(
269 sizeof(JSObject), /* FINALIZE_OBJECT0 */
270 sizeof(JSObject), /* FINALIZE_OBJECT0_BACKGROUND */
271 sizeof(JSObject_Slots2), /* FINALIZE_OBJECT2 */
272 sizeof(JSObject_Slots2), /* FINALIZE_OBJECT2_BACKGROUND */
273 sizeof(JSObject_Slots4), /* FINALIZE_OBJECT4 */
274 sizeof(JSObject_Slots4), /* FINALIZE_OBJECT4_BACKGROUND */
275 sizeof(JSObject_Slots8), /* FINALIZE_OBJECT8 */
276 sizeof(JSObject_Slots8), /* FINALIZE_OBJECT8_BACKGROUND */
277 sizeof(JSObject_Slots12), /* FINALIZE_OBJECT12 */
278 sizeof(JSObject_Slots12), /* FINALIZE_OBJECT12_BACKGROUND */
279 sizeof(JSObject_Slots16), /* FINALIZE_OBJECT16 */
280 sizeof(JSObject_Slots16), /* FINALIZE_OBJECT16_BACKGROUND */
281 sizeof(JSScript), /* FINALIZE_SCRIPT */
282 sizeof(LazyScript), /* FINALIZE_LAZY_SCRIPT */
283 sizeof(Shape), /* FINALIZE_SHAPE */
284 sizeof(BaseShape), /* FINALIZE_BASE_SHAPE */
285 sizeof(types::TypeObject), /* FINALIZE_TYPE_OBJECT */
286 sizeof(JSFatInlineString), /* FINALIZE_FAT_INLINE_STRING */
287 sizeof(JSString), /* FINALIZE_STRING */
288 sizeof(JSExternalString), /* FINALIZE_EXTERNAL_STRING */
289 sizeof(JS::Symbol), /* FINALIZE_SYMBOL */
290 sizeof(jit::JitCode), /* FINALIZE_JITCODE */
293 #undef CHECK_MIN_THING_SIZE_INNER
294 #undef CHECK_MIN_THING_SIZE
296 #define OFFSET(type) uint32_t(sizeof(ArenaHeader) + (ArenaSize - sizeof(ArenaHeader)) % sizeof(type))
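// For example (illustrative numbers only): with ArenaSize == 4096, a 40-byte
// ArenaHeader and 32-byte things, OFFSET gives 40 + (4096 - 40) % 32 == 64,
// so the (4096 - 64) / 32 == 126 things end exactly at the arena boundary;
// the left-over space is absorbed before the first thing instead of being
// wasted after the last one.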
298 const uint32_t Arena::FirstThingOffsets[] = {
299 OFFSET(JSObject), /* FINALIZE_OBJECT0 */
300 OFFSET(JSObject), /* FINALIZE_OBJECT0_BACKGROUND */
301 OFFSET(JSObject_Slots2), /* FINALIZE_OBJECT2 */
302 OFFSET(JSObject_Slots2), /* FINALIZE_OBJECT2_BACKGROUND */
303 OFFSET(JSObject_Slots4), /* FINALIZE_OBJECT4 */
304 OFFSET(JSObject_Slots4), /* FINALIZE_OBJECT4_BACKGROUND */
305 OFFSET(JSObject_Slots8), /* FINALIZE_OBJECT8 */
306 OFFSET(JSObject_Slots8), /* FINALIZE_OBJECT8_BACKGROUND */
307 OFFSET(JSObject_Slots12), /* FINALIZE_OBJECT12 */
308 OFFSET(JSObject_Slots12), /* FINALIZE_OBJECT12_BACKGROUND */
309 OFFSET(JSObject_Slots16), /* FINALIZE_OBJECT16 */
310 OFFSET(JSObject_Slots16), /* FINALIZE_OBJECT16_BACKGROUND */
311 OFFSET(JSScript), /* FINALIZE_SCRIPT */
312 OFFSET(LazyScript), /* FINALIZE_LAZY_SCRIPT */
313 OFFSET(Shape), /* FINALIZE_SHAPE */
314 OFFSET(BaseShape), /* FINALIZE_BASE_SHAPE */
315 OFFSET(types::TypeObject), /* FINALIZE_TYPE_OBJECT */
316 OFFSET(JSFatInlineString), /* FINALIZE_FAT_INLINE_STRING */
317 OFFSET(JSString), /* FINALIZE_STRING */
318 OFFSET(JSExternalString), /* FINALIZE_EXTERNAL_STRING */
319 OFFSET(JS::Symbol), /* FINALIZE_SYMBOL */
320 OFFSET(jit::JitCode), /* FINALIZE_JITCODE */
323 #undef OFFSET
325 const char*
326 js::gc::TraceKindAsAscii(JSGCTraceKind kind)
328 switch(kind) {
329 case JSTRACE_OBJECT: return "JSTRACE_OBJECT";
330 case JSTRACE_STRING: return "JSTRACE_STRING";
331 case JSTRACE_SYMBOL: return "JSTRACE_SYMBOL";
332 case JSTRACE_SCRIPT: return "JSTRACE_SCRIPT";
 333       case JSTRACE_LAZY_SCRIPT: return "JSTRACE_LAZY_SCRIPT";
334 case JSTRACE_JITCODE: return "JSTRACE_JITCODE";
335 case JSTRACE_SHAPE: return "JSTRACE_SHAPE";
336 case JSTRACE_BASE_SHAPE: return "JSTRACE_BASE_SHAPE";
337 case JSTRACE_TYPE_OBJECT: return "JSTRACE_TYPE_OBJECT";
338 default: return "INVALID";
343 * Finalization order for incrementally swept things.
346 static const AllocKind FinalizePhaseStrings[] = {
347 FINALIZE_EXTERNAL_STRING
350 static const AllocKind FinalizePhaseScripts[] = {
351 FINALIZE_SCRIPT,
352 FINALIZE_LAZY_SCRIPT
355 static const AllocKind FinalizePhaseJitCode[] = {
356 FINALIZE_JITCODE
359 static const AllocKind * const FinalizePhases[] = {
360 FinalizePhaseStrings,
361 FinalizePhaseScripts,
362 FinalizePhaseJitCode
364 static const int FinalizePhaseCount = sizeof(FinalizePhases) / sizeof(AllocKind*);
366 static const int FinalizePhaseLength[] = {
367 sizeof(FinalizePhaseStrings) / sizeof(AllocKind),
368 sizeof(FinalizePhaseScripts) / sizeof(AllocKind),
369 sizeof(FinalizePhaseJitCode) / sizeof(AllocKind)
372 static const gcstats::Phase FinalizePhaseStatsPhase[] = {
373 gcstats::PHASE_SWEEP_STRING,
374 gcstats::PHASE_SWEEP_SCRIPT,
375 gcstats::PHASE_SWEEP_JITCODE
379 * Finalization order for things swept in the background.
382 static const AllocKind BackgroundPhaseObjects[] = {
383 FINALIZE_OBJECT0_BACKGROUND,
384 FINALIZE_OBJECT2_BACKGROUND,
385 FINALIZE_OBJECT4_BACKGROUND,
386 FINALIZE_OBJECT8_BACKGROUND,
387 FINALIZE_OBJECT12_BACKGROUND,
388 FINALIZE_OBJECT16_BACKGROUND
391 static const AllocKind BackgroundPhaseStringsAndSymbols[] = {
392 FINALIZE_FAT_INLINE_STRING,
393 FINALIZE_STRING,
394 FINALIZE_SYMBOL
397 static const AllocKind BackgroundPhaseShapes[] = {
398 FINALIZE_SHAPE,
399 FINALIZE_BASE_SHAPE,
400 FINALIZE_TYPE_OBJECT
403 static const AllocKind * const BackgroundPhases[] = {
404 BackgroundPhaseObjects,
405 BackgroundPhaseStringsAndSymbols,
406 BackgroundPhaseShapes
408 static const int BackgroundPhaseCount = sizeof(BackgroundPhases) / sizeof(AllocKind*);
410 static const int BackgroundPhaseLength[] = {
411 sizeof(BackgroundPhaseObjects) / sizeof(AllocKind),
412 sizeof(BackgroundPhaseStringsAndSymbols) / sizeof(AllocKind),
413 sizeof(BackgroundPhaseShapes) / sizeof(AllocKind)
416 #ifdef DEBUG
417 void
418 ArenaHeader::checkSynchronizedWithFreeList() const
 421      * Do not allow access to the free list when its real head is still stored
 422      * in FreeLists and is not synchronized with this one.
424 JS_ASSERT(allocated());
 427      * We can be called from the background finalization thread, in which case
 428      * the free list in the zone can mutate at any moment. We cannot do any
 429      * checks in this case.
431 if (IsBackgroundFinalized(getAllocKind()) && zone->runtimeFromAnyThread()->gc.onBackgroundThread())
432 return;
434 FreeSpan firstSpan = firstFreeSpan.decompact(arenaAddress());
435 if (firstSpan.isEmpty())
436 return;
437 const FreeList* freeList = zone->allocator.arenas.getFreeList(getAllocKind());
438 if (freeList->isEmpty() || firstSpan.arenaAddress() != freeList->arenaAddress())
439 return;
442 * Here this arena has free things, FreeList::lists[thingKind] is not
443 * empty and also points to this arena. Thus they must be the same.
445 JS_ASSERT(freeList->isSameNonEmptySpan(firstSpan));
447 #endif
449 /* static */ void
450 Arena::staticAsserts()
452 static_assert(JS_ARRAY_LENGTH(ThingSizes) == FINALIZE_LIMIT, "We have defined all thing sizes.");
453 static_assert(JS_ARRAY_LENGTH(FirstThingOffsets) == FINALIZE_LIMIT, "We have defined all offsets.");
456 void
457 Arena::setAsFullyUnused(AllocKind thingKind)
459 FreeSpan fullSpan;
460 size_t thingSize = Arena::thingSize(thingKind);
461 fullSpan.initFinal(thingsStart(thingKind), thingsEnd() - thingSize, thingSize);
462 aheader.setFirstFreeSpan(&fullSpan);
465 template<typename T>
466 inline size_t
467 Arena::finalize(FreeOp* fop, AllocKind thingKind, size_t thingSize)
469 /* Enforce requirements on size of T. */
470 JS_ASSERT(thingSize % CellSize == 0);
471 JS_ASSERT(thingSize <= 255);
473 JS_ASSERT(aheader.allocated());
474 JS_ASSERT(thingKind == aheader.getAllocKind());
475 JS_ASSERT(thingSize == aheader.getThingSize());
476 JS_ASSERT(!aheader.hasDelayedMarking);
477 JS_ASSERT(!aheader.markOverflow);
478 JS_ASSERT(!aheader.allocatedDuringIncremental);
480 uintptr_t firstThing = thingsStart(thingKind);
481 uintptr_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
482 uintptr_t lastThing = thingsEnd() - thingSize;
484 FreeSpan newListHead;
485 FreeSpan* newListTail = &newListHead;
486 size_t nmarked = 0;
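// Walk the arena's cells in address order: dead cells are finalized and
// poisoned, and each maximal run of dead cells between marked ones is
// recorded as a FreeSpan on the list headed by newListHead.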
488 for (ArenaCellIterUnderFinalize i(&aheader); !i.done(); i.next()) {
489 T* t = i.get<T>();
490 if (t->isMarked()) {
491 uintptr_t thing = reinterpret_cast<uintptr_t>(t);
492 if (thing != firstThingOrSuccessorOfLastMarkedThing) {
493 // We just finished passing over one or more free things,
494 // so record a new FreeSpan.
495 newListTail->initBoundsUnchecked(firstThingOrSuccessorOfLastMarkedThing,
496 thing - thingSize);
497 newListTail = newListTail->nextSpanUnchecked();
499 firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
500 nmarked++;
501 } else {
502 t->finalize(fop);
503 JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize);
504 TraceTenuredFinalize(t);
508 if (nmarked == 0) {
509 // Do nothing. The caller will update the arena header appropriately.
510 JS_ASSERT(newListTail == &newListHead);
511 JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data));
512 return nmarked;
515 JS_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
516 uintptr_t lastMarkedThing = firstThingOrSuccessorOfLastMarkedThing - thingSize;
517 if (lastThing == lastMarkedThing) {
518 // If the last thing was marked, we will have already set the bounds of
519 // the final span, and we just need to terminate the list.
520 newListTail->initAsEmpty();
521 } else {
522 // Otherwise, end the list with a span that covers the final stretch of free things.
523 newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing, thingSize);
526 #ifdef DEBUG
527 size_t nfree = 0;
528 for (const FreeSpan* span = &newListHead; !span->isEmpty(); span = span->nextSpan())
529 nfree += span->length(thingSize);
530 JS_ASSERT(nfree + nmarked == thingsPerArena(thingSize));
531 #endif
532 aheader.setFirstFreeSpan(&newListHead);
533 return nmarked;
536 template<typename T>
537 static inline bool
538 FinalizeTypedArenas(FreeOp* fop,
539 ArenaHeader** src,
540 SortedArenaList& dest,
541 AllocKind thingKind,
542 SliceBudget& budget)
545 * Finalize arenas from src list, releasing empty arenas and inserting the
546 * others into the appropriate destination size bins.
550 * During parallel sections, we sometimes finalize the parallel arenas,
551 * but in that case, we want to hold on to the memory in our arena
552 * lists, not offer it up for reuse.
554 bool releaseArenas = !InParallelSection();
556 size_t thingSize = Arena::thingSize(thingKind);
557 size_t thingsPerArena = Arena::thingsPerArena(thingSize);
559 while (ArenaHeader* aheader = *src) {
560 *src = aheader->next;
561 size_t nmarked = aheader->getArena()->finalize<T>(fop, thingKind, thingSize);
562 size_t nfree = thingsPerArena - nmarked;
564 if (nmarked)
565 dest.insertAt(aheader, nfree);
566 else if (releaseArenas)
567 aheader->chunk()->releaseArena(aheader);
568 else
569 aheader->chunk()->recycleArena(aheader, dest, thingKind, thingsPerArena);
571 budget.step(thingsPerArena);
572 if (budget.isOverBudget())
573 return false;
576 return true;
 580  * Finalize the arenas on the |*src| list: completely dead arenas are released (or
 581  * recycled in a parallel section); the rest go into |dest|, binned by free count.
583 static bool
584 FinalizeArenas(FreeOp* fop,
585 ArenaHeader** src,
586 SortedArenaList& dest,
587 AllocKind thingKind,
588 SliceBudget& budget)
590 switch (thingKind) {
591 case FINALIZE_OBJECT0:
592 case FINALIZE_OBJECT0_BACKGROUND:
593 case FINALIZE_OBJECT2:
594 case FINALIZE_OBJECT2_BACKGROUND:
595 case FINALIZE_OBJECT4:
596 case FINALIZE_OBJECT4_BACKGROUND:
597 case FINALIZE_OBJECT8:
598 case FINALIZE_OBJECT8_BACKGROUND:
599 case FINALIZE_OBJECT12:
600 case FINALIZE_OBJECT12_BACKGROUND:
601 case FINALIZE_OBJECT16:
602 case FINALIZE_OBJECT16_BACKGROUND:
603 return FinalizeTypedArenas<JSObject>(fop, src, dest, thingKind, budget);
604 case FINALIZE_SCRIPT:
605 return FinalizeTypedArenas<JSScript>(fop, src, dest, thingKind, budget);
606 case FINALIZE_LAZY_SCRIPT:
607 return FinalizeTypedArenas<LazyScript>(fop, src, dest, thingKind, budget);
608 case FINALIZE_SHAPE:
609 return FinalizeTypedArenas<Shape>(fop, src, dest, thingKind, budget);
610 case FINALIZE_BASE_SHAPE:
611 return FinalizeTypedArenas<BaseShape>(fop, src, dest, thingKind, budget);
612 case FINALIZE_TYPE_OBJECT:
613 return FinalizeTypedArenas<types::TypeObject>(fop, src, dest, thingKind, budget);
614 case FINALIZE_STRING:
615 return FinalizeTypedArenas<JSString>(fop, src, dest, thingKind, budget);
616 case FINALIZE_FAT_INLINE_STRING:
617 return FinalizeTypedArenas<JSFatInlineString>(fop, src, dest, thingKind, budget);
618 case FINALIZE_EXTERNAL_STRING:
619 return FinalizeTypedArenas<JSExternalString>(fop, src, dest, thingKind, budget);
620 case FINALIZE_SYMBOL:
621 return FinalizeTypedArenas<JS::Symbol>(fop, src, dest, thingKind, budget);
622 case FINALIZE_JITCODE:
 624         // JitCode finalization may release references to an executable
 625         // allocator that is accessed when requesting interrupts.
626 JSRuntime::AutoLockForInterrupt lock(fop->runtime());
627 return FinalizeTypedArenas<jit::JitCode>(fop, src, dest, thingKind, budget);
629 default:
630 MOZ_CRASH("Invalid alloc kind");
634 static inline Chunk*
635 AllocChunk(JSRuntime* rt)
637 return static_cast<Chunk*>(MapAlignedPages(ChunkSize, ChunkSize));
640 static inline void
641 FreeChunk(JSRuntime* rt, Chunk* p)
643 UnmapPages(static_cast<void*>(p), ChunkSize);
646 /* Must be called with the GC lock taken. */
647 inline Chunk*
648 ChunkPool::get(JSRuntime* rt)
650 Chunk* chunk = emptyChunkListHead;
651 if (!chunk) {
652 JS_ASSERT(!emptyCount);
653 return nullptr;
656 JS_ASSERT(emptyCount);
657 emptyChunkListHead = chunk->info.next;
658 --emptyCount;
659 return chunk;
662 /* Must be called either during the GC or with the GC lock taken. */
663 inline void
664 ChunkPool::put(Chunk* chunk)
666 chunk->info.age = 0;
667 chunk->info.next = emptyChunkListHead;
668 emptyChunkListHead = chunk;
669 emptyCount++;
672 inline Chunk*
673 ChunkPool::Enum::front()
675 Chunk* chunk = *chunkp;
676 JS_ASSERT_IF(chunk, pool.getEmptyCount() != 0);
677 return chunk;
680 inline void
681 ChunkPool::Enum::popFront()
683 JS_ASSERT(!empty());
684 chunkp = &front()->info.next;
687 inline void
688 ChunkPool::Enum::removeAndPopFront()
690 JS_ASSERT(!empty());
691 *chunkp = front()->info.next;
692 --pool.emptyCount;
695 /* Must be called either during the GC or with the GC lock taken. */
696 Chunk*
697 GCRuntime::expireChunkPool(bool shrinkBuffers, bool releaseAll)
700 * Return old empty chunks to the system while preserving the order of
701 * other chunks in the list. This way, if the GC runs several times
702 * without emptying the list, the older chunks will stay at the tail
703 * and are more likely to reach the max age.
705 Chunk* freeList = nullptr;
706 unsigned freeChunkCount = 0;
707 for (ChunkPool::Enum e(chunkPool); !e.empty(); ) {
708 Chunk* chunk = e.front();
709 JS_ASSERT(chunk->unused());
710 JS_ASSERT(!chunkSet.has(chunk));
711 if (releaseAll || freeChunkCount >= tunables.maxEmptyChunkCount() ||
712 (freeChunkCount >= tunables.minEmptyChunkCount() &&
713 (shrinkBuffers || chunk->info.age == MAX_EMPTY_CHUNK_AGE)))
715 e.removeAndPopFront();
716 prepareToFreeChunk(chunk->info);
717 chunk->info.next = freeList;
718 freeList = chunk;
719 } else {
720 /* Keep the chunk but increase its age. */
721 ++freeChunkCount;
722 ++chunk->info.age;
723 e.popFront();
726 JS_ASSERT(chunkPool.getEmptyCount() <= tunables.maxEmptyChunkCount());
727 JS_ASSERT_IF(shrinkBuffers, chunkPool.getEmptyCount() <= tunables.minEmptyChunkCount());
728 JS_ASSERT_IF(releaseAll, chunkPool.getEmptyCount() == 0);
729 return freeList;
732 void
733 GCRuntime::freeChunkList(Chunk* chunkListHead)
735 while (Chunk* chunk = chunkListHead) {
736 JS_ASSERT(!chunk->info.numArenasFreeCommitted);
737 chunkListHead = chunk->info.next;
738 FreeChunk(rt, chunk);
742 void
743 GCRuntime::expireAndFreeChunkPool(bool releaseAll)
745 freeChunkList(expireChunkPool(true, releaseAll));
748 /* static */ Chunk*
749 Chunk::allocate(JSRuntime* rt)
751 Chunk* chunk = AllocChunk(rt);
752 if (!chunk)
753 return nullptr;
754 chunk->init(rt);
755 rt->gc.stats.count(gcstats::STAT_NEW_CHUNK);
756 return chunk;
759 /* Must be called with the GC lock taken. */
760 inline void
761 GCRuntime::releaseChunk(Chunk* chunk)
763 JS_ASSERT(chunk);
764 prepareToFreeChunk(chunk->info);
765 FreeChunk(rt, chunk);
768 inline void
769 GCRuntime::prepareToFreeChunk(ChunkInfo& info)
771 JS_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
772 numArenasFreeCommitted -= info.numArenasFreeCommitted;
773 stats.count(gcstats::STAT_DESTROY_CHUNK);
774 #ifdef DEBUG
 776      * Let freeChunkList detect a missing prepareToFreeChunk call before it
 777      * frees the chunk.
779 info.numArenasFreeCommitted = 0;
780 #endif
783 void Chunk::decommitAllArenas(JSRuntime* rt)
785 decommittedArenas.clear(true);
786 MarkPagesUnused(&arenas[0], ArenasPerChunk * ArenaSize);
788 info.freeArenasHead = nullptr;
789 info.lastDecommittedArenaOffset = 0;
790 info.numArenasFree = ArenasPerChunk;
791 info.numArenasFreeCommitted = 0;
794 void
795 Chunk::init(JSRuntime* rt)
797 JS_POISON(this, JS_FRESH_TENURED_PATTERN, ChunkSize);
800 * We clear the bitmap to guard against xpc_IsGrayGCThing being called on
801 * uninitialized data, which would happen before the first GC cycle.
803 bitmap.clear();
806 * Decommit the arenas. We do this after poisoning so that if the OS does
807 * not have to recycle the pages, we still get the benefit of poisoning.
809 decommitAllArenas(rt);
811 /* Initialize the chunk info. */
812 info.age = 0;
813 info.trailer.storeBuffer = nullptr;
814 info.trailer.location = ChunkLocationBitTenuredHeap;
815 info.trailer.runtime = rt;
 817     /* The rest of the info fields are initialized in pickChunk. */
820 inline Chunk**
821 GCRuntime::getAvailableChunkList(Zone* zone)
823 return zone->isSystem
824 ? &systemAvailableChunkListHead
825 : &userAvailableChunkListHead;
828 inline void
829 Chunk::addToAvailableList(Zone* zone)
831 JSRuntime* rt = zone->runtimeFromAnyThread();
832 insertToAvailableList(rt->gc.getAvailableChunkList(zone));
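/*
 * The available-chunk lists are doubly linked through ChunkInfo::next and
 * ChunkInfo::prevp, where prevp points at the previous chunk's next field (or
 * at the list head), so a chunk can unlink itself in removeFromAvailableList
 * without knowing which list head it is on.
 */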
835 inline void
836 Chunk::insertToAvailableList(Chunk** insertPoint)
838 JS_ASSERT(hasAvailableArenas());
839 JS_ASSERT(!info.prevp);
840 JS_ASSERT(!info.next);
841 info.prevp = insertPoint;
842 Chunk* insertBefore = *insertPoint;
843 if (insertBefore) {
844 JS_ASSERT(insertBefore->info.prevp == insertPoint);
845 insertBefore->info.prevp = &info.next;
847 info.next = insertBefore;
848 *insertPoint = this;
851 inline void
852 Chunk::removeFromAvailableList()
854 JS_ASSERT(info.prevp);
855 *info.prevp = info.next;
856 if (info.next) {
857 JS_ASSERT(info.next->info.prevp == &info.next);
858 info.next->info.prevp = info.prevp;
860 info.prevp = nullptr;
861 info.next = nullptr;
865 * Search for and return the next decommitted Arena. Our goal is to keep
866 * lastDecommittedArenaOffset "close" to a free arena. We do this by setting
867 * it to the most recently freed arena when we free, and forcing it to
868 * the last alloc + 1 when we allocate.
870 uint32_t
871 Chunk::findDecommittedArenaOffset()
 873     /* Note: lastDecommittedArenaOffset can be past the end of the list. */
874 for (unsigned i = info.lastDecommittedArenaOffset; i < ArenasPerChunk; i++)
875 if (decommittedArenas.get(i))
876 return i;
877 for (unsigned i = 0; i < info.lastDecommittedArenaOffset; i++)
878 if (decommittedArenas.get(i))
879 return i;
880 MOZ_CRASH("No decommitted arenas found.");
883 ArenaHeader*
884 Chunk::fetchNextDecommittedArena()
886 JS_ASSERT(info.numArenasFreeCommitted == 0);
887 JS_ASSERT(info.numArenasFree > 0);
889 unsigned offset = findDecommittedArenaOffset();
890 info.lastDecommittedArenaOffset = offset + 1;
891 --info.numArenasFree;
892 decommittedArenas.unset(offset);
894 Arena* arena = &arenas[offset];
895 MarkPagesInUse(arena, ArenaSize);
896 arena->aheader.setAsNotAllocated();
898 return &arena->aheader;
901 inline void
902 GCRuntime::updateOnFreeArenaAlloc(const ChunkInfo& info)
904 JS_ASSERT(info.numArenasFreeCommitted <= numArenasFreeCommitted);
905 --numArenasFreeCommitted;
908 inline ArenaHeader*
909 Chunk::fetchNextFreeArena(JSRuntime* rt)
911 JS_ASSERT(info.numArenasFreeCommitted > 0);
912 JS_ASSERT(info.numArenasFreeCommitted <= info.numArenasFree);
914 ArenaHeader* aheader = info.freeArenasHead;
915 info.freeArenasHead = aheader->next;
916 --info.numArenasFreeCommitted;
917 --info.numArenasFree;
918 rt->gc.updateOnFreeArenaAlloc(info);
920 return aheader;
923 ArenaHeader*
924 Chunk::allocateArena(Zone* zone, AllocKind thingKind)
926 JS_ASSERT(hasAvailableArenas());
928 JSRuntime* rt = zone->runtimeFromAnyThread();
929 if (!rt->isHeapMinorCollecting() &&
930 !rt->isHeapCompacting() &&
931 rt->gc.usage.gcBytes() >= rt->gc.tunables.gcMaxBytes())
933 #ifdef JSGC_FJGENERATIONAL
934 // This is an approximation to the best test, which would check that
935 // this thread is currently promoting into the tenured area. I doubt
936 // the better test would make much difference.
937 if (!rt->isFJMinorCollecting())
938 return nullptr;
939 #else
940 return nullptr;
941 #endif
944 ArenaHeader* aheader = MOZ_LIKELY(info.numArenasFreeCommitted > 0)
945 ? fetchNextFreeArena(rt)
946 : fetchNextDecommittedArena();
947 aheader->init(zone, thingKind);
948 if (MOZ_UNLIKELY(!hasAvailableArenas()))
949 removeFromAvailableList();
951 zone->usage.addGCArena();
953 if (!rt->isHeapCompacting() && zone->usage.gcBytes() >= zone->threshold.gcTriggerBytes()) {
954 AutoUnlockGC unlock(rt);
955 rt->gc.triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER);
958 return aheader;
961 inline void
962 GCRuntime::updateOnArenaFree(const ChunkInfo& info)
964 ++numArenasFreeCommitted;
967 inline void
968 Chunk::addArenaToFreeList(JSRuntime* rt, ArenaHeader* aheader)
970 JS_ASSERT(!aheader->allocated());
971 aheader->next = info.freeArenasHead;
972 info.freeArenasHead = aheader;
973 ++info.numArenasFreeCommitted;
974 ++info.numArenasFree;
975 rt->gc.updateOnArenaFree(info);
978 void
979 Chunk::recycleArena(ArenaHeader* aheader, SortedArenaList& dest, AllocKind thingKind,
980 size_t thingsPerArena)
982 aheader->getArena()->setAsFullyUnused(thingKind);
983 dest.insertAt(aheader, thingsPerArena);
986 void
987 Chunk::releaseArena(ArenaHeader* aheader)
989 JS_ASSERT(aheader->allocated());
990 JS_ASSERT(!aheader->hasDelayedMarking);
991 Zone* zone = aheader->zone;
992 JSRuntime* rt = zone->runtimeFromAnyThread();
993 AutoLockGC maybeLock;
994 if (rt->gc.isBackgroundSweeping())
995 maybeLock.lock(rt);
997 if (rt->gc.isBackgroundSweeping())
998 zone->threshold.updateForRemovedArena(rt->gc.tunables);
999 zone->usage.removeGCArena();
1001 aheader->setAsNotAllocated();
1002 addArenaToFreeList(rt, aheader);
1004 if (info.numArenasFree == 1) {
1005 JS_ASSERT(!info.prevp);
1006 JS_ASSERT(!info.next);
1007 addToAvailableList(zone);
1008 } else if (!unused()) {
1009 JS_ASSERT(info.prevp);
1010 } else {
1011 JS_ASSERT(unused());
1012 removeFromAvailableList();
1013 decommitAllArenas(rt);
1014 rt->gc.moveChunkToFreePool(this);
1018 void
1019 GCRuntime::moveChunkToFreePool(Chunk* chunk)
1021 JS_ASSERT(chunk->unused());
1022 JS_ASSERT(chunkSet.has(chunk));
1023 chunkSet.remove(chunk);
1024 chunkPool.put(chunk);
1027 inline bool
1028 GCRuntime::wantBackgroundAllocation() const
 1031      * To minimize memory waste, we do not want to run the background chunk
 1032      * allocation if we already have empty chunks or when the runtime needs just
 1033      * a few of them.
1035 return helperState.canBackgroundAllocate() &&
1036 chunkPool.getEmptyCount() < tunables.minEmptyChunkCount() &&
1037 chunkSet.count() >= 4;
1040 class js::gc::AutoMaybeStartBackgroundAllocation
1042 private:
1043 JSRuntime* runtime;
1044 MOZ_DECL_USE_GUARD_OBJECT_NOTIFIER
1046 public:
1047 explicit AutoMaybeStartBackgroundAllocation(MOZ_GUARD_OBJECT_NOTIFIER_ONLY_PARAM)
1048 : runtime(nullptr)
1050 MOZ_GUARD_OBJECT_NOTIFIER_INIT;
1053 void tryToStartBackgroundAllocation(JSRuntime* rt) {
1054 runtime = rt;
1057 ~AutoMaybeStartBackgroundAllocation() {
1058 if (runtime && !runtime->currentThreadOwnsInterruptLock()) {
1059 AutoLockHelperThreadState helperLock;
1060 AutoLockGC lock(runtime);
1061 runtime->gc.startBackgroundAllocationIfIdle();
1066 /* The caller must hold the GC lock. */
1067 Chunk*
1068 GCRuntime::pickChunk(Zone* zone, AutoMaybeStartBackgroundAllocation& maybeStartBackgroundAllocation)
1070 Chunk** listHeadp = getAvailableChunkList(zone);
1071 Chunk* chunk = *listHeadp;
1072 if (chunk)
1073 return chunk;
1075 chunk = chunkPool.get(rt);
1076 if (!chunk) {
1077 chunk = Chunk::allocate(rt);
1078 if (!chunk)
1079 return nullptr;
1080 JS_ASSERT(chunk->info.numArenasFreeCommitted == 0);
1083 JS_ASSERT(chunk->unused());
1084 JS_ASSERT(!chunkSet.has(chunk));
1086 if (wantBackgroundAllocation())
1087 maybeStartBackgroundAllocation.tryToStartBackgroundAllocation(rt);
1089 chunkAllocationSinceLastGC = true;
1092 * FIXME bug 583732 - chunk is newly allocated and cannot be present in
1093 * the table so using ordinary lookupForAdd is suboptimal here.
1095 GCChunkSet::AddPtr p = chunkSet.lookupForAdd(chunk);
1096 JS_ASSERT(!p);
1097 if (!chunkSet.add(p, chunk)) {
1098 releaseChunk(chunk);
1099 return nullptr;
1102 chunk->info.prevp = nullptr;
1103 chunk->info.next = nullptr;
1104 chunk->addToAvailableList(zone);
1106 return chunk;
1109 GCRuntime::GCRuntime(JSRuntime* rt) :
1110 rt(rt),
1111 systemZone(nullptr),
1112 #ifdef JSGC_GENERATIONAL
1113 nursery(rt),
1114 storeBuffer(rt, nursery),
1115 #endif
1116 stats(rt),
1117 marker(rt),
1118 usage(nullptr),
1119 systemAvailableChunkListHead(nullptr),
1120 userAvailableChunkListHead(nullptr),
1121 maxMallocBytes(0),
1122 numArenasFreeCommitted(0),
1123 verifyPreData(nullptr),
1124 verifyPostData(nullptr),
1125 chunkAllocationSinceLastGC(false),
1126 nextFullGCTime(0),
1127 lastGCTime(0),
1128 mode(JSGC_MODE_INCREMENTAL),
1129 numActiveZoneIters(0),
1130 decommitThreshold(32 * 1024 * 1024),
1131 cleanUpEverything(false),
1132 grayBitsValid(false),
1133 isNeeded(0),
1134 majorGCNumber(0),
1135 jitReleaseNumber(0),
1136 number(0),
1137 startNumber(0),
1138 isFull(false),
1139 triggerReason(JS::gcreason::NO_REASON),
1140 #ifdef DEBUG
1141 disableStrictProxyCheckingCount(0),
1142 #endif
1143 incrementalState(gc::NO_INCREMENTAL),
1144 lastMarkSlice(false),
1145 sweepOnBackgroundThread(false),
1146 foundBlackGrayEdges(false),
1147 sweepingZones(nullptr),
1148 zoneGroupIndex(0),
1149 zoneGroups(nullptr),
1150 currentZoneGroup(nullptr),
1151 sweepZone(nullptr),
1152 sweepKindIndex(0),
1153 abortSweepAfterCurrentGroup(false),
1154 arenasAllocatedDuringSweep(nullptr),
1155 #ifdef JS_GC_MARKING_VALIDATION
1156 markingValidator(nullptr),
1157 #endif
1158 interFrameGC(0),
1159 sliceBudget(SliceBudget::Unlimited),
1160 incrementalAllowed(true),
1161 generationalDisabled(0),
1162 #ifdef JSGC_COMPACTING
1163 compactingDisabled(0),
1164 #endif
1165 manipulatingDeadZones(false),
1166 objectsMarkedInDeadZones(0),
1167 poked(false),
1168 heapState(Idle),
1169 #ifdef JS_GC_ZEAL
1170 zealMode(0),
1171 zealFrequency(0),
1172 nextScheduled(0),
1173 deterministicOnly(false),
1174 incrementalLimit(0),
1175 #endif
1176 validate(true),
1177 fullCompartmentChecks(false),
1178 mallocBytes(0),
1179 mallocGCTriggered(false),
1180 #ifdef DEBUG
1181 inUnsafeRegion(0),
1182 #endif
1183 alwaysPreserveCode(false),
1184 #ifdef DEBUG
1185 noGCOrAllocationCheck(0),
1186 #endif
1187 lock(nullptr),
1188 lockOwner(nullptr),
1189 helperState(rt)
1191 setGCMode(JSGC_MODE_GLOBAL);
1194 #ifdef JS_GC_ZEAL
1196 void
1197 GCRuntime::setZeal(uint8_t zeal, uint32_t frequency)
1199 if (verifyPreData)
1200 VerifyBarriers(rt, PreBarrierVerifier);
1201 if (verifyPostData)
1202 VerifyBarriers(rt, PostBarrierVerifier);
1204 #ifdef JSGC_GENERATIONAL
1205 if (zealMode == ZealGenerationalGCValue) {
1206 evictNursery(JS::gcreason::DEBUG_GC);
1207 nursery.leaveZealMode();
1210 if (zeal == ZealGenerationalGCValue)
1211 nursery.enterZealMode();
1212 #endif
1214 bool schedule = zeal >= js::gc::ZealAllocValue;
1215 zealMode = zeal;
1216 zealFrequency = frequency;
1217 nextScheduled = schedule ? frequency : 0;
1220 void
1221 GCRuntime::setNextScheduled(uint32_t count)
1223 nextScheduled = count;
1226 bool
1227 GCRuntime::initZeal()
1229 const char* env = getenv("JS_GC_ZEAL");
1230 if (!env)
1231 return true;
1233 int zeal = -1;
1234 int frequency = JS_DEFAULT_ZEAL_FREQ;
1235 if (strcmp(env, "help") != 0) {
1236 zeal = atoi(env);
1237 const char* p = strchr(env, ',');
1238 if (p)
1239 frequency = atoi(p + 1);
1242 if (zeal < 0 || zeal > ZealLimit || frequency < 0) {
1243 fprintf(stderr,
1244 "Format: JS_GC_ZEAL=N[,F]\n"
1245 "N indicates \"zealousness\":\n"
1246 " 0: no additional GCs\n"
1247 " 1: additional GCs at common danger points\n"
1248 " 2: GC every F allocations (default: 100)\n"
1249 " 3: GC when the window paints (browser only)\n"
1250 " 4: Verify pre write barriers between instructions\n"
1251 " 5: Verify pre write barriers between paints\n"
1252 " 6: Verify stack rooting\n"
1253 " 7: Collect the nursery every N nursery allocations\n"
1254 " 8: Incremental GC in two slices: 1) mark roots 2) finish collection\n"
1255 " 9: Incremental GC in two slices: 1) mark all 2) new marking and finish\n"
1256 " 10: Incremental GC in multiple slices\n"
1257 " 11: Verify post write barriers between instructions\n"
1258 " 12: Verify post write barriers between paints\n"
1259 " 13: Purge analysis state every F allocations (default: 100)\n");
1260 return false;
1263 setZeal(zeal, frequency);
1264 return true;
1267 #endif
1270 * Lifetime in number of major GCs for type sets attached to scripts containing
1271 * observed types.
1273 static const uint64_t JIT_SCRIPT_RELEASE_TYPES_PERIOD = 20;
1275 bool
1276 GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes)
1278 InitMemorySubsystem();
1280 lock = PR_NewLock();
1281 if (!lock)
1282 return false;
1284 if (!chunkSet.init(INITIAL_CHUNK_CAPACITY))
1285 return false;
1287 if (!rootsHash.init(256))
1288 return false;
1290 if (!helperState.init())
1291 return false;
1294 * Separate gcMaxMallocBytes from gcMaxBytes but initialize to maxbytes
1295 * for default backward API compatibility.
1297 tunables.setParameter(JSGC_MAX_BYTES, maxbytes);
1298 setMaxMallocBytes(maxbytes);
1300 jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;
1302 #ifdef JSGC_GENERATIONAL
1303 if (!nursery.init(maxNurseryBytes))
1304 return false;
1306 if (!nursery.isEnabled()) {
1307 JS_ASSERT(nursery.nurserySize() == 0);
1308 ++rt->gc.generationalDisabled;
1309 } else {
1310 JS_ASSERT(nursery.nurserySize() > 0);
1311 if (!storeBuffer.enable())
1312 return false;
1314 #endif
1316 #ifdef JS_GC_ZEAL
1317 if (!initZeal())
1318 return false;
1319 #endif
1321 if (!InitTrace(*this))
1322 return false;
1324 if (!marker.init(mode))
1325 return false;
1327 return true;
1330 void
1331 GCRuntime::recordNativeStackTop()
1333 /* Record the stack top here only if we are called from a request. */
1334 if (!rt->requestDepth)
1335 return;
1336 conservativeGC.recordStackTop();
1339 void
1340 GCRuntime::finish()
1343 * Wait until the background finalization stops and the helper thread
1344 * shuts down before we forcefully release any remaining GC memory.
1346 helperState.finish();
1348 #ifdef JS_GC_ZEAL
1349 /* Free memory associated with GC verification. */
1350 finishVerifier();
1351 #endif
1353 /* Delete all remaining zones. */
1354 if (rt->gcInitialized) {
1355 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
1356 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
1357 js_delete(comp.get());
1358 js_delete(zone.get());
1362 zones.clear();
1364 systemAvailableChunkListHead = nullptr;
1365 userAvailableChunkListHead = nullptr;
1366 if (chunkSet.initialized()) {
1367 for (GCChunkSet::Range r(chunkSet.all()); !r.empty(); r.popFront())
1368 releaseChunk(r.front());
1369 chunkSet.clear();
1372 expireAndFreeChunkPool(true);
1374 if (rootsHash.initialized())
1375 rootsHash.clear();
1377 FinishPersistentRootedChains(rt);
1379 if (lock) {
1380 PR_DestroyLock(lock);
1381 lock = nullptr;
1384 FinishTrace();
1387 void
1388 js::gc::FinishPersistentRootedChains(JSRuntime* rt)
1390 /* The lists of persistent roots are stored on the shadow runtime. */
1391 rt->functionPersistentRooteds.clear();
1392 rt->idPersistentRooteds.clear();
1393 rt->objectPersistentRooteds.clear();
1394 rt->scriptPersistentRooteds.clear();
1395 rt->stringPersistentRooteds.clear();
1396 rt->valuePersistentRooteds.clear();
1399 void
1400 GCRuntime::setParameter(JSGCParamKey key, uint32_t value)
1402 switch (key) {
1403 case JSGC_MAX_MALLOC_BYTES:
1404 setMaxMallocBytes(value);
1405 break;
1406 case JSGC_SLICE_TIME_BUDGET:
1407 sliceBudget = SliceBudget::TimeBudget(value);
1408 break;
1409 case JSGC_MARK_STACK_LIMIT:
1410 setMarkStackLimit(value);
1411 break;
1412 case JSGC_DECOMMIT_THRESHOLD:
1413 decommitThreshold = value * 1024 * 1024;
1414 break;
1415 case JSGC_MODE:
1416 mode = JSGCMode(value);
1417 JS_ASSERT(mode == JSGC_MODE_GLOBAL ||
1418 mode == JSGC_MODE_COMPARTMENT ||
1419 mode == JSGC_MODE_INCREMENTAL);
1420 break;
1421 default:
1422 tunables.setParameter(key, value);
1426 void
1427 GCSchedulingTunables::setParameter(JSGCParamKey key, uint32_t value)
1429 switch(key) {
1430 case JSGC_MAX_BYTES:
1431 gcMaxBytes_ = value;
1432 break;
1433 case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
1434 highFrequencyThresholdUsec_ = value * PRMJ_USEC_PER_MSEC;
1435 break;
1436 case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
1437 highFrequencyLowLimitBytes_ = value * 1024 * 1024;
1438 if (highFrequencyLowLimitBytes_ >= highFrequencyHighLimitBytes_)
1439 highFrequencyHighLimitBytes_ = highFrequencyLowLimitBytes_ + 1;
1440 JS_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
1441 break;
1442 case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
1443 MOZ_ASSERT(value > 0);
1444 highFrequencyHighLimitBytes_ = value * 1024 * 1024;
1445 if (highFrequencyHighLimitBytes_ <= highFrequencyLowLimitBytes_)
1446 highFrequencyLowLimitBytes_ = highFrequencyHighLimitBytes_ - 1;
1447 JS_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
1448 break;
1449 case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
1450 highFrequencyHeapGrowthMax_ = value / 100.0;
1451 MOZ_ASSERT(highFrequencyHeapGrowthMax_ / 0.85 > 1.0);
1452 break;
1453 case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
1454 highFrequencyHeapGrowthMin_ = value / 100.0;
1455 MOZ_ASSERT(highFrequencyHeapGrowthMin_ / 0.85 > 1.0);
1456 break;
1457 case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
1458 lowFrequencyHeapGrowth_ = value / 100.0;
1459 MOZ_ASSERT(lowFrequencyHeapGrowth_ / 0.9 > 1.0);
1460 break;
1461 case JSGC_DYNAMIC_HEAP_GROWTH:
1462 dynamicHeapGrowthEnabled_ = value;
1463 break;
1464 case JSGC_DYNAMIC_MARK_SLICE:
1465 dynamicMarkSliceEnabled_ = value;
1466 break;
1467 case JSGC_ALLOCATION_THRESHOLD:
1468 gcZoneAllocThresholdBase_ = value * 1024 * 1024;
1469 break;
1470 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1471 minEmptyChunkCount_ = value;
1472 if (minEmptyChunkCount_ > maxEmptyChunkCount_)
1473 maxEmptyChunkCount_ = minEmptyChunkCount_;
1474 JS_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
1475 break;
1476 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1477 maxEmptyChunkCount_ = value;
1478 if (minEmptyChunkCount_ > maxEmptyChunkCount_)
1479 minEmptyChunkCount_ = maxEmptyChunkCount_;
1480 JS_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
1481 break;
1482 default:
1483 MOZ_CRASH("Unknown GC parameter.");
1487 uint32_t
1488 GCRuntime::getParameter(JSGCParamKey key)
1490 switch (key) {
1491 case JSGC_MAX_BYTES:
1492 return uint32_t(tunables.gcMaxBytes());
1493 case JSGC_MAX_MALLOC_BYTES:
1494 return maxMallocBytes;
1495 case JSGC_BYTES:
1496 return uint32_t(usage.gcBytes());
1497 case JSGC_MODE:
1498 return uint32_t(mode);
1499 case JSGC_UNUSED_CHUNKS:
1500 return uint32_t(chunkPool.getEmptyCount());
1501 case JSGC_TOTAL_CHUNKS:
1502 return uint32_t(chunkSet.count() + chunkPool.getEmptyCount());
1503 case JSGC_SLICE_TIME_BUDGET:
1504 return uint32_t(sliceBudget > 0 ? sliceBudget / PRMJ_USEC_PER_MSEC : 0);
1505 case JSGC_MARK_STACK_LIMIT:
1506 return marker.maxCapacity();
1507 case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
1508 return tunables.highFrequencyThresholdUsec();
1509 case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
1510 return tunables.highFrequencyLowLimitBytes() / 1024 / 1024;
1511 case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
1512 return tunables.highFrequencyHighLimitBytes() / 1024 / 1024;
1513 case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
1514 return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100);
1515 case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
1516 return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100);
1517 case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
1518 return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
1519 case JSGC_DYNAMIC_HEAP_GROWTH:
1520 return tunables.isDynamicHeapGrowthEnabled();
1521 case JSGC_DYNAMIC_MARK_SLICE:
1522 return tunables.isDynamicMarkSliceEnabled();
1523 case JSGC_ALLOCATION_THRESHOLD:
1524 return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
1525 case JSGC_MIN_EMPTY_CHUNK_COUNT:
1526 return tunables.minEmptyChunkCount();
1527 case JSGC_MAX_EMPTY_CHUNK_COUNT:
1528 return tunables.maxEmptyChunkCount();
1529 default:
1530 JS_ASSERT(key == JSGC_NUMBER);
1531 return uint32_t(number);
1535 void
1536 GCRuntime::setMarkStackLimit(size_t limit)
1538 JS_ASSERT(!isHeapBusy());
1539 AutoStopVerifyingBarriers pauseVerification(rt, false);
1540 marker.setMaxCapacity(limit);
1543 template <typename T> struct BarrierOwner {};
1544 template <typename T> struct BarrierOwner<T*> { typedef T result; };
1545 template <> struct BarrierOwner<Value> { typedef HeapValue result; };
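// BarrierOwner maps a root's referent type to the type whose writeBarrierPre
// must run when a root is added during an incremental GC: T* maps to T, and
// raw Value roots map to HeapValue.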
1547 bool
1548 GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data)
1550 AssertHeapIsIdle(rt);
1551 return !!blackRootTracers.append(Callback<JSTraceDataOp>(traceOp, data));
1554 void
1555 GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data)
1557 // Can be called from finalizers
1558 for (size_t i = 0; i < blackRootTracers.length(); i++) {
1559 Callback<JSTraceDataOp>* e = &blackRootTracers[i];
1560 if (e->op == traceOp && e->data == data) {
1561 blackRootTracers.erase(e);
1566 void
1567 GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data)
1569 AssertHeapIsIdle(rt);
1570 grayRootTracer.op = traceOp;
1571 grayRootTracer.data = data;
1574 void
1575 GCRuntime::setGCCallback(JSGCCallback callback, void* data)
1577 gcCallback.op = callback;
1578 gcCallback.data = data;
1581 bool
1582 GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data)
1584 return finalizeCallbacks.append(Callback<JSFinalizeCallback>(callback, data));
1587 void
1588 GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback)
1590 for (Callback<JSFinalizeCallback>* p = finalizeCallbacks.begin();
1591 p < finalizeCallbacks.end(); p++) {
1592 if (p->op == callback) {
1593 finalizeCallbacks.erase(p);
1594 break;
1599 JS::GCSliceCallback
1600 GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
1601 return stats.setSliceCallback(callback);
1604 template <typename T>
1605 bool
1606 GCRuntime::addRoot(T* rp, const char* name, JSGCRootType rootType)
1609 * Sometimes Firefox will hold weak references to objects and then convert
1610 * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
1611 * or ModifyBusyCount in workers). We need a read barrier to cover these
1612 * cases.
1614 if (rt->gc.incrementalState != NO_INCREMENTAL)
1615 BarrierOwner<T>::result::writeBarrierPre(*rp);
1617 return rt->gc.rootsHash.put((void*)rp, RootInfo(name, rootType));
1620 void
1621 GCRuntime::removeRoot(void* rp)
1623 rootsHash.remove(rp);
1624 poke();
1627 template <typename T>
1628 static bool
1629 AddRoot(JSRuntime* rt, T* rp, const char* name, JSGCRootType rootType)
1631 return rt->gc.addRoot(rp, name, rootType);
1634 template <typename T>
1635 static bool
1636 AddRoot(JSContext* cx, T* rp, const char* name, JSGCRootType rootType)
1638 bool ok = cx->runtime()->gc.addRoot(rp, name, rootType);
1639 if (!ok)
1640 JS_ReportOutOfMemory(cx);
1641 return ok;
1644 bool
1645 js::AddValueRoot(JSContext* cx, Value* vp, const char* name)
1647 return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR);
1650 extern bool
1651 js::AddValueRootRT(JSRuntime* rt, js::Value* vp, const char* name)
1653 return AddRoot(rt, vp, name, JS_GC_ROOT_VALUE_PTR);
1656 extern bool
1657 js::AddStringRoot(JSContext* cx, JSString** rp, const char* name)
1659 return AddRoot(cx, rp, name, JS_GC_ROOT_STRING_PTR);
1662 extern bool
1663 js::AddObjectRoot(JSContext* cx, JSObject** rp, const char* name)
1665 return AddRoot(cx, rp, name, JS_GC_ROOT_OBJECT_PTR);
1668 extern bool
1669 js::AddObjectRoot(JSRuntime* rt, JSObject** rp, const char* name)
1671 return AddRoot(rt, rp, name, JS_GC_ROOT_OBJECT_PTR);
1674 extern bool
1675 js::AddScriptRoot(JSContext* cx, JSScript** rp, const char* name)
1677 return AddRoot(cx, rp, name, JS_GC_ROOT_SCRIPT_PTR);
1680 extern JS_FRIEND_API(bool)
1681 js::AddRawValueRoot(JSContext* cx, Value* vp, const char* name)
1683 return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR);
1686 extern JS_FRIEND_API(void)
1687 js::RemoveRawValueRoot(JSContext* cx, Value* vp)
1689 RemoveRoot(cx->runtime(), vp);
1692 void
1693 js::RemoveRoot(JSRuntime* rt, void* rp)
1695 rt->gc.removeRoot(rp);
1698 void
1699 GCRuntime::setMaxMallocBytes(size_t value)
 1702      * For compatibility, clamp any value that exceeds the maximum value of
 1703      * ptrdiff_t to that maximum (size_t(-1) >> 1).
1705 maxMallocBytes = (ptrdiff_t(value) >= 0) ? value : size_t(-1) >> 1;
1706 resetMallocBytes();
1707 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
1708 zone->setGCMaxMallocBytes(value);
1711 void
1712 GCRuntime::resetMallocBytes()
1714 mallocBytes = ptrdiff_t(maxMallocBytes);
1715 mallocGCTriggered = false;
1718 void
1719 GCRuntime::updateMallocCounter(JS::Zone* zone, size_t nbytes)
1721 mallocBytes -= ptrdiff_t(nbytes);
1722 if (MOZ_UNLIKELY(isTooMuchMalloc()))
1723 onTooMuchMalloc();
1724 else if (zone)
1725 zone->updateMallocCounter(nbytes);
1728 void
1729 GCRuntime::onTooMuchMalloc()
1731 if (!mallocGCTriggered)
1732 mallocGCTriggered = triggerGC(JS::gcreason::TOO_MUCH_MALLOC);
1735 /* static */ double
1736 ZoneHeapThreshold::computeZoneHeapGrowthFactorForHeapSize(size_t lastBytes,
1737 const GCSchedulingTunables& tunables,
1738 const GCSchedulingState& state)
1740 if (!tunables.isDynamicHeapGrowthEnabled())
1741 return 3.0;
1743 // For small zones, our collection heuristics do not matter much: favor
1744 // something simple in this case.
1745 if (lastBytes < 1 * 1024 * 1024)
1746 return tunables.lowFrequencyHeapGrowth();
1748 // If GC's are not triggering in rapid succession, use a lower threshold so
1749 // that we will collect garbage sooner.
1750 if (!state.inHighFrequencyGCMode())
1751 return tunables.lowFrequencyHeapGrowth();
1753 // The heap growth factor depends on the heap size after a GC and the GC
1754 // frequency. For low frequency GCs (more than 1sec between GCs) we let
1755 // the heap grow to 150%. For high frequency GCs we let the heap grow
1756 // depending on the heap size:
1757 // lastBytes < highFrequencyLowLimit: 300%
1758 // lastBytes > highFrequencyHighLimit: 150%
1759 // otherwise: linear interpolation between 300% and 150% based on lastBytes
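// For example, with illustrative parameter values minRatio = 1.5 (150%),
// maxRatio = 3.0 (300%), lowLimit = 100 MB and highLimit = 500 MB, a zone
// whose heap is 300 MB after a high frequency GC gets
//     factor = 3.0 - (3.0 - 1.5) * (300 - 100) / (500 - 100) = 2.25,
// i.e. its trigger threshold is set to roughly 225% of its current size.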
1761 // Use shorter names to make the operation comprehensible.
1762 double minRatio = tunables.highFrequencyHeapGrowthMin();
1763 double maxRatio = tunables.highFrequencyHeapGrowthMax();
1764 double lowLimit = tunables.highFrequencyLowLimitBytes();
1765 double highLimit = tunables.highFrequencyHighLimitBytes();
1767 if (lastBytes <= lowLimit)
1768 return maxRatio;
1770 if (lastBytes >= highLimit)
1771 return minRatio;
1773 double factor = maxRatio - ((maxRatio - minRatio) * ((lastBytes - lowLimit) /
1774 (highLimit - lowLimit)));
1775 JS_ASSERT(factor >= minRatio);
1776 JS_ASSERT(factor <= maxRatio);
1777 return factor;
1780 /* static */ size_t
1781 ZoneHeapThreshold::computeZoneTriggerBytes(double growthFactor, size_t lastBytes,
1782 JSGCInvocationKind gckind,
1783 const GCSchedulingTunables& tunables)
1785 size_t base = gckind == GC_SHRINK
1786 ? lastBytes
1787 : Max(lastBytes, tunables.gcZoneAllocThresholdBase());
1788 double trigger = double(base) * growthFactor;
1789 return size_t(Min(double(tunables.gcMaxBytes()), trigger));
1792 void
1793 ZoneHeapThreshold::updateAfterGC(size_t lastBytes, JSGCInvocationKind gckind,
1794 const GCSchedulingTunables& tunables,
1795 const GCSchedulingState& state)
1797 gcHeapGrowthFactor_ = computeZoneHeapGrowthFactorForHeapSize(lastBytes, tunables, state);
1798 gcTriggerBytes_ = computeZoneTriggerBytes(gcHeapGrowthFactor_, lastBytes, gckind, tunables);
1801 void
1802 ZoneHeapThreshold::updateForRemovedArena(const GCSchedulingTunables& tunables)
1804 size_t amount = ArenaSize * gcHeapGrowthFactor_;
1806 JS_ASSERT(amount > 0);
1807 JS_ASSERT(gcTriggerBytes_ >= amount);
1809 if (gcTriggerBytes_ - amount < tunables.gcZoneAllocThresholdBase() * gcHeapGrowthFactor_)
1810 return;
1812 gcTriggerBytes_ -= amount;
1815 Allocator::Allocator(Zone* zone)
1816 : zone_(zone)
1819 inline void
1820 GCMarker::delayMarkingArena(ArenaHeader* aheader)
1822 if (aheader->hasDelayedMarking) {
1823 /* Arena already scheduled to be marked later */
1824 return;
1826 aheader->setNextDelayedMarking(unmarkedArenaStackTop);
1827 unmarkedArenaStackTop = aheader;
1828 markLaterArenas++;
1831 void
1832 GCMarker::delayMarkingChildren(const void* thing)
1834 const Cell* cell = reinterpret_cast<const Cell*>(thing);
1835 cell->arenaHeader()->markOverflow = 1;
1836 delayMarkingArena(cell->arenaHeader());
1839 inline void
1840 ArenaLists::prepareForIncrementalGC(JSRuntime* rt)
1842 for (size_t i = 0; i != FINALIZE_LIMIT; ++i) {
1843 FreeList* freeList = &freeLists[i];
1844 if (!freeList->isEmpty()) {
1845 ArenaHeader* aheader = freeList->arenaHeader();
1846 aheader->allocatedDuringIncremental = true;
1847 rt->gc.marker.delayMarkingArena(aheader);
1852 inline void
1853 GCRuntime::arenaAllocatedDuringGC(JS::Zone* zone, ArenaHeader* arena)
1855 if (zone->needsIncrementalBarrier()) {
1856 arena->allocatedDuringIncremental = true;
1857 marker.delayMarkingArena(arena);
1858 } else if (zone->isGCSweeping()) {
1859 arena->setNextAllocDuringSweep(arenasAllocatedDuringSweep);
1860 arenasAllocatedDuringSweep = arena;
1864 inline void*
1865 ArenaLists::allocateFromArenaInline(Zone* zone, AllocKind thingKind,
1866 AutoMaybeStartBackgroundAllocation& maybeStartBackgroundAllocation)
1869 * Parallel JS Note:
1871 * This function can be called from parallel threads all of which
1872 * are associated with the same compartment. In that case, each
1873 * thread will have a distinct ArenaLists. Therefore, whenever we
1874 * fall through to pickChunk() we must be sure that we are holding
1875 * a lock.
1878 AutoLockGC maybeLock;
1880 bool backgroundFinalizationIsRunning = false;
1881 ArenaLists::BackgroundFinalizeState* bfs = &backgroundFinalizeState[thingKind];
1882 if (*bfs != BFS_DONE) {
1884 * We cannot search the arena list for free things while background
1885 * finalization is running, since it can modify the list at any moment.
1886 * So we always allocate a new arena in that case.
1888 JSRuntime* rt = zone->runtimeFromAnyThread();
1889 maybeLock.lock(rt);
1890 if (*bfs == BFS_RUN) {
1891 backgroundFinalizationIsRunning = true;
1892 } else if (*bfs == BFS_JUST_FINISHED) {
1893 /* See comments before BackgroundFinalizeState definition. */
1894 *bfs = BFS_DONE;
1895 } else {
1896 JS_ASSERT(*bfs == BFS_DONE);
1900 ArenaHeader* aheader;
1901 ArenaList* al = &arenaLists[thingKind];
1902 if (!backgroundFinalizationIsRunning && (aheader = al->arenaAfterCursor())) {
1904 * Normally, the empty arenas are returned to the chunk
1905 * and should not be present on the list. In parallel
1906 * execution, however, we keep empty arenas in the arena
1907 * list to avoid synchronizing on the chunk.
1909 JS_ASSERT(!aheader->isEmpty() || InParallelSection());
1911 al->moveCursorPast(aheader);
1914 * Move the free span stored in the arena to the free list and
1915 * allocate from it.
1917 FreeSpan firstFreeSpan = aheader->getFirstFreeSpan();
1918 freeLists[thingKind].setHead(&firstFreeSpan);
1919 aheader->setAsFullyUsed();
1920 if (MOZ_UNLIKELY(zone->wasGCStarted()))
1921 zone->runtimeFromMainThread()->gc.arenaAllocatedDuringGC(zone, aheader);
1922 void* thing = freeLists[thingKind].allocate(Arena::thingSize(thingKind));
1923 JS_ASSERT(thing); // This allocation is infallible.
1924 return thing;
1927 /* Make sure we hold the GC lock before we call pickChunk. */
1928 JSRuntime* rt = zone->runtimeFromAnyThread();
1929 if (!maybeLock.locked())
1930 maybeLock.lock(rt);
1931 Chunk* chunk = rt->gc.pickChunk(zone, maybeStartBackgroundAllocation);
1932 if (!chunk)
1933 return nullptr;
1936 * While we still hold the GC lock, get an arena from some chunk, mark it
1937 * as full as its single free span is moved to the free lists, and insert
1938 * it into the list as a fully allocated arena.
1940 JS_ASSERT(al->isCursorAtEnd());
1941 aheader = chunk->allocateArena(zone, thingKind);
1942 if (!aheader)
1943 return nullptr;
1945 if (MOZ_UNLIKELY(zone->wasGCStarted()))
1946 rt->gc.arenaAllocatedDuringGC(zone, aheader);
1947 al->insertAtCursor(aheader);
1950 * Allocate from a newly allocated arena. The arena will have been set up
1951 * as fully used during the initialization so we have to re-mark it as
1952 * empty before allocating.
1954 JS_ASSERT(!aheader->hasFreeThings());
1955 Arena* arena = aheader->getArena();
1956 size_t thingSize = Arena::thingSize(thingKind);
1957 FreeSpan fullSpan;
1958 fullSpan.initFinal(arena->thingsStart(thingKind), arena->thingsEnd() - thingSize, thingSize);
1959 freeLists[thingKind].setHead(&fullSpan);
1960 return freeLists[thingKind].allocate(thingSize);
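/*
 * To summarize the paths above: when background finalization is not
 * touching this list we first try to reuse the arena just after the list
 * cursor; otherwise we take the GC lock, pick (or allocate) a chunk, carve
 * a fresh arena out of it, and seed the free list with that arena's full
 * span before handing out the first cell.
 */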
1963 void*
1964 ArenaLists::allocateFromArena(JS::Zone* zone, AllocKind thingKind)
1966 AutoMaybeStartBackgroundAllocation maybeStartBackgroundAllocation;
1967 return allocateFromArenaInline(zone, thingKind, maybeStartBackgroundAllocation);
1970 void
1971 ArenaLists::wipeDuringParallelExecution(JSRuntime* rt)
1973 JS_ASSERT(InParallelSection());
1975 // First, check that all objects we have allocated are eligible
1976 // for background finalization. The idea is that we will free
1977 // (below) ALL background finalizable objects, because we know (by
1978 // the rules of parallel execution) they are not reachable except
1979 // by other thread-local objects. However, if there were any
1980 // objects ineligible for background finalization, they might retain
1981 // a reference to one of these background finalizable objects, and
1982 // that'd be bad.
1983 for (unsigned i = 0; i < FINALIZE_LAST; i++) {
1984 AllocKind thingKind = AllocKind(i);
1985 if (!IsBackgroundFinalized(thingKind) && !arenaLists[thingKind].isEmpty())
1986 return;
1989 // Finalize all background finalizable objects immediately and
1990 // return the (now empty) arenas back to arena list.
1991 FreeOp fop(rt);
1992 for (unsigned i = 0; i < FINALIZE_OBJECT_LAST; i++) {
1993 AllocKind thingKind = AllocKind(i);
1995 if (!IsBackgroundFinalized(thingKind))
1996 continue;
1998 if (!arenaLists[i].isEmpty()) {
1999 purge(thingKind);
2000 forceFinalizeNow(&fop, thingKind);
2005 /* Compacting GC */
2007 bool
2008 GCRuntime::shouldCompact()
2010 #ifdef JSGC_COMPACTING
2011 return invocationKind == GC_SHRINK && !compactingDisabled;
2012 #else
2013 return false;
2014 #endif
2017 #ifdef JSGC_COMPACTING
2019 void
2020 GCRuntime::disableCompactingGC()
2022 ++rt->gc.compactingDisabled;
2025 void
2026 GCRuntime::enableCompactingGC()
2028 JS_ASSERT(compactingDisabled > 0);
2029 --compactingDisabled;
2032 AutoDisableCompactingGC::AutoDisableCompactingGC(JSRuntime* rt)
2033 : gc(rt->gc)
2035 gc.disableCompactingGC();
2038 AutoDisableCompactingGC::~AutoDisableCompactingGC()
2040 gc.enableCompactingGC();
2043 static void
2044 ForwardCell(Cell* dest, Cell* src)
2046 // Mark a cell as having been relocated and store a forwarding pointer to
2047 // the new cell.
2048 MOZ_ASSERT(src->tenuredZone() == dest->tenuredZone());
2050 // Putting the values this way round is a terrible hack to make
2051 // ObjectImpl::zone() work on forwarded objects.
2052 MOZ_ASSERT(ObjectImpl::offsetOfShape() == 0);
2053 uintptr_t* ptr = reinterpret_cast<uintptr_t*>(src);
2054 ptr[0] = reinterpret_cast<uintptr_t>(dest); // Forwarding address
2055 ptr[1] = ForwardedCellMagicValue; // Moved!
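// A relocated cell therefore reads as { new address, ForwardedCellMagicValue }
// when viewed as two words; IsForwarded() and Forwarded() (used elsewhere in
// this file) presumably test the second word against the magic value and
// return the first word, respectively.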
2058 static bool
2059 ArenaContainsGlobal(ArenaHeader* arena)
2061 if (arena->getAllocKind() > FINALIZE_OBJECT_LAST)
2062 return false;
2064 for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
2065 JSObject* obj = static_cast<JSObject*>(i.getCell());
2066 if (obj->is<GlobalObject>())
2067 return true;
2070 return false;
2073 static bool
2074 CanRelocateArena(ArenaHeader* arena)
2077 * We can't currently move global objects because their address is baked
2078 * into compiled code. We therefore skip moving the contents of any arena
2079 * containing a global if Ion or Baseline is enabled.
2081 JSRuntime* rt = arena->zone->runtimeFromMainThread();
2082 return arena->getAllocKind() <= FINALIZE_OBJECT_LAST &&
2083 ((!rt->options().baseline() && !rt->options().ion()) || !ArenaContainsGlobal(arena));
2086 static bool
2087 ShouldRelocateArena(ArenaHeader* arena)
2089 #ifdef JS_GC_ZEAL
2090 if (arena->zone->runtimeFromMainThread()->gc.zeal() == ZealCompactValue)
2091 return true;
2092 #endif
2095 * Eventually, this will be based on brilliant heuristics that look at fill
2096 * percentage and fragmentation and... stuff.
2098 return arena->hasFreeThings();
2102 * Choose some arenas to relocate all cells out of and remove them from the
2103 * arena list. Return the head of the list of arenas to relocate.
2105 ArenaHeader*
2106 ArenaList::pickArenasToRelocate()
2108 check();
2109 ArenaHeader* head = nullptr;
2110 ArenaHeader** tailp = &head;
2112 // TODO: Only scan through the arenas with space available.
2113 ArenaHeader** arenap = &head_;
2114 while (*arenap) {
2115 ArenaHeader* arena = *arenap;
2116 JS_ASSERT(arena);
2117 if (CanRelocateArena(arena) && ShouldRelocateArena(arena)) {
2118 // Remove from arena list
2119 if (cursorp_ == &arena->next)
2120 cursorp_ = arenap;
2121 *arenap = arena->next;
2122 arena->next = nullptr;
2124 // Append to relocation list
2125 *tailp = arena;
2126 tailp = &arena->next;
2127 } else {
2128 arenap = &arena->next;
2132 check();
2133 return head;
2136 #ifdef DEBUG
2137 inline bool
2138 PtrIsInRange(void* ptr, void* start, size_t length)
2140 return uintptr_t(ptr) - uintptr_t(start) < length;
2142 #endif
2144 static bool
2145 RelocateCell(Zone* zone, Cell* src, AllocKind thingKind, size_t thingSize)
2147 // Allocate a new cell.
2148 void* dst = zone->allocator.arenas.allocateFromFreeList(thingKind, thingSize);
2149 if (!dst)
2150 dst = js::gc::ArenaLists::refillFreeListInGC(zone, thingKind);
2151 if (!dst)
2152 return false;
2154 // Copy source cell contents to destination.
2155 memcpy(dst, src, thingSize);
2157 // Fixup the pointer to inline object elements if necessary.
2158 if (thingKind <= FINALIZE_OBJECT_LAST) {
2159 JSObject* srcObj = static_cast<JSObject*>(src);
2160 JSObject* dstObj = static_cast<JSObject*>(dst);
2161 if (srcObj->hasFixedElements())
2162 dstObj->setFixedElements();
2164 if (srcObj->is<ArrayBufferObject>()) {
2165 // We must fix up any inline data pointers while we know the source
2166 // object and before we mark any of the views.
2167 ArrayBufferObject::fixupDataPointerAfterMovingGC(
2168 srcObj->as<ArrayBufferObject>(), dstObj->as<ArrayBufferObject>());
2169 } else if (srcObj->is<TypedArrayObject>()) {
2170 TypedArrayObject& typedArray = srcObj->as<TypedArrayObject>();
2171 if (!typedArray.hasBuffer()) {
2172 JS_ASSERT(srcObj->getPrivate() ==
2173 srcObj->fixedData(TypedArrayObject::FIXED_DATA_START));
2174 dstObj->setPrivate(dstObj->fixedData(TypedArrayObject::FIXED_DATA_START));
2179 JS_ASSERT_IF(dstObj->isNative(),
2180 !PtrIsInRange((HeapSlot*)dstObj->getDenseElements(), src, thingSize));
2183 // Copy the mark bits.
2184 static_cast<Cell*>(dst)->copyMarkBitsFrom(src);
2186 // Mark source cell as forwarded and leave a pointer to the destination.
2187 ForwardCell(static_cast<Cell*>(dst), src);
2189 return true;
2192 static bool
2193 RelocateArena(ArenaHeader* aheader)
2195 JS_ASSERT(aheader->allocated());
2196 JS_ASSERT(!aheader->hasDelayedMarking);
2197 JS_ASSERT(!aheader->markOverflow);
2198 JS_ASSERT(!aheader->allocatedDuringIncremental);
2200 Zone* zone = aheader->zone;
2202 AllocKind thingKind = aheader->getAllocKind();
2203 size_t thingSize = aheader->getThingSize();
2205 for (ArenaCellIterUnderFinalize i(aheader); !i.done(); i.next()) {
2206 if (!RelocateCell(zone, i.getCell(), thingKind, thingSize)) {
2207 MOZ_CRASH(); // TODO: Handle failure here.
2208 return false;
2212 return true;
2216 * Relocate all arenas identified by pickArenasToRelocate: for each arena,
2217 * relocate each cell within it, then tack it onto a list of relocated arenas.
2218 * Currently, we allow the relocation to fail, in which case the arena will be
2219 * moved back onto the list of arenas with space available. (I did this
2220 * originally to test my list manipulation before implementing the actual
2221 * moving, with half a thought to allowing pinning (moving only a portion of
2222 * the cells in an arena), but now it's probably just dead weight. FIXME)
2224 ArenaHeader*
2225 ArenaList::relocateArenas(ArenaHeader* toRelocate, ArenaHeader* relocated)
2227 check();
2229 while (ArenaHeader* arena = toRelocate) {
2230 toRelocate = arena->next;
2232 if (RelocateArena(arena)) {
2233 // Prepend to list of relocated arenas
2234 arena->next = relocated;
2235 relocated = arena;
2236 } else {
2237 // For some reason, the arena did not end up empty. Prepend it to
2238 // the portion of the list that the cursor is pointing to (the
2239 // arenas with space available) so that it will be used for future
2240 // allocations.
2241 JS_ASSERT(arena->hasFreeThings());
2242 insertAtCursor(arena);
2246 check();
2248 return relocated;
2251 ArenaHeader*
2252 ArenaLists::relocateArenas(ArenaHeader* relocatedList)
2254 // Flush all the freeLists back into the arena headers
2255 purge();
2256 checkEmptyFreeLists();
2258 for (size_t i = 0; i < FINALIZE_LIMIT; i++) {
2259 ArenaList& al = arenaLists[i];
2260 ArenaHeader* toRelocate = al.pickArenasToRelocate();
2261 if (toRelocate)
2262 relocatedList = al.relocateArenas(toRelocate, relocatedList);
2266 * When we allocate new locations for cells, we use
2267 * allocateFromFreeList(). Reset the free list again so that
2268 * AutoCopyFreeListToArenasForGC doesn't complain that the free lists
2269 * are different now.
2271 purge();
2272 checkEmptyFreeLists();
2274 return relocatedList;
2277 ArenaHeader*
2278 GCRuntime::relocateArenas()
2280 gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT_MOVE);
2282 ArenaHeader* relocatedList = nullptr;
2283 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
2284 JS_ASSERT(zone->isGCFinished());
2285 JS_ASSERT(!zone->isPreservingCode());
2287 // We cannot move atoms as we depend on their addresses being constant.
2288 if (!rt->isAtomsZone(zone)) {
2289 zone->setGCState(Zone::Compact);
2290 relocatedList = zone->allocator.arenas.relocateArenas(relocatedList);
2294 return relocatedList;
2297 struct MovingTracer : JSTracer {
2298 MovingTracer(JSRuntime* rt) : JSTracer(rt, Visit, TraceWeakMapValues) {}
2300 static void Visit(JSTracer* jstrc, void** thingp, JSGCTraceKind kind);
2301 static void Sweep(JSTracer* jstrc);
2304 void
2305 MovingTracer::Visit(JSTracer* jstrc, void** thingp, JSGCTraceKind kind)
2307 Cell* thing = static_cast<Cell*>(*thingp);
2308 Zone* zone = thing->tenuredZoneFromAnyThread();
2309 if (!zone->isGCCompacting()) {
2310 JS_ASSERT(!IsForwarded(thing));
2311 return;
2313 JS_ASSERT(CurrentThreadCanAccessZone(zone));
2315 if (IsForwarded(thing)) {
2316 Cell* dst = Forwarded(thing);
2317 *thingp = dst;
2321 void
2322 MovingTracer::Sweep(JSTracer* jstrc)
2324 JSRuntime* rt = jstrc->runtime();
2325 FreeOp* fop = rt->defaultFreeOp();
2327 WatchpointMap::sweepAll(rt);
2329 Debugger::sweepAll(fop);
2331 for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
2332 if (zone->isCollecting()) {
2333 bool oom = false;
2334 zone->sweep(fop, false, &oom);
2335 JS_ASSERT(!oom);
2337 for (CompartmentsInZoneIter c(zone); !c.done(); c.next()) {
2338 c->sweep(fop, false);
2340 } else {
2341 /* Update cross compartment wrappers into moved zones. */
2342 for (CompartmentsInZoneIter c(zone); !c.done(); c.next())
2343 c->sweepCrossCompartmentWrappers();
2347 /* Type inference may put more blocks here to free. */
2348 rt->freeLifoAlloc.freeAll();
2350 /* Clear the new object cache as this can contain cell pointers. */
2351 rt->newObjectCache.purge();
2355 * Update the internal pointers in a single cell.
2357 static void
2358 UpdateCellPointers(MovingTracer* trc, Cell* cell, JSGCTraceKind traceKind) {
2359 TraceChildren(trc, cell, traceKind);
2361 if (traceKind == JSTRACE_SHAPE) {
2362 Shape* shape = static_cast<Shape*>(cell);
2363 shape->fixupAfterMovingGC();
2364 } else if (traceKind == JSTRACE_BASE_SHAPE) {
2365 BaseShape* base = static_cast<BaseShape*>(cell);
2366 base->fixupAfterMovingGC();
2371 * Update pointers to relocated cells by doing a full heap traversal and sweep.
2373 * The latter is necessary to update weak references which are not marked as
2374 * part of the traversal.
2376 void
2377 GCRuntime::updatePointersToRelocatedCells()
2379 JS_ASSERT(rt->currentThreadHasExclusiveAccess());
2381 gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT_UPDATE);
2382 MovingTracer trc(rt);
2384 // TODO: We may need to fix up other weak pointers here.
2386 // Fixup compartment global pointers as these get accessed during marking.
2387 for (GCCompartmentsIter comp(rt); !comp.done(); comp.next())
2388 comp->fixupAfterMovingGC();
2390 // Fixup cross compartment wrappers as we assert the existence of wrappers in the map.
2391 for (CompartmentsIter comp(rt, SkipAtoms); !comp.done(); comp.next())
2392 comp->fixupCrossCompartmentWrappers(&trc);
2394 // Fixup generators as these are not normally traced.
2395 for (ContextIter i(rt); !i.done(); i.next()) {
2396 for (JSGenerator* gen = i.get()->innermostGenerator(); gen; gen = gen->prevGenerator)
2397 gen->obj = MaybeForwarded(gen->obj.get());
2400 // Iterate through all allocated cells to update internal pointers.
2401 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
2402 ArenaLists& al = zone->allocator.arenas;
2403 for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) {
2404 AllocKind thingKind = static_cast<AllocKind>(i);
2405 JSGCTraceKind traceKind = MapAllocToTraceKind(thingKind);
2406 for (ArenaHeader* arena = al.getFirstArena(thingKind); arena; arena = arena->next) {
2407 for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
2408 UpdateCellPointers(&trc, i.getCell(), traceKind);
2414 // Mark roots to update them.
2415 markRuntime(&trc, MarkRuntime);
2416 Debugger::markAll(&trc);
2417 Debugger::markCrossCompartmentDebuggerObjectReferents(&trc);
2419 for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
2420 WeakMapBase::markAll(c, &trc);
2421 if (c->watchpointMap)
2422 c->watchpointMap->markAll(&trc);
2425 // Mark all gray roots, making sure we call the trace callback to get the
2426 // current set.
2427 marker.resetBufferedGrayRoots();
2428 markAllGrayReferences(gcstats::PHASE_COMPACT_UPDATE_GRAY);
2430 MovingTracer::Sweep(&trc);
2433 void
2434 GCRuntime::releaseRelocatedArenas(ArenaHeader* relocatedList)
2436 // Release the relocated arenas, now containing only forwarding pointers
2438 #ifdef DEBUG
2439 for (ArenaHeader* arena = relocatedList; arena; arena = arena->next) {
2440 for (ArenaCellIterUnderFinalize i(arena); !i.done(); i.next()) {
2441 Cell* src = i.getCell();
2442 JS_ASSERT(IsForwarded(src));
2443 Cell* dest = Forwarded(src);
2444 JS_ASSERT(src->isMarked(BLACK) == dest->isMarked(BLACK));
2445 JS_ASSERT(src->isMarked(GRAY) == dest->isMarked(GRAY));
2448 #endif
2450 unsigned count = 0;
2451 while (relocatedList) {
2452 ArenaHeader* aheader = relocatedList;
2453 relocatedList = relocatedList->next;
2455 // Mark arena as empty
2456 AllocKind thingKind = aheader->getAllocKind();
2457 size_t thingSize = aheader->getThingSize();
2458 Arena* arena = aheader->getArena();
2459 FreeSpan fullSpan;
2460 fullSpan.initFinal(arena->thingsStart(thingKind), arena->thingsEnd() - thingSize, thingSize);
2461 aheader->setFirstFreeSpan(&fullSpan);
2463 #if defined(JS_CRASH_DIAGNOSTICS) || defined(JS_GC_ZEAL)
2464 JS_POISON(reinterpret_cast<void*>(arena->thingsStart(thingKind)),
2465 JS_MOVED_TENURED_PATTERN, Arena::thingsSpan(thingSize));
2466 #endif
2468 aheader->chunk()->releaseArena(aheader);
2469 ++count;
2472 AutoLockGC lock(rt);
2473 expireChunksAndArenas(true);
2476 #endif // JSGC_COMPACTING
2478 void
2479 ArenaLists::finalizeNow(FreeOp* fop, AllocKind thingKind)
2481 JS_ASSERT(!IsBackgroundFinalized(thingKind));
2482 forceFinalizeNow(fop, thingKind);
2485 void
2486 ArenaLists::forceFinalizeNow(FreeOp* fop, AllocKind thingKind)
2488 JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
2490 ArenaHeader* arenas = arenaLists[thingKind].head();
2491 if (!arenas)
2492 return;
2493 arenaLists[thingKind].clear();
2495 size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(thingKind));
2496 SortedArenaList finalizedSorted(thingsPerArena);
2498 SliceBudget budget;
2499 FinalizeArenas(fop, &arenas, finalizedSorted, thingKind, budget);
2500 JS_ASSERT(!arenas);
2502 arenaLists[thingKind] = finalizedSorted.toArenaList();
2505 void
2506 ArenaLists::queueForForegroundSweep(FreeOp* fop, AllocKind thingKind)
2508 JS_ASSERT(!IsBackgroundFinalized(thingKind));
2509 JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
2510 JS_ASSERT(!arenaListsToSweep[thingKind]);
2512 arenaListsToSweep[thingKind] = arenaLists[thingKind].head();
2513 arenaLists[thingKind].clear();
2516 inline void
2517 ArenaLists::queueForBackgroundSweep(FreeOp* fop, AllocKind thingKind)
2519 JS_ASSERT(IsBackgroundFinalized(thingKind));
2520 JS_ASSERT(!fop->runtime()->gc.isBackgroundSweeping());
2522 ArenaList* al = &arenaLists[thingKind];
2523 if (al->isEmpty()) {
2524 JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
2525 return;
2529 * The state can be BFS_DONE, or BFS_JUST_FINISHED if we have not allocated
2530 * any GC things from the arena list since the previous background finalization.
2532 JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE ||
2533 backgroundFinalizeState[thingKind] == BFS_JUST_FINISHED);
2535 arenaListsToSweep[thingKind] = al->head();
2536 al->clear();
2537 backgroundFinalizeState[thingKind] = BFS_RUN;
2540 /*static*/ void
2541 ArenaLists::backgroundFinalize(FreeOp* fop, ArenaHeader* listHead, bool onBackgroundThread)
2543 JS_ASSERT(listHead);
2544 AllocKind thingKind = listHead->getAllocKind();
2545 Zone* zone = listHead->zone;
2547 size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(thingKind));
2548 SortedArenaList finalizedSorted(thingsPerArena);
2550 SliceBudget budget;
2551 FinalizeArenas(fop, &listHead, finalizedSorted, thingKind, budget);
2552 JS_ASSERT(!listHead);
2554 // When arenas are queued for background finalization, all arenas are moved
2555 // to arenaListsToSweep[], leaving the arenaLists[] empty. However, new
2556 // arenas may be allocated before background finalization finishes; now that
2557 // finalization is complete, we want to merge these lists back together.
2558 ArenaLists* lists = &zone->allocator.arenas;
2559 ArenaList* al = &lists->arenaLists[thingKind];
2561 // Flatten |finalizedSorted| into a regular ArenaList.
2562 ArenaList finalized = finalizedSorted.toArenaList();
2564 // Store this for later, since merging may change the state of |finalized|.
2565 bool allClear = finalized.isEmpty();
2567 AutoLockGC lock(fop->runtime());
2568 JS_ASSERT(lists->backgroundFinalizeState[thingKind] == BFS_RUN);
2570 // Join |al| and |finalized| into a single list.
2571 *al = finalized.insertListWithCursorAtEnd(*al);
2574 * We must set the state to BFS_JUST_FINISHED if we are running on the
2575 * background thread and we have touched the arena list, even if we only
2576 * added fully allocated arenas without any free things to the list. This
2577 * ensures that the allocation thread takes the GC lock and all writes to
2578 * the free list elements are propagated. As we always take the GC lock
2579 * when allocating new arenas from the chunks, we can set the state to
2580 * BFS_DONE if we have released all finalized arenas back to their chunks.
2582 if (onBackgroundThread && !allClear)
2583 lists->backgroundFinalizeState[thingKind] = BFS_JUST_FINISHED;
2584 else
2585 lists->backgroundFinalizeState[thingKind] = BFS_DONE;
2587 lists->arenaListsToSweep[thingKind] = nullptr;
2590 void
2591 ArenaLists::queueObjectsForSweep(FreeOp* fop)
2593 gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_OBJECT);
2595 finalizeNow(fop, FINALIZE_OBJECT0);
2596 finalizeNow(fop, FINALIZE_OBJECT2);
2597 finalizeNow(fop, FINALIZE_OBJECT4);
2598 finalizeNow(fop, FINALIZE_OBJECT8);
2599 finalizeNow(fop, FINALIZE_OBJECT12);
2600 finalizeNow(fop, FINALIZE_OBJECT16);
2602 queueForBackgroundSweep(fop, FINALIZE_OBJECT0_BACKGROUND);
2603 queueForBackgroundSweep(fop, FINALIZE_OBJECT2_BACKGROUND);
2604 queueForBackgroundSweep(fop, FINALIZE_OBJECT4_BACKGROUND);
2605 queueForBackgroundSweep(fop, FINALIZE_OBJECT8_BACKGROUND);
2606 queueForBackgroundSweep(fop, FINALIZE_OBJECT12_BACKGROUND);
2607 queueForBackgroundSweep(fop, FINALIZE_OBJECT16_BACKGROUND);
2610 void
2611 ArenaLists::queueStringsAndSymbolsForSweep(FreeOp* fop)
2613 gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_STRING);
2615 queueForBackgroundSweep(fop, FINALIZE_FAT_INLINE_STRING);
2616 queueForBackgroundSweep(fop, FINALIZE_STRING);
2617 queueForBackgroundSweep(fop, FINALIZE_SYMBOL);
2619 queueForForegroundSweep(fop, FINALIZE_EXTERNAL_STRING);
2622 void
2623 ArenaLists::queueScriptsForSweep(FreeOp* fop)
2625 gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_SCRIPT);
2626 queueForForegroundSweep(fop, FINALIZE_SCRIPT);
2627 queueForForegroundSweep(fop, FINALIZE_LAZY_SCRIPT);
2630 void
2631 ArenaLists::queueJitCodeForSweep(FreeOp* fop)
2633 gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_JITCODE);
2634 queueForForegroundSweep(fop, FINALIZE_JITCODE);
2637 void
2638 ArenaLists::queueShapesForSweep(FreeOp* fop)
2640 gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_SHAPE);
2642 queueForBackgroundSweep(fop, FINALIZE_SHAPE);
2643 queueForBackgroundSweep(fop, FINALIZE_BASE_SHAPE);
2644 queueForBackgroundSweep(fop, FINALIZE_TYPE_OBJECT);
2647 static void*
2648 RunLastDitchGC(JSContext* cx, JS::Zone* zone, AllocKind thingKind)
2651 * In parallel sections, we do not attempt to refill the free list
2652 * and hence never trigger a last-ditch GC.
2654 JS_ASSERT(!InParallelSection());
2656 PrepareZoneForGC(zone);
2658 JSRuntime* rt = cx->runtime();
2660 /* The last ditch GC preserves all atoms. */
2661 AutoKeepAtoms keepAtoms(cx->perThreadData);
2662 rt->gc.gc(GC_NORMAL, JS::gcreason::LAST_DITCH);
2665 * The JSGC_END callback can legitimately allocate new GC
2666 * things and populate the free list. If that happens, just
2667 * return that list head.
2669 size_t thingSize = Arena::thingSize(thingKind);
2670 if (void* thing = zone->allocator.arenas.allocateFromFreeList(thingKind, thingSize))
2671 return thing;
2673 return nullptr;
2676 template <AllowGC allowGC>
2677 /* static */ void*
2678 ArenaLists::refillFreeList(ThreadSafeContext* cx, AllocKind thingKind)
2680 JS_ASSERT(cx->allocator()->arenas.freeLists[thingKind].isEmpty());
2681 JS_ASSERT_IF(cx->isJSContext(), !cx->asJSContext()->runtime()->isHeapBusy());
2683 Zone* zone = cx->allocator()->zone_;
2685 bool runGC = cx->allowGC() && allowGC &&
2686 cx->asJSContext()->runtime()->gc.incrementalState != NO_INCREMENTAL &&
2687 zone->usage.gcBytes() > zone->threshold.gcTriggerBytes();
2689 JS_ASSERT_IF(cx->isJSContext() && allowGC,
2690 !cx->asJSContext()->runtime()->currentThreadHasExclusiveAccess());
2692 for (;;) {
2693 if (MOZ_UNLIKELY(runGC)) {
2694 if (void* thing = RunLastDitchGC(cx->asJSContext(), zone, thingKind))
2695 return thing;
2698 AutoMaybeStartBackgroundAllocation maybeStartBackgroundAllocation;
2700 if (cx->isJSContext()) {
2702 * allocateFromArena may fail while background finalization is still
2703 * running. If we are on the main thread, we want to wait for it to finish
2704 * and restart. However, checking for that is racy as the background
2705 * finalization could free some things after allocateFromArena decided
2706 * to fail, but by the time we check it may have already stopped. To avoid
2707 * this race we always try to allocate twice.
2709 for (bool secondAttempt = false; ; secondAttempt = true) {
2710 void* thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind,
2711 maybeStartBackgroundAllocation);
2712 if (MOZ_LIKELY(!!thing))
2713 return thing;
2714 if (secondAttempt)
2715 break;
2717 cx->asJSContext()->runtime()->gc.waitBackgroundSweepEnd();
2719 } else {
2721 * If we're off the main thread, we try to allocate once and
2722 * return whatever value we get. If we aren't in a ForkJoin
2723 * session (i.e. we are on a helper thread running asynchronously
2724 * with the main thread), we need to first ensure the main thread
2725 * is not in a GC session.
2727 mozilla::Maybe<AutoLockHelperThreadState> lock;
2728 JSRuntime* rt = zone->runtimeFromAnyThread();
2729 if (rt->exclusiveThreadsPresent()) {
2730 lock.emplace();
2731 while (rt->isHeapBusy())
2732 HelperThreadState().wait(GlobalHelperThreadState::PRODUCER);
2735 void* thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind,
2736 maybeStartBackgroundAllocation);
2737 if (thing)
2738 return thing;
2741 if (!cx->allowGC() || !allowGC)
2742 return nullptr;
2745 * We failed to allocate. Run the GC if we haven't done it already.
2746 * Otherwise report OOM.
2748 if (runGC)
2749 break;
2750 runGC = true;
2753 JS_ASSERT(allowGC);
2754 js_ReportOutOfMemory(cx);
2755 return nullptr;
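/*
 * Recap of the retry protocol above: attempt the allocation (twice on the
 * main thread, to close the race with background finalization); if that
 * fails, GC is permitted and we have not already run one this call, run
 * the last-ditch GC and loop; otherwise report OOM and return nullptr.
 */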
2758 template void*
2759 ArenaLists::refillFreeList<NoGC>(ThreadSafeContext* cx, AllocKind thingKind);
2761 template void*
2762 ArenaLists::refillFreeList<CanGC>(ThreadSafeContext* cx, AllocKind thingKind);
2764 /* static */ void*
2765 ArenaLists::refillFreeListInGC(Zone* zone, AllocKind thingKind)
2768 * Called by compacting GC to refill a free list while we are in a GC.
2771 Allocator& allocator = zone->allocator;
2772 JS_ASSERT(allocator.arenas.freeLists[thingKind].isEmpty());
2773 mozilla::DebugOnly<JSRuntime*> rt = zone->runtimeFromMainThread();
2774 JS_ASSERT(rt->isHeapMajorCollecting());
2775 JS_ASSERT(!rt->gc.isBackgroundSweeping());
2777 return allocator.arenas.allocateFromArena(zone, thingKind);
2780 /* static */ int64_t
2781 SliceBudget::TimeBudget(int64_t millis)
2783 return millis * PRMJ_USEC_PER_MSEC;
2786 /* static */ int64_t
2787 SliceBudget::WorkBudget(int64_t work)
2789 /* For work = 0 not to mean Unlimited, we subtract 1. */
2790 return -work - 1;
2793 SliceBudget::SliceBudget()
2795 reset();
2798 SliceBudget::SliceBudget(int64_t budget)
2800 if (budget == Unlimited) {
2801 reset();
2802 } else if (budget > 0) {
2803 deadline = PRMJ_Now() + budget;
2804 counter = CounterReset;
2805 } else {
2806 deadline = 0;
2807 counter = -budget - 1;
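// The single int64_t argument thus encodes both budget kinds: a positive
// value is a time budget in microseconds (TimeBudget(10) yields a deadline
// 10ms from now), a negative value -n - 1 is a work budget of n items
// (WorkBudget(1000) yields -1001, decoded back to counter = 1000 above),
// and Unlimited simply resets the budget.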
2811 bool
2812 SliceBudget::checkOverBudget()
2814 bool over = PRMJ_Now() > deadline;
2815 if (!over)
2816 counter = CounterReset;
2817 return over;
2820 void
2821 js::MarkCompartmentActive(InterpreterFrame* fp)
2823 fp->script()->compartment()->zone()->active = true;
2826 void
2827 GCRuntime::requestInterrupt(JS::gcreason::Reason reason)
2829 if (isNeeded)
2830 return;
2832 isNeeded = true;
2833 triggerReason = reason;
2834 rt->requestInterrupt(JSRuntime::RequestInterruptMainThread);
2837 bool
2838 GCRuntime::triggerGC(JS::gcreason::Reason reason)
2840 /* Wait till end of parallel section to trigger GC. */
2841 if (InParallelSection()) {
2842 ForkJoinContext::current()->requestGC(reason);
2843 return true;
2847 * Don't trigger GCs if this is being called off the main thread from
2848 * onTooMuchMalloc().
2850 if (!CurrentThreadCanAccessRuntime(rt))
2851 return false;
2853 /* Don't trigger GCs when allocating under the interrupt callback lock. */
2854 if (rt->currentThreadOwnsInterruptLock())
2855 return false;
2857 /* GC is already running. */
2858 if (rt->isHeapCollecting())
2859 return false;
2861 JS::PrepareForFullGC(rt);
2862 requestInterrupt(reason);
2863 return true;
2866 bool
2867 GCRuntime::triggerZoneGC(Zone* zone, JS::gcreason::Reason reason)
2870 * If parallel threads are running, wait until they
2871 * have stopped before triggering a GC.
2873 if (InParallelSection()) {
2874 ForkJoinContext::current()->requestZoneGC(zone, reason);
2875 return true;
2878 /* Zones in use by a thread with an exclusive context can't be collected. */
2879 if (zone->usedByExclusiveThread)
2880 return false;
2882 /* Don't trigger GCs when allocating under the interrupt callback lock. */
2883 if (rt->currentThreadOwnsInterruptLock())
2884 return false;
2886 /* GC is already running. */
2887 if (rt->isHeapCollecting())
2888 return false;
2890 #ifdef JS_GC_ZEAL
2891 if (zealMode == ZealAllocValue) {
2892 triggerGC(reason);
2893 return true;
2895 #endif
2897 if (rt->isAtomsZone(zone)) {
2898 /* We can't do a zone GC of the atoms compartment. */
2899 triggerGC(reason);
2900 return true;
2903 PrepareZoneForGC(zone);
2904 requestInterrupt(reason);
2905 return true;
2908 bool
2909 GCRuntime::maybeGC(Zone* zone)
2911 JS_ASSERT(CurrentThreadCanAccessRuntime(rt));
2913 #ifdef JS_GC_ZEAL
2914 if (zealMode == ZealAllocValue || zealMode == ZealPokeValue) {
2915 JS::PrepareForFullGC(rt);
2916 gc(GC_NORMAL, JS::gcreason::MAYBEGC);
2917 return true;
2919 #endif
2921 if (isNeeded) {
2922 gcSlice(GC_NORMAL, JS::gcreason::MAYBEGC);
2923 return true;
2926 double factor = schedulingState.inHighFrequencyGCMode() ? 0.85 : 0.9;
2927 if (zone->usage.gcBytes() > 1024 * 1024 &&
2928 zone->usage.gcBytes() >= factor * zone->threshold.gcTriggerBytes() &&
2929 incrementalState == NO_INCREMENTAL &&
2930 !isBackgroundSweeping())
2932 PrepareZoneForGC(zone);
2933 gcSlice(GC_NORMAL, JS::gcreason::MAYBEGC);
2934 return true;
2937 return false;
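/*
 * Illustrative numbers for the heuristic above: with a trigger of 40MB and
 * the high-frequency factor of 0.85, a zone using 35MB (over the 1MB floor
 * and over 0.85 * 40MB = 34MB) gets an incremental GC_NORMAL slice, provided
 * no incremental GC is already in progress and background sweeping has
 * finished.
 */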
2940 void
2941 GCRuntime::maybePeriodicFullGC()
2944 * Trigger a periodic full GC.
2946 * This is a source of non-determinism, but is not called from the shell.
2948 * Access to the counters and, on 32-bit, setting nextFullGCTime below
2949 * is not atomic and a race condition could trigger or suppress the GC. We
2950 * tolerate this.
2952 #ifndef JS_MORE_DETERMINISTIC
2953 int64_t now = PRMJ_Now();
2954 if (nextFullGCTime && nextFullGCTime <= now) {
2955 if (chunkAllocationSinceLastGC ||
2956 numArenasFreeCommitted > decommitThreshold)
2958 JS::PrepareForFullGC(rt);
2959 gcSlice(GC_SHRINK, JS::gcreason::MAYBEGC);
2960 } else {
2961 nextFullGCTime = now + GC_IDLE_FULL_SPAN;
2964 #endif
2967 void
2968 GCRuntime::decommitArenasFromAvailableList(Chunk** availableListHeadp)
2970 Chunk* chunk = *availableListHeadp;
2971 if (!chunk)
2972 return;
2975 * Decommit is expensive so we avoid holding the GC lock while calling it.
2977 * We decommit from the tail of the list to minimize interference with the
2978 * main thread that may start to allocate things at this point.
2980 * The arena that is being decommitted outside the GC lock must not be
2981 * available for allocations either via the free list or via the
2982 * decommittedArenas bitmap. For that we just fetch the arena from the
2983 * free list before the decommit, pretending it was allocated. If this
2984 * arena is also the single free arena in the chunk, then we must remove the
2985 * chunk from the available list before we release the lock so that the
2986 * allocation thread never sees chunks with no free arenas on the available list.
2988 * After we retake the lock, we mark the arena as free and decommitted if
2989 * the decommit was successful. We must also add the chunk back to the
2990 * available list if we removed it previously or when the main thread
2991 * has allocated all remaining free arenas in the chunk.
2993 * We also must make sure that the aheader is not accessed again after we
2994 * decommit the arena.
2996 JS_ASSERT(chunk->info.prevp == availableListHeadp);
2997 while (Chunk* next = chunk->info.next) {
2998 JS_ASSERT(next->info.prevp == &chunk->info.next);
2999 chunk = next;
3002 for (;;) {
3003 while (chunk->info.numArenasFreeCommitted != 0) {
3004 ArenaHeader* aheader = chunk->fetchNextFreeArena(rt);
3006 Chunk** savedPrevp = chunk->info.prevp;
3007 if (!chunk->hasAvailableArenas())
3008 chunk->removeFromAvailableList();
3010 size_t arenaIndex = Chunk::arenaIndex(aheader->arenaAddress());
3011 bool ok;
3014 * If the main thread waits for the decommit to finish, skip the
3015 * potentially expensive unlock/lock pair on the contested
3016 * lock.
3018 Maybe<AutoUnlockGC> maybeUnlock;
3019 if (!isHeapBusy())
3020 maybeUnlock.emplace(rt);
3021 ok = MarkPagesUnused(aheader->getArena(), ArenaSize);
3024 if (ok) {
3025 ++chunk->info.numArenasFree;
3026 chunk->decommittedArenas.set(arenaIndex);
3027 } else {
3028 chunk->addArenaToFreeList(rt, aheader);
3030 JS_ASSERT(chunk->hasAvailableArenas());
3031 JS_ASSERT(!chunk->unused());
3032 if (chunk->info.numArenasFree == 1) {
3034 * Put the chunk back on the available list, either at the
3035 * point where it was before, to preserve the available list
3036 * that we enumerate, or, when the allocation thread has fully
3037 * used all the previous chunks, at the beginning of the
3038 * available list.
3040 Chunk** insertPoint = savedPrevp;
3041 if (savedPrevp != availableListHeadp) {
3042 Chunk* prev = Chunk::fromPointerToNext(savedPrevp);
3043 if (!prev->hasAvailableArenas())
3044 insertPoint = availableListHeadp;
3046 chunk->insertToAvailableList(insertPoint);
3047 } else {
3048 JS_ASSERT(chunk->info.prevp);
3051 if (chunkAllocationSinceLastGC || !ok) {
3053 * The allocator thread has started to get new chunks. We should stop
3054 * to avoid decommitting arenas in just-allocated chunks.
3056 return;
3061 * chunk->info.prevp becomes null when the allocator thread consumed
3062 * all chunks from the available list.
3064 JS_ASSERT_IF(chunk->info.prevp, *chunk->info.prevp == chunk);
3065 if (chunk->info.prevp == availableListHeadp || !chunk->info.prevp)
3066 break;
3069 * prevp exists and is not the list head. It must point to the next
3070 * field of the previous chunk.
3072 chunk = chunk->getPrevious();
3076 void
3077 GCRuntime::decommitArenas()
3079 decommitArenasFromAvailableList(&systemAvailableChunkListHead);
3080 decommitArenasFromAvailableList(&userAvailableChunkListHead);
3083 /* Must be called with the GC lock taken. */
3084 void
3085 GCRuntime::expireChunksAndArenas(bool shouldShrink)
3087 #ifdef JSGC_FJGENERATIONAL
3088 rt->threadPool.pruneChunkCache();
3089 #endif
3091 if (Chunk* toFree = expireChunkPool(shouldShrink, false)) {
3092 AutoUnlockGC unlock(rt);
3093 freeChunkList(toFree);
3096 if (shouldShrink)
3097 decommitArenas();
3100 void
3101 GCRuntime::sweepBackgroundThings(bool onBackgroundThread)
3104 * We must finalize in the correct order; see comments in
3105 * finalizeObjects.
3107 FreeOp fop(rt);
3108 for (int phase = 0 ; phase < BackgroundPhaseCount ; ++phase) {
3109 for (Zone* zone = sweepingZones; zone; zone = zone->gcNextGraphNode) {
3110 for (int index = 0 ; index < BackgroundPhaseLength[phase] ; ++index) {
3111 AllocKind kind = BackgroundPhases[phase][index];
3112 ArenaHeader* arenas = zone->allocator.arenas.arenaListsToSweep[kind];
3113 if (arenas)
3114 ArenaLists::backgroundFinalize(&fop, arenas, onBackgroundThread);
3119 sweepingZones = nullptr;
3122 void
3123 GCRuntime::assertBackgroundSweepingFinished()
3125 #ifdef DEBUG
3126 JS_ASSERT(!sweepingZones);
3127 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
3128 for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) {
3129 JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
3130 JS_ASSERT(zone->allocator.arenas.doneBackgroundFinalize(AllocKind(i)));
3133 #endif
3136 unsigned
3137 js::GetCPUCount()
3139 static unsigned ncpus = 0;
3140 if (ncpus == 0) {
3141 # ifdef XP_WIN
3142 SYSTEM_INFO sysinfo;
3143 GetSystemInfo(&sysinfo);
3144 ncpus = unsigned(sysinfo.dwNumberOfProcessors);
3145 # else
3146 long n = sysconf(_SC_NPROCESSORS_ONLN);
3147 ncpus = (n > 0) ? unsigned(n) : 1;
3148 # endif
3150 return ncpus;
3153 bool
3154 GCHelperState::init()
3156 if (!(done = PR_NewCondVar(rt->gc.lock)))
3157 return false;
3159 if (CanUseExtraThreads()) {
3160 backgroundAllocation = (GetCPUCount() >= 2);
3161 HelperThreadState().ensureInitialized();
3162 } else {
3163 backgroundAllocation = false;
3166 return true;
3169 void
3170 GCHelperState::finish()
3172 if (!rt->gc.lock) {
3173 JS_ASSERT(state_ == IDLE);
3174 return;
3177 // Wait for any lingering background sweeping to finish.
3178 waitBackgroundSweepEnd();
3180 if (done)
3181 PR_DestroyCondVar(done);
3184 GCHelperState::State
3185 GCHelperState::state()
3187 JS_ASSERT(rt->gc.currentThreadOwnsGCLock());
3188 return state_;
3191 void
3192 GCHelperState::setState(State state)
3194 JS_ASSERT(rt->gc.currentThreadOwnsGCLock());
3195 state_ = state;
3198 void
3199 GCHelperState::startBackgroundThread(State newState)
3201 JS_ASSERT(!thread && state() == IDLE && newState != IDLE);
3202 setState(newState);
3204 if (!HelperThreadState().gcHelperWorklist().append(this))
3205 CrashAtUnhandlableOOM("Could not add to pending GC helpers list");
3206 HelperThreadState().notifyAll(GlobalHelperThreadState::PRODUCER);
3209 void
3210 GCHelperState::waitForBackgroundThread()
3212 JS_ASSERT(CurrentThreadCanAccessRuntime(rt));
3214 rt->gc.lockOwner = nullptr;
3215 PR_WaitCondVar(done, PR_INTERVAL_NO_TIMEOUT);
3216 #ifdef DEBUG
3217 rt->gc.lockOwner = PR_GetCurrentThread();
3218 #endif
3221 void
3222 GCHelperState::work()
3224 JS_ASSERT(CanUseExtraThreads());
3226 AutoLockGC lock(rt);
3228 JS_ASSERT(!thread);
3229 thread = PR_GetCurrentThread();
3231 TraceLogger* logger = TraceLoggerForCurrentThread();
3233 switch (state()) {
3235 case IDLE:
3236 MOZ_CRASH("GC helper triggered on idle state");
3237 break;
3239 case SWEEPING: {
3240 AutoTraceLog logSweeping(logger, TraceLogger::GCSweeping);
3241 doSweep();
3242 JS_ASSERT(state() == SWEEPING);
3243 break;
3246 case ALLOCATING: {
3247 AutoTraceLog logAllocation(logger, TraceLogger::GCAllocation);
3248 do {
3249 Chunk* chunk;
3251 AutoUnlockGC unlock(rt);
3252 chunk = Chunk::allocate(rt);
3255 /* OOM stops the background allocation. */
3256 if (!chunk)
3257 break;
3258 JS_ASSERT(chunk->info.numArenasFreeCommitted == 0);
3259 rt->gc.chunkPool.put(chunk);
3260 } while (state() == ALLOCATING && rt->gc.wantBackgroundAllocation());
3262 JS_ASSERT(state() == ALLOCATING || state() == CANCEL_ALLOCATION);
3263 break;
3266 case CANCEL_ALLOCATION:
3267 break;
3270 setState(IDLE);
3271 thread = nullptr;
3273 PR_NotifyAllCondVar(done);
3276 void
3277 GCHelperState::startBackgroundSweep(bool shouldShrink)
3279 JS_ASSERT(CanUseExtraThreads());
3281 AutoLockHelperThreadState helperLock;
3282 AutoLockGC lock(rt);
3283 JS_ASSERT(state() == IDLE);
3284 JS_ASSERT(!sweepFlag);
3285 sweepFlag = true;
3286 shrinkFlag = shouldShrink;
3287 startBackgroundThread(SWEEPING);
3290 /* Must be called with the GC lock taken. */
3291 void
3292 GCHelperState::startBackgroundShrink()
3294 JS_ASSERT(CanUseExtraThreads());
3295 switch (state()) {
3296 case IDLE:
3297 JS_ASSERT(!sweepFlag);
3298 shrinkFlag = true;
3299 startBackgroundThread(SWEEPING);
3300 break;
3301 case SWEEPING:
3302 shrinkFlag = true;
3303 break;
3304 case ALLOCATING:
3305 case CANCEL_ALLOCATION:
3307 * If we have started background allocation there is nothing to
3308 * shrink.
3310 break;
3314 void
3315 GCHelperState::waitBackgroundSweepEnd()
3317 AutoLockGC lock(rt);
3318 while (state() == SWEEPING)
3319 waitForBackgroundThread();
3320 if (rt->gc.incrementalState == NO_INCREMENTAL)
3321 rt->gc.assertBackgroundSweepingFinished();
3324 void
3325 GCHelperState::waitBackgroundSweepOrAllocEnd()
3327 AutoLockGC lock(rt);
3328 if (state() == ALLOCATING)
3329 setState(CANCEL_ALLOCATION);
3330 while (state() == SWEEPING || state() == CANCEL_ALLOCATION)
3331 waitForBackgroundThread();
3332 if (rt->gc.incrementalState == NO_INCREMENTAL)
3333 rt->gc.assertBackgroundSweepingFinished();
3336 /* Must be called with the GC lock taken. */
3337 inline void
3338 GCHelperState::startBackgroundAllocationIfIdle()
3340 if (state_ == IDLE)
3341 startBackgroundThread(ALLOCATING);
3344 /* Must be called with the GC lock taken. */
3345 void
3346 GCHelperState::doSweep()
3348 AutoSetThreadIsSweeping threadIsSweeping;
3350 if (sweepFlag) {
3351 sweepFlag = false;
3352 AutoUnlockGC unlock(rt);
3354 rt->gc.sweepBackgroundThings(true);
3356 rt->freeLifoAlloc.freeAll();
3359 bool shrinking = shrinkFlag;
3360 rt->gc.expireChunksAndArenas(shrinking);
3363 * The main thread may have called ShrinkGCBuffers while
3364 * ExpireChunksAndArenas(rt, false) was running, so we recheck the flag
3365 * afterwards.
3367 if (!shrinking && shrinkFlag) {
3368 shrinkFlag = false;
3369 rt->gc.expireChunksAndArenas(true);
3373 bool
3374 GCHelperState::onBackgroundThread()
3376 return PR_GetCurrentThread() == thread;
3379 bool
3380 GCRuntime::shouldReleaseObservedTypes()
3382 bool releaseTypes = false;
3384 #ifdef JS_GC_ZEAL
3385 if (zealMode != 0)
3386 releaseTypes = true;
3387 #endif
3389 /* We may miss the exact target GC due to resets. */
3390 if (majorGCNumber >= jitReleaseNumber)
3391 releaseTypes = true;
3393 if (releaseTypes)
3394 jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;
3396 return releaseTypes;
3400 * It's simpler if we preserve the invariant that every zone has at least one
3401 * compartment. If we know we're deleting the entire zone, then
3402 * SweepCompartments is allowed to delete all compartments. In this case,
3403 * |keepAtleastOne| is false. If some objects remain in the zone so that it
3404 * cannot be deleted, then we set |keepAtleastOne| to true, which prohibits
3405 * SweepCompartments from deleting every compartment. Instead, it preserves an
3406 * arbitrary compartment in the zone.
3408 void
3409 Zone::sweepCompartments(FreeOp* fop, bool keepAtleastOne, bool destroyingRuntime)
3411 JSRuntime* rt = runtimeFromMainThread();
3412 JSDestroyCompartmentCallback callback = rt->destroyCompartmentCallback;
3414 JSCompartment** read = compartments.begin();
3415 JSCompartment** end = compartments.end();
3416 JSCompartment** write = read;
3417 bool foundOne = false;
3418 while (read < end) {
3419 JSCompartment* comp = *read++;
3420 JS_ASSERT(!rt->isAtomsCompartment(comp));
3423 * Don't delete the last compartment if all the ones before it were
3424 * deleted and keepAtleastOne is true.
3426 bool dontDelete = read == end && !foundOne && keepAtleastOne;
3427 if ((!comp->marked && !dontDelete) || destroyingRuntime) {
3428 if (callback)
3429 callback(fop, comp);
3430 if (comp->principals)
3431 JS_DropPrincipals(rt, comp->principals);
3432 js_delete(comp);
3433 } else {
3434 *write++ = comp;
3435 foundOne = true;
3438 compartments.resize(write - compartments.begin());
3439 JS_ASSERT_IF(keepAtleastOne, !compartments.empty());
3442 void
3443 GCRuntime::sweepZones(FreeOp* fop, bool destroyingRuntime)
3445 MOZ_ASSERT_IF(destroyingRuntime, rt->gc.numActiveZoneIters == 0);
3446 if (rt->gc.numActiveZoneIters)
3447 return;
3449 JSZoneCallback callback = rt->destroyZoneCallback;
3451 /* Skip the atomsCompartment zone. */
3452 Zone** read = zones.begin() + 1;
3453 Zone** end = zones.end();
3454 Zone** write = read;
3455 JS_ASSERT(zones.length() >= 1);
3456 JS_ASSERT(rt->isAtomsZone(zones[0]));
3458 while (read < end) {
3459 Zone* zone = *read++;
3461 if (zone->wasGCStarted()) {
3462 if ((zone->allocator.arenas.arenaListsAreEmpty() && !zone->hasMarkedCompartments()) ||
3463 destroyingRuntime)
3465 zone->allocator.arenas.checkEmptyFreeLists();
3466 if (callback)
3467 callback(zone);
3468 zone->sweepCompartments(fop, false, destroyingRuntime);
3469 JS_ASSERT(zone->compartments.empty());
3470 fop->delete_(zone);
3471 continue;
3473 zone->sweepCompartments(fop, true, destroyingRuntime);
3475 *write++ = zone;
3477 zones.resize(write - zones.begin());
3480 static void
3481 PurgeRuntime(JSRuntime* rt)
3483 for (GCCompartmentsIter comp(rt); !comp.done(); comp.next())
3484 comp->purge();
3486 rt->freeLifoAlloc.transferUnusedFrom(&rt->tempLifoAlloc);
3487 rt->interpreterStack().purge(rt);
3489 rt->gsnCache.purge();
3490 rt->scopeCoordinateNameCache.purge();
3491 rt->newObjectCache.purge();
3492 rt->nativeIterCache.purge();
3493 rt->uncompressedSourceCache.purge();
3494 rt->evalCache.clear();
3495 rt->regExpTestCache.purge();
3497 if (!rt->hasActiveCompilations())
3498 rt->parseMapPool().purgeAll();
3501 bool
3502 GCRuntime::shouldPreserveJITCode(JSCompartment* comp, int64_t currentTime,
3503 JS::gcreason::Reason reason)
3505 if (cleanUpEverything)
3506 return false;
3508 if (alwaysPreserveCode)
3509 return true;
3510 if (comp->lastAnimationTime + PRMJ_USEC_PER_SEC >= currentTime)
3511 return true;
3512 if (reason == JS::gcreason::DEBUG_GC)
3513 return true;
3515 if (comp->jitCompartment() && comp->jitCompartment()->hasRecentParallelActivity())
3516 return true;
3518 return false;
3521 #ifdef DEBUG
3522 class CompartmentCheckTracer : public JSTracer
3524 public:
3525 CompartmentCheckTracer(JSRuntime* rt, JSTraceCallback callback)
3526 : JSTracer(rt, callback)
3529 Cell* src;
3530 JSGCTraceKind srcKind;
3531 Zone* zone;
3532 JSCompartment* compartment;
3535 static bool
3536 InCrossCompartmentMap(JSObject* src, Cell* dst, JSGCTraceKind dstKind)
3538 JSCompartment* srccomp = src->compartment();
3540 if (dstKind == JSTRACE_OBJECT) {
3541 Value key = ObjectValue(*static_cast<JSObject*>(dst));
3542 if (WrapperMap::Ptr p = srccomp->lookupWrapper(key)) {
3543 if (*p->value().unsafeGet() == ObjectValue(*src))
3544 return true;
3549 * If the cross-compartment edge is caused by the debugger, then we don't
3550 * know the right hashtable key, so we have to iterate.
3552 for (JSCompartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) {
3553 if (e.front().key().wrapped == dst && ToMarkable(e.front().value()) == src)
3554 return true;
3557 return false;
3560 static void
3561 CheckCompartment(CompartmentCheckTracer* trc, JSCompartment* thingCompartment,
3562 Cell* thing, JSGCTraceKind kind)
3564 JS_ASSERT(thingCompartment == trc->compartment ||
3565 trc->runtime()->isAtomsCompartment(thingCompartment) ||
3566 (trc->srcKind == JSTRACE_OBJECT &&
3567 InCrossCompartmentMap((JSObject*)trc->src, thing, kind)));
3570 static JSCompartment*
3571 CompartmentOfCell(Cell* thing, JSGCTraceKind kind)
3573 if (kind == JSTRACE_OBJECT)
3574 return static_cast<JSObject*>(thing)->compartment();
3575 else if (kind == JSTRACE_SHAPE)
3576 return static_cast<Shape*>(thing)->compartment();
3577 else if (kind == JSTRACE_BASE_SHAPE)
3578 return static_cast<BaseShape*>(thing)->compartment();
3579 else if (kind == JSTRACE_SCRIPT)
3580 return static_cast<JSScript*>(thing)->compartment();
3581 else
3582 return nullptr;
3585 static void
3586 CheckCompartmentCallback(JSTracer* trcArg, void** thingp, JSGCTraceKind kind)
3588 CompartmentCheckTracer* trc = static_cast<CompartmentCheckTracer*>(trcArg);
3589 Cell* thing = (Cell*)*thingp;
3591 JSCompartment* comp = CompartmentOfCell(thing, kind);
3592 if (comp && trc->compartment) {
3593 CheckCompartment(trc, comp, thing, kind);
3594 } else {
3595 JS_ASSERT(thing->tenuredZone() == trc->zone ||
3596 trc->runtime()->isAtomsZone(thing->tenuredZone()));
3600 void
3601 GCRuntime::checkForCompartmentMismatches()
3603 if (disableStrictProxyCheckingCount)
3604 return;
3606 CompartmentCheckTracer trc(rt, CheckCompartmentCallback);
3607 for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
3608 trc.zone = zone;
3609 for (size_t thingKind = 0; thingKind < FINALIZE_LAST; thingKind++) {
3610 for (ZoneCellIterUnderGC i(zone, AllocKind(thingKind)); !i.done(); i.next()) {
3611 trc.src = i.getCell();
3612 trc.srcKind = MapAllocToTraceKind(AllocKind(thingKind));
3613 trc.compartment = CompartmentOfCell(trc.src, trc.srcKind);
3614 JS_TraceChildren(&trc, trc.src, trc.srcKind);
3619 #endif
3621 bool
3622 GCRuntime::beginMarkPhase(JS::gcreason::Reason reason)
3624 int64_t currentTime = PRMJ_Now();
3626 #ifdef DEBUG
3627 if (fullCompartmentChecks)
3628 checkForCompartmentMismatches();
3629 #endif
3631 isFull = true;
3632 bool any = false;
3634 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
3635 /* Assert that zone state is as we expect */
3636 JS_ASSERT(!zone->isCollecting());
3637 JS_ASSERT(!zone->compartments.empty());
3638 for (unsigned i = 0; i < FINALIZE_LIMIT; ++i)
3639 JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
3641 /* Set up which zones will be collected. */
3642 if (zone->isGCScheduled()) {
3643 if (!rt->isAtomsZone(zone)) {
3644 any = true;
3645 zone->setGCState(Zone::Mark);
3647 } else {
3648 isFull = false;
3651 zone->setPreservingCode(false);
3654 for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next()) {
3655 JS_ASSERT(c->gcLiveArrayBuffers.empty());
3656 c->marked = false;
3657 c->scheduledForDestruction = false;
3658 c->maybeAlive = false;
3659 if (shouldPreserveJITCode(c, currentTime, reason))
3660 c->zone()->setPreservingCode(true);
3663 if (!rt->gc.cleanUpEverything) {
3664 if (JSCompartment* comp = jit::TopmostIonActivationCompartment(rt))
3665 comp->zone()->setPreservingCode(true);
3669 * Atoms are not in the cross-compartment map. So if there are any
3670 * zones that are not being collected, we are not allowed to collect
3671 * atoms. Otherwise, the non-collected zones could contain pointers
3672 * to atoms that we would miss.
3674 * keepAtoms() will only change on the main thread, which we are currently
3675 * on. If the value of keepAtoms() changes between GC slices, then we'll
3676 * cancel the incremental GC. See IsIncrementalGCSafe.
3678 if (isFull && !rt->keepAtoms()) {
3679 Zone* atomsZone = rt->atomsCompartment()->zone();
3680 if (atomsZone->isGCScheduled()) {
3681 JS_ASSERT(!atomsZone->isCollecting());
3682 atomsZone->setGCState(Zone::Mark);
3683 any = true;
3687 /* Check that at least one zone is scheduled for collection. */
3688 if (!any)
3689 return false;
3692 * At the end of each incremental slice, we call prepareForIncrementalGC,
3693 * which marks objects in all arenas that we're currently allocating
3694 * into. This can cause leaks if unreachable objects are in these
3695 * arenas. This purge call ensures that we only mark arenas that have had
3696 * allocations after the incremental GC started.
3698 if (isIncremental) {
3699 for (GCZonesIter zone(rt); !zone.done(); zone.next())
3700 zone->allocator.arenas.purge();
3703 marker.start();
3704 JS_ASSERT(!marker.callback);
3705 JS_ASSERT(IS_GC_MARKING_TRACER(&marker));
3707 /* For non-incremental GC the following sweep discards the jit code. */
3708 if (isIncremental) {
3709 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
3710 gcstats::AutoPhase ap(stats, gcstats::PHASE_MARK_DISCARD_CODE);
3711 zone->discardJitCode(rt->defaultFreeOp());
3715 GCMarker* gcmarker = &marker;
3717 startNumber = number;
3720 * We must purge the runtime at the beginning of an incremental GC. The
3721 * danger if we purge later is that the snapshot invariant of incremental
3722 * GC will be broken, as follows. If some object is reachable only through
3723 * some cache (say the dtoaCache) then it will not be part of the snapshot.
3724 * If we purge after root marking, then the mutator could obtain a pointer
3725 * to the object and start using it. This object might never be marked, so
3726 * a GC hazard would exist.
3729 gcstats::AutoPhase ap(stats, gcstats::PHASE_PURGE);
3730 PurgeRuntime(rt);
3734 * Mark phase.
3736 gcstats::AutoPhase ap1(stats, gcstats::PHASE_MARK);
3737 gcstats::AutoPhase ap2(stats, gcstats::PHASE_MARK_ROOTS);
3739 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
3740 /* Unmark everything in the zones being collected. */
3741 zone->allocator.arenas.unmarkAll();
3744 for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
3745 /* Unmark all weak maps in the compartments being collected. */
3746 WeakMapBase::unmarkCompartment(c);
3749 if (isFull)
3750 UnmarkScriptData(rt);
3752 markRuntime(gcmarker, MarkRuntime);
3753 if (isIncremental)
3754 bufferGrayRoots();
3757 * This code ensures that if a compartment is "dead", then it will be
3758 * collected in this GC. A compartment is considered dead if its maybeAlive
3759 * flag is false. The maybeAlive flag is set if:
3760 * (1) the compartment has incoming cross-compartment edges, or
3761 * (2) an object in the compartment was marked during root marking, either
3762 * as a black root or a gray root.
3763 * If maybeAlive is false, then we set the scheduledForDestruction flag.
3764 * At the end of the GC, we look for compartments where
3765 * scheduledForDestruction is true. These are compartments that were somehow
3766 * "revived" during the incremental GC. If any are found, we do a special,
3767 * non-incremental GC of those compartments to try to collect them.
3769 * Compartments can be revived for a variety of reasons. One reason is bug
3770 * 811587, where a reflector that was dead can be revived by DOM code that
3771 * still refers to the underlying DOM node.
3773 * Read barriers and allocations can also cause revival. This might happen
3774 * during a function like JS_TransplantObject, which iterates over all
3775 * compartments, live or dead, and operates on their objects. See bug 803376
3776 * for details on this problem. To avoid the problem, we try to avoid
3777 * allocation and read barriers during JS_TransplantObject and the like.
3780 /* Set the maybeAlive flag based on cross-compartment edges. */
3781 for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
3782 for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
3783 const CrossCompartmentKey& key = e.front().key();
3784 JSCompartment* dest;
3785 switch (key.kind) {
3786 case CrossCompartmentKey::ObjectWrapper:
3787 case CrossCompartmentKey::DebuggerObject:
3788 case CrossCompartmentKey::DebuggerSource:
3789 case CrossCompartmentKey::DebuggerEnvironment:
3790 dest = static_cast<JSObject*>(key.wrapped)->compartment();
3791 break;
3792 case CrossCompartmentKey::DebuggerScript:
3793 dest = static_cast<JSScript*>(key.wrapped)->compartment();
3794 break;
3795 default:
3796 dest = nullptr;
3797 break;
3799 if (dest)
3800 dest->maybeAlive = true;
3805 * For black roots, code in gc/Marking.cpp will already have set maybeAlive
3806 * during MarkRuntime.
3809 for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
3810 if (!c->maybeAlive && !rt->isAtomsCompartment(c))
3811 c->scheduledForDestruction = true;
3813 foundBlackGrayEdges = false;
3815 return true;
3818 template <class CompartmentIterT>
3819 void
3820 GCRuntime::markWeakReferences(gcstats::Phase phase)
3822 JS_ASSERT(marker.isDrained());
3824 gcstats::AutoPhase ap1(stats, phase);
3826 for (;;) {
3827 bool markedAny = false;
3828 for (CompartmentIterT c(rt); !c.done(); c.next()) {
3829 markedAny |= WatchpointMap::markCompartmentIteratively(c, &marker);
3830 markedAny |= WeakMapBase::markCompartmentIteratively(c, &marker);
3832 markedAny |= Debugger::markAllIteratively(&marker);
3834 if (!markedAny)
3835 break;
3837 SliceBudget budget;
3838 marker.drainMarkStack(budget);
3840 JS_ASSERT(marker.isDrained());
3843 void
3844 GCRuntime::markWeakReferencesInCurrentGroup(gcstats::Phase phase)
3846 markWeakReferences<GCCompartmentGroupIter>(phase);
3849 template <class ZoneIterT, class CompartmentIterT>
3850 void
3851 GCRuntime::markGrayReferences(gcstats::Phase phase)
3853 gcstats::AutoPhase ap(stats, phase);
3854 if (marker.hasBufferedGrayRoots()) {
3855 for (ZoneIterT zone(rt); !zone.done(); zone.next())
3856 marker.markBufferedGrayRoots(zone);
3857 } else {
3858 JS_ASSERT(!isIncremental);
3859 if (JSTraceDataOp op = grayRootTracer.op)
3860 (*op)(&marker, grayRootTracer.data);
3862 SliceBudget budget;
3863 marker.drainMarkStack(budget);
3866 void
3867 GCRuntime::markGrayReferencesInCurrentGroup(gcstats::Phase phase)
3869 markGrayReferences<GCZoneGroupIter, GCCompartmentGroupIter>(phase);
3872 void
3873 GCRuntime::markAllWeakReferences(gcstats::Phase phase)
3875 markWeakReferences<GCCompartmentsIter>(phase);
3878 void
3879 GCRuntime::markAllGrayReferences(gcstats::Phase phase)
3881 markGrayReferences<GCZonesIter, GCCompartmentsIter>(phase);
3884 #ifdef DEBUG
3886 class js::gc::MarkingValidator
3888 public:
3889 explicit MarkingValidator(GCRuntime* gc);
3890 ~MarkingValidator();
3891 void nonIncrementalMark();
3892 void validate();
3894 private:
3895 GCRuntime* gc;
3896 bool initialized;
3898 typedef HashMap<Chunk*, ChunkBitmap*, GCChunkHasher, SystemAllocPolicy> BitmapMap;
3899 BitmapMap map;
3902 #endif // DEBUG
3904 #ifdef JS_GC_MARKING_VALIDATION
3906 js::gc::MarkingValidator::MarkingValidator(GCRuntime* gc)
3907 : gc(gc),
3908 initialized(false)
3911 js::gc::MarkingValidator::~MarkingValidator()
3913 if (!map.initialized())
3914 return;
3916 for (BitmapMap::Range r(map.all()); !r.empty(); r.popFront())
3917 js_delete(r.front().value());
3920 void
3921 js::gc::MarkingValidator::nonIncrementalMark()
3924 * Perform a non-incremental mark for all collecting zones and record
3925 * the results for later comparison.
3927 * Currently this does not validate gray marking.
3930 if (!map.init())
3931 return;
3933 JSRuntime* runtime = gc->rt;
3934 GCMarker* gcmarker = &gc->marker;
3936 /* Save existing mark bits. */
3937 for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront()) {
3938 ChunkBitmap* bitmap = &r.front()->bitmap;
3939 ChunkBitmap* entry = js_new<ChunkBitmap>();
3940 if (!entry)
3941 return;
3943 memcpy((void*)entry->bitmap, (void*)bitmap->bitmap, sizeof(bitmap->bitmap));
3944 if (!map.putNew(r.front(), entry))
3945 return;
3949 * Temporarily clear the weakmaps' mark flags and the lists of live array
3950 * buffers for the compartments we are collecting.
3953 WeakMapSet markedWeakMaps;
3954 if (!markedWeakMaps.init())
3955 return;
3957 ArrayBufferVector arrayBuffers;
3958 for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
3959 if (!WeakMapBase::saveCompartmentMarkedWeakMaps(c, markedWeakMaps) ||
3960 !ArrayBufferObject::saveArrayBufferList(c, arrayBuffers))
3962 return;
3967 * After this point, the function should run to completion, so we shouldn't
3968 * do anything fallible.
3970 initialized = true;
3972 for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
3973 WeakMapBase::unmarkCompartment(c);
3974 ArrayBufferObject::resetArrayBufferList(c);
3977 /* Re-do all the marking, but non-incrementally. */
3978 js::gc::State state = gc->incrementalState;
3979 gc->incrementalState = MARK_ROOTS;
3981 JS_ASSERT(gcmarker->isDrained());
3982 gcmarker->reset();
3984 for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront())
3985 r.front()->bitmap.clear();
3988 gcstats::AutoPhase ap1(gc->stats, gcstats::PHASE_MARK);
3989 gcstats::AutoPhase ap2(gc->stats, gcstats::PHASE_MARK_ROOTS);
3990 gc->markRuntime(gcmarker, GCRuntime::MarkRuntime, GCRuntime::UseSavedRoots);
3994 gcstats::AutoPhase ap1(gc->stats, gcstats::PHASE_MARK);
3995 SliceBudget budget;
3996 gc->incrementalState = MARK;
3997 gc->marker.drainMarkStack(budget);
4000 gc->incrementalState = SWEEP;
4002 gcstats::AutoPhase ap1(gc->stats, gcstats::PHASE_SWEEP);
4003 gcstats::AutoPhase ap2(gc->stats, gcstats::PHASE_SWEEP_MARK);
4004 gc->markAllWeakReferences(gcstats::PHASE_SWEEP_MARK_WEAK);
4006 /* Update zone state for gray marking. */
4007 for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
4008 JS_ASSERT(zone->isGCMarkingBlack());
4009 zone->setGCState(Zone::MarkGray);
4011 gc->marker.setMarkColorGray();
4013 gc->markAllGrayReferences(gcstats::PHASE_SWEEP_MARK_GRAY);
4014 gc->markAllWeakReferences(gcstats::PHASE_SWEEP_MARK_GRAY_WEAK);
4016 /* Restore zone state. */
4017 for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
4018 JS_ASSERT(zone->isGCMarkingGray());
4019 zone->setGCState(Zone::Mark);
4021 JS_ASSERT(gc->marker.isDrained());
4022 gc->marker.setMarkColorBlack();
4025 /* Take a copy of the non-incremental mark state and restore the original. */
4026 for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront()) {
4027 Chunk* chunk = r.front();
4028 ChunkBitmap* bitmap = &chunk->bitmap;
4029 ChunkBitmap* entry = map.lookup(chunk)->value();
4030 Swap(*entry, *bitmap);
4033 for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
4034 WeakMapBase::unmarkCompartment(c);
4035 ArrayBufferObject::resetArrayBufferList(c);
4037 WeakMapBase::restoreCompartmentMarkedWeakMaps(markedWeakMaps);
4038 ArrayBufferObject::restoreArrayBufferLists(arrayBuffers);
4040 gc->incrementalState = state;
4043 void
4044 js::gc::MarkingValidator::validate()
4047 * Validates the incremental marking for a single compartment by comparing
4048 * the mark bits to those previously recorded for a non-incremental mark.
4051 if (!initialized)
4052 return;
4054 for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront()) {
4055 Chunk* chunk = r.front();
4056 BitmapMap::Ptr ptr = map.lookup(chunk);
4057 if (!ptr)
4058 continue; /* Allocated after we did the non-incremental mark. */
4060 ChunkBitmap* bitmap = ptr->value();
4061 ChunkBitmap* incBitmap = &chunk->bitmap;
4063 for (size_t i = 0; i < ArenasPerChunk; i++) {
4064 if (chunk->decommittedArenas.get(i))
4065 continue;
4066 Arena* arena = &chunk->arenas[i];
4067 if (!arena->aheader.allocated())
4068 continue;
4069 if (!arena->aheader.zone->isGCSweeping())
4070 continue;
4071 if (arena->aheader.allocatedDuringIncremental)
4072 continue;
4074 AllocKind kind = arena->aheader.getAllocKind();
4075 uintptr_t thing = arena->thingsStart(kind);
4076 uintptr_t end = arena->thingsEnd();
4077 while (thing < end) {
4078 Cell* cell = (Cell*)thing;
4081 * If a non-incremental GC wouldn't have collected a cell, then
4082 * an incremental GC won't collect it.
4084 JS_ASSERT_IF(bitmap->isMarked(cell, BLACK), incBitmap->isMarked(cell, BLACK));
4087 * If the cycle collector isn't allowed to collect an object
4088 * after a non-incremental GC has run, then it isn't allowed to
4089 * collect it after an incremental GC.
4091 JS_ASSERT_IF(!bitmap->isMarked(cell, GRAY), !incBitmap->isMarked(cell, GRAY));
4093 thing += Arena::thingSize(kind);
4099 #endif // JS_GC_MARKING_VALIDATION
4101 void
4102 GCRuntime::computeNonIncrementalMarkingForValidation()
4104 #ifdef JS_GC_MARKING_VALIDATION
4105 JS_ASSERT(!markingValidator);
4106 if (isIncremental && validate)
4107 markingValidator = js_new<MarkingValidator>(this);
4108 if (markingValidator)
4109 markingValidator->nonIncrementalMark();
4110 #endif
4113 void
4114 GCRuntime::validateIncrementalMarking()
4116 #ifdef JS_GC_MARKING_VALIDATION
4117 if (markingValidator)
4118 markingValidator->validate();
4119 #endif
4122 void
4123 GCRuntime::finishMarkingValidation()
4125 #ifdef JS_GC_MARKING_VALIDATION
4126 js_delete(markingValidator);
4127 markingValidator = nullptr;
4128 #endif
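/*
 * How the validator is driven across a collection (an illustrative summary of
 * the three entry points above, not additional engine code):
 *
 *   computeNonIncrementalMarkingForValidation();  // start of the sweep phase
 *   ...sweeping...
 *   validateIncrementalMarking();                 // as each zone group begins sweeping
 *   ...sweeping...
 *   finishMarkingValidation();                    // end of the sweep phase
 */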
4131 static void
4132 AssertNeedsBarrierFlagsConsistent(JSRuntime* rt)
4134 #ifdef JS_GC_MARKING_VALIDATION
4135 bool anyNeedsBarrier = false;
4136 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
4137 anyNeedsBarrier |= zone->needsIncrementalBarrier();
4138 JS_ASSERT(rt->needsIncrementalBarrier() == anyNeedsBarrier);
4139 #endif
4142 static void
4143 DropStringWrappers(JSRuntime* rt)
4146 * String "wrappers" are dropped on GC because their presence would require
4147 * us to sweep the wrappers in all compartments every time we sweep a
4148 * compartment group.
4150 for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
4151 for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
4152 if (e.front().key().kind == CrossCompartmentKey::StringWrapper)
4153 e.removeFront();
4159 * Group zones that must be swept at the same time.
4161 * If compartment A has an edge to an unmarked object in compartment B, then we
4162 * must not sweep A in a later slice than we sweep B. That's because a write
4163 * barrier in A could lead to the unmarked object in B becoming
4164 * marked. However, if we had already swept that object, we would be in trouble.
4166 * If we consider these dependencies as a graph, then all the compartments in
4167 * any strongly-connected component of this graph must be swept in the same
4168 * slice.
4170 * Tarjan's algorithm is used to calculate the components.
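 *
 * A small illustrative example (zone names are hypothetical): if zone A has a
 * wrapper to an unmarked object in zone B and B has a wrapper to an unmarked
 * object in A, the edges A -> B and B -> A put A and B in the same
 * strongly-connected component, so they are swept in the same slice. A zone C
 * whose wrappers all point at marked objects contributes no edges and may be
 * swept in a group of its own.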
4173 void
4174 JSCompartment::findOutgoingEdges(ComponentFinder<JS::Zone>& finder)
4176 for (js::WrapperMap::Enum e(crossCompartmentWrappers); !e.empty(); e.popFront()) {
4177 CrossCompartmentKey::Kind kind = e.front().key().kind;
4178 JS_ASSERT(kind != CrossCompartmentKey::StringWrapper);
4179 Cell* other = e.front().key().wrapped;
4180 if (kind == CrossCompartmentKey::ObjectWrapper) {
4182 * Add edge to wrapped object compartment if wrapped object is not
4183 * marked black, to indicate that the wrapper compartment must not be swept
4184 * after the wrapped compartment.
4186 if (!other->isMarked(BLACK) || other->isMarked(GRAY)) {
4187 JS::Zone* w = other->tenuredZone();
4188 if (w->isGCMarking())
4189 finder.addEdgeTo(w);
4191 } else {
4192 JS_ASSERT(kind == CrossCompartmentKey::DebuggerScript ||
4193 kind == CrossCompartmentKey::DebuggerSource ||
4194 kind == CrossCompartmentKey::DebuggerObject ||
4195 kind == CrossCompartmentKey::DebuggerEnvironment);
4197 * Add edge for debugger object wrappers, to ensure (in conjunction
4198 * with call to Debugger::findCompartmentEdges below) that debugger
4199 * and debuggee objects are always swept in the same group.
4201 JS::Zone* w = other->tenuredZone();
4202 if (w->isGCMarking())
4203 finder.addEdgeTo(w);
4207 Debugger::findCompartmentEdges(zone(), finder);
4210 void
4211 Zone::findOutgoingEdges(ComponentFinder<JS::Zone>& finder)
4214 * Any compartment may have a pointer to an atom in the atoms
4215 * compartment, and these aren't in the cross compartment map.
4217 JSRuntime* rt = runtimeFromMainThread();
4218 if (rt->atomsCompartment()->zone()->isGCMarking())
4219 finder.addEdgeTo(rt->atomsCompartment()->zone());
4221 for (CompartmentsInZoneIter comp(this); !comp.done(); comp.next())
4222 comp->findOutgoingEdges(finder);
4224 for (ZoneSet::Range r = gcZoneGroupEdges.all(); !r.empty(); r.popFront()) {
4225 if (r.front()->isGCMarking())
4226 finder.addEdgeTo(r.front());
4228 gcZoneGroupEdges.clear();
4231 bool
4232 GCRuntime::findZoneEdgesForWeakMaps()
4235 * Weakmaps which have keys with delegates in a different zone introduce the
4236 * need for zone edges from the delegate's zone to the weakmap zone.
4238 * Since the edges point into, and not away from, the zone the weakmap is in,
4239 * we must find these edges in advance and store them in a set on the Zone.
4240 * If we run out of memory, we fall back to sweeping everything in one
4241 * group.
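 *
 * For example (illustrative, hypothetical names): a weakmap in zone W has an
 * entry whose key has a delegate object in zone D. Marking the delegate can
 * make the entry's value live, so D must not be swept in a later group than
 * W; we record the D -> W edge in gcZoneGroupEdges before sweeping begins.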
4244 for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
4245 if (!WeakMapBase::findZoneEdgesForCompartment(comp))
4246 return false;
4249 return true;
4252 void
4253 GCRuntime::findZoneGroups()
4255 ComponentFinder<Zone> finder(rt->mainThread.nativeStackLimit[StackForSystemCode]);
4256 if (!isIncremental || !findZoneEdgesForWeakMaps())
4257 finder.useOneComponent();
4259 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
4260 JS_ASSERT(zone->isGCMarking());
4261 finder.addNode(zone);
4263 zoneGroups = finder.getResultsList();
4264 currentZoneGroup = zoneGroups;
4265 zoneGroupIndex = 0;
4267 for (Zone* head = currentZoneGroup; head; head = head->nextGroup()) {
4268 for (Zone* zone = head; zone; zone = zone->nextNodeInGroup())
4269 JS_ASSERT(zone->isGCMarking());
4272 JS_ASSERT_IF(!isIncremental, !currentZoneGroup->nextGroup());
4275 static void
4276 ResetGrayList(JSCompartment* comp);
4278 void
4279 GCRuntime::getNextZoneGroup()
4281 currentZoneGroup = currentZoneGroup->nextGroup();
4282 ++zoneGroupIndex;
4283 if (!currentZoneGroup) {
4284 abortSweepAfterCurrentGroup = false;
4285 return;
4288 for (Zone* zone = currentZoneGroup; zone; zone = zone->nextNodeInGroup())
4289 JS_ASSERT(zone->isGCMarking());
4291 if (!isIncremental)
4292 ComponentFinder<Zone>::mergeGroups(currentZoneGroup);
4294 if (abortSweepAfterCurrentGroup) {
4295 JS_ASSERT(!isIncremental);
4296 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4297 JS_ASSERT(!zone->gcNextGraphComponent);
4298 JS_ASSERT(zone->isGCMarking());
4299 zone->setNeedsIncrementalBarrier(false, Zone::UpdateJit);
4300 zone->setGCState(Zone::NoGC);
4301 zone->gcGrayRoots.clearAndFree();
4303 rt->setNeedsIncrementalBarrier(false);
4304 AssertNeedsBarrierFlagsConsistent(rt);
4306 for (GCCompartmentGroupIter comp(rt); !comp.done(); comp.next()) {
4307 ArrayBufferObject::resetArrayBufferList(comp);
4308 ResetGrayList(comp);
4311 abortSweepAfterCurrentGroup = false;
4312 currentZoneGroup = nullptr;
4317 * Gray marking:
4319 * At the end of collection, anything reachable from a gray root that has not
4320 * otherwise been marked black must be marked gray.
4322 * This means that when marking things gray we must not allow marking to leave
4323 * the current compartment group, as that could result in things being marked
4324 * gray when they might subsequently be marked black. To achieve this, when we
4325 * find a cross compartment pointer we don't mark the referent but add it to a
4326 * singly-linked list of incoming gray pointers that is stored with each
4327 * compartment.
4329 * The list head is stored in JSCompartment::gcIncomingGrayPointers and contains
4330 * cross compartment wrapper objects. The next pointer is stored in the second
4331 * extra slot of the cross compartment wrapper.
4333 * The list is created during gray marking when one of the
4334 * MarkCrossCompartmentXXX functions is called for a pointer that leaves the
4335 * current compartment group. This calls DelayCrossCompartmentGrayMarking to
4336 * push the referring object onto the list.
4338 * The list is traversed and then unlinked in
4339 * MarkIncomingCrossCompartmentPointers.
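 *
 * A sketch of the resulting structure (illustrative only):
 *
 *   comp->gcIncomingGrayPointers -> wrapperA -> wrapperB -> null
 *
 * where each "next" link lives in the wrapper's grayLinkSlot reserved slot.
 * DelayCrossCompartmentGrayMarking pushes new wrappers onto the head of the
 * list, and MarkIncomingCrossCompartmentPointers walks it, marking each
 * referent and unlinking the entries when marking gray.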
4342 static bool
4343 IsGrayListObject(JSObject* obj)
4345 JS_ASSERT(obj);
4346 return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj);
4349 /* static */ unsigned
4350 ProxyObject::grayLinkSlot(JSObject* obj)
4352 JS_ASSERT(IsGrayListObject(obj));
4353 return ProxyObject::EXTRA_SLOT + 1;
4356 #ifdef DEBUG
4357 static void
4358 AssertNotOnGrayList(JSObject* obj)
4360 JS_ASSERT_IF(IsGrayListObject(obj),
4361 obj->getReservedSlot(ProxyObject::grayLinkSlot(obj)).isUndefined());
4363 #endif
4365 static JSObject*
4366 CrossCompartmentPointerReferent(JSObject* obj)
4368 JS_ASSERT(IsGrayListObject(obj));
4369 return &obj->as<ProxyObject>().private_().toObject();
4372 static JSObject*
4373 NextIncomingCrossCompartmentPointer(JSObject* prev, bool unlink)
4375 unsigned slot = ProxyObject::grayLinkSlot(prev);
4376 JSObject* next = prev->getReservedSlot(slot).toObjectOrNull();
4377 JS_ASSERT_IF(next, IsGrayListObject(next));
4379 if (unlink)
4380 prev->setSlot(slot, UndefinedValue());
4382 return next;
4385 void
4386 js::DelayCrossCompartmentGrayMarking(JSObject* src)
4388 JS_ASSERT(IsGrayListObject(src));
4390 /* Called from MarkCrossCompartmentXXX functions. */
4391 unsigned slot = ProxyObject::grayLinkSlot(src);
4392 JSObject* dest = CrossCompartmentPointerReferent(src);
4393 JSCompartment* comp = dest->compartment();
4395 if (src->getReservedSlot(slot).isUndefined()) {
4396 src->setCrossCompartmentSlot(slot, ObjectOrNullValue(comp->gcIncomingGrayPointers));
4397 comp->gcIncomingGrayPointers = src;
4398 } else {
4399 JS_ASSERT(src->getReservedSlot(slot).isObjectOrNull());
4402 #ifdef DEBUG
4404 * Assert that the object is in our list, also walking the list to check its
4405 * integrity.
4407 JSObject* obj = comp->gcIncomingGrayPointers;
4408 bool found = false;
4409 while (obj) {
4410 if (obj == src)
4411 found = true;
4412 obj = NextIncomingCrossCompartmentPointer(obj, false);
4414 JS_ASSERT(found);
4415 #endif
4418 static void
4419 MarkIncomingCrossCompartmentPointers(JSRuntime* rt, const uint32_t color)
4421 JS_ASSERT(color == BLACK || color == GRAY);
4423 static const gcstats::Phase statsPhases[] = {
4424 gcstats::PHASE_SWEEP_MARK_INCOMING_BLACK,
4425 gcstats::PHASE_SWEEP_MARK_INCOMING_GRAY
4427 gcstats::AutoPhase ap1(rt->gc.stats, statsPhases[color]);
4429 bool unlinkList = color == GRAY;
4431 for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
4432 JS_ASSERT_IF(color == GRAY, c->zone()->isGCMarkingGray());
4433 JS_ASSERT_IF(color == BLACK, c->zone()->isGCMarkingBlack());
4434 JS_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers));
4436 for (JSObject* src = c->gcIncomingGrayPointers;
4437 src;
4438 src = NextIncomingCrossCompartmentPointer(src, unlinkList))
4440 JSObject* dst = CrossCompartmentPointerReferent(src);
4441 JS_ASSERT(dst->compartment() == c);
4443 if (color == GRAY) {
4444 if (IsObjectMarked(&src) && src->isMarked(GRAY))
4445 MarkGCThingUnbarriered(&rt->gc.marker, (void**)&dst,
4446 "cross-compartment gray pointer");
4447 } else {
4448 if (IsObjectMarked(&src) && !src->isMarked(GRAY))
4449 MarkGCThingUnbarriered(&rt->gc.marker, (void**)&dst,
4450 "cross-compartment black pointer");
4454 if (unlinkList)
4455 c->gcIncomingGrayPointers = nullptr;
4458 SliceBudget budget;
4459 rt->gc.marker.drainMarkStack(budget);
4462 static bool
4463 RemoveFromGrayList(JSObject* wrapper)
4465 if (!IsGrayListObject(wrapper))
4466 return false;
4468 unsigned slot = ProxyObject::grayLinkSlot(wrapper);
4469 if (wrapper->getReservedSlot(slot).isUndefined())
4470 return false; /* Not on our list. */
4472 JSObject* tail = wrapper->getReservedSlot(slot).toObjectOrNull();
4473 wrapper->setReservedSlot(slot, UndefinedValue());
4475 JSCompartment* comp = CrossCompartmentPointerReferent(wrapper)->compartment();
4476 JSObject* obj = comp->gcIncomingGrayPointers;
4477 if (obj == wrapper) {
4478 comp->gcIncomingGrayPointers = tail;
4479 return true;
4482 while (obj) {
4483 unsigned slot = ProxyObject::grayLinkSlot(obj);
4484 JSObject* next = obj->getReservedSlot(slot).toObjectOrNull();
4485 if (next == wrapper) {
4486 obj->setCrossCompartmentSlot(slot, ObjectOrNullValue(tail));
4487 return true;
4489 obj = next;
4492 MOZ_CRASH("object not found in gray link list");
4495 static void
4496 ResetGrayList(JSCompartment* comp)
4498 JSObject* src = comp->gcIncomingGrayPointers;
4499 while (src)
4500 src = NextIncomingCrossCompartmentPointer(src, true);
4501 comp->gcIncomingGrayPointers = nullptr;
4504 void
4505 js::NotifyGCNukeWrapper(JSObject* obj)
4508 * References to the target of the wrapper are being removed, so we no longer
4509 * have to remember to mark it.
4511 RemoveFromGrayList(obj);
4514 enum {
4515 JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0,
4516 JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1
4519 unsigned
4520 js::NotifyGCPreSwap(JSObject* a, JSObject* b)
5523 * Two objects in the same compartment are about to have their contents
5524 * swapped. If either of them is in our gray pointer list, we remove it from
5525 * the list, returning a bitset indicating what happened.
4527 return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) |
4528 (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0);
4531 void
4532 js::NotifyGCPostSwap(JSObject* a, JSObject* b, unsigned removedFlags)
4535 * Two objects in the same compartment have had their contents swapped. If
5536 * either of them was in our gray pointer list, we re-add it.
4538 if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED)
4539 DelayCrossCompartmentGrayMarking(b);
4540 if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED)
4541 DelayCrossCompartmentGrayMarking(a);
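/*
 * Illustrative calling pattern for the two notification hooks above (a sketch
 * of how code that swaps two objects' contents would use them, not a new API):
 *
 *   unsigned flags = js::NotifyGCPreSwap(a, b);
 *   // ... swap the contents of a and b ...
 *   js::NotifyGCPostSwap(a, b, flags);
 */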
4544 void
4545 GCRuntime::endMarkingZoneGroup()
4547 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_MARK);
4550 * Mark any incoming black pointers from previously swept compartments
4551 * whose referents are not marked. This can occur when gray cells become
4552 * black by the action of UnmarkGray.
4554 MarkIncomingCrossCompartmentPointers(rt, BLACK);
4556 markWeakReferencesInCurrentGroup(gcstats::PHASE_SWEEP_MARK_WEAK);
4559 * Change state of current group to MarkGray to restrict marking to this
4560 * group. Note that there may be pointers to the atoms compartment, and
4561 * these will be marked through, as they are not marked with
4562 * MarkCrossCompartmentXXX.
4564 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4565 JS_ASSERT(zone->isGCMarkingBlack());
4566 zone->setGCState(Zone::MarkGray);
4568 marker.setMarkColorGray();
4570 /* Mark incoming gray pointers from previously swept compartments. */
4571 MarkIncomingCrossCompartmentPointers(rt, GRAY);
4573 /* Mark gray roots and mark transitively inside the current compartment group. */
4574 markGrayReferencesInCurrentGroup(gcstats::PHASE_SWEEP_MARK_GRAY);
4575 markWeakReferencesInCurrentGroup(gcstats::PHASE_SWEEP_MARK_GRAY_WEAK);
4577 /* Restore marking state. */
4578 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4579 JS_ASSERT(zone->isGCMarkingGray());
4580 zone->setGCState(Zone::Mark);
4582 MOZ_ASSERT(marker.isDrained());
4583 marker.setMarkColorBlack();
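/*
 * Summary of the ordering in endMarkingZoneGroup above (descriptive comment
 * only): incoming black pointers are marked first, then weak references; the
 * group is then switched to gray marking so incoming gray pointers, gray
 * roots and gray-weak references can be marked without escaping the group;
 * finally the zones are switched back to black marking.
 */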
4586 void
4587 GCRuntime::beginSweepingZoneGroup()
4590 * Begin sweeping the group of zones in currentZoneGroup,
4591 * performing actions that must be done before yielding to caller.
4594 bool sweepingAtoms = false;
4595 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4596 /* Set the GC state to sweeping. */
4597 JS_ASSERT(zone->isGCMarking());
4598 zone->setGCState(Zone::Sweep);
4600 /* Purge the ArenaLists before sweeping. */
4601 zone->allocator.arenas.purge();
4603 if (rt->isAtomsZone(zone))
4604 sweepingAtoms = true;
4606 if (rt->sweepZoneCallback)
4607 rt->sweepZoneCallback(zone);
4609 zone->gcLastZoneGroupIndex = zoneGroupIndex;
4612 validateIncrementalMarking();
4614 FreeOp fop(rt);
4617 gcstats::AutoPhase ap(stats, gcstats::PHASE_FINALIZE_START);
4618 for (Callback<JSFinalizeCallback>* p = rt->gc.finalizeCallbacks.begin();
4619 p < rt->gc.finalizeCallbacks.end(); p++)
4621 p->op(&fop, JSFINALIZE_GROUP_START, !isFull /* unused */, p->data);
4625 if (sweepingAtoms) {
4627 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_ATOMS);
4628 rt->sweepAtoms();
4631 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_SYMBOL_REGISTRY);
4632 rt->symbolRegistry().sweep();
4636 /* Prune out dead views from ArrayBuffer's view lists. */
4637 for (GCCompartmentGroupIter c(rt); !c.done(); c.next())
4638 ArrayBufferObject::sweep(c);
4640 /* Collect watch points associated with unreachable objects. */
4641 WatchpointMap::sweepAll(rt);
4643 /* Detach unreachable debuggers and global objects from each other. */
4644 Debugger::sweepAll(&fop);
4647 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_COMPARTMENTS);
4649 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4650 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_DISCARD_CODE);
4651 zone->discardJitCode(&fop);
4654 for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
4655 gcstats::AutoSCC scc(stats, zoneGroupIndex);
4656 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_TABLES);
4658 c->sweep(&fop, releaseObservedTypes && !c->zone()->isPreservingCode());
4661 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4662 gcstats::AutoSCC scc(stats, zoneGroupIndex);
4664 // If there is an OOM while sweeping types, the type information
4665 // will be deoptimized so that it is still correct (i.e.
4666 // overapproximates the possible types in the zone), but the
4667 // constraints might not have been triggered on the deoptimization
4668 // or even copied over completely. In this case, destroy all JIT
4669 // code and new script information in the zone, the only things
4670 // whose correctness depends on the type constraints.
4671 bool oom = false;
4672 zone->sweep(&fop, releaseObservedTypes && !zone->isPreservingCode(), &oom);
4674 if (oom) {
4675 zone->setPreservingCode(false);
4676 zone->discardJitCode(&fop);
4677 zone->types.clearAllNewScriptsOnOOM();
4683 * Queue all GC things in all zones for sweeping, either in the
4684 * foreground or on the background thread.
4686 * Note that order is important here for the background case.
4688 * Objects are finalized immediately but this may change in the future.
4690 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4691 gcstats::AutoSCC scc(stats, zoneGroupIndex);
4692 zone->allocator.arenas.queueObjectsForSweep(&fop);
4694 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4695 gcstats::AutoSCC scc(stats, zoneGroupIndex);
4696 zone->allocator.arenas.queueStringsAndSymbolsForSweep(&fop);
4698 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4699 gcstats::AutoSCC scc(stats, zoneGroupIndex);
4700 zone->allocator.arenas.queueScriptsForSweep(&fop);
4702 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4703 gcstats::AutoSCC scc(stats, zoneGroupIndex);
4704 zone->allocator.arenas.queueJitCodeForSweep(&fop);
4706 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4707 gcstats::AutoSCC scc(stats, zoneGroupIndex);
4708 zone->allocator.arenas.queueShapesForSweep(&fop);
4709 zone->allocator.arenas.gcShapeArenasToSweep =
4710 zone->allocator.arenas.arenaListsToSweep[FINALIZE_SHAPE];
4713 finalizePhase = 0;
4714 sweepZone = currentZoneGroup;
4715 sweepKindIndex = 0;
4718 gcstats::AutoPhase ap(stats, gcstats::PHASE_FINALIZE_END);
4719 for (Callback<JSFinalizeCallback>* p = rt->gc.finalizeCallbacks.begin();
4720 p < rt->gc.finalizeCallbacks.end(); p++)
4722 p->op(&fop, JSFINALIZE_GROUP_END, !isFull /* unused */, p->data);
4727 void
4728 GCRuntime::endSweepingZoneGroup()
4730 /* Update the GC state for zones we have swept and unlink the list. */
4731 for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
4732 JS_ASSERT(zone->isGCSweeping());
4733 zone->setGCState(Zone::Finished);
4736 /* Reset the list of arenas marked as being allocated during sweep phase. */
4737 while (ArenaHeader* arena = arenasAllocatedDuringSweep) {
4738 arenasAllocatedDuringSweep = arena->getNextAllocDuringSweep();
4739 arena->unsetAllocDuringSweep();
4743 void
4744 GCRuntime::beginSweepPhase(bool destroyingRuntime)
4747 * Sweep phase.
4749 * Finalize as we sweep, outside of lock but with rt->isHeapBusy()
4750 * true so that any attempt to allocate a GC-thing from a finalizer will
4751 * fail, rather than nest badly and leave the unmarked newborn to be swept.
4754 AutoSetThreadIsSweeping threadIsSweeping;
4756 JS_ASSERT(!abortSweepAfterCurrentGroup);
4758 computeNonIncrementalMarkingForValidation();
4760 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP);
4762 sweepOnBackgroundThread =
4763 !destroyingRuntime && !TraceEnabled() && CanUseExtraThreads() && !shouldCompact();
4765 releaseObservedTypes = shouldReleaseObservedTypes();
4767 #ifdef DEBUG
4768 for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
4769 JS_ASSERT(!c->gcIncomingGrayPointers);
4770 for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
4771 if (e.front().key().kind != CrossCompartmentKey::StringWrapper)
4772 AssertNotOnGrayList(&e.front().value().get().toObject());
4775 #endif
4777 DropStringWrappers(rt);
4778 findZoneGroups();
4779 endMarkingZoneGroup();
4780 beginSweepingZoneGroup();
4783 bool
4784 ArenaLists::foregroundFinalize(FreeOp* fop, AllocKind thingKind, SliceBudget& sliceBudget,
4785 SortedArenaList& sweepList)
4787 if (!arenaListsToSweep[thingKind] && incrementalSweptArenas.isEmpty())
4788 return true;
4790 if (!FinalizeArenas(fop, &arenaListsToSweep[thingKind], sweepList, thingKind, sliceBudget)) {
4791 incrementalSweptArenaKind = thingKind;
4792 incrementalSweptArenas = sweepList.toArenaList();
4793 return false;
4796 // Clear any previous incremental sweep state we may have saved.
4797 incrementalSweptArenas.clear();
4799 // Join |arenaLists[thingKind]| and |sweepList| into a single list.
4800 ArenaList finalized = sweepList.toArenaList();
4801 arenaLists[thingKind] = finalized.insertListWithCursorAtEnd(arenaLists[thingKind]);
4803 return true;
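// A descriptive note on the incremental protocol above: when FinalizeArenas
// runs over the slice budget, the arenas swept so far are parked in
// incrementalSweptArenas/incrementalSweptArenaKind and we return false so the
// slice can yield; once finalization of this kind completes, the saved state
// is cleared and the swept arenas are merged back into arenaLists[thingKind].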
4806 bool
4807 GCRuntime::drainMarkStack(SliceBudget& sliceBudget, gcstats::Phase phase)
4809 /* Run a marking slice and return whether the stack is now empty. */
4810 gcstats::AutoPhase ap(stats, phase);
4811 return marker.drainMarkStack(sliceBudget);
4814 bool
4815 GCRuntime::sweepPhase(SliceBudget& sliceBudget)
4817 AutoSetThreadIsSweeping threadIsSweeping;
4819 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP);
4820 FreeOp fop(rt);
4822 bool finished = drainMarkStack(sliceBudget, gcstats::PHASE_SWEEP_MARK);
4823 if (!finished)
4824 return false;
4826 for (;;) {
4827 /* Finalize foreground finalized things. */
4828 for (; finalizePhase < FinalizePhaseCount ; ++finalizePhase) {
4829 gcstats::AutoPhase ap(stats, FinalizePhaseStatsPhase[finalizePhase]);
4831 for (; sweepZone; sweepZone = sweepZone->nextNodeInGroup()) {
4832 Zone* zone = sweepZone;
4834 while (sweepKindIndex < FinalizePhaseLength[finalizePhase]) {
4835 AllocKind kind = FinalizePhases[finalizePhase][sweepKindIndex];
4837 /* Set the number of things per arena for this AllocKind. */
4838 size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(kind));
4839 incrementalSweepList.setThingsPerArena(thingsPerArena);
4841 if (!zone->allocator.arenas.foregroundFinalize(&fop, kind, sliceBudget,
4842 incrementalSweepList))
4843 return false; /* Yield to the mutator. */
4845 /* Reset the slots of the sweep list that we used. */
4846 incrementalSweepList.reset(thingsPerArena);
4848 ++sweepKindIndex;
4850 sweepKindIndex = 0;
4852 sweepZone = currentZoneGroup;
4855 /* Remove dead shapes from the shape tree, but don't finalize them yet. */
4857 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_SHAPE);
4859 for (; sweepZone; sweepZone = sweepZone->nextNodeInGroup()) {
4860 Zone* zone = sweepZone;
4861 while (ArenaHeader* arena = zone->allocator.arenas.gcShapeArenasToSweep) {
4862 for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
4863 Shape* shape = i.get<Shape>();
4864 if (!shape->isMarked())
4865 shape->sweep();
4868 zone->allocator.arenas.gcShapeArenasToSweep = arena->next;
4869 sliceBudget.step(Arena::thingsPerArena(Arena::thingSize(FINALIZE_SHAPE)));
4870 if (sliceBudget.isOverBudget())
4871 return false; /* Yield to the mutator. */
4876 endSweepingZoneGroup();
4877 getNextZoneGroup();
4878 if (!currentZoneGroup)
4879 return true; /* We're finished. */
4880 endMarkingZoneGroup();
4881 beginSweepingZoneGroup();
4885 void
4886 GCRuntime::endSweepPhase(bool destroyingRuntime)
4888 AutoSetThreadIsSweeping threadIsSweeping;
4890 gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP);
4891 FreeOp fop(rt);
4893 JS_ASSERT_IF(destroyingRuntime, !sweepOnBackgroundThread);
4896 * Recalculate whether GC was full or not as this may have changed due to
4897 * newly created zones. Can only change from full to not full.
4899 if (isFull) {
4900 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
4901 if (!zone->isCollecting()) {
4902 isFull = false;
4903 break;
4909 * If we found any black->gray edges during marking, we completely clear the
4910 * mark bits of all uncollected zones, or, if a reset has occurred, zones that
4911 * will no longer be collected. This is safe, although it may
4912 * prevent the cycle collector from collecting some dead objects.
4914 if (foundBlackGrayEdges) {
4915 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
4916 if (!zone->isCollecting())
4917 zone->allocator.arenas.unmarkAll();
4922 gcstats::AutoPhase ap(stats, gcstats::PHASE_DESTROY);
4925 * Sweep script filenames after sweeping functions in the generic loop
4926 * above. In this way, when a scripted function's finalizer destroys the
4927 * script and calls rt->destroyScriptHook, the hook can still access the
4928 * script's filename. See bug 323267.
4930 if (isFull)
4931 SweepScriptData(rt);
4933 /* Clear out any small pools that we're hanging on to. */
4934 if (jit::ExecutableAllocator* execAlloc = rt->maybeExecAlloc())
4935 execAlloc->purge();
4937 if (rt->jitRuntime() && rt->jitRuntime()->hasIonAlloc()) {
4938 JSRuntime::AutoLockForInterrupt lock(rt);
4939 rt->jitRuntime()->ionAlloc(rt)->purge();
4943 * This removes compartments from rt->compartment, so we do it last to make
4944 * sure we don't miss sweeping any compartments.
4946 if (!destroyingRuntime)
4947 sweepZones(&fop, destroyingRuntime);
4949 if (!sweepOnBackgroundThread) {
4951 * Destroy arenas after we finished the sweeping so finalizers can
4952 * safely use IsAboutToBeFinalized(). This is done on the
4953 * GCHelperState if possible. We acquire the lock only because
4954 * Expire needs to unlock it for other callers.
4956 AutoLockGC lock(rt);
4957 expireChunksAndArenas(invocationKind == GC_SHRINK);
4962 gcstats::AutoPhase ap(stats, gcstats::PHASE_FINALIZE_END);
4964 for (Callback<JSFinalizeCallback>* p = rt->gc.finalizeCallbacks.begin();
4965 p < rt->gc.finalizeCallbacks.end(); p++)
4967 p->op(&fop, JSFINALIZE_COLLECTION_END, !isFull, p->data);
4970 /* If we finished a full GC, then the gray bits are correct. */
4971 if (isFull)
4972 grayBitsValid = true;
4975 /* Set up list of zones for sweeping of background things. */
4976 JS_ASSERT(!sweepingZones);
4977 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
4978 zone->gcNextGraphNode = sweepingZones;
4979 sweepingZones = zone;
4982 /* If not sweeping on background thread then we must do it here. */
4983 if (!sweepOnBackgroundThread) {
4984 gcstats::AutoPhase ap(stats, gcstats::PHASE_DESTROY);
4986 sweepBackgroundThings(false);
4988 rt->freeLifoAlloc.freeAll();
4990 /* Ensure the compartments get swept if it's the last GC. */
4991 if (destroyingRuntime)
4992 sweepZones(&fop, destroyingRuntime);
4995 finishMarkingValidation();
4997 #ifdef DEBUG
4998 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
4999 for (unsigned i = 0 ; i < FINALIZE_LIMIT ; ++i) {
5000 JS_ASSERT_IF(!IsBackgroundFinalized(AllocKind(i)) ||
5001 !sweepOnBackgroundThread,
5002 !zone->allocator.arenas.arenaListsToSweep[i]);
5006 for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
5007 JS_ASSERT(!c->gcIncomingGrayPointers);
5008 JS_ASSERT(c->gcLiveArrayBuffers.empty());
5010 for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
5011 if (e.front().key().kind != CrossCompartmentKey::StringWrapper)
5012 AssertNotOnGrayList(&e.front().value().unbarrieredGet().toObject());
5015 #endif
5018 #ifdef JSGC_COMPACTING
5019 void
5020 GCRuntime::compactPhase()
5022 JS_ASSERT(rt->gc.nursery.isEmpty());
5023 JS_ASSERT(!sweepOnBackgroundThread);
5025 gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT);
5027 ArenaHeader* relocatedList = relocateArenas();
5029 updatePointersToRelocatedCells();
5030 releaseRelocatedArenas(relocatedList);
5032 #ifdef DEBUG
5033 CheckHashTablesAfterMovingGC(rt);
5034 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
5035 if (!rt->isAtomsZone(zone) && !zone->isPreservingCode())
5036 zone->allocator.arenas.checkEmptyFreeLists();
5038 #endif
5040 #endif // JSGC_COMPACTING
5042 void
5043 GCRuntime::finishCollection()
5045 JS_ASSERT(marker.isDrained());
5046 marker.stop();
5048 uint64_t currentTime = PRMJ_Now();
5049 schedulingState.updateHighFrequencyMode(lastGCTime, currentTime, tunables);
5051 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
5052 zone->threshold.updateAfterGC(zone->usage.gcBytes(), invocationKind, tunables,
5053 schedulingState);
5054 if (zone->isCollecting()) {
5055 JS_ASSERT(zone->isGCFinished() || zone->isGCCompacting());
5056 zone->setGCState(Zone::NoGC);
5057 zone->active = false;
5060 JS_ASSERT(!zone->isCollecting());
5061 JS_ASSERT(!zone->wasGCStarted());
5064 lastGCTime = currentTime;
5067 /* Start a new heap session. */
5068 AutoTraceSession::AutoTraceSession(JSRuntime* rt, js::HeapState heapState)
5069 : lock(rt),
5070 runtime(rt),
5071 prevState(rt->gc.heapState)
5073 JS_ASSERT(rt->gc.isAllocAllowed());
5074 JS_ASSERT(rt->gc.heapState == Idle);
5075 JS_ASSERT(heapState != Idle);
5076 #ifdef JSGC_GENERATIONAL
5077 JS_ASSERT_IF(heapState == MajorCollecting, rt->gc.nursery.isEmpty());
5078 #endif
5080 // Threads with an exclusive context can hit refillFreeList while holding
5081 // the exclusive access lock. To avoid deadlocking when we try to acquire
5082 // this lock during GC and the other thread is waiting, make sure we hold
5083 // the exclusive access lock during GC sessions.
5084 JS_ASSERT(rt->currentThreadHasExclusiveAccess());
5086 if (rt->exclusiveThreadsPresent()) {
5087 // Lock the helper thread state when changing the heap state in the
5088 // presence of exclusive threads, to avoid racing with refillFreeList.
5089 AutoLockHelperThreadState lock;
5090 rt->gc.heapState = heapState;
5091 } else {
5092 rt->gc.heapState = heapState;
5096 AutoTraceSession::~AutoTraceSession()
5098 JS_ASSERT(runtime->isHeapBusy());
5100 if (runtime->exclusiveThreadsPresent()) {
5101 AutoLockHelperThreadState lock;
5102 runtime->gc.heapState = prevState;
5104 // Notify any helper threads waiting for the trace session to end.
5105 HelperThreadState().notifyAll(GlobalHelperThreadState::PRODUCER);
5106 } else {
5107 runtime->gc.heapState = prevState;
5111 AutoCopyFreeListToArenas::AutoCopyFreeListToArenas(JSRuntime* rt, ZoneSelector selector)
5112 : runtime(rt),
5113 selector(selector)
5115 for (ZonesIter zone(rt, selector); !zone.done(); zone.next())
5116 zone->allocator.arenas.copyFreeListsToArenas();
5119 AutoCopyFreeListToArenas::~AutoCopyFreeListToArenas()
5121 for (ZonesIter zone(runtime, selector); !zone.done(); zone.next())
5122 zone->allocator.arenas.clearFreeListsInArenas();
5125 class AutoCopyFreeListToArenasForGC
5127 JSRuntime* runtime;
5129 public:
5130 explicit AutoCopyFreeListToArenasForGC(JSRuntime* rt) : runtime(rt) {
5131 JS_ASSERT(rt->currentThreadHasExclusiveAccess());
5132 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
5133 zone->allocator.arenas.copyFreeListsToArenas();
5135 ~AutoCopyFreeListToArenasForGC() {
5136 for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next())
5137 zone->allocator.arenas.clearFreeListsInArenas();
5141 void
5142 GCRuntime::resetIncrementalGC(const char* reason)
5144 switch (incrementalState) {
5145 case NO_INCREMENTAL:
5146 return;
5148 case MARK: {
5149 /* Cancel any ongoing marking. */
5150 AutoCopyFreeListToArenasForGC copy(rt);
5152 marker.reset();
5153 marker.stop();
5155 for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
5156 ArrayBufferObject::resetArrayBufferList(c);
5157 ResetGrayList(c);
5160 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
5161 JS_ASSERT(zone->isGCMarking());
5162 zone->setNeedsIncrementalBarrier(false, Zone::UpdateJit);
5163 zone->setGCState(Zone::NoGC);
5165 rt->setNeedsIncrementalBarrier(false);
5166 AssertNeedsBarrierFlagsConsistent(rt);
5168 incrementalState = NO_INCREMENTAL;
5170 JS_ASSERT(!marker.shouldCheckCompartments());
5172 break;
5175 case SWEEP:
5176 marker.reset();
5178 for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next())
5179 c->scheduledForDestruction = false;
5181 /* Finish sweeping the current zone group, then abort. */
5182 abortSweepAfterCurrentGroup = true;
5183 incrementalCollectSlice(SliceBudget::Unlimited, JS::gcreason::RESET);
5186 gcstats::AutoPhase ap(stats, gcstats::PHASE_WAIT_BACKGROUND_THREAD);
5187 rt->gc.waitBackgroundSweepOrAllocEnd();
5189 break;
5191 default:
5192 MOZ_CRASH("Invalid incremental GC state");
5195 stats.reset(reason);
5197 #ifdef DEBUG
5198 for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next())
5199 JS_ASSERT(c->gcLiveArrayBuffers.empty());
5201 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
5202 JS_ASSERT(!zone->needsIncrementalBarrier());
5203 for (unsigned i = 0; i < FINALIZE_LIMIT; ++i)
5204 JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
5206 #endif
5209 namespace {
5211 class AutoGCSlice {
5212 public:
5213 explicit AutoGCSlice(JSRuntime* rt);
5214 ~AutoGCSlice();
5216 private:
5217 JSRuntime* runtime;
5220 } /* anonymous namespace */
5222 AutoGCSlice::AutoGCSlice(JSRuntime* rt)
5223 : runtime(rt)
5226 * During incremental GC, the compartment's active flag determines whether
5227 * there are stack frames active for any of its scripts. Normally this flag
5228 * is set at the beginning of the mark phase. During incremental GC, we also
5229 * set it at the start of every phase.
5231 for (ActivationIterator iter(rt); !iter.done(); ++iter)
5232 iter->compartment()->zone()->active = true;
5234 for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
5236 * Clear needsIncrementalBarrier early so we don't do any write
5237 * barriers during GC. We don't need to update the Ion barriers (which
5238 * is expensive) because Ion code doesn't run during GC. If need be,
5239 * we'll update the Ion barriers in ~AutoGCSlice.
5241 if (zone->isGCMarking()) {
5242 JS_ASSERT(zone->needsIncrementalBarrier());
5243 zone->setNeedsIncrementalBarrier(false, Zone::DontUpdateJit);
5244 } else {
5245 JS_ASSERT(!zone->needsIncrementalBarrier());
5248 rt->setNeedsIncrementalBarrier(false);
5249 AssertNeedsBarrierFlagsConsistent(rt);
5252 AutoGCSlice::~AutoGCSlice()
5254 /* We can't use GCZonesIter if this is the end of the last slice. */
5255 bool haveBarriers = false;
5256 for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) {
5257 if (zone->isGCMarking()) {
5258 zone->setNeedsIncrementalBarrier(true, Zone::UpdateJit);
5259 zone->allocator.arenas.prepareForIncrementalGC(runtime);
5260 haveBarriers = true;
5261 } else {
5262 zone->setNeedsIncrementalBarrier(false, Zone::UpdateJit);
5265 runtime->setNeedsIncrementalBarrier(haveBarriers);
5266 AssertNeedsBarrierFlagsConsistent(runtime);
5269 void
5270 GCRuntime::pushZealSelectedObjects()
5272 #ifdef JS_GC_ZEAL
5273 /* Push selected objects onto the mark stack and clear the list. */
5274 for (JSObject** obj = selectedForMarking.begin(); obj != selectedForMarking.end(); obj++)
5275 MarkObjectUnbarriered(&marker, obj, "selected obj");
5276 #endif
5279 void
5280 GCRuntime::incrementalCollectSlice(int64_t budget,
5281 JS::gcreason::Reason reason)
5283 JS_ASSERT(rt->currentThreadHasExclusiveAccess());
5285 AutoCopyFreeListToArenasForGC copy(rt);
5286 AutoGCSlice slice(rt);
5288 bool destroyingRuntime = (reason == JS::gcreason::DESTROY_RUNTIME);
5290 gc::State initialState = incrementalState;
5292 int zeal = 0;
5293 #ifdef JS_GC_ZEAL
5294 if (reason == JS::gcreason::DEBUG_GC && budget != SliceBudget::Unlimited) {
5296 * Do the incremental collection type specified by zeal mode if the
5297 * collection was triggered by runDebugGC() and incremental GC has not
5298 * been cancelled by resetIncrementalGC().
5300 zeal = zealMode;
5302 #endif
5304 JS_ASSERT_IF(incrementalState != NO_INCREMENTAL, isIncremental);
5305 isIncremental = budget != SliceBudget::Unlimited;
5307 if (zeal == ZealIncrementalRootsThenFinish || zeal == ZealIncrementalMarkAllThenFinish) {
5309 * Yielding between slices occurs at predetermined points in these modes;
5310 * the budget is not used.
5312 budget = SliceBudget::Unlimited;
5315 SliceBudget sliceBudget(budget);
5317 if (incrementalState == NO_INCREMENTAL) {
5318 incrementalState = MARK_ROOTS;
5319 lastMarkSlice = false;
5322 if (incrementalState == MARK)
5323 AutoGCRooter::traceAllWrappers(&marker);
5325 switch (incrementalState) {
5327 case MARK_ROOTS:
5328 if (!beginMarkPhase(reason)) {
5329 incrementalState = NO_INCREMENTAL;
5330 return;
5333 if (!destroyingRuntime)
5334 pushZealSelectedObjects();
5336 incrementalState = MARK;
5338 if (isIncremental && zeal == ZealIncrementalRootsThenFinish)
5339 break;
5341 /* fall through */
5343 case MARK: {
5344 /* If we needed delayed marking for gray roots, then collect until done. */
5345 if (!marker.hasBufferedGrayRoots()) {
5346 sliceBudget.reset();
5347 isIncremental = false;
5350 bool finished = drainMarkStack(sliceBudget, gcstats::PHASE_MARK);
5351 if (!finished)
5352 break;
5354 JS_ASSERT(marker.isDrained());
5356 if (!lastMarkSlice && isIncremental &&
5357 ((initialState == MARK && zeal != ZealIncrementalRootsThenFinish) ||
5358 zeal == ZealIncrementalMarkAllThenFinish))
5361 * Yield with the aim of starting the sweep in the next
5362 * slice. We will need to mark anything new on the stack
5363 * when we resume, so we stay in MARK state.
5365 lastMarkSlice = true;
5366 break;
5369 incrementalState = SWEEP;
5372 * This runs to completion, but we don't continue if the budget is
5373 * now exhausted.
5375 beginSweepPhase(destroyingRuntime);
5376 if (sliceBudget.isOverBudget())
5377 break;
5380 * Always yield here when running in incremental multi-slice zeal
5381 * mode, so RunDebugGC can reset the slice budget.
5383 if (isIncremental && zeal == ZealIncrementalMultipleSlices)
5384 break;
5386 /* fall through */
5389 case SWEEP: {
5390 bool finished = sweepPhase(sliceBudget);
5391 if (!finished)
5392 break;
5394 endSweepPhase(destroyingRuntime);
5396 if (sweepOnBackgroundThread)
5397 helperState.startBackgroundSweep(invocationKind == GC_SHRINK);
5399 #ifdef JSGC_COMPACTING
5400 if (shouldCompact()) {
5401 incrementalState = COMPACT;
5402 compactPhase();
5404 #endif
5406 finishCollection();
5407 incrementalState = NO_INCREMENTAL;
5408 break;
5411 default:
5412 JS_ASSERT(false);
5416 IncrementalSafety
5417 gc::IsIncrementalGCSafe(JSRuntime* rt)
5419 JS_ASSERT(!rt->mainThread.suppressGC);
5421 if (rt->keepAtoms())
5422 return IncrementalSafety::Unsafe("keepAtoms set");
5424 if (!rt->gc.isIncrementalGCAllowed())
5425 return IncrementalSafety::Unsafe("incremental permanently disabled");
5427 return IncrementalSafety::Safe();
5430 void
5431 GCRuntime::budgetIncrementalGC(int64_t* budget)
5433 IncrementalSafety safe = IsIncrementalGCSafe(rt);
5434 if (!safe) {
5435 resetIncrementalGC(safe.reason());
5436 *budget = SliceBudget::Unlimited;
5437 stats.nonincremental(safe.reason());
5438 return;
5441 if (mode != JSGC_MODE_INCREMENTAL) {
5442 resetIncrementalGC("GC mode change");
5443 *budget = SliceBudget::Unlimited;
5444 stats.nonincremental("GC mode");
5445 return;
5448 if (isTooMuchMalloc()) {
5449 *budget = SliceBudget::Unlimited;
5450 stats.nonincremental("malloc bytes trigger");
5453 bool reset = false;
5454 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
5455 if (zone->usage.gcBytes() >= zone->threshold.gcTriggerBytes()) {
5456 *budget = SliceBudget::Unlimited;
5457 stats.nonincremental("allocation trigger");
5460 if (incrementalState != NO_INCREMENTAL &&
5461 zone->isGCScheduled() != zone->wasGCStarted())
5463 reset = true;
5466 if (zone->isTooMuchMalloc()) {
5467 *budget = SliceBudget::Unlimited;
5468 stats.nonincremental("malloc bytes trigger");
5472 if (reset)
5473 resetIncrementalGC("zone change");
5476 namespace {
5478 #ifdef JSGC_GENERATIONAL
5479 class AutoDisableStoreBuffer
5481 StoreBuffer& sb;
5482 bool prior;
5484 public:
5485 explicit AutoDisableStoreBuffer(GCRuntime* gc) : sb(gc->storeBuffer) {
5486 prior = sb.isEnabled();
5487 sb.disable();
5489 ~AutoDisableStoreBuffer() {
5490 if (prior)
5491 sb.enable();
5494 #else
5495 struct AutoDisableStoreBuffer
5497 AutoDisableStoreBuffer(GCRuntime* gc) {}
5499 #endif
5501 } /* anonymous namespace */
5504 * Run one GC "cycle" (either a slice of incremental GC or an entire
5505 * non-incremental GC). We disable inlining to ensure that the bottom of the
5506 * stack with possible GC roots recorded in MarkRuntime excludes any pointers we
5507 * use during the marking implementation.
5509 * Returns true if we "reset" an existing incremental GC, which would force us
5510 * to run another cycle.
5512 MOZ_NEVER_INLINE bool
5513 GCRuntime::gcCycle(bool incremental, int64_t budget, JSGCInvocationKind gckind,
5514 JS::gcreason::Reason reason)
5516 minorGC(reason);
5519 * Marking can trigger many incidental post barriers, some of them for
5520 * objects which are not going to be live after the GC.
5522 AutoDisableStoreBuffer adsb(this);
5524 AutoTraceSession session(rt, MajorCollecting);
5526 isNeeded = false;
5527 interFrameGC = true;
5529 number++;
5530 if (incrementalState == NO_INCREMENTAL)
5531 majorGCNumber++;
5533 // It's ok if threads other than the main thread have suppressGC set, as
5534 // they are operating on zones which will not be collected from here.
5535 JS_ASSERT(!rt->mainThread.suppressGC);
5537 // Assert if this is a GC unsafe region.
5538 JS::AutoAssertOnGC::VerifyIsSafeToGC(rt);
5541 * As we are about to purge caches and clear the mark bits, we must wait for
5542 * any background finalization to finish. We must also wait for the
5543 * background allocation to finish so we can avoid taking the GC lock
5544 * when manipulating the chunks during the GC.
5547 gcstats::AutoPhase ap(stats, gcstats::PHASE_WAIT_BACKGROUND_THREAD);
5548 waitBackgroundSweepOrAllocEnd();
5551 State prevState = incrementalState;
5553 if (!incremental) {
5554 /* If non-incremental GC was requested, reset incremental GC. */
5555 resetIncrementalGC("requested");
5556 stats.nonincremental("requested");
5557 budget = SliceBudget::Unlimited;
5558 } else {
5559 budgetIncrementalGC(&budget);
5562 /* The GC was reset, so we need a do-over. */
5563 if (prevState != NO_INCREMENTAL && incrementalState == NO_INCREMENTAL)
5564 return true;
5566 TraceMajorGCStart();
5568 /* Set the invocation kind in the first slice. */
5569 if (incrementalState == NO_INCREMENTAL)
5570 invocationKind = gckind;
5572 incrementalCollectSlice(budget, reason);
5574 #ifndef JS_MORE_DETERMINISTIC
5575 nextFullGCTime = PRMJ_Now() + GC_IDLE_FULL_SPAN;
5576 #endif
5578 chunkAllocationSinceLastGC = false;
5580 #ifdef JS_GC_ZEAL
5581 /* Keeping these around after a GC is dangerous. */
5582 clearSelectedForMarking();
5583 #endif
5585 /* Clear gcMallocBytes and unschedule GC for all zones. */
5586 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
5587 zone->resetGCMallocBytes();
5588 zone->unscheduleGC();
5591 resetMallocBytes();
5593 TraceMajorGCEnd();
5595 return false;
5598 #ifdef JS_GC_ZEAL
5599 static bool
5600 IsDeterministicGCReason(JS::gcreason::Reason reason)
5602 if (reason > JS::gcreason::DEBUG_GC &&
5603 reason != JS::gcreason::CC_FORCED && reason != JS::gcreason::SHUTDOWN_CC)
5605 return false;
5608 if (reason == JS::gcreason::MAYBEGC)
5609 return false;
5611 return true;
5613 #endif
5615 static bool
5616 ShouldCleanUpEverything(JS::gcreason::Reason reason, JSGCInvocationKind gckind)
5618 // During shutdown, we must clean everything up, for the sake of leak
5619 // detection. When a runtime has no contexts, or we're doing a GC before a
5620 // shutdown CC, those are strong indications that we're shutting down.
5621 return reason == JS::gcreason::DESTROY_RUNTIME ||
5622 reason == JS::gcreason::SHUTDOWN_CC ||
5623 gckind == GC_SHRINK;
5626 gcstats::ZoneGCStats
5627 GCRuntime::scanZonesBeforeGC()
5629 gcstats::ZoneGCStats zoneStats;
5630 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
5631 if (mode == JSGC_MODE_GLOBAL)
5632 zone->scheduleGC();
5634 /* This is a heuristic to avoid resets. */
5635 if (incrementalState != NO_INCREMENTAL && zone->needsIncrementalBarrier())
5636 zone->scheduleGC();
5638 zoneStats.zoneCount++;
5639 if (zone->isGCScheduled())
5640 zoneStats.collectedCount++;
5643 for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next())
5644 zoneStats.compartmentCount++;
5646 return zoneStats;
5649 void
5650 GCRuntime::collect(bool incremental, int64_t budget, JSGCInvocationKind gckind,
5651 JS::gcreason::Reason reason)
5653 /* GC shouldn't be running in parallel execution mode */
5654 MOZ_ASSERT(!InParallelSection());
5656 JS_AbortIfWrongThread(rt);
5658 /* If we attempt to invoke the GC while we are running in the GC, assert. */
5659 MOZ_ASSERT(!rt->isHeapBusy());
5661 /* The engine never locks across anything that could GC. */
5662 MOZ_ASSERT(!rt->currentThreadHasExclusiveAccess());
5664 if (rt->mainThread.suppressGC)
5665 return;
5667 TraceLogger* logger = TraceLoggerForMainThread(rt);
5668 AutoTraceLog logGC(logger, TraceLogger::GC);
5670 #ifdef JS_GC_ZEAL
5671 if (deterministicOnly && !IsDeterministicGCReason(reason))
5672 return;
5673 #endif
5675 JS_ASSERT_IF(!incremental || budget != SliceBudget::Unlimited, JSGC_INCREMENTAL);
5677 AutoStopVerifyingBarriers av(rt, reason == JS::gcreason::SHUTDOWN_CC ||
5678 reason == JS::gcreason::DESTROY_RUNTIME);
5680 recordNativeStackTop();
5682 gcstats::AutoGCSlice agc(stats, scanZonesBeforeGC(), reason);
5684 cleanUpEverything = ShouldCleanUpEverything(reason, gckind);
5686 bool repeat = false;
5687 do {
5689 * Let the API user decide to defer a GC if it wants to (unless this
5690 * is the last context). Invoke the callback regardless.
5692 if (incrementalState == NO_INCREMENTAL) {
5693 gcstats::AutoPhase ap(stats, gcstats::PHASE_GC_BEGIN);
5694 if (gcCallback.op)
5695 gcCallback.op(rt, JSGC_BEGIN, gcCallback.data);
5698 poked = false;
5699 bool wasReset = gcCycle(incremental, budget, gckind, reason);
5701 if (incrementalState == NO_INCREMENTAL) {
5702 gcstats::AutoPhase ap(stats, gcstats::PHASE_GC_END);
5703 if (gcCallback.op)
5704 gcCallback.op(rt, JSGC_END, gcCallback.data);
5707 /* Need to re-schedule all zones for GC. */
5708 if (poked && cleanUpEverything)
5709 JS::PrepareForFullGC(rt);
5712 * This code makes an extra effort to collect compartments that we
5713 * thought were dead at the start of the GC. See the large comment in
5714 * beginMarkPhase.
5716 bool repeatForDeadZone = false;
5717 if (incremental && incrementalState == NO_INCREMENTAL) {
5718 for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
5719 if (c->scheduledForDestruction) {
5720 incremental = false;
5721 repeatForDeadZone = true;
5722 reason = JS::gcreason::COMPARTMENT_REVIVED;
5723 c->zone()->scheduleGC();
5729 * If we reset an existing GC, we need to start a new one. Also, we
5730 * repeat GCs that happen during shutdown (the cleanUpEverything
5731 * case) until we can be sure that no additional garbage is created
5732 * (which typically happens if roots are dropped during finalizers).
5734 repeat = (poked && cleanUpEverything) || wasReset || repeatForDeadZone;
5735 } while (repeat);
5737 if (incrementalState == NO_INCREMENTAL)
5738 EnqueuePendingParseTasksAfterGC(rt);
5741 void
5742 GCRuntime::gc(JSGCInvocationKind gckind, JS::gcreason::Reason reason)
5744 collect(false, SliceBudget::Unlimited, gckind, reason);
5747 void
5748 GCRuntime::gcSlice(JSGCInvocationKind gckind, JS::gcreason::Reason reason, int64_t millis)
5750 int64_t budget;
5751 if (millis)
5752 budget = SliceBudget::TimeBudget(millis);
5753 else if (schedulingState.inHighFrequencyGCMode() && tunables.isDynamicMarkSliceEnabled())
5754 budget = sliceBudget * IGC_MARK_SLICE_MULTIPLIER;
5755 else
5756 budget = sliceBudget;
5758 collect(true, budget, gckind, reason);
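/*
 * A summary of the budget selection above (descriptive only): an explicit
 * millis argument always wins; otherwise, with a base sliceBudget B and
 * IGC_MARK_SLICE_MULTIPLIER M, a runtime in high-frequency GC mode with
 * dynamic mark slices enabled gets a budget of B * M, and everything else
 * gets the plain budget B.
 */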
5761 void
5762 GCRuntime::gcFinalSlice(JSGCInvocationKind gckind, JS::gcreason::Reason reason)
5764 collect(true, SliceBudget::Unlimited, gckind, reason);
5767 void
5768 GCRuntime::notifyDidPaint()
5770 #ifdef JS_GC_ZEAL
5771 if (zealMode == ZealFrameVerifierPreValue) {
5772 verifyPreBarriers();
5773 return;
5776 if (zealMode == ZealFrameVerifierPostValue) {
5777 verifyPostBarriers();
5778 return;
5781 if (zealMode == ZealFrameGCValue) {
5782 JS::PrepareForFullGC(rt);
5783 gcSlice(GC_NORMAL, JS::gcreason::REFRESH_FRAME);
5784 return;
5786 #endif
5788 if (JS::IsIncrementalGCInProgress(rt) && !interFrameGC) {
5789 JS::PrepareForIncrementalGC(rt);
5790 gcSlice(GC_NORMAL, JS::gcreason::REFRESH_FRAME);
5793 interFrameGC = false;
5796 static bool
5797 ZonesSelected(JSRuntime* rt)
5799 for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
5800 if (zone->isGCScheduled())
5801 return true;
5803 return false;
5806 void
5807 GCRuntime::gcDebugSlice(bool limit, int64_t objCount)
5809 int64_t budget = limit ? SliceBudget::WorkBudget(objCount) : SliceBudget::Unlimited;
5810 if (!ZonesSelected(rt)) {
5811 if (JS::IsIncrementalGCInProgress(rt))
5812 JS::PrepareForIncrementalGC(rt);
5813 else
5814 JS::PrepareForFullGC(rt);
5816 collect(true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC);
/* Schedule a full GC unless a zone will already be collected. */
void
js::PrepareForDebugGC(JSRuntime* rt)
{
    if (!ZonesSelected(rt))
        JS::PrepareForFullGC(rt);
}

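/*
 * Release unused GC memory back to the system. With helper threads available
 * the work is queued as a background shrink; otherwise
 * expireChunksAndArenas(true) runs synchronously on the calling thread (see
 * shrinkBuffers below).
 */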
JS_FRIEND_API(void)
JS::ShrinkGCBuffers(JSRuntime* rt)
{
    rt->gc.shrinkBuffers();
}

void
GCRuntime::shrinkBuffers()
{
    AutoLockHelperThreadState helperLock;
    AutoLockGC lock(rt);
    JS_ASSERT(!rt->isHeapBusy());

    if (CanUseExtraThreads())
        helperState.startBackgroundShrink();
    else
        expireChunksAndArenas(true);
}

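/*
 * Minor (nursery) collection. The runtime-only overload below simply evicts
 * the nursery; the JSContext* overload additionally collects a list of type
 * objects whose instances should be pretenured and flags them afterwards.
 * Both bodies compile to no-ops when JSGC_GENERATIONAL is not defined.
 */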
void
GCRuntime::minorGC(JS::gcreason::Reason reason)
{
#ifdef JSGC_GENERATIONAL
    TraceLogger* logger = TraceLoggerForMainThread(rt);
    AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC);
    nursery.collect(rt, reason, nullptr);
    JS_ASSERT_IF(!rt->mainThread.suppressGC, nursery.isEmpty());
#endif
}

void
GCRuntime::minorGC(JSContext* cx, JS::gcreason::Reason reason)
{
    // Alternate to the runtime-taking form above which allows marking type
    // objects as needing pretenuring.
#ifdef JSGC_GENERATIONAL
    TraceLogger* logger = TraceLoggerForMainThread(rt);
    AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC);
    Nursery::TypeObjectList pretenureTypes;
    nursery.collect(rt, reason, &pretenureTypes);
    for (size_t i = 0; i < pretenureTypes.length(); i++) {
        if (pretenureTypes[i]->canPreTenure())
            pretenureTypes[i]->setShouldPreTenure(cx);
    }
    JS_ASSERT_IF(!rt->mainThread.suppressGC, nursery.isEmpty());
#endif
}

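/*
 * Generational GC can be disabled and re-enabled in a nested fashion via the
 * generationalDisabled counter. Disabling first evicts the nursery with a
 * minor GC and then turns off the nursery and store buffer; they come back
 * only once every disable has been matched by an enable. Sketch, assuming
 * access to rt->gc:
 *
 *   rt->gc.disableGenerationalGC();
 *   // ... code that relies on nursery allocation being disabled ...
 *   rt->gc.enableGenerationalGC();
 */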
void
GCRuntime::disableGenerationalGC()
{
#ifdef JSGC_GENERATIONAL
    if (isGenerationalGCEnabled()) {
        minorGC(JS::gcreason::API);
        nursery.disable();
        storeBuffer.disable();
    }
#endif
    ++rt->gc.generationalDisabled;
}

void
GCRuntime::enableGenerationalGC()
{
    JS_ASSERT(generationalDisabled > 0);
    --generationalDisabled;
#ifdef JSGC_GENERATIONAL
    if (generationalDisabled == 0) {
        nursery.enable();
        storeBuffer.enable();
    }
#endif
}

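/*
 * Poll-style entry point: runs a minor GC first if the store buffer is about
 * to overflow, then starts or continues a major GC slice if one has been
 * requested (isNeeded), using the recorded trigger reason.
 */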
void
GCRuntime::gcIfNeeded(JSContext* cx)
{
#ifdef JSGC_GENERATIONAL
    /*
     * If the store buffer is about to overflow, perform a minor GC first so
     * that the correct reason appears in the logs.
     */
    if (storeBuffer.isAboutToOverflow())
        minorGC(cx, JS::gcreason::FULL_STORE_BUFFER);
#endif

    if (isNeeded)
        gcSlice(GC_NORMAL, rt->gc.triggerReason, 0);
}

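/*
 * RAII helpers used when the heap must be in a stable state for tracing:
 * AutoFinishGC finishes any in-progress incremental GC and waits for
 * background sweeping to end; AutoPrepareForTracing builds on it via its
 * |finish| member, sets up a session and per-zone state through its other
 * member initializers, and records the native stack top.
 */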
AutoFinishGC::AutoFinishGC(JSRuntime* rt)
{
    if (JS::IsIncrementalGCInProgress(rt)) {
        JS::PrepareForIncrementalGC(rt);
        JS::FinishIncrementalGC(rt, JS::gcreason::API);
    }

    rt->gc.waitBackgroundSweepEnd();
}

AutoPrepareForTracing::AutoPrepareForTracing(JSRuntime* rt, ZoneSelector selector)
  : finish(rt),
    session(rt),
    copy(rt, selector)
{
    rt->gc.recordNativeStackTop();
}

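/*
 * Create a new compartment, optionally creating a new zone to hold it. If
 * |zone| is null a fresh Zone is allocated and, on success, registered with
 * the runtime; zoneHolder ensures the zone is deleted again on any failure
 * path. A minimal call sketch with hypothetical arguments:
 *
 *   JS::CompartmentOptions options;
 *   JSCompartment* comp = js::NewCompartment(cx, nullptr, principals, options);
 *   if (!comp)
 *       return nullptr;  // handle error
 */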
JSCompartment*
js::NewCompartment(JSContext* cx, Zone* zone, JSPrincipals* principals,
                   const JS::CompartmentOptions& options)
{
    JSRuntime* rt = cx->runtime();
    JS_AbortIfWrongThread(rt);

    ScopedJSDeletePtr<Zone> zoneHolder;
    if (!zone) {
        zone = cx->new_<Zone>(rt);
        if (!zone)
            return nullptr;

        zoneHolder.reset(zone);

        const JSPrincipals* trusted = rt->trustedPrincipals();
        bool isSystem = principals && principals == trusted;
        if (!zone->init(isSystem))
            return nullptr;
    }

    ScopedJSDeletePtr<JSCompartment> compartment(cx->new_<JSCompartment>(zone, options));
    if (!compartment || !compartment->init(cx))
        return nullptr;

    // Set up the principals.
    JS_SetCompartmentPrincipals(compartment, principals);

    AutoLockGC lock(rt);

    if (!zone->compartments.append(compartment.get())) {
        js_ReportOutOfMemory(cx);
        return nullptr;
    }

    if (zoneHolder && !rt->gc.zones.append(zone)) {
        js_ReportOutOfMemory(cx);
        return nullptr;
    }

    zoneHolder.forget();
    return compartment.forget();
}

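/*
 * Merge everything in |source| into |target|. This path is typically used
 * for compartments created for off-thread parsing, which are folded into
 * their final destination on the main thread; the source must be flagged as
 * mergeable and must be the only compartment in its zone.
 */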
void
gc::MergeCompartments(JSCompartment* source, JSCompartment* target)
{
    // The source compartment must be specifically flagged as mergeable. This
    // also implies that the compartment is not visible to the debugger.
    JS_ASSERT(source->options_.mergeable());

    JS_ASSERT(source->addonId == target->addonId);

    JSRuntime* rt = source->runtimeFromMainThread();

    AutoPrepareForTracing prepare(rt, SkipAtoms);

    // Cleanup tables and other state in the source compartment that will be
    // meaningless after merging into the target compartment.

    source->clearTables();

    // Fixup compartment pointers in source to refer to target.

    for (ZoneCellIter iter(source->zone(), FINALIZE_SCRIPT); !iter.done(); iter.next()) {
        JSScript* script = iter.get<JSScript>();
        JS_ASSERT(script->compartment() == source);
        script->compartment_ = target;
    }

    for (ZoneCellIter iter(source->zone(), FINALIZE_BASE_SHAPE); !iter.done(); iter.next()) {
        BaseShape* base = iter.get<BaseShape>();
        JS_ASSERT(base->compartment() == source);
        base->compartment_ = target;
    }

    // Fixup zone pointers in source's zone to refer to target's zone.

    for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) {
        for (ArenaIter aiter(source->zone(), AllocKind(thingKind)); !aiter.done(); aiter.next()) {
            ArenaHeader* aheader = aiter.get();
            aheader->zone = target->zone();
        }
    }

    // The source should be the only compartment in its zone.
    for (CompartmentsInZoneIter c(source->zone()); !c.done(); c.next())
        JS_ASSERT(c.get() == source);

    // Merge the allocator in source's zone into target's zone.
    target->zone()->allocator.arenas.adoptArenas(rt, &source->zone()->allocator.arenas);
    target->zone()->usage.adopt(source->zone()->usage);

    // Merge other info in source's zone into target's zone.
    target->zone()->types.typeLifoAlloc.transferFrom(&source->zone()->types.typeLifoAlloc);
}

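/*
 * Drive a collection according to the current JS_GC_ZEAL mode: generational
 * zeal runs a minor GC, the incremental modes run bounded slices (doubling
 * the work budget each slice in the multi-slice mode), compacting zeal runs
 * a shrinking GC, and anything else runs an ordinary full GC.
 */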
void
GCRuntime::runDebugGC()
{
#ifdef JS_GC_ZEAL
    int type = zealMode;

    if (rt->mainThread.suppressGC)
        return;

    if (type == js::gc::ZealGenerationalGCValue)
        return minorGC(JS::gcreason::DEBUG_GC);

    PrepareForDebugGC(rt);

    if (type == ZealIncrementalRootsThenFinish ||
        type == ZealIncrementalMarkAllThenFinish ||
        type == ZealIncrementalMultipleSlices)
    {
        js::gc::State initialState = incrementalState;
        int64_t budget;
        if (type == ZealIncrementalMultipleSlices) {
            /*
             * Start with a small slice limit and double it every slice. This
             * ensures that we get multiple slices, and collection runs to
             * completion.
             */
            if (initialState == NO_INCREMENTAL)
                incrementalLimit = zealFrequency / 2;
            else
                incrementalLimit *= 2;
            budget = SliceBudget::WorkBudget(incrementalLimit);
        } else {
            // This triggers incremental GC but is actually ignored by IncrementalMarkSlice.
            budget = SliceBudget::WorkBudget(1);
        }

        collect(true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC);

        /*
         * For multi-slice zeal, reset the slice size when we get to the sweep
         * phase.
         */
        if (type == ZealIncrementalMultipleSlices &&
            initialState == MARK && incrementalState == SWEEP)
        {
            incrementalLimit = zealFrequency / 2;
        }
    } else if (type == ZealCompactValue) {
        collect(false, SliceBudget::Unlimited, GC_SHRINK, JS::gcreason::DEBUG_GC);
    } else {
        collect(false, SliceBudget::Unlimited, GC_NORMAL, JS::gcreason::DEBUG_GC);
    }
#endif
}

void
GCRuntime::setValidate(bool enabled)
{
    JS_ASSERT(!isHeapMajorCollecting());
    validate = enabled;
}

void
GCRuntime::setFullCompartmentChecks(bool enabled)
{
    JS_ASSERT(!isHeapMajorCollecting());
    fullCompartmentChecks = enabled;
}

#ifdef JS_GC_ZEAL
bool
GCRuntime::selectForMarking(JSObject* object)
{
    JS_ASSERT(!isHeapMajorCollecting());
    return selectedForMarking.append(object);
}

void
GCRuntime::clearSelectedForMarking()
{
    selectedForMarking.clearAndFree();
}

void
GCRuntime::setDeterministic(bool enabled)
{
    JS_ASSERT(!isHeapMajorCollecting());
    deterministicOnly = enabled;
}
#endif

#ifdef DEBUG

/* Should only be called manually under gdb */
void PreventGCDuringInteractiveDebug()
{
    TlsPerThreadData.get()->suppressGC++;
}

#endif

void
js::ReleaseAllJITCode(FreeOp* fop)
{
#ifdef JSGC_GENERATIONAL
    /*
     * Scripts can entrain nursery things, inserting references to the script
     * into the store buffer. Clear the store buffer before discarding scripts.
     */
    fop->runtime()->gc.evictNursery();
#endif

    for (ZonesIter zone(fop->runtime(), SkipAtoms); !zone.done(); zone.next()) {
        if (!zone->jitZone())
            continue;

#ifdef DEBUG
        /* Assert no baseline scripts are marked as active. */
        for (ZoneCellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
            JSScript* script = i.get<JSScript>();
            JS_ASSERT_IF(script->hasBaselineScript(), !script->baselineScript()->active());
        }
#endif

        /* Mark baseline scripts on the stack as active. */
        jit::MarkActiveBaselineScripts(zone);

        jit::InvalidateAll(fop, zone);

        for (ZoneCellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
            JSScript* script = i.get<JSScript>();
            jit::FinishInvalidation<SequentialExecution>(fop, script);
            jit::FinishInvalidation<ParallelExecution>(fop, script);

            /*
             * Discard baseline script if it's not marked as active. Note that
             * this also resets the active flag.
             */
            jit::FinishDiscardBaselineScript(fop, script);
        }

        zone->jitZone()->optimizedStubSpace()->free();
    }
}

void
js::PurgeJITCaches(Zone* zone)
{
    for (ZoneCellIterUnderGC i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
        JSScript* script = i.get<JSScript>();

        /* Discard Ion caches. */
        jit::PurgeCaches(script);
    }
}

void
ArenaLists::normalizeBackgroundFinalizeState(AllocKind thingKind)
{
    ArenaLists::BackgroundFinalizeState* bfs = &backgroundFinalizeState[thingKind];
    switch (*bfs) {
      case BFS_DONE:
        break;
      case BFS_JUST_FINISHED:
        // No allocations between end of last sweep and now.
        // Transferring over arenas is a kind of allocation.
        *bfs = BFS_DONE;
        break;
      default:
        JS_ASSERT(!"Background finalization in progress, but it should not be.");
        break;
    }
}

void
ArenaLists::adoptArenas(JSRuntime* rt, ArenaLists* fromArenaLists)
{
    // The other parallel threads have all completed now, and GC
    // should be inactive, but still take the lock as a kind of read
    // fence.
    AutoLockGC lock(rt);

    fromArenaLists->purge();

    for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) {
        // When we enter a parallel section, we join the background
        // thread, and we do not run GC while in the parallel section,
        // so no finalizer should be active!
        normalizeBackgroundFinalizeState(AllocKind(thingKind));
        fromArenaLists->normalizeBackgroundFinalizeState(AllocKind(thingKind));

        ArenaList* fromList = &fromArenaLists->arenaLists[thingKind];
        ArenaList* toList = &arenaLists[thingKind];
        fromList->check();
        toList->check();
        ArenaHeader* next;
        for (ArenaHeader* fromHeader = fromList->head(); fromHeader; fromHeader = next) {
            // Copy fromHeader->next before releasing/reinserting.
            next = fromHeader->next;

            // During parallel execution, we sometimes keep empty arenas
            // on the lists rather than sending them back to the chunk.
            // Therefore, if fromHeader is empty, send it back to the
            // chunk now. Otherwise, attach to |toList|.
            if (fromHeader->isEmpty())
                fromHeader->chunk()->releaseArena(fromHeader);
            else
                toList->insertAtCursor(fromHeader);
        }
        fromList->clear();
        toList->check();
    }
}

bool
ArenaLists::containsArena(JSRuntime* rt, ArenaHeader* needle)
{
    AutoLockGC lock(rt);
    size_t allocKind = needle->getAllocKind();
    for (ArenaHeader* aheader = arenaLists[allocKind].head(); aheader; aheader = aheader->next) {
        if (aheader == needle)
            return true;
    }
    return false;
}

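/*
 * AutoSuppressGC bumps the per-thread suppressGC counter for the duration of
 * a scope so that no GC can start while it is live; the matching decrement
 * presumably lives in the destructor, which is defined elsewhere (likely in
 * the header). Sketch, assuming |cx| is an ExclusiveContext:
 *
 *   {
 *       AutoSuppressGC suppress(cx);
 *       // ... GC must not run here ...
 *   }
 */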
AutoSuppressGC::AutoSuppressGC(ExclusiveContext* cx)
  : suppressGC_(cx->perThreadData->suppressGC)
{
    suppressGC_++;
}

AutoSuppressGC::AutoSuppressGC(JSCompartment* comp)
  : suppressGC_(comp->runtimeFromMainThread()->mainThread.suppressGC)
{
    suppressGC_++;
}

AutoSuppressGC::AutoSuppressGC(JSRuntime* rt)
  : suppressGC_(rt->mainThread.suppressGC)
{
    suppressGC_++;
}

bool
js::UninlinedIsInsideNursery(const gc::Cell* cell)
{
    return IsInsideNursery(cell);
}

#ifdef DEBUG
AutoDisableProxyCheck::AutoDisableProxyCheck(JSRuntime* rt
                                             MOZ_GUARD_OBJECT_NOTIFIER_PARAM_IN_IMPL)
  : gc(rt->gc)
{
    MOZ_GUARD_OBJECT_NOTIFIER_INIT;
    gc.disableStrictProxyChecking();
}

AutoDisableProxyCheck::~AutoDisableProxyCheck()
{
    gc.enableStrictProxyChecking();
}

JS_FRIEND_API(void)
JS::AssertGCThingMustBeTenured(JSObject* obj)
{
    JS_ASSERT((!IsNurseryAllocable(obj->tenuredGetAllocKind()) || obj->getClass()->finalize) &&
              obj->isTenured());
}

JS_FRIEND_API(void)
js::gc::AssertGCThingHasType(js::gc::Cell* cell, JSGCTraceKind kind)
{
    JS_ASSERT(cell);
    if (IsInsideNursery(cell))
        JS_ASSERT(kind == JSTRACE_OBJECT);
    else
        JS_ASSERT(MapAllocToTraceKind(cell->tenuredGetAllocKind()) == kind);
}

JS_FRIEND_API(size_t)
JS::GetGCNumber()
{
    JSRuntime* rt = js::TlsPerThreadData.get()->runtimeFromMainThread();
    if (!rt)
        return 0;
    return rt->gc.gcNumber();
}
#endif

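/*
 * AutoAssertOnGC (below, DEBUG only) marks a region in which a GC would be a
 * bug: it enters an "unsafe region" on construction and leaves it on
 * destruction, and VerifyIsSafeToGC() crashes if a GC is attempted while any
 * such region is active. Sketch:
 *
 *   {
 *       JS::AutoAssertOnGC nogc(rt);
 *       // ... code that must not trigger a GC ...
 *   }
 */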
#ifdef DEBUG
JS::AutoAssertOnGC::AutoAssertOnGC()
  : gc(nullptr), gcNumber(0)
{
    js::PerThreadData* data = js::TlsPerThreadData.get();
    if (data) {
        /*
         * GCs from off-thread will always assert, so off-thread is implicitly
         * AutoAssertOnGC. We still need to allow AutoAssertOnGC to be used in
         * code that works from both threads, however. We also use this to
         * annotate the off-thread run loops.
         */
        JSRuntime* runtime = data->runtimeIfOnOwnerThread();
        if (runtime) {
            gc = &runtime->gc;
            gcNumber = gc->gcNumber();
            gc->enterUnsafeRegion();
        }
    }
}

JS::AutoAssertOnGC::AutoAssertOnGC(JSRuntime* rt)
  : gc(&rt->gc), gcNumber(rt->gc.gcNumber())
{
    gc->enterUnsafeRegion();
}

JS::AutoAssertOnGC::~AutoAssertOnGC()
{
    if (gc) {
        gc->leaveUnsafeRegion();

        /*
         * The following backstop assertion should never fire: if we bumped the
         * gcNumber, we should have asserted because inUnsafeRegion was true.
         */
        MOZ_ASSERT(gcNumber == gc->gcNumber(), "GC ran inside an AutoAssertOnGC scope.");
    }
}

/* static */ void
JS::AutoAssertOnGC::VerifyIsSafeToGC(JSRuntime* rt)
{
    if (rt->gc.isInsideUnsafeRegion())
        MOZ_CRASH("[AutoAssertOnGC] possible GC in GC-unsafe region");
}
#endif

#ifdef JSGC_HASH_TABLE_CHECKS
void
js::gc::CheckHashTablesAfterMovingGC(JSRuntime* rt)
{
    /*
     * Check that internal hash tables no longer have any pointers to things
     * that have been moved.
     */
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        c->checkTypeObjectTablesAfterMovingGC();
        c->checkInitialShapesTableAfterMovingGC();
        c->checkWrapperMapAfterMovingGC();
        if (c->debugScopes)
            c->debugScopes->checkHashTablesAfterMovingGC(rt);
    }
}
#endif