/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
 * vim: set ts=8 sts=4 et sw=4 tw=99:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
/*
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * The atoms zone is only collected in a full GC since objects in any zone may
 * have pointers to atoms, and these are not recorded in the cross compartment
 * pointer map. Also, the atoms zone is not collected if any thread has an
 * AutoKeepAtoms instance on the stack, or there are any exclusive threads
 * using the runtime.
 *
 * It is possible for an incremental collection that started out as a full GC to
 * become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally, the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
 *    JS_SetGCParameter()
 *  - no thread may have an AutoKeepAtoms instance on the stack
 *  - all native objects that have their own trace hook must indicate that they
 *    implement read and write barriers with the JSCLASS_IMPLEMENTS_BARRIERS
 *    flag
 *
 * The last condition is an engine-internal mechanism to ensure that incremental
 * collection is not carried out without the correct barriers being implemented.
 * For more information see 'Incremental marking' below.
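 *
 * As a hedged illustration of how an embedder drives slices (the names below
 * come from the public GC API of this era; treat the exact signatures as
 * assumptions, not as this file's interface):
 *
 *   JS_SetGCParameter(rt, JSGC_MODE, JSGC_MODE_INCREMENTAL);
 *   JS::PrepareForFullGC(rt);
 *   JS::IncrementalGC(rt, JS::gcreason::API, 10);       // first ~10ms slice
 *   while (JS::IsIncrementalGCInProgress(rt)) {
 *       // ... mutator (JS code) runs between slices ...
 *       JS::PrepareForIncrementalGC(rt);
 *       JS::IncrementalGC(rt, JS::gcreason::API, 10);   // next slice
 *   }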
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However, the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - MARK_ROOTS - marks the stack and other roots
 *  - MARK       - incrementally marks reachable things
 *  - SWEEP      - sweeps zones in groups and continues marking unswept zones
 *
 * The MARK_ROOTS activity always takes place in the first slice. The next two
 * states can take place over one or more slices.
 *
 * In other words, an incremental collection proceeds like this:
 *
 * Slice 1:   MARK_ROOTS: Roots pushed onto the mark stack.
 *            MARK:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 2:   MARK:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: MARK:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   MARK:       Mark stack is completely drained.
 *            SWEEP:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: SWEEP:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more zone
 *                        groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: SWEEP:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zone groups.
 *
 *          ... JS code runs ...
 *
 * Slice m:   SWEEP:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root memory
 * location written to by the mutator while the collection is in progress, using
 * write barriers. This is described in gc/Barrier.h.
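 *
 * A minimal sketch of such a pre-write barrier (illustrative only; the real
 * machinery is the barriered pointer classes described in gc/Barrier.h, and
 * ZoneOfSlot below is a hypothetical helper):
 *
 *   template <typename T>
 *   void BarrieredSet(T** slot, T* next) {
 *       // Snapshot-at-the-beginning: mark the old referent so everything
 *       // that was reachable when the collection started still gets marked.
 *       if (ZoneOfSlot(slot)->needsIncrementalBarrier())
 *           T::writeBarrierPre(*slot);
 *       *slot = next;
 *   }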
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code were allowed to run after the start of sweeping, it could
 * observe the state of the cache and create a new reference to an object that
 * was just about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the SWEEP phase. Imagine we
 * sweep B first and then return to the mutator. It's possible that the mutator
 * could cause |a| to become alive through a read barrier (perhaps it was a
 * shape that was accessed via a shape table). Then we would need to mark |b|,
 * which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
 *
 * GC things without finalizers, and things with finalizers that are able to run
 * in the background, are swept on the background thread. This accounts for most
 * of the sweeping work.
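 *
 * Schematically (a hedged sketch, not the real gc/FindSCCs.h interface):
 *
 *   // Zones are vertices; a cross-zone pointer from zone A to zone B is an
 *   // edge A -> B. Each strongly connected component must be swept as one
 *   // group, and a group is only swept once every group with an edge into
 *   // it has finished marking.
 *   for (ZoneGroup* group : StronglyConnectedComponents(zoneGraph))
 *       sweepZonesTogether(group);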
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue until we have swept the current zone group. Following a
 * reset, a new non-incremental collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 */
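
/*
 * As a hedged illustration of the third part (reference updating), the moving
 * machinery leaves a forwarding address in the old cell (see ForwardCell()
 * below) and a later pass rewrites every pointer that refers to a moved cell:
 *
 *   // for every pointer field |p| reachable from the roots:
 *   if (IsForwarded(*p))
 *       *p = Forwarded(*p);  // follow the forwarding address
 *
 * IsForwarded/Forwarded are illustrative names here, not this file's API.
 */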
184 #include "jsgcinlines.h"
186 #include "mozilla/ArrayUtils.h"
187 #include "mozilla/DebugOnly.h"
188 #include "mozilla/MacroForEach.h"
189 #include "mozilla/MemoryReporting.h"
190 #include "mozilla/Move.h"
192 #include <string.h> /* for memset used when DEBUG */
200 #include "jscompartment.h"
203 #include "jsscript.h"
206 #include "jswatchpoint.h"
207 #include "jsweakmap.h"
211 #include "prmjtime.h"
213 #include "gc/FindSCCs.h"
214 #include "gc/GCInternals.h"
215 #include "gc/GCTrace.h"
216 #include "gc/Marking.h"
217 #include "gc/Memory.h"
218 #include "jit/BaselineJIT.h"
219 #include "jit/IonCode.h"
220 #include "js/SliceBudget.h"
221 #include "vm/Debugger.h"
222 #include "vm/ForkJoin.h"
223 #include "vm/ProxyObject.h"
224 #include "vm/Shape.h"
225 #include "vm/String.h"
226 #include "vm/Symbol.h"
227 #include "vm/TraceLogging.h"
228 #include "vm/WrapperObject.h"
230 #include "jsobjinlines.h"
231 #include "jsscriptinlines.h"
233 #include "vm/Stack-inl.h"
234 #include "vm/String-inl.h"

using namespace js::gc;

using mozilla::Maybe;

using JS::AutoGCRooter;

/* Perform a Full GC every 20 seconds if MaybeGC is called. */
static const uint64_t GC_IDLE_FULL_SPAN = 20 * 1000 * 1000;

/* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
static const int IGC_MARK_SLICE_MULTIPLIER = 2;

const AllocKind gc::slotsToThingKind[] = {
    /* 0 */  FINALIZE_OBJECT0,  FINALIZE_OBJECT2,  FINALIZE_OBJECT2,  FINALIZE_OBJECT4,
    /* 4 */  FINALIZE_OBJECT4,  FINALIZE_OBJECT8,  FINALIZE_OBJECT8,  FINALIZE_OBJECT8,
    /* 8 */  FINALIZE_OBJECT8,  FINALIZE_OBJECT12, FINALIZE_OBJECT12, FINALIZE_OBJECT12,
    /* 12 */ FINALIZE_OBJECT12, FINALIZE_OBJECT16, FINALIZE_OBJECT16, FINALIZE_OBJECT16,
    /* 16 */ FINALIZE_OBJECT16
};

static_assert(JS_ARRAY_LENGTH(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");

// Assert that SortedArenaList::MinThingSize is <= the real minimum thing size.
#define CHECK_MIN_THING_SIZE_INNER(x_)                                         \
    static_assert(x_ >= SortedArenaList::MinThingSize,                         \
                  #x_ " is less than SortedArenaList::MinThingSize!");
#define CHECK_MIN_THING_SIZE(...) { __VA_ARGS__ }; /* Define the array. */     \
    MOZ_FOR_EACH(CHECK_MIN_THING_SIZE_INNER, (), (__VA_ARGS__ UINT32_MAX))

const uint32_t Arena::ThingSizes[] = CHECK_MIN_THING_SIZE(
    sizeof(JSObject),           /* FINALIZE_OBJECT0             */
    sizeof(JSObject),           /* FINALIZE_OBJECT0_BACKGROUND  */
    sizeof(JSObject_Slots2),    /* FINALIZE_OBJECT2             */
    sizeof(JSObject_Slots2),    /* FINALIZE_OBJECT2_BACKGROUND  */
    sizeof(JSObject_Slots4),    /* FINALIZE_OBJECT4             */
    sizeof(JSObject_Slots4),    /* FINALIZE_OBJECT4_BACKGROUND  */
    sizeof(JSObject_Slots8),    /* FINALIZE_OBJECT8             */
    sizeof(JSObject_Slots8),    /* FINALIZE_OBJECT8_BACKGROUND  */
    sizeof(JSObject_Slots12),   /* FINALIZE_OBJECT12            */
    sizeof(JSObject_Slots12),   /* FINALIZE_OBJECT12_BACKGROUND */
    sizeof(JSObject_Slots16),   /* FINALIZE_OBJECT16            */
    sizeof(JSObject_Slots16),   /* FINALIZE_OBJECT16_BACKGROUND */
    sizeof(JSScript),           /* FINALIZE_SCRIPT              */
    sizeof(LazyScript),         /* FINALIZE_LAZY_SCRIPT         */
    sizeof(Shape),              /* FINALIZE_SHAPE               */
    sizeof(BaseShape),          /* FINALIZE_BASE_SHAPE          */
    sizeof(types::TypeObject),  /* FINALIZE_TYPE_OBJECT         */
    sizeof(JSFatInlineString),  /* FINALIZE_FAT_INLINE_STRING   */
    sizeof(JSString),           /* FINALIZE_STRING              */
    sizeof(JSExternalString),   /* FINALIZE_EXTERNAL_STRING     */
    sizeof(JS::Symbol),         /* FINALIZE_SYMBOL              */
    sizeof(jit::JitCode)        /* FINALIZE_JITCODE             */
);

#undef CHECK_MIN_THING_SIZE_INNER
#undef CHECK_MIN_THING_SIZE

#define OFFSET(type) uint32_t(sizeof(ArenaHeader) + (ArenaSize - sizeof(ArenaHeader)) % sizeof(type))

const uint32_t Arena::FirstThingOffsets[] = {
    OFFSET(JSObject),           /* FINALIZE_OBJECT0             */
    OFFSET(JSObject),           /* FINALIZE_OBJECT0_BACKGROUND  */
    OFFSET(JSObject_Slots2),    /* FINALIZE_OBJECT2             */
    OFFSET(JSObject_Slots2),    /* FINALIZE_OBJECT2_BACKGROUND  */
    OFFSET(JSObject_Slots4),    /* FINALIZE_OBJECT4             */
    OFFSET(JSObject_Slots4),    /* FINALIZE_OBJECT4_BACKGROUND  */
    OFFSET(JSObject_Slots8),    /* FINALIZE_OBJECT8             */
    OFFSET(JSObject_Slots8),    /* FINALIZE_OBJECT8_BACKGROUND  */
    OFFSET(JSObject_Slots12),   /* FINALIZE_OBJECT12            */
    OFFSET(JSObject_Slots12),   /* FINALIZE_OBJECT12_BACKGROUND */
    OFFSET(JSObject_Slots16),   /* FINALIZE_OBJECT16            */
    OFFSET(JSObject_Slots16),   /* FINALIZE_OBJECT16_BACKGROUND */
    OFFSET(JSScript),           /* FINALIZE_SCRIPT              */
    OFFSET(LazyScript),         /* FINALIZE_LAZY_SCRIPT         */
    OFFSET(Shape),              /* FINALIZE_SHAPE               */
    OFFSET(BaseShape),          /* FINALIZE_BASE_SHAPE          */
    OFFSET(types::TypeObject),  /* FINALIZE_TYPE_OBJECT         */
    OFFSET(JSFatInlineString),  /* FINALIZE_FAT_INLINE_STRING   */
    OFFSET(JSString),           /* FINALIZE_STRING              */
    OFFSET(JSExternalString),   /* FINALIZE_EXTERNAL_STRING     */
    OFFSET(JS::Symbol),         /* FINALIZE_SYMBOL              */
    OFFSET(jit::JitCode)        /* FINALIZE_JITCODE             */
};

#undef OFFSET
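
// Worked example of OFFSET (assumed sizes, for illustration only): with
// ArenaSize == 4096 and sizeof(ArenaHeader) == 64, a 48-byte thing gives
// (4096 - 64) % 48 == 0, so the first thing starts at offset 64 and the arena
// packs 84 things; a 40-byte thing gives 4032 % 40 == 32, so the first thing
// starts at offset 64 + 32 == 96 and no space is wasted at the arena's end.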

const char*
js::gc::TraceKindAsAscii(JSGCTraceKind kind)
{
    switch(kind) {
      case JSTRACE_OBJECT: return "JSTRACE_OBJECT";
      case JSTRACE_STRING: return "JSTRACE_STRING";
      case JSTRACE_SYMBOL: return "JSTRACE_SYMBOL";
      case JSTRACE_SCRIPT: return "JSTRACE_SCRIPT";
      case JSTRACE_LAZY_SCRIPT: return "JSTRACE_LAZY_SCRIPT";
      case JSTRACE_JITCODE: return "JSTRACE_JITCODE";
      case JSTRACE_SHAPE: return "JSTRACE_SHAPE";
      case JSTRACE_BASE_SHAPE: return "JSTRACE_BASE_SHAPE";
      case JSTRACE_TYPE_OBJECT: return "JSTRACE_TYPE_OBJECT";
      default: return "INVALID";
    }
}

/*
 * Finalization order for incrementally swept things.
 */

static const AllocKind FinalizePhaseStrings[] = {
    FINALIZE_EXTERNAL_STRING
};

static const AllocKind FinalizePhaseScripts[] = {
    FINALIZE_SCRIPT,
    FINALIZE_LAZY_SCRIPT
};

static const AllocKind FinalizePhaseJitCode[] = {
    FINALIZE_JITCODE
};

static const AllocKind* const FinalizePhases[] = {
    FinalizePhaseStrings,
    FinalizePhaseScripts,
    FinalizePhaseJitCode
};
static const int FinalizePhaseCount = sizeof(FinalizePhases) / sizeof(AllocKind*);

static const int FinalizePhaseLength[] = {
    sizeof(FinalizePhaseStrings) / sizeof(AllocKind),
    sizeof(FinalizePhaseScripts) / sizeof(AllocKind),
    sizeof(FinalizePhaseJitCode) / sizeof(AllocKind)
};

static const gcstats::Phase FinalizePhaseStatsPhase[] = {
    gcstats::PHASE_SWEEP_STRING,
    gcstats::PHASE_SWEEP_SCRIPT,
    gcstats::PHASE_SWEEP_JITCODE
};

/*
 * Finalization order for things swept in the background.
 */

static const AllocKind BackgroundPhaseObjects[] = {
    FINALIZE_OBJECT0_BACKGROUND,
    FINALIZE_OBJECT2_BACKGROUND,
    FINALIZE_OBJECT4_BACKGROUND,
    FINALIZE_OBJECT8_BACKGROUND,
    FINALIZE_OBJECT12_BACKGROUND,
    FINALIZE_OBJECT16_BACKGROUND
};

static const AllocKind BackgroundPhaseStringsAndSymbols[] = {
    FINALIZE_FAT_INLINE_STRING,
    FINALIZE_STRING,
    FINALIZE_SYMBOL
};

static const AllocKind BackgroundPhaseShapes[] = {
    FINALIZE_SHAPE,
    FINALIZE_BASE_SHAPE,
    FINALIZE_TYPE_OBJECT
};

static const AllocKind* const BackgroundPhases[] = {
    BackgroundPhaseObjects,
    BackgroundPhaseStringsAndSymbols,
    BackgroundPhaseShapes
};
static const int BackgroundPhaseCount = sizeof(BackgroundPhases) / sizeof(AllocKind*);

static const int BackgroundPhaseLength[] = {
    sizeof(BackgroundPhaseObjects) / sizeof(AllocKind),
    sizeof(BackgroundPhaseStringsAndSymbols) / sizeof(AllocKind),
    sizeof(BackgroundPhaseShapes) / sizeof(AllocKind)
};

void
ArenaHeader::checkSynchronizedWithFreeList() const
{
    /*
     * Do not allow access to the free list when its real head is still stored
     * in FreeLists and is not synchronized with this one.
     */
    JS_ASSERT(allocated());

    /*
     * We can be called from the background finalization thread when the free
     * list in the zone can mutate at any moment. We cannot do any
     * checks in this case.
     */
    if (IsBackgroundFinalized(getAllocKind()) &&
        zone->runtimeFromAnyThread()->gc.onBackgroundThread())
    {
        return;
    }

    FreeSpan firstSpan = firstFreeSpan.decompact(arenaAddress());
    if (firstSpan.isEmpty())
        return;
    const FreeList* freeList = zone->allocator.arenas.getFreeList(getAllocKind());
    if (freeList->isEmpty() || firstSpan.arenaAddress() != freeList->arenaAddress())
        return;

    /*
     * Here this arena has free things, FreeList::lists[thingKind] is not
     * empty and also points to this arena. Thus they must be the same.
     */
    JS_ASSERT(freeList->isSameNonEmptySpan(firstSpan));
}

void
Arena::staticAsserts()
{
    static_assert(JS_ARRAY_LENGTH(ThingSizes) == FINALIZE_LIMIT, "We have defined all thing sizes.");
    static_assert(JS_ARRAY_LENGTH(FirstThingOffsets) == FINALIZE_LIMIT, "We have defined all offsets.");
}

void
Arena::setAsFullyUnused(AllocKind thingKind)
{
    FreeSpan fullSpan;
    size_t thingSize = Arena::thingSize(thingKind);
    fullSpan.initFinal(thingsStart(thingKind), thingsEnd() - thingSize, thingSize);
    aheader.setFirstFreeSpan(&fullSpan);
}

template<typename T>
inline size_t
Arena::finalize(FreeOp* fop, AllocKind thingKind, size_t thingSize)
{
    /* Enforce requirements on size of T. */
    JS_ASSERT(thingSize % CellSize == 0);
    JS_ASSERT(thingSize <= 255);

    JS_ASSERT(aheader.allocated());
    JS_ASSERT(thingKind == aheader.getAllocKind());
    JS_ASSERT(thingSize == aheader.getThingSize());
    JS_ASSERT(!aheader.hasDelayedMarking);
    JS_ASSERT(!aheader.markOverflow);
    JS_ASSERT(!aheader.allocatedDuringIncremental);

    uintptr_t firstThing = thingsStart(thingKind);
    uintptr_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
    uintptr_t lastThing = thingsEnd() - thingSize;

    FreeSpan newListHead;
    FreeSpan* newListTail = &newListHead;
    size_t nmarked = 0;

    for (ArenaCellIterUnderFinalize i(&aheader); !i.done(); i.next()) {
        T* t = i.get<T>();
        if (t->isMarked()) {
            uintptr_t thing = reinterpret_cast<uintptr_t>(t);
            if (thing != firstThingOrSuccessorOfLastMarkedThing) {
                // We just finished passing over one or more free things,
                // so record a new FreeSpan.
                newListTail->initBoundsUnchecked(firstThingOrSuccessorOfLastMarkedThing,
                                                 thing - thingSize, thingSize);
                newListTail = newListTail->nextSpanUnchecked();
            }
            firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
            nmarked++;
        } else {
            t->finalize(fop);
            JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize);
            TraceTenuredFinalize(t);
        }
    }

    if (nmarked == 0) {
        // Do nothing. The caller will update the arena header appropriately.
        JS_ASSERT(newListTail == &newListHead);
        JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data));
        return nmarked;
    }

    JS_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
    uintptr_t lastMarkedThing = firstThingOrSuccessorOfLastMarkedThing - thingSize;
    if (lastThing == lastMarkedThing) {
        // If the last thing was marked, we will have already set the bounds of
        // the final span, and we just need to terminate the list.
        newListTail->initAsEmpty();
    } else {
        // Otherwise, end the list with a span that covers the final stretch of free things.
        newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing, thingSize);
    }

#ifdef DEBUG
    size_t nfree = 0;
    for (const FreeSpan* span = &newListHead; !span->isEmpty(); span = span->nextSpan())
        nfree += span->length(thingSize);
    JS_ASSERT(nfree + nmarked == thingsPerArena(thingSize));
#endif

    aheader.setFirstFreeSpan(&newListHead);
    return nmarked;
}

template<typename T>
static inline bool
FinalizeTypedArenas(FreeOp* fop,
                    ArenaHeader** src,
                    SortedArenaList& dest,
                    AllocKind thingKind,
                    SliceBudget& budget)
{
    /*
     * Finalize arenas from the src list, releasing empty arenas and inserting
     * the others into the appropriate destination size bins.
     */

    /*
     * During parallel sections, we sometimes finalize the parallel arenas,
     * but in that case, we want to hold on to the memory in our arena
     * lists, not offer it up for reuse.
     */
    bool releaseArenas = !InParallelSection();

    size_t thingSize = Arena::thingSize(thingKind);
    size_t thingsPerArena = Arena::thingsPerArena(thingSize);

    while (ArenaHeader* aheader = *src) {
        *src = aheader->next;
        size_t nmarked = aheader->getArena()->finalize<T>(fop, thingKind, thingSize);
        size_t nfree = thingsPerArena - nmarked;

        if (nmarked)
            dest.insertAt(aheader, nfree);
        else if (releaseArenas)
            aheader->chunk()->releaseArena(aheader);
        else
            aheader->chunk()->recycleArena(aheader, dest, thingKind, thingsPerArena);

        budget.step(thingsPerArena);
        if (budget.isOverBudget())
            return false;
    }

    return true;
}

/*
 * Finalize the list. On return, the destination list's cursor points to the
 * first non-empty arena in the list (which may be null if all arenas are full).
 */
static bool
FinalizeArenas(FreeOp* fop,
               ArenaHeader** src,
               SortedArenaList& dest,
               AllocKind thingKind,
               SliceBudget& budget)
{
    switch (thingKind) {
      case FINALIZE_OBJECT0:
      case FINALIZE_OBJECT0_BACKGROUND:
      case FINALIZE_OBJECT2:
      case FINALIZE_OBJECT2_BACKGROUND:
      case FINALIZE_OBJECT4:
      case FINALIZE_OBJECT4_BACKGROUND:
      case FINALIZE_OBJECT8:
      case FINALIZE_OBJECT8_BACKGROUND:
      case FINALIZE_OBJECT12:
      case FINALIZE_OBJECT12_BACKGROUND:
      case FINALIZE_OBJECT16:
      case FINALIZE_OBJECT16_BACKGROUND:
        return FinalizeTypedArenas<JSObject>(fop, src, dest, thingKind, budget);
      case FINALIZE_SCRIPT:
        return FinalizeTypedArenas<JSScript>(fop, src, dest, thingKind, budget);
      case FINALIZE_LAZY_SCRIPT:
        return FinalizeTypedArenas<LazyScript>(fop, src, dest, thingKind, budget);
      case FINALIZE_SHAPE:
        return FinalizeTypedArenas<Shape>(fop, src, dest, thingKind, budget);
      case FINALIZE_BASE_SHAPE:
        return FinalizeTypedArenas<BaseShape>(fop, src, dest, thingKind, budget);
      case FINALIZE_TYPE_OBJECT:
        return FinalizeTypedArenas<types::TypeObject>(fop, src, dest, thingKind, budget);
      case FINALIZE_STRING:
        return FinalizeTypedArenas<JSString>(fop, src, dest, thingKind, budget);
      case FINALIZE_FAT_INLINE_STRING:
        return FinalizeTypedArenas<JSFatInlineString>(fop, src, dest, thingKind, budget);
      case FINALIZE_EXTERNAL_STRING:
        return FinalizeTypedArenas<JSExternalString>(fop, src, dest, thingKind, budget);
      case FINALIZE_SYMBOL:
        return FinalizeTypedArenas<JS::Symbol>(fop, src, dest, thingKind, budget);
      case FINALIZE_JITCODE:
      {
        // JitCode finalization may release references on an executable
        // allocator that is accessed when requesting interrupts.
        JSRuntime::AutoLockForInterrupt lock(fop->runtime());
        return FinalizeTypedArenas<jit::JitCode>(fop, src, dest, thingKind, budget);
      }
      default:
        MOZ_CRASH("Invalid alloc kind");
    }
}

static inline Chunk*
AllocChunk(JSRuntime* rt)
{
    return static_cast<Chunk*>(MapAlignedPages(ChunkSize, ChunkSize));
}

static inline void
FreeChunk(JSRuntime* rt, Chunk* p)
{
    UnmapPages(static_cast<void*>(p), ChunkSize);
}

/* Must be called with the GC lock taken. */
inline Chunk*
ChunkPool::get(JSRuntime* rt)
{
    Chunk* chunk = emptyChunkListHead;
    if (!chunk) {
        JS_ASSERT(!emptyCount);
        return nullptr;
    }

    JS_ASSERT(emptyCount);
    emptyChunkListHead = chunk->info.next;
    --emptyCount;
    return chunk;
}

/* Must be called either during the GC or with the GC lock taken. */
inline void
ChunkPool::put(Chunk* chunk)
{
    chunk->info.age = 0;
    chunk->info.next = emptyChunkListHead;
    emptyChunkListHead = chunk;
    ++emptyCount;
}

inline Chunk*
ChunkPool::Enum::front()
{
    Chunk* chunk = *chunkp;
    JS_ASSERT_IF(chunk, pool.getEmptyCount() != 0);
    return chunk;
}

inline void
ChunkPool::Enum::popFront()
{
    JS_ASSERT(!empty());
    chunkp = &front()->info.next;
}

inline void
ChunkPool::Enum::removeAndPopFront()
{
    JS_ASSERT(!empty());
    *chunkp = front()->info.next;
    pool.emptyCount--;
}

/* Must be called either during the GC or with the GC lock taken. */
Chunk*
GCRuntime::expireChunkPool(bool shrinkBuffers, bool releaseAll)
{
    /*
     * Return old empty chunks to the system while preserving the order of
     * other chunks in the list. This way, if the GC runs several times
     * without emptying the list, the older chunks will stay at the tail
     * and are more likely to reach the max age.
     */
    Chunk* freeList = nullptr;
    unsigned freeChunkCount = 0;
    for (ChunkPool::Enum e(chunkPool); !e.empty(); ) {
        Chunk* chunk = e.front();
        JS_ASSERT(chunk->unused());
        JS_ASSERT(!chunkSet.has(chunk));
        if (releaseAll || freeChunkCount >= tunables.maxEmptyChunkCount() ||
            (freeChunkCount >= tunables.minEmptyChunkCount() &&
             (shrinkBuffers || chunk->info.age == MAX_EMPTY_CHUNK_AGE)))
        {
            e.removeAndPopFront();
            prepareToFreeChunk(chunk->info);
            chunk->info.next = freeList;
            freeList = chunk;
        } else {
            /* Keep the chunk but increase its age. */
            ++freeChunkCount;
            ++chunk->info.age;
            e.popFront();
        }
    }
    JS_ASSERT(chunkPool.getEmptyCount() <= tunables.maxEmptyChunkCount());
    JS_ASSERT_IF(shrinkBuffers, chunkPool.getEmptyCount() <= tunables.minEmptyChunkCount());
    JS_ASSERT_IF(releaseAll, chunkPool.getEmptyCount() == 0);
    return freeList;
}

void
GCRuntime::freeChunkList(Chunk* chunkListHead)
{
    while (Chunk* chunk = chunkListHead) {
        JS_ASSERT(!chunk->info.numArenasFreeCommitted);
        chunkListHead = chunk->info.next;
        FreeChunk(rt, chunk);
    }
}

void
GCRuntime::expireAndFreeChunkPool(bool releaseAll)
{
    freeChunkList(expireChunkPool(true, releaseAll));
}

/* static */ Chunk*
Chunk::allocate(JSRuntime* rt)
{
    Chunk* chunk = AllocChunk(rt);
    if (!chunk)
        return nullptr;
    chunk->init(rt);
    rt->gc.stats.count(gcstats::STAT_NEW_CHUNK);
    return chunk;
}

/* Must be called with the GC lock taken. */
inline void
GCRuntime::releaseChunk(Chunk* chunk)
{
    JS_ASSERT(chunk);
    prepareToFreeChunk(chunk->info);
    FreeChunk(rt, chunk);
}

inline void
GCRuntime::prepareToFreeChunk(ChunkInfo& info)
{
    JS_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
    numArenasFreeCommitted -= info.numArenasFreeCommitted;
    stats.count(gcstats::STAT_DESTROY_CHUNK);
#ifdef DEBUG
    /*
     * Let FreeChunkList detect a missing prepareToFreeChunk call before it
     * frees the chunk.
     */
    info.numArenasFreeCommitted = 0;
#endif
}

void Chunk::decommitAllArenas(JSRuntime* rt)
{
    decommittedArenas.clear(true);
    MarkPagesUnused(&arenas[0], ArenasPerChunk * ArenaSize);

    info.freeArenasHead = nullptr;
    info.lastDecommittedArenaOffset = 0;
    info.numArenasFree = ArenasPerChunk;
    info.numArenasFreeCommitted = 0;
}

void
Chunk::init(JSRuntime* rt)
{
    JS_POISON(this, JS_FRESH_TENURED_PATTERN, ChunkSize);

    /*
     * We clear the bitmap to guard against xpc_IsGrayGCThing being called on
     * uninitialized data, which would happen before the first GC cycle.
     */
    bitmap.clear();

    /*
     * Decommit the arenas. We do this after poisoning so that if the OS does
     * not have to recycle the pages, we still get the benefit of poisoning.
     */
    decommitAllArenas(rt);

    /* Initialize the chunk info. */
    info.age = 0;
    info.trailer.storeBuffer = nullptr;
    info.trailer.location = ChunkLocationBitTenuredHeap;
    info.trailer.runtime = rt;

    /* The rest of info fields are initialized in pickChunk. */
}

inline Chunk**
GCRuntime::getAvailableChunkList(Zone* zone)
{
    return zone->isSystem
           ? &systemAvailableChunkListHead
           : &userAvailableChunkListHead;
}

inline void
Chunk::addToAvailableList(Zone* zone)
{
    JSRuntime* rt = zone->runtimeFromAnyThread();
    insertToAvailableList(rt->gc.getAvailableChunkList(zone));
}

inline void
Chunk::insertToAvailableList(Chunk** insertPoint)
{
    JS_ASSERT(hasAvailableArenas());
    JS_ASSERT(!info.prevp);
    JS_ASSERT(!info.next);
    info.prevp = insertPoint;
    Chunk* insertBefore = *insertPoint;
    if (insertBefore) {
        JS_ASSERT(insertBefore->info.prevp == insertPoint);
        insertBefore->info.prevp = &info.next;
    }
    info.next = insertBefore;
    *insertPoint = this;
}

inline void
Chunk::removeFromAvailableList()
{
    JS_ASSERT(info.prevp);
    *info.prevp = info.next;
    if (info.next) {
        JS_ASSERT(info.next->info.prevp == &info.next);
        info.next->info.prevp = info.prevp;
    }
    info.prevp = nullptr;
    info.next = nullptr;
}

/*
 * Search for and return the next decommitted Arena. Our goal is to keep
 * lastDecommittedArenaOffset "close" to a free arena. We do this by setting
 * it to the most recently freed arena when we free, and forcing it to
 * the last alloc + 1 when we allocate.
 */
uint32_t
Chunk::findDecommittedArenaOffset()
{
    /* Note: lastFreeArenaOffset can be past the end of the list. */
    for (unsigned i = info.lastDecommittedArenaOffset; i < ArenasPerChunk; i++)
        if (decommittedArenas.get(i))
            return i;
    for (unsigned i = 0; i < info.lastDecommittedArenaOffset; i++)
        if (decommittedArenas.get(i))
            return i;
    MOZ_CRASH("No decommitted arenas found.");
}

ArenaHeader*
Chunk::fetchNextDecommittedArena()
{
    JS_ASSERT(info.numArenasFreeCommitted == 0);
    JS_ASSERT(info.numArenasFree > 0);

    unsigned offset = findDecommittedArenaOffset();
    info.lastDecommittedArenaOffset = offset + 1;
    --info.numArenasFree;
    decommittedArenas.unset(offset);

    Arena* arena = &arenas[offset];
    MarkPagesInUse(arena, ArenaSize);
    arena->aheader.setAsNotAllocated();

    return &arena->aheader;
}

inline void
GCRuntime::updateOnFreeArenaAlloc(const ChunkInfo& info)
{
    JS_ASSERT(info.numArenasFreeCommitted <= numArenasFreeCommitted);
    --numArenasFreeCommitted;
}

inline ArenaHeader*
Chunk::fetchNextFreeArena(JSRuntime* rt)
{
    JS_ASSERT(info.numArenasFreeCommitted > 0);
    JS_ASSERT(info.numArenasFreeCommitted <= info.numArenasFree);

    ArenaHeader* aheader = info.freeArenasHead;
    info.freeArenasHead = aheader->next;
    --info.numArenasFreeCommitted;
    --info.numArenasFree;
    rt->gc.updateOnFreeArenaAlloc(info);

    return aheader;
}

ArenaHeader*
Chunk::allocateArena(Zone* zone, AllocKind thingKind)
{
    JS_ASSERT(hasAvailableArenas());

    JSRuntime* rt = zone->runtimeFromAnyThread();
    if (!rt->isHeapMinorCollecting() &&
        !rt->isHeapCompacting() &&
        rt->gc.usage.gcBytes() >= rt->gc.tunables.gcMaxBytes())
    {
#ifdef JSGC_FJGENERATIONAL
        // This is an approximation to the best test, which would check that
        // this thread is currently promoting into the tenured area. I doubt
        // the better test would make much difference.
        if (!rt->isFJMinorCollecting())
            return nullptr;
#else
        return nullptr;
#endif
    }

    ArenaHeader* aheader = MOZ_LIKELY(info.numArenasFreeCommitted > 0)
                           ? fetchNextFreeArena(rt)
                           : fetchNextDecommittedArena();
    aheader->init(zone, thingKind);
    if (MOZ_UNLIKELY(!hasAvailableArenas()))
        removeFromAvailableList();

    zone->usage.addGCArena();

    if (!rt->isHeapCompacting() && zone->usage.gcBytes() >= zone->threshold.gcTriggerBytes()) {
        AutoUnlockGC unlock(rt);
        rt->gc.triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER);
    }

    return aheader;
}
& info
)
964 ++numArenasFreeCommitted
;
968 Chunk::addArenaToFreeList(JSRuntime
* rt
, ArenaHeader
* aheader
)
970 JS_ASSERT(!aheader
->allocated());
971 aheader
->next
= info
.freeArenasHead
;
972 info
.freeArenasHead
= aheader
;
973 ++info
.numArenasFreeCommitted
;
974 ++info
.numArenasFree
;
975 rt
->gc
.updateOnArenaFree(info
);

void
Chunk::recycleArena(ArenaHeader* aheader, SortedArenaList& dest, AllocKind thingKind,
                    size_t thingsPerArena)
{
    aheader->getArena()->setAsFullyUnused(thingKind);
    dest.insertAt(aheader, thingsPerArena);
}

void
Chunk::releaseArena(ArenaHeader* aheader)
{
    JS_ASSERT(aheader->allocated());
    JS_ASSERT(!aheader->hasDelayedMarking);
    Zone* zone = aheader->zone;
    JSRuntime* rt = zone->runtimeFromAnyThread();
    AutoLockGC maybeLock;
    if (rt->gc.isBackgroundSweeping())
        maybeLock.lock(rt);

    if (rt->gc.isBackgroundSweeping())
        zone->threshold.updateForRemovedArena(rt->gc.tunables);
    zone->usage.removeGCArena();

    aheader->setAsNotAllocated();
    addArenaToFreeList(rt, aheader);

    if (info.numArenasFree == 1) {
        JS_ASSERT(!info.prevp);
        JS_ASSERT(!info.next);
        addToAvailableList(zone);
    } else if (!unused()) {
        JS_ASSERT(info.prevp);
    } else {
        JS_ASSERT(unused());
        removeFromAvailableList();
        decommitAllArenas(rt);
        rt->gc.moveChunkToFreePool(this);
    }
}

void
GCRuntime::moveChunkToFreePool(Chunk* chunk)
{
    JS_ASSERT(chunk->unused());
    JS_ASSERT(chunkSet.has(chunk));
    chunkSet.remove(chunk);
    chunkPool.put(chunk);
}

inline bool
GCRuntime::wantBackgroundAllocation() const
{
    /*
     * To minimize memory waste, we do not want to run the background chunk
     * allocation if we already have empty chunks or when the runtime needs
     * just a few of them.
     */
    return helperState.canBackgroundAllocate() &&
           chunkPool.getEmptyCount() < tunables.minEmptyChunkCount() &&
           chunkSet.count() >= 4;
}

class js::gc::AutoMaybeStartBackgroundAllocation
{
  private:
    JSRuntime* runtime;
    MOZ_DECL_USE_GUARD_OBJECT_NOTIFIER

  public:
    explicit AutoMaybeStartBackgroundAllocation(MOZ_GUARD_OBJECT_NOTIFIER_ONLY_PARAM)
      : runtime(nullptr)
    {
        MOZ_GUARD_OBJECT_NOTIFIER_INIT;
    }

    void tryToStartBackgroundAllocation(JSRuntime* rt) {
        runtime = rt;
    }

    ~AutoMaybeStartBackgroundAllocation() {
        if (runtime && !runtime->currentThreadOwnsInterruptLock()) {
            AutoLockHelperThreadState helperLock;
            AutoLockGC lock(runtime);
            runtime->gc.startBackgroundAllocationIfIdle();
        }
    }
};

/* The caller must hold the GC lock. */
Chunk*
GCRuntime::pickChunk(Zone* zone, AutoMaybeStartBackgroundAllocation& maybeStartBackgroundAllocation)
{
    Chunk** listHeadp = getAvailableChunkList(zone);
    Chunk* chunk = *listHeadp;
    if (chunk)
        return chunk;

    chunk = chunkPool.get(rt);
    if (!chunk) {
        chunk = Chunk::allocate(rt);
        if (!chunk)
            return nullptr;
        JS_ASSERT(chunk->info.numArenasFreeCommitted == 0);
    }

    JS_ASSERT(chunk->unused());
    JS_ASSERT(!chunkSet.has(chunk));

    if (wantBackgroundAllocation())
        maybeStartBackgroundAllocation.tryToStartBackgroundAllocation(rt);

    chunkAllocationSinceLastGC = true;

    /*
     * FIXME bug 583732 - chunk is newly allocated and cannot be present in
     * the table so using ordinary lookupForAdd is suboptimal here.
     */
    GCChunkSet::AddPtr p = chunkSet.lookupForAdd(chunk);
    JS_ASSERT(!p);
    if (!chunkSet.add(p, chunk)) {
        releaseChunk(chunk);
        return nullptr;
    }

    chunk->info.prevp = nullptr;
    chunk->info.next = nullptr;
    chunk->addToAvailableList(zone);

    return chunk;
}
* rt
) :
1111 systemZone(nullptr),
1112 #ifdef JSGC_GENERATIONAL
1114 storeBuffer(rt
, nursery
),
1119 systemAvailableChunkListHead(nullptr),
1120 userAvailableChunkListHead(nullptr),
1122 numArenasFreeCommitted(0),
1123 verifyPreData(nullptr),
1124 verifyPostData(nullptr),
1125 chunkAllocationSinceLastGC(false),
1128 mode(JSGC_MODE_INCREMENTAL
),
1129 numActiveZoneIters(0),
1130 decommitThreshold(32 * 1024 * 1024),
1131 cleanUpEverything(false),
1132 grayBitsValid(false),
1135 jitReleaseNumber(0),
1139 triggerReason(JS::gcreason::NO_REASON
),
1141 disableStrictProxyCheckingCount(0),
1143 incrementalState(gc::NO_INCREMENTAL
),
1144 lastMarkSlice(false),
1145 sweepOnBackgroundThread(false),
1146 foundBlackGrayEdges(false),
1147 sweepingZones(nullptr),
1149 zoneGroups(nullptr),
1150 currentZoneGroup(nullptr),
1153 abortSweepAfterCurrentGroup(false),
1154 arenasAllocatedDuringSweep(nullptr),
1155 #ifdef JS_GC_MARKING_VALIDATION
1156 markingValidator(nullptr),
1159 sliceBudget(SliceBudget::Unlimited
),
1160 incrementalAllowed(true),
1161 generationalDisabled(0),
1162 #ifdef JSGC_COMPACTING
1163 compactingDisabled(0),
1165 manipulatingDeadZones(false),
1166 objectsMarkedInDeadZones(0),
1173 deterministicOnly(false),
1174 incrementalLimit(0),
1177 fullCompartmentChecks(false),
1179 mallocGCTriggered(false),
1183 alwaysPreserveCode(false),
1185 noGCOrAllocationCheck(0),
1191 setGCMode(JSGC_MODE_GLOBAL
);

void
GCRuntime::setZeal(uint8_t zeal, uint32_t frequency)
{
    if (verifyPreData)
        VerifyBarriers(rt, PreBarrierVerifier);
    if (verifyPostData)
        VerifyBarriers(rt, PostBarrierVerifier);

#ifdef JSGC_GENERATIONAL
    if (zealMode == ZealGenerationalGCValue) {
        evictNursery(JS::gcreason::DEBUG_GC);
        nursery.leaveZealMode();
    }

    if (zeal == ZealGenerationalGCValue)
        nursery.enterZealMode();
#endif

    bool schedule = zeal >= js::gc::ZealAllocValue;
    zealMode = zeal;
    zealFrequency = frequency;
    nextScheduled = schedule ? frequency : 0;
}

void
GCRuntime::setNextScheduled(uint32_t count)
{
    nextScheduled = count;
}

void
GCRuntime::initZeal()
{
    const char* env = getenv("JS_GC_ZEAL");
    if (!env)
        return;

    int zeal = -1;
    int frequency = JS_DEFAULT_ZEAL_FREQ;
    if (strcmp(env, "help") != 0) {
        zeal = atoi(env);
        const char* p = strchr(env, ',');
        if (p)
            frequency = atoi(p + 1);
    }

    if (zeal < 0 || zeal > ZealLimit || frequency < 0) {
        fprintf(stderr,
                "Format: JS_GC_ZEAL=N[,F]\n"
                "N indicates \"zealousness\":\n"
                "  0: no additional GCs\n"
                "  1: additional GCs at common danger points\n"
                "  2: GC every F allocations (default: 100)\n"
                "  3: GC when the window paints (browser only)\n"
                "  4: Verify pre write barriers between instructions\n"
                "  5: Verify pre write barriers between paints\n"
                "  6: Verify stack rooting\n"
                "  7: Collect the nursery every N nursery allocations\n"
                "  8: Incremental GC in two slices: 1) mark roots 2) finish collection\n"
                "  9: Incremental GC in two slices: 1) mark all 2) new marking and finish\n"
                " 10: Incremental GC in multiple slices\n"
                " 11: Verify post write barriers between instructions\n"
                " 12: Verify post write barriers between paints\n"
                " 13: Purge analysis state every F allocations (default: 100)\n");
        return;
    }

    setZeal(zeal, frequency);
}

/*
 * Lifetime in number of major GCs for type sets attached to scripts containing
 * observed types.
 */
static const uint64_t JIT_SCRIPT_RELEASE_TYPES_PERIOD = 20;

bool
GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes)
{
    InitMemorySubsystem();

    lock = PR_NewLock();
    if (!lock)
        return false;

    if (!chunkSet.init(INITIAL_CHUNK_CAPACITY))
        return false;

    if (!rootsHash.init(256))
        return false;

    if (!helperState.init())
        return false;

    /*
     * Separate gcMaxMallocBytes from gcMaxBytes but initialize to maxbytes
     * for default backward API compatibility.
     */
    tunables.setParameter(JSGC_MAX_BYTES, maxbytes);
    setMaxMallocBytes(maxbytes);

    jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;

#ifdef JSGC_GENERATIONAL
    if (!nursery.init(maxNurseryBytes))
        return false;

    if (!nursery.isEnabled()) {
        JS_ASSERT(nursery.nurserySize() == 0);
        ++rt->gc.generationalDisabled;
    } else {
        JS_ASSERT(nursery.nurserySize() > 0);
        if (!storeBuffer.enable())
            return false;
    }
#endif

    if (!InitTrace(*this))
        return false;

    if (!marker.init(mode))
        return false;

    return true;
}

void
GCRuntime::recordNativeStackTop()
{
    /* Record the stack top here only if we are called from a request. */
    if (!rt->requestDepth)
        return;
    conservativeGC.recordStackTop();
}

void
GCRuntime::finish()
{
    /*
     * Wait until the background finalization stops and the helper thread
     * shuts down before we forcefully release any remaining GC memory.
     */
    helperState.finish();

    /* Free memory associated with GC verification. */

    /* Delete all remaining zones. */
    if (rt->gcInitialized) {
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
            for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
                js_delete(comp.get());
            js_delete(zone.get());
        }
    }

    systemAvailableChunkListHead = nullptr;
    userAvailableChunkListHead = nullptr;
    if (chunkSet.initialized()) {
        for (GCChunkSet::Range r(chunkSet.all()); !r.empty(); r.popFront())
            releaseChunk(r.front());
        chunkSet.clear();
    }

    expireAndFreeChunkPool(true);

    if (rootsHash.initialized())
        rootsHash.clear();

    FinishPersistentRootedChains(rt);

    if (lock) {
        PR_DestroyLock(lock);
        lock = nullptr;
    }
}

void
js::gc::FinishPersistentRootedChains(JSRuntime* rt)
{
    /* The lists of persistent roots are stored on the shadow runtime. */
    rt->functionPersistentRooteds.clear();
    rt->idPersistentRooteds.clear();
    rt->objectPersistentRooteds.clear();
    rt->scriptPersistentRooteds.clear();
    rt->stringPersistentRooteds.clear();
    rt->valuePersistentRooteds.clear();
}

void
GCRuntime::setParameter(JSGCParamKey key, uint32_t value)
{
    switch (key) {
      case JSGC_MAX_MALLOC_BYTES:
        setMaxMallocBytes(value);
        break;
      case JSGC_SLICE_TIME_BUDGET:
        sliceBudget = SliceBudget::TimeBudget(value);
        break;
      case JSGC_MARK_STACK_LIMIT:
        setMarkStackLimit(value);
        break;
      case JSGC_DECOMMIT_THRESHOLD:
        decommitThreshold = value * 1024 * 1024;
        break;
      case JSGC_MODE:
        mode = JSGCMode(value);
        JS_ASSERT(mode == JSGC_MODE_GLOBAL ||
                  mode == JSGC_MODE_COMPARTMENT ||
                  mode == JSGC_MODE_INCREMENTAL);
        break;
      default:
        tunables.setParameter(key, value);
    }
}

void
GCSchedulingTunables::setParameter(JSGCParamKey key, uint32_t value)
{
    switch (key) {
      case JSGC_MAX_BYTES:
        gcMaxBytes_ = value;
        break;
      case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
        highFrequencyThresholdUsec_ = value * PRMJ_USEC_PER_MSEC;
        break;
      case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
        highFrequencyLowLimitBytes_ = value * 1024 * 1024;
        if (highFrequencyLowLimitBytes_ >= highFrequencyHighLimitBytes_)
            highFrequencyHighLimitBytes_ = highFrequencyLowLimitBytes_ + 1;
        JS_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
        break;
      case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
        MOZ_ASSERT(value > 0);
        highFrequencyHighLimitBytes_ = value * 1024 * 1024;
        if (highFrequencyHighLimitBytes_ <= highFrequencyLowLimitBytes_)
            highFrequencyLowLimitBytes_ = highFrequencyHighLimitBytes_ - 1;
        JS_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
        break;
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
        highFrequencyHeapGrowthMax_ = value / 100.0;
        MOZ_ASSERT(highFrequencyHeapGrowthMax_ / 0.85 > 1.0);
        break;
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
        highFrequencyHeapGrowthMin_ = value / 100.0;
        MOZ_ASSERT(highFrequencyHeapGrowthMin_ / 0.85 > 1.0);
        break;
      case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
        lowFrequencyHeapGrowth_ = value / 100.0;
        MOZ_ASSERT(lowFrequencyHeapGrowth_ / 0.9 > 1.0);
        break;
      case JSGC_DYNAMIC_HEAP_GROWTH:
        dynamicHeapGrowthEnabled_ = value;
        break;
      case JSGC_DYNAMIC_MARK_SLICE:
        dynamicMarkSliceEnabled_ = value;
        break;
      case JSGC_ALLOCATION_THRESHOLD:
        gcZoneAllocThresholdBase_ = value * 1024 * 1024;
        break;
      case JSGC_MIN_EMPTY_CHUNK_COUNT:
        minEmptyChunkCount_ = value;
        if (minEmptyChunkCount_ > maxEmptyChunkCount_)
            maxEmptyChunkCount_ = minEmptyChunkCount_;
        JS_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
        break;
      case JSGC_MAX_EMPTY_CHUNK_COUNT:
        maxEmptyChunkCount_ = value;
        if (minEmptyChunkCount_ > maxEmptyChunkCount_)
            minEmptyChunkCount_ = maxEmptyChunkCount_;
        JS_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
        break;
      default:
        MOZ_CRASH("Unknown GC parameter.");
    }
}

uint32_t
GCRuntime::getParameter(JSGCParamKey key)
{
    switch (key) {
      case JSGC_MAX_BYTES:
        return uint32_t(tunables.gcMaxBytes());
      case JSGC_MAX_MALLOC_BYTES:
        return maxMallocBytes;
      case JSGC_BYTES:
        return uint32_t(usage.gcBytes());
      case JSGC_MODE:
        return uint32_t(mode);
      case JSGC_UNUSED_CHUNKS:
        return uint32_t(chunkPool.getEmptyCount());
      case JSGC_TOTAL_CHUNKS:
        return uint32_t(chunkSet.count() + chunkPool.getEmptyCount());
      case JSGC_SLICE_TIME_BUDGET:
        return uint32_t(sliceBudget > 0 ? sliceBudget / PRMJ_USEC_PER_MSEC : 0);
      case JSGC_MARK_STACK_LIMIT:
        return marker.maxCapacity();
      case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
        return tunables.highFrequencyThresholdUsec();
      case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
        return tunables.highFrequencyLowLimitBytes() / 1024 / 1024;
      case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
        return tunables.highFrequencyHighLimitBytes() / 1024 / 1024;
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
        return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100);
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
        return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100);
      case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
        return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
      case JSGC_DYNAMIC_HEAP_GROWTH:
        return tunables.isDynamicHeapGrowthEnabled();
      case JSGC_DYNAMIC_MARK_SLICE:
        return tunables.isDynamicMarkSliceEnabled();
      case JSGC_ALLOCATION_THRESHOLD:
        return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
      case JSGC_MIN_EMPTY_CHUNK_COUNT:
        return tunables.minEmptyChunkCount();
      case JSGC_MAX_EMPTY_CHUNK_COUNT:
        return tunables.maxEmptyChunkCount();
      default:
        JS_ASSERT(key == JSGC_NUMBER);
        return uint32_t(number);
    }
}

void
GCRuntime::setMarkStackLimit(size_t limit)
{
    JS_ASSERT(!isHeapBusy());
    AutoStopVerifyingBarriers pauseVerification(rt, false);
    marker.setMaxCapacity(limit);
}

template <typename T> struct BarrierOwner {};
template <typename T> struct BarrierOwner<T*> { typedef T result; };
template <> struct BarrierOwner<Value> { typedef HeapValue result; };
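
// For example, BarrierOwner<JSObject*>::result is JSObject, so addRoot()
// below can invoke JSObject::writeBarrierPre(*rp), while a Value* root
// dispatches its pre-barrier through HeapValue instead.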

bool
GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data)
{
    AssertHeapIsIdle(rt);
    return !!blackRootTracers.append(Callback<JSTraceDataOp>(traceOp, data));
}

void
GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data)
{
    // Can be called from finalizers
    for (size_t i = 0; i < blackRootTracers.length(); i++) {
        Callback<JSTraceDataOp>* e = &blackRootTracers[i];
        if (e->op == traceOp && e->data == data) {
            blackRootTracers.erase(e);
            break;
        }
    }
}

void
GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data)
{
    AssertHeapIsIdle(rt);
    grayRootTracer.op = traceOp;
    grayRootTracer.data = data;
}

void
GCRuntime::setGCCallback(JSGCCallback callback, void* data)
{
    gcCallback.op = callback;
    gcCallback.data = data;
}

bool
GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data)
{
    return finalizeCallbacks.append(Callback<JSFinalizeCallback>(callback, data));
}

void
GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback)
{
    for (Callback<JSFinalizeCallback>* p = finalizeCallbacks.begin();
         p < finalizeCallbacks.end(); p++) {
        if (p->op == callback) {
            finalizeCallbacks.erase(p);
            break;
        }
    }
}

JS::GCSliceCallback
GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
    return stats.setSliceCallback(callback);
}

template <typename T>
bool
GCRuntime::addRoot(T* rp, const char* name, JSGCRootType rootType)
{
    /*
     * Sometimes Firefox will hold weak references to objects and then convert
     * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
     * or ModifyBusyCount in workers). We need a read barrier to cover these
     * cases.
     */
    if (rt->gc.incrementalState != NO_INCREMENTAL)
        BarrierOwner<T>::result::writeBarrierPre(*rp);

    return rt->gc.rootsHash.put((void*)rp, RootInfo(name, rootType));
}

void
GCRuntime::removeRoot(void* rp)
{
    rootsHash.remove(rp);
}

template <typename T>
static bool
AddRoot(JSRuntime* rt, T* rp, const char* name, JSGCRootType rootType)
{
    return rt->gc.addRoot(rp, name, rootType);
}

template <typename T>
static bool
AddRoot(JSContext* cx, T* rp, const char* name, JSGCRootType rootType)
{
    bool ok = cx->runtime()->gc.addRoot(rp, name, rootType);
    if (!ok)
        JS_ReportOutOfMemory(cx);
    return ok;
}

bool
js::AddValueRoot(JSContext* cx, Value* vp, const char* name)
{
    return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR);
}

bool
js::AddValueRootRT(JSRuntime* rt, js::Value* vp, const char* name)
{
    return AddRoot(rt, vp, name, JS_GC_ROOT_VALUE_PTR);
}

bool
js::AddStringRoot(JSContext* cx, JSString** rp, const char* name)
{
    return AddRoot(cx, rp, name, JS_GC_ROOT_STRING_PTR);
}

bool
js::AddObjectRoot(JSContext* cx, JSObject** rp, const char* name)
{
    return AddRoot(cx, rp, name, JS_GC_ROOT_OBJECT_PTR);
}

bool
js::AddObjectRoot(JSRuntime* rt, JSObject** rp, const char* name)
{
    return AddRoot(rt, rp, name, JS_GC_ROOT_OBJECT_PTR);
}

bool
js::AddScriptRoot(JSContext* cx, JSScript** rp, const char* name)
{
    return AddRoot(cx, rp, name, JS_GC_ROOT_SCRIPT_PTR);
}

extern JS_FRIEND_API(bool)
js::AddRawValueRoot(JSContext* cx, Value* vp, const char* name)
{
    return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR);
}

extern JS_FRIEND_API(void)
js::RemoveRawValueRoot(JSContext* cx, Value* vp)
{
    RemoveRoot(cx->runtime(), vp);
}

void
js::RemoveRoot(JSRuntime* rt, void* rp)
{
    rt->gc.removeRoot(rp);
}

void
GCRuntime::setMaxMallocBytes(size_t value)
{
    /*
     * For compatibility, treat any value that exceeds PTRDIFF_T_MAX as
     * meaning that value.
     */
    maxMallocBytes = (ptrdiff_t(value) >= 0) ? value : size_t(-1) >> 1;
    resetMallocBytes();
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
        zone->setGCMaxMallocBytes(value);
}

void
GCRuntime::resetMallocBytes()
{
    mallocBytes = ptrdiff_t(maxMallocBytes);
    mallocGCTriggered = false;
}

void
GCRuntime::updateMallocCounter(JS::Zone* zone, size_t nbytes)
{
    mallocBytes -= ptrdiff_t(nbytes);
    if (MOZ_UNLIKELY(isTooMuchMalloc()))
        onTooMuchMalloc();
    else if (zone)
        zone->updateMallocCounter(nbytes);
}

void
GCRuntime::onTooMuchMalloc()
{
    if (!mallocGCTriggered)
        mallocGCTriggered = triggerGC(JS::gcreason::TOO_MUCH_MALLOC);
}

/* static */ double
ZoneHeapThreshold::computeZoneHeapGrowthFactorForHeapSize(size_t lastBytes,
                                                          const GCSchedulingTunables& tunables,
                                                          const GCSchedulingState& state)
{
    if (!tunables.isDynamicHeapGrowthEnabled())
        return 3.0;

    // For small zones, our collection heuristics do not matter much: favor
    // something simple in this case.
    if (lastBytes < 1 * 1024 * 1024)
        return tunables.lowFrequencyHeapGrowth();

    // If GCs are not triggering in rapid succession, use a lower threshold so
    // that we will collect garbage sooner.
    if (!state.inHighFrequencyGCMode())
        return tunables.lowFrequencyHeapGrowth();

    // The heap growth factor depends on the heap size after a GC and the GC
    // frequency. For low frequency GCs (more than 1sec between GCs) we let
    // the heap grow to 150%. For high frequency GCs we let the heap grow
    // depending on the heap size:
    //   lastBytes < highFrequencyLowLimit: 300%
    //   lastBytes > highFrequencyHighLimit: 150%
    //   otherwise: linear interpolation between 300% and 150% based on lastBytes

    // Use shorter names to make the operation comprehensible.
    double minRatio = tunables.highFrequencyHeapGrowthMin();
    double maxRatio = tunables.highFrequencyHeapGrowthMax();
    double lowLimit = tunables.highFrequencyLowLimitBytes();
    double highLimit = tunables.highFrequencyHighLimitBytes();

    if (lastBytes <= lowLimit)
        return maxRatio;

    if (lastBytes >= highLimit)
        return minRatio;

    double factor = maxRatio - ((maxRatio - minRatio) * ((lastBytes - lowLimit) /
                                                         (highLimit - lowLimit)));
    JS_ASSERT(factor >= minRatio);
    JS_ASSERT(factor <= maxRatio);
    return factor;
}
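
// Worked example of the interpolation (assumed tunables, for illustration
// only): with minRatio = 1.5, maxRatio = 3.0, lowLimit = 100MB,
// highLimit = 500MB and lastBytes = 300MB, the formula above gives
//     factor = 3.0 - (3.0 - 1.5) * ((300 - 100) / (500 - 100)) = 2.25,
// i.e. the zone may grow to 225% of its post-GC size before the next trigger.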

/* static */ size_t
ZoneHeapThreshold::computeZoneTriggerBytes(double growthFactor, size_t lastBytes,
                                           JSGCInvocationKind gckind,
                                           const GCSchedulingTunables& tunables)
{
    size_t base = gckind == GC_SHRINK
                  ? lastBytes
                  : Max(lastBytes, tunables.gcZoneAllocThresholdBase());
    double trigger = double(base) * growthFactor;
    return size_t(Min(double(tunables.gcMaxBytes()), trigger));
}
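
// For illustration (assumed values): a zone that ends a GC with
// lastBytes == 40MB, growthFactor == 2.25 and gcZoneAllocThresholdBase ==
// 30MB gets a trigger of size_t(40MB * 2.25) == 90MB, clamped to gcMaxBytes.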

void
ZoneHeapThreshold::updateAfterGC(size_t lastBytes, JSGCInvocationKind gckind,
                                 const GCSchedulingTunables& tunables,
                                 const GCSchedulingState& state)
{
    gcHeapGrowthFactor_ = computeZoneHeapGrowthFactorForHeapSize(lastBytes, tunables, state);
    gcTriggerBytes_ = computeZoneTriggerBytes(gcHeapGrowthFactor_, lastBytes, gckind, tunables);
}

void
ZoneHeapThreshold::updateForRemovedArena(const GCSchedulingTunables& tunables)
{
    size_t amount = ArenaSize * gcHeapGrowthFactor_;

    JS_ASSERT(amount > 0);
    JS_ASSERT(gcTriggerBytes_ >= amount);

    if (gcTriggerBytes_ - amount < tunables.gcZoneAllocThresholdBase() * gcHeapGrowthFactor_)
        return;

    gcTriggerBytes_ -= amount;
}

Allocator::Allocator(Zone* zone)
  : zone_(zone)
{}

void
GCMarker::delayMarkingArena(ArenaHeader* aheader)
{
    if (aheader->hasDelayedMarking) {
        /* Arena is already scheduled to be marked later. */
        return;
    }
    aheader->setNextDelayedMarking(unmarkedArenaStackTop);
    unmarkedArenaStackTop = aheader;
}

void
GCMarker::delayMarkingChildren(const void* thing)
{
    const Cell* cell = reinterpret_cast<const Cell*>(thing);
    cell->arenaHeader()->markOverflow = 1;
    delayMarkingArena(cell->arenaHeader());
}

inline void
ArenaLists::prepareForIncrementalGC(JSRuntime* rt)
{
    for (size_t i = 0; i != FINALIZE_LIMIT; ++i) {
        FreeList* freeList = &freeLists[i];
        if (!freeList->isEmpty()) {
            ArenaHeader* aheader = freeList->arenaHeader();
            aheader->allocatedDuringIncremental = true;
            rt->gc.marker.delayMarkingArena(aheader);
        }
    }
}

inline void
GCRuntime::arenaAllocatedDuringGC(JS::Zone* zone, ArenaHeader* arena)
{
    if (zone->needsIncrementalBarrier()) {
        arena->allocatedDuringIncremental = true;
        marker.delayMarkingArena(arena);
    } else if (zone->isGCSweeping()) {
        arena->setNextAllocDuringSweep(arenasAllocatedDuringSweep);
        arenasAllocatedDuringSweep = arena;
    }
}

inline void*
ArenaLists::allocateFromArenaInline(Zone* zone, AllocKind thingKind,
                                    AutoMaybeStartBackgroundAllocation& maybeStartBackgroundAllocation)
{
    /*
     * This function can be called from parallel threads all of which
     * are associated with the same compartment. In that case, each
     * thread will have a distinct ArenaLists. Therefore, whenever we
     * fall through to pickChunk() we must be sure that we are holding
     * a lock.
     */
    AutoLockGC maybeLock;

    bool backgroundFinalizationIsRunning = false;
    ArenaLists::BackgroundFinalizeState* bfs = &backgroundFinalizeState[thingKind];
    if (*bfs != BFS_DONE) {
        /*
         * We cannot search the arena list for free things while background
         * finalization runs and can modify it at any moment. So we always
         * allocate a new arena in that case.
         */
        JSRuntime* rt = zone->runtimeFromAnyThread();
        maybeLock.lock(rt);
        if (*bfs == BFS_RUN) {
            backgroundFinalizationIsRunning = true;
        } else if (*bfs == BFS_JUST_FINISHED) {
            /* See comments before BackgroundFinalizeState definition. */
            *bfs = BFS_DONE;
        } else {
            JS_ASSERT(*bfs == BFS_DONE);
        }
    }

    ArenaHeader* aheader;
    ArenaList* al = &arenaLists[thingKind];
    if (!backgroundFinalizationIsRunning && (aheader = al->arenaAfterCursor())) {
        /*
         * Normally, the empty arenas are returned to the chunk
         * and should not be present on the list. In parallel
         * execution, however, we keep empty arenas in the arena
         * list to avoid synchronizing on the chunk.
         */
        JS_ASSERT(!aheader->isEmpty() || InParallelSection());

        al->moveCursorPast(aheader);

        /*
         * Move the free span stored in the arena to the free list and
         * allocate from it.
         */
        FreeSpan firstFreeSpan = aheader->getFirstFreeSpan();
        freeLists[thingKind].setHead(&firstFreeSpan);
        aheader->setAsFullyUsed();
        if (MOZ_UNLIKELY(zone->wasGCStarted()))
            zone->runtimeFromMainThread()->gc.arenaAllocatedDuringGC(zone, aheader);
        void* thing = freeLists[thingKind].allocate(Arena::thingSize(thingKind));
        JS_ASSERT(thing);   // This allocation is infallible.
        return thing;
    }

    /* Make sure we hold the GC lock before we call pickChunk. */
    JSRuntime* rt = zone->runtimeFromAnyThread();
    if (!maybeLock.locked())
        maybeLock.lock(rt);
    Chunk* chunk = rt->gc.pickChunk(zone, maybeStartBackgroundAllocation);
    if (!chunk)
        return nullptr;

    /*
     * While we still hold the GC lock get an arena from some chunk, mark it
     * as full as its single free span is moved to the free lists, and insert
     * it to the list as a fully allocated arena.
     */
    JS_ASSERT(al->isCursorAtEnd());
    aheader = chunk->allocateArena(zone, thingKind);
    if (!aheader)
        return nullptr;

    if (MOZ_UNLIKELY(zone->wasGCStarted()))
        rt->gc.arenaAllocatedDuringGC(zone, aheader);
    al->insertAtCursor(aheader);

    /*
     * Allocate from a newly allocated arena. The arena will have been set up
     * as fully used during the initialization so we have to re-mark it as
     * empty before allocating.
     */
    JS_ASSERT(!aheader->hasFreeThings());
    Arena* arena = aheader->getArena();
    size_t thingSize = Arena::thingSize(thingKind);
    FreeSpan fullSpan;
    fullSpan.initFinal(arena->thingsStart(thingKind), arena->thingsEnd() - thingSize, thingSize);
    freeLists[thingKind].setHead(&fullSpan);
    return freeLists[thingKind].allocate(thingSize);
}

void*
ArenaLists::allocateFromArena(JS::Zone* zone, AllocKind thingKind)
{
    AutoMaybeStartBackgroundAllocation maybeStartBackgroundAllocation;
    return allocateFromArenaInline(zone, thingKind, maybeStartBackgroundAllocation);
}

void
ArenaLists::wipeDuringParallelExecution(JSRuntime* rt)
{
    JS_ASSERT(InParallelSection());

    // First, check that all objects we have allocated are eligible
    // for background finalization. The idea is that we will free
    // (below) ALL background finalizable objects, because we know (by
    // the rules of parallel execution) they are not reachable except
    // by other thread-local objects. However, if there were any
    // object ineligible for background finalization, it might retain
    // a reference to one of these background finalizable objects, and
    // that would be bad.
    for (unsigned i = 0; i < FINALIZE_LAST; i++) {
        AllocKind thingKind = AllocKind(i);
        if (!IsBackgroundFinalized(thingKind) && !arenaLists[thingKind].isEmpty())
            return;
    }

    // Finalize all background finalizable objects immediately and
    // return the (now empty) arenas back to the arena list.
    FreeOp fop(rt, false);
    for (unsigned i = 0; i < FINALIZE_OBJECT_LAST; i++) {
        AllocKind thingKind = AllocKind(i);

        if (!IsBackgroundFinalized(thingKind))
            continue;

        if (!arenaLists[i].isEmpty()) {
            purge(thingKind);
            forceFinalizeNow(&fop, thingKind);
        }
    }
}

bool
GCRuntime::shouldCompact()
{
#ifdef JSGC_COMPACTING
    return invocationKind == GC_SHRINK && !compactingDisabled;
#else
    return false;
#endif
}

#ifdef JSGC_COMPACTING

void
GCRuntime::disableCompactingGC()
{
    ++rt->gc.compactingDisabled;
}

void
GCRuntime::enableCompactingGC()
{
    JS_ASSERT(compactingDisabled > 0);
    --compactingDisabled;
}

AutoDisableCompactingGC::AutoDisableCompactingGC(JSRuntime* rt)
  : gc(rt->gc)
{
    gc.disableCompactingGC();
}

AutoDisableCompactingGC::~AutoDisableCompactingGC()
{
    gc.enableCompactingGC();
}
static void
ForwardCell(Cell* dest, Cell* src)
{
    // Mark a cell as having been relocated and store a forwarding pointer to
    // the new location.
    MOZ_ASSERT(src->tenuredZone() == dest->tenuredZone());

    // Putting the values this way round is a terrible hack to make
    // ObjectImpl::zone() work on forwarded objects.
    MOZ_ASSERT(ObjectImpl::offsetOfShape() == 0);
    uintptr_t* ptr = reinterpret_cast<uintptr_t*>(src);
    ptr[0] = reinterpret_cast<uintptr_t>(dest); // Forwarding address
    ptr[1] = ForwardedCellMagicValue; // Moved!
}
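/*
 * After ForwardCell(dst, src) the first two words of |src| look like this
 * (an illustrative layout, assuming the representation written above):
 *
 *     word 0: uintptr_t(dst)             -- the forwarding address
 *     word 1: ForwardedCellMagicValue    -- marks the cell as relocated
 *
 * IsForwarded(cell) can then test word 1, and Forwarded(cell) read word 0.
 */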
static bool
ArenaContainsGlobal(ArenaHeader* arena)
{
    if (arena->getAllocKind() > FINALIZE_OBJECT_LAST)
        return false;

    for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
        JSObject* obj = static_cast<JSObject*>(i.getCell());
        if (obj->is<GlobalObject>())
            return true;
    }

    return false;
}
static bool
CanRelocateArena(ArenaHeader* arena)
{
    /*
     * We can't currently move global objects because their address is baked
     * into compiled code. We therefore skip moving the contents of any arena
     * containing a global if ion or baseline are enabled.
     */
    JSRuntime* rt = arena->zone->runtimeFromMainThread();
    return arena->getAllocKind() <= FINALIZE_OBJECT_LAST &&
           ((!rt->options().baseline() && !rt->options().ion()) || !ArenaContainsGlobal(arena));
}
static bool
ShouldRelocateArena(ArenaHeader* arena)
{
    if (arena->zone->runtimeFromMainThread()->gc.zeal() == ZealCompactValue)
        return true;

    /*
     * Eventually, this will be based on brilliant heuristics that look at fill
     * percentage and fragmentation and... stuff.
     */
    return arena->hasFreeThings();
}
/*
 * Choose some arenas to relocate all cells out of and remove them from the
 * arena list. Return the head of the list of arenas to relocate.
 */
ArenaHeader*
ArenaList::pickArenasToRelocate()
{
    ArenaHeader* head = nullptr;
    ArenaHeader** tailp = &head;

    // TODO: Only scan through the arenas with space available.
    ArenaHeader** arenap = &head_;
    while (*arenap) {
        ArenaHeader* arena = *arenap;

        if (CanRelocateArena(arena) && ShouldRelocateArena(arena)) {
            // Remove from arena list
            if (cursorp_ == &arena->next)
                cursorp_ = arenap;
            *arenap = arena->next;
            arena->next = nullptr;

            // Append to relocation list
            *tailp = arena;
            tailp = &arena->next;
        } else {
            arenap = &arena->next;
        }
    }

    return head;
}
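/*
 * The loop above uses a pointer-to-pointer (|arenap|) so that unlinking an
 * arena is a single store. For example, removing B from A -> B -> C while
 * |arenap == &A->next| is just |*arenap = B->next|, leaving A -> C with no
 * special case for the list head.
 */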
static bool
PtrIsInRange(void* ptr, void* start, size_t length)
{
    return uintptr_t(ptr) - uintptr_t(start) < length;
}
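/*
 * Note that the single unsigned comparison above also rejects pointers below
 * |start|: e.g. with start = 0x1000 and length = 0x100, ptr = 0x0fff gives
 * uintptr_t(0x0fff) - uintptr_t(0x1000), which wraps to a huge value and so
 * fails the |< length| test.
 */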
static bool
RelocateCell(Zone* zone, Cell* src, AllocKind thingKind, size_t thingSize)
{
    // Allocate a new cell.
    void* dst = zone->allocator.arenas.allocateFromFreeList(thingKind, thingSize);
    if (!dst)
        dst = js::gc::ArenaLists::refillFreeListInGC(zone, thingKind);
    if (!dst)
        return false;

    // Copy source cell contents to destination.
    memcpy(dst, src, thingSize);

    // Fixup the pointer to inline object elements if necessary.
    if (thingKind <= FINALIZE_OBJECT_LAST) {
        JSObject* srcObj = static_cast<JSObject*>(src);
        JSObject* dstObj = static_cast<JSObject*>(dst);
        if (srcObj->hasFixedElements())
            dstObj->setFixedElements();

        if (srcObj->is<ArrayBufferObject>()) {
            // We must fix up any inline data pointers while we know the source
            // object and before we mark any of the views.
            ArrayBufferObject::fixupDataPointerAfterMovingGC(
                srcObj->as<ArrayBufferObject>(), dstObj->as<ArrayBufferObject>());
        } else if (srcObj->is<TypedArrayObject>()) {
            TypedArrayObject& typedArray = srcObj->as<TypedArrayObject>();
            if (!typedArray.hasBuffer()) {
                JS_ASSERT(srcObj->getPrivate() ==
                          srcObj->fixedData(TypedArrayObject::FIXED_DATA_START));
                dstObj->setPrivate(dstObj->fixedData(TypedArrayObject::FIXED_DATA_START));
            }
        }

        JS_ASSERT_IF(dstObj->isNative(),
                     !PtrIsInRange((HeapSlot*)dstObj->getDenseElements(), src, thingSize));
    }

    // Copy the mark bits.
    static_cast<Cell*>(dst)->copyMarkBitsFrom(src);

    // Mark source cell as forwarded and leave a pointer to the destination.
    ForwardCell(static_cast<Cell*>(dst), src);

    return true;
}
static bool
RelocateArena(ArenaHeader* aheader)
{
    JS_ASSERT(aheader->allocated());
    JS_ASSERT(!aheader->hasDelayedMarking);
    JS_ASSERT(!aheader->markOverflow);
    JS_ASSERT(!aheader->allocatedDuringIncremental);

    Zone* zone = aheader->zone;

    AllocKind thingKind = aheader->getAllocKind();
    size_t thingSize = aheader->getThingSize();

    for (ArenaCellIterUnderFinalize i(aheader); !i.done(); i.next()) {
        if (!RelocateCell(zone, i.getCell(), thingKind, thingSize)) {
            MOZ_CRASH(); // TODO: Handle failure here.
            return false;
        }
    }

    return true;
}
/*
 * Relocate all arenas identified by pickArenasToRelocate: for each arena,
 * relocate each cell within it, then tack it onto a list of relocated arenas.
 * Currently, we allow the relocation to fail, in which case the arena will be
 * moved back onto the list of arenas with space available. (I did this
 * originally to test my list manipulation before implementing the actual
 * moving, with half a thought to allowing pinning (moving only a portion of
 * the cells in an arena), but now it's probably just dead weight. FIXME)
 */
ArenaHeader*
ArenaList::relocateArenas(ArenaHeader* toRelocate, ArenaHeader* relocated)
{
    while (ArenaHeader* arena = toRelocate) {
        toRelocate = arena->next;

        if (RelocateArena(arena)) {
            // Prepend to list of relocated arenas
            arena->next = relocated;
            relocated = arena;
        } else {
            // For some reason, the arena did not end up empty. Prepend it to
            // the portion of the list that the cursor is pointing to (the
            // arenas with space available) so that it will be used for future
            // allocations.
            JS_ASSERT(arena->hasFreeThings());
            insertAtCursor(arena);
        }
    }

    return relocated;
}
ArenaHeader*
ArenaLists::relocateArenas(ArenaHeader* relocatedList)
{
    // Flush all the freeLists back into the arena headers
    purge();
    checkEmptyFreeLists();

    for (size_t i = 0; i < FINALIZE_LIMIT; i++) {
        ArenaList& al = arenaLists[i];
        ArenaHeader* toRelocate = al.pickArenasToRelocate();
        if (toRelocate)
            relocatedList = al.relocateArenas(toRelocate, relocatedList);
    }

    /*
     * When we allocate new locations for cells, we use
     * allocateFromFreeList(). Reset the free list again so that
     * AutoCopyFreeListToArenasForGC doesn't complain that the free lists
     * are different now.
     */
    purge();
    checkEmptyFreeLists();

    return relocatedList;
}
ArenaHeader*
GCRuntime::relocateArenas()
{
    gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT_MOVE);

    ArenaHeader* relocatedList = nullptr;
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        JS_ASSERT(zone->isGCFinished());
        JS_ASSERT(!zone->isPreservingCode());

        // We cannot move atoms as we depend on their addresses being constant.
        if (!rt->isAtomsZone(zone)) {
            zone->setGCState(Zone::Compact);
            relocatedList = zone->allocator.arenas.relocateArenas(relocatedList);
        }
    }

    return relocatedList;
}
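/*
 * In outline, the compacting phase runs in three steps (a sketch of how the
 * pieces defined in this section fit together):
 *
 *     ArenaHeader* relocated = relocateArenas();   // move cells, leave
 *                                                  // forwarding pointers
 *     updatePointersToRelocatedCells();            // fix every reference
 *     releaseRelocatedArenas(relocated);           // return arenas to chunks
 */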
struct MovingTracer : JSTracer {
    MovingTracer(JSRuntime* rt) : JSTracer(rt, Visit, TraceWeakMapValues) {}

    static void Visit(JSTracer* jstrc, void** thingp, JSGCTraceKind kind);
    static void Sweep(JSTracer* jstrc);
};

void
MovingTracer::Visit(JSTracer* jstrc, void** thingp, JSGCTraceKind kind)
{
    Cell* thing = static_cast<Cell*>(*thingp);
    Zone* zone = thing->tenuredZoneFromAnyThread();
    if (!zone->isGCCompacting()) {
        JS_ASSERT(!IsForwarded(thing));
        return;
    }
    JS_ASSERT(CurrentThreadCanAccessZone(zone));

    if (IsForwarded(thing)) {
        Cell* dst = Forwarded(thing);
        *thingp = dst;
    }
}
void
MovingTracer::Sweep(JSTracer* jstrc)
{
    JSRuntime* rt = jstrc->runtime();
    FreeOp* fop = rt->defaultFreeOp();

    WatchpointMap::sweepAll(rt);

    Debugger::sweepAll(fop);

    for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
        if (zone->isCollecting()) {
            bool oom = false;
            zone->sweep(fop, false, &oom);

            for (CompartmentsInZoneIter c(zone); !c.done(); c.next()) {
                c->sweep(fop, false);
            }
        } else {
            /* Update cross compartment wrappers into moved zones. */
            for (CompartmentsInZoneIter c(zone); !c.done(); c.next())
                c->sweepCrossCompartmentWrappers();
        }
    }

    /* Type inference may put more blocks here to free. */
    rt->freeLifoAlloc.freeAll();

    /* Clear the new object cache as this can contain cell pointers. */
    rt->newObjectCache.purge();
}
/*
 * Update the internal pointers in a single cell.
 */
static void
UpdateCellPointers(MovingTracer* trc, Cell* cell, JSGCTraceKind traceKind) {
    TraceChildren(trc, cell, traceKind);

    if (traceKind == JSTRACE_SHAPE) {
        Shape* shape = static_cast<Shape*>(cell);
        shape->fixupAfterMovingGC();
    } else if (traceKind == JSTRACE_BASE_SHAPE) {
        BaseShape* base = static_cast<BaseShape*>(cell);
        base->fixupAfterMovingGC();
    }
}
/*
 * Update pointers to relocated cells by doing a full heap traversal and sweep.
 *
 * The latter is necessary to update weak references which are not marked as
 * part of the traversal.
 */
void
GCRuntime::updatePointersToRelocatedCells()
{
    JS_ASSERT(rt->currentThreadHasExclusiveAccess());

    gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT_UPDATE);
    MovingTracer trc(rt);

    // TODO: We may need to fix up other weak pointers here.

    // Fixup compartment global pointers as these get accessed during marking.
    for (GCCompartmentsIter comp(rt); !comp.done(); comp.next())
        comp->fixupAfterMovingGC();

    // Fixup cross compartment wrappers as we assert the existence of wrappers in the map.
    for (CompartmentsIter comp(rt, SkipAtoms); !comp.done(); comp.next())
        comp->fixupCrossCompartmentWrappers(&trc);

    // Fixup generators as these are not normally traced.
    for (ContextIter i(rt); !i.done(); i.next()) {
        for (JSGenerator* gen = i.get()->innermostGenerator(); gen; gen = gen->prevGenerator)
            gen->obj = MaybeForwarded(gen->obj.get());
    }

    // Iterate through all allocated cells to update internal pointers.
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        ArenaLists& al = zone->allocator.arenas;
        for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) {
            AllocKind thingKind = static_cast<AllocKind>(i);
            JSGCTraceKind traceKind = MapAllocToTraceKind(thingKind);
            for (ArenaHeader* arena = al.getFirstArena(thingKind); arena; arena = arena->next) {
                for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
                    UpdateCellPointers(&trc, i.getCell(), traceKind);
                }
            }
        }
    }

    // Mark roots to update them.
    markRuntime(&trc, MarkRuntime);
    Debugger::markAll(&trc);
    Debugger::markCrossCompartmentDebuggerObjectReferents(&trc);

    for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
        WeakMapBase::markAll(c, &trc);
        if (c->watchpointMap)
            c->watchpointMap->markAll(&trc);
    }

    // Mark all gray roots, making sure we call the trace callback to get the
    // current set.
    marker.resetBufferedGrayRoots();
    markAllGrayReferences(gcstats::PHASE_COMPACT_UPDATE_GRAY);

    MovingTracer::Sweep(&trc);
}
void
GCRuntime::releaseRelocatedArenas(ArenaHeader* relocatedList)
{
    // Release the relocated arenas, now containing only forwarding pointers
#ifdef DEBUG
    for (ArenaHeader* arena = relocatedList; arena; arena = arena->next) {
        for (ArenaCellIterUnderFinalize i(arena); !i.done(); i.next()) {
            Cell* src = i.getCell();
            JS_ASSERT(IsForwarded(src));
            Cell* dest = Forwarded(src);
            JS_ASSERT(src->isMarked(BLACK) == dest->isMarked(BLACK));
            JS_ASSERT(src->isMarked(GRAY) == dest->isMarked(GRAY));
        }
    }
#endif

    while (relocatedList) {
        ArenaHeader* aheader = relocatedList;
        relocatedList = relocatedList->next;

        // Mark arena as empty
        AllocKind thingKind = aheader->getAllocKind();
        size_t thingSize = aheader->getThingSize();
        Arena* arena = aheader->getArena();
        FreeSpan fullSpan;
        fullSpan.initFinal(arena->thingsStart(thingKind), arena->thingsEnd() - thingSize, thingSize);
        aheader->setFirstFreeSpan(&fullSpan);

#if defined(JS_CRASH_DIAGNOSTICS) || defined(JS_GC_ZEAL)
        JS_POISON(reinterpret_cast<void*>(arena->thingsStart(thingKind)),
                  JS_MOVED_TENURED_PATTERN, Arena::thingsSpan(thingSize));
#endif

        aheader->chunk()->releaseArena(aheader);
    }

    AutoLockGC lock(rt);
    expireChunksAndArenas(true);
}

#endif // JSGC_COMPACTING
void
ArenaLists::finalizeNow(FreeOp* fop, AllocKind thingKind)
{
    JS_ASSERT(!IsBackgroundFinalized(thingKind));
    forceFinalizeNow(fop, thingKind);
}

void
ArenaLists::forceFinalizeNow(FreeOp* fop, AllocKind thingKind)
{
    JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);

    ArenaHeader* arenas = arenaLists[thingKind].head();
    if (!arenas)
        return;
    arenaLists[thingKind].clear();

    size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(thingKind));
    SortedArenaList finalizedSorted(thingsPerArena);

    SliceBudget budget;
    FinalizeArenas(fop, &arenas, finalizedSorted, thingKind, budget);
    JS_ASSERT(!arenas);

    arenaLists[thingKind] = finalizedSorted.toArenaList();
}
void
ArenaLists::queueForForegroundSweep(FreeOp* fop, AllocKind thingKind)
{
    JS_ASSERT(!IsBackgroundFinalized(thingKind));
    JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
    JS_ASSERT(!arenaListsToSweep[thingKind]);

    arenaListsToSweep[thingKind] = arenaLists[thingKind].head();
    arenaLists[thingKind].clear();
}
void
ArenaLists::queueForBackgroundSweep(FreeOp* fop, AllocKind thingKind)
{
    JS_ASSERT(IsBackgroundFinalized(thingKind));
    JS_ASSERT(!fop->runtime()->gc.isBackgroundSweeping());

    ArenaList* al = &arenaLists[thingKind];
    if (al->isEmpty()) {
        JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
        return;
    }

    /*
     * The state can be done, or just-finished if we have not allocated any GC
     * things from the arena list after the previous background finalization.
     */
    JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE ||
              backgroundFinalizeState[thingKind] == BFS_JUST_FINISHED);

    arenaListsToSweep[thingKind] = al->head();
    al->clear();
    backgroundFinalizeState[thingKind] = BFS_RUN;
}
/*static*/ void
ArenaLists::backgroundFinalize(FreeOp* fop, ArenaHeader* listHead, bool onBackgroundThread)
{
    JS_ASSERT(listHead);
    AllocKind thingKind = listHead->getAllocKind();
    Zone* zone = listHead->zone;

    size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(thingKind));
    SortedArenaList finalizedSorted(thingsPerArena);

    SliceBudget budget;
    FinalizeArenas(fop, &listHead, finalizedSorted, thingKind, budget);
    JS_ASSERT(!listHead);

    // When arenas are queued for background finalization, all arenas are moved
    // to arenaListsToSweep[], leaving the arenaLists[] empty. However, new
    // arenas may be allocated before background finalization finishes; now that
    // finalization is complete, we want to merge these lists back together.
    ArenaLists* lists = &zone->allocator.arenas;
    ArenaList* al = &lists->arenaLists[thingKind];

    // Flatten |finalizedSorted| into a regular ArenaList.
    ArenaList finalized = finalizedSorted.toArenaList();

    // Store this for later, since merging may change the state of |finalized|.
    bool allClear = finalized.isEmpty();

    AutoLockGC lock(fop->runtime());
    JS_ASSERT(lists->backgroundFinalizeState[thingKind] == BFS_RUN);

    // Join |al| and |finalized| into a single list.
    *al = finalized.insertListWithCursorAtEnd(*al);

    /*
     * We must set the state to BFS_JUST_FINISHED if we are running on the
     * background thread and we have touched the arena list, even if we add to
     * the list only fully allocated arenas without any free things. It ensures
     * that the allocation thread takes the GC lock and all writes to the free
     * list elements are propagated. As we always take the GC lock when
     * allocating new arenas from the chunks we can set the state to BFS_DONE if
     * we have released all finalized arenas back to their chunks.
     */
    if (onBackgroundThread && !allClear)
        lists->backgroundFinalizeState[thingKind] = BFS_JUST_FINISHED;
    else
        lists->backgroundFinalizeState[thingKind] = BFS_DONE;

    lists->arenaListsToSweep[thingKind] = nullptr;
}
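/*
 * A sketch of the background finalization state machine for one AllocKind,
 * as implied by the functions above:
 *
 *     BFS_DONE --queueForBackgroundSweep--> BFS_RUN
 *     BFS_RUN --backgroundFinalize (background thread, list touched)-->
 *         BFS_JUST_FINISHED
 *     BFS_RUN --backgroundFinalize (otherwise)--> BFS_DONE
 *     BFS_JUST_FINISHED --next allocation under the GC lock--> BFS_DONE
 *
 * BFS_JUST_FINISHED only exists to force the allocating thread to take the
 * GC lock once, publishing the helper thread's writes to the arena lists.
 */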
void
ArenaLists::queueObjectsForSweep(FreeOp* fop)
{
    gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_OBJECT);

    finalizeNow(fop, FINALIZE_OBJECT0);
    finalizeNow(fop, FINALIZE_OBJECT2);
    finalizeNow(fop, FINALIZE_OBJECT4);
    finalizeNow(fop, FINALIZE_OBJECT8);
    finalizeNow(fop, FINALIZE_OBJECT12);
    finalizeNow(fop, FINALIZE_OBJECT16);

    queueForBackgroundSweep(fop, FINALIZE_OBJECT0_BACKGROUND);
    queueForBackgroundSweep(fop, FINALIZE_OBJECT2_BACKGROUND);
    queueForBackgroundSweep(fop, FINALIZE_OBJECT4_BACKGROUND);
    queueForBackgroundSweep(fop, FINALIZE_OBJECT8_BACKGROUND);
    queueForBackgroundSweep(fop, FINALIZE_OBJECT12_BACKGROUND);
    queueForBackgroundSweep(fop, FINALIZE_OBJECT16_BACKGROUND);
}
void
ArenaLists::queueStringsAndSymbolsForSweep(FreeOp* fop)
{
    gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_STRING);

    queueForBackgroundSweep(fop, FINALIZE_FAT_INLINE_STRING);
    queueForBackgroundSweep(fop, FINALIZE_STRING);
    queueForBackgroundSweep(fop, FINALIZE_SYMBOL);

    queueForForegroundSweep(fop, FINALIZE_EXTERNAL_STRING);
}
void
ArenaLists::queueScriptsForSweep(FreeOp* fop)
{
    gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_SCRIPT);
    queueForForegroundSweep(fop, FINALIZE_SCRIPT);
    queueForForegroundSweep(fop, FINALIZE_LAZY_SCRIPT);
}
void
ArenaLists::queueJitCodeForSweep(FreeOp* fop)
{
    gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_JITCODE);
    queueForForegroundSweep(fop, FINALIZE_JITCODE);
}
void
ArenaLists::queueShapesForSweep(FreeOp* fop)
{
    gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_SHAPE);

    queueForBackgroundSweep(fop, FINALIZE_SHAPE);
    queueForBackgroundSweep(fop, FINALIZE_BASE_SHAPE);
    queueForBackgroundSweep(fop, FINALIZE_TYPE_OBJECT);
}
static void*
RunLastDitchGC(JSContext* cx, JS::Zone* zone, AllocKind thingKind)
{
    /*
     * In parallel sections, we do not attempt to refill the free list
     * and hence do not encounter last ditch GC.
     */
    JS_ASSERT(!InParallelSection());

    PrepareZoneForGC(zone);

    JSRuntime* rt = cx->runtime();

    /* The last ditch GC preserves all atoms. */
    AutoKeepAtoms keepAtoms(cx->perThreadData);
    rt->gc.gc(GC_NORMAL, JS::gcreason::LAST_DITCH);

    /*
     * The JSGC_END callback can legitimately allocate new GC
     * things and populate the free list. If that happens, just
     * return that list head.
     */
    size_t thingSize = Arena::thingSize(thingKind);
    if (void* thing = zone->allocator.arenas.allocateFromFreeList(thingKind, thingSize))
        return thing;

    return nullptr;
}
template <AllowGC allowGC>
/* static */ void*
ArenaLists::refillFreeList(ThreadSafeContext* cx, AllocKind thingKind)
{
    JS_ASSERT(cx->allocator()->arenas.freeLists[thingKind].isEmpty());
    JS_ASSERT_IF(cx->isJSContext(), !cx->asJSContext()->runtime()->isHeapBusy());

    Zone* zone = cx->allocator()->zone_;

    bool runGC = cx->allowGC() && allowGC &&
                 cx->asJSContext()->runtime()->gc.incrementalState != NO_INCREMENTAL &&
                 zone->usage.gcBytes() > zone->threshold.gcTriggerBytes();

    JS_ASSERT_IF(cx->isJSContext() && allowGC,
                 !cx->asJSContext()->runtime()->currentThreadHasExclusiveAccess());

    for (;;) {
        if (MOZ_UNLIKELY(runGC)) {
            if (void* thing = RunLastDitchGC(cx->asJSContext(), zone, thingKind))
                return thing;
        }

        AutoMaybeStartBackgroundAllocation maybeStartBackgroundAllocation;

        if (cx->isJSContext()) {
            /*
             * allocateFromArena may fail while the background finalization still
             * runs. If we are on the main thread, we want to wait for it to finish
             * and restart. However, checking for that is racy as the background
             * finalization could free some things after allocateFromArena decided
             * to fail but at this point it may have already stopped. To avoid
             * this race we always try to allocate twice.
             */
            for (bool secondAttempt = false; ; secondAttempt = true) {
                void* thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind,
                                                                              maybeStartBackgroundAllocation);
                if (MOZ_LIKELY(!!thing))
                    return thing;
                if (secondAttempt)
                    break;

                cx->asJSContext()->runtime()->gc.waitBackgroundSweepEnd();
            }
        } else {
            /*
             * If we're off the main thread, we try to allocate once and
             * return whatever value we get. If we aren't in a ForkJoin
             * session (i.e. we are in a helper thread async with the main
             * thread), we need to first ensure the main thread is not in a GC
             * session.
             */
            mozilla::Maybe<AutoLockHelperThreadState> lock;
            JSRuntime* rt = zone->runtimeFromAnyThread();
            if (rt->exclusiveThreadsPresent()) {
                lock.emplace();
                while (rt->isHeapBusy())
                    HelperThreadState().wait(GlobalHelperThreadState::PRODUCER);
            }

            void* thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind,
                                                                          maybeStartBackgroundAllocation);
            if (thing)
                return thing;
            break;
        }

        if (!cx->allowGC() || !allowGC)
            return nullptr;

        /*
         * We failed to allocate. Run the GC if we haven't done it already.
         * Otherwise report OOM.
         */
        if (runGC)
            break;
        runGC = true;
    }

    JS_ASSERT(allowGC);
    js_ReportOutOfMemory(cx);
    return nullptr;
}

template void*
ArenaLists::refillFreeList<NoGC>(ThreadSafeContext* cx, AllocKind thingKind);

template void*
ArenaLists::refillFreeList<CanGC>(ThreadSafeContext* cx, AllocKind thingKind);
/* static */ void*
ArenaLists::refillFreeListInGC(Zone* zone, AllocKind thingKind)
{
    /*
     * Called by compacting GC to refill a free list while we are in a GC.
     */

    Allocator& allocator = zone->allocator;
    JS_ASSERT(allocator.arenas.freeLists[thingKind].isEmpty());
    mozilla::DebugOnly<JSRuntime*> rt = zone->runtimeFromMainThread();
    JS_ASSERT(rt->isHeapMajorCollecting());
    JS_ASSERT(!rt->gc.isBackgroundSweeping());

    return allocator.arenas.allocateFromArena(zone, thingKind);
}
/* static */ int64_t
SliceBudget::TimeBudget(int64_t millis)
{
    return millis * PRMJ_USEC_PER_MSEC;
}

/* static */ int64_t
SliceBudget::WorkBudget(int64_t work)
{
    /* For work = 0 not to mean Unlimited, we subtract 1. */
    return -work - 1;
}

SliceBudget::SliceBudget()
{
    reset();
}

SliceBudget::SliceBudget(int64_t budget)
{
    if (budget == Unlimited) {
        reset();
    } else if (budget > 0) {
        deadline = PRMJ_Now() + budget;
        counter = CounterReset;
    } else {
        deadline = 0;
        counter = -budget - 1;
    }
}
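/*
 * Example of the budget encoding above (values are illustrative):
 *
 *     SliceBudget(SliceBudget::TimeBudget(10));  // 10 ms: budget = 10000 us,
 *                                                // > 0, so deadline = now + 10000
 *     SliceBudget(SliceBudget::WorkBudget(500)); // budget = -501, < 0, so
 *                                                // counter = -(-501) - 1 = 500
 *     SliceBudget(SliceBudget::Unlimited);       // never expires
 */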
bool
SliceBudget::checkOverBudget()
{
    bool over = PRMJ_Now() > deadline;
    if (!over)
        counter = CounterReset;
    return over;
}
void
js::MarkCompartmentActive(InterpreterFrame* fp)
{
    fp->script()->compartment()->zone()->active = true;
}
void
GCRuntime::requestInterrupt(JS::gcreason::Reason reason)
{
    if (isNeeded)
        return;

    isNeeded = true;
    triggerReason = reason;
    rt->requestInterrupt(JSRuntime::RequestInterruptMainThread);
}
bool
GCRuntime::triggerGC(JS::gcreason::Reason reason)
{
    /* Wait till end of parallel section to trigger GC. */
    if (InParallelSection()) {
        ForkJoinContext::current()->requestGC(reason);
        return true;
    }

    /*
     * Don't trigger GCs if this is being called off the main thread from
     * onTooMuchMalloc().
     */
    if (!CurrentThreadCanAccessRuntime(rt))
        return false;

    /* Don't trigger GCs when allocating under the interrupt callback lock. */
    if (rt->currentThreadOwnsInterruptLock())
        return false;

    /* GC is already running. */
    if (rt->isHeapCollecting())
        return false;

    JS::PrepareForFullGC(rt);
    requestInterrupt(reason);
    return true;
}
bool
GCRuntime::triggerZoneGC(Zone* zone, JS::gcreason::Reason reason)
{
    /*
     * If parallel threads are running, wait till they
     * are stopped to trigger GC.
     */
    if (InParallelSection()) {
        ForkJoinContext::current()->requestZoneGC(zone, reason);
        return true;
    }

    /* Zones in use by a thread with an exclusive context can't be collected. */
    if (zone->usedByExclusiveThread)
        return false;

    /* Don't trigger GCs when allocating under the interrupt callback lock. */
    if (rt->currentThreadOwnsInterruptLock())
        return false;

    /* GC is already running. */
    if (rt->isHeapCollecting())
        return false;

    if (zealMode == ZealAllocValue) {
        triggerGC(reason);
        return true;
    }

    if (rt->isAtomsZone(zone)) {
        /* We can't do a zone GC of the atoms compartment. */
        triggerGC(reason);
        return true;
    }

    PrepareZoneForGC(zone);
    requestInterrupt(reason);
    return true;
}
bool
GCRuntime::maybeGC(Zone* zone)
{
    JS_ASSERT(CurrentThreadCanAccessRuntime(rt));

    if (zealMode == ZealAllocValue || zealMode == ZealPokeValue) {
        JS::PrepareForFullGC(rt);
        gc(GC_NORMAL, JS::gcreason::MAYBEGC);
        return true;
    }

    if (isNeeded) {
        gcSlice(GC_NORMAL, JS::gcreason::MAYBEGC);
        return true;
    }

    double factor = schedulingState.inHighFrequencyGCMode() ? 0.85 : 0.9;
    if (zone->usage.gcBytes() > 1024 * 1024 &&
        zone->usage.gcBytes() >= factor * zone->threshold.gcTriggerBytes() &&
        incrementalState == NO_INCREMENTAL &&
        !isBackgroundSweeping())
    {
        PrepareZoneForGC(zone);
        gcSlice(GC_NORMAL, JS::gcreason::MAYBEGC);
        return true;
    }

    return false;
}
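/*
 * Worked example of the trigger check above: with a 20 MB gcTriggerBytes
 * threshold and the high-frequency factor 0.85, a zone passes the size test
 * once usage.gcBytes() reaches 17 MB (0.85 * 20 MB), provided it is also
 * over the 1 MB floor, no incremental GC is in progress, and background
 * sweeping is idle.
 */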
void
GCRuntime::maybePeriodicFullGC()
{
    /*
     * Trigger a periodic full GC.
     *
     * This is a source of non-determinism, but is not called from the shell.
     *
     * Access to the counters and, on 32 bit, setting gcNextFullGCTime below
     * is not atomic and a race condition could trigger or suppress the GC. We
     * tolerate this.
     */
#ifndef JS_MORE_DETERMINISTIC
    int64_t now = PRMJ_Now();
    if (nextFullGCTime && nextFullGCTime <= now) {
        if (chunkAllocationSinceLastGC ||
            numArenasFreeCommitted > decommitThreshold)
        {
            JS::PrepareForFullGC(rt);
            gcSlice(GC_SHRINK, JS::gcreason::MAYBEGC);
        } else {
            nextFullGCTime = now + GC_IDLE_FULL_SPAN;
        }
    }
#endif
}
void
GCRuntime::decommitArenasFromAvailableList(Chunk** availableListHeadp)
{
    Chunk* chunk = *availableListHeadp;
    if (!chunk)
        return;

    /*
     * Decommit is expensive so we avoid holding the GC lock while calling it.
     *
     * We decommit from the tail of the list to minimize interference with the
     * main thread that may start to allocate things at this point.
     *
     * The arena that is being decommitted outside the GC lock must not be
     * available for allocations either via the free list or via the
     * decommittedArenas bitmap. For that we just fetch the arena from the
     * free list before the decommit, pretending it was allocated. If this
     * arena also is the single free arena in the chunk, then we must remove
     * it from the available list before we release the lock so the allocation
     * thread would not see chunks with no free arenas on the available list.
     *
     * After we retake the lock, we mark the arena as free and decommitted if
     * the decommit was successful. We must also add the chunk back to the
     * available list if we removed it previously or when the main thread
     * has allocated all remaining free arenas in the chunk.
     *
     * We also must make sure that the aheader is not accessed again after we
     * decommit the arena.
     */
    JS_ASSERT(chunk->info.prevp == availableListHeadp);
    while (Chunk* next = chunk->info.next) {
        JS_ASSERT(next->info.prevp == &chunk->info.next);
        chunk = next;
    }

    for (;;) {
        while (chunk->info.numArenasFreeCommitted != 0) {
            ArenaHeader* aheader = chunk->fetchNextFreeArena(rt);

            Chunk** savedPrevp = chunk->info.prevp;
            if (!chunk->hasAvailableArenas())
                chunk->removeFromAvailableList();

            size_t arenaIndex = Chunk::arenaIndex(aheader->arenaAddress());
            bool ok;
            {
                /*
                 * If the main thread waits for the decommit to finish, skip
                 * potentially expensive unlock/lock pair on the contested
                 * lock.
                 */
                Maybe<AutoUnlockGC> maybeUnlock;
                if (!isHeapBusy())
                    maybeUnlock.emplace(rt);
                ok = MarkPagesUnused(aheader->getArena(), ArenaSize);
            }

            if (ok) {
                ++chunk->info.numArenasFree;
                chunk->decommittedArenas.set(arenaIndex);
            } else {
                chunk->addArenaToFreeList(rt, aheader);
            }
            JS_ASSERT(chunk->hasAvailableArenas());
            JS_ASSERT(!chunk->unused());
            if (chunk->info.numArenasFree == 1) {
                /*
                 * Put the chunk back to the available list either at the
                 * point where it was before to preserve the available list
                 * that we enumerate, or, when the allocation thread has fully
                 * used all the previous chunks, at the beginning of the
                 * available list.
                 */
                Chunk** insertPoint = savedPrevp;
                if (savedPrevp != availableListHeadp) {
                    Chunk* prev = Chunk::fromPointerToNext(savedPrevp);
                    if (!prev->hasAvailableArenas())
                        insertPoint = availableListHeadp;
                }
                chunk->insertToAvailableList(insertPoint);
            } else {
                JS_ASSERT(chunk->info.prevp);
            }

            if (chunkAllocationSinceLastGC || !ok) {
                /*
                 * The allocator thread has started to get new chunks. We should stop
                 * to avoid decommitting arenas in just allocated chunks.
                 */
                return;
            }
        }

        /*
         * chunk->info.prevp becomes null when the allocator thread consumed
         * all chunks from the available list.
         */
        JS_ASSERT_IF(chunk->info.prevp, *chunk->info.prevp == chunk);
        if (chunk->info.prevp == availableListHeadp || !chunk->info.prevp)
            break;

        /*
         * prevp exists and is not the list head. It must point to the next
         * field of the previous chunk.
         */
        chunk = chunk->getPrevious();
    }
}
void
GCRuntime::decommitArenas()
{
    decommitArenasFromAvailableList(&systemAvailableChunkListHead);
    decommitArenasFromAvailableList(&userAvailableChunkListHead);
}
/* Must be called with the GC lock taken. */
void
GCRuntime::expireChunksAndArenas(bool shouldShrink)
{
#ifdef JSGC_FJGENERATIONAL
    rt->threadPool.pruneChunkCache();
#endif

    if (Chunk* toFree = expireChunkPool(shouldShrink, false)) {
        AutoUnlockGC unlock(rt);
        freeChunkList(toFree);
    }

    if (shouldShrink)
        decommitArenas();
}
void
GCRuntime::sweepBackgroundThings(bool onBackgroundThread)
{
    /*
     * We must finalize in the correct order, see comments in
     * finalizeObjects.
     */
    FreeOp fop(rt);
    for (int phase = 0 ; phase < BackgroundPhaseCount ; ++phase) {
        for (Zone* zone = sweepingZones; zone; zone = zone->gcNextGraphNode) {
            for (int index = 0 ; index < BackgroundPhaseLength[phase] ; ++index) {
                AllocKind kind = BackgroundPhases[phase][index];
                ArenaHeader* arenas = zone->allocator.arenas.arenaListsToSweep[kind];
                if (arenas)
                    ArenaLists::backgroundFinalize(&fop, arenas, onBackgroundThread);
            }
        }
    }

    sweepingZones = nullptr;
}
void
GCRuntime::assertBackgroundSweepingFinished()
{
#ifdef DEBUG
    JS_ASSERT(!sweepingZones);
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) {
            JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
            JS_ASSERT(zone->allocator.arenas.doneBackgroundFinalize(AllocKind(i)));
        }
    }
#endif
}
static unsigned
GetCPUCount()
{
    static unsigned ncpus = 0;
    if (ncpus == 0) {
#ifdef XP_WIN
        SYSTEM_INFO sysinfo;
        GetSystemInfo(&sysinfo);
        ncpus = unsigned(sysinfo.dwNumberOfProcessors);
#else
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        ncpus = (n > 0) ? unsigned(n) : 1;
#endif
    }
    return ncpus;
}
bool
GCHelperState::init()
{
    if (!(done = PR_NewCondVar(rt->gc.lock)))
        return false;

    if (CanUseExtraThreads()) {
        backgroundAllocation = (GetCPUCount() >= 2);
        HelperThreadState().ensureInitialized();
    } else {
        backgroundAllocation = false;
    }

    return true;
}
void
GCHelperState::finish()
{
    if (!rt->gc.lock) {
        JS_ASSERT(state_ == IDLE);
        return;
    }

    // Wait for any lingering background sweeping to finish.
    waitBackgroundSweepEnd();

    if (done)
        PR_DestroyCondVar(done);
}
GCHelperState::State
GCHelperState::state()
{
    JS_ASSERT(rt->gc.currentThreadOwnsGCLock());
    return state_;
}

void
GCHelperState::setState(State state)
{
    JS_ASSERT(rt->gc.currentThreadOwnsGCLock());
    state_ = state;
}
void
GCHelperState::startBackgroundThread(State newState)
{
    JS_ASSERT(!thread && state() == IDLE && newState != IDLE);
    setState(newState);

    if (!HelperThreadState().gcHelperWorklist().append(this))
        CrashAtUnhandlableOOM("Could not add to pending GC helpers list");
    HelperThreadState().notifyAll(GlobalHelperThreadState::PRODUCER);
}
void
GCHelperState::waitForBackgroundThread()
{
    JS_ASSERT(CurrentThreadCanAccessRuntime(rt));

#ifdef DEBUG
    rt->gc.lockOwner = nullptr;
#endif
    PR_WaitCondVar(done, PR_INTERVAL_NO_TIMEOUT);
#ifdef DEBUG
    rt->gc.lockOwner = PR_GetCurrentThread();
#endif
}
void
GCHelperState::work()
{
    JS_ASSERT(CanUseExtraThreads());

    AutoLockGC lock(rt);

    JS_ASSERT(!thread);
    thread = PR_GetCurrentThread();

    TraceLogger* logger = TraceLoggerForCurrentThread();

    switch (state()) {

      case IDLE:
        MOZ_CRASH("GC helper triggered on idle state");
        break;

      case SWEEPING: {
        AutoTraceLog logSweeping(logger, TraceLogger::GCSweeping);
        doSweep();
        JS_ASSERT(state() == SWEEPING);
        break;
      }

      case ALLOCATING: {
        AutoTraceLog logAllocation(logger, TraceLogger::GCAllocation);
        do {
            Chunk* chunk;
            {
                AutoUnlockGC unlock(rt);
                chunk = Chunk::allocate(rt);
            }

            /* OOM stops the background allocation. */
            if (!chunk)
                break;
            JS_ASSERT(chunk->info.numArenasFreeCommitted == 0);
            rt->gc.chunkPool.put(chunk);
        } while (state() == ALLOCATING && rt->gc.wantBackgroundAllocation());

        JS_ASSERT(state() == ALLOCATING || state() == CANCEL_ALLOCATION);
        break;
      }

      case CANCEL_ALLOCATION:
        break;
    }

    setState(IDLE);
    thread = nullptr;

    PR_NotifyAllCondVar(done);
}
void
GCHelperState::startBackgroundSweep(bool shouldShrink)
{
    JS_ASSERT(CanUseExtraThreads());

    AutoLockHelperThreadState helperLock;
    AutoLockGC lock(rt);
    JS_ASSERT(state() == IDLE);
    JS_ASSERT(!sweepFlag);
    sweepFlag = true;
    shrinkFlag = shouldShrink;
    startBackgroundThread(SWEEPING);
}
/* Must be called with the GC lock taken. */
void
GCHelperState::startBackgroundShrink()
{
    JS_ASSERT(CanUseExtraThreads());
    switch (state()) {
      case IDLE:
        JS_ASSERT(!sweepFlag);
        shrinkFlag = true;
        startBackgroundThread(SWEEPING);
        break;
      case SWEEPING:
        shrinkFlag = true;
        break;
      case ALLOCATING:
      case CANCEL_ALLOCATION:
        /*
         * If we have started background allocation there is nothing to
         * shrink.
         */
        break;
    }
}
void
GCHelperState::waitBackgroundSweepEnd()
{
    AutoLockGC lock(rt);
    while (state() == SWEEPING)
        waitForBackgroundThread();
    if (rt->gc.incrementalState == NO_INCREMENTAL)
        rt->gc.assertBackgroundSweepingFinished();
}
void
GCHelperState::waitBackgroundSweepOrAllocEnd()
{
    AutoLockGC lock(rt);
    if (state() == ALLOCATING)
        setState(CANCEL_ALLOCATION);
    while (state() == SWEEPING || state() == CANCEL_ALLOCATION)
        waitForBackgroundThread();
    if (rt->gc.incrementalState == NO_INCREMENTAL)
        rt->gc.assertBackgroundSweepingFinished();
}
/* Must be called with the GC lock taken. */
void
GCHelperState::startBackgroundAllocationIfIdle()
{
    if (state_ == IDLE)
        startBackgroundThread(ALLOCATING);
}
/* Must be called with the GC lock taken. */
void
GCHelperState::doSweep()
{
    AutoSetThreadIsSweeping threadIsSweeping;

    if (sweepFlag) {
        sweepFlag = false;
        AutoUnlockGC unlock(rt);

        rt->gc.sweepBackgroundThings(true);

        rt->freeLifoAlloc.freeAll();
    }

    bool shrinking = shrinkFlag;
    rt->gc.expireChunksAndArenas(shrinking);

    /*
     * The main thread may have called ShrinkGCBuffers while
     * ExpireChunksAndArenas(rt, false) was running, so we recheck the flag
     * afterwards.
     */
    if (!shrinking && shrinkFlag) {
        shrinkFlag = false;
        rt->gc.expireChunksAndArenas(true);
    }
}
bool
GCHelperState::onBackgroundThread()
{
    return PR_GetCurrentThread() == thread;
}
bool
GCRuntime::shouldReleaseObservedTypes()
{
    bool releaseTypes = false;

#ifdef JS_GC_ZEAL
    if (zealMode != 0)
        releaseTypes = true;
#endif

    /* We may miss the exact target GC due to resets. */
    if (majorGCNumber >= jitReleaseNumber)
        releaseTypes = true;

    if (releaseTypes)
        jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;

    return releaseTypes;
}
/*
 * It's simpler if we preserve the invariant that every zone has at least one
 * compartment. If we know we're deleting the entire zone, then
 * SweepCompartments is allowed to delete all compartments. In this case,
 * |keepAtleastOne| is false. If some objects remain in the zone so that it
 * cannot be deleted, then we set |keepAtleastOne| to true, which prohibits
 * SweepCompartments from deleting every compartment. Instead, it preserves an
 * arbitrary compartment in the zone.
 */
void
Zone::sweepCompartments(FreeOp* fop, bool keepAtleastOne, bool destroyingRuntime)
{
    JSRuntime* rt = runtimeFromMainThread();
    JSDestroyCompartmentCallback callback = rt->destroyCompartmentCallback;

    JSCompartment** read = compartments.begin();
    JSCompartment** end = compartments.end();
    JSCompartment** write = read;
    bool foundOne = false;
    while (read < end) {
        JSCompartment* comp = *read++;
        JS_ASSERT(!rt->isAtomsCompartment(comp));

        /*
         * Don't delete the last compartment if all the ones before it were
         * deleted and keepAtleastOne is true.
         */
        bool dontDelete = read == end && !foundOne && keepAtleastOne;
        if ((!comp->marked && !dontDelete) || destroyingRuntime) {
            if (callback)
                callback(fop, comp);
            if (comp->principals)
                JS_DropPrincipals(rt, comp->principals);
            js_delete(comp);
        } else {
            *write++ = comp;
            foundOne = true;
        }
    }
    compartments.resize(write - compartments.begin());
    JS_ASSERT_IF(keepAtleastOne, !compartments.empty());
}
void
GCRuntime::sweepZones(FreeOp* fop, bool destroyingRuntime)
{
    MOZ_ASSERT_IF(destroyingRuntime, rt->gc.numActiveZoneIters == 0);
    if (rt->gc.numActiveZoneIters)
        return;

    JSZoneCallback callback = rt->destroyZoneCallback;

    /* Skip the atomsCompartment zone. */
    Zone** read = zones.begin() + 1;
    Zone** end = zones.end();
    Zone** write = read;
    JS_ASSERT(zones.length() >= 1);
    JS_ASSERT(rt->isAtomsZone(zones[0]));

    while (read < end) {
        Zone* zone = *read++;

        if (zone->wasGCStarted()) {
            if ((zone->allocator.arenas.arenaListsAreEmpty() && !zone->hasMarkedCompartments()) ||
                destroyingRuntime)
            {
                zone->allocator.arenas.checkEmptyFreeLists();
                if (callback)
                    callback(zone);
                zone->sweepCompartments(fop, false, destroyingRuntime);
                JS_ASSERT(zone->compartments.empty());
                fop->delete_(zone);
                continue;
            }
            zone->sweepCompartments(fop, true, destroyingRuntime);
        }
        *write++ = zone;
    }
    zones.resize(write - zones.begin());
}
static void
PurgeRuntime(JSRuntime* rt)
{
    for (GCCompartmentsIter comp(rt); !comp.done(); comp.next())
        comp->purge();

    rt->freeLifoAlloc.transferUnusedFrom(&rt->tempLifoAlloc);
    rt->interpreterStack().purge(rt);

    rt->gsnCache.purge();
    rt->scopeCoordinateNameCache.purge();
    rt->newObjectCache.purge();
    rt->nativeIterCache.purge();
    rt->uncompressedSourceCache.purge();
    rt->evalCache.clear();
    rt->regExpTestCache.purge();

    if (!rt->hasActiveCompilations())
        rt->parseMapPool().purgeAll();
}
bool
GCRuntime::shouldPreserveJITCode(JSCompartment* comp, int64_t currentTime,
                                 JS::gcreason::Reason reason)
{
    if (cleanUpEverything)
        return false;

    if (alwaysPreserveCode)
        return true;
    if (comp->lastAnimationTime + PRMJ_USEC_PER_SEC >= currentTime)
        return true;
    if (reason == JS::gcreason::DEBUG_GC)
        return true;

    if (comp->jitCompartment() && comp->jitCompartment()->hasRecentParallelActivity())
        return true;

    return false;
}
class CompartmentCheckTracer : public JSTracer
{
  public:
    CompartmentCheckTracer(JSRuntime* rt, JSTraceCallback callback)
      : JSTracer(rt, callback)
    {}

    Cell* src;
    JSGCTraceKind srcKind;
    Zone* zone;
    JSCompartment* compartment;
};
static bool
InCrossCompartmentMap(JSObject* src, Cell* dst, JSGCTraceKind dstKind)
{
    JSCompartment* srccomp = src->compartment();

    if (dstKind == JSTRACE_OBJECT) {
        Value key = ObjectValue(*static_cast<JSObject*>(dst));
        if (WrapperMap::Ptr p = srccomp->lookupWrapper(key)) {
            if (*p->value().unsafeGet() == ObjectValue(*src))
                return true;
        }
    }

    /*
     * If the cross-compartment edge is caused by the debugger, then we don't
     * know the right hashtable key, so we have to iterate.
     */
    for (JSCompartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) {
        if (e.front().key().wrapped == dst && ToMarkable(e.front().value()) == src)
            return true;
    }

    return false;
}
static void
CheckCompartment(CompartmentCheckTracer* trc, JSCompartment* thingCompartment,
                 Cell* thing, JSGCTraceKind kind)
{
    JS_ASSERT(thingCompartment == trc->compartment ||
              trc->runtime()->isAtomsCompartment(thingCompartment) ||
              (trc->srcKind == JSTRACE_OBJECT &&
               InCrossCompartmentMap((JSObject*)trc->src, thing, kind)));
}
static JSCompartment*
CompartmentOfCell(Cell* thing, JSGCTraceKind kind)
{
    if (kind == JSTRACE_OBJECT)
        return static_cast<JSObject*>(thing)->compartment();
    else if (kind == JSTRACE_SHAPE)
        return static_cast<Shape*>(thing)->compartment();
    else if (kind == JSTRACE_BASE_SHAPE)
        return static_cast<BaseShape*>(thing)->compartment();
    else if (kind == JSTRACE_SCRIPT)
        return static_cast<JSScript*>(thing)->compartment();
    else
        return nullptr;
}
static void
CheckCompartmentCallback(JSTracer* trcArg, void** thingp, JSGCTraceKind kind)
{
    CompartmentCheckTracer* trc = static_cast<CompartmentCheckTracer*>(trcArg);
    Cell* thing = (Cell*)*thingp;

    JSCompartment* comp = CompartmentOfCell(thing, kind);
    if (comp && trc->compartment) {
        CheckCompartment(trc, comp, thing, kind);
    } else {
        JS_ASSERT(thing->tenuredZone() == trc->zone ||
                  trc->runtime()->isAtomsZone(thing->tenuredZone()));
    }
}
void
GCRuntime::checkForCompartmentMismatches()
{
    if (disableStrictProxyCheckingCount)
        return;

    CompartmentCheckTracer trc(rt, CheckCompartmentCallback);
    for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
        trc.zone = zone;
        for (size_t thingKind = 0; thingKind < FINALIZE_LAST; thingKind++) {
            for (ZoneCellIterUnderGC i(zone, AllocKind(thingKind)); !i.done(); i.next()) {
                trc.src = i.getCell();
                trc.srcKind = MapAllocToTraceKind(AllocKind(thingKind));
                trc.compartment = CompartmentOfCell(trc.src, trc.srcKind);
                JS_TraceChildren(&trc, trc.src, trc.srcKind);
            }
        }
    }
}
bool
GCRuntime::beginMarkPhase(JS::gcreason::Reason reason)
{
    int64_t currentTime = PRMJ_Now();

#ifdef DEBUG
    if (fullCompartmentChecks)
        checkForCompartmentMismatches();
#endif

    isFull = true;
    bool any = false;

    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        /* Assert that zone state is as we expect */
        JS_ASSERT(!zone->isCollecting());
        JS_ASSERT(!zone->compartments.empty());
        for (unsigned i = 0; i < FINALIZE_LIMIT; ++i)
            JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);

        /* Set up which zones will be collected. */
        if (zone->isGCScheduled()) {
            if (!rt->isAtomsZone(zone)) {
                any = true;
                zone->setGCState(Zone::Mark);
            }
        } else {
            isFull = false;
        }

        zone->setPreservingCode(false);
    }

    for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next()) {
        JS_ASSERT(c->gcLiveArrayBuffers.empty());
        c->marked = false;
        c->scheduledForDestruction = false;
        c->maybeAlive = false;
        if (shouldPreserveJITCode(c, currentTime, reason))
            c->zone()->setPreservingCode(true);
    }

    if (!rt->gc.cleanUpEverything) {
        if (JSCompartment* comp = jit::TopmostIonActivationCompartment(rt))
            comp->zone()->setPreservingCode(true);
    }

    /*
     * Atoms are not in the cross-compartment map. So if there are any
     * zones that are not being collected, we are not allowed to collect
     * atoms. Otherwise, the non-collected zones could contain pointers
     * to atoms that we would miss.
     *
     * keepAtoms() will only change on the main thread, which we are currently
     * on. If the value of keepAtoms() changes between GC slices, then we'll
     * cancel the incremental GC. See IsIncrementalGCSafe.
     */
    if (isFull && !rt->keepAtoms()) {
        Zone* atomsZone = rt->atomsCompartment()->zone();
        if (atomsZone->isGCScheduled()) {
            JS_ASSERT(!atomsZone->isCollecting());
            atomsZone->setGCState(Zone::Mark);
            any = true;
        }
    }

    /* Check that at least one zone is scheduled for collection. */
    if (!any)
        return false;

    /*
     * At the end of each incremental slice, we call prepareForIncrementalGC,
     * which marks objects in all arenas that we're currently allocating
     * into. This can cause leaks if unreachable objects are in these
     * arenas. This purge call ensures that we only mark arenas that have had
     * allocations after the incremental GC started.
     */
    if (isIncremental) {
        for (GCZonesIter zone(rt); !zone.done(); zone.next())
            zone->allocator.arenas.purge();
    }

    marker.start();
    JS_ASSERT(!marker.callback);
    JS_ASSERT(IS_GC_MARKING_TRACER(&marker));

    /* For non-incremental GC the following sweep discards the jit code. */
    if (isIncremental) {
        for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_MARK_DISCARD_CODE);
            zone->discardJitCode(rt->defaultFreeOp());
        }
    }

    GCMarker* gcmarker = &marker;

    startNumber = number;

    /*
     * We must purge the runtime at the beginning of an incremental GC. The
     * danger if we purge later is that the snapshot invariant of incremental
     * GC will be broken, as follows. If some object is reachable only through
     * some cache (say the dtoaCache) then it will not be part of the snapshot.
     * If we purge after root marking, then the mutator could obtain a pointer
     * to the object and start using it. This object might never be marked, so
     * a GC hazard would exist.
     */
    {
        gcstats::AutoPhase ap(stats, gcstats::PHASE_PURGE);
        PurgeRuntime(rt);
    }

    /*
     * Mark phase.
     */
    gcstats::AutoPhase ap1(stats, gcstats::PHASE_MARK);
    gcstats::AutoPhase ap2(stats, gcstats::PHASE_MARK_ROOTS);

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        /* Unmark everything in the zones being collected. */
        zone->allocator.arenas.unmarkAll();
    }

    for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
        /* Unmark all weak maps in the compartments being collected. */
        WeakMapBase::unmarkCompartment(c);
    }

    if (isFull)
        UnmarkScriptData(rt);

    markRuntime(gcmarker, MarkRuntime);

    /*
     * This code ensures that if a compartment is "dead", then it will be
     * collected in this GC. A compartment is considered dead if its maybeAlive
     * flag is false. The maybeAlive flag is set if:
     *   (1) the compartment has incoming cross-compartment edges, or
     *   (2) an object in the compartment was marked during root marking, either
     *       as a black root or a gray root.
     * If the maybeAlive is false, then we set the scheduledForDestruction flag.
     * At the end of the GC, we look for compartments where
     * scheduledForDestruction is true. These are compartments that were somehow
     * "revived" during the incremental GC. If any are found, we do a special,
     * non-incremental GC of those compartments to try to collect them.
     *
     * Compartments can be revived for a variety of reasons. One reason is bug
     * 811587, where a reflector that was dead can be revived by DOM code that
     * still refers to the underlying DOM node.
     *
     * Read barriers and allocations can also cause revival. This might happen
     * during a function like JS_TransplantObject, which iterates over all
     * compartments, live or dead, and operates on their objects. See bug 803376
     * for details on this problem. To avoid the problem, we try to avoid
     * allocation and read barriers during JS_TransplantObject and the like.
     */

    /* Set the maybeAlive flag based on cross-compartment edges. */
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
            const CrossCompartmentKey& key = e.front().key();
            JSCompartment* dest;
            switch (key.kind) {
              case CrossCompartmentKey::ObjectWrapper:
              case CrossCompartmentKey::DebuggerObject:
              case CrossCompartmentKey::DebuggerSource:
              case CrossCompartmentKey::DebuggerEnvironment:
                dest = static_cast<JSObject*>(key.wrapped)->compartment();
                break;
              case CrossCompartmentKey::DebuggerScript:
                dest = static_cast<JSScript*>(key.wrapped)->compartment();
                break;
              default:
                dest = nullptr;
                break;
            }
            if (dest)
                dest->maybeAlive = true;
        }
    }

    /*
     * For black roots, code in gc/Marking.cpp will already have set maybeAlive
     * during MarkRuntime.
     */

    for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
        if (!c->maybeAlive && !rt->isAtomsCompartment(c))
            c->scheduledForDestruction = true;
    }
    foundBlackGrayEdges = false;

    return true;
}
template <class CompartmentIterT>
void
GCRuntime::markWeakReferences(gcstats::Phase phase)
{
    JS_ASSERT(marker.isDrained());

    gcstats::AutoPhase ap1(stats, phase);

    for (;;) {
        bool markedAny = false;
        for (CompartmentIterT c(rt); !c.done(); c.next()) {
            markedAny |= WatchpointMap::markCompartmentIteratively(c, &marker);
            markedAny |= WeakMapBase::markCompartmentIteratively(c, &marker);
        }
        markedAny |= Debugger::markAllIteratively(&marker);

        if (!markedAny)
            break;

        SliceBudget budget;
        marker.drainMarkStack(budget);
    }
    JS_ASSERT(marker.isDrained());
}
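/*
 * The loop above is a fixpoint computation: marking a weakmap value can make
 * another weakmap's key reachable, so we keep re-scanning until a pass marks
 * nothing new, draining the mark stack between passes.
 */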
void
GCRuntime::markWeakReferencesInCurrentGroup(gcstats::Phase phase)
{
    markWeakReferences<GCCompartmentGroupIter>(phase);
}
template <class ZoneIterT, class CompartmentIterT>
void
GCRuntime::markGrayReferences(gcstats::Phase phase)
{
    gcstats::AutoPhase ap(stats, phase);
    if (marker.hasBufferedGrayRoots()) {
        for (ZoneIterT zone(rt); !zone.done(); zone.next())
            marker.markBufferedGrayRoots(zone);
    } else {
        JS_ASSERT(!isIncremental);
        if (JSTraceDataOp op = grayRootTracer.op)
            (*op)(&marker, grayRootTracer.data);
    }
    SliceBudget budget;
    marker.drainMarkStack(budget);
}
void
GCRuntime::markGrayReferencesInCurrentGroup(gcstats::Phase phase)
{
    markGrayReferences<GCZoneGroupIter, GCCompartmentGroupIter>(phase);
}

void
GCRuntime::markAllWeakReferences(gcstats::Phase phase)
{
    markWeakReferences<GCCompartmentsIter>(phase);
}

void
GCRuntime::markAllGrayReferences(gcstats::Phase phase)
{
    markGrayReferences<GCZonesIter, GCCompartmentsIter>(phase);
}
class js::gc::MarkingValidator
{
  public:
    explicit MarkingValidator(GCRuntime* gc);
    ~MarkingValidator();
    void nonIncrementalMark();
    void validate();

  private:
    GCRuntime* gc;
    bool initialized;

    typedef HashMap<Chunk*, ChunkBitmap*, GCChunkHasher, SystemAllocPolicy> BitmapMap;
    BitmapMap map;
};
#ifdef JS_GC_MARKING_VALIDATION

js::gc::MarkingValidator::MarkingValidator(GCRuntime* gc)
  : gc(gc),
    initialized(false)
{}

js::gc::MarkingValidator::~MarkingValidator()
{
    if (!map.initialized())
        return;

    for (BitmapMap::Range r(map.all()); !r.empty(); r.popFront())
        js_delete(r.front().value());
}
void
js::gc::MarkingValidator::nonIncrementalMark()
{
    /*
     * Perform a non-incremental mark for all collecting zones and record
     * the results for later comparison.
     *
     * Currently this does not validate gray marking.
     */

    if (!map.init())
        return;

    JSRuntime* runtime = gc->rt;
    GCMarker* gcmarker = &gc->marker;

    /* Save existing mark bits. */
    for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront()) {
        ChunkBitmap* bitmap = &r.front()->bitmap;
        ChunkBitmap* entry = js_new<ChunkBitmap>();
        if (!entry)
            return;

        memcpy((void*)entry->bitmap, (void*)bitmap->bitmap, sizeof(bitmap->bitmap));
        if (!map.putNew(r.front(), entry))
            return;
    }

    /*
     * Temporarily clear the weakmaps' mark flags and the lists of live array
     * buffers for the compartments we are collecting.
     */

    WeakMapSet markedWeakMaps;
    if (!markedWeakMaps.init())
        return;

    ArrayBufferVector arrayBuffers;
    for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
        if (!WeakMapBase::saveCompartmentMarkedWeakMaps(c, markedWeakMaps) ||
            !ArrayBufferObject::saveArrayBufferList(c, arrayBuffers))
        {
            return;
        }
    }

    /*
     * After this point, the function should run to completion, so we shouldn't
     * do anything fallible.
     */
    initialized = true;

    for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
        WeakMapBase::unmarkCompartment(c);
        ArrayBufferObject::resetArrayBufferList(c);
    }

    /* Re-do all the marking, but non-incrementally. */
    js::gc::State state = gc->incrementalState;
    gc->incrementalState = MARK_ROOTS;

    JS_ASSERT(gcmarker->isDrained());

    for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront())
        r.front()->bitmap.clear();

    {
        gcstats::AutoPhase ap1(gc->stats, gcstats::PHASE_MARK);
        gcstats::AutoPhase ap2(gc->stats, gcstats::PHASE_MARK_ROOTS);
        gc->markRuntime(gcmarker, GCRuntime::MarkRuntime, GCRuntime::UseSavedRoots);
    }

    {
        gcstats::AutoPhase ap1(gc->stats, gcstats::PHASE_MARK);
        SliceBudget budget;
        gc->incrementalState = MARK;
        gc->marker.drainMarkStack(budget);
    }

    gc->incrementalState = SWEEP;
    {
        gcstats::AutoPhase ap1(gc->stats, gcstats::PHASE_SWEEP);
        gcstats::AutoPhase ap2(gc->stats, gcstats::PHASE_SWEEP_MARK);
        gc->markAllWeakReferences(gcstats::PHASE_SWEEP_MARK_WEAK);

        /* Update zone state for gray marking. */
        for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
            JS_ASSERT(zone->isGCMarkingBlack());
            zone->setGCState(Zone::MarkGray);
        }
        gc->marker.setMarkColorGray();

        gc->markAllGrayReferences(gcstats::PHASE_SWEEP_MARK_GRAY);
        gc->markAllWeakReferences(gcstats::PHASE_SWEEP_MARK_GRAY_WEAK);

        /* Restore zone state. */
        for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
            JS_ASSERT(zone->isGCMarkingGray());
            zone->setGCState(Zone::Mark);
        }
        JS_ASSERT(gc->marker.isDrained());
        gc->marker.setMarkColorBlack();
    }

    /* Take a copy of the non-incremental mark state and restore the original. */
    for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront()) {
        Chunk* chunk = r.front();
        ChunkBitmap* bitmap = &chunk->bitmap;
        ChunkBitmap* entry = map.lookup(chunk)->value();
        Swap(*entry, *bitmap);
    }

    for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
        WeakMapBase::unmarkCompartment(c);
        ArrayBufferObject::resetArrayBufferList(c);
    }
    WeakMapBase::restoreCompartmentMarkedWeakMaps(markedWeakMaps);
    ArrayBufferObject::restoreArrayBufferLists(arrayBuffers);

    gc->incrementalState = state;
}
void
js::gc::MarkingValidator::validate()
{
    /*
     * Validates the incremental marking for a single compartment by comparing
     * the mark bits to those previously recorded for a non-incremental mark.
     */

    if (!initialized)
        return;

    for (GCChunkSet::Range r(gc->chunkSet.all()); !r.empty(); r.popFront()) {
        Chunk* chunk = r.front();
        BitmapMap::Ptr ptr = map.lookup(chunk);
        if (!ptr)
            continue; /* Allocated after we did the non-incremental mark. */

        ChunkBitmap* bitmap = ptr->value();
        ChunkBitmap* incBitmap = &chunk->bitmap;

        for (size_t i = 0; i < ArenasPerChunk; i++) {
            if (chunk->decommittedArenas.get(i))
                continue;
            Arena* arena = &chunk->arenas[i];
            if (!arena->aheader.allocated())
                continue;
            if (!arena->aheader.zone->isGCSweeping())
                continue;
            if (arena->aheader.allocatedDuringIncremental)
                continue;

            AllocKind kind = arena->aheader.getAllocKind();
            uintptr_t thing = arena->thingsStart(kind);
            uintptr_t end = arena->thingsEnd();
            while (thing < end) {
                Cell* cell = (Cell*)thing;

                /*
                 * If a non-incremental GC wouldn't have collected a cell, then
                 * an incremental GC won't collect it.
                 */
                JS_ASSERT_IF(bitmap->isMarked(cell, BLACK), incBitmap->isMarked(cell, BLACK));

                /*
                 * If the cycle collector isn't allowed to collect an object
                 * after a non-incremental GC has run, then it isn't allowed to
                 * collect it after an incremental GC.
                 */
                JS_ASSERT_IF(!bitmap->isMarked(cell, GRAY), !incBitmap->isMarked(cell, GRAY));

                thing += Arena::thingSize(kind);
            }
        }
    }
}

#endif // JS_GC_MARKING_VALIDATION
void
GCRuntime::computeNonIncrementalMarkingForValidation()
{
#ifdef JS_GC_MARKING_VALIDATION
    JS_ASSERT(!markingValidator);
    if (isIncremental && validate)
        markingValidator = js_new<MarkingValidator>(this);
    if (markingValidator)
        markingValidator->nonIncrementalMark();
#endif
}

void
GCRuntime::validateIncrementalMarking()
{
#ifdef JS_GC_MARKING_VALIDATION
    if (markingValidator)
        markingValidator->validate();
#endif
}

void
GCRuntime::finishMarkingValidation()
{
#ifdef JS_GC_MARKING_VALIDATION
    js_delete(markingValidator);
    markingValidator = nullptr;
#endif
}
static void
AssertNeedsBarrierFlagsConsistent(JSRuntime* rt)
{
#ifdef JS_GC_MARKING_VALIDATION
    bool anyNeedsBarrier = false;
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
        anyNeedsBarrier |= zone->needsIncrementalBarrier();
    JS_ASSERT(rt->needsIncrementalBarrier() == anyNeedsBarrier);
#endif
}
static void
DropStringWrappers(JSRuntime* rt)
{
    /*
     * String "wrappers" are dropped on GC because their presence would require
     * us to sweep the wrappers in all compartments every time we sweep a
     * compartment group.
     */
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
            if (e.front().key().kind == CrossCompartmentKey::StringWrapper)
                e.removeFront();
        }
    }
}
/*
 * Group zones that must be swept at the same time.
 *
 * If compartment A has an edge to an unmarked object in compartment B, then we
 * must not sweep A in a later slice than we sweep B. That's because a write
 * barrier in A could lead to the unmarked object in B becoming
 * marked. However, if we had already swept that object, we would be in trouble.
 *
 * If we consider these dependencies as a graph, then all the compartments in
 * any strongly-connected component of this graph must be swept in the same
 * slice.
 *
 * Tarjan's algorithm is used to calculate the components.
 */
void
JSCompartment::findOutgoingEdges(ComponentFinder<JS::Zone>& finder)
{
    for (js::WrapperMap::Enum e(crossCompartmentWrappers); !e.empty(); e.popFront()) {
        CrossCompartmentKey::Kind kind = e.front().key().kind;
        JS_ASSERT(kind != CrossCompartmentKey::StringWrapper);
        Cell* other = e.front().key().wrapped;
        if (kind == CrossCompartmentKey::ObjectWrapper) {
            /*
             * Add an edge to the wrapped object's compartment if the wrapped
             * object is not marked black, to indicate that the wrapper
             * compartment must not be swept after the wrapped compartment.
             */
            if (!other->isMarked(BLACK) || other->isMarked(GRAY)) {
                JS::Zone* w = other->tenuredZone();
                if (w->isGCMarking())
                    finder.addEdgeTo(w);
            }
        } else {
            JS_ASSERT(kind == CrossCompartmentKey::DebuggerScript ||
                      kind == CrossCompartmentKey::DebuggerSource ||
                      kind == CrossCompartmentKey::DebuggerObject ||
                      kind == CrossCompartmentKey::DebuggerEnvironment);
            /*
             * Add an edge for debugger object wrappers, to ensure (in
             * conjunction with the call to Debugger::findCompartmentEdges
             * below) that debugger and debuggee objects are always swept in
             * the same group.
             */
            JS::Zone* w = other->tenuredZone();
            if (w->isGCMarking())
                finder.addEdgeTo(w);
        }
    }

    Debugger::findCompartmentEdges(zone(), finder);
}
void
Zone::findOutgoingEdges(ComponentFinder<JS::Zone>& finder)
{
    /*
     * Any compartment may have a pointer to an atom in the atoms
     * compartment, and these aren't in the cross compartment map.
     */
    JSRuntime* rt = runtimeFromMainThread();
    if (rt->atomsCompartment()->zone()->isGCMarking())
        finder.addEdgeTo(rt->atomsCompartment()->zone());

    for (CompartmentsInZoneIter comp(this); !comp.done(); comp.next())
        comp->findOutgoingEdges(finder);

    for (ZoneSet::Range r = gcZoneGroupEdges.all(); !r.empty(); r.popFront()) {
        if (r.front()->isGCMarking())
            finder.addEdgeTo(r.front());
    }
    gcZoneGroupEdges.clear();
}
bool
GCRuntime::findZoneEdgesForWeakMaps()
{
    /*
     * Weakmaps which have keys with delegates in a different zone introduce
     * the need for zone edges from the delegate's zone to the weakmap zone.
     *
     * Since the edges point into and not away from the zone the weakmap is in,
     * we must find these edges in advance and store them in a set on the Zone.
     * If we run out of memory, we fall back to sweeping everything in one
     * large group.
     */
    for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
        if (!WeakMapBase::findZoneEdgesForCompartment(comp))
            return false;
    }

    return true;
}
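
/*
 * Worked example (hypothetical zones, for illustration only): a weakmap in
 * zone W has a key K whose delegate D lives in zone Z. Marking D also keeps
 * K's entry alive, so if Z were swept in a later group than W, marking during
 * Z's group could touch entries in W after W's tables had been swept.
 * Recording the edge
 *
 *     delegate zone Z  ->  weakmap zone W
 *
 * in gcZoneGroupEdges forces Z to be swept no later than W (possibly in the
 * same group), which makes that scenario impossible.
 */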
void
GCRuntime::findZoneGroups()
{
    ComponentFinder<Zone> finder(rt->mainThread.nativeStackLimit[StackForSystemCode]);
    if (!isIncremental || !findZoneEdgesForWeakMaps())
        finder.useOneComponent();

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        JS_ASSERT(zone->isGCMarking());
        finder.addNode(zone);
    }
    zoneGroups = finder.getResultsList();
    currentZoneGroup = zoneGroups;
    zoneGroupIndex = 0;

    for (Zone* head = currentZoneGroup; head; head = head->nextGroup()) {
        for (Zone* zone = head; zone; zone = zone->nextNodeInGroup())
            JS_ASSERT(zone->isGCMarking());
    }

    JS_ASSERT_IF(!isIncremental, !currentZoneGroup->nextGroup());
}
static void
ResetGrayList(JSCompartment* comp);

void
GCRuntime::getNextZoneGroup()
{
    currentZoneGroup = currentZoneGroup->nextGroup();
    ++zoneGroupIndex;
    if (!currentZoneGroup) {
        abortSweepAfterCurrentGroup = false;
        return;
    }

    for (Zone* zone = currentZoneGroup; zone; zone = zone->nextNodeInGroup())
        JS_ASSERT(zone->isGCMarking());

    if (!isIncremental)
        ComponentFinder<Zone>::mergeGroups(currentZoneGroup);

    if (abortSweepAfterCurrentGroup) {
        JS_ASSERT(!isIncremental);
        for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
            JS_ASSERT(!zone->gcNextGraphComponent);
            JS_ASSERT(zone->isGCMarking());
            zone->setNeedsIncrementalBarrier(false, Zone::UpdateJit);
            zone->setGCState(Zone::NoGC);
            zone->gcGrayRoots.clearAndFree();
        }
        rt->setNeedsIncrementalBarrier(false);
        AssertNeedsBarrierFlagsConsistent(rt);

        for (GCCompartmentGroupIter comp(rt); !comp.done(); comp.next()) {
            ArrayBufferObject::resetArrayBufferList(comp);
            ResetGrayList(comp);
        }

        abortSweepAfterCurrentGroup = false;
        currentZoneGroup = nullptr;
    }
}
/*
 * Gray marking:
 *
 * At the end of collection, anything reachable from a gray root that has not
 * otherwise been marked black must be marked gray.
 *
 * This means that when marking things gray we must not allow marking to leave
 * the current compartment group, as that could result in things being marked
 * gray when they might subsequently be marked black. To achieve this, when we
 * find a cross compartment pointer we don't mark the referent but add it to a
 * singly-linked list of incoming gray pointers that is stored with each
 * compartment.
 *
 * The list head is stored in JSCompartment::gcIncomingGrayPointers and contains
 * cross compartment wrapper objects. The next pointer is stored in the second
 * extra slot of the cross compartment wrapper.
 *
 * The list is created during gray marking when one of the
 * MarkCrossCompartmentXXX functions is called for a pointer that leaves the
 * current compartment group. This calls DelayCrossCompartmentGrayMarking to
 * push the referring object onto the list.
 *
 * The list is traversed and then unlinked in
 * MarkIncomingCrossCompartmentPointers.
 */
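
/*
 * Shape of the list, as a sketch with hypothetical wrappers w1..w3 whose
 * referents all live in compartment C:
 *
 *     C->gcIncomingGrayPointers --> w3
 *     w3[grayLinkSlot] --> w2
 *     w2[grayLinkSlot] --> w1
 *     w1[grayLinkSlot] --> null          (end of list)
 *
 * An undefined slot value means "not on any list"; null marks the tail.
 * RemoveFromGrayList() below relies on this distinction.
 */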
static bool
IsGrayListObject(JSObject* obj)
{
    JS_ASSERT(obj);
    return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj);
}

/* static */ unsigned
ProxyObject::grayLinkSlot(JSObject* obj)
{
    JS_ASSERT(IsGrayListObject(obj));
    return ProxyObject::EXTRA_SLOT + 1;
}
#ifdef DEBUG
static void
AssertNotOnGrayList(JSObject* obj)
{
    JS_ASSERT_IF(IsGrayListObject(obj),
                 obj->getReservedSlot(ProxyObject::grayLinkSlot(obj)).isUndefined());
}
#endif

static JSObject*
CrossCompartmentPointerReferent(JSObject* obj)
{
    JS_ASSERT(IsGrayListObject(obj));
    return &obj->as<ProxyObject>().private_().toObject();
}
static JSObject*
NextIncomingCrossCompartmentPointer(JSObject* prev, bool unlink)
{
    unsigned slot = ProxyObject::grayLinkSlot(prev);
    JSObject* next = prev->getReservedSlot(slot).toObjectOrNull();
    JS_ASSERT_IF(next, IsGrayListObject(next));

    if (unlink)
        prev->setSlot(slot, UndefinedValue());

    return next;
}
void
js::DelayCrossCompartmentGrayMarking(JSObject* src)
{
    JS_ASSERT(IsGrayListObject(src));

    /* Called from MarkCrossCompartmentXXX functions. */
    unsigned slot = ProxyObject::grayLinkSlot(src);
    JSObject* dest = CrossCompartmentPointerReferent(src);
    JSCompartment* comp = dest->compartment();

    if (src->getReservedSlot(slot).isUndefined()) {
        src->setCrossCompartmentSlot(slot, ObjectOrNullValue(comp->gcIncomingGrayPointers));
        comp->gcIncomingGrayPointers = src;
    } else {
        JS_ASSERT(src->getReservedSlot(slot).isObjectOrNull());
    }

#ifdef DEBUG
    /*
     * Assert that the object is in our list, also walking the list to check
     * its integrity.
     */
    JSObject* obj = comp->gcIncomingGrayPointers;
    bool found = false;
    while (obj) {
        if (obj == src)
            found = true;
        obj = NextIncomingCrossCompartmentPointer(obj, false);
    }
    JS_ASSERT(found);
#endif
}
static void
MarkIncomingCrossCompartmentPointers(JSRuntime* rt, const uint32_t color)
{
    JS_ASSERT(color == BLACK || color == GRAY);

    static const gcstats::Phase statsPhases[] = {
        gcstats::PHASE_SWEEP_MARK_INCOMING_BLACK,
        gcstats::PHASE_SWEEP_MARK_INCOMING_GRAY
    };
    gcstats::AutoPhase ap1(rt->gc.stats, statsPhases[color]);

    bool unlinkList = color == GRAY;

    for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
        JS_ASSERT_IF(color == GRAY, c->zone()->isGCMarkingGray());
        JS_ASSERT_IF(color == BLACK, c->zone()->isGCMarkingBlack());
        JS_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers));

        for (JSObject* src = c->gcIncomingGrayPointers;
             src;
             src = NextIncomingCrossCompartmentPointer(src, unlinkList))
        {
            JSObject* dst = CrossCompartmentPointerReferent(src);
            JS_ASSERT(dst->compartment() == c);

            if (color == GRAY) {
                if (IsObjectMarked(&src) && src->isMarked(GRAY))
                    MarkGCThingUnbarriered(&rt->gc.marker, (void**)&dst,
                                           "cross-compartment gray pointer");
            } else {
                if (IsObjectMarked(&src) && !src->isMarked(GRAY))
                    MarkGCThingUnbarriered(&rt->gc.marker, (void**)&dst,
                                           "cross-compartment black pointer");
            }
        }

        if (unlinkList)
            c->gcIncomingGrayPointers = nullptr;
    }

    SliceBudget budget;
    rt->gc.marker.drainMarkStack(budget);
}
static bool
RemoveFromGrayList(JSObject* wrapper)
{
    if (!IsGrayListObject(wrapper))
        return false;

    unsigned slot = ProxyObject::grayLinkSlot(wrapper);
    if (wrapper->getReservedSlot(slot).isUndefined())
        return false;  /* Not on our list. */

    JSObject* tail = wrapper->getReservedSlot(slot).toObjectOrNull();
    wrapper->setReservedSlot(slot, UndefinedValue());

    JSCompartment* comp = CrossCompartmentPointerReferent(wrapper)->compartment();
    JSObject* obj = comp->gcIncomingGrayPointers;
    if (obj == wrapper) {
        comp->gcIncomingGrayPointers = tail;
        return true;
    }

    while (obj) {
        unsigned slot = ProxyObject::grayLinkSlot(obj);
        JSObject* next = obj->getReservedSlot(slot).toObjectOrNull();
        if (next == wrapper) {
            obj->setCrossCompartmentSlot(slot, ObjectOrNullValue(tail));
            return true;
        }
        obj = next;
    }

    MOZ_CRASH("object not found in gray link list");
}
static void
ResetGrayList(JSCompartment* comp)
{
    JSObject* src = comp->gcIncomingGrayPointers;
    while (src)
        src = NextIncomingCrossCompartmentPointer(src, true);
    comp->gcIncomingGrayPointers = nullptr;
}
void
js::NotifyGCNukeWrapper(JSObject* obj)
{
    /*
     * References to the target of the wrapper are being removed; we no longer
     * have to remember to mark it.
     */
    RemoveFromGrayList(obj);
}
enum {
    JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0,
    JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1
};

unsigned
js::NotifyGCPreSwap(JSObject* a, JSObject* b)
{
    /*
     * Two objects in the same compartment are about to have their contents
     * swapped. If either of them is in our gray pointer list, then we remove
     * them from the lists, returning a bitset indicating what happened.
     */
    return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) |
           (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0);
}

void
js::NotifyGCPostSwap(JSObject* a, JSObject* b, unsigned removedFlags)
{
    /*
     * Two objects in the same compartment have had their contents swapped. If
     * either of them were in our gray pointer list, we re-add them again.
     */
    if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED)
        DelayCrossCompartmentGrayMarking(b);
    if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED)
        DelayCrossCompartmentGrayMarking(a);
}
void
GCRuntime::endMarkingZoneGroup()
{
    gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_MARK);

    /*
     * Mark any incoming black pointers from previously swept compartments
     * whose referents are not marked. This can occur when gray cells become
     * black by the action of UnmarkGray.
     */
    MarkIncomingCrossCompartmentPointers(rt, BLACK);

    markWeakReferencesInCurrentGroup(gcstats::PHASE_SWEEP_MARK_WEAK);

    /*
     * Change state of current group to MarkGray to restrict marking to this
     * group. Note that there may be pointers to the atoms compartment, and
     * these will be marked through, as they are not marked with
     * MarkCrossCompartmentXXX.
     */
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        JS_ASSERT(zone->isGCMarkingBlack());
        zone->setGCState(Zone::MarkGray);
    }
    marker.setMarkColorGray();

    /* Mark incoming gray pointers from previously swept compartments. */
    MarkIncomingCrossCompartmentPointers(rt, GRAY);

    /* Mark gray roots and mark transitively inside the current compartment group. */
    markGrayReferencesInCurrentGroup(gcstats::PHASE_SWEEP_MARK_GRAY);
    markWeakReferencesInCurrentGroup(gcstats::PHASE_SWEEP_MARK_GRAY_WEAK);

    /* Restore marking state. */
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        JS_ASSERT(zone->isGCMarkingGray());
        zone->setGCState(Zone::Mark);
    }
    MOZ_ASSERT(marker.isDrained());
    marker.setMarkColorBlack();
}
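
/*
 * Summary of the per-group marking order implemented above:
 *
 *   1. mark incoming BLACK cross-compartment pointers
 *   2. mark weak references (black)
 *   3. switch the group's zones and the marker to gray
 *   4. mark incoming GRAY cross-compartment pointers
 *   5. mark gray roots and gray weak references
 *   6. restore Zone::Mark state and the black mark color
 */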
void
GCRuntime::beginSweepingZoneGroup()
{
    /*
     * Begin sweeping the group of zones in gcCurrentZoneGroup,
     * performing actions that must be done before yielding to caller.
     */

    bool sweepingAtoms = false;
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        /* Set the GC state to sweeping. */
        JS_ASSERT(zone->isGCMarking());
        zone->setGCState(Zone::Sweep);

        /* Purge the ArenaLists before sweeping. */
        zone->allocator.arenas.purge();

        if (rt->isAtomsZone(zone))
            sweepingAtoms = true;

        if (rt->sweepZoneCallback)
            rt->sweepZoneCallback(zone);

        zone->gcLastZoneGroupIndex = zoneGroupIndex;
    }

    validateIncrementalMarking();

    FreeOp fop(rt);

    {
        gcstats::AutoPhase ap(stats, gcstats::PHASE_FINALIZE_START);
        for (Callback<JSFinalizeCallback>* p = rt->gc.finalizeCallbacks.begin();
             p < rt->gc.finalizeCallbacks.end(); p++)
        {
            p->op(&fop, JSFINALIZE_GROUP_START, !isFull /* unused */, p->data);
        }
    }

    if (sweepingAtoms) {
        {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_ATOMS);
            rt->sweepAtoms();
        }
        {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_SYMBOL_REGISTRY);
            rt->symbolRegistry().sweep();
        }
    }

    /* Prune out dead views from ArrayBuffer's view lists. */
    for (GCCompartmentGroupIter c(rt); !c.done(); c.next())
        ArrayBufferObject::sweep(c);

    /* Collect watch points associated with unreachable objects. */
    WatchpointMap::sweepAll(rt);

    /* Detach unreachable debuggers and global objects from each other. */
    Debugger::sweepAll(&fop);

    {
        gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_COMPARTMENTS);

        for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_DISCARD_CODE);
            zone->discardJitCode(&fop);
        }

        for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
            gcstats::AutoSCC scc(stats, zoneGroupIndex);
            gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_TABLES);

            c->sweep(&fop, releaseObservedTypes && !c->zone()->isPreservingCode());
        }

        for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
            gcstats::AutoSCC scc(stats, zoneGroupIndex);

            // If there is an OOM while sweeping types, the type information
            // will be deoptimized so that it is still correct (i.e.
            // overapproximates the possible types in the zone), but the
            // constraints might not have been triggered on the deoptimization
            // or even copied over completely. In this case, destroy all JIT
            // code and new script information in the zone, the only things
            // whose correctness depends on the type constraints.
            bool oom = false;
            zone->sweep(&fop, releaseObservedTypes && !zone->isPreservingCode(), &oom);

            if (oom) {
                zone->setPreservingCode(false);
                zone->discardJitCode(&fop);
                zone->types.clearAllNewScriptsOnOOM();
            }
        }
    }

    /*
     * Queue all GC things in all zones for sweeping, either in the
     * foreground or on the background thread.
     *
     * Note that order is important here for the background case.
     *
     * Objects are finalized immediately but this may change in the future.
     */
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        gcstats::AutoSCC scc(stats, zoneGroupIndex);
        zone->allocator.arenas.queueObjectsForSweep(&fop);
    }
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        gcstats::AutoSCC scc(stats, zoneGroupIndex);
        zone->allocator.arenas.queueStringsAndSymbolsForSweep(&fop);
    }
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        gcstats::AutoSCC scc(stats, zoneGroupIndex);
        zone->allocator.arenas.queueScriptsForSweep(&fop);
    }
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        gcstats::AutoSCC scc(stats, zoneGroupIndex);
        zone->allocator.arenas.queueJitCodeForSweep(&fop);
    }
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        gcstats::AutoSCC scc(stats, zoneGroupIndex);
        zone->allocator.arenas.queueShapesForSweep(&fop);
        zone->allocator.arenas.gcShapeArenasToSweep =
            zone->allocator.arenas.arenaListsToSweep[FINALIZE_SHAPE];
    }

    finalizePhase = 0;
    sweepZone = currentZoneGroup;
    sweepKindIndex = 0;

    {
        gcstats::AutoPhase ap(stats, gcstats::PHASE_FINALIZE_END);
        for (Callback<JSFinalizeCallback>* p = rt->gc.finalizeCallbacks.begin();
             p < rt->gc.finalizeCallbacks.end(); p++)
        {
            p->op(&fop, JSFINALIZE_GROUP_END, !isFull /* unused */, p->data);
        }
    }
}
void
GCRuntime::endSweepingZoneGroup()
{
    /* Update the GC state for zones we have swept and unlink the list. */
    for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
        JS_ASSERT(zone->isGCSweeping());
        zone->setGCState(Zone::Finished);
    }

    /* Reset the list of arenas marked as being allocated during sweep phase. */
    while (ArenaHeader* arena = arenasAllocatedDuringSweep) {
        arenasAllocatedDuringSweep = arena->getNextAllocDuringSweep();
        arena->unsetAllocDuringSweep();
    }
}
void
GCRuntime::beginSweepPhase(bool destroyingRuntime)
{
    /*
     * Sweep phase.
     *
     * Finalize as we sweep, outside of lock but with rt->isHeapBusy()
     * true so that any attempt to allocate a GC-thing from a finalizer will
     * fail, rather than nest badly and leave the unmarked newborn to be swept.
     */

    AutoSetThreadIsSweeping threadIsSweeping;

    JS_ASSERT(!abortSweepAfterCurrentGroup);

    computeNonIncrementalMarkingForValidation();

    gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP);

    sweepOnBackgroundThread =
        !destroyingRuntime && !TraceEnabled() && CanUseExtraThreads() && !shouldCompact();

    releaseObservedTypes = shouldReleaseObservedTypes();

#ifdef DEBUG
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        JS_ASSERT(!c->gcIncomingGrayPointers);
        for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
            if (e.front().key().kind != CrossCompartmentKey::StringWrapper)
                AssertNotOnGrayList(&e.front().value().get().toObject());
        }
    }
#endif

    DropStringWrappers(rt);
    findZoneGroups();
    endMarkingZoneGroup();
    beginSweepingZoneGroup();
}
bool
ArenaLists::foregroundFinalize(FreeOp* fop, AllocKind thingKind, SliceBudget& sliceBudget,
                               SortedArenaList& sweepList)
{
    if (!arenaListsToSweep[thingKind] && incrementalSweptArenas.isEmpty())
        return true;

    if (!FinalizeArenas(fop, &arenaListsToSweep[thingKind], sweepList, thingKind, sliceBudget)) {
        incrementalSweptArenaKind = thingKind;
        incrementalSweptArenas = sweepList.toArenaList();
        return false;
    }

    // Clear any previous incremental sweep state we may have saved.
    incrementalSweptArenas.clear();

    // Join |arenaLists[thingKind]| and |sweepList| into a single list.
    ArenaList finalized = sweepList.toArenaList();
    arenaLists[thingKind] = finalized.insertListWithCursorAtEnd(arenaLists[thingKind]);

    return true;
}
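
// Illustrative driver for the method above (a sketch only; the budget and
// alloc-kind values are hypothetical): incremental foreground finalization is
// resumable because a partially-swept list is parked in
// |incrementalSweptArenas| when the budget expires.
//
//     SliceBudget budget(SliceBudget::TimeBudget(10));   // ~10ms slice
//     SortedArenaList sweepList;
//     if (!lists->foregroundFinalize(fop, FINALIZE_OBJECT0, budget, sweepList)) {
//         // Over budget: yield. A later slice re-enters foregroundFinalize,
//         // which resumes from arenaListsToSweep/incrementalSweptArenas.
//     }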
bool
GCRuntime::drainMarkStack(SliceBudget& sliceBudget, gcstats::Phase phase)
{
    /* Run a marking slice and return whether the stack is now empty. */
    gcstats::AutoPhase ap(stats, phase);
    return marker.drainMarkStack(sliceBudget);
}
bool
GCRuntime::sweepPhase(SliceBudget& sliceBudget)
{
    AutoSetThreadIsSweeping threadIsSweeping;

    gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP);
    FreeOp fop(rt);

    bool finished = drainMarkStack(sliceBudget, gcstats::PHASE_SWEEP_MARK);
    if (!finished)
        return false;

    for (;;) {
        /* Finalize foreground finalized things. */
        for (; finalizePhase < FinalizePhaseCount ; ++finalizePhase) {
            gcstats::AutoPhase ap(stats, FinalizePhaseStatsPhase[finalizePhase]);

            for (; sweepZone; sweepZone = sweepZone->nextNodeInGroup()) {
                Zone* zone = sweepZone;

                while (sweepKindIndex < FinalizePhaseLength[finalizePhase]) {
                    AllocKind kind = FinalizePhases[finalizePhase][sweepKindIndex];

                    /* Set the number of things per arena for this AllocKind. */
                    size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(kind));
                    incrementalSweepList.setThingsPerArena(thingsPerArena);

                    if (!zone->allocator.arenas.foregroundFinalize(&fop, kind, sliceBudget,
                                                                   incrementalSweepList))
                        return false;  /* Yield to the mutator. */

                    /* Reset the slots of the sweep list that we used. */
                    incrementalSweepList.reset(thingsPerArena);

                    ++sweepKindIndex;
                }
                sweepKindIndex = 0;
            }
            sweepZone = currentZoneGroup;
        }

        /* Remove dead shapes from the shape tree, but don't finalize them yet. */
        {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP_SHAPE);

            for (; sweepZone; sweepZone = sweepZone->nextNodeInGroup()) {
                Zone* zone = sweepZone;
                while (ArenaHeader* arena = zone->allocator.arenas.gcShapeArenasToSweep) {
                    for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
                        Shape* shape = i.get<Shape>();
                        if (!shape->isMarked())
                            shape->sweep();
                    }

                    zone->allocator.arenas.gcShapeArenasToSweep = arena->next;
                    sliceBudget.step(Arena::thingsPerArena(Arena::thingSize(FINALIZE_SHAPE)));
                    if (sliceBudget.isOverBudget())
                        return false;  /* Yield to the mutator. */
                }
            }
        }

        endSweepingZoneGroup();
        getNextZoneGroup();
        if (!currentZoneGroup)
            return true;  /* We're finished. */
        endMarkingZoneGroup();
        beginSweepingZoneGroup();
    }
}
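
/*
 * Resumption model for the loops above: the triple (finalizePhase, sweepZone,
 * sweepKindIndex) is collector state rather than local state, so returning
 * false ("yield to the mutator") and re-entering sweepPhase in a later slice
 * continues exactly where the previous slice stopped. For example:
 *
 *     slice N:   finalizePhase=1, sweepZone=Z2, sweepKindIndex=3 -> over budget
 *     slice N+1: re-enter with the same triple; no work is rescanned
 */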
void
GCRuntime::endSweepPhase(bool destroyingRuntime)
{
    AutoSetThreadIsSweeping threadIsSweeping;

    gcstats::AutoPhase ap(stats, gcstats::PHASE_SWEEP);
    FreeOp fop(rt);

    JS_ASSERT_IF(destroyingRuntime, !sweepOnBackgroundThread);

    /*
     * Recalculate whether GC was full or not as this may have changed due to
     * newly created zones. Can only change from full to not full.
     */
    if (isFull) {
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
            if (!zone->isCollecting()) {
                isFull = false;
                break;
            }
        }
    }

    /*
     * If we found any black->gray edges during marking, we completely clear the
     * mark bits of all uncollected zones, or if a reset has occurred, zones that
     * will no longer be collected. This is safe, although it may
     * prevent the cycle collector from collecting some dead objects.
     */
    if (foundBlackGrayEdges) {
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
            if (!zone->isCollecting())
                zone->allocator.arenas.unmarkAll();
        }
    }

    {
        gcstats::AutoPhase ap(stats, gcstats::PHASE_DESTROY);

        /*
         * Sweep script filenames after sweeping functions in the generic loop
         * above. In this way when a scripted function's finalizer destroys the
         * script and calls rt->destroyScriptHook, the hook can still access the
         * script's filename. See bug 323267.
         */
        if (isFull)
            SweepScriptData(rt);

        /* Clear out any small pools that we're hanging on to. */
        if (jit::ExecutableAllocator* execAlloc = rt->maybeExecAlloc())
            execAlloc->purge();

        if (rt->jitRuntime() && rt->jitRuntime()->hasIonAlloc()) {
            JSRuntime::AutoLockForInterrupt lock(rt);
            rt->jitRuntime()->ionAlloc(rt)->purge();
        }

        /*
         * This removes compartments from rt->compartment, so we do it last to
         * make sure we don't miss sweeping any compartments.
         */
        if (!destroyingRuntime)
            sweepZones(&fop, destroyingRuntime);
    }

    if (!sweepOnBackgroundThread) {
        /*
         * Destroy arenas after we finished the sweeping so finalizers can
         * safely use IsAboutToBeFinalized(). This is done on the
         * GCHelperState if possible. We acquire the lock only because
         * Expire needs to unlock it for other callers.
         */
        AutoLockGC lock(rt);
        expireChunksAndArenas(invocationKind == GC_SHRINK);
    }

    {
        gcstats::AutoPhase ap(stats, gcstats::PHASE_FINALIZE_END);

        for (Callback<JSFinalizeCallback>* p = rt->gc.finalizeCallbacks.begin();
             p < rt->gc.finalizeCallbacks.end(); p++)
        {
            p->op(&fop, JSFINALIZE_COLLECTION_END, !isFull, p->data);
        }

        /* If we finished a full GC, then the gray bits are correct. */
        if (isFull)
            grayBitsValid = true;
    }

    /* Set up list of zones for sweeping of background things. */
    JS_ASSERT(!sweepingZones);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        zone->gcNextGraphNode = sweepingZones;
        sweepingZones = zone;
    }

    /* If not sweeping on background thread then we must do it here. */
    if (!sweepOnBackgroundThread) {
        gcstats::AutoPhase ap(stats, gcstats::PHASE_DESTROY);

        sweepBackgroundThings(false);

        rt->freeLifoAlloc.freeAll();

        /* Ensure the compartments get swept if it's the last GC. */
        if (destroyingRuntime)
            sweepZones(&fop, destroyingRuntime);
    }

    finishMarkingValidation();

#ifdef DEBUG
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        for (unsigned i = 0 ; i < FINALIZE_LIMIT ; ++i) {
            JS_ASSERT_IF(!IsBackgroundFinalized(AllocKind(i)) ||
                         !sweepOnBackgroundThread,
                         !zone->allocator.arenas.arenaListsToSweep[i]);
        }
    }

    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        JS_ASSERT(!c->gcIncomingGrayPointers);
        JS_ASSERT(c->gcLiveArrayBuffers.empty());

        for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
            if (e.front().key().kind != CrossCompartmentKey::StringWrapper)
                AssertNotOnGrayList(&e.front().value().unbarrieredGet().toObject());
        }
    }
#endif
}
#ifdef JSGC_COMPACTING
void
GCRuntime::compactPhase()
{
    JS_ASSERT(rt->gc.nursery.isEmpty());
    JS_ASSERT(!sweepOnBackgroundThread);

    gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT);

    ArenaHeader* relocatedList = relocateArenas();
    updatePointersToRelocatedCells();
    releaseRelocatedArenas(relocatedList);

#ifdef DEBUG
    CheckHashTablesAfterMovingGC(rt);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        if (!rt->isAtomsZone(zone) && !zone->isPreservingCode())
            zone->allocator.arenas.checkEmptyFreeLists();
    }
#endif
}
#endif // JSGC_COMPACTING
void
GCRuntime::finishCollection()
{
    JS_ASSERT(marker.isDrained());
    marker.stop();

    uint64_t currentTime = PRMJ_Now();
    schedulingState.updateHighFrequencyMode(lastGCTime, currentTime, tunables);

    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        zone->threshold.updateAfterGC(zone->usage.gcBytes(), invocationKind, tunables,
                                      schedulingState);
        if (zone->isCollecting()) {
            JS_ASSERT(zone->isGCFinished() || zone->isGCCompacting());
            zone->setGCState(Zone::NoGC);
            zone->active = false;
        }

        JS_ASSERT(!zone->isCollecting());
        JS_ASSERT(!zone->wasGCStarted());
    }

    lastGCTime = currentTime;
}
/* Start a new heap session. */
AutoTraceSession::AutoTraceSession(JSRuntime* rt, js::HeapState heapState)
  : lock(rt),
    runtime(rt),
    prevState(rt->gc.heapState)
{
    JS_ASSERT(rt->gc.isAllocAllowed());
    JS_ASSERT(rt->gc.heapState == Idle);
    JS_ASSERT(heapState != Idle);
#ifdef JSGC_GENERATIONAL
    JS_ASSERT_IF(heapState == MajorCollecting, rt->gc.nursery.isEmpty());
#endif

    // Threads with an exclusive context can hit refillFreeList while holding
    // the exclusive access lock. To avoid deadlocking when we try to acquire
    // this lock during GC and the other thread is waiting, make sure we hold
    // the exclusive access lock during GC sessions.
    JS_ASSERT(rt->currentThreadHasExclusiveAccess());

    if (rt->exclusiveThreadsPresent()) {
        // Lock the helper thread state when changing the heap state in the
        // presence of exclusive threads, to avoid racing with refillFreeList.
        AutoLockHelperThreadState lock;
        rt->gc.heapState = heapState;
    } else {
        rt->gc.heapState = heapState;
    }
}

AutoTraceSession::~AutoTraceSession()
{
    JS_ASSERT(runtime->isHeapBusy());

    if (runtime->exclusiveThreadsPresent()) {
        AutoLockHelperThreadState lock;
        runtime->gc.heapState = prevState;

        // Notify any helper threads waiting for the trace session to end.
        HelperThreadState().notifyAll(GlobalHelperThreadState::PRODUCER);
    } else {
        runtime->gc.heapState = prevState;
    }
}
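
/*
 * Usage pattern (sketch): collector entry points bracket their work in a
 * session so that rt->gc.heapState transitions away from Idle and back, and
 * helper threads blocked in refillFreeList can observe the transition:
 *
 *     {
 *         AutoTraceSession session(rt, MajorCollecting);
 *         ... trace or collect; the heap is "busy" for the duration ...
 *     }   // ~AutoTraceSession restores the previous state and notifies
 */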
AutoCopyFreeListToArenas::AutoCopyFreeListToArenas(JSRuntime* rt, ZoneSelector selector)
  : runtime(rt),
    selector(selector)
{
    for (ZonesIter zone(rt, selector); !zone.done(); zone.next())
        zone->allocator.arenas.copyFreeListsToArenas();
}

AutoCopyFreeListToArenas::~AutoCopyFreeListToArenas()
{
    for (ZonesIter zone(runtime, selector); !zone.done(); zone.next())
        zone->allocator.arenas.clearFreeListsInArenas();
}
class AutoCopyFreeListToArenasForGC
{
    JSRuntime* runtime;

  public:
    explicit AutoCopyFreeListToArenasForGC(JSRuntime* rt) : runtime(rt) {
        JS_ASSERT(rt->currentThreadHasExclusiveAccess());
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
            zone->allocator.arenas.copyFreeListsToArenas();
    }
    ~AutoCopyFreeListToArenasForGC() {
        for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next())
            zone->allocator.arenas.clearFreeListsInArenas();
    }
};
void
GCRuntime::resetIncrementalGC(const char* reason)
{
    switch (incrementalState) {
      case NO_INCREMENTAL:
        return;

      case MARK: {
        /* Cancel any ongoing marking. */
        AutoCopyFreeListToArenasForGC copy(rt);

        marker.reset();
        marker.stop();

        for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
            ArrayBufferObject::resetArrayBufferList(c);
            ResetGrayList(c);
        }

        for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
            JS_ASSERT(zone->isGCMarking());
            zone->setNeedsIncrementalBarrier(false, Zone::UpdateJit);
            zone->setGCState(Zone::NoGC);
        }
        rt->setNeedsIncrementalBarrier(false);
        AssertNeedsBarrierFlagsConsistent(rt);

        incrementalState = NO_INCREMENTAL;

        JS_ASSERT(!marker.shouldCheckCompartments());

        break;
      }

      case SWEEP:
        marker.reset();

        for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next())
            c->scheduledForDestruction = false;

        /* Finish sweeping the current zone group, then abort. */
        abortSweepAfterCurrentGroup = true;
        incrementalCollectSlice(SliceBudget::Unlimited, JS::gcreason::RESET);

        {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_WAIT_BACKGROUND_THREAD);
            rt->gc.waitBackgroundSweepOrAllocEnd();
        }
        break;

      default:
        MOZ_CRASH("Invalid incremental GC state");
    }

    stats.reset(reason);

#ifdef DEBUG
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next())
        JS_ASSERT(c->gcLiveArrayBuffers.empty());

    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        JS_ASSERT(!zone->needsIncrementalBarrier());
        for (unsigned i = 0; i < FINALIZE_LIMIT; ++i)
            JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
    }
#endif
}
namespace {

class AutoGCSlice {
  public:
    explicit AutoGCSlice(JSRuntime* rt);
    ~AutoGCSlice();

  private:
    JSRuntime* runtime;
};

} /* anonymous namespace */
AutoGCSlice::AutoGCSlice(JSRuntime* rt)
  : runtime(rt)
{
    /*
     * During incremental GC, the compartment's active flag determines whether
     * there are stack frames active for any of its scripts. Normally this flag
     * is set at the beginning of the mark phase. During incremental GC, we also
     * set it at the start of every phase.
     */
    for (ActivationIterator iter(rt); !iter.done(); ++iter)
        iter->compartment()->zone()->active = true;

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        /*
         * Clear needsIncrementalBarrier early so we don't do any write
         * barriers during GC. We don't need to update the Ion barriers (which
         * is expensive) because Ion code doesn't run during GC. If need be,
         * we'll update the Ion barriers in ~AutoGCSlice.
         */
        if (zone->isGCMarking()) {
            JS_ASSERT(zone->needsIncrementalBarrier());
            zone->setNeedsIncrementalBarrier(false, Zone::DontUpdateJit);
        } else {
            JS_ASSERT(!zone->needsIncrementalBarrier());
        }
    }
    rt->setNeedsIncrementalBarrier(false);
    AssertNeedsBarrierFlagsConsistent(rt);
}

AutoGCSlice::~AutoGCSlice()
{
    /* We can't use GCZonesIter if this is the end of the last slice. */
    bool haveBarriers = false;
    for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) {
        if (zone->isGCMarking()) {
            zone->setNeedsIncrementalBarrier(true, Zone::UpdateJit);
            zone->allocator.arenas.prepareForIncrementalGC(runtime);
            haveBarriers = true;
        } else {
            zone->setNeedsIncrementalBarrier(false, Zone::UpdateJit);
        }
    }
    runtime->setNeedsIncrementalBarrier(haveBarriers);
    AssertNeedsBarrierFlagsConsistent(runtime);
}
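
/*
 * Net effect per slice (sketch): incremental write barriers are off while the
 * collector runs and on while the mutator runs during an incremental mark:
 *
 *     mutator ... [ctor: barriers off] collector slice
 *             ... [dtor: barriers on for zones still marking] ... mutator
 */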
void
GCRuntime::pushZealSelectedObjects()
{
#ifdef JS_GC_ZEAL
    /* Push selected objects onto the mark stack and clear the list. */
    for (JSObject** obj = selectedForMarking.begin(); obj != selectedForMarking.end(); obj++)
        MarkObjectUnbarriered(&marker, obj, "selected obj");
#endif
}
void
GCRuntime::incrementalCollectSlice(int64_t budget,
                                   JS::gcreason::Reason reason)
{
    JS_ASSERT(rt->currentThreadHasExclusiveAccess());

    AutoCopyFreeListToArenasForGC copy(rt);
    AutoGCSlice slice(rt);

    bool destroyingRuntime = (reason == JS::gcreason::DESTROY_RUNTIME);

    gc::State initialState = incrementalState;

    int zeal = 0;
#ifdef JS_GC_ZEAL
    if (reason == JS::gcreason::DEBUG_GC && budget != SliceBudget::Unlimited) {
        /*
         * Do the incremental collection type specified by zeal mode if the
         * collection was triggered by runDebugGC() and incremental GC has not
         * been cancelled by resetIncrementalGC().
         */
        zeal = zealMode;
    }
#endif

    JS_ASSERT_IF(incrementalState != NO_INCREMENTAL, isIncremental);
    isIncremental = budget != SliceBudget::Unlimited;

    if (zeal == ZealIncrementalRootsThenFinish || zeal == ZealIncrementalMarkAllThenFinish) {
        /*
         * Yields between slices occur at predetermined points in these modes;
         * the budget is not used.
         */
        budget = SliceBudget::Unlimited;
    }

    SliceBudget sliceBudget(budget);

    if (incrementalState == NO_INCREMENTAL) {
        incrementalState = MARK_ROOTS;
        lastMarkSlice = false;
    }

    if (incrementalState == MARK)
        AutoGCRooter::traceAllWrappers(&marker);

    switch (incrementalState) {
      case MARK_ROOTS:
        if (!beginMarkPhase(reason)) {
            incrementalState = NO_INCREMENTAL;
            return;
        }

        if (!destroyingRuntime)
            pushZealSelectedObjects();

        incrementalState = MARK;

        if (isIncremental && zeal == ZealIncrementalRootsThenFinish)
            break;

        /* fall through */

      case MARK: {
        /* If we needed delayed marking for gray roots, then collect until done. */
        if (!marker.hasBufferedGrayRoots()) {
            sliceBudget.reset();
            isIncremental = false;
        }

        bool finished = drainMarkStack(sliceBudget, gcstats::PHASE_MARK);
        if (!finished)
            break;

        JS_ASSERT(marker.isDrained());

        if (!lastMarkSlice && isIncremental &&
            ((initialState == MARK && zeal != ZealIncrementalRootsThenFinish) ||
             zeal == ZealIncrementalMarkAllThenFinish))
        {
            /*
             * Yield with the aim of starting the sweep in the next
             * slice. We will need to mark anything new on the stack
             * when we resume, so we stay in MARK state.
             */
            lastMarkSlice = true;
            break;
        }

        incrementalState = SWEEP;

        /*
         * This runs to completion, but we don't continue if the budget is
         * used up.
         */
        beginSweepPhase(destroyingRuntime);
        if (sliceBudget.isOverBudget())
            break;

        /*
         * Always yield here when running in incremental multi-slice zeal
         * mode, so RunDebugGC can reset the slice budget.
         */
        if (isIncremental && zeal == ZealIncrementalMultipleSlices)
            break;

        /* fall through */
      }

      case SWEEP: {
        bool finished = sweepPhase(sliceBudget);
        if (!finished)
            break;

        endSweepPhase(destroyingRuntime);

        if (sweepOnBackgroundThread)
            helperState.startBackgroundSweep(invocationKind == GC_SHRINK);

#ifdef JSGC_COMPACTING
        if (shouldCompact()) {
            incrementalState = COMPACT;
            compactPhase();
        }
#endif

        finishCollection();

        incrementalState = NO_INCREMENTAL;
        break;
      }

      default:
        JS_ASSERT(false);
    }
}
IncrementalSafety
gc::IsIncrementalGCSafe(JSRuntime* rt)
{
    JS_ASSERT(!rt->mainThread.suppressGC);

    if (rt->keepAtoms())
        return IncrementalSafety::Unsafe("keepAtoms set");

    if (!rt->gc.isIncrementalGCAllowed())
        return IncrementalSafety::Unsafe("incremental permanently disabled");

    return IncrementalSafety::Safe();
}
void
GCRuntime::budgetIncrementalGC(int64_t* budget)
{
    IncrementalSafety safe = IsIncrementalGCSafe(rt);
    if (!safe) {
        resetIncrementalGC(safe.reason());
        *budget = SliceBudget::Unlimited;
        stats.nonincremental(safe.reason());
        return;
    }

    if (mode != JSGC_MODE_INCREMENTAL) {
        resetIncrementalGC("GC mode change");
        *budget = SliceBudget::Unlimited;
        stats.nonincremental("GC mode");
        return;
    }

    if (isTooMuchMalloc()) {
        *budget = SliceBudget::Unlimited;
        stats.nonincremental("malloc bytes trigger");
    }

    bool reset = false;
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        if (zone->usage.gcBytes() >= zone->threshold.gcTriggerBytes()) {
            *budget = SliceBudget::Unlimited;
            stats.nonincremental("allocation trigger");
        }

        if (incrementalState != NO_INCREMENTAL &&
            zone->isGCScheduled() != zone->wasGCStarted())
        {
            reset = true;
        }

        if (zone->isTooMuchMalloc()) {
            *budget = SliceBudget::Unlimited;
            stats.nonincremental("malloc bytes trigger");
        }
    }

    if (reset)
        resetIncrementalGC("zone change");
}
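
/*
 * Example outcome (hypothetical numbers): if a zone is already past its
 * allocation trigger when a slice is requested, the slice is promoted to an
 * unlimited budget rather than risk running out of headroom mid-collection:
 *
 *     zone->usage.gcBytes() = 96MB, zone->threshold.gcTriggerBytes() = 64MB
 *     => *budget = SliceBudget::Unlimited;
 *        stats.nonincremental("allocation trigger");
 */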
namespace {

#ifdef JSGC_GENERATIONAL
class AutoDisableStoreBuffer
{
    StoreBuffer& sb;
    bool prior;

  public:
    explicit AutoDisableStoreBuffer(GCRuntime* gc) : sb(gc->storeBuffer) {
        prior = sb.isEnabled();
        sb.disable();
    }
    ~AutoDisableStoreBuffer() {
        if (prior)
            sb.enable();
    }
};
#else
struct AutoDisableStoreBuffer
{
    AutoDisableStoreBuffer(GCRuntime* gc) {}
};
#endif

} /* anonymous namespace */
/*
 * Run one GC "cycle" (either a slice of incremental GC or an entire
 * non-incremental GC). We disable inlining to ensure that the bottom of the
 * stack with possible GC roots recorded in MarkRuntime excludes any pointers we
 * use during the marking implementation.
 *
 * Returns true if we "reset" an existing incremental GC, which would force us
 * to run another cycle.
 */
MOZ_NEVER_INLINE bool
GCRuntime::gcCycle(bool incremental, int64_t budget, JSGCInvocationKind gckind,
                   JS::gcreason::Reason reason)
{
    minorGC(reason);

    /*
     * Marking can trigger many incidental post barriers, some of them for
     * objects which are not going to be live after the GC.
     */
    AutoDisableStoreBuffer adsb(this);

    AutoTraceSession session(rt, MajorCollecting);

    isNeeded = false;
    interFrameGC = true;

    number++;
    if (incrementalState == NO_INCREMENTAL)
        majorGCNumber++;

    // It's ok if threads other than the main thread have suppressGC set, as
    // they are operating on zones which will not be collected from here.
    JS_ASSERT(!rt->mainThread.suppressGC);

    // Assert if this is a GC unsafe region.
    JS::AutoAssertOnGC::VerifyIsSafeToGC(rt);

    {
        /*
         * As we are about to purge caches and clear the mark bits we must wait
         * for any background finalization to finish. We must also wait for the
         * background allocation to finish so we can avoid taking the GC lock
         * when manipulating the chunks during the GC.
         */
        gcstats::AutoPhase ap(stats, gcstats::PHASE_WAIT_BACKGROUND_THREAD);
        waitBackgroundSweepOrAllocEnd();
    }

    State prevState = incrementalState;

    if (!incremental) {
        /* If non-incremental GC was requested, reset incremental GC. */
        resetIncrementalGC("requested");
        stats.nonincremental("requested");
        budget = SliceBudget::Unlimited;
    } else {
        budgetIncrementalGC(&budget);
    }

    /* The GC was reset, so we need a do-over. */
    if (prevState != NO_INCREMENTAL && incrementalState == NO_INCREMENTAL)
        return true;

    TraceMajorGCStart();

    /* Set the invocation kind in the first slice. */
    if (incrementalState == NO_INCREMENTAL)
        invocationKind = gckind;

    incrementalCollectSlice(budget, reason);

#ifndef JS_MORE_DETERMINISTIC
    nextFullGCTime = PRMJ_Now() + GC_IDLE_FULL_SPAN;
#endif

    chunkAllocationSinceLastGC = false;

#ifdef JS_GC_ZEAL
    /* Keeping these around after a GC is dangerous. */
    clearSelectedForMarking();
#endif

    /* Clear gcMallocBytes for all compartments */
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        zone->resetGCMallocBytes();
        zone->unscheduleGC();
    }

    resetMallocBytes();

    TraceMajorGCEnd();

    return false;
}
#ifdef JS_GC_ZEAL
static bool
IsDeterministicGCReason(JS::gcreason::Reason reason)
{
    if (reason > JS::gcreason::DEBUG_GC &&
        reason != JS::gcreason::CC_FORCED && reason != JS::gcreason::SHUTDOWN_CC)
    {
        return false;
    }

    if (reason == JS::gcreason::MAYBEGC)
        return false;

    return true;
}
#endif
static bool
ShouldCleanUpEverything(JS::gcreason::Reason reason, JSGCInvocationKind gckind)
{
    // During shutdown, we must clean everything up, for the sake of leak
    // detection. When a runtime has no contexts, or we're doing a GC before a
    // shutdown CC, those are strong indications that we're shutting down.
    return reason == JS::gcreason::DESTROY_RUNTIME ||
           reason == JS::gcreason::SHUTDOWN_CC ||
           gckind == GC_SHRINK;
}
gcstats::ZoneGCStats
GCRuntime::scanZonesBeforeGC()
{
    gcstats::ZoneGCStats zoneStats;
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        if (mode == JSGC_MODE_GLOBAL)
            zone->scheduleGC();

        /* This is a heuristic to avoid resets. */
        if (incrementalState != NO_INCREMENTAL && zone->needsIncrementalBarrier())
            zone->scheduleGC();

        zoneStats.zoneCount++;
        if (zone->isGCScheduled())
            zoneStats.collectedCount++;
    }

    for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next())
        zoneStats.compartmentCount++;

    return zoneStats;
}
void
GCRuntime::collect(bool incremental, int64_t budget, JSGCInvocationKind gckind,
                   JS::gcreason::Reason reason)
{
    /* GC shouldn't be running in parallel execution mode */
    MOZ_ASSERT(!InParallelSection());

    JS_AbortIfWrongThread(rt);

    /* If we attempt to invoke the GC while we are running in the GC, assert. */
    MOZ_ASSERT(!rt->isHeapBusy());

    /* The engine never locks across anything that could GC. */
    MOZ_ASSERT(!rt->currentThreadHasExclusiveAccess());

    if (rt->mainThread.suppressGC)
        return;

    TraceLogger* logger = TraceLoggerForMainThread(rt);
    AutoTraceLog logGC(logger, TraceLogger::GC);

#ifdef JS_GC_ZEAL
    if (deterministicOnly && !IsDeterministicGCReason(reason))
        return;
#endif

    JS_ASSERT_IF(!incremental || budget != SliceBudget::Unlimited, JSGC_INCREMENTAL);

    AutoStopVerifyingBarriers av(rt, reason == JS::gcreason::SHUTDOWN_CC ||
                                     reason == JS::gcreason::DESTROY_RUNTIME);

    recordNativeStackTop();

    gcstats::AutoGCSlice agc(stats, scanZonesBeforeGC(), reason);

    cleanUpEverything = ShouldCleanUpEverything(reason, gckind);

    bool repeat = false;
    do {
        /*
         * Let the API user decide to defer a GC if it wants to (unless this
         * is the last context). Invoke the callback regardless.
         */
        if (incrementalState == NO_INCREMENTAL) {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_GC_BEGIN);
            if (gcCallback.op)
                gcCallback.op(rt, JSGC_BEGIN, gcCallback.data);
        }

        poked = false;
        bool wasReset = gcCycle(incremental, budget, gckind, reason);

        if (incrementalState == NO_INCREMENTAL) {
            gcstats::AutoPhase ap(stats, gcstats::PHASE_GC_END);
            if (gcCallback.op)
                gcCallback.op(rt, JSGC_END, gcCallback.data);
        }

        /* Need to re-schedule all zones for GC. */
        if (poked && cleanUpEverything)
            JS::PrepareForFullGC(rt);

        /*
         * This code makes an extra effort to collect compartments that we
         * thought were dead at the start of the GC. See the large comment in
         * beginMarkPhase.
         */
        bool repeatForDeadZone = false;
        if (incremental && incrementalState == NO_INCREMENTAL) {
            for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
                if (c->scheduledForDestruction) {
                    incremental = false;
                    repeatForDeadZone = true;
                    reason = JS::gcreason::COMPARTMENT_REVIVED;
                    c->zone()->scheduleGC();
                }
            }
        }

        /*
         * If we reset an existing GC, we need to start a new one. Also, we
         * repeat GCs that happen during shutdown (the gcShouldCleanUpEverything
         * case) until we can be sure that no additional garbage is created
         * (which typically happens if roots are dropped during finalizers).
         */
        repeat = (poked && cleanUpEverything) || wasReset || repeatForDeadZone;
    } while (repeat);

    if (incrementalState == NO_INCREMENTAL)
        EnqueuePendingParseTasksAfterGC(rt);
}
void
GCRuntime::gc(JSGCInvocationKind gckind, JS::gcreason::Reason reason)
{
    collect(false, SliceBudget::Unlimited, gckind, reason);
}

void
GCRuntime::gcSlice(JSGCInvocationKind gckind, JS::gcreason::Reason reason, int64_t millis)
{
    int64_t budget;
    if (millis)
        budget = SliceBudget::TimeBudget(millis);
    else if (schedulingState.inHighFrequencyGCMode() && tunables.isDynamicMarkSliceEnabled())
        budget = sliceBudget * IGC_MARK_SLICE_MULTIPLIER;
    else
        budget = sliceBudget;

    collect(true, budget, gckind, reason);
}

void
GCRuntime::gcFinalSlice(JSGCInvocationKind gckind, JS::gcreason::Reason reason)
{
    collect(true, SliceBudget::Unlimited, gckind, reason);
}
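
/*
 * Budget selection in gcSlice() above, as a sketch: an explicit |millis|
 * always wins; otherwise, in high-frequency GC mode the default slice is
 * stretched by IGC_MARK_SLICE_MULTIPLIER so marking keeps pace with a
 * rapidly allocating mutator:
 *
 *     gcSlice(GC_NORMAL, reason, 40);  // explicit 40ms time budget
 *     gcSlice(GC_NORMAL, reason, 0);   // default sliceBudget, possibly
 *                                      // multiplied in high-frequency mode
 */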
void
GCRuntime::notifyDidPaint()
{
#ifdef JS_GC_ZEAL
    if (zealMode == ZealFrameVerifierPreValue) {
        verifyPreBarriers();
        return;
    }

    if (zealMode == ZealFrameVerifierPostValue) {
        verifyPostBarriers();
        return;
    }

    if (zealMode == ZealFrameGCValue) {
        JS::PrepareForFullGC(rt);
        gcSlice(GC_NORMAL, JS::gcreason::REFRESH_FRAME);
        return;
    }
#endif

    if (JS::IsIncrementalGCInProgress(rt) && !interFrameGC) {
        JS::PrepareForIncrementalGC(rt);
        gcSlice(GC_NORMAL, JS::gcreason::REFRESH_FRAME);
    }

    interFrameGC = false;
}
static bool
ZonesSelected(JSRuntime* rt)
{
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        if (zone->isGCScheduled())
            return true;
    }
    return false;
}

void
GCRuntime::gcDebugSlice(bool limit, int64_t objCount)
{
    int64_t budget = limit ? SliceBudget::WorkBudget(objCount) : SliceBudget::Unlimited;
    if (!ZonesSelected(rt)) {
        if (JS::IsIncrementalGCInProgress(rt))
            JS::PrepareForIncrementalGC(rt);
        else
            JS::PrepareForFullGC(rt);
    }
    collect(true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC);
}

/* Schedule a full GC unless a zone will already be collected. */
void
js::PrepareForDebugGC(JSRuntime* rt)
{
    if (!ZonesSelected(rt))
        JS::PrepareForFullGC(rt);
}
JS_FRIEND_API(void)
JS::ShrinkGCBuffers(JSRuntime* rt)
{
    rt->gc.shrinkBuffers();
}

void
GCRuntime::shrinkBuffers()
{
    AutoLockHelperThreadState helperLock;
    AutoLockGC lock(rt);
    JS_ASSERT(!rt->isHeapBusy());

    if (CanUseExtraThreads())
        helperState.startBackgroundShrink();
    else
        expireChunksAndArenas(true);
}
void
GCRuntime::minorGC(JS::gcreason::Reason reason)
{
#ifdef JSGC_GENERATIONAL
    TraceLogger* logger = TraceLoggerForMainThread(rt);
    AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC);
    nursery.collect(rt, reason, nullptr);
    JS_ASSERT_IF(!rt->mainThread.suppressGC, nursery.isEmpty());
#endif
}

void
GCRuntime::minorGC(JSContext* cx, JS::gcreason::Reason reason)
{
    // Alternate to the runtime-taking form above which allows marking type
    // objects as needing pretenuring.
#ifdef JSGC_GENERATIONAL
    TraceLogger* logger = TraceLoggerForMainThread(rt);
    AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC);
    Nursery::TypeObjectList pretenureTypes;
    nursery.collect(rt, reason, &pretenureTypes);
    for (size_t i = 0; i < pretenureTypes.length(); i++) {
        if (pretenureTypes[i]->canPreTenure())
            pretenureTypes[i]->setShouldPreTenure(cx);
    }
    JS_ASSERT_IF(!rt->mainThread.suppressGC, nursery.isEmpty());
#endif
}
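
/*
 * Pretenuring feedback loop (sketch, with hypothetical types): the nursery
 * reports TypeObjects whose instances mostly survived the minor GC; marking
 * them with setShouldPreTenure() routes future allocations of those types
 * straight to the tenured heap, avoiding repeated nursery copies:
 *
 *     minorGC(cx, reason)  ->  pretenureTypes = { T1, T2 }
 *     T1->canPreTenure()   =>  T1->setShouldPreTenure(cx)
 */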
void
GCRuntime::disableGenerationalGC()
{
#ifdef JSGC_GENERATIONAL
    if (isGenerationalGCEnabled()) {
        minorGC(JS::gcreason::API);
        nursery.disable();
        storeBuffer.disable();
    }
#endif
    ++rt->gc.generationalDisabled;
}

void
GCRuntime::enableGenerationalGC()
{
    JS_ASSERT(generationalDisabled > 0);
    --generationalDisabled;
#ifdef JSGC_GENERATIONAL
    if (generationalDisabled == 0) {
        nursery.enable();
        storeBuffer.enable();
    }
#endif
}
void
GCRuntime::gcIfNeeded(JSContext* cx)
{
#ifdef JSGC_GENERATIONAL
    /*
     * In case of store buffer overflow perform minor GC first so that the
     * correct reason is seen in the logs.
     */
    if (storeBuffer.isAboutToOverflow())
        minorGC(cx, JS::gcreason::FULL_STORE_BUFFER);
#endif

    if (isNeeded)
        gcSlice(GC_NORMAL, rt->gc.triggerReason, 0);
}
AutoFinishGC::AutoFinishGC(JSRuntime* rt)
{
    if (JS::IsIncrementalGCInProgress(rt)) {
        JS::PrepareForIncrementalGC(rt);
        JS::FinishIncrementalGC(rt, JS::gcreason::API);
    }

    rt->gc.waitBackgroundSweepEnd();
}

AutoPrepareForTracing::AutoPrepareForTracing(JSRuntime* rt, ZoneSelector selector)
  : finish(rt),
    session(rt),
    copy(rt, selector)
{
    rt->gc.recordNativeStackTop();
}
JSCompartment*
js::NewCompartment(JSContext* cx, Zone* zone, JSPrincipals* principals,
                   const JS::CompartmentOptions& options)
{
    JSRuntime* rt = cx->runtime();
    JS_AbortIfWrongThread(rt);

    ScopedJSDeletePtr<Zone> zoneHolder;
    if (!zone) {
        zone = cx->new_<Zone>(rt);
        if (!zone)
            return nullptr;

        zoneHolder.reset(zone);

        const JSPrincipals* trusted = rt->trustedPrincipals();
        bool isSystem = principals && principals == trusted;
        if (!zone->init(isSystem))
            return nullptr;
    }

    ScopedJSDeletePtr<JSCompartment> compartment(cx->new_<JSCompartment>(zone, options));
    if (!compartment || !compartment->init(cx))
        return nullptr;

    // Set up the principals.
    JS_SetCompartmentPrincipals(compartment, principals);

    AutoLockGC lock(rt);

    if (!zone->compartments.append(compartment.get())) {
        js_ReportOutOfMemory(cx);
        return nullptr;
    }

    if (zoneHolder && !rt->gc.zones.append(zone)) {
        js_ReportOutOfMemory(cx);
        return nullptr;
    }

    zoneHolder.forget();
    return compartment.forget();
}
void
gc::MergeCompartments(JSCompartment* source, JSCompartment* target)
{
    // The source compartment must be specifically flagged as mergable. This
    // also implies that the compartment is not visible to the debugger.
    JS_ASSERT(source->options_.mergeable());

    JS_ASSERT(source->addonId == target->addonId);

    JSRuntime* rt = source->runtimeFromMainThread();

    AutoPrepareForTracing prepare(rt, SkipAtoms);

    // Cleanup tables and other state in the source compartment that will be
    // meaningless after merging into the target compartment.

    source->clearTables();

    // Fixup compartment pointers in source to refer to target.

    for (ZoneCellIter iter(source->zone(), FINALIZE_SCRIPT); !iter.done(); iter.next()) {
        JSScript* script = iter.get<JSScript>();
        JS_ASSERT(script->compartment() == source);
        script->compartment_ = target;
    }

    for (ZoneCellIter iter(source->zone(), FINALIZE_BASE_SHAPE); !iter.done(); iter.next()) {
        BaseShape* base = iter.get<BaseShape>();
        JS_ASSERT(base->compartment() == source);
        base->compartment_ = target;
    }

    // Fixup zone pointers in source's zone to refer to target's zone.

    for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) {
        for (ArenaIter aiter(source->zone(), AllocKind(thingKind)); !aiter.done(); aiter.next()) {
            ArenaHeader* aheader = aiter.get();
            aheader->zone = target->zone();
        }
    }

    // The source should be the only compartment in its zone.
    for (CompartmentsInZoneIter c(source->zone()); !c.done(); c.next())
        JS_ASSERT(c.get() == source);

    // Merge the allocator in source's zone into target's zone.
    target->zone()->allocator.arenas.adoptArenas(rt, &source->zone()->allocator.arenas);
    target->zone()->usage.adopt(source->zone()->usage);

    // Merge other info in source's zone into target's zone.
    target->zone()->types.typeLifoAlloc.transferFrom(&source->zone()->types.typeLifoAlloc);
}
void
GCRuntime::runDebugGC()
{
#ifdef JS_GC_ZEAL
    int type = zealMode;

    if (rt->mainThread.suppressGC)
        return;

    if (type == js::gc::ZealGenerationalGCValue)
        return minorGC(JS::gcreason::DEBUG_GC);

    PrepareForDebugGC(rt);

    if (type == ZealIncrementalRootsThenFinish ||
        type == ZealIncrementalMarkAllThenFinish ||
        type == ZealIncrementalMultipleSlices)
    {
        int64_t budget;
        js::gc::State initialState = incrementalState;
        if (type == ZealIncrementalMultipleSlices) {
            /*
             * Start with a small slice limit and double it every slice. This
             * ensures that we get multiple slices, and collection runs to
             * completion.
             */
            if (initialState == NO_INCREMENTAL)
                incrementalLimit = zealFrequency / 2;
            else
                incrementalLimit *= 2;
            budget = SliceBudget::WorkBudget(incrementalLimit);
        } else {
            // This triggers incremental GC but is actually ignored by IncrementalMarkSlice.
            budget = SliceBudget::WorkBudget(1);
        }

        collect(true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC);

        /*
         * For multi-slice zeal, reset the slice size when we get to the sweep
         * phase.
         */
        if (type == ZealIncrementalMultipleSlices &&
            initialState == MARK && incrementalState == SWEEP)
        {
            incrementalLimit = zealFrequency / 2;
        }
    } else if (type == ZealCompactValue) {
        collect(false, SliceBudget::Unlimited, GC_SHRINK, JS::gcreason::DEBUG_GC);
    } else {
        collect(false, SliceBudget::Unlimited, GC_NORMAL, JS::gcreason::DEBUG_GC);
    }
#endif
}
void
GCRuntime::setValidate(bool enabled)
{
    JS_ASSERT(!isHeapMajorCollecting());
    validate = enabled;
}

void
GCRuntime::setFullCompartmentChecks(bool enabled)
{
    JS_ASSERT(!isHeapMajorCollecting());
    fullCompartmentChecks = enabled;
}

#ifdef JS_GC_ZEAL
bool
GCRuntime::selectForMarking(JSObject* object)
{
    JS_ASSERT(!isHeapMajorCollecting());
    return selectedForMarking.append(object);
}

void
GCRuntime::clearSelectedForMarking()
{
    selectedForMarking.clearAndFree();
}

void
GCRuntime::setDeterministic(bool enabled)
{
    JS_ASSERT(!isHeapMajorCollecting());
    deterministicOnly = enabled;
}
#endif
#ifdef DEBUG

/* Should only be called manually under gdb */
void PreventGCDuringInteractiveDebug()
{
    TlsPerThreadData.get()->suppressGC++;
}

#endif
void
js::ReleaseAllJITCode(FreeOp* fop)
{
#ifdef JSGC_GENERATIONAL
    /*
     * Scripts can entrain nursery things, inserting references to the script
     * into the store buffer. Clear the store buffer before discarding scripts.
     */
    fop->runtime()->gc.evictNursery();
#endif

    for (ZonesIter zone(fop->runtime(), SkipAtoms); !zone.done(); zone.next()) {
        if (!zone->jitZone())
            continue;

#ifdef DEBUG
        /* Assert no baseline scripts are marked as active. */
        for (ZoneCellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
            JSScript* script = i.get<JSScript>();
            JS_ASSERT_IF(script->hasBaselineScript(), !script->baselineScript()->active());
        }
#endif

        /* Mark baseline scripts on the stack as active. */
        jit::MarkActiveBaselineScripts(zone);

        jit::InvalidateAll(fop, zone);

        for (ZoneCellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
            JSScript* script = i.get<JSScript>();
            jit::FinishInvalidation<SequentialExecution>(fop, script);
            jit::FinishInvalidation<ParallelExecution>(fop, script);

            /*
             * Discard baseline script if it's not marked as active. Note that
             * this also resets the active flag.
             */
            jit::FinishDiscardBaselineScript(fop, script);
        }

        zone->jitZone()->optimizedStubSpace()->free();
    }
}
void
js::PurgeJITCaches(Zone* zone)
{
    for (ZoneCellIterUnderGC i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
        JSScript* script = i.get<JSScript>();

        /* Discard Ion caches. */
        jit::PurgeCaches(script);
    }
}
void
ArenaLists::normalizeBackgroundFinalizeState(AllocKind thingKind)
{
    ArenaLists::BackgroundFinalizeState* bfs = &backgroundFinalizeState[thingKind];
    switch (*bfs) {
      case BFS_DONE:
        break;
      case BFS_JUST_FINISHED:
        // No allocations between end of last sweep and now.
        // Transferring over arenas is a kind of allocation.
        *bfs = BFS_DONE;
        break;
      default:
        JS_ASSERT(!"Background finalization in progress, but it should not be.");
        break;
    }
}
void
ArenaLists::adoptArenas(JSRuntime* rt, ArenaLists* fromArenaLists)
{
    // The other parallel threads have all completed now, and GC
    // should be inactive, but still take the lock as a kind of read
    // fence.
    AutoLockGC lock(rt);

    fromArenaLists->purge();

    for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) {
        // When we enter a parallel section, we join the background
        // thread, and we do not run GC while in the parallel section,
        // so no finalizer should be active!
        normalizeBackgroundFinalizeState(AllocKind(thingKind));
        fromArenaLists->normalizeBackgroundFinalizeState(AllocKind(thingKind));

        ArenaList* fromList = &fromArenaLists->arenaLists[thingKind];
        ArenaList* toList = &arenaLists[thingKind];
        ArenaHeader* next;
        for (ArenaHeader* fromHeader = fromList->head(); fromHeader; fromHeader = next) {
            // Copy fromHeader->next before releasing/reinserting.
            next = fromHeader->next;

            // During parallel execution, we sometimes keep empty arenas
            // on the lists rather than sending them back to the chunk.
            // Therefore, if fromHeader is empty, send it back to the
            // chunk now. Otherwise, attach to |toList|.
            if (fromHeader->isEmpty())
                fromHeader->chunk()->releaseArena(fromHeader);
            else
                toList->insertAtCursor(fromHeader);
        }
        fromList->clear();
    }
}
bool
ArenaLists::containsArena(JSRuntime* rt, ArenaHeader* needle)
{
    AutoLockGC lock(rt);
    size_t allocKind = needle->getAllocKind();
    for (ArenaHeader* aheader = arenaLists[allocKind].head(); aheader; aheader = aheader->next) {
        if (aheader == needle)
            return true;
    }
    return false;
}
AutoSuppressGC::AutoSuppressGC(ExclusiveContext* cx)
  : suppressGC_(cx->perThreadData->suppressGC)
{
    suppressGC_++;
}

AutoSuppressGC::AutoSuppressGC(JSCompartment* comp)
  : suppressGC_(comp->runtimeFromMainThread()->mainThread.suppressGC)
{
    suppressGC_++;
}

AutoSuppressGC::AutoSuppressGC(JSRuntime* rt)
  : suppressGC_(rt->mainThread.suppressGC)
{
    suppressGC_++;
}

bool
js::UninlinedIsInsideNursery(const gc::Cell* cell)
{
    return IsInsideNursery(cell);
}
AutoDisableProxyCheck::AutoDisableProxyCheck(JSRuntime* rt
                                             MOZ_GUARD_OBJECT_NOTIFIER_PARAM_IN_IMPL)
  : gc(rt->gc)
{
    MOZ_GUARD_OBJECT_NOTIFIER_INIT;
    gc.disableStrictProxyChecking();
}

AutoDisableProxyCheck::~AutoDisableProxyCheck()
{
    gc.enableStrictProxyChecking();
}
JS_FRIEND_API(void)
JS::AssertGCThingMustBeTenured(JSObject* obj)
{
    JS_ASSERT((!IsNurseryAllocable(obj->tenuredGetAllocKind()) || obj->getClass()->finalize) &&
              obj->isTenured());
}

JS_FRIEND_API(void)
js::gc::AssertGCThingHasType(js::gc::Cell* cell, JSGCTraceKind kind)
{
    JS_ASSERT(cell);
    if (IsInsideNursery(cell))
        JS_ASSERT(kind == JSTRACE_OBJECT);
    else
        JS_ASSERT(MapAllocToTraceKind(cell->tenuredGetAllocKind()) == kind);
}
JS_FRIEND_API(size_t)
JS::GetGCNumber()
{
    JSRuntime* rt = js::TlsPerThreadData.get()->runtimeFromMainThread();
    if (!rt)
        return 0;
    return rt->gc.gcNumber();
}
JS::AutoAssertOnGC::AutoAssertOnGC()
  : gc(nullptr), gcNumber(0)
{
    js::PerThreadData* data = js::TlsPerThreadData.get();
    if (data) {
        /*
         * GC's from off-thread will always assert, so off-thread is implicitly
         * AutoAssertOnGC. We still need to allow AutoAssertOnGC to be used in
         * code that works from both threads, however. We also use this to
         * annotate the off thread run loops.
         */
        JSRuntime* runtime = data->runtimeIfOnOwnerThread();
        if (runtime) {
            gc = &runtime->gc;
            gcNumber = gc->gcNumber();
            gc->enterUnsafeRegion();
        }
    }
}

JS::AutoAssertOnGC::AutoAssertOnGC(JSRuntime* rt)
  : gc(&rt->gc), gcNumber(rt->gc.gcNumber())
{
    gc->enterUnsafeRegion();
}

JS::AutoAssertOnGC::~AutoAssertOnGC()
{
    if (gc) {
        gc->leaveUnsafeRegion();

        /*
         * The following backstop assertion should never fire: if we bumped the
         * gcNumber, we should have asserted because inUnsafeRegion was true.
         */
        MOZ_ASSERT(gcNumber == gc->gcNumber(), "GC ran inside an AutoAssertOnGC scope.");
    }
}

/* static */ void
JS::AutoAssertOnGC::VerifyIsSafeToGC(JSRuntime* rt)
{
    if (rt->gc.isInsideUnsafeRegion())
        MOZ_CRASH("[AutoAssertOnGC] possible GC in GC-unsafe region");
}
#ifdef JSGC_HASH_TABLE_CHECKS
void
js::gc::CheckHashTablesAfterMovingGC(JSRuntime* rt)
{
    /*
     * Check that internal hash tables no longer have any pointers to things
     * that have been moved.
     */
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        c->checkTypeObjectTablesAfterMovingGC();
        c->checkInitialShapesTableAfterMovingGC();
        c->checkWrapperMapAfterMovingGC();
        if (c->debugScopes)
            c->debugScopes->checkHashTablesAfterMovingGC(rt);