/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

//
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine a node is
// black, it's ok to forget about, drop from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white node
// has reference count zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, they will tell us when they either transition back to being
// black (incremented refcount) or are ultimately deleted.
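//
// As a sketch (the class and member names below are illustrative, not from
// this file; the real glue is generated by the macros declared in
// nsCycleCollectionParticipant.h and nsISupportsImpl.h), a participating
// object is typically declared roughly like this:
//
//   class MyObject final : public nsISupports {
//    public:
//     NS_DECL_CYCLE_COLLECTING_ISUPPORTS
//     NS_DECL_CYCLE_COLLECTION_CLASS(MyObject)
//    private:
//     ~MyObject() = default;
//     nsCOMPtr<nsISupports> mMember;  // traversed and unlinked by the CC
//   };
//
//   NS_IMPL_CYCLE_COLLECTION(MyObject, mMember)
//   NS_IMPL_CYCLE_COLLECTING_ADDREF(MyObject)
//   NS_IMPL_CYCLE_COLLECTING_RELEASE(MyObject)
//
// The generated Release suspects the object (turns it purple) on the
// N+1 -> N transition, and the generated AddRef reports the transition back
// to black, matching the protocol described above.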
//
// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.
//
// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the
// collection, and if so, don't collect it during the current collection.
// This avoids having to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
//
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.
//
// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
//  - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//    this operation loses ISupports identity (like nsIClassInfo).
//  - Additionally, the operation |traverse| on the resulting
//    nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//    adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
//  - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//
#if !defined(__MINGW32__)
#  ifdef WIN32
#    include <crtdbg.h>
#    include <errno.h>
#  endif
#endif

#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSContext.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/HashFunctions.h"
#include "mozilla/HashTable.h"
#include "mozilla/HoldDropJSObjects.h"
/* This must occur *after* base/process_util.h to avoid typedefs conflicts. */
#include <stdint.h>
#include <stdio.h>

#include <utility>

#include "js/SliceBudget.h"
#include "mozilla/Attributes.h"
#include "mozilla/AutoGlobalTimelineMarker.h"
#include "mozilla/Likely.h"
#include "mozilla/LinkedList.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/MruCache.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/ProfilerLabels.h"
#include "mozilla/SegmentedVector.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"
#include "mozilla/UniquePtr.h"
#include "nsCycleCollectionNoteRootCallback.h"
#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollector.h"
#include "nsDeque.h"
#include "nsDumpUtils.h"
#include "nsExceptionHandler.h"
#include "nsIConsoleService.h"
#include "nsICycleCollectorListener.h"
#include "nsIFile.h"
#include "nsIMemoryReporter.h"
#include "nsISerialEventTarget.h"
#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsThreadUtils.h"
#include "nsXULAppAPI.h"
#include "prenv.h"
#include "xpcpublic.h"

using namespace mozilla;

struct NurseryPurpleBufferEntry {
  void* mPtr;
  nsCycleCollectionParticipant* mParticipant;
  nsCycleCollectingAutoRefCnt* mRefCnt;
};

#define NURSERY_PURPLE_BUFFER_SIZE 2048
bool gNurseryPurpleBufferEnabled = true;
NurseryPurpleBufferEntry gNurseryPurpleBufferEntry[NURSERY_PURPLE_BUFFER_SIZE];
uint32_t gNurseryPurpleBufferEntryCount = 0;

void ClearNurseryPurpleBuffer();

static void SuspectUsingNurseryPurpleBuffer(
    void* aPtr, nsCycleCollectionParticipant* aCp,
    nsCycleCollectingAutoRefCnt* aRefCnt) {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(gNurseryPurpleBufferEnabled);
  if (gNurseryPurpleBufferEntryCount == NURSERY_PURPLE_BUFFER_SIZE) {
    ClearNurseryPurpleBuffer();
  }

  gNurseryPurpleBufferEntry[gNurseryPurpleBufferEntryCount] = {aPtr, aCp,
                                                               aRefCnt};
  ++gNurseryPurpleBufferEntryCount;
}

// #define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph
// construction.
// #define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to
// "all", log either. The default value is "all". This must be used with
// either MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to "all",
// log everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector
// logging done will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap. If set to "shutdown", only shutdown logging will be WantAllTraces.
// The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set, run
// cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener)

// Various parameters of this collector can be tuned using environment
// variables.
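//
// For example, a shell invocation sketch (the log directory and the use of
// |./mach run| are illustrative, not prescribed by this file):
//
//   MOZ_CC_LOG_ALL=1 MOZ_CC_LOG_THREAD=main MOZ_CC_LOG_DIRECTORY=/tmp/cclogs \
//     ./mach run
//
// logs every main-thread cycle collection, writing gc-edges and cc-edges
// files under /tmp/cclogs (see nsCycleCollectorLogSinkToFile below).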

struct nsCycleCollectorParams {
  bool mLogAll;
  bool mLogShutdown;
  bool mAllTracesAll;
  bool mAllTracesShutdown;
  bool mLogThisThread;

  nsCycleCollectorParams()
      : mLogAll(PR_GetEnv("MOZ_CC_LOG_ALL") != nullptr),
        mLogShutdown(PR_GetEnv("MOZ_CC_LOG_SHUTDOWN") != nullptr),
        mAllTracesAll(false),
        mAllTracesShutdown(false) {
    const char* logThreadEnv = PR_GetEnv("MOZ_CC_LOG_THREAD");
    bool threadLogging = true;
    if (logThreadEnv && !!strcmp(logThreadEnv, "all")) {
      if (NS_IsMainThread()) {
        threadLogging = !strcmp(logThreadEnv, "main");
      } else {
        threadLogging = !strcmp(logThreadEnv, "worker");
      }
    }

    const char* logProcessEnv = PR_GetEnv("MOZ_CC_LOG_PROCESS");
    bool processLogging = true;
    if (logProcessEnv && !!strcmp(logProcessEnv, "all")) {
      switch (XRE_GetProcessType()) {
        case GeckoProcessType_Default:
          processLogging = !strcmp(logProcessEnv, "main");
          break;
        case GeckoProcessType_Content:
          processLogging = !strcmp(logProcessEnv, "content");
          break;
        default:
          processLogging = false;
          break;
      }
    }
    mLogThisThread = threadLogging && processLogging;

    const char* allTracesEnv = PR_GetEnv("MOZ_CC_ALL_TRACES");
    if (allTracesEnv) {
      if (!strcmp(allTracesEnv, "all")) {
        mAllTracesAll = true;
      } else if (!strcmp(allTracesEnv, "shutdown")) {
        mAllTracesShutdown = true;
      }
    }
  }

  bool LogThisCC(bool aIsShutdown) {
    return (mLogAll || (aIsShutdown && mLogShutdown)) && mLogThisThread;
  }

  bool AllTracesThisCC(bool aIsShutdown) {
    return mAllTracesAll || (aIsShutdown && mAllTracesShutdown);
  }
};

#ifdef COLLECT_TIME_DEBUG
class TimeLog {
 public:
  TimeLog() : mLastCheckpoint(TimeStamp::Now()) {}

  void Checkpoint(const char* aEvent) {
    TimeStamp now = TimeStamp::Now();
    double dur = (now - mLastCheckpoint).ToMilliseconds();
    if (dur >= 0.5) {
      printf("cc: %s took %.1fms\n", aEvent, dur);
    }
    mLastCheckpoint = now;
  }

 private:
  TimeStamp mLastCheckpoint;
};
#else
class TimeLog {
 public:
  TimeLog() = default;
  void Checkpoint(const char* aEvent) {}
};
#endif

////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////

class PtrInfo;

class EdgePool {
 public:
  // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
  // However, at the end of a block, the last two pointers are a null
  // and then a void** pointing to the next block. This allows
  // EdgePool::Iterators to be a single word but still capable of crossing
  // block boundaries.
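  //
  // A sketch of one block's layout (EdgeBlockSize slots):
  //
  //   [ ptrInfo | ptrInfo | ... | ptrInfo | nullptr | next EdgeBlock* ]
  //       0         1             Size-3    Size-2       Size-1
  //
  // When an Iterator reaches the null sentinel in slot Size-2, it follows
  // the pointer in slot Size-1 to the first slot of the next block.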

  EdgePool() {
    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

  ~EdgePool() {
    MOZ_ASSERT(!mSentinelAndBlocks[0].block && !mSentinelAndBlocks[1].block,
               "Didn't call Clear()?");
  }

  void Clear() {
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      EdgeBlock* next = b->Next();
      delete b;
      b = next;
    }

    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty() {
    return !mSentinelAndBlocks[0].block && !mSentinelAndBlocks[1].block;
  }
#endif

 private:
  struct EdgeBlock;
  union PtrInfoOrBlock {
    // Use a union to avoid reinterpret_cast and the ensuing
    // potential aliasing bugs.
    PtrInfo* ptrInfo;
    EdgeBlock* block;
  };
  struct EdgeBlock {
    enum { EdgeBlockSize = 16 * 1024 };

    PtrInfoOrBlock mPointers[EdgeBlockSize];
    EdgeBlock() {
      mPointers[EdgeBlockSize - 2].block = nullptr;  // sentinel
      mPointers[EdgeBlockSize - 1].block = nullptr;  // next block pointer
    }
    EdgeBlock*& Next() { return mPointers[EdgeBlockSize - 1].block; }
    PtrInfoOrBlock* Start() { return &mPointers[0]; }
    PtrInfoOrBlock* End() { return &mPointers[EdgeBlockSize - 2]; }
  };

  // Store the null sentinel so that we can have valid iterators
  // before adding any edges and without adding any blocks.
  PtrInfoOrBlock mSentinelAndBlocks[2];

  EdgeBlock*& EdgeBlocks() { return mSentinelAndBlocks[1].block; }
  EdgeBlock* EdgeBlocks() const { return mSentinelAndBlocks[1].block; }

 public:
  class Iterator {
   public:
    Iterator() : mPointer(nullptr) {}
    explicit Iterator(PtrInfoOrBlock* aPointer) : mPointer(aPointer) {}
    Iterator(const Iterator& aOther) = default;

    Iterator& operator++() {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        mPointer = (mPointer + 1)->block->mPointers;
      }
      ++mPointer;
      return *this;
    }

    PtrInfo* operator*() const {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        return (mPointer + 1)->block->mPointers->ptrInfo;
      }
      return mPointer->ptrInfo;
    }
    bool operator==(const Iterator& aOther) const {
      return mPointer == aOther.mPointer;
    }
    bool operator!=(const Iterator& aOther) const {
      return mPointer != aOther.mPointer;
    }

#ifdef DEBUG_CC_GRAPH
    bool Initialized() const { return mPointer != nullptr; }
#endif

   private:
    PtrInfoOrBlock* mPointer;
  };

  class Builder;
  friend class Builder;
  class Builder {
   public:
    explicit Builder(EdgePool& aPool)
        : mCurrent(&aPool.mSentinelAndBlocks[0]),
          mBlockEnd(&aPool.mSentinelAndBlocks[0]),
          mNextBlockPtr(&aPool.EdgeBlocks()) {}

    Iterator Mark() { return Iterator(mCurrent); }

    void Add(PtrInfo* aEdge) {
      if (mCurrent == mBlockEnd) {
        EdgeBlock* b = new EdgeBlock();
        *mNextBlockPtr = b;
        mCurrent = b->Start();
        mBlockEnd = b->End();
        mNextBlockPtr = &b->Next();
      }
      (mCurrent++)->ptrInfo = aEdge;
    }

   private:
    // mBlockEnd points to space for null sentinel
    PtrInfoOrBlock* mCurrent;
    PtrInfoOrBlock* mBlockEnd;
    EdgeBlock** mNextBlockPtr;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    size_t n = 0;
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      n += aMallocSizeOf(b);
      b = b->Next();
    }
    return n;
  }
};

#ifdef DEBUG_CC_GRAPH
#  define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#  define CC_GRAPH_ASSERT(b)
#endif

#define CC_TELEMETRY(_name, _value)                                            \
  do {                                                                         \
    if (NS_IsMainThread()) {                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);        \
    } else {                                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
    }                                                                          \
  } while (0)
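
// For example (an illustrative use; the actual probe names are defined by
// Telemetry), CC_TELEMETRY(_VISITED_REF_COUNTED, n) token-pastes the _name
// argument, accumulating |n| into CYCLE_COLLECTOR_VISITED_REF_COUNTED on the
// main thread and into CYCLE_COLLECTOR_WORKER_VISITED_REF_COUNTED on worker
// threads.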

enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.
class PtrInfo final {
 public:
  // mParticipant knows a more concrete type.
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  uint32_t mColor : 2;
  uint32_t mInternalRefs : 30;
  uint32_t mRefCount;

 private:
  EdgePool::Iterator mFirstChild;

  static const uint32_t kInitialRefCount = UINT32_MAX - 1;

 public:
  PtrInfo(void* aPointer, nsCycleCollectionParticipant* aParticipant)
      : mPointer(aPointer),
        mParticipant(aParticipant),
        mColor(grey),
        mInternalRefs(0),
        mRefCount(kInitialRefCount),
        mFirstChild() {
    MOZ_ASSERT(aParticipant);

    // We initialize mRefCount to a large non-zero value so
    // that it doesn't look like a JS object to the cycle collector
    // in the case where the object dies before being traversed.
    MOZ_ASSERT(!IsGrayJS() && !IsBlackJS());
  }

  // Allow NodePool::NodeBlock's constructor to compile.
  PtrInfo()
      : mPointer{nullptr},
        mParticipant{nullptr},
        mColor{0},
        mInternalRefs{0},
        mRefCount{0} {
    MOZ_ASSERT_UNREACHABLE("should never be called");
  }

  bool IsGrayJS() const { return mRefCount == 0; }

  bool IsBlackJS() const { return mRefCount == UINT32_MAX; }

  bool WasTraversed() const { return mRefCount != kInitialRefCount; }

  EdgePool::Iterator FirstChild() const {
    CC_GRAPH_ASSERT(mFirstChild.Initialized());
    return mFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  EdgePool::Iterator LastChild() const {
    CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
    return (this + 1)->mFirstChild;
  }

  void SetFirstChild(EdgePool::Iterator aFirstChild) {
    CC_GRAPH_ASSERT(aFirstChild.Initialized());
    mFirstChild = aFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  void SetLastChild(EdgePool::Iterator aLastChild) {
    CC_GRAPH_ASSERT(aLastChild.Initialized());
    (this + 1)->mFirstChild = aLastChild;
  }

  void AnnotatedReleaseAssert(bool aCondition, const char* aMessage);
};

void PtrInfo::AnnotatedReleaseAssert(bool aCondition, const char* aMessage) {
  if (aCondition) {
    return;
  }

  const char* piName = "Unknown";
  if (mParticipant) {
    piName = mParticipant->ClassName();
  }
  nsPrintfCString msg("%s, for class %s", aMessage, piName);
  NS_WARNING(msg.get());
  CrashReporter::AnnotateCrashReport(CrashReporter::Annotation::CycleCollector,
                                     msg);

  MOZ_CRASH();
}
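
// (Illustrative usage, not a call that appears verbatim in this excerpt: a
// caller might write
//
//   pi->AnnotatedReleaseAssert(pi->mInternalRefs <= pi->mRefCount,
//                              "Traversed refs exceed refcount");
//
// which, on failure, warns, annotates the crash report with the offending
// participant's class name, and then crashes.)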

/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * it allocates many PtrInfos at a time.
 */
class NodePool {
 private:
  // The -2 allows us to use |NodeBlockSize + 1| for |mEntries|, and fit
  // |mNext|, all without causing slop.
  enum { NodeBlockSize = 4 * 1024 - 2 };

  struct NodeBlock {
    // We create and destroy NodeBlock using moz_xmalloc/free rather than new
    // and delete to avoid calling its constructor and destructor.
    NodeBlock() : mNext{nullptr} {
      MOZ_ASSERT_UNREACHABLE("should never be called");

      // Ensure NodeBlock is the right size (see the comment on NodeBlockSize
      // above).
      static_assert(
          sizeof(NodeBlock) == 81904 ||  // 32-bit; equals 19.996 x 4 KiB pages
              sizeof(NodeBlock) ==
                  131048,  // 64-bit; equals 31.994 x 4 KiB pages
          "ill-sized NodeBlock");
    }
    ~NodeBlock() { MOZ_ASSERT_UNREACHABLE("should never be called"); }

    NodeBlock* mNext;
    PtrInfo mEntries[NodeBlockSize + 1];  // +1 to store last child of last node
  };

 public:
  NodePool() : mBlocks(nullptr), mLast(nullptr) {}

  ~NodePool() { MOZ_ASSERT(!mBlocks, "Didn't call Clear()?"); }

  void Clear() {
    NodeBlock* b = mBlocks;
    while (b) {
      NodeBlock* n = b->mNext;
      free(b);
      b = n;
    }

    mBlocks = nullptr;
    mLast = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty() { return !mBlocks && !mLast; }
#endif

  class Builder;
  friend class Builder;
  class Builder {
   public:
    explicit Builder(NodePool& aPool)
        : mNextBlock(&aPool.mBlocks), mNext(aPool.mLast), mBlockEnd(nullptr) {
      MOZ_ASSERT(!aPool.mBlocks && !aPool.mLast, "pool not empty");
    }
    PtrInfo* Add(void* aPointer, nsCycleCollectionParticipant* aParticipant) {
      if (mNext == mBlockEnd) {
        NodeBlock* block = static_cast<NodeBlock*>(malloc(sizeof(NodeBlock)));
        if (!block) {
          return nullptr;
        }

        *mNextBlock = block;
        mNext = block->mEntries;
        mBlockEnd = block->mEntries + NodeBlockSize;
        block->mNext = nullptr;
        mNextBlock = &block->mNext;
      }
      return new (mozilla::KnownNotNull, mNext++)
          PtrInfo(aPointer, aParticipant);
    }

   private:
    NodeBlock** mNextBlock;
    PtrInfo*& mNext;
    PtrInfo* mBlockEnd;
  };

  class Enumerator;
  friend class Enumerator;
  class Enumerator {
   public:
    explicit Enumerator(NodePool& aPool)
        : mFirstBlock(aPool.mBlocks),
          mCurBlock(nullptr),
          mNext(nullptr),
          mBlockEnd(nullptr),
          mLast(aPool.mLast) {}

    bool IsDone() const { return mNext == mLast; }

    bool AtBlockEnd() const { return mNext == mBlockEnd; }

    PtrInfo* GetNext() {
      MOZ_ASSERT(!IsDone(), "calling GetNext when done");
      if (mNext == mBlockEnd) {
        NodeBlock* nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
        mNext = nextBlock->mEntries;
        mBlockEnd = mNext + NodeBlockSize;
        mCurBlock = nextBlock;
      }
      return mNext++;
    }

   private:
    // mFirstBlock is a reference to allow an Enumerator to be constructed
    // for an empty graph.
    NodeBlock*& mFirstBlock;
    NodeBlock* mCurBlock;
    // mNext is the next value we want to return, unless mNext == mBlockEnd
    // NB: mLast is a reference to allow enumerating while building!
    PtrInfo* mNext;
    PtrInfo* mBlockEnd;
    PtrInfo*& mLast;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    // We don't measure the things pointed to by mEntries[] because those
    // pointers are non-owning.
    size_t n = 0;
    NodeBlock* b = mBlocks;
    while (b) {
      n += aMallocSizeOf(b);
      b = b->mNext;
    }
    return n;
  }

 private:
  NodeBlock* mBlocks;
  PtrInfo* mLast;
};

struct PtrToNodeHashPolicy {
  using Key = PtrInfo*;
  using Lookup = void*;

  static js::HashNumber hash(const Lookup& aLookup) {
    return mozilla::HashGeneric(aLookup);
  }

  static bool match(const Key& aKey, const Lookup& aLookup) {
    return aKey->mPointer == aLookup;
  }
};

struct WeakMapping {
  // map and key will be null if the corresponding objects are GC marked
  PtrInfo* mMap;
  PtrInfo* mKey;
  PtrInfo* mKeyDelegate;
  PtrInfo* mVal;
};

class CCGraphBuilder;

struct CCGraph {
  NodePool mNodes;
  EdgePool mEdges;
  nsTArray<WeakMapping> mWeakMaps;
  uint32_t mRootCount;

 private:
  friend CCGraphBuilder;

  mozilla::HashSet<PtrInfo*, PtrToNodeHashPolicy> mPtrInfoMap;

  bool mOutOfMemory;

  static const uint32_t kInitialMapLength = 16384;

 public:
  CCGraph()
      : mRootCount(0), mPtrInfoMap(kInitialMapLength), mOutOfMemory(false) {}

  ~CCGraph() = default;

  void Init() { MOZ_ASSERT(IsEmpty(), "Failed to call CCGraph::Clear"); }

  void Clear() {
    mNodes.Clear();
    mEdges.Clear();
    mWeakMaps.Clear();
    mRootCount = 0;
    mPtrInfoMap.clearAndCompact();
    mOutOfMemory = false;
  }

#ifdef DEBUG
  bool IsEmpty() {
    return mNodes.IsEmpty() && mEdges.IsEmpty() && mWeakMaps.IsEmpty() &&
           mRootCount == 0 && mPtrInfoMap.empty();
  }
#endif

  PtrInfo* FindNode(void* aPtr);
  void RemoveObjectFromMap(void* aObject);

  uint32_t MapCount() const { return mPtrInfoMap.count(); }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    size_t n = 0;

    n += mNodes.SizeOfExcludingThis(aMallocSizeOf);
    n += mEdges.SizeOfExcludingThis(aMallocSizeOf);

    // We don't measure what the WeakMappings point to, because the
    // pointers are non-owning.
    n += mWeakMaps.ShallowSizeOfExcludingThis(aMallocSizeOf);

    n += mPtrInfoMap.shallowSizeOfExcludingThis(aMallocSizeOf);

    return n;
  }
};

PtrInfo* CCGraph::FindNode(void* aPtr) {
  auto p = mPtrInfoMap.lookup(aPtr);
  return p ? *p : nullptr;
}

void CCGraph::RemoveObjectFromMap(void* aObj) {
  auto p = mPtrInfoMap.lookup(aObj);
  if (p) {
    PtrInfo* pinfo = *p;
    pinfo->mPointer = nullptr;
    pinfo->mParticipant = nullptr;
    mPtrInfoMap.remove(p);
  }
}

static nsISupports* CanonicalizeXPCOMParticipant(nsISupports* aIn) {
  nsISupports* out = nullptr;
  aIn->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
                      reinterpret_cast<void**>(&out));
  return out;
}

struct nsPurpleBufferEntry {
  nsPurpleBufferEntry(void* aObject, nsCycleCollectingAutoRefCnt* aRefCnt,
                      nsCycleCollectionParticipant* aParticipant)
      : mObject(aObject), mRefCnt(aRefCnt), mParticipant(aParticipant) {}

  nsPurpleBufferEntry(nsPurpleBufferEntry&& aOther)
      : mObject(nullptr), mRefCnt(nullptr), mParticipant(nullptr) {
    Swap(aOther);
  }

  void Swap(nsPurpleBufferEntry& aOther) {
    std::swap(mObject, aOther.mObject);
    std::swap(mRefCnt, aOther.mRefCnt);
    std::swap(mParticipant, aOther.mParticipant);
  }

  void Clear() {
    mRefCnt->RemoveFromPurpleBuffer();
    mRefCnt = nullptr;
    mObject = nullptr;
    mParticipant = nullptr;
  }

  ~nsPurpleBufferEntry() {
    if (mRefCnt) {
      mRefCnt->RemoveFromPurpleBuffer();
    }
  }

  void* mObject;
  nsCycleCollectingAutoRefCnt* mRefCnt;
  nsCycleCollectionParticipant* mParticipant;  // nullptr for nsISupports
};

class nsCycleCollector;

struct nsPurpleBuffer {
 private:
  uint32_t mCount;

  // Try to match the size of a jemalloc bucket, to minimize slop bytes.
  // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries'
  //   Segment is 16,372 bytes.
  // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries'
  //   Segment is 32,760 bytes.
  static const uint32_t kEntriesPerSegment = 1365;
  static const size_t kSegmentSize =
      sizeof(nsPurpleBufferEntry) * kEntriesPerSegment;
  typedef SegmentedVector<nsPurpleBufferEntry, kSegmentSize,
                          InfallibleAllocPolicy>
      PurpleBufferVector;
  PurpleBufferVector mEntries;

 public:
  nsPurpleBuffer() : mCount(0) {
    static_assert(
        sizeof(PurpleBufferVector::Segment) == 16372 ||      // 32-bit
            sizeof(PurpleBufferVector::Segment) == 32760 ||  // 64-bit
            sizeof(PurpleBufferVector::Segment) == 32744,    // 64-bit Windows
        "ill-sized nsPurpleBuffer::mEntries");
  }

  ~nsPurpleBuffer() = default;

  // This method compacts mEntries.
  template <class PurpleVisitor>
  void VisitEntries(PurpleVisitor& aVisitor) {
    Maybe<AutoRestore<bool>> ar;
    if (NS_IsMainThread()) {
      ar.emplace(gNurseryPurpleBufferEnabled);
      gNurseryPurpleBufferEnabled = false;
      ClearNurseryPurpleBuffer();
    }

    if (mEntries.IsEmpty()) {
      return;
    }

    uint32_t oldLength = mEntries.Length();
    uint32_t keptLength = 0;
    auto revIter = mEntries.IterFromLast();
    auto iter = mEntries.Iter();
    // After iteration this points to the first empty entry.
    auto firstEmptyIter = mEntries.Iter();
    auto iterFromLastEntry = mEntries.IterFromLast();
    for (; !iter.Done(); iter.Next()) {
      nsPurpleBufferEntry& e = iter.Get();
      if (e.mObject) {
        if (!aVisitor.Visit(*this, &e)) {
          return;
        }
      }

      // Visit call above may have cleared the entry, or the entry was empty
      // already.
      if (!e.mObject) {
        // Try to find a non-empty entry from the end of the vector.
        for (; !revIter.Done(); revIter.Prev()) {
          nsPurpleBufferEntry& otherEntry = revIter.Get();
          if (&e == &otherEntry) {
            break;
          }
          if (otherEntry.mObject) {
            if (!aVisitor.Visit(*this, &otherEntry)) {
              return;
            }
            // Visit may have cleared otherEntry.
            if (otherEntry.mObject) {
              e.Swap(otherEntry);
              revIter.Prev();  // We've swapped this now empty entry.
              break;
            }
          }
        }
      }

      // Entry is non-empty even after the Visit call, ensure it is kept
      // in mEntries.
      if (e.mObject) {
        firstEmptyIter.Next();
        ++keptLength;
      }

      if (&e == &revIter.Get()) {
        break;
      }
    }

    // There were some empty entries.
    if (oldLength != keptLength) {
      // While visiting entries, some new ones were possibly added. This can
      // happen during CanSkip. Move all such new entries to be after other
      // entries. Note, we don't call Visit on newly added entries!
      if (&iterFromLastEntry.Get() != &mEntries.GetLast()) {
        iterFromLastEntry.Next();  // Now pointing to the first added entry.
        auto& iterForNewEntries = iterFromLastEntry;
        while (!iterForNewEntries.Done()) {
          MOZ_ASSERT(!firstEmptyIter.Done());
          MOZ_ASSERT(!firstEmptyIter.Get().mObject);
          firstEmptyIter.Get().Swap(iterForNewEntries.Get());
          firstEmptyIter.Next();
          iterForNewEntries.Next();
        }
      }

      mEntries.PopLastN(oldLength - keptLength);
    }
  }

  void FreeBlocks() {
    mCount = 0;
    mEntries.Clear();
  }

  void SelectPointers(CCGraphBuilder& aBuilder);

  // RemoveSkippable removes entries from the purple buffer synchronously
  // (1) if !aAsyncSnowWhiteFreeing and nsPurpleBufferEntry::mRefCnt is 0 or
  // (2) if nsXPCOMCycleCollectionParticipant::CanSkip() for the obj or
  // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
  // (4) If aRemoveChildlessNodes is true, then any nodes in the purple buffer
  //     that will have no children in the cycle collector graph will also be
  //     removed. CanSkip() may be run on these children.
  void RemoveSkippable(nsCycleCollector* aCollector, js::SliceBudget& aBudget,
                       bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing,
                       CC_ForgetSkippableCallback aCb);

  MOZ_ALWAYS_INLINE void Put(void* aObject, nsCycleCollectionParticipant* aCp,
                             nsCycleCollectingAutoRefCnt* aRefCnt) {
    nsPurpleBufferEntry entry(aObject, aRefCnt, aCp);
    Unused << mEntries.Append(std::move(entry));
    MOZ_ASSERT(!entry.mRefCnt, "Move didn't work!");
    ++mCount;
  }

  void Remove(nsPurpleBufferEntry* aEntry) {
    MOZ_ASSERT(mCount != 0, "must have entries");
    --mCount;
    aEntry->Clear();
  }

  uint32_t Count() const { return mCount; }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    return mEntries.SizeOfExcludingThis(aMallocSizeOf);
  }
};

static bool AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
                          nsCycleCollectionParticipant* aParti);

struct SelectPointersVisitor {
  explicit SelectPointersVisitor(CCGraphBuilder& aBuilder)
      : mBuilder(aBuilder) {}

  bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "SelectPointersVisitor: snow-white object in the purple buffer");
    if (!aEntry->mRefCnt->IsPurple() ||
        AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
      aBuffer.Remove(aEntry);
    }
    return true;
  }

 private:
  CCGraphBuilder& mBuilder;
};

void nsPurpleBuffer::SelectPointers(CCGraphBuilder& aBuilder) {
  SelectPointersVisitor visitor(aBuilder);
  VisitEntries(visitor);

  MOZ_ASSERT(mCount == 0, "AddPurpleRoot failed");
  if (mCount == 0) {
    FreeBlocks();
  }
}

enum ccPhase {
  IdlePhase,
  GraphBuildingPhase,
  ScanAndCollectWhitePhase,
  CleanupPhase
};

enum ccIsManual { CCIsNotManual = false, CCIsManual = true };

////////////////////////////////////////////////////////////////////////
// Top level structure for the cycle collector.
////////////////////////////////////////////////////////////////////////

using js::SliceBudget;

class JSPurpleBuffer;

class nsCycleCollector : public nsIMemoryReporter {
 public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIMEMORYREPORTER

 private:
  bool mActivelyCollecting;
  bool mFreeingSnowWhite;
  // mScanInProgress should be false when we're collecting white objects.
  bool mScanInProgress;
  CycleCollectorResults mResults;
  TimeStamp mCollectionStart;

  CycleCollectedJSRuntime* mCCJSRuntime;

  ccPhase mIncrementalPhase;
  CCGraph mGraph;
  UniquePtr<CCGraphBuilder> mBuilder;
  RefPtr<nsCycleCollectorLogger> mLogger;

#ifdef DEBUG
  nsISerialEventTarget* mEventTarget;
#endif

  nsCycleCollectorParams mParams;

  uint32_t mWhiteNodeCount;

  CC_BeforeUnlinkCallback mBeforeUnlinkCB;
  CC_ForgetSkippableCallback mForgetSkippableCB;

  nsPurpleBuffer mPurpleBuf;

  uint32_t mUnmergedNeeded;
  uint32_t mMergedInARow;

  RefPtr<JSPurpleBuffer> mJSPurpleBuffer;

 private:
  virtual ~nsCycleCollector();

 public:
  nsCycleCollector();

  void SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime);
  void ClearCCJSRuntime();

  void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB) {
    CheckThreadSafety();
    mBeforeUnlinkCB = aBeforeUnlinkCB;
  }

  void SetForgetSkippableCallback(
      CC_ForgetSkippableCallback aForgetSkippableCB) {
    CheckThreadSafety();
    mForgetSkippableCB = aForgetSkippableCB;
  }

  void Suspect(void* aPtr, nsCycleCollectionParticipant* aCp,
               nsCycleCollectingAutoRefCnt* aRefCnt);
  void SuspectNurseryEntries();
  uint32_t SuspectedCount();
  void ForgetSkippable(js::SliceBudget& aBudget, bool aRemoveChildlessNodes,
                       bool aAsyncSnowWhiteFreeing);
  bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);
  bool FreeSnowWhiteWithBudget(js::SliceBudget& aBudget);

  // This method assumes its argument is already canonicalized.
  void RemoveObjectFromGraph(void* aPtr);

  void PrepareForGarbageCollection();
  void FinishAnyCurrentCollection(CCReason aReason);

  bool Collect(CCReason aReason, ccIsManual aIsManual, SliceBudget& aBudget,
               nsICycleCollectorListener* aManualListener,
               bool aPreferShorterSlices = false);
  MOZ_CAN_RUN_SCRIPT
  void Shutdown(bool aDoCollect);

  bool IsIdle() const { return mIncrementalPhase == IdlePhase; }

  void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                           size_t* aObjectSize, size_t* aGraphSize,
                           size_t* aPurpleBufferSize) const;

  JSPurpleBuffer* GetJSPurpleBuffer();

  CycleCollectedJSRuntime* Runtime() { return mCCJSRuntime; }

 private:
  void CheckThreadSafety();
  MOZ_CAN_RUN_SCRIPT
  void ShutdownCollect();

  void FixGrayBits(bool aIsShutdown, TimeLog& aTimeLog);
  bool IsIncrementalGCInProgress();
  void FinishAnyIncrementalGCInProgress();
  bool ShouldMergeZones(ccIsManual aIsManual);

  void BeginCollection(CCReason aReason, ccIsManual aIsManual,
                       nsICycleCollectorListener* aManualListener);
  void MarkRoots(SliceBudget& aBudget);
  void ScanRoots(bool aFullySynchGraphBuild);
  void ScanIncrementalRoots();
  void ScanWhiteNodes(bool aFullySynchGraphBuild);
  void ScanBlackNodes();
  void ScanWeakMaps();

  // returns whether anything was collected
  bool CollectWhite();

  void CleanupAfterCollection();
};

NS_IMPL_ISUPPORTS(nsCycleCollector, nsIMemoryReporter)

/**
 * GraphWalker is templatized over a Visitor class that must provide
 * the following two methods:
 *
 * bool ShouldVisitNode(PtrInfo const *pi);
 * void VisitNode(PtrInfo *pi);
 */
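// A minimal visitor sketch (hypothetical; the real visitors appear later in
// this file). Note that CheckedPush below also requires a Failed() method:
//
//   struct CountGreyVisitor {
//     uint32_t mCount = 0;
//     bool ShouldVisitNode(PtrInfo const* pi) { return pi->mColor == grey; }
//     void VisitNode(PtrInfo* pi) { ++mCount; }
//     void Failed() { /* e.g. record that an OOM occurred */ }
//   };
//
//   GraphWalker<CountGreyVisitor>(CountGreyVisitor()).WalkFromRoots(graph);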
template <class Visitor>
class GraphWalker {
 private:
  Visitor mVisitor;

  void DoWalk(nsDeque<PtrInfo>& aQueue);

  void CheckedPush(nsDeque<PtrInfo>& aQueue, PtrInfo* aPi) {
    if (!aPi) {
      MOZ_CRASH();
    }
    if (!aQueue.Push(aPi, fallible)) {
      mVisitor.Failed();
    }
  }

 public:
  void Walk(PtrInfo* aPi);
  void WalkFromRoots(CCGraph& aGraph);
  // copy-constructing the visitor should be cheap, and less
  // indirection than using a reference
  explicit GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor) {}
};

////////////////////////////////////////////////////////////////////////
// The static collector struct
////////////////////////////////////////////////////////////////////////

struct CollectorData {
  RefPtr<nsCycleCollector> mCollector;
  CycleCollectedJSContext* mContext;
};

static MOZ_THREAD_LOCAL(CollectorData*) sCollectorData;

////////////////////////////////////////////////////////////////////////
// Utility functions
////////////////////////////////////////////////////////////////////////

static inline void ToParticipant(nsISupports* aPtr,
                                 nsXPCOMCycleCollectionParticipant** aCp) {
  // We use QI to move from an nsISupports to an
  // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
  // object that implements traversal and unlinking logic for the nsISupports
  // in question.
  *aCp = nullptr;
  CallQueryInterface(aPtr, aCp);
}

static void ToParticipant(void* aParti, nsCycleCollectionParticipant** aCp) {
  // If the participant is null, this is an nsISupports participant,
  // so we must QI to get the real participant.

  if (!*aCp) {
    nsISupports* nsparti = static_cast<nsISupports*>(aParti);
    MOZ_ASSERT(CanonicalizeXPCOMParticipant(nsparti) == nsparti);
    nsXPCOMCycleCollectionParticipant* xcp;
    ToParticipant(nsparti, &xcp);
    *aCp = xcp;
  }
}

template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::Walk(PtrInfo* aPi) {
  nsDeque<PtrInfo> queue;
  CheckedPush(queue, aPi);
  DoWalk(queue);
}

template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::WalkFromRoots(CCGraph& aGraph) {
  nsDeque<PtrInfo> queue;
  NodePool::Enumerator etor(aGraph.mNodes);
  for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
    CheckedPush(queue, etor.GetNext());
  }
  DoWalk(queue);
}

template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::DoWalk(nsDeque<PtrInfo>& aQueue) {
  // Use aQueue to match the breadth-first traversal used when we
  // built the graph, for hopefully-better locality.
  while (aQueue.GetSize() > 0) {
    PtrInfo* pi = aQueue.PopFront();

    if (pi->WasTraversed() && mVisitor.ShouldVisitNode(pi)) {
      mVisitor.VisitNode(pi);
      for (EdgePool::Iterator child = pi->FirstChild(),
                              child_end = pi->LastChild();
           child != child_end; ++child) {
        CheckedPush(aQueue, *child);
      }
    }
  }
}

struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber> {
  CCGraphDescriber() : mAddress("0x"), mCnt(0), mType(eUnknown) {}

  enum Type {
    eRefCountedObject,
    eGCedObject,
    eGCMarkedObject,
    eEdge,
    eRoot,
    eGarbage,
    eUnknown
  };

  nsCString mAddress;
  nsCString mName;
  nsCString mCompartmentOrToAddress;
  uint32_t mCnt;
  Type mType;
};

class LogStringMessageAsync : public DiscardableRunnable {
 public:
  explicit LogStringMessageAsync(const nsAString& aMsg)
      : mozilla::DiscardableRunnable("LogStringMessageAsync"), mMsg(aMsg) {}

  NS_IMETHOD Run() override {
    nsCOMPtr<nsIConsoleService> cs =
        do_GetService(NS_CONSOLESERVICE_CONTRACTID);
    if (cs) {
      cs->LogStringMessage(mMsg.get());
    }
    return NS_OK;
  }

 private:
  nsString mMsg;
};

class nsCycleCollectorLogSinkToFile final : public nsICycleCollectorLogSink {
 public:
  NS_DECL_ISUPPORTS

  nsCycleCollectorLogSinkToFile()
      : mProcessIdentifier(base::GetCurrentProcId()),
        mGCLog("gc-edges"),
        mCCLog("cc-edges") {}

  NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier) override {
    aIdentifier = mFilenameIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier) override {
    mFilenameIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetProcessIdentifier(int32_t* aIdentifier) override {
    *aIdentifier = mProcessIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetProcessIdentifier(int32_t aIdentifier) override {
    mProcessIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetGcLog(nsIFile** aPath) override {
    NS_IF_ADDREF(*aPath = mGCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD GetCcLog(nsIFile** aPath) override {
    NS_IF_ADDREF(*aPath = mCCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD Open(FILE** aGCLog, FILE** aCCLog) override {
    nsresult rv;

    if (mGCLog.mStream || mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }

    rv = OpenLog(&mGCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aGCLog = mGCLog.mStream;

    rv = OpenLog(&mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aCCLog = mCCLog.mStream;

    return NS_OK;
  }

  NS_IMETHOD CloseGCLog() override {
    if (!mGCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mGCLog, u"Garbage"_ns);
    return NS_OK;
  }

  NS_IMETHOD CloseCCLog() override {
    if (!mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mCCLog, u"Cycle"_ns);
    return NS_OK;
  }

 private:
  ~nsCycleCollectorLogSinkToFile() {
    if (mGCLog.mStream) {
      MozillaUnRegisterDebugFILE(mGCLog.mStream);
      fclose(mGCLog.mStream);
    }
    if (mCCLog.mStream) {
      MozillaUnRegisterDebugFILE(mCCLog.mStream);
      fclose(mCCLog.mStream);
    }
  }

  struct FileInfo {
    const char* const mPrefix;
    nsCOMPtr<nsIFile> mFile;
    FILE* mStream;

    explicit FileInfo(const char* aPrefix)
        : mPrefix(aPrefix), mStream(nullptr) {}
  };

  /**
   * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
   * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
   * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
   * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
   * on.
   */
  already_AddRefed<nsIFile> CreateTempFile(const char* aPrefix) {
    nsPrintfCString filename("%s.%d%s%s.log", aPrefix, mProcessIdentifier,
                             mFilenameIdentifier.IsEmpty() ? "" : ".",
                             NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());

    // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
    // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
    // here because OpenTempFile uses an in/out param and getter_AddRefs
    // wouldn't work.
    nsIFile* logFile = nullptr;
    if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
      NS_NewNativeLocalFile(nsCString(env), /* followLinks = */ true, &logFile);
    }

    // On Android or B2G, this function will open a file named
    // aFilename under a memory-reporting-specific folder
    // (/data/local/tmp/memory-reports). Otherwise, it will open a
    // file named aFilename under "NS_OS_TEMP_DIR".
    nsresult rv =
        nsDumpUtils::OpenTempFile(filename, &logFile, "memory-reports"_ns);
    if (NS_FAILED(rv)) {
      NS_IF_RELEASE(logFile);
      return nullptr;
    }

    return dont_AddRef(logFile);
  }
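
  // (Illustrative note, following from the format string above: with aPrefix
  // "cc-edges", process id 1234, and no filename identifier, CreateTempFile
  // yields a file named "cc-edges.1234.log"; OpenLog below first writes it as
  // "incomplete-cc-edges.1234.log" and CloseLog renames it when done.)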

  nsresult OpenLog(FileInfo* aLog) {
    // Initially create the log in a file starting with "incomplete-".
    // We'll move the file and strip off the "incomplete-" once the dump
    // completes. (We do this because we don't want scripts which poll
    // the filesystem looking for GC/CC dumps to grab a file before we're
    // finished writing to it.)
    nsAutoCString incomplete;
    incomplete += "incomplete-";
    incomplete += aLog->mPrefix;
    MOZ_ASSERT(!aLog->mFile);
    aLog->mFile = CreateTempFile(incomplete.get());
    if (NS_WARN_IF(!aLog->mFile)) {
      return NS_ERROR_UNEXPECTED;
    }

    MOZ_ASSERT(!aLog->mStream);
    nsresult rv = aLog->mFile->OpenANSIFileDesc("w", &aLog->mStream);
    if (NS_WARN_IF(NS_FAILED(rv))) {
      return NS_ERROR_UNEXPECTED;
    }
    MozillaRegisterDebugFILE(aLog->mStream);
    return NS_OK;
  }

  nsresult CloseLog(FileInfo* aLog, const nsAString& aCollectorKind) {
    MOZ_ASSERT(aLog->mStream);
    MOZ_ASSERT(aLog->mFile);

    MozillaUnRegisterDebugFILE(aLog->mStream);
    fclose(aLog->mStream);
    aLog->mStream = nullptr;

    // Strip off "incomplete-".
    nsCOMPtr<nsIFile> logFileFinalDestination = CreateTempFile(aLog->mPrefix);
    if (NS_WARN_IF(!logFileFinalDestination)) {
      return NS_ERROR_UNEXPECTED;
    }

    nsAutoString logFileFinalDestinationName;
    logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
    if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty())) {
      return NS_ERROR_UNEXPECTED;
    }

    aLog->mFile->MoveTo(/* directory */ nullptr, logFileFinalDestinationName);

    // Save the file path.
    aLog->mFile = logFileFinalDestination;

    // Log to the error console.
    nsAutoString logPath;
    logFileFinalDestination->GetPath(logPath);
    nsAutoString msg =
        aCollectorKind + u" Collector log dumped to "_ns + logPath;

    // We don't want any JS to run between ScanRoots and CollectWhite calls,
    // and since ScanRoots calls this method, better to log the message
    // asynchronously.
    RefPtr<LogStringMessageAsync> log = new LogStringMessageAsync(msg);
    NS_DispatchToCurrentThread(log);
    return NS_OK;
  }

  int32_t mProcessIdentifier;
  nsString mFilenameIdentifier;
  FileInfo mGCLog;
  FileInfo mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogSinkToFile, nsICycleCollectorLogSink)

class nsCycleCollectorLogger final : public nsICycleCollectorListener {
  ~nsCycleCollectorLogger() { ClearDescribers(); }

 public:
  nsCycleCollectorLogger()
      : mLogSink(nsCycleCollector_createLogSink()),
        mWantAllTraces(false),
        mDisableLog(false),
        mWantAfterProcessing(false),
        mCCLog(nullptr) {}

  NS_DECL_ISUPPORTS

  void SetAllTraces() { mWantAllTraces = true; }

  bool IsAllTraces() { return mWantAllTraces; }

  NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener) override {
    SetAllTraces();
    NS_ADDREF(*aListener = this);
    return NS_OK;
  }

  NS_IMETHOD GetWantAllTraces(bool* aAllTraces) override {
    *aAllTraces = mWantAllTraces;
    return NS_OK;
  }

  NS_IMETHOD GetDisableLog(bool* aDisableLog) override {
    *aDisableLog = mDisableLog;
    return NS_OK;
  }

  NS_IMETHOD SetDisableLog(bool aDisableLog) override {
    mDisableLog = aDisableLog;
    return NS_OK;
  }

  NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing) override {
    *aWantAfterProcessing = mWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing) override {
    mWantAfterProcessing = aWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD GetLogSink(nsICycleCollectorLogSink** aLogSink) override {
    NS_ADDREF(*aLogSink = mLogSink);
    return NS_OK;
  }

  NS_IMETHOD SetLogSink(nsICycleCollectorLogSink* aLogSink) override {
    if (!aLogSink) {
      return NS_ERROR_INVALID_ARG;
    }
    mLogSink = aLogSink;
    return NS_OK;
  }

  nsresult Begin() {
    nsresult rv;

    mCurrentAddress.AssignLiteral("0x");
    ClearDescribers();
    if (mDisableLog) {
      return NS_OK;
    }

    FILE* gcLog;
    rv = mLogSink->Open(&gcLog, &mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    // Dump the JS heap.
    CollectorData* data = sCollectorData.get();
    if (data && data->mContext) {
      data->mContext->Runtime()->DumpJSHeap(gcLog);
    }
    rv = mLogSink->CloseGCLog();
    NS_ENSURE_SUCCESS(rv, rv);

    fprintf(mCCLog, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");
    return NS_OK;
  }
  void NoteRefCountedObject(uint64_t aAddress, uint32_t aRefCount,
                            const char* aObjectDescription) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [rc=%u] %s\n", (void*)aAddress, aRefCount,
              aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = CCGraphDescriber::eRefCountedObject;
      d->mAddress = mCurrentAddress;
      d->mCnt = aRefCount;
      d->mName.Append(aObjectDescription);
    }
  }
  void NoteGCedObject(uint64_t aAddress, bool aMarked,
                      const char* aObjectDescription,
                      uint64_t aCompartmentAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [gc%s] %s\n", (void*)aAddress,
              aMarked ? ".marked" : "", aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject
                         : CCGraphDescriber::eGCedObject;
      d->mAddress = mCurrentAddress;
      d->mName.Append(aObjectDescription);
      if (aCompartmentAddress) {
        d->mCompartmentOrToAddress.AssignLiteral("0x");
        d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
      } else {
        d->mCompartmentOrToAddress.SetIsVoid(true);
      }
    }
  }
  void NoteEdge(uint64_t aToAddress, const char* aEdgeName) {
    if (!mDisableLog) {
      fprintf(mCCLog, "> %p %s\n", (void*)aToAddress, aEdgeName);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eEdge;
      d->mAddress = mCurrentAddress;
      d->mCompartmentOrToAddress.AssignLiteral("0x");
      d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
      d->mName.Append(aEdgeName);
    }
  }
  void NoteWeakMapEntry(uint64_t aMap, uint64_t aKey, uint64_t aKeyDelegate,
                        uint64_t aValue) {
    if (!mDisableLog) {
      fprintf(mCCLog, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
              (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
    }
    // We don't support after-processing for weak map entries.
  }
  void NoteIncrementalRoot(uint64_t aAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "IncrementalRoot %p\n", (void*)aAddress);
    }
    // We don't support after-processing for incremental roots.
  }
  void BeginResults() {
    if (!mDisableLog) {
      fputs("==========\n", mCCLog);
    }
  }
  void DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eRoot;
      d->mAddress.AppendInt(aAddress, 16);
      d->mCnt = aKnownEdges;
    }
  }
  void DescribeGarbage(uint64_t aAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [garbage]\n", (void*)aAddress);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eGarbage;
      d->mAddress.AppendInt(aAddress, 16);
    }
  }
  void End() {
    if (!mDisableLog) {
      mCCLog = nullptr;
      Unused << NS_WARN_IF(NS_FAILED(mLogSink->CloseCCLog()));
    }
  }
  NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
                         bool* aCanContinue) override {
    if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing)) {
      return NS_ERROR_UNEXPECTED;
    }
    CCGraphDescriber* d = mDescribers.popFirst();
    if (d) {
      switch (d->mType) {
        case CCGraphDescriber::eRefCountedObject:
          aHandler->NoteRefCountedObject(d->mAddress, d->mCnt, d->mName);
          break;
        case CCGraphDescriber::eGCedObject:
        case CCGraphDescriber::eGCMarkedObject:
          aHandler->NoteGCedObject(
              d->mAddress, d->mType == CCGraphDescriber::eGCMarkedObject,
              d->mName, d->mCompartmentOrToAddress);
          break;
        case CCGraphDescriber::eEdge:
          aHandler->NoteEdge(d->mAddress, d->mCompartmentOrToAddress, d->mName);
          break;
        case CCGraphDescriber::eRoot:
          aHandler->DescribeRoot(d->mAddress, d->mCnt);
          break;
        case CCGraphDescriber::eGarbage:
          aHandler->DescribeGarbage(d->mAddress);
          break;
        case CCGraphDescriber::eUnknown:
          MOZ_ASSERT_UNREACHABLE("CCGraphDescriber::eUnknown");
          break;
      }
      delete d;
    }
    if (!(*aCanContinue = !mDescribers.isEmpty())) {
      mCurrentAddress.AssignLiteral("0x");
    }
    return NS_OK;
  }
  NS_IMETHOD AsLogger(nsCycleCollectorLogger** aRetVal) override {
    RefPtr<nsCycleCollectorLogger> rval = this;
    rval.forget(aRetVal);
    return NS_OK;
  }

 private:
  void ClearDescribers() {
    CCGraphDescriber* d;
    while ((d = mDescribers.popFirst())) {
      delete d;
    }
  }

  nsCOMPtr<nsICycleCollectorLogSink> mLogSink;
  bool mWantAllTraces;
  bool mDisableLog;
  bool mWantAfterProcessing;
  nsCString mCurrentAddress;
  mozilla::LinkedList<CCGraphDescriber> mDescribers;
  FILE* mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogger, nsICycleCollectorListener)

already_AddRefed<nsICycleCollectorListener> nsCycleCollector_createLogger() {
  nsCOMPtr<nsICycleCollectorListener> logger = new nsCycleCollectorLogger();
  return logger.forget();
}

static bool GCThingIsGrayCCThing(JS::GCCellPtr thing) {
  return JS::IsCCTraceKind(thing.kind()) && JS::GCThingIsMarkedGrayInCC(thing);
}

static bool ValueIsGrayCCThing(const JS::Value& value) {
  return JS::IsCCTraceKind(value.traceKind()) &&
         JS::GCThingIsMarkedGray(value.toGCCellPtr());
}

////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |MarkRoots| routine.
////////////////////////////////////////////////////////////////////////

class CCGraphBuilder final : public nsCycleCollectionTraversalCallback,
                             public nsCycleCollectionNoteRootCallback {
 private:
  CCGraph& mGraph;
  CycleCollectorResults& mResults;
  NodePool::Builder mNodeBuilder;
  EdgePool::Builder mEdgeBuilder;
  MOZ_INIT_OUTSIDE_CTOR PtrInfo* mCurrPi;
  nsCycleCollectionParticipant* mJSParticipant;
  nsCycleCollectionParticipant* mJSZoneParticipant;
  nsCString mNextEdgeName;
  RefPtr<nsCycleCollectorLogger> mLogger;
  bool mMergeZones;
  UniquePtr<NodePool::Enumerator> mCurrNode;
  uint32_t mNoteChildCount;

  struct PtrInfoCache : public MruCache<void*, PtrInfo*, PtrInfoCache, 491> {
    static HashNumber Hash(const void* aKey) { return HashGeneric(aKey); }
    static bool Match(const void* aKey, const PtrInfo* aVal) {
      return aVal->mPointer == aKey;
    }
  };

  PtrInfoCache mGraphCache;

 public:
  CCGraphBuilder(CCGraph& aGraph, CycleCollectorResults& aResults,
                 CycleCollectedJSRuntime* aCCRuntime,
                 nsCycleCollectorLogger* aLogger, bool aMergeZones);
  virtual ~CCGraphBuilder();

  bool WantAllTraces() const {
    return nsCycleCollectionNoteRootCallback::WantAllTraces();
  }

  bool AddPurpleRoot(void* aRoot, nsCycleCollectionParticipant* aParti);

  // This is called when all roots have been added to the graph, to prepare for
  // BuildGraph().
  void DoneAddingRoots();

  // Do some work traversing nodes in the graph. Returns true if this graph
  // building is finished.
  bool BuildGraph(SliceBudget& aBudget);

  void RemoveCachedEntry(void* aPtr) { mGraphCache.Remove(aPtr); }

 private:
  PtrInfo* AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant);
  PtrInfo* AddWeakMapNode(JS::GCCellPtr aThing);
  PtrInfo* AddWeakMapNode(JSObject* aObject);

  void SetFirstChild() { mCurrPi->SetFirstChild(mEdgeBuilder.Mark()); }

  void SetLastChild() { mCurrPi->SetLastChild(mEdgeBuilder.Mark()); }

 public:
  // nsCycleCollectionNoteRootCallback methods.
  NS_IMETHOD_(void)
  NoteXPCOMRoot(nsISupports* aRoot,
                nsCycleCollectionParticipant* aParticipant) override;
  NS_IMETHOD_(void) NoteJSRoot(JSObject* aRoot) override;
  NS_IMETHOD_(void)
  NoteNativeRoot(void* aRoot,
                 nsCycleCollectionParticipant* aParticipant) override;
  NS_IMETHOD_(void)
  NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey, JSObject* aKdelegate,
                  JS::GCCellPtr aVal) override;
  // This is used to create synthetic non-refcounted references to
  // nsXPCWrappedJS from their wrapped JS objects. No map is needed, because
  // the SubjectToFinalization list is like a known-black weak map, and
  // no delegate is needed because the keys are all unwrapped objects.
  NS_IMETHOD_(void)
  NoteWeakMapping(JSObject* aKey, nsISupports* aVal,
                  nsCycleCollectionParticipant* aValParticipant) override;

  // nsCycleCollectionTraversalCallback methods.
  NS_IMETHOD_(void)
  DescribeRefCountedNode(nsrefcnt aRefCount, const char* aObjName) override;
  NS_IMETHOD_(void)
  DescribeGCedNode(bool aIsMarked, const char* aObjName,
                   uint64_t aCompartmentAddress) override;

  NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild) override;
  NS_IMETHOD_(void) NoteJSChild(JS::GCCellPtr aThing) override;
  NS_IMETHOD_(void)
  NoteNativeChild(void* aChild,
                  nsCycleCollectionParticipant* aParticipant) override;
  NS_IMETHOD_(void) NoteNextEdgeName(const char* aName) override;

 private:
  NS_IMETHOD_(void)
  NoteRoot(void* aRoot, nsCycleCollectionParticipant* aParticipant) {
    MOZ_ASSERT(aRoot);
    MOZ_ASSERT(aParticipant);

    if (!aParticipant->CanSkipInCC(aRoot) || MOZ_UNLIKELY(WantAllTraces())) {
      AddNode(aRoot, aParticipant);
    }
  }

  NS_IMETHOD_(void)
  NoteChild(void* aChild, nsCycleCollectionParticipant* aCp,
            nsCString& aEdgeName) {
    PtrInfo* childPi = AddNode(aChild, aCp);
    if (!childPi) {
      return;
    }
    mEdgeBuilder.Add(childPi);
    if (mLogger) {
      mLogger->NoteEdge((uint64_t)aChild, aEdgeName.get());
    }
    ++childPi->mInternalRefs;
  }

  JS::Zone* MergeZone(JS::GCCellPtr aGcthing) {
    if (!mMergeZones) {
      return nullptr;
    }
    JS::Zone* zone = JS::GetTenuredGCThingZone(aGcthing);
    if (js::IsSystemZone(zone)) {
      return nullptr;
    }
    return zone;
  }
};

CCGraphBuilder::CCGraphBuilder(CCGraph& aGraph, CycleCollectorResults& aResults,
                               CycleCollectedJSRuntime* aCCRuntime,
                               nsCycleCollectorLogger* aLogger,
                               bool aMergeZones)
    : mGraph(aGraph),
      mResults(aResults),
      mNodeBuilder(aGraph.mNodes),
      mEdgeBuilder(aGraph.mEdges),
      mJSParticipant(nullptr),
      mJSZoneParticipant(nullptr),
      mLogger(aLogger),
      mMergeZones(aMergeZones),
      mNoteChildCount(0) {
  // 4096 is an allocation bucket size.
  static_assert(sizeof(CCGraphBuilder) <= 4096,
                "Don't create too large CCGraphBuilder objects");
1958 if (aCCRuntime) {
1959 mJSParticipant = aCCRuntime->GCThingParticipant();
1960 mJSZoneParticipant = aCCRuntime->ZoneParticipant();
1963 if (mLogger) {
1964 mFlags |= nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
1965 if (mLogger->IsAllTraces()) {
1966 mFlags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
1967 mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
1971 mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());
1973 MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
1974 nsCycleCollectionTraversalCallback::WantAllTraces());
1977 CCGraphBuilder::~CCGraphBuilder() = default;
1979 PtrInfo* CCGraphBuilder::AddNode(void* aPtr,
1980 nsCycleCollectionParticipant* aParticipant) {
1981 if (mGraph.mOutOfMemory) {
1982 return nullptr;
1985 PtrInfoCache::Entry cached = mGraphCache.Lookup(aPtr);
1986 if (cached) {
1987 #ifdef DEBUG
1988 if (cached.Data()->mParticipant != aParticipant) {
1989 auto* parti1 = cached.Data()->mParticipant;
1990 auto* parti2 = aParticipant;
1991 NS_WARNING(
1992 nsPrintfCString("cached participant: %s; AddNode participant: %s\n",
1993 parti1 ? parti1->ClassName() : "null",
1994 parti2 ? parti2->ClassName() : "null")
1995 .get());
1997 #endif
1998 MOZ_ASSERT(cached.Data()->mParticipant == aParticipant,
1999 "nsCycleCollectionParticipant shouldn't change!");
2000 return cached.Data();
2003 PtrInfo* result;
2004 auto p = mGraph.mPtrInfoMap.lookupForAdd(aPtr);
2005 if (!p) {
2006 // New entry
2007 result = mNodeBuilder.Add(aPtr, aParticipant);
2008 if (!result) {
2009 return nullptr;
2012 if (!mGraph.mPtrInfoMap.add(p, result)) {
2013 // `result` leaks here, but we can't free it because it's
2014 // pool-allocated within NodePool.
2015 mGraph.mOutOfMemory = true;
2016 MOZ_ASSERT(false, "OOM while building cycle collector graph");
2017 return nullptr;
2020 } else {
2021 result = *p;
2022 MOZ_ASSERT(result->mParticipant == aParticipant,
2023 "nsCycleCollectionParticipant shouldn't change!");
2026 cached.Set(result);
2028 return result;
2031 bool CCGraphBuilder::AddPurpleRoot(void* aRoot,
2032 nsCycleCollectionParticipant* aParti) {
2033 ToParticipant(aRoot, &aParti);
2035 if (WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
2036 PtrInfo* pinfo = AddNode(aRoot, aParti);
2037 if (!pinfo) {
2038 return false;
2042 return true;
2045 void CCGraphBuilder::DoneAddingRoots() {
2046 // We've finished adding roots, and everything in the graph is a root.
2047 mGraph.mRootCount = mGraph.MapCount();
2049 mCurrNode = MakeUnique<NodePool::Enumerator>(mGraph.mNodes);
2052 MOZ_NEVER_INLINE bool CCGraphBuilder::BuildGraph(SliceBudget& aBudget) {
2053 MOZ_ASSERT(mCurrNode);
2055 while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
2056 mNoteChildCount = 0;
2058 PtrInfo* pi = mCurrNode->GetNext();
2059 if (!pi) {
2060 MOZ_CRASH();
2063 mCurrPi = pi;
2065 // We need to call SetFirstChild() even on deleted nodes, to set their
2066 // firstChild(), which may be read by a preceding non-deleted neighbor.
2067 SetFirstChild();
2069 if (pi->mParticipant) {
2070 nsresult rv = pi->mParticipant->TraverseNativeAndJS(pi->mPointer, *this);
2071 MOZ_RELEASE_ASSERT(!NS_FAILED(rv),
2072 "Cycle collector Traverse method failed");
2075 if (mCurrNode->AtBlockEnd()) {
2076 SetLastChild();
2079 aBudget.step(mNoteChildCount + 1);
2082 if (!mCurrNode->IsDone()) {
2083 return false;
2086 if (mGraph.mRootCount > 0) {
2087 SetLastChild();
2090 mCurrNode = nullptr;
2092 return true;
2095 NS_IMETHODIMP_(void)
2096 CCGraphBuilder::NoteXPCOMRoot(nsISupports* aRoot,
2097 nsCycleCollectionParticipant* aParticipant) {
2098 MOZ_ASSERT(aRoot == CanonicalizeXPCOMParticipant(aRoot));
2100 #ifdef DEBUG
2101 nsXPCOMCycleCollectionParticipant* cp;
2102 ToParticipant(aRoot, &cp);
2103 MOZ_ASSERT(aParticipant == cp);
2104 #endif
2106 NoteRoot(aRoot, aParticipant);
2109 NS_IMETHODIMP_(void)
2110 CCGraphBuilder::NoteJSRoot(JSObject* aRoot) {
2111 if (JS::Zone* zone = MergeZone(JS::GCCellPtr(aRoot))) {
2112 NoteRoot(zone, mJSZoneParticipant);
2113 } else {
2114 NoteRoot(aRoot, mJSParticipant);
2118 NS_IMETHODIMP_(void)
2119 CCGraphBuilder::NoteNativeRoot(void* aRoot,
2120 nsCycleCollectionParticipant* aParticipant) {
2121 NoteRoot(aRoot, aParticipant);
2124 NS_IMETHODIMP_(void)
2125 CCGraphBuilder::DescribeRefCountedNode(nsrefcnt aRefCount,
2126 const char* aObjName) {
2127 mCurrPi->AnnotatedReleaseAssert(aRefCount != 0,
2128 "CCed refcounted object has zero refcount");
2129 mCurrPi->AnnotatedReleaseAssert(
2130 aRefCount != UINT32_MAX,
2131 "CCed refcounted object has overflowing refcount");
2133 mResults.mVisitedRefCounted++;
2135 if (mLogger) {
2136 mLogger->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, aRefCount,
2137 aObjName);
2140 mCurrPi->mRefCount = aRefCount;
2143 NS_IMETHODIMP_(void)
2144 CCGraphBuilder::DescribeGCedNode(bool aIsMarked, const char* aObjName,
2145 uint64_t aCompartmentAddress) {
2146 uint32_t refCount = aIsMarked ? UINT32_MAX : 0;
2147 mResults.mVisitedGCed++;
2149 if (mLogger) {
2150 mLogger->NoteGCedObject((uint64_t)mCurrPi->mPointer, aIsMarked, aObjName,
2151 aCompartmentAddress);
2154 mCurrPi->mRefCount = refCount;
2157 NS_IMETHODIMP_(void)
2158 CCGraphBuilder::NoteXPCOMChild(nsISupports* aChild) {
2159 nsCString edgeName;
2160 if (WantDebugInfo()) {
2161 edgeName.Assign(mNextEdgeName);
2162 mNextEdgeName.Truncate();
2164 if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
2165 return;
2168 ++mNoteChildCount;
2170 nsXPCOMCycleCollectionParticipant* cp;
2171 ToParticipant(aChild, &cp);
2172 if (cp && (!cp->CanSkipThis(aChild) || WantAllTraces())) {
2173 NoteChild(aChild, cp, edgeName);
2177 NS_IMETHODIMP_(void)
2178 CCGraphBuilder::NoteNativeChild(void* aChild,
2179 nsCycleCollectionParticipant* aParticipant) {
2180 nsCString edgeName;
2181 if (WantDebugInfo()) {
2182 edgeName.Assign(mNextEdgeName);
2183 mNextEdgeName.Truncate();
2185 if (!aChild) {
2186 return;
2189 ++mNoteChildCount;
2191 MOZ_ASSERT(aParticipant, "Need a nsCycleCollectionParticipant!");
2192 if (!aParticipant->CanSkipThis(aChild) || WantAllTraces()) {
2193 NoteChild(aChild, aParticipant, edgeName);
2197 NS_IMETHODIMP_(void)
2198 CCGraphBuilder::NoteJSChild(JS::GCCellPtr aChild) {
2199 if (!aChild) {
2200 return;
2203 ++mNoteChildCount;
2205 nsCString edgeName;
2206 if (MOZ_UNLIKELY(WantDebugInfo())) {
2207 edgeName.Assign(mNextEdgeName);
2208 mNextEdgeName.Truncate();
2211 if (GCThingIsGrayCCThing(aChild) || MOZ_UNLIKELY(WantAllTraces())) {
2212 if (JS::Zone* zone = MergeZone(aChild)) {
2213 NoteChild(zone, mJSZoneParticipant, edgeName);
2214 } else {
2215 NoteChild(aChild.asCell(), mJSParticipant, edgeName);
2220 NS_IMETHODIMP_(void)
2221 CCGraphBuilder::NoteNextEdgeName(const char* aName) {
2222 if (WantDebugInfo()) {
2223 mNextEdgeName = aName;
2227 PtrInfo* CCGraphBuilder::AddWeakMapNode(JS::GCCellPtr aNode) {
2228 MOZ_ASSERT(aNode, "Weak map node should be non-null.");
2230 if (!GCThingIsGrayCCThing(aNode) && !WantAllTraces()) {
2231 return nullptr;
2234 if (JS::Zone* zone = MergeZone(aNode)) {
2235 return AddNode(zone, mJSZoneParticipant);
2237 return AddNode(aNode.asCell(), mJSParticipant);
2240 PtrInfo* CCGraphBuilder::AddWeakMapNode(JSObject* aObject) {
2241 return AddWeakMapNode(JS::GCCellPtr(aObject));
2244 NS_IMETHODIMP_(void)
2245 CCGraphBuilder::NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey,
2246 JSObject* aKdelegate, JS::GCCellPtr aVal) {
2247 // Don't try to optimize away the entry here, as we've already attempted to
2248 // do that in TraceWeakMapping in nsXPConnect.
2249 WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
2250 mapping->mMap = aMap ? AddWeakMapNode(aMap) : nullptr;
2251 mapping->mKey = aKey ? AddWeakMapNode(aKey) : nullptr;
2252 mapping->mKeyDelegate =
2253 aKdelegate ? AddWeakMapNode(aKdelegate) : mapping->mKey;
2254 mapping->mVal = aVal ? AddWeakMapNode(aVal) : nullptr;
2256 if (mLogger) {
2257 mLogger->NoteWeakMapEntry((uint64_t)aMap, aKey ? aKey.unsafeAsInteger() : 0,
2258 (uint64_t)aKdelegate,
2259 aVal ? aVal.unsafeAsInteger() : 0);
2263 NS_IMETHODIMP_(void)
2264 CCGraphBuilder::NoteWeakMapping(JSObject* aKey, nsISupports* aVal,
2265 nsCycleCollectionParticipant* aValParticipant) {
2266 MOZ_ASSERT(aKey, "Don't call NoteWeakMapping with a null key");
2267 MOZ_ASSERT(aVal, "Don't call NoteWeakMapping with a null value");
2268 WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
2269 mapping->mMap = nullptr;
2270 mapping->mKey = AddWeakMapNode(aKey);
2271 mapping->mKeyDelegate = mapping->mKey;
2272 MOZ_ASSERT(js::UncheckedUnwrapWithoutExpose(aKey) == aKey);
2273 mapping->mVal = AddNode(aVal, aValParticipant);
2275 if (mLogger) {
2276 mLogger->NoteWeakMapEntry(0, (uint64_t)aKey, 0, (uint64_t)aVal);
2280 static bool AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
2281 nsCycleCollectionParticipant* aParti) {
2282 return aBuilder.AddPurpleRoot(aRoot, aParti);
2285 // MayHaveChild() will be false after a Traverse if the object does
2286 // not have any children the CC will visit.
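// A usage sketch (hypothetical object): ForgetSkippable, via the
// MayHaveChild() helper below, can drop a purple-buffer entry whose Traverse
// reports no children the CC would visit, since an object that holds no
// CC-visible children cannot be keeping a garbage cycle alive.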
2287 class ChildFinder : public nsCycleCollectionTraversalCallback {
2288 public:
2289 ChildFinder() : mMayHaveChild(false) {}
2291 // The logic of the Note*Child functions must mirror that of their
2292 // respective functions in CCGraphBuilder.
2293 NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild) override;
2294 NS_IMETHOD_(void)
2295 NoteNativeChild(void* aChild, nsCycleCollectionParticipant* aHelper) override;
2296 NS_IMETHOD_(void) NoteJSChild(JS::GCCellPtr aThing) override;
2298 NS_IMETHOD_(void)
2299 NoteWeakMapping(JSObject* aKey, nsISupports* aVal,
2300 nsCycleCollectionParticipant* aValParticipant) override {}
2302 NS_IMETHOD_(void)
2303 DescribeRefCountedNode(nsrefcnt aRefcount, const char* aObjname) override {}
2304 NS_IMETHOD_(void)
2305 DescribeGCedNode(bool aIsMarked, const char* aObjname,
2306 uint64_t aCompartmentAddress) override {}
2307 NS_IMETHOD_(void) NoteNextEdgeName(const char* aName) override {}
2308 bool MayHaveChild() { return mMayHaveChild; }
2310 private:
2311 bool mMayHaveChild;
2314 NS_IMETHODIMP_(void)
2315 ChildFinder::NoteXPCOMChild(nsISupports* aChild) {
2316 if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
2317 return;
2319 nsXPCOMCycleCollectionParticipant* cp;
2320 ToParticipant(aChild, &cp);
2321 if (cp && !cp->CanSkip(aChild, true)) {
2322 mMayHaveChild = true;
2326 NS_IMETHODIMP_(void)
2327 ChildFinder::NoteNativeChild(void* aChild,
2328 nsCycleCollectionParticipant* aHelper) {
2329 if (!aChild) {
2330 return;
2332 MOZ_ASSERT(aHelper, "Native child must have a participant");
2333 if (!aHelper->CanSkip(aChild, true)) {
2334 mMayHaveChild = true;
2338 NS_IMETHODIMP_(void)
2339 ChildFinder::NoteJSChild(JS::GCCellPtr aChild) {
2340 if (aChild && JS::GCThingIsMarkedGray(aChild)) {
2341 mMayHaveChild = true;
2345 static bool MayHaveChild(void* aObj, nsCycleCollectionParticipant* aCp) {
2346 ChildFinder cf;
2347 aCp->TraverseNativeAndJS(aObj, cf);
2348 return cf.MayHaveChild();
2351 // JSPurpleBuffer keeps references to GCThings which might affect the
2352 // next cycle collection. It is owned only by itself; during unlink its
2353 // self-reference is broken and the object ends up deleting itself.
2354 // If a GC happens before the CC, the references to GCThings and the
2355 // self-reference are removed.
2356 class JSPurpleBuffer {
2357 ~JSPurpleBuffer() {
2358 MOZ_ASSERT(mValues.IsEmpty());
2359 MOZ_ASSERT(mObjects.IsEmpty());
2362 public:
2363 explicit JSPurpleBuffer(RefPtr<JSPurpleBuffer>& aReferenceToThis)
2364 : mReferenceToThis(aReferenceToThis),
2365 mValues(kSegmentSize),
2366 mObjects(kSegmentSize) {
2367 mReferenceToThis = this;
2368 mozilla::HoldJSObjects(this);
2371 void Destroy() {
2372 RefPtr<JSPurpleBuffer> referenceToThis;
2373 mReferenceToThis.swap(referenceToThis);
2374 mValues.Clear();
2375 mObjects.Clear();
2376 mozilla::DropJSObjects(this);
2379 NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
2380 NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)
2382 RefPtr<JSPurpleBuffer>& mReferenceToThis;
2384 // These are raw pointers instead of Heap<T> because we only need Heap<T> for
2385 // pointers which may point into the nursery. The purple buffer never contains
2386 // pointers into the nursery, because nursery GC things can never be gray and only
2387 // gray things can be inserted into the purple buffer.
2388 static const size_t kSegmentSize = 512;
2389 SegmentedVector<JS::Value, kSegmentSize, InfallibleAllocPolicy> mValues;
2390 SegmentedVector<JSObject*, kSegmentSize, InfallibleAllocPolicy> mObjects;
2393 NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)
2395 NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
2396 tmp->Destroy();
2397 NS_IMPL_CYCLE_COLLECTION_UNLINK_END
2399 NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
2400 CycleCollectionNoteChild(cb, tmp, "self");
2401 NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END
2403 #define NS_TRACE_SEGMENTED_ARRAY(_field, _type) \
2405 for (auto iter = tmp->_field.Iter(); !iter.Done(); iter.Next()) { \
2406 js::gc::CallTraceCallbackOnNonHeap<_type, TraceCallbacks>( \
2407 &iter.Get(), aCallbacks, #_field, aClosure); \
2411 NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
2412 NS_TRACE_SEGMENTED_ARRAY(mValues, JS::Value)
2413 NS_TRACE_SEGMENTED_ARRAY(mObjects, JSObject*)
2414 NS_IMPL_CYCLE_COLLECTION_TRACE_END
2416 class SnowWhiteKiller : public TraceCallbacks {
2417 struct SnowWhiteObject {
2418 void* mPointer;
2419 nsCycleCollectionParticipant* mParticipant;
2420 nsCycleCollectingAutoRefCnt* mRefCnt;
2423 // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
2424 static const size_t kSegmentSize = sizeof(void*) * 1024;
2425 typedef SegmentedVector<SnowWhiteObject, kSegmentSize, InfallibleAllocPolicy>
2426 ObjectsVector;
2428 public:
2429 SnowWhiteKiller(nsCycleCollector* aCollector, js::SliceBudget* aBudget)
2430 : mCollector(aCollector),
2431 mObjects(kSegmentSize),
2432 mBudget(aBudget),
2433 mSawSnowWhiteObjects(false) {
2434 MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
2437 explicit SnowWhiteKiller(nsCycleCollector* aCollector)
2438 : SnowWhiteKiller(aCollector, nullptr) {}
2440 ~SnowWhiteKiller() {
2441 for (auto iter = mObjects.Iter(); !iter.Done(); iter.Next()) {
2442 SnowWhiteObject& o = iter.Get();
2443 MaybeKillObject(o);
2447 private:
2448 void MaybeKillObject(SnowWhiteObject& aObject) {
2449 if (!aObject.mRefCnt->get() && !aObject.mRefCnt->IsInPurpleBuffer()) {
2450 mCollector->RemoveObjectFromGraph(aObject.mPointer);
2451 aObject.mRefCnt->stabilizeForDeletion();
2453 JS::AutoEnterCycleCollection autocc(mCollector->Runtime()->Runtime());
2454 aObject.mParticipant->Trace(aObject.mPointer, *this, nullptr);
2456 aObject.mParticipant->DeleteCycleCollectable(aObject.mPointer);
2460 public:
2461 bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
2462 if (mBudget) {
2463 if (mBudget->isOverBudget()) {
2464 return false;
2466 mBudget->step();
2469 MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
2470 if (!aEntry->mRefCnt->get()) {
2471 mSawSnowWhiteObjects = true;
2472 void* o = aEntry->mObject;
2473 nsCycleCollectionParticipant* cp = aEntry->mParticipant;
2474 ToParticipant(o, &cp);
2475 SnowWhiteObject swo = {o, cp, aEntry->mRefCnt};
2476 if (!mBudget) {
2477 mObjects.InfallibleAppend(swo);
2479 aBuffer.Remove(aEntry);
2480 if (mBudget) {
2481 MaybeKillObject(swo);
2484 return true;
2487 bool HasSnowWhiteObjects() const { return !mObjects.IsEmpty(); }
2489 bool SawSnowWhiteObjects() const { return mSawSnowWhiteObjects; }
2491 virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
2492 void* aClosure) const override {
2493 const JS::Value& val = aValue->unbarrieredGet();
2494 if (val.isGCThing() && ValueIsGrayCCThing(val)) {
2495 MOZ_ASSERT(!js::gc::IsInsideNursery(val.toGCThing()));
2496 mCollector->GetJSPurpleBuffer()->mValues.InfallibleAppend(val);
2500 virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
2501 void* aClosure) const override {}
2503 void AppendJSObjectToPurpleBuffer(JSObject* obj) const {
2504 if (obj && JS::ObjectIsMarkedGray(obj)) {
2505 MOZ_ASSERT(JS::ObjectIsTenured(obj));
2506 mCollector->GetJSPurpleBuffer()->mObjects.InfallibleAppend(obj);
2510 virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
2511 void* aClosure) const override {
2512 AppendJSObjectToPurpleBuffer(aObject->unbarrieredGet());
2515 virtual void Trace(nsWrapperCache* aWrapperCache, const char* aName,
2516 void* aClosure) const override {
2517 AppendJSObjectToPurpleBuffer(aWrapperCache->GetWrapperPreserveColor());
2520 virtual void Trace(JS::TenuredHeap<JSObject*>* aObject, const char* aName,
2521 void* aClosure) const override {
2522 AppendJSObjectToPurpleBuffer(aObject->unbarrieredGetPtr());
2525 virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
2526 void* aClosure) const override {}
2528 virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
2529 void* aClosure) const override {}
2531 virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
2532 void* aClosure) const override {}
2534 private:
2535 RefPtr<nsCycleCollector> mCollector;
2536 ObjectsVector mObjects;
2537 js::SliceBudget* mBudget;
2538 bool mSawSnowWhiteObjects;
2541 class RemoveSkippableVisitor : public SnowWhiteKiller {
2542 public:
2543 RemoveSkippableVisitor(nsCycleCollector* aCollector, js::SliceBudget& aBudget,
2544 bool aRemoveChildlessNodes,
2545 bool aAsyncSnowWhiteFreeing,
2546 CC_ForgetSkippableCallback aCb)
2547 : SnowWhiteKiller(aCollector),
2548 mBudget(aBudget),
2549 mRemoveChildlessNodes(aRemoveChildlessNodes),
2550 mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing),
2551 mDispatchedDeferredDeletion(false),
2552 mCallback(aCb) {}
2554 ~RemoveSkippableVisitor() {
2555 // Note, we must call the callback before SnowWhiteKiller calls
2556 // DeleteCycleCollectable!
2557 if (mCallback) {
2558 mCallback();
2560 if (HasSnowWhiteObjects()) {
2561 // Effectively a continuation.
2562 nsCycleCollector_dispatchDeferredDeletion(true);
2566 bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
2567 if (mBudget.isOverBudget()) {
2568 return false;
2571 // CanSkip calls can be a bit slow, so increase the likelihood that
2572 // isOverBudget actually checks whether we're over the time budget.
2573 mBudget.step(5);
2574 MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
2575 if (!aEntry->mRefCnt->get()) {
2576 if (!mAsyncSnowWhiteFreeing) {
2577 SnowWhiteKiller::Visit(aBuffer, aEntry);
2578 } else if (!mDispatchedDeferredDeletion) {
2579 mDispatchedDeferredDeletion = true;
2580 nsCycleCollector_dispatchDeferredDeletion(false);
2582 return true;
2584 void* o = aEntry->mObject;
2585 nsCycleCollectionParticipant* cp = aEntry->mParticipant;
2586 ToParticipant(o, &cp);
2587 if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
2588 (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {
2589 return true;
2591 aBuffer.Remove(aEntry);
2592 return true;
2595 private:
2596 js::SliceBudget& mBudget;
2597 bool mRemoveChildlessNodes;
2598 bool mAsyncSnowWhiteFreeing;
2599 bool mDispatchedDeferredDeletion;
2600 CC_ForgetSkippableCallback mCallback;
2603 void nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
2604 js::SliceBudget& aBudget,
2605 bool aRemoveChildlessNodes,
2606 bool aAsyncSnowWhiteFreeing,
2607 CC_ForgetSkippableCallback aCb) {
2608 RemoveSkippableVisitor visitor(aCollector, aBudget, aRemoveChildlessNodes,
2609 aAsyncSnowWhiteFreeing, aCb);
2610 VisitEntries(visitor);
2613 bool nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer) {
2614 CheckThreadSafety();
2616 if (mFreeingSnowWhite) {
2617 return false;
2620 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_FreeSnowWhite);
2622 AutoRestore<bool> ar(mFreeingSnowWhite);
2623 mFreeingSnowWhite = true;
2625 bool hadSnowWhiteObjects = false;
2626 do {
2627 SnowWhiteKiller visitor(this);
2628 mPurpleBuf.VisitEntries(visitor);
2629 hadSnowWhiteObjects = hadSnowWhiteObjects || visitor.HasSnowWhiteObjects();
2630 if (!visitor.HasSnowWhiteObjects()) {
2631 break;
2633 } while (aUntilNoSWInPurpleBuffer);
2634 return hadSnowWhiteObjects;
2637 bool nsCycleCollector::FreeSnowWhiteWithBudget(js::SliceBudget& aBudget) {
2638 CheckThreadSafety();
2640 if (mFreeingSnowWhite) {
2641 return false;
2644 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_FreeSnowWhite);
2645 AutoRestore<bool> ar(mFreeingSnowWhite);
2646 mFreeingSnowWhite = true;
2648 SnowWhiteKiller visitor(this, &aBudget);
2649 mPurpleBuf.VisitEntries(visitor);
2650 return visitor.SawSnowWhiteObjects();
2654 void nsCycleCollector::ForgetSkippable(js::SliceBudget& aBudget,
2655 bool aRemoveChildlessNodes,
2656 bool aAsyncSnowWhiteFreeing) {
2657 CheckThreadSafety();
2659 if (mFreeingSnowWhite) {
2660 return;
2663 mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
2664 if (NS_IsMainThread()) {
2665 marker.emplace("nsCycleCollector::ForgetSkippable",
2666 MarkerStackRequest::NO_STACK);
2669 // If we remove things from the purple buffer during graph building, we may
2670 // lose track of an object that was mutated during graph building.
2671 MOZ_ASSERT(IsIdle());
2673 if (mCCJSRuntime) {
2674 mCCJSRuntime->PrepareForForgetSkippable();
2676 MOZ_ASSERT(
2677 !mScanInProgress,
2678 "Don't forget skippable or free snow-white while scan is in progress.");
2679 mPurpleBuf.RemoveSkippable(this, aBudget, aRemoveChildlessNodes,
2680 aAsyncSnowWhiteFreeing, mForgetSkippableCB);
2683 MOZ_NEVER_INLINE void nsCycleCollector::MarkRoots(SliceBudget& aBudget) {
2684 JS::AutoAssertNoGC nogc;
2685 TimeLog timeLog;
2686 AutoRestore<bool> ar(mScanInProgress);
2687 MOZ_RELEASE_ASSERT(!mScanInProgress);
2688 mScanInProgress = true;
2689 MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);
2691 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_BuildGraph);
2692 JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());
2693 bool doneBuilding = mBuilder->BuildGraph(aBudget);
2695 if (!doneBuilding) {
2696 timeLog.Checkpoint("MarkRoots()");
2697 return;
2700 mBuilder = nullptr;
2701 mIncrementalPhase = ScanAndCollectWhitePhase;
2702 timeLog.Checkpoint("MarkRoots()");
2705 ////////////////////////////////////////////////////////////////////////
2706 // Bacon & Rajan's |ScanRoots| routine.
2707 ////////////////////////////////////////////////////////////////////////
2709 struct ScanBlackVisitor {
2710 ScanBlackVisitor(uint32_t& aWhiteNodeCount, bool& aFailed)
2711 : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed) {}
2713 bool ShouldVisitNode(PtrInfo const* aPi) { return aPi->mColor != black; }
2715 MOZ_NEVER_INLINE void VisitNode(PtrInfo* aPi) {
2716 if (aPi->mColor == white) {
2717 --mWhiteNodeCount;
2719 aPi->mColor = black;
2722 void Failed() { mFailed = true; }
2724 private:
2725 uint32_t& mWhiteNodeCount;
2726 bool& mFailed;
2729 static void FloodBlackNode(uint32_t& aWhiteNodeCount, bool& aFailed,
2730 PtrInfo* aPi) {
2731 GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(aWhiteNodeCount, aFailed))
2732 .Walk(aPi);
2733 MOZ_ASSERT(aPi->mColor == black || !aPi->WasTraversed(),
2734 "FloodBlackNode should make aPi black");
2737 // Iterate over the WeakMaps. If we mark anything while iterating
2738 // over the WeakMaps, we must iterate over all of the WeakMaps again.
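// An illustrative restatement of the rules applied below: for a WeakMapping
// entry (mMap, mKey, mKeyDelegate, mVal),
//
//   if the map is black, the key delegate is black, and the key is not
//     black, flood the key black;
//   if the map is black, the key is black, and the value is not black,
//     flood the value black.
//
// Flooding one entry black can satisfy the premise of another entry, hence
// the repeated passes.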
2739 void nsCycleCollector::ScanWeakMaps() {
2740 bool anyChanged;
2741 bool failed = false;
2742 do {
2743 anyChanged = false;
2744 for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
2745 WeakMapping* wm = &mGraph.mWeakMaps[i];
2747 // If any of these are null, the original object was marked black.
2748 uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
2749 uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
2750 uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
2751 uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;
2753 MOZ_ASSERT(mColor != grey, "Uncolored weak map");
2754 MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
2755 MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
2756 MOZ_ASSERT(vColor != grey, "Uncolored weak map value");
2758 if (mColor == black && kColor != black && kdColor == black) {
2759 FloodBlackNode(mWhiteNodeCount, failed, wm->mKey);
2760 anyChanged = true;
2763 if (mColor == black && kColor == black && vColor != black) {
2764 FloodBlackNode(mWhiteNodeCount, failed, wm->mVal);
2765 anyChanged = true;
2768 } while (anyChanged);
2770 if (failed) {
2771 MOZ_ASSERT(false, "Ran out of memory in ScanWeakMaps");
2772 CC_TELEMETRY(_OOM, true);
2776 // Flood black from any objects in the purple buffer that are in the CC graph.
2777 class PurpleScanBlackVisitor {
2778 public:
2779 PurpleScanBlackVisitor(CCGraph& aGraph, nsCycleCollectorLogger* aLogger,
2780 uint32_t& aCount, bool& aFailed)
2781 : mGraph(aGraph), mLogger(aLogger), mCount(aCount), mFailed(aFailed) {}
2783 bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
2784 MOZ_ASSERT(aEntry->mObject,
2785 "Entries with null mObject shouldn't be in the purple buffer.");
2786 MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
2787 "Snow-white objects shouldn't be in the purple buffer.");
2789 void* obj = aEntry->mObject;
2791 MOZ_ASSERT(
2792 aEntry->mParticipant ||
2793 CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj)) == obj,
2794 "Suspect nsISupports pointer must be canonical");
2796 PtrInfo* pi = mGraph.FindNode(obj);
2797 if (!pi) {
2798 return true;
2800 MOZ_ASSERT(pi->mParticipant,
2801 "No dead objects should be in the purple buffer.");
2802 if (MOZ_UNLIKELY(mLogger)) {
2803 mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
2805 if (pi->mColor == black) {
2806 return true;
2808 FloodBlackNode(mCount, mFailed, pi);
2809 return true;
2812 private:
2813 CCGraph& mGraph;
2814 RefPtr<nsCycleCollectorLogger> mLogger;
2815 uint32_t& mCount;
2816 bool& mFailed;
2819 // Objects that have been stored somewhere since the start of incremental graph
2820 // building must be treated as live for this cycle collection, because we may
2821 // not have accurate information about who holds references to them.
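// A hypothetical example: object A is traversed early in an incremental CC
// while it has refcount 1. Before the next slice, the mutator AddRefs A and
// stores the new reference in an object the CC has already traversed. The
// graph now undercounts A's owners, so without this pass A could be scanned
// as white and wrongly unlinked while still reachable.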
2822 void nsCycleCollector::ScanIncrementalRoots() {
2823 TimeLog timeLog;
2825 // Reference counted objects:
2826 // We cleared the purple buffer at the start of the current ICC, so if a
2827 // refcounted object is purple, it may have been AddRef'd during the current
2828 // ICC. (It may also have only been released.) If that is the case, we cannot
2829 // be sure that the set of things pointing to the object in the CC graph
2830 // is accurate. Therefore, for safety, we treat any purple objects as being
2831 // live during the current CC. We don't remove anything from the purple
2832 // buffer here, so these objects will be suspected and freed in the next CC
2833 // if they are garbage.
2834 bool failed = false;
2835 PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mLogger,
2836 mWhiteNodeCount, failed);
2837 mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
2838 timeLog.Checkpoint("ScanIncrementalRoots::fix purple");
2840 bool hasJSRuntime = !!mCCJSRuntime;
2841 nsCycleCollectionParticipant* jsParticipant =
2842 hasJSRuntime ? mCCJSRuntime->GCThingParticipant() : nullptr;
2843 nsCycleCollectionParticipant* zoneParticipant =
2844 hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;
2845 bool hasLogger = !!mLogger;
2847 NodePool::Enumerator etor(mGraph.mNodes);
2848 while (!etor.IsDone()) {
2849 PtrInfo* pi = etor.GetNext();
2851 // As an optimization, if an object has already been determined to be live,
2852 // don't consider it further. We can't do this if there is a listener,
2853 // because the listener wants to know the complete set of incremental roots.
2854 if (pi->mColor == black && MOZ_LIKELY(!hasLogger)) {
2855 continue;
2858 // Garbage collected objects:
2859 // If a GCed object was added to the graph with a refcount of zero, and is
2860 // now marked black by the GC, it was probably gray before and was exposed
2861 // to active JS, which may have stored it somewhere, so it needs to be
2862 // treated as live.
2863 if (pi->IsGrayJS() && MOZ_LIKELY(hasJSRuntime)) {
2864 // If the object is still marked gray by the GC, nothing could have gotten
2865 // hold of it, so it isn't an incremental root.
2866 if (pi->mParticipant == jsParticipant) {
2867 JS::GCCellPtr ptr(pi->mPointer, JS::GCThingTraceKind(pi->mPointer));
2868 if (GCThingIsGrayCCThing(ptr)) {
2869 continue;
2871 } else if (pi->mParticipant == zoneParticipant) {
2872 JS::Zone* zone = static_cast<JS::Zone*>(pi->mPointer);
2873 if (js::ZoneGlobalsAreAllGray(zone)) {
2874 continue;
2876 } else {
2877 MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
2879 } else if (!pi->mParticipant && pi->WasTraversed()) {
2880 // Dead traversed refcounted objects:
2881 // If the object was traversed, it must have been alive at the start of
2882 // the CC, and thus had a positive refcount. It is dead now, so its
2883 // refcount must have decreased at some point during the CC. Therefore,
2884 // it would be in the purple buffer if it wasn't dead, so treat it as an
2885 // incremental root.
2887 // This should not cause leaks, because as the object died it should have
2888 // released everything it held onto; those objects get added to the purple
2889 // buffer and will be considered in the next CC.
2890 } else {
2891 continue;
2894 // At this point, pi must be an incremental root.
2896 // If there's a listener, tell it about this root. We don't bother with the
2897 // optimization of skipping the Walk() if pi is black: it will just return
2898 // without doing anything and there's no need to make this case faster.
2899 if (MOZ_UNLIKELY(hasLogger) && pi->mPointer) {
2900 // Dead objects aren't logged. See bug 1031370.
2901 mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
2904 FloodBlackNode(mWhiteNodeCount, failed, pi);
2907 timeLog.Checkpoint("ScanIncrementalRoots::fix nodes");
2909 if (failed) {
2910 NS_ASSERTION(false, "Ran out of memory in ScanIncrementalRoots");
2911 CC_TELEMETRY(_OOM, true);
2915 // Mark nodes white and make sure their refcounts are ok.
2916 // No nodes are marked black during this pass to ensure that refcount
2917 // checking is run on all nodes not marked black by ScanIncrementalRoots.
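// For example (hypothetical node): a refcounted node with mRefCount == 2 and
// mInternalRefs == 2 has every reference to it accounted for within the
// graph, so it is marked white below. With mInternalRefs == 1, an external
// reference must exist, so the node stays grey and will be flooded black by
// ScanBlackNodes.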
2918 void nsCycleCollector::ScanWhiteNodes(bool aFullySynchGraphBuild) {
2919 NodePool::Enumerator nodeEnum(mGraph.mNodes);
2920 while (!nodeEnum.IsDone()) {
2921 PtrInfo* pi = nodeEnum.GetNext();
2922 if (pi->mColor == black) {
2923 // Incremental roots can be in a nonsensical state, so don't
2924 // check them. This will miss checking nodes that are merely
2925 // reachable from incremental roots.
2926 MOZ_ASSERT(!aFullySynchGraphBuild,
2927 "In a synch CC, no nodes should be marked black early on.");
2928 continue;
2930 MOZ_ASSERT(pi->mColor == grey);
2932 if (!pi->WasTraversed()) {
2933 // This node was deleted before it was traversed, so there's no reason
2934 // to look at it.
2935 MOZ_ASSERT(!pi->mParticipant,
2936 "Live nodes should all have been traversed");
2937 continue;
2940 if (pi->mInternalRefs == pi->mRefCount || pi->IsGrayJS()) {
2941 pi->mColor = white;
2942 ++mWhiteNodeCount;
2943 continue;
2946 pi->AnnotatedReleaseAssert(
2947 pi->mInternalRefs <= pi->mRefCount,
2948 "More references to an object than its refcount");
2950 // This node will get marked black in the next pass.
2954 // Any remaining grey nodes that haven't already been deleted must be alive,
2955 // so mark them and their children black. Any nodes that are black must have
2956 // already had their children marked black, so there's no need to look at them
2957 // again. This pass may turn some white nodes to black.
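// For instance, a grey node holding the only external reference into an
// apparent cycle is live; flooding from it here re-marks that cycle's
// members black, rescuing nodes that ScanWhiteNodes marked white.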
2958 void nsCycleCollector::ScanBlackNodes() {
2959 bool failed = false;
2960 NodePool::Enumerator nodeEnum(mGraph.mNodes);
2961 while (!nodeEnum.IsDone()) {
2962 PtrInfo* pi = nodeEnum.GetNext();
2963 if (pi->mColor == grey && pi->WasTraversed()) {
2964 FloodBlackNode(mWhiteNodeCount, failed, pi);
2968 if (failed) {
2969 NS_ASSERTION(false, "Ran out of memory in ScanBlackNodes");
2970 CC_TELEMETRY(_OOM, true);
2974 void nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild) {
2975 JS::AutoAssertNoGC nogc;
2976 AutoRestore<bool> ar(mScanInProgress);
2977 MOZ_RELEASE_ASSERT(!mScanInProgress);
2978 mScanInProgress = true;
2979 mWhiteNodeCount = 0;
2980 MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);
2982 JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());
2984 if (!aFullySynchGraphBuild) {
2985 ScanIncrementalRoots();
2988 TimeLog timeLog;
2989 ScanWhiteNodes(aFullySynchGraphBuild);
2990 timeLog.Checkpoint("ScanRoots::ScanWhiteNodes");
2992 ScanBlackNodes();
2993 timeLog.Checkpoint("ScanRoots::ScanBlackNodes");
2995 // Scanning weak maps must be done last.
2996 ScanWeakMaps();
2997 timeLog.Checkpoint("ScanRoots::ScanWeakMaps");
2999 if (mLogger) {
3000 mLogger->BeginResults();
3002 NodePool::Enumerator etor(mGraph.mNodes);
3003 while (!etor.IsDone()) {
3004 PtrInfo* pi = etor.GetNext();
3005 if (!pi->WasTraversed()) {
3006 continue;
3008 switch (pi->mColor) {
3009 case black:
3010 if (!pi->IsGrayJS() && !pi->IsBlackJS() &&
3011 pi->mInternalRefs != pi->mRefCount) {
3012 mLogger->DescribeRoot((uint64_t)pi->mPointer, pi->mInternalRefs);
3014 break;
3015 case white:
3016 mLogger->DescribeGarbage((uint64_t)pi->mPointer);
3017 break;
3018 case grey:
3019 MOZ_ASSERT(false, "All traversed objects should be black or white");
3020 break;
3024 mLogger->End();
3025 mLogger = nullptr;
3026 timeLog.Checkpoint("ScanRoots::listener");
3030 ////////////////////////////////////////////////////////////////////////
3031 // Bacon & Rajan's |CollectWhite| routine, somewhat modified.
3032 ////////////////////////////////////////////////////////////////////////
3034 bool nsCycleCollector::CollectWhite() {
3035 // Explanation of "somewhat modified": we have no way to collect the
3036 // set of whites "all at once"; instead, we ask each of them to drop
3037 // their outgoing links and assume this will cause the garbage cycle
3038 // to *mostly* self-destruct (except for the reference we continue
3039 // to hold).
3041 // To do this "safely" we must make sure that the white nodes we're
3042 // operating on are stable for the duration of our operation. So we
3043 // make 3 sets of calls to language runtimes:
3045 // - Root(whites), which should pin the whites in memory.
3046 // - Unlink(whites), which drops outgoing links on each white.
3047 // - Unroot(whites), which returns the whites to normal GC.
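// As an illustrative sketch, for a two-node garbage cycle A <-> B the three
// phases amount to:
//
//   Root(A); Root(B);     // pin both nodes so Unlink cannot free them early
//   Unlink(A); Unlink(B); // drop the links A->B and B->A
//   Unroot(A); Unroot(B); // release the pins; A and B now self-destruct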
3049 // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
3050 static const size_t kSegmentSize = sizeof(void*) * 1024;
3051 SegmentedVector<PtrInfo*, kSegmentSize, InfallibleAllocPolicy> whiteNodes(
3052 kSegmentSize);
3053 TimeLog timeLog;
3055 MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);
3057 uint32_t numWhiteNodes = 0;
3058 uint32_t numWhiteGCed = 0;
3059 uint32_t numWhiteJSZones = 0;
3062 JS::AutoAssertNoGC nogc;
3063 bool hasJSRuntime = !!mCCJSRuntime;
3064 nsCycleCollectionParticipant* zoneParticipant =
3065 hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;
3067 NodePool::Enumerator etor(mGraph.mNodes);
3068 while (!etor.IsDone()) {
3069 PtrInfo* pinfo = etor.GetNext();
3070 if (pinfo->mColor == white && pinfo->mParticipant) {
3071 if (pinfo->IsGrayJS()) {
3072 MOZ_ASSERT(mCCJSRuntime);
3073 ++numWhiteGCed;
3074 JS::Zone* zone;
3075 if (MOZ_UNLIKELY(pinfo->mParticipant == zoneParticipant)) {
3076 ++numWhiteJSZones;
3077 zone = static_cast<JS::Zone*>(pinfo->mPointer);
3078 } else {
3079 JS::GCCellPtr ptr(pinfo->mPointer,
3080 JS::GCThingTraceKind(pinfo->mPointer));
3081 zone = JS::GetTenuredGCThingZone(ptr);
3083 mCCJSRuntime->AddZoneWaitingForGC(zone);
3084 } else {
3085 whiteNodes.InfallibleAppend(pinfo);
3086 pinfo->mParticipant->Root(pinfo->mPointer);
3087 ++numWhiteNodes;
3093 mResults.mFreedRefCounted += numWhiteNodes;
3094 mResults.mFreedGCed += numWhiteGCed;
3095 mResults.mFreedJSZones += numWhiteJSZones;
3097 timeLog.Checkpoint("CollectWhite::Root");
3099 if (mBeforeUnlinkCB) {
3100 mBeforeUnlinkCB();
3101 timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");
3104 // Unlink() can trigger a GC, so do not touch any JS or anything
3105 // else not in whiteNodes after here.
3107 for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
3108 PtrInfo* pinfo = iter.Get();
3109 MOZ_ASSERT(pinfo->mParticipant,
3110 "Unlink shouldn't see objects removed from graph.");
3111 pinfo->mParticipant->Unlink(pinfo->mPointer);
3112 #ifdef DEBUG
3113 if (mCCJSRuntime) {
3114 mCCJSRuntime->AssertNoObjectsToTrace(pinfo->mPointer);
3116 #endif
3118 timeLog.Checkpoint("CollectWhite::Unlink");
3120 JS::AutoAssertNoGC nogc;
3121 for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
3122 PtrInfo* pinfo = iter.Get();
3123 MOZ_ASSERT(pinfo->mParticipant,
3124 "Unroot shouldn't see objects removed from graph.");
3125 pinfo->mParticipant->Unroot(pinfo->mPointer);
3127 timeLog.Checkpoint("CollectWhite::Unroot");
3129 nsCycleCollector_dispatchDeferredDeletion(false, true);
3130 timeLog.Checkpoint("CollectWhite::dispatchDeferredDeletion");
3132 mIncrementalPhase = CleanupPhase;
3134 return numWhiteNodes > 0 || numWhiteGCed > 0 || numWhiteJSZones > 0;
3137 ////////////////////////
3138 // Memory reporting
3139 ////////////////////////
3141 MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)
3143 NS_IMETHODIMP
3144 nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
3145 nsISupports* aData, bool aAnonymize) {
3146 size_t objectSize, graphSize, purpleBufferSize;
3147 SizeOfIncludingThis(CycleCollectorMallocSizeOf, &objectSize, &graphSize,
3148 &purpleBufferSize);
3150 if (objectSize > 0) {
3151 MOZ_COLLECT_REPORT("explicit/cycle-collector/collector-object", KIND_HEAP,
3152 UNITS_BYTES, objectSize,
3153 "Memory used for the cycle collector object itself.");
3156 if (graphSize > 0) {
3157 MOZ_COLLECT_REPORT(
3158 "explicit/cycle-collector/graph", KIND_HEAP, UNITS_BYTES, graphSize,
3159 "Memory used for the cycle collector's graph. This should be zero when "
3160 "the collector is idle.");
3163 if (purpleBufferSize > 0) {
3164 MOZ_COLLECT_REPORT("explicit/cycle-collector/purple-buffer", KIND_HEAP,
3165 UNITS_BYTES, purpleBufferSize,
3166 "Memory used for the cycle collector's purple buffer.");
3169 return NS_OK;
3172 ////////////////////////////////////////////////////////////////////////
3173 // Collector implementation
3174 ////////////////////////////////////////////////////////////////////////
3176 nsCycleCollector::nsCycleCollector()
3177 : mActivelyCollecting(false),
3178 mFreeingSnowWhite(false),
3179 mScanInProgress(false),
3180 mCCJSRuntime(nullptr),
3181 mIncrementalPhase(IdlePhase),
3182 #ifdef DEBUG
3183 mEventTarget(GetCurrentSerialEventTarget()),
3184 #endif
3185 mWhiteNodeCount(0),
3186 mBeforeUnlinkCB(nullptr),
3187 mForgetSkippableCB(nullptr),
3188 mUnmergedNeeded(0),
3189 mMergedInARow(0) {
3192 nsCycleCollector::~nsCycleCollector() {
3193 MOZ_ASSERT(!mJSPurpleBuffer, "Didn't call JSPurpleBuffer::Destroy?");
3195 UnregisterWeakMemoryReporter(this);
3198 void nsCycleCollector::SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime) {
3199 MOZ_RELEASE_ASSERT(
3200 !mCCJSRuntime,
3201 "Multiple registrations of CycleCollectedJSRuntime in cycle collector");
3202 mCCJSRuntime = aCCRuntime;
3204 if (!NS_IsMainThread()) {
3205 return;
3208 // We can't register as a reporter in nsCycleCollector() because that runs
3209 // before the memory reporter manager is initialized. So we do it here
3210 // instead.
3211 RegisterWeakMemoryReporter(this);
3214 void nsCycleCollector::ClearCCJSRuntime() {
3215 MOZ_RELEASE_ASSERT(mCCJSRuntime,
3216 "Clearing CycleCollectedJSRuntime in cycle collector "
3217 "before a runtime was registered");
3218 mCCJSRuntime = nullptr;
3221 #ifdef DEBUG
3222 static bool HasParticipant(void* aPtr, nsCycleCollectionParticipant* aParti) {
3223 if (aParti) {
3224 return true;
3227 nsXPCOMCycleCollectionParticipant* xcp;
3228 ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
3229 return xcp != nullptr;
3231 #endif
3233 MOZ_ALWAYS_INLINE void nsCycleCollector::Suspect(
3234 void* aPtr, nsCycleCollectionParticipant* aParti,
3235 nsCycleCollectingAutoRefCnt* aRefCnt) {
3236 CheckThreadSafety();
3238 // Don't call AddRef or Release on a CCed object in a Traverse() method.
3239 MOZ_ASSERT(!mScanInProgress,
3240 "Attempted to call Suspect() while a scan was in progress");
3242 if (MOZ_UNLIKELY(mScanInProgress)) {
3243 return;
3246 MOZ_ASSERT(aPtr, "Don't suspect null pointers");
3248 MOZ_ASSERT(HasParticipant(aPtr, aParti),
3249 "Suspected nsISupports pointer must QI to "
3250 "nsXPCOMCycleCollectionParticipant");
3252 MOZ_ASSERT(aParti || CanonicalizeXPCOMParticipant(
3253 static_cast<nsISupports*>(aPtr)) == aPtr,
3254 "Suspect nsISupports pointer must be canonical");
3256 mPurpleBuf.Put(aPtr, aParti, aRefCnt);
3259 void nsCycleCollector::SuspectNurseryEntries() {
3260 MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
3261 while (gNurseryPurpleBufferEntryCount) {
3262 NurseryPurpleBufferEntry& entry =
3263 gNurseryPurpleBufferEntry[--gNurseryPurpleBufferEntryCount];
3264 mPurpleBuf.Put(entry.mPtr, entry.mParticipant, entry.mRefCnt);
3268 void nsCycleCollector::CheckThreadSafety() {
3269 #ifdef DEBUG
3270 MOZ_ASSERT(mEventTarget->IsOnCurrentThread());
3271 #endif
3274 // The cycle collector uses the mark bitmap to discover what JS objects are
3275 // reachable only from XPConnect roots that might participate in cycles. We ask
3276 // the JS runtime whether we need to force a GC before this CC. It should only
3277 // be true when UnmarkGray has run out of stack. We also force GCs on shutdown
3278 // to collect cycles involving both DOM and JS, and in WantAllTraces CCs to
3279 // prevent hijinks from ForgetSkippable and compartmental GCs.
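// As a concrete (hypothetical) case: a DOM node and its JS wrapper that
// reference each other form a cycle spanning both heaps. The CC can only
// find it if the wrapper's gray mark is accurate, so if the gray bits are
// invalid we force the GC below to recompute them before building the graph.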
3280 void nsCycleCollector::FixGrayBits(bool aIsShutdown, TimeLog& aTimeLog) {
3281 CheckThreadSafety();
3283 if (!mCCJSRuntime) {
3284 return;
3287 // If we're not forcing a GC anyway due to shutdown or an all-traces CC,
3288 // check to see if we still need to do one to fix the gray bits.
3289 if (!(aIsShutdown || (mLogger && mLogger->IsAllTraces()))) {
3290 mCCJSRuntime->FixWeakMappingGrayBits();
3291 aTimeLog.Checkpoint("FixWeakMappingGrayBits");
3293 bool needGC = !mCCJSRuntime->AreGCGrayBitsValid();
3294 // Only do a telemetry ping for non-shutdown CCs.
3295 CC_TELEMETRY(_NEED_GC, needGC);
3296 if (!needGC) {
3297 return;
3301 mResults.mForcedGC = true;
3303 uint32_t count = 0;
3304 do {
3305 if (aIsShutdown) {
3306 mCCJSRuntime->GarbageCollect(JS::GCOptions::Shutdown,
3307 JS::GCReason::SHUTDOWN_CC);
3308 } else {
3309 mCCJSRuntime->GarbageCollect(JS::GCOptions::Normal,
3310 JS::GCReason::CC_FORCED);
3313 mCCJSRuntime->FixWeakMappingGrayBits();
3315 // It's possible that FixWeakMappingGrayBits will hit OOM when unmarking
3316 // gray and we will have to go round again. The second time there should not
3317 // be any weak mappings to fix up so the loop body should run at most twice.
3318 MOZ_RELEASE_ASSERT(count < 2);
3319 count++;
3320 } while (!mCCJSRuntime->AreGCGrayBitsValid());
3322 aTimeLog.Checkpoint("FixGrayBits");
3325 bool nsCycleCollector::IsIncrementalGCInProgress() {
3326 return mCCJSRuntime && JS::IsIncrementalGCInProgress(mCCJSRuntime->Runtime());
3329 void nsCycleCollector::FinishAnyIncrementalGCInProgress() {
3330 if (IsIncrementalGCInProgress()) {
3331 NS_WARNING("Finishing incremental GC in progress during CC");
3332 JSContext* cx = CycleCollectedJSContext::Get()->Context();
3333 JS::PrepareForIncrementalGC(cx);
3334 JS::FinishIncrementalGC(cx, JS::GCReason::CC_FORCED);
3338 void nsCycleCollector::CleanupAfterCollection() {
3339 TimeLog timeLog;
3340 MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
3341 MOZ_RELEASE_ASSERT(!mScanInProgress);
3342 mGraph.Clear();
3343 timeLog.Checkpoint("CleanupAfterCollection::mGraph.Clear()");
3345 uint32_t interval =
3346 (uint32_t)((TimeStamp::Now() - mCollectionStart).ToMilliseconds());
3347 #ifdef COLLECT_TIME_DEBUG
3348 printf("cc: total cycle collector time was %ums in %u slices\n", interval,
3349 mResults.mNumSlices);
3350 printf(
3351 "cc: visited %u ref counted and %u GCed objects, freed %d ref counted "
3352 "and %d GCed objects",
3353 mResults.mVisitedRefCounted, mResults.mVisitedGCed,
3354 mResults.mFreedRefCounted, mResults.mFreedGCed);
3355 uint32_t numVisited = mResults.mVisitedRefCounted + mResults.mVisitedGCed;
3356 if (numVisited > 1000) {
3357 uint32_t numFreed = mResults.mFreedRefCounted + mResults.mFreedGCed;
3358 printf(" (%d%%)", 100 * numFreed / numVisited);
3360 printf(".\ncc: \n");
3361 #endif
3363 CC_TELEMETRY(, interval);
3364 CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
3365 CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
3366 CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);
3367 timeLog.Checkpoint("CleanupAfterCollection::telemetry");
3369 if (mCCJSRuntime) {
3370 mCCJSRuntime->FinalizeDeferredThings(
3371 mResults.mAnyManual ? CycleCollectedJSContext::FinalizeNow
3372 : CycleCollectedJSContext::FinalizeIncrementally);
3373 mCCJSRuntime->EndCycleCollectionCallback(mResults);
3374 timeLog.Checkpoint("CleanupAfterCollection::EndCycleCollectionCallback()");
3376 mIncrementalPhase = IdlePhase;
3379 void nsCycleCollector::ShutdownCollect() {
3380 FinishAnyIncrementalGCInProgress();
3381 CycleCollectedJSContext* ccJSContext = CycleCollectedJSContext::Get();
3382 JS::ShutdownAsyncTasks(ccJSContext->Context());
3384 SliceBudget unlimitedBudget = SliceBudget::unlimited();
3385 uint32_t i;
3386 bool collectedAny = true;
3387 for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS && collectedAny; ++i) {
3388 collectedAny = Collect(CCReason::SHUTDOWN, ccIsManual::CCIsManual,
3389 unlimitedBudget, nullptr);
3390 // Run any remaining tasks that may have been enqueued via RunInStableState
3391 // or DispatchToMicroTask. These can keep CCed objects alive, and we want to
3392 // clear them out before we run the CC again or finish shutting down.
3393 ccJSContext->PerformMicroTaskCheckPoint(true);
3394 ccJSContext->ProcessStableStateQueue();
3396 NS_WARNING_ASSERTION(i < NORMAL_SHUTDOWN_COLLECTIONS, "Extra shutdown CC");
3399 static void PrintPhase(const char* aPhase) {
3400 #ifdef DEBUG_PHASES
3401 printf("cc: begin %s on %s\n", aPhase,
3402 NS_IsMainThread() ? "mainthread" : "worker");
3403 #endif
3406 bool nsCycleCollector::Collect(CCReason aReason, ccIsManual aIsManual,
3407 SliceBudget& aBudget,
3408 nsICycleCollectorListener* aManualListener,
3409 bool aPreferShorterSlices) {
3410 AUTO_PROFILER_LABEL_RELEVANT_FOR_JS("Incremental CC", GCCC);
3412 CheckThreadSafety();
3414 // This can legitimately happen in a few cases. See bug 383651.
3415 if (mActivelyCollecting || mFreeingSnowWhite) {
3416 return false;
3418 mActivelyCollecting = true;
3420 MOZ_ASSERT(!IsIncrementalGCInProgress());
3422 mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
3423 if (NS_IsMainThread()) {
3424 marker.emplace("nsCycleCollector::Collect", MarkerStackRequest::NO_STACK);
3427 bool startedIdle = IsIdle();
3428 bool collectedAny = false;
3430 // If the CC started idle, it will call BeginCollection, which
3431 // will do FreeSnowWhite, so it doesn't need to be done here.
3432 if (!startedIdle) {
3433 TimeLog timeLog;
3434 FreeSnowWhite(true);
3435 timeLog.Checkpoint("Collect::FreeSnowWhite");
3438 if (aIsManual == ccIsManual::CCIsManual) {
3439 mResults.mAnyManual = true;
3442 ++mResults.mNumSlices;
3444 bool continueSlice = aBudget.isUnlimited() || !aPreferShorterSlices;
3445 do {
3446 switch (mIncrementalPhase) {
3447 case IdlePhase:
3448 PrintPhase("BeginCollection");
3449 BeginCollection(aReason, aIsManual, aManualListener);
3450 break;
3451 case GraphBuildingPhase:
3452 PrintPhase("MarkRoots");
3453 MarkRoots(aBudget);
3455 // Only continue this slice if we're running synchronously or the
3456 // next phase will probably be short, to reduce the max pause for this
3457 // collection.
3458 // (There's no need to check if we've finished graph building, because
3459 // if we haven't, we've already exceeded our budget, and will finish
3460 // this slice anyway.)
3461 continueSlice = aBudget.isUnlimited() ||
3462 (mResults.mNumSlices < 3 && !aPreferShorterSlices);
3463 break;
3464 case ScanAndCollectWhitePhase:
3465 // We do ScanRoots and CollectWhite in a single slice to ensure
3466 // that we won't unlink a live object if a weak reference is
3467 // promoted to a strong reference after ScanRoots has finished.
3468 // See bug 926533.
3470 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_ScanRoots);
3471 PrintPhase("ScanRoots");
3472 ScanRoots(startedIdle);
3475 AUTO_PROFILER_LABEL_CATEGORY_PAIR(GCCC_CollectWhite);
3476 PrintPhase("CollectWhite");
3477 collectedAny = CollectWhite();
3479 break;
3480 case CleanupPhase:
3481 PrintPhase("CleanupAfterCollection");
3482 CleanupAfterCollection();
3483 continueSlice = false;
3484 break;
3486 if (continueSlice) {
3487 aBudget.stepAndForceCheck();
3488 continueSlice = !aBudget.isOverBudget();
3490 } while (continueSlice);
3492 // Clear mActivelyCollecting here to ensure that a recursive call to
3493 // Collect() does something.
3494 mActivelyCollecting = false;
3496 if (aIsManual && !startedIdle) {
3497 // We were in the middle of an incremental CC (using its own listener).
3498 // Somebody has forced a CC, so after having finished out the current CC,
3499 // run the CC again using the new listener.
3500 MOZ_ASSERT(IsIdle());
3501 if (Collect(aReason, ccIsManual::CCIsManual, aBudget, aManualListener)) {
3502 collectedAny = true;
3506 MOZ_ASSERT_IF(aIsManual == CCIsManual, IsIdle());
3508 return collectedAny;
3511 // Any JS objects we have in the graph could die when we GC, but we
3512 // don't want to abandon the current CC, because the graph contains
3513 // information about purple roots. So we synchronously finish off
3514 // the current CC.
3515 void nsCycleCollector::PrepareForGarbageCollection() {
3516 if (IsIdle()) {
3517 MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
3518 MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
3519 if (mJSPurpleBuffer) {
3520 mJSPurpleBuffer->Destroy();
3522 return;
3525 FinishAnyCurrentCollection(CCReason::GC_WAITING);
3528 void nsCycleCollector::FinishAnyCurrentCollection(CCReason aReason) {
3529 if (IsIdle()) {
3530 return;
3533 SliceBudget unlimitedBudget = SliceBudget::unlimited();
3534 PrintPhase("FinishAnyCurrentCollection");
3535 // Use CCIsNotManual because we only want to finish the CC in progress.
3536 Collect(aReason, ccIsManual::CCIsNotManual, unlimitedBudget, nullptr);
3538 // It is only okay for Collect() to have failed to finish the
3539 // current CC if we're reentering the CC at some point past
3540 // graph building. We need to be past the point where the CC will
3541 // look at JS objects so that it is safe to GC.
3542 MOZ_ASSERT(IsIdle() || (mActivelyCollecting &&
3543 mIncrementalPhase != GraphBuildingPhase),
3544 "Reentered CC during graph building");
3547 // Don't merge too many times in a row, and do at least a minimum
3548 // number of unmerged CCs in a row.
3549 static const uint32_t kMinConsecutiveUnmerged = 3;
3550 static const uint32_t kMaxConsecutiveMerged = 3;
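// Example cadence with the constants above: after three consecutive merged
// CCs (mMergedInARow == kMaxConsecutiveMerged), the next three CCs run
// unmerged while mUnmergedNeeded counts down 3, 2, 1; only then may zone
// merging resume.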
3552 bool nsCycleCollector::ShouldMergeZones(ccIsManual aIsManual) {
3553 if (!mCCJSRuntime) {
3554 return false;
3557 MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
3558 MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);
3560 if (mMergedInARow == kMaxConsecutiveMerged) {
3561 MOZ_ASSERT(mUnmergedNeeded == 0);
3562 mUnmergedNeeded = kMinConsecutiveUnmerged;
3565 if (mUnmergedNeeded > 0) {
3566 mUnmergedNeeded--;
3567 mMergedInARow = 0;
3568 return false;
3571 if (aIsManual == CCIsNotManual && mCCJSRuntime->UsefulToMergeZones()) {
3572 mMergedInARow++;
3573 return true;
3574 } else {
3575 mMergedInARow = 0;
3576 return false;
void nsCycleCollector::BeginCollection(
    CCReason aReason, ccIsManual aIsManual,
    nsICycleCollectorListener* aManualListener) {
  TimeLog timeLog;
  MOZ_ASSERT(IsIdle());
  MOZ_RELEASE_ASSERT(!mScanInProgress);

  mCollectionStart = TimeStamp::Now();

  if (mCCJSRuntime) {
    mCCJSRuntime->BeginCycleCollectionCallback(aReason);
    timeLog.Checkpoint("BeginCycleCollectionCallback()");
  }

  bool isShutdown = (aReason == CCReason::SHUTDOWN);

  // Set up the listener for this CC.
  MOZ_ASSERT_IF(isShutdown, !aManualListener);
  MOZ_ASSERT(!mLogger, "Forgot to clear a previous listener?");

  if (aManualListener) {
    aManualListener->AsLogger(getter_AddRefs(mLogger));
  }

  aManualListener = nullptr;
  if (!mLogger && mParams.LogThisCC(isShutdown)) {
    mLogger = new nsCycleCollectorLogger();
    if (mParams.AllTracesThisCC(isShutdown)) {
      mLogger->SetAllTraces();
    }
  }

  // BeginCycleCollectionCallback() might have started an IGC, and we need
  // to finish it before we run FixGrayBits.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Pre-FixGrayBits finish IGC");

  FixGrayBits(isShutdown, timeLog);
  if (mCCJSRuntime) {
    mCCJSRuntime->CheckGrayBits();
  }

  FreeSnowWhite(true);
  timeLog.Checkpoint("BeginCollection FreeSnowWhite");

  if (mLogger && NS_FAILED(mLogger->Begin())) {
    mLogger = nullptr;
  }

  // FreeSnowWhite could potentially have started an IGC, which we need
  // to finish before we look at any JS roots.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Post-FreeSnowWhite finish IGC");

  // Set up the data structures for building the graph.
  JS::AutoAssertNoGC nogc;
  JS::AutoEnterCycleCollection autocc(mCCJSRuntime->Runtime());
  mGraph.Init();
  mResults.Init();
  mResults.mSuspectedAtCCStart = SuspectedCount();
  mResults.mAnyManual = aIsManual;
  bool mergeZones = ShouldMergeZones(aIsManual);
  mResults.mMergedZones = mergeZones;

  MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
  mBuilder = MakeUnique<CCGraphBuilder>(mGraph, mResults, mCCJSRuntime, mLogger,
                                        mergeZones);
  timeLog.Checkpoint("BeginCollection prepare graph builder");

  if (mCCJSRuntime) {
    mCCJSRuntime->TraverseRoots(*mBuilder);
    timeLog.Checkpoint("mCCJSRuntime->TraverseRoots()");
  }

  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mPurpleBuf.SelectPointers(*mBuilder);
  timeLog.Checkpoint("SelectPointers()");

  mBuilder->DoneAddingRoots();
  mIncrementalPhase = GraphBuildingPhase;
}
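
// For orientation, a rough summary of what follows a successful
// BeginCollection() (the authoritative sequencing lives in Collect() and the
// ccPhase state machine earlier in this file): the collector sits in
// GraphBuildingPhase while later slices finish building the graph, moves to
// ScanAndCollectWhitePhase to mark nodes black or white and unlink the white
// ones, and passes through CleanupPhase before returning to idle.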
uint32_t nsCycleCollector::SuspectedCount() {
  CheckThreadSafety();
  if (NS_IsMainThread()) {
    return gNurseryPurpleBufferEntryCount + mPurpleBuf.Count();
  }

  return mPurpleBuf.Count();
}

void nsCycleCollector::Shutdown(bool aDoCollect) {
  CheckThreadSafety();

  if (NS_IsMainThread()) {
    gNurseryPurpleBufferEnabled = false;
  }

  // Always delete snow white objects.
  FreeSnowWhite(true);

  if (aDoCollect) {
    ShutdownCollect();
  }

  if (mJSPurpleBuffer) {
    mJSPurpleBuffer->Destroy();
  }
}

void nsCycleCollector::RemoveObjectFromGraph(void* aObj) {
  if (IsIdle()) {
    return;
  }

  mGraph.RemoveObjectFromMap(aObj);
  if (mBuilder) {
    mBuilder->RemoveCachedEntry(aObj);
  }
}

void nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                                           size_t* aObjectSize,
                                           size_t* aGraphSize,
                                           size_t* aPurpleBufferSize) const {
  *aObjectSize = aMallocSizeOf(this);

  *aGraphSize = mGraph.SizeOfExcludingThis(aMallocSizeOf);

  *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);

  // These fields are deliberately not measured:
  // - mCCJSRuntime: because it's non-owning and measured by JS reporters.
  // - mParams: because it only contains scalars.
}
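
// A minimal usage sketch for the sizing method above, assuming a
// hypothetical caller; MallocSizeOf stands in for a real
// mozilla::MallocSizeOf function supplied by the memory reporter machinery:
//
//   size_t object, graph, purple;
//   collector->SizeOfIncludingThis(MallocSizeOf, &object, &graph, &purple);
//   // Report each size under its own memory-report path.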
JSPurpleBuffer* nsCycleCollector::GetJSPurpleBuffer() {
  if (!mJSPurpleBuffer) {
    // The Release call here confuses the GC analysis.
    JS::AutoSuppressGCAnalysis nogc;
    // JSPurpleBuffer keeps itself alive, but we need to create it in such a
    // way that it ends up in the normal purple buffer. That happens when
    // the RefPtr goes out of scope and calls Release.
    RefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
  }
  return mJSPurpleBuffer;
}
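
// To spell out the lifetime trick above: the JSPurpleBuffer constructor takes
// an extra reference to itself, so when the temporary RefPtr here releases,
// the refcount drops N+1 -> N with N nonzero, and that transition is what
// suspects the buffer into the regular purple buffer until Destroy() is
// called.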
////////////////////////////////////////////////////////////////////////
// Module public API (exported in nsCycleCollector.h)
// Just functions that redirect into the singleton, once it's built.
////////////////////////////////////////////////////////////////////////

void nsCycleCollector_registerJSContext(CycleCollectedJSContext* aCx) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  // But we shouldn't already have a context.
  MOZ_ASSERT(!data->mContext);

  data->mContext = aCx;
  data->mCollector->SetCCJSRuntime(aCx->Runtime());
}

void nsCycleCollector_forgetJSContext() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  // And we shouldn't have already forgotten our context.
  MOZ_ASSERT(data->mContext);

  // But the collector may have shut down already.
  if (data->mCollector) {
    data->mCollector->ClearCCJSRuntime();
    data->mContext = nullptr;
  } else {
    data->mContext = nullptr;
    delete data;
    sCollectorData.set(nullptr);
  }
}

/* static */
CycleCollectedJSContext* CycleCollectedJSContext::Get() {
  CollectorData* data = sCollectorData.get();
  if (data) {
    return data->mContext;
  }
  return nullptr;
}

MOZ_NEVER_INLINE static void SuspectAfterShutdown(
    void* aPtr, nsCycleCollectionParticipant* aCp,
    nsCycleCollectingAutoRefCnt* aRefCnt, bool* aShouldDelete) {
  if (aRefCnt->get() == 0) {
    if (!aShouldDelete) {
      // The CC is shut down, so we can't be in the middle of an ICC.
      ToParticipant(aPtr, &aCp);
      aRefCnt->stabilizeForDeletion();
      aCp->DeleteCycleCollectable(aPtr);
    } else {
      *aShouldDelete = true;
    }
  } else {
    // Make sure we'll get called again.
    aRefCnt->RemoveFromPurpleBuffer();
  }
}
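
// Caller contract for aShouldDelete, sketched (the real callers are the
// refcounting macros in nsISupportsImpl.h): a non-null aShouldDelete means
// the caller prefers to run the deletion itself, roughly
//
//   bool shouldDelete = false;
//   SuspectAfterShutdown(ptr, cp, refcnt, &shouldDelete);
//   if (shouldDelete) {
//     cp->DeleteCycleCollectable(ptr);
//   }
//
// whereas a null aShouldDelete lets this function delete snow-white objects
// directly.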
void NS_CycleCollectorSuspect3(void* aPtr, nsCycleCollectionParticipant* aCp,
                               nsCycleCollectingAutoRefCnt* aRefCnt,
                               bool* aShouldDelete) {
  CollectorData* data = sCollectorData.get();

  // This assertion will fire if you AddRef or Release a cycle collected
  // object on a thread that does not have an active cycle collector.
  // This can happen in a few situations:
  // 1. We never cycle collect on this thread. (The cycle collector is only
  //    run on the main thread and DOM worker threads.)
  // 2. The cycle collector hasn't been initialized on this thread yet.
  // 3. The cycle collector has already been shut down on this thread.
  MOZ_DIAGNOSTIC_ASSERT(
      data,
      "Cycle collected object used on a thread without a cycle collector.");

  if (MOZ_LIKELY(data->mCollector)) {
    data->mCollector->Suspect(aPtr, aCp, aRefCnt);
    return;
  }
  SuspectAfterShutdown(aPtr, aCp, aRefCnt, aShouldDelete);
}
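
// How calls typically reach NS_CycleCollectorSuspect3 (a sketch of the
// macro-generated path; see nsISupportsImpl.h for the authoritative
// expansion): NS_IMPL_CYCLE_COLLECTING_RELEASE produces a Release() whose
// decrement goes through nsCycleCollectingAutoRefCnt, which calls this
// function so the object turns purple on each N+1 -> N transition.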
void ClearNurseryPurpleBuffer() {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  CollectorData* data = sCollectorData.get();
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  data->mCollector->SuspectNurseryEntries();
}

void NS_CycleCollectorSuspectUsingNursery(void* aPtr,
                                          nsCycleCollectionParticipant* aCp,
                                          nsCycleCollectingAutoRefCnt* aRefCnt,
                                          bool* aShouldDelete) {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  if (!gNurseryPurpleBufferEnabled) {
    NS_CycleCollectorSuspect3(aPtr, aCp, aRefCnt, aShouldDelete);
    return;
  }

  SuspectUsingNurseryPurpleBuffer(aPtr, aCp, aRefCnt);
}

uint32_t nsCycleCollector_suspectedCount() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return 0;
  }

  return data->mCollector->SuspectedCount();
}

bool nsCycleCollector_init() {
#ifdef DEBUG
  static bool sInitialized;

  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(!sInitialized, "Called twice!?");
  sInitialized = true;
#endif

  return sCollectorData.init();
}

void nsCycleCollector_startup() {
  if (sCollectorData.get()) {
    MOZ_CRASH();
  }

  CollectorData* data = new CollectorData;
  data->mCollector = new nsCycleCollector();
  data->mContext = nullptr;

  sCollectorData.set(data);
}
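
// Expected bring-up order, sketched from the assertions in this API (the
// actual call sites live elsewhere in the tree):
//
//   nsCycleCollector_init();     // once per process, on the main thread
//   nsCycleCollector_startup();  // on each thread that will cycle collect
//   nsCycleCollector_registerJSContext(aCx);  // once the JS context exists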
void nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetBeforeUnlinkCallback(aCB);
}

void nsCycleCollector_setForgetSkippableCallback(
    CC_ForgetSkippableCallback aCB) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetForgetSkippableCallback(aCB);
}

void nsCycleCollector_forgetSkippable(js::SliceBudget& aBudget,
                                      bool aRemoveChildlessNodes,
                                      bool aAsyncSnowWhiteFreeing) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  TimeLog timeLog;
  data->mCollector->ForgetSkippable(aBudget, aRemoveChildlessNodes,
                                    aAsyncSnowWhiteFreeing);
  timeLog.Checkpoint("ForgetSkippable()");
}

void nsCycleCollector_dispatchDeferredDeletion(bool aContinuation,
                                               bool aPurge) {
  CycleCollectedJSRuntime* rt = CycleCollectedJSRuntime::Get();
  if (rt) {
    rt->DispatchDeferredDeletion(aContinuation, aPurge);
  }
}

bool nsCycleCollector_doDeferredDeletion() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mContext);

  return data->mCollector->FreeSnowWhite(false);
}

bool nsCycleCollector_doDeferredDeletionWithBudget(js::SliceBudget& aBudget) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mContext);

  return data->mCollector->FreeSnowWhiteWithBudget(aBudget);
}

already_AddRefed<nsICycleCollectorLogSink> nsCycleCollector_createLogSink() {
  nsCOMPtr<nsICycleCollectorLogSink> sink = new nsCycleCollectorLogSinkToFile();
  return sink.forget();
}

bool nsCycleCollector_collect(CCReason aReason,
                              nsICycleCollectorListener* aManualListener) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  AUTO_PROFILER_LABEL("nsCycleCollector_collect", GCCC);

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  return data->mCollector->Collect(aReason, ccIsManual::CCIsManual,
                                   unlimitedBudget, aManualListener);
}
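
// A hedged usage sketch for the wrapper above: a caller wanting a full,
// blocking CC with no log would do
//
//   nsCycleCollector_collect(CCReason::API, nullptr);
//
// passing an nsICycleCollectorListener instead of nullptr when a CC log
// should be captured. (CCReason::API is used here only as an example; real
// callers pass the reason that describes why they collect.)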
void nsCycleCollector_collectSlice(SliceBudget& budget, CCReason aReason,
                                   bool aPreferShorterSlices) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  AUTO_PROFILER_LABEL("nsCycleCollector_collectSlice", GCCC);

  data->mCollector->Collect(aReason, ccIsManual::CCIsNotManual, budget, nullptr,
                            aPreferShorterSlices);
}
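
// A sketch of driving incremental slices from a scheduler, assuming the
// SpiderMonkey budget types (js::SliceBudget / js::TimeBudget); the reason
// value is illustrative:
//
//   js::SliceBudget budget = js::SliceBudget(js::TimeBudget(5));  // ~5 ms
//   nsCycleCollector_collectSlice(budget, CCReason::API, false);
//
// Each call runs at most one budgeted slice; the scheduler keeps calling
// until the collector returns to idle.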
void nsCycleCollector_prepareForGarbageCollection() {
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->PrepareForGarbageCollection();
}

void nsCycleCollector_finishAnyCurrentCollection() {
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->FinishAnyCurrentCollection(CCReason::API);
}

void nsCycleCollector_shutdown(bool aDoCollect) {
  CollectorData* data = sCollectorData.get();

  if (data) {
    MOZ_ASSERT(data->mCollector);
    AUTO_PROFILER_LABEL("nsCycleCollector_shutdown", OTHER);

    {
      RefPtr<nsCycleCollector> collector = data->mCollector;
      collector->Shutdown(aDoCollect);
      data->mCollector = nullptr;
    }
    if (!data->mContext) {
      delete data;
      sCollectorData.set(nullptr);
    }
  }
}