[gecko.git] / xpcom / base / nsCycleCollector.cpp
/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*- */
/* vim: set ts=8 sts=4 et sw=4 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
//
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine a node is
// black, it's ok to forget about it, and drop it from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white
// object has reference count zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, they will tell us when they either transition back to being
// black (incremented refcount) or are ultimately deleted.
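//
// As an illustrative sketch only (the macro names below are the standard
// XPCOM cycle collection ones, but the class itself is hypothetical):
// a cycle collected class normally gets these notifications for free by
// declaring its refcounting through the cycle collector macros, e.g.
//
//   class MyCycleCollectedThing : public nsISupports
//   {
//   public:
//     NS_DECL_CYCLE_COLLECTING_ISUPPORTS
//     NS_DECL_CYCLE_COLLECTION_CLASS(MyCycleCollectedThing)
//     ...
//   };
//
// The generated Release() suspects the object (turns it purple) on each
// N+1 -> N transition, and the generated AddRef() and destructor report
// the other transitions described above.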
// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.
//
// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, don't collect it during the current collection. This avoids having
// to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
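//
// In pseudocode (an illustrative sketch, not the actual implementation),
// the Levanoni-Petrank style guard described above amounts to:
//
//   // mutator, while an ICC is in progress:
//   AddRef(obj):  ++obj.refcnt; purpleBuffer.put(obj);  // as well as Release
//
//   // collector, once graph building completes:
//   for each entry obj in purpleBuffer:
//     if obj is in the graph: treat obj as live (scan it black)
//
// so an object that gained a reference mid-collection is never unlinked in
// that collection; it simply stays suspected for the next one.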
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.
//
// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
//  - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//    this operation loses ISupports identity (like nsIClassInfo).
//  - Additionally, the operation |traverse| on the resulting
//    nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//    adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
//  - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//
#if !defined(__MINGW32__)
#ifdef WIN32
#include <crtdbg.h>
#include <errno.h>
#endif
#endif

#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/HoldDropJSObjects.h"
/* This must occur *after* base/process_util.h to avoid typedefs conflicts. */
#include "mozilla/MemoryReporting.h"
#include "mozilla/LinkedList.h"

#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollectionNoteRootCallback.h"
#include "nsDeque.h"
#include "nsCycleCollector.h"
#include "nsThreadUtils.h"
#include "prenv.h"
#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsIConsoleService.h"
#include "mozilla/Attributes.h"
#include "nsICycleCollectorListener.h"
#include "nsIMemoryReporter.h"
#include "nsIFile.h"
#include "nsMemoryInfoDumper.h"
#include "xpcpublic.h"
#include "GeckoProfiler.h"
#include "js/SliceBudget.h"
#include <stdint.h>
#include <stdio.h>

#include "mozilla/Likely.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"
using namespace mozilla;

//#define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph construction.
//#define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// XPCOM_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// XPCOM_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// XPCOM_CC_ALL_TRACES_AT_SHUTDOWN: If defined, any cycle collector
// logging done at shutdown will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap.
//
// XPCOM_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set,
// run cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from XPCOM_CC_LOG_ALL and XPCOM_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener)

MOZ_NEVER_INLINE void
CC_AbortIfNull(void *ptr)
{
    if (!ptr)
        MOZ_CRASH();
}
// Various parameters of this collector can be tuned using environment
// variables.

struct nsCycleCollectorParams
{
    bool mLogAll;
    bool mLogShutdown;
    bool mAllTracesAtShutdown;

    nsCycleCollectorParams() :
        mLogAll      (PR_GetEnv("XPCOM_CC_LOG_ALL") != nullptr),
        mLogShutdown (PR_GetEnv("XPCOM_CC_LOG_SHUTDOWN") != nullptr),
        mAllTracesAtShutdown (PR_GetEnv("XPCOM_CC_ALL_TRACES_AT_SHUTDOWN") != nullptr)
    {
    }
};

#ifdef COLLECT_TIME_DEBUG
class TimeLog
{
public:
    TimeLog() : mLastCheckpoint(TimeStamp::Now()) {}

    void
    Checkpoint(const char* aEvent)
    {
        TimeStamp now = TimeStamp::Now();
        uint32_t dur = (uint32_t) ((now - mLastCheckpoint).ToMilliseconds());
        if (dur > 0) {
            printf("cc: %s took %dms\n", aEvent, dur);
        }
        mLastCheckpoint = now;
    }

private:
    TimeStamp mLastCheckpoint;
};
#else
class TimeLog
{
public:
    TimeLog() {}
    void Checkpoint(const char* aEvent) {}
};
#endif
////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////

struct PtrInfo;

class EdgePool
{
public:
    // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
    // However, at the end of a block, the last two pointers are a null
    // and then a void** pointing to the next block. This allows
    // EdgePool::Iterators to be a single word but still capable of crossing
    // block boundaries.

    EdgePool()
    {
        mSentinelAndBlocks[0].block = nullptr;
        mSentinelAndBlocks[1].block = nullptr;
    }

    ~EdgePool()
    {
        MOZ_ASSERT(!mSentinelAndBlocks[0].block &&
                   !mSentinelAndBlocks[1].block,
                   "Didn't call Clear()?");
    }

    void Clear()
    {
        Block *b = Blocks();
        while (b) {
            Block *next = b->Next();
            delete b;
            b = next;
        }

        mSentinelAndBlocks[0].block = nullptr;
        mSentinelAndBlocks[1].block = nullptr;
    }

#ifdef DEBUG
    bool IsEmpty()
    {
        return !mSentinelAndBlocks[0].block &&
               !mSentinelAndBlocks[1].block;
    }
#endif

private:
    struct Block;
    union PtrInfoOrBlock {
        // Use a union to avoid reinterpret_cast and the ensuing
        // potential aliasing bugs.
        PtrInfo *ptrInfo;
        Block *block;
    };
    struct Block {
        enum { BlockSize = 16 * 1024 };

        PtrInfoOrBlock mPointers[BlockSize];
        Block() {
            mPointers[BlockSize - 2].block = nullptr; // sentinel
            mPointers[BlockSize - 1].block = nullptr; // next block pointer
        }
        Block*& Next()          { return mPointers[BlockSize - 1].block; }
        PtrInfoOrBlock* Start() { return &mPointers[0]; }
        PtrInfoOrBlock* End()   { return &mPointers[BlockSize - 2]; }
    };

    // Store the null sentinel so that we can have valid iterators
    // before adding any edges and without adding any blocks.
    PtrInfoOrBlock mSentinelAndBlocks[2];

    Block*& Blocks()       { return mSentinelAndBlocks[1].block; }
    Block*  Blocks() const { return mSentinelAndBlocks[1].block; }

public:
    class Iterator
    {
    public:
        Iterator() : mPointer(nullptr) {}
        Iterator(PtrInfoOrBlock *aPointer) : mPointer(aPointer) {}
        Iterator(const Iterator& aOther) : mPointer(aOther.mPointer) {}

        Iterator& operator++()
        {
            if (mPointer->ptrInfo == nullptr) {
                // Null pointer is a sentinel for link to the next block.
                mPointer = (mPointer + 1)->block->mPointers;
            }
            ++mPointer;
            return *this;
        }

        PtrInfo* operator*() const
        {
            if (mPointer->ptrInfo == nullptr) {
                // Null pointer is a sentinel for link to the next block.
                return (mPointer + 1)->block->mPointers->ptrInfo;
            }
            return mPointer->ptrInfo;
        }
        bool operator==(const Iterator& aOther) const
        { return mPointer == aOther.mPointer; }
        bool operator!=(const Iterator& aOther) const
        { return mPointer != aOther.mPointer; }

#ifdef DEBUG_CC_GRAPH
        bool Initialized() const
        {
            return mPointer != nullptr;
        }
#endif

    private:
        PtrInfoOrBlock *mPointer;
    };

    class Builder;
    friend class Builder;
    class Builder {
    public:
        Builder(EdgePool &aPool)
            : mCurrent(&aPool.mSentinelAndBlocks[0]),
              mBlockEnd(&aPool.mSentinelAndBlocks[0]),
              mNextBlockPtr(&aPool.Blocks())
        {
        }

        Iterator Mark() { return Iterator(mCurrent); }

        void Add(PtrInfo* aEdge) {
            if (mCurrent == mBlockEnd) {
                Block *b = new Block();
                *mNextBlockPtr = b;
                mCurrent = b->Start();
                mBlockEnd = b->End();
                mNextBlockPtr = &b->Next();
            }
            (mCurrent++)->ptrInfo = aEdge;
        }
    private:
        // mBlockEnd points to space for null sentinel
        PtrInfoOrBlock *mCurrent, *mBlockEnd;
        Block **mNextBlockPtr;
    };

    size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
        size_t n = 0;
        Block *b = Blocks();
        while (b) {
            n += aMallocSizeOf(b);
            b = b->Next();
        }
        return n;
    }
};
#ifdef DEBUG_CC_GRAPH
#define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#define CC_GRAPH_ASSERT(b)
#endif

#define CC_TELEMETRY(_name, _value)                                            \
    PR_BEGIN_MACRO                                                             \
    if (NS_IsMainThread()) {                                                   \
        Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);      \
    } else {                                                                   \
        Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
    }                                                                          \
    PR_END_MACRO
enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.

struct PtrInfo
{
    void *mPointer;
    nsCycleCollectionParticipant *mParticipant;
    uint32_t mColor : 2;
    uint32_t mInternalRefs : 30;
    uint32_t mRefCount;
private:
    EdgePool::Iterator mFirstChild;

public:

    PtrInfo(void *aPointer, nsCycleCollectionParticipant *aParticipant)
        : mPointer(aPointer),
          mParticipant(aParticipant),
          mColor(grey),
          mInternalRefs(0),
          mRefCount(UINT32_MAX - 1),
          mFirstChild()
    {
        // We initialize mRefCount to a large non-zero value so
        // that it doesn't look like a JS object to the cycle collector
        // in the case where the object dies before being traversed.

        MOZ_ASSERT(aParticipant);
    }

    // Allow NodePool::Block's constructor to compile.
    PtrInfo() {
        NS_NOTREACHED("should never be called");
    }

    EdgePool::Iterator FirstChild()
    {
        CC_GRAPH_ASSERT(mFirstChild.Initialized());
        return mFirstChild;
    }

    // this PtrInfo must be part of a NodePool
    EdgePool::Iterator LastChild()
    {
        CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
        return (this + 1)->mFirstChild;
    }

    void SetFirstChild(EdgePool::Iterator aFirstChild)
    {
        CC_GRAPH_ASSERT(aFirstChild.Initialized());
        mFirstChild = aFirstChild;
    }

    // this PtrInfo must be part of a NodePool
    void SetLastChild(EdgePool::Iterator aLastChild)
    {
        CC_GRAPH_ASSERT(aLastChild.Initialized());
        (this + 1)->mFirstChild = aLastChild;
    }
};
/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * that it allocates the PtrInfos 32K-at-a-time.
 */
class NodePool
{
private:
    enum { BlockSize = 8 * 1024 }; // could be int template parameter

    struct Block {
        // We create and destroy Block using NS_Alloc/NS_Free rather
        // than new and delete to avoid calling its constructor and
        // destructor.
        Block()  { NS_NOTREACHED("should never be called"); }
        ~Block() { NS_NOTREACHED("should never be called"); }

        Block* mNext;
        PtrInfo mEntries[BlockSize + 1]; // +1 to store last child of last node
    };

public:
    NodePool()
        : mBlocks(nullptr),
          mLast(nullptr)
    {
    }

    ~NodePool()
    {
        MOZ_ASSERT(!mBlocks, "Didn't call Clear()?");
    }

    void Clear()
    {
        Block *b = mBlocks;
        while (b) {
            Block *n = b->mNext;
            NS_Free(b);
            b = n;
        }

        mBlocks = nullptr;
        mLast = nullptr;
    }

#ifdef DEBUG
    bool IsEmpty()
    {
        return !mBlocks && !mLast;
    }
#endif

    class Builder;
    friend class Builder;
    class Builder {
    public:
        Builder(NodePool& aPool)
            : mNextBlock(&aPool.mBlocks),
              mNext(aPool.mLast),
              mBlockEnd(nullptr)
        {
            MOZ_ASSERT(aPool.mBlocks == nullptr && aPool.mLast == nullptr,
                       "pool not empty");
        }
        PtrInfo *Add(void *aPointer, nsCycleCollectionParticipant *aParticipant)
        {
            if (mNext == mBlockEnd) {
                Block *block = static_cast<Block*>(NS_Alloc(sizeof(Block)));
                *mNextBlock = block;
                mNext = block->mEntries;
                mBlockEnd = block->mEntries + BlockSize;
                block->mNext = nullptr;
                mNextBlock = &block->mNext;
            }
            return new (mNext++) PtrInfo(aPointer, aParticipant);
        }
    private:
        Block **mNextBlock;
        PtrInfo *&mNext;
        PtrInfo *mBlockEnd;
    };

    class Enumerator;
    friend class Enumerator;
    class Enumerator {
    public:
        Enumerator(NodePool& aPool)
            : mFirstBlock(aPool.mBlocks),
              mCurBlock(nullptr),
              mNext(nullptr),
              mBlockEnd(nullptr),
              mLast(aPool.mLast)
        {
        }

        bool IsDone() const
        {
            return mNext == mLast;
        }

        bool AtBlockEnd() const
        {
            return mNext == mBlockEnd;
        }

        PtrInfo* GetNext()
        {
            MOZ_ASSERT(!IsDone(), "calling GetNext when done");
            if (mNext == mBlockEnd) {
                Block *nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
                mNext = nextBlock->mEntries;
                mBlockEnd = mNext + BlockSize;
                mCurBlock = nextBlock;
            }
            return mNext++;
        }
    private:
        // mFirstBlock is a reference to allow an Enumerator to be constructed
        // for an empty graph.
        Block *&mFirstBlock;
        Block *mCurBlock;
        // mNext is the next value we want to return, unless mNext == mBlockEnd
        // NB: mLast is a reference to allow enumerating while building!
        PtrInfo *mNext, *mBlockEnd, *&mLast;
    };

    size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
        // We don't measure the things pointed to by mEntries[] because those
        // pointers are non-owning.
        size_t n = 0;
        Block *b = mBlocks;
        while (b) {
            n += aMallocSizeOf(b);
            b = b->mNext;
        }
        return n;
    }

private:
    Block *mBlocks;
    PtrInfo *mLast;
};
// Declarations for mPtrToNodeMap.

struct PtrToNodeEntry : public PLDHashEntryHdr
{
    // The key is mNode->mPointer
    PtrInfo *mNode;
};

static bool
PtrToNodeMatchEntry(PLDHashTable *table,
                    const PLDHashEntryHdr *entry,
                    const void *key)
{
    const PtrToNodeEntry *n = static_cast<const PtrToNodeEntry*>(entry);
    return n->mNode->mPointer == key;
}

static PLDHashTableOps PtrNodeOps = {
    PL_DHashAllocTable,
    PL_DHashFreeTable,
    PL_DHashVoidPtrKeyStub,
    PtrToNodeMatchEntry,
    PL_DHashMoveEntryStub,
    PL_DHashClearEntryStub,
    PL_DHashFinalizeStub,
    nullptr
};


struct WeakMapping
{
    // map and key will be null if the corresponding objects are GC marked
    PtrInfo *mMap;
    PtrInfo *mKey;
    PtrInfo *mKeyDelegate;
    PtrInfo *mVal;
};
class GCGraphBuilder;

struct GCGraph
{
    NodePool mNodes;
    EdgePool mEdges;
    nsTArray<WeakMapping> mWeakMaps;
    uint32_t mRootCount;

private:
    PLDHashTable mPtrToNodeMap;

public:
    GCGraph() : mRootCount(0)
    {
        mPtrToNodeMap.ops = nullptr;
    }

    ~GCGraph()
    {
        if (mPtrToNodeMap.ops) {
            PL_DHashTableFinish(&mPtrToNodeMap);
        }
    }

    void Init()
    {
        MOZ_ASSERT(IsEmpty(), "Failed to call GCGraph::Clear");
        if (!PL_DHashTableInit(&mPtrToNodeMap, &PtrNodeOps, nullptr,
                               sizeof(PtrToNodeEntry), 32768)) {
            MOZ_CRASH();
        }
    }

    void Clear()
    {
        mNodes.Clear();
        mEdges.Clear();
        mWeakMaps.Clear();
        mRootCount = 0;
        PL_DHashTableFinish(&mPtrToNodeMap);
        mPtrToNodeMap.ops = nullptr;
    }

#ifdef DEBUG
    bool IsEmpty()
    {
        return mNodes.IsEmpty() && mEdges.IsEmpty() &&
               mWeakMaps.IsEmpty() && mRootCount == 0 &&
               !mPtrToNodeMap.ops;
    }
#endif

    PtrInfo* FindNode(void *aPtr);
    PtrToNodeEntry* AddNodeToMap(void *aPtr);
    void RemoveNodeFromMap(void *aPtr);

    uint32_t MapCount() const
    {
        return mPtrToNodeMap.entryCount;
    }

    void SizeOfExcludingThis(MallocSizeOf aMallocSizeOf,
                             size_t *aNodesSize, size_t *aEdgesSize,
                             size_t *aWeakMapsSize) const {
        *aNodesSize = mNodes.SizeOfExcludingThis(aMallocSizeOf);
        *aEdgesSize = mEdges.SizeOfExcludingThis(aMallocSizeOf);

        // We don't measure what the WeakMappings point to, because the
        // pointers are non-owning.
        *aWeakMapsSize = mWeakMaps.SizeOfExcludingThis(aMallocSizeOf);
    }
};

PtrInfo*
GCGraph::FindNode(void *aPtr)
{
    PtrToNodeEntry *e = static_cast<PtrToNodeEntry*>(PL_DHashTableOperate(&mPtrToNodeMap, aPtr, PL_DHASH_LOOKUP));
    if (!PL_DHASH_ENTRY_IS_BUSY(e)) {
        return nullptr;
    }
    return e->mNode;
}

PtrToNodeEntry*
GCGraph::AddNodeToMap(void *aPtr)
{
    PtrToNodeEntry *e = static_cast<PtrToNodeEntry*>(PL_DHashTableOperate(&mPtrToNodeMap, aPtr, PL_DHASH_ADD));
    if (!e) {
        // Caller should track OOMs
        return nullptr;
    }
    return e;
}

void
GCGraph::RemoveNodeFromMap(void *aPtr)
{
    PL_DHashTableOperate(&mPtrToNodeMap, aPtr, PL_DHASH_REMOVE);
}
static nsISupports *
CanonicalizeXPCOMParticipant(nsISupports *in)
{
    nsISupports* out;
    in->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
                       reinterpret_cast<void**>(&out));
    return out;
}

static inline void
ToParticipant(nsISupports *s, nsXPCOMCycleCollectionParticipant **cp);

static void
CanonicalizeParticipant(void **parti, nsCycleCollectionParticipant **cp)
{
    // If the participant is null, this is an nsISupports participant,
    // so we must QI to get the real participant.

    if (!*cp) {
        nsISupports *nsparti = static_cast<nsISupports*>(*parti);
        nsparti = CanonicalizeXPCOMParticipant(nsparti);
        NS_ASSERTION(nsparti,
                     "Don't add objects that don't participate in collection!");
        nsXPCOMCycleCollectionParticipant *xcp;
        ToParticipant(nsparti, &xcp);
        *parti = nsparti;
        *cp = xcp;
    }
}
struct nsPurpleBufferEntry {
    union {
        void *mObject;                        // when low bit unset
        nsPurpleBufferEntry *mNextInFreeList; // when low bit set
    };

    nsCycleCollectingAutoRefCnt *mRefCnt;

    nsCycleCollectionParticipant *mParticipant; // nullptr for nsISupports
};

class nsCycleCollector;

struct nsPurpleBuffer
{
private:
    struct Block {
        Block *mNext;
        // Try to match the size of a jemalloc bucket, to minimize slop bytes.
        // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries
        //   is 16,380 bytes, which leaves 4 bytes for mNext.
        // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries
        //   is 32,760 bytes, which leaves 8 bytes for mNext.
        nsPurpleBufferEntry mEntries[1365];

        Block() : mNext(nullptr) {
            // Ensure Block is the right size (see above).
            static_assert(
                sizeof(Block) == 16384 ||      // 32-bit
                sizeof(Block) == 32768,        // 64-bit
                "ill-sized nsPurpleBuffer::Block"
            );
        }

        template <class PurpleVisitor>
        void VisitEntries(nsPurpleBuffer &aBuffer, PurpleVisitor &aVisitor)
        {
            nsPurpleBufferEntry *eEnd = ArrayEnd(mEntries);
            for (nsPurpleBufferEntry *e = mEntries; e != eEnd; ++e) {
                if (!(uintptr_t(e->mObject) & uintptr_t(1))) {
                    aVisitor.Visit(aBuffer, e);
                }
            }
        }
    };
    // This class wraps a linked list of the elements in the purple
    // buffer.

    uint32_t mCount;
    Block mFirstBlock;
    nsPurpleBufferEntry *mFreeList;

public:
    nsPurpleBuffer()
    {
        InitBlocks();
    }

    ~nsPurpleBuffer()
    {
        FreeBlocks();
    }

    template <class PurpleVisitor>
    void VisitEntries(PurpleVisitor &aVisitor)
    {
        for (Block *b = &mFirstBlock; b; b = b->mNext) {
            b->VisitEntries(*this, aVisitor);
        }
    }

    void InitBlocks()
    {
        mCount = 0;
        mFreeList = nullptr;
        StartBlock(&mFirstBlock);
    }

    void StartBlock(Block *aBlock)
    {
        NS_ABORT_IF_FALSE(!mFreeList, "should not have free list");

        // Put all the entries in the block on the free list.
        nsPurpleBufferEntry *entries = aBlock->mEntries;
        mFreeList = entries;
        for (uint32_t i = 1; i < ArrayLength(aBlock->mEntries); ++i) {
            entries[i - 1].mNextInFreeList =
                (nsPurpleBufferEntry*)(uintptr_t(entries + i) | 1);
        }
        entries[ArrayLength(aBlock->mEntries) - 1].mNextInFreeList =
            (nsPurpleBufferEntry*)1;
    }

    void FreeBlocks()
    {
        if (mCount > 0)
            UnmarkRemainingPurple(&mFirstBlock);
        Block *b = mFirstBlock.mNext;
        while (b) {
            if (mCount > 0)
                UnmarkRemainingPurple(b);
            Block *next = b->mNext;
            delete b;
            b = next;
        }
        mFirstBlock.mNext = nullptr;
    }

    struct UnmarkRemainingPurpleVisitor
    {
        void
        Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
        {
            if (aEntry->mRefCnt) {
                aEntry->mRefCnt->RemoveFromPurpleBuffer();
                aEntry->mRefCnt = nullptr;
            }
            aEntry->mObject = nullptr;
            --aBuffer.mCount;
        }
    };

    void UnmarkRemainingPurple(Block *b)
    {
        UnmarkRemainingPurpleVisitor visitor;
        b->VisitEntries(*this, visitor);
    }

    void SelectPointers(GCGraphBuilder &builder);

    // RemoveSkippable removes entries from the purple buffer synchronously
    // (1) if aAsyncSnowWhiteFreeing is false and nsPurpleBufferEntry::mRefCnt is 0 or
    // (2) if the object's nsXPCOMCycleCollectionParticipant::CanSkip() returns true or
    // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
    // (4) If removeChildlessNodes is true, then any nodes in the purple buffer
    //     that will have no children in the cycle collector graph will also be
    //     removed. CanSkip() may be run on these children.
    void RemoveSkippable(nsCycleCollector* aCollector,
                         bool removeChildlessNodes,
                         bool aAsyncSnowWhiteFreeing,
                         CC_ForgetSkippableCallback aCb);

    MOZ_ALWAYS_INLINE nsPurpleBufferEntry* NewEntry()
    {
        if (MOZ_UNLIKELY(!mFreeList)) {
            Block *b = new Block;
            StartBlock(b);

            // Add the new block as the second block in the list.
            b->mNext = mFirstBlock.mNext;
            mFirstBlock.mNext = b;
        }

        nsPurpleBufferEntry *e = mFreeList;
        mFreeList = (nsPurpleBufferEntry*)
            (uintptr_t(mFreeList->mNextInFreeList) & ~uintptr_t(1));
        return e;
    }

    MOZ_ALWAYS_INLINE void Put(void *p, nsCycleCollectionParticipant *cp,
                               nsCycleCollectingAutoRefCnt *aRefCnt)
    {
        nsPurpleBufferEntry *e = NewEntry();

        ++mCount;

        e->mObject = p;
        e->mRefCnt = aRefCnt;
        e->mParticipant = cp;
    }

    void Remove(nsPurpleBufferEntry *e)
    {
        MOZ_ASSERT(mCount != 0, "must have entries");

        if (e->mRefCnt) {
            e->mRefCnt->RemoveFromPurpleBuffer();
            e->mRefCnt = nullptr;
        }
        e->mNextInFreeList =
            (nsPurpleBufferEntry*)(uintptr_t(mFreeList) | uintptr_t(1));
        mFreeList = e;

        --mCount;
    }

    uint32_t Count() const
    {
        return mCount;
    }

    size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
    {
        size_t n = 0;

        // Don't measure mFirstBlock because it's within |this|.
        const Block *block = mFirstBlock.mNext;
        while (block) {
            n += aMallocSizeOf(block);
            block = block->mNext;
        }

        // mFreeList is deliberately not measured because it points into
        // the purple buffer, which is within mFirstBlock and thus within |this|.
        //
        // We also don't measure the things pointed to by mEntries[] because
        // those pointers are non-owning.

        return n;
    }
};
static bool
AddPurpleRoot(GCGraphBuilder &aBuilder, void *aRoot, nsCycleCollectionParticipant *aParti);

struct SelectPointersVisitor
{
    SelectPointersVisitor(GCGraphBuilder &aBuilder)
        : mBuilder(aBuilder)
    {
    }

    void
    Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
    {
        MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
        MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
                   "SelectPointersVisitor: snow-white object in the purple buffer");
        if (!aEntry->mRefCnt->IsPurple() ||
            AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
            aBuffer.Remove(aEntry);
        }
    }

private:
    GCGraphBuilder &mBuilder;
};

void
nsPurpleBuffer::SelectPointers(GCGraphBuilder &aBuilder)
{
    SelectPointersVisitor visitor(aBuilder);
    VisitEntries(visitor);

    NS_ASSERTION(mCount == 0, "AddPurpleRoot failed");
    if (mCount == 0) {
        FreeBlocks();
        InitBlocks();
    }
}

enum ccPhase {
    IdlePhase,
    GraphBuildingPhase,
    ScanAndCollectWhitePhase,
    CleanupPhase
};

enum ccType {
    SliceCC,     /* If a CC is in progress, continue it. Otherwise, start a new one. */
    ManualCC,    /* Explicitly triggered. */
    ShutdownCC   /* Shutdown CC, used for finding leaks. */
};

#ifdef MOZ_NUWA_PROCESS
#include "ipc/Nuwa.h"
#endif

////////////////////////////////////////////////////////////////////////
// Top level structure for the cycle collector.
////////////////////////////////////////////////////////////////////////

typedef js::SliceBudget SliceBudget;
class JSPurpleBuffer;

class nsCycleCollector : public nsIMemoryReporter
{
    NS_DECL_ISUPPORTS
    NS_DECL_NSIMEMORYREPORTER

    bool mActivelyCollecting;
    // mScanInProgress should be false when we're collecting white objects.
    bool mScanInProgress;
    CycleCollectorResults mResults;
    TimeStamp mCollectionStart;

    CycleCollectedJSRuntime *mJSRuntime;

    ccPhase mIncrementalPhase;
    GCGraph mGraph;
    nsAutoPtr<GCGraphBuilder> mBuilder;
    nsAutoPtr<NodePool::Enumerator> mCurrNode;
    nsCOMPtr<nsICycleCollectorListener> mListener;

    nsIThread* mThread;

    nsCycleCollectorParams mParams;

    uint32_t mWhiteNodeCount;

    CC_BeforeUnlinkCallback mBeforeUnlinkCB;
    CC_ForgetSkippableCallback mForgetSkippableCB;

    nsPurpleBuffer mPurpleBuf;

    uint32_t mUnmergedNeeded;
    uint32_t mMergedInARow;

    JSPurpleBuffer* mJSPurpleBuffer;

public:
    nsCycleCollector();
    virtual ~nsCycleCollector();

    void RegisterJSRuntime(CycleCollectedJSRuntime *aJSRuntime);
    void ForgetJSRuntime();

    void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB)
    {
        CheckThreadSafety();
        mBeforeUnlinkCB = aBeforeUnlinkCB;
    }

    void SetForgetSkippableCallback(CC_ForgetSkippableCallback aForgetSkippableCB)
    {
        CheckThreadSafety();
        mForgetSkippableCB = aForgetSkippableCB;
    }

    void Suspect(void *n, nsCycleCollectionParticipant *cp,
                 nsCycleCollectingAutoRefCnt *aRefCnt);
    uint32_t SuspectedCount();
    void ForgetSkippable(bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing);
    bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);

    // This method assumes its argument is already canonicalized.
    void RemoveObjectFromGraph(void *aPtr);

    void PrepareForGarbageCollection();

    bool Collect(ccType aCCType,
                 SliceBudget &aBudget,
                 nsICycleCollectorListener *aManualListener);
    void Shutdown();

    void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                             size_t *aObjectSize,
                             size_t *aGraphNodesSize,
                             size_t *aGraphEdgesSize,
                             size_t *aWeakMapsSize,
                             size_t *aPurpleBufferSize) const;

    JSPurpleBuffer* GetJSPurpleBuffer();
private:
    void CheckThreadSafety();
    void ShutdownCollect();

    void FixGrayBits(bool aForceGC);
    bool ShouldMergeZones(ccType aCCType);

    void BeginCollection(ccType aCCType, nsICycleCollectorListener *aManualListener);
    void MarkRoots(SliceBudget &aBudget);
    void ScanRoots(bool aFullySynchGraphBuild);
    void ScanIncrementalRoots();
    void ScanWeakMaps();

    // returns whether anything was collected
    bool CollectWhite();

    void CleanupAfterCollection();
};

NS_IMPL_ISUPPORTS1(nsCycleCollector, nsIMemoryReporter)

/**
 * GraphWalker is templatized over a Visitor class that must provide
 * the following two methods:
 *
 * bool ShouldVisitNode(PtrInfo const *pi);
 * void VisitNode(PtrInfo *pi);
 */
template <class Visitor>
class GraphWalker
{
private:
    Visitor mVisitor;

    void DoWalk(nsDeque &aQueue);

    void CheckedPush(nsDeque &aQueue, PtrInfo *pi)
    {
        CC_AbortIfNull(pi);
        if (!aQueue.Push(pi, fallible_t())) {
            mVisitor.Failed();
        }
    }

public:
    void Walk(PtrInfo *s0);
    void WalkFromRoots(GCGraph &aGraph);
    // copy-constructing the visitor should be cheap, and less
    // indirection than using a reference
    GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor) {}
};
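
For illustration only — none of this is part of Gecko — the Visitor contract above (a `ShouldVisitNode`/`VisitNode` pair driven by a breadth-first queue, as `DoWalk` does below) can be sketched in isolation with standard containers. The names `Node`, `CountingVisitor`, and `Walk` are hypothetical stand-ins for `PtrInfo`, a visitor, and `GraphWalker::DoWalk`:

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Hypothetical stand-in for PtrInfo: a node with child edges.
struct Node {
    bool visited = false;
    std::vector<Node*> children;
};

// A Visitor in the GraphWalker sense: decides whether to visit, then visits.
struct CountingVisitor {
    std::size_t count = 0;
    bool ShouldVisitNode(const Node* n) { return !n->visited; }
    void VisitNode(Node* n) { n->visited = true; ++count; }
};

// Breadth-first walk in the same shape as GraphWalker::DoWalk: pop from the
// front, visit if the visitor agrees, push children on the back. Because
// ShouldVisitNode rejects already-visited nodes, cycles terminate.
template <class Visitor>
void Walk(Node* start, Visitor& v) {
    std::deque<Node*> queue{start};
    while (!queue.empty()) {
        Node* n = queue.front();
        queue.pop_front();
        if (v.ShouldVisitNode(n)) {
            v.VisitNode(n);
            for (Node* child : n->children)
                queue.push_back(child);
        }
    }
}
```

The real walker additionally handles fallible pushes (`CheckedPush`) and skips nodes whose participant has been cleared; this sketch only shows the traversal shape.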

////////////////////////////////////////////////////////////////////////
// The static collector struct
////////////////////////////////////////////////////////////////////////

struct CollectorData {
    nsRefPtr<nsCycleCollector> mCollector;
    CycleCollectedJSRuntime* mRuntime;
};

static mozilla::ThreadLocal<CollectorData*> sCollectorData;

////////////////////////////////////////////////////////////////////////
// Utility functions
////////////////////////////////////////////////////////////////////////

MOZ_NEVER_INLINE static void
Fault(const char *msg, const void *ptr=nullptr)
{
    if (ptr)
        printf("Fault in cycle collector: %s (ptr: %p)\n", msg, ptr);
    else
        printf("Fault in cycle collector: %s\n", msg);

    NS_RUNTIMEABORT("cycle collector fault");
}

static void
Fault(const char *msg, PtrInfo *pi)
{
    Fault(msg, pi->mPointer);
}

static inline void
ToParticipant(nsISupports *s, nsXPCOMCycleCollectionParticipant **cp)
{
    // We use QI to move from an nsISupports to an
    // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
    // object that implements traversal and unlinking logic for the nsISupports
    // in question.
    CallQueryInterface(s, cp);
}

template <class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::Walk(PtrInfo *s0)
{
    nsDeque queue;
    CheckedPush(queue, s0);
    DoWalk(queue);
}

template <class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::WalkFromRoots(GCGraph& aGraph)
{
    nsDeque queue;
    NodePool::Enumerator etor(aGraph.mNodes);
    for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
        CheckedPush(queue, etor.GetNext());
    }
    DoWalk(queue);
}

template <class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::DoWalk(nsDeque &aQueue)
{
    // Use a queue to match the breadth-first traversal used when we
    // built the graph, for hopefully-better locality.
    while (aQueue.GetSize() > 0) {
        PtrInfo *pi = static_cast<PtrInfo*>(aQueue.PopFront());
        CC_AbortIfNull(pi);

        if (pi->mParticipant && mVisitor.ShouldVisitNode(pi)) {
            mVisitor.VisitNode(pi);
            for (EdgePool::Iterator child = pi->FirstChild(),
                 child_end = pi->LastChild();
                 child != child_end; ++child) {
                CheckedPush(aQueue, *child);
            }
        }
    }
}

struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber>
{
    CCGraphDescriber()
      : mAddress("0x"), mCnt(0), mType(eUnknown) {}

    enum Type
    {
        eRefCountedObject,
        eGCedObject,
        eGCMarkedObject,
        eEdge,
        eRoot,
        eGarbage,
        eUnknown
    };

    nsCString mAddress;
    nsCString mName;
    nsCString mCompartmentOrToAddress;
    uint32_t mCnt;
    Type mType;
};

class nsCycleCollectorLogger MOZ_FINAL : public nsICycleCollectorListener
{
public:
    nsCycleCollectorLogger() :
      mStream(nullptr), mWantAllTraces(false),
      mDisableLog(false), mWantAfterProcessing(false)
    {
    }

    ~nsCycleCollectorLogger()
    {
        ClearDescribers();
        if (mStream) {
            MozillaUnRegisterDebugFILE(mStream);
            fclose(mStream);
        }
    }

    NS_DECL_ISUPPORTS

    void SetAllTraces()
    {
        mWantAllTraces = true;
    }

    NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener)
    {
        SetAllTraces();
        NS_ADDREF(*aListener = this);
        return NS_OK;
    }

    NS_IMETHOD GetWantAllTraces(bool* aAllTraces)
    {
        *aAllTraces = mWantAllTraces;
        return NS_OK;
    }

    NS_IMETHOD GetDisableLog(bool* aDisableLog)
    {
        *aDisableLog = mDisableLog;
        return NS_OK;
    }

    NS_IMETHOD SetDisableLog(bool aDisableLog)
    {
        mDisableLog = aDisableLog;
        return NS_OK;
    }

    NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing)
    {
        *aWantAfterProcessing = mWantAfterProcessing;
        return NS_OK;
    }

    NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing)
    {
        mWantAfterProcessing = aWantAfterProcessing;
        return NS_OK;
    }

    NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier)
    {
        aIdentifier = mFilenameIdentifier;
        return NS_OK;
    }

    NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier)
    {
        mFilenameIdentifier = aIdentifier;
        return NS_OK;
    }

    NS_IMETHOD Begin()
    {
        mCurrentAddress.AssignLiteral("0x");
        ClearDescribers();
        if (mDisableLog) {
            return NS_OK;
        }

        // Initially create the log in a file starting with
        // "incomplete-gc-edges". We'll move the file and strip off the
        // "incomplete-" once the dump completes. (We do this because we don't
        // want scripts which poll the filesystem looking for gc/cc dumps to
        // grab a file before we're finished writing to it.)
        nsCOMPtr<nsIFile> gcLogFile = CreateTempFile("incomplete-gc-edges");
        if (NS_WARN_IF(!gcLogFile))
            return NS_ERROR_UNEXPECTED;

        // Dump the JS heap.
        FILE* gcLogANSIFile = nullptr;
        gcLogFile->OpenANSIFileDesc("w", &gcLogANSIFile);
        if (NS_WARN_IF(!gcLogANSIFile))
            return NS_ERROR_UNEXPECTED;
        MozillaRegisterDebugFILE(gcLogANSIFile);
        CollectorData *data = sCollectorData.get();
        if (data && data->mRuntime)
            data->mRuntime->DumpJSHeap(gcLogANSIFile);
        MozillaUnRegisterDebugFILE(gcLogANSIFile);
        fclose(gcLogANSIFile);

        // Strip off "incomplete-".
        nsCOMPtr<nsIFile> gcLogFileFinalDestination =
            CreateTempFile("gc-edges");
        if (NS_WARN_IF(!gcLogFileFinalDestination))
            return NS_ERROR_UNEXPECTED;

        nsAutoString gcLogFileFinalDestinationName;
        gcLogFileFinalDestination->GetLeafName(gcLogFileFinalDestinationName);
        if (NS_WARN_IF(gcLogFileFinalDestinationName.IsEmpty()))
            return NS_ERROR_UNEXPECTED;

        gcLogFile->MoveTo(/* directory */ nullptr, gcLogFileFinalDestinationName);

        // Log to the error console.
        nsCOMPtr<nsIConsoleService> cs =
            do_GetService(NS_CONSOLESERVICE_CONTRACTID);
        if (cs) {
            nsAutoString gcLogPath;
            gcLogFileFinalDestination->GetPath(gcLogPath);

            nsString msg = NS_LITERAL_STRING("Garbage Collector log dumped to ") +
                gcLogPath;
            cs->LogStringMessage(msg.get());
        }

        // Open a file for dumping the CC graph. We again prefix with
        // "incomplete-".
        mOutFile = CreateTempFile("incomplete-cc-edges");
        if (NS_WARN_IF(!mOutFile))
            return NS_ERROR_UNEXPECTED;
        MOZ_ASSERT(!mStream);
        mOutFile->OpenANSIFileDesc("w", &mStream);
        if (NS_WARN_IF(!mStream))
            return NS_ERROR_UNEXPECTED;
        MozillaRegisterDebugFILE(mStream);

        fprintf(mStream, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");

        return NS_OK;
    }

    NS_IMETHOD NoteRefCountedObject(uint64_t aAddress, uint32_t refCount,
                                    const char *aObjectDescription)
    {
        if (!mDisableLog) {
            fprintf(mStream, "%p [rc=%u] %s\n", (void*)aAddress, refCount,
                    aObjectDescription);
        }
        if (mWantAfterProcessing) {
            CCGraphDescriber* d = new CCGraphDescriber();
            mDescribers.insertBack(d);
            mCurrentAddress.AssignLiteral("0x");
            mCurrentAddress.AppendInt(aAddress, 16);
            d->mType = CCGraphDescriber::eRefCountedObject;
            d->mAddress = mCurrentAddress;
            d->mCnt = refCount;
            d->mName.Append(aObjectDescription);
        }
        return NS_OK;
    }

    NS_IMETHOD NoteGCedObject(uint64_t aAddress, bool aMarked,
                              const char *aObjectDescription,
                              uint64_t aCompartmentAddress)
    {
        if (!mDisableLog) {
            fprintf(mStream, "%p [gc%s] %s\n", (void*)aAddress,
                    aMarked ? ".marked" : "", aObjectDescription);
        }
        if (mWantAfterProcessing) {
            CCGraphDescriber* d = new CCGraphDescriber();
            mDescribers.insertBack(d);
            mCurrentAddress.AssignLiteral("0x");
            mCurrentAddress.AppendInt(aAddress, 16);
            d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject :
                                 CCGraphDescriber::eGCedObject;
            d->mAddress = mCurrentAddress;
            d->mName.Append(aObjectDescription);
            if (aCompartmentAddress) {
                d->mCompartmentOrToAddress.AssignLiteral("0x");
                d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
            } else {
                d->mCompartmentOrToAddress.SetIsVoid(true);
            }
        }
        return NS_OK;
    }

    NS_IMETHOD NoteEdge(uint64_t aToAddress, const char *aEdgeName)
    {
        if (!mDisableLog) {
            fprintf(mStream, "> %p %s\n", (void*)aToAddress, aEdgeName);
        }
        if (mWantAfterProcessing) {
            CCGraphDescriber* d = new CCGraphDescriber();
            mDescribers.insertBack(d);
            d->mType = CCGraphDescriber::eEdge;
            d->mAddress = mCurrentAddress;
            d->mCompartmentOrToAddress.AssignLiteral("0x");
            d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
            d->mName.Append(aEdgeName);
        }
        return NS_OK;
    }

    NS_IMETHOD NoteWeakMapEntry(uint64_t aMap, uint64_t aKey,
                                uint64_t aKeyDelegate, uint64_t aValue)
    {
        if (!mDisableLog) {
            fprintf(mStream, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
                    (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
        }
        // We don't support after-processing for weak map entries.
        return NS_OK;
    }

    NS_IMETHOD BeginResults()
    {
        if (!mDisableLog) {
            fputs("==========\n", mStream);
        }
        return NS_OK;
    }

    NS_IMETHOD DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges)
    {
        if (!mDisableLog) {
            fprintf(mStream, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
        }
        if (mWantAfterProcessing) {
            CCGraphDescriber* d = new CCGraphDescriber();
            mDescribers.insertBack(d);
            d->mType = CCGraphDescriber::eRoot;
            d->mAddress.AppendInt(aAddress, 16);
            d->mCnt = aKnownEdges;
        }
        return NS_OK;
    }

    NS_IMETHOD DescribeGarbage(uint64_t aAddress)
    {
        if (!mDisableLog) {
            fprintf(mStream, "%p [garbage]\n", (void*)aAddress);
        }
        if (mWantAfterProcessing) {
            CCGraphDescriber* d = new CCGraphDescriber();
            mDescribers.insertBack(d);
            d->mType = CCGraphDescriber::eGarbage;
            d->mAddress.AppendInt(aAddress, 16);
        }
        return NS_OK;
    }

    NS_IMETHOD End()
    {
        if (!mDisableLog) {
            MOZ_ASSERT(mStream);
            MOZ_ASSERT(mOutFile);

            MozillaUnRegisterDebugFILE(mStream);
            fclose(mStream);
            mStream = nullptr;

            // Strip off "incomplete-" from the log file's name.
            nsCOMPtr<nsIFile> logFileFinalDestination =
                CreateTempFile("cc-edges");
            if (NS_WARN_IF(!logFileFinalDestination))
                return NS_ERROR_UNEXPECTED;

            nsAutoString logFileFinalDestinationName;
            logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
            if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty()))
                return NS_ERROR_UNEXPECTED;

            mOutFile->MoveTo(/* directory = */ nullptr,
                             logFileFinalDestinationName);
            mOutFile = nullptr;

            // Log to the error console.
            nsCOMPtr<nsIConsoleService> cs =
                do_GetService(NS_CONSOLESERVICE_CONTRACTID);
            if (cs) {
                nsAutoString ccLogPath;
                logFileFinalDestination->GetPath(ccLogPath);

                nsString msg = NS_LITERAL_STRING("Cycle Collector log dumped to ") +
                    ccLogPath;
                cs->LogStringMessage(msg.get());
            }
        }
        return NS_OK;
    }

    NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
                           bool* aCanContinue)
    {
        if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing))
            return NS_ERROR_UNEXPECTED;
        CCGraphDescriber* d = mDescribers.popFirst();
        if (d) {
            switch (d->mType) {
                case CCGraphDescriber::eRefCountedObject:
                    aHandler->NoteRefCountedObject(d->mAddress,
                                                   d->mCnt,
                                                   d->mName);
                    break;
                case CCGraphDescriber::eGCedObject:
                case CCGraphDescriber::eGCMarkedObject:
                    aHandler->NoteGCedObject(d->mAddress,
                                             d->mType ==
                                               CCGraphDescriber::eGCMarkedObject,
                                             d->mName,
                                             d->mCompartmentOrToAddress);
                    break;
                case CCGraphDescriber::eEdge:
                    aHandler->NoteEdge(d->mAddress,
                                       d->mCompartmentOrToAddress,
                                       d->mName);
                    break;
                case CCGraphDescriber::eRoot:
                    aHandler->DescribeRoot(d->mAddress,
                                           d->mCnt);
                    break;
                case CCGraphDescriber::eGarbage:
                    aHandler->DescribeGarbage(d->mAddress);
                    break;
                case CCGraphDescriber::eUnknown:
                    NS_NOTREACHED("CCGraphDescriber::eUnknown");
                    break;
            }
            delete d;
        }
        if (!(*aCanContinue = !mDescribers.isEmpty())) {
            mCurrentAddress.AssignLiteral("0x");
        }
        return NS_OK;
    }

private:
    /**
     * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
     * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
     * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
     * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
     * on.
     */
    already_AddRefed<nsIFile>
    CreateTempFile(const char* aPrefix)
    {
        nsPrintfCString filename("%s.%d%s%s.log",
                                 aPrefix,
                                 base::GetCurrentProcId(),
                                 mFilenameIdentifier.IsEmpty() ? "" : ".",
                                 NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());

        // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
        // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
        // here because OpenTempFile uses an in/out param and getter_AddRefs
        // wouldn't work.
        nsIFile* logFile = nullptr;
        if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
            NS_NewNativeLocalFile(nsCString(env), /* followLinks = */ true,
                                  &logFile);
        }
        nsresult rv = nsMemoryInfoDumper::OpenTempFile(filename, &logFile);
        if (NS_FAILED(rv)) {
            NS_IF_RELEASE(logFile);
            return nullptr;
        }

        return dont_AddRef(logFile);
    }

    void ClearDescribers()
    {
        CCGraphDescriber* d;
        while ((d = mDescribers.popFirst())) {
            delete d;
        }
    }

    FILE *mStream;
    nsCOMPtr<nsIFile> mOutFile;
    bool mWantAllTraces;
    bool mDisableLog;
    bool mWantAfterProcessing;
    nsString mFilenameIdentifier;
    nsCString mCurrentAddress;
    mozilla::LinkedList<CCGraphDescriber> mDescribers;
};

NS_IMPL_ISUPPORTS1(nsCycleCollectorLogger, nsICycleCollectorListener)

nsresult
nsCycleCollectorLoggerConstructor(nsISupports* aOuter,
                                  const nsIID& aIID,
                                  void* *aInstancePtr)
{
    if (NS_WARN_IF(aOuter))
        return NS_ERROR_NO_AGGREGATION;

    nsISupports *logger = new nsCycleCollectorLogger();

    return logger->QueryInterface(aIID, aInstancePtr);
}

////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |MarkRoots| routine.
////////////////////////////////////////////////////////////////////////

class GCGraphBuilder : public nsCycleCollectionTraversalCallback,
                       public nsCycleCollectionNoteRootCallback
{
private:
    GCGraph &mGraph;
    CycleCollectorResults &mResults;
    NodePool::Builder mNodeBuilder;
    EdgePool::Builder mEdgeBuilder;
    PtrInfo *mCurrPi;
    nsCycleCollectionParticipant *mJSParticipant;
    nsCycleCollectionParticipant *mJSZoneParticipant;
    nsCString mNextEdgeName;
    nsICycleCollectorListener *mListener;
    bool mMergeZones;
    bool mRanOutOfMemory;

public:
    GCGraphBuilder(GCGraph &aGraph,
                   CycleCollectorResults &aResults,
                   CycleCollectedJSRuntime *aJSRuntime,
                   nsICycleCollectorListener *aListener,
                   bool aMergeZones);
    virtual ~GCGraphBuilder();

    bool WantAllTraces() const
    {
        return nsCycleCollectionNoteRootCallback::WantAllTraces();
    }

    PtrInfo* AddNode(void *aPtr, nsCycleCollectionParticipant *aParticipant);
    PtrInfo* AddWeakMapNode(void* node);
    void Traverse(PtrInfo* aPtrInfo);
    void SetLastChild();

    bool RanOutOfMemory() const { return mRanOutOfMemory; }

private:
    void DescribeNode(uint32_t refCount, const char *objName)
    {
        mCurrPi->mRefCount = refCount;
    }

public:
    // nsCycleCollectionNoteRootCallback methods.
    NS_IMETHOD_(void) NoteXPCOMRoot(nsISupports *root);
    NS_IMETHOD_(void) NoteJSRoot(void *root);
    NS_IMETHOD_(void) NoteNativeRoot(void *root, nsCycleCollectionParticipant *participant);
    NS_IMETHOD_(void) NoteWeakMapping(void *map, void *key, void *kdelegate, void *val);

    // nsCycleCollectionTraversalCallback methods.
    NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt refCount,
                                             const char *objName);
    NS_IMETHOD_(void) DescribeGCedNode(bool isMarked, const char *objName,
                                       uint64_t aCompartmentAddress);

    NS_IMETHOD_(void) NoteXPCOMChild(nsISupports *child);
    NS_IMETHOD_(void) NoteJSChild(void *child);
    NS_IMETHOD_(void) NoteNativeChild(void *child,
                                      nsCycleCollectionParticipant *participant);
    NS_IMETHOD_(void) NoteNextEdgeName(const char* name);

private:
    NS_IMETHOD_(void) NoteRoot(void *root,
                               nsCycleCollectionParticipant *participant)
    {
        MOZ_ASSERT(root);
        MOZ_ASSERT(participant);

        if (!participant->CanSkipInCC(root) || MOZ_UNLIKELY(WantAllTraces())) {
            AddNode(root, participant);
        }
    }

    NS_IMETHOD_(void) NoteChild(void *child, nsCycleCollectionParticipant *cp,
                                nsCString edgeName)
    {
        PtrInfo *childPi = AddNode(child, cp);
        if (!childPi)
            return;
        mEdgeBuilder.Add(childPi);
        if (mListener) {
            mListener->NoteEdge((uint64_t)child, edgeName.get());
        }
        ++childPi->mInternalRefs;
    }

    JS::Zone *MergeZone(void *gcthing) {
        if (!mMergeZones) {
            return nullptr;
        }
        JS::Zone *zone = JS::GetGCThingZone(gcthing);
        if (js::IsSystemZone(zone)) {
            return nullptr;
        }
        return zone;
    }
};

GCGraphBuilder::GCGraphBuilder(GCGraph &aGraph,
                               CycleCollectorResults &aResults,
                               CycleCollectedJSRuntime *aJSRuntime,
                               nsICycleCollectorListener *aListener,
                               bool aMergeZones)
    : mGraph(aGraph),
      mResults(aResults),
      mNodeBuilder(aGraph.mNodes),
      mEdgeBuilder(aGraph.mEdges),
      mJSParticipant(nullptr),
      mJSZoneParticipant(nullptr),
      mListener(aListener),
      mMergeZones(aMergeZones),
      mRanOutOfMemory(false)
{
    if (aJSRuntime) {
        mJSParticipant = aJSRuntime->GCThingParticipant();
        mJSZoneParticipant = aJSRuntime->ZoneParticipant();
    }

    uint32_t flags = 0;
    if (!flags && mListener) {
        flags = nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
        bool all = false;
        mListener->GetWantAllTraces(&all);
        if (all) {
            flags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
            mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
        }
    }

    mFlags |= flags;

    mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());

    MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
               nsCycleCollectionTraversalCallback::WantAllTraces());
}

GCGraphBuilder::~GCGraphBuilder()
{
}

PtrInfo*
GCGraphBuilder::AddNode(void *aPtr, nsCycleCollectionParticipant *aParticipant)
{
    PtrToNodeEntry *e = mGraph.AddNodeToMap(aPtr);
    if (!e) {
        mRanOutOfMemory = true;
        return nullptr;
    }

    PtrInfo *result;
    if (!e->mNode) {
        // New entry.
        result = mNodeBuilder.Add(aPtr, aParticipant);
        e->mNode = result;
        NS_ASSERTION(result, "mNodeBuilder.Add returned null");
    } else {
        result = e->mNode;
        MOZ_ASSERT(result->mParticipant == aParticipant,
                   "nsCycleCollectionParticipant shouldn't change!");
    }
    return result;
}

MOZ_NEVER_INLINE void
GCGraphBuilder::Traverse(PtrInfo* aPtrInfo)
{
    mCurrPi = aPtrInfo;

    mCurrPi->SetFirstChild(mEdgeBuilder.Mark());

    if (!aPtrInfo->mParticipant) {
        return;
    }

    nsresult rv = aPtrInfo->mParticipant->Traverse(aPtrInfo->mPointer, *this);
    if (NS_FAILED(rv)) {
        Fault("script pointer traversal failed", aPtrInfo);
    }
}

void
GCGraphBuilder::SetLastChild()
{
    mCurrPi->SetLastChild(mEdgeBuilder.Mark());
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteXPCOMRoot(nsISupports *root)
{
    root = CanonicalizeXPCOMParticipant(root);
    NS_ASSERTION(root,
                 "Don't add objects that don't participate in collection!");

    nsXPCOMCycleCollectionParticipant *cp;
    ToParticipant(root, &cp);

    NoteRoot(root, cp);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteJSRoot(void *root)
{
    if (JS::Zone *zone = MergeZone(root)) {
        NoteRoot(zone, mJSZoneParticipant);
    } else {
        NoteRoot(root, mJSParticipant);
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteNativeRoot(void *root, nsCycleCollectionParticipant *participant)
{
    NoteRoot(root, participant);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::DescribeRefCountedNode(nsrefcnt refCount, const char *objName)
{
    if (refCount == 0)
        Fault("zero refcount", mCurrPi);
    if (refCount == UINT32_MAX)
        Fault("overflowing refcount", mCurrPi);
    mResults.mVisitedRefCounted++;

    if (mListener) {
        mListener->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, refCount,
                                        objName);
    }

    DescribeNode(refCount, objName);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::DescribeGCedNode(bool isMarked, const char *objName,
                                 uint64_t aCompartmentAddress)
{
    uint32_t refCount = isMarked ? UINT32_MAX : 0;
    mResults.mVisitedGCed++;

    if (mListener) {
        mListener->NoteGCedObject((uint64_t)mCurrPi->mPointer, isMarked,
                                  objName, aCompartmentAddress);
    }

    DescribeNode(refCount, objName);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteXPCOMChild(nsISupports *child)
{
    nsCString edgeName;
    if (WantDebugInfo()) {
        edgeName.Assign(mNextEdgeName);
        mNextEdgeName.Truncate();
    }
    if (!child || !(child = CanonicalizeXPCOMParticipant(child)))
        return;

    nsXPCOMCycleCollectionParticipant *cp;
    ToParticipant(child, &cp);
    if (cp && (!cp->CanSkipThis(child) || WantAllTraces())) {
        NoteChild(child, cp, edgeName);
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteNativeChild(void *child,
                                nsCycleCollectionParticipant *participant)
{
    nsCString edgeName;
    if (WantDebugInfo()) {
        edgeName.Assign(mNextEdgeName);
        mNextEdgeName.Truncate();
    }
    if (!child)
        return;

    MOZ_ASSERT(participant, "Need a nsCycleCollectionParticipant!");
    NoteChild(child, participant, edgeName);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteJSChild(void *child)
{
    if (!child) {
        return;
    }

    nsCString edgeName;
    if (MOZ_UNLIKELY(WantDebugInfo())) {
        edgeName.Assign(mNextEdgeName);
        mNextEdgeName.Truncate();
    }

    if (xpc_GCThingIsGrayCCThing(child) || MOZ_UNLIKELY(WantAllTraces())) {
        if (JS::Zone *zone = MergeZone(child)) {
            NoteChild(zone, mJSZoneParticipant, edgeName);
        } else {
            NoteChild(child, mJSParticipant, edgeName);
        }
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteNextEdgeName(const char* name)
{
    if (WantDebugInfo()) {
        mNextEdgeName = name;
    }
}

PtrInfo*
GCGraphBuilder::AddWeakMapNode(void *node)
{
    MOZ_ASSERT(node, "Weak map node should be non-null.");

    if (!xpc_GCThingIsGrayCCThing(node) && !WantAllTraces())
        return nullptr;

    if (JS::Zone *zone = MergeZone(node)) {
        return AddNode(zone, mJSZoneParticipant);
    } else {
        return AddNode(node, mJSParticipant);
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteWeakMapping(void *map, void *key, void *kdelegate, void *val)
{
    // Don't try to optimize away the entry here, as we've already attempted to
    // do that in TraceWeakMapping in nsXPConnect.
    WeakMapping *mapping = mGraph.mWeakMaps.AppendElement();
    mapping->mMap = map ? AddWeakMapNode(map) : nullptr;
    mapping->mKey = key ? AddWeakMapNode(key) : nullptr;
    mapping->mKeyDelegate = kdelegate ? AddWeakMapNode(kdelegate) : mapping->mKey;
    mapping->mVal = val ? AddWeakMapNode(val) : nullptr;

    if (mListener) {
        mListener->NoteWeakMapEntry((uint64_t)map, (uint64_t)key,
                                    (uint64_t)kdelegate, (uint64_t)val);
    }
}

static bool
AddPurpleRoot(GCGraphBuilder &aBuilder, void *aRoot, nsCycleCollectionParticipant *aParti)
{
    CanonicalizeParticipant(&aRoot, &aParti);

    if (aBuilder.WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
        PtrInfo *pinfo = aBuilder.AddNode(aRoot, aParti);
        if (!pinfo) {
            return false;
        }
    }

    return true;
}

// MayHaveChild() will be false after a Traverse if the object does
// not have any children the CC will visit.
class ChildFinder : public nsCycleCollectionTraversalCallback
{
public:
    ChildFinder() : mMayHaveChild(false) {}

    // The logic of the Note*Child functions must mirror that of their
    // respective functions in GCGraphBuilder.
    NS_IMETHOD_(void) NoteXPCOMChild(nsISupports *child);
    NS_IMETHOD_(void) NoteNativeChild(void *child,
                                      nsCycleCollectionParticipant *helper);
    NS_IMETHOD_(void) NoteJSChild(void *child);

    NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt refcount,
                                             const char *objname) {}
    NS_IMETHOD_(void) DescribeGCedNode(bool ismarked,
                                       const char *objname,
                                       uint64_t aCompartmentAddress) {}
    NS_IMETHOD_(void) NoteNextEdgeName(const char* name) {}
    bool MayHaveChild() {
        return mMayHaveChild;
    }
private:
    bool mMayHaveChild;
};

NS_IMETHODIMP_(void)
ChildFinder::NoteXPCOMChild(nsISupports *child)
{
    if (!child || !(child = CanonicalizeXPCOMParticipant(child)))
        return;
    nsXPCOMCycleCollectionParticipant *cp;
    ToParticipant(child, &cp);
    if (cp && !cp->CanSkip(child, true))
        mMayHaveChild = true;
}

NS_IMETHODIMP_(void)
ChildFinder::NoteNativeChild(void *child,
                             nsCycleCollectionParticipant *helper)
{
    if (child)
        mMayHaveChild = true;
}

NS_IMETHODIMP_(void)
ChildFinder::NoteJSChild(void *child)
{
    if (child && xpc_GCThingIsGrayCCThing(child)) {
        mMayHaveChild = true;
    }
}

static bool
MayHaveChild(void *o, nsCycleCollectionParticipant* cp)
{
    ChildFinder cf;
    cp->Traverse(o, cf);
    return cf.MayHaveChild();
}

template<class T>
class SegmentedArrayElement : public LinkedListElement<SegmentedArrayElement<T>>
                            , public AutoFallibleTArray<T, 60>
{
};

template<class T>
class SegmentedArray
{
public:
    ~SegmentedArray()
    {
        MOZ_ASSERT(IsEmpty());
    }

    void AppendElement(T& aElement)
    {
        SegmentedArrayElement<T>* last = mSegments.getLast();
        if (!last || last->Length() == last->Capacity()) {
            last = new SegmentedArrayElement<T>();
            mSegments.insertBack(last);
        }
        last->AppendElement(aElement);
    }

    void Clear()
    {
        SegmentedArrayElement<T>* first;
        while ((first = mSegments.popFirst())) {
            delete first;
        }
    }

    SegmentedArrayElement<T>* GetFirstSegment()
    {
        return mSegments.getFirst();
    }

    bool IsEmpty()
    {
        return !GetFirstSegment();
    }

private:
    mozilla::LinkedList<SegmentedArrayElement<T>> mSegments;
};
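
As an illustration for readers (not part of Gecko), the idea behind SegmentedArray — append into fixed-capacity segments chained in a list, so growth allocates a new segment instead of reallocating and moving existing elements — can be sketched with standard containers. The name `MiniSegmentedArray` and the segment capacity of 4 are hypothetical; the real class uses `AutoFallibleTArray<T, 60>` segments on a `mozilla::LinkedList`:

```cpp
#include <cstddef>
#include <list>
#include <vector>

// Illustrative segmented array: a list of fixed-capacity vectors.
// Appending never moves elements already stored, which is why code like the
// trace macro below can walk segments while holding pointers into them.
template <class T, std::size_t SegmentCapacity = 4>
class MiniSegmentedArray {
public:
    void AppendElement(const T& aElement) {
        // Start a new segment when there is none or the last one is full.
        if (mSegments.empty() || mSegments.back().size() == SegmentCapacity) {
            mSegments.emplace_back();
            mSegments.back().reserve(SegmentCapacity);
        }
        mSegments.back().push_back(aElement);
    }

    std::size_t SegmentCount() const { return mSegments.size(); }

    std::size_t Length() const {
        std::size_t n = 0;
        for (const auto& seg : mSegments)
            n += seg.size();
        return n;
    }

private:
    std::list<std::vector<T>> mSegments;
};
```

Appending nine elements with a capacity of 4 fills two segments and starts a third; only the segment list grows, never the payloads.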

// JSPurpleBuffer keeps references to GCThings which might affect the
// next cycle collection. It is owned only by itself and during unlink its
// self reference is broken down and the object ends up killing itself.
// If GC happens before CC, references to GCthings and the self reference are
// removed.
class JSPurpleBuffer
{
public:
    JSPurpleBuffer(JSPurpleBuffer*& aReferenceToThis)
      : mReferenceToThis(aReferenceToThis)
    {
        mReferenceToThis = this;
        NS_ADDREF_THIS();
        mozilla::HoldJSObjects(this);
    }

    ~JSPurpleBuffer()
    {
        MOZ_ASSERT(mValues.IsEmpty());
        MOZ_ASSERT(mObjects.IsEmpty());
    }

    void Destroy()
    {
        mReferenceToThis = nullptr;
        mValues.Clear();
        mObjects.Clear();
        mozilla::DropJSObjects(this);
        NS_RELEASE_THIS();
    }

    NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
    NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)

    JSPurpleBuffer*& mReferenceToThis;
    SegmentedArray<JS::Heap<JS::Value>> mValues;
    SegmentedArray<JS::Heap<JSObject*>> mObjects;
};

NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)

NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
    tmp->Destroy();
NS_IMPL_CYCLE_COLLECTION_UNLINK_END

NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
    CycleCollectionNoteChild(cb, tmp, "self");
    NS_IMPL_CYCLE_COLLECTION_TRAVERSE_SCRIPT_OBJECTS
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END

#define NS_TRACE_SEGMENTED_ARRAY(_field)                                       \
    {                                                                          \
        auto segment = tmp->_field.GetFirstSegment();                          \
        while (segment) {                                                      \
            for (uint32_t i = segment->Length(); i > 0;) {                     \
                aCallbacks.Trace(&segment->ElementAt(--i), #_field, aClosure); \
            }                                                                  \
            segment = segment->getNext();                                      \
        }                                                                      \
    }

NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
    NS_TRACE_SEGMENTED_ARRAY(mValues)
    NS_TRACE_SEGMENTED_ARRAY(mObjects)
NS_IMPL_CYCLE_COLLECTION_TRACE_END

NS_IMPL_CYCLE_COLLECTION_ROOT_NATIVE(JSPurpleBuffer, AddRef)
NS_IMPL_CYCLE_COLLECTION_UNROOT_NATIVE(JSPurpleBuffer, Release)

struct SnowWhiteObject
{
    void* mPointer;
    nsCycleCollectionParticipant* mParticipant;
    nsCycleCollectingAutoRefCnt* mRefCnt;
};

class SnowWhiteKiller : public TraceCallbacks
{
public:
    SnowWhiteKiller(nsCycleCollector *aCollector, uint32_t aMaxCount)
      : mCollector(aCollector)
    {
        MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
        while (true) {
            if (mObjects.SetCapacity(aMaxCount)) {
                break;
            }
            if (aMaxCount == 1) {
                NS_RUNTIMEABORT("Not enough memory to even delete objects!");
            }
            aMaxCount /= 2;
        }
    }

    ~SnowWhiteKiller()
    {
        for (uint32_t i = 0; i < mObjects.Length(); ++i) {
            SnowWhiteObject& o = mObjects[i];
            if (!o.mRefCnt->get() && !o.mRefCnt->IsInPurpleBuffer()) {
                mCollector->RemoveObjectFromGraph(o.mPointer);
                o.mRefCnt->stabilizeForDeletion();
                o.mParticipant->Trace(o.mPointer, *this, nullptr);
                o.mParticipant->DeleteCycleCollectable(o.mPointer);
            }
        }
    }

    void
    Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
    {
        MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
        if (!aEntry->mRefCnt->get()) {
            void *o = aEntry->mObject;
            nsCycleCollectionParticipant *cp = aEntry->mParticipant;
            CanonicalizeParticipant(&o, &cp);
            SnowWhiteObject swo = { o, cp, aEntry->mRefCnt };
            if (mObjects.AppendElement(swo)) {
                aBuffer.Remove(aEntry);
            }
        }
    }

    bool HasSnowWhiteObjects() const
    {
        return mObjects.Length() > 0;
    }

    virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
                       void* aClosure) const
    {
        void* thing = JSVAL_TO_TRACEABLE(aValue->get());
        if (thing && xpc_GCThingIsGrayCCThing(thing)) {
            mCollector->GetJSPurpleBuffer()->mValues.AppendElement(*aValue);
        }
    }

    virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
                       void* aClosure) const
    {
    }

    virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
                       void* aClosure) const
    {
        if (*aObject && xpc_GCThingIsGrayCCThing(*aObject)) {
            mCollector->GetJSPurpleBuffer()->mObjects.AppendElement(*aObject);
        }
    }

    virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
                       void* aClosure) const
    {
    }

    virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
                       void* aClosure) const
    {
    }

    virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
                       void* aClosure) const
    {
    }

private:
    nsCycleCollector *mCollector;
    FallibleTArray<SnowWhiteObject> mObjects;
};
class RemoveSkippableVisitor : public SnowWhiteKiller
{
public:
    RemoveSkippableVisitor(nsCycleCollector* aCollector,
                           uint32_t aMaxCount, bool aRemoveChildlessNodes,
                           bool aAsyncSnowWhiteFreeing,
                           CC_ForgetSkippableCallback aCb)
        : SnowWhiteKiller(aCollector, aAsyncSnowWhiteFreeing ? 0 : aMaxCount),
          mRemoveChildlessNodes(aRemoveChildlessNodes),
          mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing),
          mDispatchedDeferredDeletion(false),
          mCallback(aCb)
    {
    }

    ~RemoveSkippableVisitor()
    {
        // Note, we must call the callback before SnowWhiteKiller calls
        // DeleteCycleCollectable!
        if (mCallback) {
            mCallback();
        }
        if (HasSnowWhiteObjects()) {
            // Effectively a continuation.
            nsCycleCollector_dispatchDeferredDeletion(true);
        }
    }

    void
    Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
    {
        MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
        if (!aEntry->mRefCnt->get()) {
            if (!mAsyncSnowWhiteFreeing) {
                SnowWhiteKiller::Visit(aBuffer, aEntry);
            } else if (!mDispatchedDeferredDeletion) {
                mDispatchedDeferredDeletion = true;
                nsCycleCollector_dispatchDeferredDeletion(false);
            }
            return;
        }
        void *o = aEntry->mObject;
        nsCycleCollectionParticipant *cp = aEntry->mParticipant;
        CanonicalizeParticipant(&o, &cp);
        if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
            (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {
            return;
        }
        aBuffer.Remove(aEntry);
    }

private:
    bool mRemoveChildlessNodes;
    bool mAsyncSnowWhiteFreeing;
    bool mDispatchedDeferredDeletion;
    CC_ForgetSkippableCallback mCallback;
};
void
nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
                                bool aRemoveChildlessNodes,
                                bool aAsyncSnowWhiteFreeing,
                                CC_ForgetSkippableCallback aCb)
{
    RemoveSkippableVisitor visitor(aCollector, Count(), aRemoveChildlessNodes,
                                   aAsyncSnowWhiteFreeing, aCb);
    VisitEntries(visitor);
}
bool
nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer)
{
    CheckThreadSafety();

    bool hadSnowWhiteObjects = false;
    do {
        SnowWhiteKiller visitor(this, mPurpleBuf.Count());
        mPurpleBuf.VisitEntries(visitor);
        hadSnowWhiteObjects = hadSnowWhiteObjects ||
                              visitor.HasSnowWhiteObjects();
        if (!visitor.HasSnowWhiteObjects()) {
            break;
        }
    } while (aUntilNoSWInPurpleBuffer);
    return hadSnowWhiteObjects;
}
void
nsCycleCollector::ForgetSkippable(bool aRemoveChildlessNodes,
                                  bool aAsyncSnowWhiteFreeing)
{
    CheckThreadSafety();

    // If we remove things from the purple buffer during graph building, we may
    // lose track of an object that was mutated during graph building.
    MOZ_ASSERT(mIncrementalPhase == IdlePhase);

    if (mJSRuntime) {
        mJSRuntime->PrepareForForgetSkippable();
    }
    MOZ_ASSERT(!mScanInProgress,
               "Don't forget skippable or free snow-white while scan is in progress.");
    mPurpleBuf.RemoveSkippable(this, aRemoveChildlessNodes,
                               aAsyncSnowWhiteFreeing, mForgetSkippableCB);
}
MOZ_NEVER_INLINE void
nsCycleCollector::MarkRoots(SliceBudget &aBudget)
{
    const intptr_t kNumNodesBetweenTimeChecks = 1000;
    const intptr_t kStep = SliceBudget::CounterReset / kNumNodesBetweenTimeChecks;

    TimeLog timeLog;
    AutoRestore<bool> ar(mScanInProgress);
    MOZ_ASSERT(!mScanInProgress);
    mScanInProgress = true;
    MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);
    MOZ_ASSERT(mCurrNode);

    while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
        PtrInfo *pi = mCurrNode->GetNext();
        CC_AbortIfNull(pi);
        // We need to call the builder's Traverse() method on deleted nodes, to
        // set their firstChild() that may be read by a prior non-deleted
        // neighbor.
        mBuilder->Traverse(pi);
        if (mCurrNode->AtBlockEnd()) {
            mBuilder->SetLastChild();
        }
        aBudget.step(kStep);
    }

    if (!mCurrNode->IsDone()) {
        timeLog.Checkpoint("MarkRoots()");
        return;
    }

    if (mGraph.mRootCount > 0) {
        mBuilder->SetLastChild();
    }

    if (mBuilder->RanOutOfMemory()) {
        MOZ_ASSERT(false, "Ran out of memory while building cycle collector graph");
        CC_TELEMETRY(_OOM, true);
    }

    mBuilder = nullptr;
    mCurrNode = nullptr;
    mIncrementalPhase = ScanAndCollectWhitePhase;
    timeLog.Checkpoint("MarkRoots()");
}
////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |ScanRoots| routine.
////////////////////////////////////////////////////////////////////////
struct ScanBlackVisitor
{
    ScanBlackVisitor(uint32_t &aWhiteNodeCount, bool &aFailed)
        : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed)
    {
    }

    bool ShouldVisitNode(PtrInfo const *pi)
    {
        return pi->mColor != black;
    }

    MOZ_NEVER_INLINE void VisitNode(PtrInfo *pi)
    {
        if (pi->mColor == white)
            --mWhiteNodeCount;
        pi->mColor = black;
    }

    void Failed()
    {
        mFailed = true;
    }

private:
    uint32_t &mWhiteNodeCount;
    bool &mFailed;
};
struct scanVisitor
{
    scanVisitor(uint32_t &aWhiteNodeCount, bool &aFailed, bool aWasIncremental)
        : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed),
          mWasIncremental(aWasIncremental)
    {
    }

    bool ShouldVisitNode(PtrInfo const *pi)
    {
        return pi->mColor == grey;
    }

    MOZ_NEVER_INLINE void VisitNode(PtrInfo *pi)
    {
        if (pi->mInternalRefs > pi->mRefCount && pi->mRefCount > 0) {
            // If we found more references to an object than its ref count, then
            // the object should have already been marked as an incremental
            // root. Note that this is imprecise, because pi could have been
            // marked black for other reasons. Always fault if we weren't
            // incremental, as there were no incremental roots in that case.
            if (!mWasIncremental || pi->mColor != black) {
                Fault("traversed refs exceed refcount", pi);
            }
        }
        if (pi->mInternalRefs == pi->mRefCount || pi->mRefCount == 0) {
            pi->mColor = white;
            ++mWhiteNodeCount;
        } else {
            GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, mFailed)).Walk(pi);
            MOZ_ASSERT(pi->mColor == black,
                       "Why didn't ScanBlackVisitor make pi black?");
        }
    }

    void Failed() {
        mFailed = true;
    }

private:
    uint32_t &mWhiteNodeCount;
    bool &mFailed;
    bool mWasIncremental;
};
// Iterate over the WeakMaps. If we mark anything while iterating
// over the WeakMaps, we must iterate over all of the WeakMaps again.
void
nsCycleCollector::ScanWeakMaps()
{
    bool anyChanged;
    bool failed = false;
    do {
        anyChanged = false;
        for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
            WeakMapping *wm = &mGraph.mWeakMaps[i];

            // If any of these are null, the original object was marked black.
            uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
            uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
            uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
            uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;

            // All non-null weak mapping maps, keys and values are
            // roots (in the sense of WalkFromRoots) in the cycle
            // collector graph, and thus should have been colored
            // either black or white in ScanRoots().
            MOZ_ASSERT(mColor != grey, "Uncolored weak map");
            MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
            MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
            MOZ_ASSERT(vColor != grey, "Uncolored weak map value");

            if (mColor == black && kColor != black && kdColor == black) {
                GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, failed)).Walk(wm->mKey);
                anyChanged = true;
            }

            if (mColor == black && kColor == black && vColor != black) {
                GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, failed)).Walk(wm->mVal);
                anyChanged = true;
            }
        }
    } while (anyChanged);

    if (failed) {
        MOZ_ASSERT(false, "Ran out of memory in ScanWeakMaps");
        CC_TELEMETRY(_OOM, true);
    }
}
// Flood black from any objects in the purple buffer that are in the CC graph.
class PurpleScanBlackVisitor
{
public:
    PurpleScanBlackVisitor(GCGraph &aGraph, uint32_t &aCount, bool &aFailed)
        : mGraph(aGraph), mCount(aCount), mFailed(aFailed)
    {
    }

    void
    Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
    {
        MOZ_ASSERT(aEntry->mObject, "Entries with null mObject shouldn't be in the purple buffer.");
        MOZ_ASSERT(aEntry->mRefCnt->get() != 0, "Snow-white objects shouldn't be in the purple buffer.");

        void *obj = aEntry->mObject;
        if (!aEntry->mParticipant) {
            obj = CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj));
            MOZ_ASSERT(obj, "Don't add objects that don't participate in collection!");
        }

        PtrInfo *pi = mGraph.FindNode(obj);
        if (!pi) {
            return;
        }
        MOZ_ASSERT(pi->mParticipant, "No dead objects should be in the purple buffer.");
        if (pi->mColor == black) {
            return;
        }
        GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mCount, mFailed)).Walk(pi);
    }

private:
    GCGraph &mGraph;
    uint32_t &mCount;
    bool &mFailed;
};
// Objects that have been stored somewhere since the start of incremental
// graph building must be treated as live for this cycle collection, because
// we may not have accurate information about who holds references to them.
void
nsCycleCollector::ScanIncrementalRoots()
{
    TimeLog timeLog;

    // Reference counted objects:
    // We cleared the purple buffer at the start of the current ICC, so if a
    // refcounted object is purple, it may have been AddRef'd during the current
    // ICC. (It may also have only been released.) If that is the case, we cannot
    // be sure that the set of things pointing to the object in the CC graph
    // is accurate. Therefore, for safety, we treat any purple objects as being
    // live during the current CC. We don't remove anything from the purple
    // buffer here, so these objects will be suspected and freed in the next CC
    // if they are garbage.
    bool failed = false;
    PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mWhiteNodeCount, failed);
    mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
    timeLog.Checkpoint("ScanIncrementalRoots::fix purple");

    // Garbage collected objects:
    // If a GCed object was added to the graph with a refcount of zero, and is
    // now marked black by the GC, it was probably gray before and was exposed
    // to active JS, so it may have been stored somewhere, so it needs to be
    // treated as live.
    if (mJSRuntime) {
        nsCycleCollectionParticipant *jsParticipant = mJSRuntime->GCThingParticipant();
        nsCycleCollectionParticipant *zoneParticipant = mJSRuntime->ZoneParticipant();
        NodePool::Enumerator etor(mGraph.mNodes);

        while (!etor.IsDone()) {
            PtrInfo *pi = etor.GetNext();

            if (pi->mRefCount != 0 || pi->mColor == black) {
                continue;
            }

            if (pi->mParticipant == jsParticipant) {
                if (xpc_GCThingIsGrayCCThing(pi->mPointer)) {
                    continue;
                }
            } else if (pi->mParticipant == zoneParticipant) {
                JS::Zone *zone = static_cast<JS::Zone*>(pi->mPointer);
                if (js::ZoneGlobalsAreAllGray(zone)) {
                    continue;
                }
            } else {
                MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
            }

            GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, failed)).Walk(pi);
        }

        timeLog.Checkpoint("ScanIncrementalRoots::fix JS");
    }

    if (failed) {
        NS_ASSERTION(false, "Ran out of memory in ScanIncrementalRoots");
        CC_TELEMETRY(_OOM, true);
    }
}
void
nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild)
{
    AutoRestore<bool> ar(mScanInProgress);
    MOZ_ASSERT(!mScanInProgress);
    mScanInProgress = true;
    mWhiteNodeCount = 0;
    MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

    if (!aFullySynchGraphBuild) {
        ScanIncrementalRoots();
    }

    TimeLog timeLog;

    // On the assumption that most nodes will be black, it's
    // probably faster to use a GraphWalker than a
    // NodePool::Enumerator.
    bool failed = false;
    scanVisitor sv(mWhiteNodeCount, failed, !aFullySynchGraphBuild);
    GraphWalker<scanVisitor>(sv).WalkFromRoots(mGraph);
    timeLog.Checkpoint("ScanRoots::WalkFromRoots");

    if (failed) {
        NS_ASSERTION(false, "Ran out of memory in ScanRoots");
        CC_TELEMETRY(_OOM, true);
    }

    // Scanning weak maps must be done last.
    ScanWeakMaps();
    timeLog.Checkpoint("ScanRoots::ScanWeakMaps");

    if (mListener) {
        mListener->BeginResults();

        NodePool::Enumerator etor(mGraph.mNodes);
        while (!etor.IsDone()) {
            PtrInfo *pi = etor.GetNext();
            if (!pi->mParticipant) {
                continue;
            }
            switch (pi->mColor) {
            case black:
                if (pi->mRefCount > 0 && pi->mRefCount < UINT32_MAX &&
                    pi->mInternalRefs != pi->mRefCount) {
                    mListener->DescribeRoot((uint64_t)pi->mPointer,
                                            pi->mInternalRefs);
                }
                break;
            case white:
                mListener->DescribeGarbage((uint64_t)pi->mPointer);
                break;
            case grey:
                // With incremental CC, we can end up with a grey object after
                // scanning if it is only reachable from an object that gets freed.
                break;
            }
        }

        mListener->End();
        mListener = nullptr;
        timeLog.Checkpoint("ScanRoots::listener");
    }
}
////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |CollectWhite| routine, somewhat modified.
////////////////////////////////////////////////////////////////////////

bool
nsCycleCollector::CollectWhite()
{
    // Explanation of "somewhat modified": we have no way to collect the
    // set of whites "all at once", we have to ask each of them to drop
    // their outgoing links and assume this will cause the garbage cycle
    // to *mostly* self-destruct (except for the reference we continue
    // to hold).
    //
    // To do this "safely" we must make sure that the white nodes we're
    // operating on are stable for the duration of our operation. So we
    // make 3 sets of calls to language runtimes:
    //
    //   - Root(whites), which should pin the whites in memory.
    //   - Unlink(whites), which drops outgoing links on each white.
    //   - Unroot(whites), which returns the whites to normal GC.

    TimeLog timeLog;
    nsAutoTArray<PtrInfo*, 4000> whiteNodes;

    MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

    whiteNodes.SetCapacity(mWhiteNodeCount);
    uint32_t numWhiteGCed = 0;

    NodePool::Enumerator etor(mGraph.mNodes);
    while (!etor.IsDone())
    {
        PtrInfo *pinfo = etor.GetNext();
        if (pinfo->mColor == white && pinfo->mParticipant) {
            whiteNodes.AppendElement(pinfo);
            pinfo->mParticipant->Root(pinfo->mPointer);
            if (pinfo->mRefCount == 0) {
                // only JS objects have a refcount of 0
                ++numWhiteGCed;
            }
        }
    }

    uint32_t count = whiteNodes.Length();
    MOZ_ASSERT(numWhiteGCed <= count,
               "More freed GCed nodes than total freed nodes.");
    mResults.mFreedRefCounted += count - numWhiteGCed;
    mResults.mFreedGCed += numWhiteGCed;

    timeLog.Checkpoint("CollectWhite::Root");

    if (mBeforeUnlinkCB) {
        mBeforeUnlinkCB();
        timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");
    }

    for (uint32_t i = 0; i < count; ++i) {
        PtrInfo *pinfo = whiteNodes.ElementAt(i);
        MOZ_ASSERT(pinfo->mParticipant, "Unlink shouldn't see objects removed from graph.");
        pinfo->mParticipant->Unlink(pinfo->mPointer);
#ifdef DEBUG
        if (mJSRuntime) {
            mJSRuntime->AssertNoObjectsToTrace(pinfo->mPointer);
        }
#endif
    }
    timeLog.Checkpoint("CollectWhite::Unlink");

    for (uint32_t i = 0; i < count; ++i) {
        PtrInfo *pinfo = whiteNodes.ElementAt(i);
        MOZ_ASSERT(pinfo->mParticipant, "Unroot shouldn't see objects removed from graph.");
        pinfo->mParticipant->Unroot(pinfo->mPointer);
    }
    timeLog.Checkpoint("CollectWhite::Unroot");

    nsCycleCollector_dispatchDeferredDeletion(false);
    mIncrementalPhase = CleanupPhase;

    return count > 0;
}
////////////////////////
// Memory reporting
////////////////////////

MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)

NS_IMETHODIMP
nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
                                 nsISupports* aData)
{
    size_t objectSize, graphNodesSize, graphEdgesSize, weakMapsSize,
           purpleBufferSize;
    SizeOfIncludingThis(CycleCollectorMallocSizeOf,
                        &objectSize,
                        &graphNodesSize, &graphEdgesSize,
                        &weakMapsSize,
                        &purpleBufferSize);

#define REPORT(_path, _amount, _desc)                                         \
    do {                                                                      \
        size_t amount = _amount;  /* evaluate |_amount| only once */          \
        if (amount > 0) {                                                     \
            nsresult rv;                                                      \
            rv = aHandleReport->Callback(EmptyCString(),                      \
                                         NS_LITERAL_CSTRING(_path),          \
                                         KIND_HEAP, UNITS_BYTES, _amount,    \
                                         NS_LITERAL_CSTRING(_desc),          \
                                         aData);                             \
            if (NS_WARN_IF(NS_FAILED(rv)))                                    \
                return rv;                                                    \
        }                                                                     \
    } while (0)

    REPORT("explicit/cycle-collector/collector-object", objectSize,
           "Memory used for the cycle collector object itself.");

    REPORT("explicit/cycle-collector/graph-nodes", graphNodesSize,
           "Memory used for the nodes of the cycle collector's graph. "
           "This should be zero when the collector is idle.");

    REPORT("explicit/cycle-collector/graph-edges", graphEdgesSize,
           "Memory used for the edges of the cycle collector's graph. "
           "This should be zero when the collector is idle.");

    REPORT("explicit/cycle-collector/weak-maps", weakMapsSize,
           "Memory used for the representation of weak maps in the "
           "cycle collector's graph. "
           "This should be zero when the collector is idle.");

    REPORT("explicit/cycle-collector/purple-buffer", purpleBufferSize,
           "Memory used for the cycle collector's purple buffer.");

#undef REPORT

    return NS_OK;
}
////////////////////////////////////////////////////////////////////////
// Collector implementation
////////////////////////////////////////////////////////////////////////

nsCycleCollector::nsCycleCollector() :
    mActivelyCollecting(false),
    mScanInProgress(false),
    mJSRuntime(nullptr),
    mIncrementalPhase(IdlePhase),
    mThread(NS_GetCurrentThread()),
    mWhiteNodeCount(0),
    mBeforeUnlinkCB(nullptr),
    mForgetSkippableCB(nullptr),
    mUnmergedNeeded(0),
    mMergedInARow(0),
    mJSPurpleBuffer(nullptr)
{
}

nsCycleCollector::~nsCycleCollector()
{
    UnregisterWeakMemoryReporter(this);
}
void
nsCycleCollector::RegisterJSRuntime(CycleCollectedJSRuntime *aJSRuntime)
{
    if (mJSRuntime)
        Fault("multiple registrations of cycle collector JS runtime", aJSRuntime);

    mJSRuntime = aJSRuntime;

    // We can't register as a reporter in nsCycleCollector() because that runs
    // before the memory reporter manager is initialized. So we do it here
    // instead.
    static bool registered = false;
    if (!registered) {
        RegisterWeakMemoryReporter(this);
        registered = true;
    }
}

void
nsCycleCollector::ForgetJSRuntime()
{
    if (!mJSRuntime)
        Fault("forgetting non-registered cycle collector JS runtime");

    mJSRuntime = nullptr;
}
#ifdef DEBUG
static bool
HasParticipant(void *aPtr, nsCycleCollectionParticipant *aParti)
{
    if (aParti) {
        return true;
    }

    nsXPCOMCycleCollectionParticipant *xcp;
    ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
    return xcp != nullptr;
}
#endif

MOZ_ALWAYS_INLINE void
nsCycleCollector::Suspect(void *aPtr, nsCycleCollectionParticipant *aParti,
                          nsCycleCollectingAutoRefCnt *aRefCnt)
{
    CheckThreadSafety();

    // Re-entering ::Suspect during collection used to be a fault, but
    // we are canonicalizing nsISupports pointers using QI, so we will
    // see some spurious refcount traffic here.

    if (MOZ_UNLIKELY(mScanInProgress)) {
        return;
    }

    MOZ_ASSERT(aPtr, "Don't suspect null pointers");

    MOZ_ASSERT(HasParticipant(aPtr, aParti),
               "Suspected nsISupports pointer must QI to nsXPCOMCycleCollectionParticipant");

    mPurpleBuf.Put(aPtr, aParti, aRefCnt);
}
void
nsCycleCollector::CheckThreadSafety()
{
#ifdef DEBUG
    nsIThread* currentThread = NS_GetCurrentThread();
    // XXXkhuey we can be called so late in shutdown that NS_GetCurrentThread
    // returns null (after the thread manager has shut down)
    MOZ_ASSERT(mThread == currentThread || !currentThread);
#endif
}
// The cycle collector uses the mark bitmap to discover what JS objects
// were reachable only from XPConnect roots that might participate in
// cycles. We ask the JS runtime whether we need to force a GC before
// this CC. It returns true on startup (before the mark bits have been set),
// and also when UnmarkGray has run out of stack. We also force GCs on shut
// down to collect cycles involving both DOM and JS.
void
nsCycleCollector::FixGrayBits(bool aForceGC)
{
    CheckThreadSafety();

    if (!mJSRuntime)
        return;

    if (!aForceGC) {
        mJSRuntime->FixWeakMappingGrayBits();

        bool needGC = mJSRuntime->NeedCollect();
        // Only do a telemetry ping for non-shutdown CCs.
        CC_TELEMETRY(_NEED_GC, needGC);
        if (!needGC)
            return;
        mResults.mForcedGC = true;
    }

    TimeLog timeLog;
    mJSRuntime->Collect(aForceGC ? JS::gcreason::SHUTDOWN_CC : JS::gcreason::CC_FORCED);
    timeLog.Checkpoint("GC()");
}
void
nsCycleCollector::CleanupAfterCollection()
{
    MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
    mGraph.Clear();

#ifdef XP_OS2
    // Now that the cycle collector has freed some memory, we can try to
    // force the C library to give back as much memory to the system as
    // possible.
    _heapmin();
#endif

    uint32_t interval = (uint32_t) ((TimeStamp::Now() - mCollectionStart).ToMilliseconds());
#ifdef COLLECT_TIME_DEBUG
    printf("cc: total cycle collector time was %ums\n", interval);
    printf("cc: visited %u ref counted and %u GCed objects, freed %d ref counted and %d GCed objects.\n",
           mResults.mVisitedRefCounted, mResults.mVisitedGCed,
           mResults.mFreedRefCounted, mResults.mFreedGCed);
    printf("cc: \n");
#endif
    CC_TELEMETRY( , interval);
    CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
    CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
    CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);

    if (mJSRuntime) {
        mJSRuntime->EndCycleCollectionCallback(mResults);
    }
    mIncrementalPhase = IdlePhase;
}
void
nsCycleCollector::ShutdownCollect()
{
    SliceBudget unlimitedBudget;
    uint32_t i;
    for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS; ++i) {
        if (!Collect(ShutdownCC, unlimitedBudget, nullptr)) {
            break;
        }
    }
    NS_ASSERTION(i < NORMAL_SHUTDOWN_COLLECTIONS, "Extra shutdown CC");
}

static void
PrintPhase(const char *aPhase)
{
#ifdef DEBUG_PHASES
    printf("cc: begin %s on %s\n", aPhase,
           NS_IsMainThread() ? "mainthread" : "worker");
#endif
}
bool
nsCycleCollector::Collect(ccType aCCType,
                          SliceBudget &aBudget,
                          nsICycleCollectorListener *aManualListener)
{
    CheckThreadSafety();

    // This can legitimately happen in a few cases. See bug 383651.
    if (mActivelyCollecting) {
        return false;
    }
    mActivelyCollecting = true;

    bool startedIdle = (mIncrementalPhase == IdlePhase);
    bool collectedAny = false;

    // If the CC started idle, it will call BeginCollection, which
    // will do FreeSnowWhite, so it doesn't need to be done here.
    if (!startedIdle) {
        FreeSnowWhite(true);
    }

    bool finished = false;
    do {
        switch (mIncrementalPhase) {
        case IdlePhase:
            PrintPhase("BeginCollection");
            BeginCollection(aCCType, aManualListener);
            break;
        case GraphBuildingPhase:
            PrintPhase("MarkRoots");
            MarkRoots(aBudget);
            break;
        case ScanAndCollectWhitePhase:
            // We do ScanRoots and CollectWhite in a single slice to ensure
            // that we won't unlink a live object if a weak reference is
            // promoted to a strong reference after ScanRoots has finished.
            // See bug 926533.
            PrintPhase("ScanRoots");
            ScanRoots(startedIdle);
            PrintPhase("CollectWhite");
            collectedAny = CollectWhite();
            break;
        case CleanupPhase:
            PrintPhase("CleanupAfterCollection");
            CleanupAfterCollection();
            finished = true;
            break;
        }
    } while (!aBudget.checkOverBudget() && !finished);

    mActivelyCollecting = false;

    if (aCCType != SliceCC && !startedIdle) {
        // We were in the middle of an incremental CC (using its own listener).
        // Somebody has forced a CC, so after having finished out the current CC,
        // run the CC again using the new listener.
        MOZ_ASSERT(mIncrementalPhase == IdlePhase);
        if (Collect(aCCType, aBudget, aManualListener)) {
            collectedAny = true;
        }
    }

    MOZ_ASSERT_IF(aCCType != SliceCC, mIncrementalPhase == IdlePhase);

    return collectedAny;
}
// Any JS objects we have in the graph could die when we GC, but we
// don't want to abandon the current CC, because the graph contains
// information about purple roots. So we synchronously finish off
// the current CC.
void
nsCycleCollector::PrepareForGarbageCollection()
{
    if (mIncrementalPhase == IdlePhase) {
        MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
        MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
        if (mJSPurpleBuffer) {
            mJSPurpleBuffer->Destroy();
        }
        return;
    }

    SliceBudget unlimitedBudget;
    PrintPhase("PrepareForGarbageCollection");
    // Use SliceCC because we only want to finish the CC in progress.
    Collect(SliceCC, unlimitedBudget, nullptr);
    MOZ_ASSERT(mIncrementalPhase == IdlePhase);
}
// Don't merge too many times in a row, and do at least a minimum
// number of unmerged CCs in a row.
static const uint32_t kMinConsecutiveUnmerged = 3;
static const uint32_t kMaxConsecutiveMerged = 3;

bool
nsCycleCollector::ShouldMergeZones(ccType aCCType)
{
    if (!mJSRuntime) {
        return false;
    }

    MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
    MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);

    if (mMergedInARow == kMaxConsecutiveMerged) {
        MOZ_ASSERT(mUnmergedNeeded == 0);
        mUnmergedNeeded = kMinConsecutiveUnmerged;
    }

    if (mUnmergedNeeded > 0) {
        mUnmergedNeeded--;
        mMergedInARow = 0;
        return false;
    }

    if (aCCType == SliceCC && mJSRuntime->UsefulToMergeZones()) {
        mMergedInARow++;
        return true;
    } else {
        mMergedInARow = 0;
        return false;
    }
}
void
nsCycleCollector::BeginCollection(ccType aCCType,
                                  nsICycleCollectorListener *aManualListener)
{
    TimeLog timeLog;
    MOZ_ASSERT(mIncrementalPhase == IdlePhase);

    mCollectionStart = TimeStamp::Now();

    if (mJSRuntime) {
        mJSRuntime->BeginCycleCollectionCallback();
        timeLog.Checkpoint("BeginCycleCollectionCallback()");
    }

    bool isShutdown = (aCCType == ShutdownCC);

    // Set up the listener for this CC.
    MOZ_ASSERT_IF(isShutdown, !aManualListener);
    MOZ_ASSERT(!mListener, "Forgot to clear a previous listener?");
    mListener = aManualListener;
    aManualListener = nullptr;
    if (!mListener) {
        if (mParams.mLogAll || (isShutdown && mParams.mLogShutdown)) {
            nsRefPtr<nsCycleCollectorLogger> logger = new nsCycleCollectorLogger();
            if (isShutdown && mParams.mAllTracesAtShutdown) {
                logger->SetAllTraces();
            }
            mListener = logger.forget();
        }
    }

    bool forceGC = isShutdown;
    if (!forceGC && mListener) {
        // On a WantAllTraces CC, force a synchronous global GC to prevent
        // hijinks from ForgetSkippable and compartmental GCs.
        mListener->GetWantAllTraces(&forceGC);
    }
    FixGrayBits(forceGC);

    FreeSnowWhite(true);

    if (mListener && NS_FAILED(mListener->Begin())) {
        mListener = nullptr;
    }

    // Set up the data structures for building the graph.
    mGraph.Init();
    mResults.Init();
    bool mergeZones = ShouldMergeZones(aCCType);
    mResults.mMergedZones = mergeZones;

    MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
    mBuilder = new GCGraphBuilder(mGraph, mResults, mJSRuntime, mListener, mergeZones);

    if (mJSRuntime) {
        mJSRuntime->TraverseRoots(*mBuilder);
        timeLog.Checkpoint("mJSRuntime->TraverseRoots()");
    }

    AutoRestore<bool> ar(mScanInProgress);
    MOZ_ASSERT(!mScanInProgress);
    mScanInProgress = true;
    mPurpleBuf.SelectPointers(*mBuilder);
    timeLog.Checkpoint("SelectPointers()");

    // We've finished adding roots, and everything in the graph is a root.
    mGraph.mRootCount = mGraph.MapCount();

    mCurrNode = new NodePool::Enumerator(mGraph.mNodes);
    mIncrementalPhase = GraphBuildingPhase;
}
uint32_t
nsCycleCollector::SuspectedCount()
{
    CheckThreadSafety();
    return mPurpleBuf.Count();
}

void
nsCycleCollector::Shutdown()
{
    CheckThreadSafety();

    // Always delete snow white objects.
    FreeSnowWhite(true);

#ifndef DEBUG
    if (PR_GetEnv("XPCOM_CC_RUN_DURING_SHUTDOWN"))
#endif
    {
        ShutdownCollect();
    }
}
void
nsCycleCollector::RemoveObjectFromGraph(void *aObj)
{
    if (mIncrementalPhase == IdlePhase) {
        return;
    }

    if (PtrInfo *pinfo = mGraph.FindNode(aObj)) {
        mGraph.RemoveNodeFromMap(aObj);

        pinfo->mPointer = nullptr;
        pinfo->mParticipant = nullptr;
    }
}

void
nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                                      size_t *aObjectSize,
                                      size_t *aGraphNodesSize,
                                      size_t *aGraphEdgesSize,
                                      size_t *aWeakMapsSize,
                                      size_t *aPurpleBufferSize) const
{
    *aObjectSize = aMallocSizeOf(this);

    mGraph.SizeOfExcludingThis(aMallocSizeOf, aGraphNodesSize, aGraphEdgesSize,
                               aWeakMapsSize);

    *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);

    // These fields are deliberately not measured:
    // - mJSRuntime: because it's non-owning and measured by JS reporters.
    // - mParams: because it only contains scalars.
}
JSPurpleBuffer*
nsCycleCollector::GetJSPurpleBuffer()
{
    if (!mJSPurpleBuffer) {
        // JSPurpleBuffer keeps itself alive, but we need to create it in such
        // a way that it ends up in the normal purple buffer. That happens when
        // the nsRefPtr goes out of scope and calls Release.
        nsRefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
    }
    return mJSPurpleBuffer;
}
////////////////////////////////////////////////////////////////////////
// Module public API (exported in nsCycleCollector.h)
// Just functions that redirect into the singleton, once it's built.
////////////////////////////////////////////////////////////////////////

void
nsCycleCollector_registerJSRuntime(CycleCollectedJSRuntime *rt)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);
    // But we shouldn't already have a runtime.
    MOZ_ASSERT(!data->mRuntime);

    data->mRuntime = rt;
    data->mCollector->RegisterJSRuntime(rt);
}

void
nsCycleCollector_forgetJSRuntime()
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    // And we shouldn't have already forgotten our runtime.
    MOZ_ASSERT(data->mRuntime);

    // But it may have shut down already.
    if (data->mCollector) {
        data->mCollector->ForgetJSRuntime();
        data->mRuntime = nullptr;
    } else {
        data->mRuntime = nullptr;
        delete data;
        sCollectorData.set(nullptr);
    }
}

/* static */ CycleCollectedJSRuntime*
CycleCollectedJSRuntime::Get()
{
    CollectorData* data = sCollectorData.get();
    if (data) {
        return data->mRuntime;
    }
    return nullptr;
}
namespace mozilla {
namespace cyclecollector {

void
HoldJSObjectsImpl(void* aHolder, nsScriptObjectTracer* aTracer)
{
    CollectorData* data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->AddJSHolder(aHolder, aTracer);
}

void
HoldJSObjectsImpl(nsISupports* aHolder)
{
    nsXPCOMCycleCollectionParticipant* participant;
    CallQueryInterface(aHolder, &participant);
    MOZ_ASSERT(participant, "Failed to QI to nsXPCOMCycleCollectionParticipant!");
    MOZ_ASSERT(participant->CheckForRightISupports(aHolder),
               "The result of QIing a JS holder should be the same as ToSupports");

    HoldJSObjectsImpl(aHolder, participant);
}

void
DropJSObjectsImpl(void* aHolder)
{
    CollectorData* data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->RemoveJSHolder(aHolder);
}

void
DropJSObjectsImpl(nsISupports* aHolder)
{
#ifdef DEBUG
    nsXPCOMCycleCollectionParticipant* participant;
    CallQueryInterface(aHolder, &participant);
    MOZ_ASSERT(participant, "Failed to QI to nsXPCOMCycleCollectionParticipant!");
    MOZ_ASSERT(participant->CheckForRightISupports(aHolder),
               "The result of QIing a JS holder should be the same as ToSupports");
#endif
    DropJSObjectsImpl(static_cast<void*>(aHolder));
}

#ifdef DEBUG
bool
IsJSHolder(void* aHolder)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    return data->mRuntime->IsJSHolder(aHolder);
}
#endif

void
DeferredFinalize(nsISupports* aSupports)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->DeferredFinalize(aSupports);
}

void
DeferredFinalize(DeferredFinalizeAppendFunction aAppendFunc,
                 DeferredFinalizeFunction aFunc,
                 void* aThing)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->DeferredFinalize(aAppendFunc, aFunc, aThing);
}

} // namespace cyclecollector
} // namespace mozilla
MOZ_NEVER_INLINE static void
SuspectAfterShutdown(void* n, nsCycleCollectionParticipant* cp,
                     nsCycleCollectingAutoRefCnt* aRefCnt,
                     bool* aShouldDelete)
{
    if (aRefCnt->get() == 0) {
        if (!aShouldDelete) {
            // The CC is shut down, so we can't be in the middle of an ICC.
            CanonicalizeParticipant(&n, &cp);
            aRefCnt->stabilizeForDeletion();
            cp->DeleteCycleCollectable(n);
        } else {
            *aShouldDelete = true;
        }
    } else {
        // Make sure we'll get called again.
        aRefCnt->RemoveFromPurpleBuffer();
    }
}

void
NS_CycleCollectorSuspect3(void *n, nsCycleCollectionParticipant *cp,
                          nsCycleCollectingAutoRefCnt *aRefCnt,
                          bool* aShouldDelete)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);

    if (MOZ_LIKELY(data->mCollector)) {
        data->mCollector->Suspect(n, cp, aRefCnt);
        return;
    }
    SuspectAfterShutdown(n, cp, aRefCnt, aShouldDelete);
}
uint32_t
nsCycleCollector_suspectedCount()
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);

    if (!data->mCollector) {
        return 0;
    }

    return data->mCollector->SuspectedCount();
}

bool
nsCycleCollector_init()
{
    MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
    MOZ_ASSERT(!sCollectorData.initialized(), "Called twice!?");

    return sCollectorData.init();
}

void
nsCycleCollector_startup()
{
    MOZ_ASSERT(sCollectorData.initialized(),
               "Forgot to call nsCycleCollector_init!");
    if (sCollectorData.get()) {
        MOZ_CRASH();
    }

    CollectorData* data = new CollectorData;
    data->mCollector = new nsCycleCollector();
    data->mRuntime = nullptr;

    sCollectorData.set(data);
}
void
nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    data->mCollector->SetBeforeUnlinkCallback(aCB);
}

void
nsCycleCollector_setForgetSkippableCallback(CC_ForgetSkippableCallback aCB)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    data->mCollector->SetForgetSkippableCallback(aCB);
}

void
nsCycleCollector_forgetSkippable(bool aRemoveChildlessNodes,
                                 bool aAsyncSnowWhiteFreeing)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    PROFILER_LABEL("CC", "nsCycleCollector_forgetSkippable");
    TimeLog timeLog;
    data->mCollector->ForgetSkippable(aRemoveChildlessNodes,
                                      aAsyncSnowWhiteFreeing);
    timeLog.Checkpoint("ForgetSkippable()");
}

void
nsCycleCollector_dispatchDeferredDeletion(bool aContinuation)
{
    CollectorData *data = sCollectorData.get();

    if (!data || !data->mRuntime) {
        return;
    }

    data->mRuntime->DispatchDeferredDeletion(aContinuation);
}
bool
nsCycleCollector_doDeferredDeletion()
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);
    MOZ_ASSERT(data->mRuntime);

    return data->mCollector->FreeSnowWhite(false);
}

void
nsCycleCollector_collect(nsICycleCollectorListener *aManualListener)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    PROFILER_LABEL("CC", "nsCycleCollector_collect");
    SliceBudget unlimitedBudget;
    data->mCollector->Collect(ManualCC, unlimitedBudget, aManualListener);
}

void
nsCycleCollector_collectSlice(int64_t aSliceTime)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    PROFILER_LABEL("CC", "nsCycleCollector_collectSlice");
    SliceBudget budget;
    if (aSliceTime > 0) {
        budget = SliceBudget::TimeBudget(aSliceTime);
    } else if (aSliceTime == 0) {
        budget = SliceBudget::WorkBudget(1);
    }
    data->mCollector->Collect(SliceCC, budget, nullptr);
}

void
nsCycleCollector_prepareForGarbageCollection()
{
    CollectorData *data = sCollectorData.get();

    MOZ_ASSERT(data);

    if (!data->mCollector) {
        return;
    }

    data->mCollector->PrepareForGarbageCollection();
}

void
nsCycleCollector_shutdown()
{
    CollectorData *data = sCollectorData.get();

    if (data) {
        MOZ_ASSERT(data->mCollector);
        PROFILER_LABEL("CC", "nsCycleCollector_shutdown");
        data->mCollector->Shutdown();
        data->mCollector = nullptr;
        if (!data->mRuntime) {
            delete data;
            sCollectorData.set(nullptr);
        }
    }
}