/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine a node is
// black, it's ok to forget about it and drop it from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white
// object has a reference count of zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, it will tell us when it either transitions back to being
// black (incremented refcount) or is ultimately deleted.
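// The coloring described above can be illustrated with a small standalone
// sketch of the synchronous Bacon-Rajan "trial deletion": subtract the
// references that originate inside the candidate subgraph, then anything
// left with a zero count is only being kept alive by a cycle. This is
// illustrative only; the names and structures here are hypothetical and
// not the cycle collector's real ones.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum Color { kBlack, kWhite, kGrey };

struct Node {
  uint32_t refCount;        // external + internal references combined
  Color color = kBlack;
  std::vector<Node*> edges; // outgoing internal references
};

// Phase 1: walk from a purple candidate, turning nodes grey and
// discounting the references that come from inside the graph itself.
static void MarkGrey(Node* n) {
  if (n->color == kGrey) return;
  n->color = kGrey;
  for (Node* child : n->edges) {
    child->refCount--;      // this reference is internal, discount it
    MarkGrey(child);
  }
}

// Re-blacken everything reachable from a live node, restoring the
// internal counts that were subtracted.
static void ScanBlack(Node* n) {
  n->color = kBlack;
  for (Node* child : n->edges) {
    child->refCount++;
    if (child->color != kBlack) ScanBlack(child);
  }
}

// Phase 2: a grey node with no remaining (external) references is part
// of a garbage cycle and turns white; an externally referenced node is
// live and turns black, along with everything it reaches.
static void Scan(Node* n) {
  if (n->color != kGrey) return;
  if (n->refCount > 0) {
    ScanBlack(n);
  } else {
    n->color = kWhite;
    for (Node* child : n->edges) Scan(child);
  }
}
```

// A two-node cycle with no external references ends up entirely white;
// giving either node one external reference turns the whole cycle black.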
// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.
//
// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// still alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, don't collect it during the current collection. This avoids having
// to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
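// The Levanoni-Petrank-style barrier described above can be sketched in
// miniature: while an incremental collection is in progress, AddRef (not
// just Release) records the object, and anything so recorded is treated
// as live for the current collection. All names below are hypothetical,
// not the real nsCycleCollectingAutoRefCnt / purple buffer API.

```cpp
#include <cassert>
#include <set>

struct MiniCC {
  bool iccInProgress = false;
  std::set<void*> purpleBuffer;   // objects suspected since graph build began

  void OnAddRef(void* obj) {
    if (iccInProgress) {
      purpleBuffer.insert(obj);   // conservative: don't collect this object now
    }
  }
  void OnRelease(void* obj) {
    purpleBuffer.insert(obj);     // normal suspicion on refcount decrement
  }
  // After graph building: anything in the purple buffer is treated as
  // live for this collection; it stays suspected for the next CC.
  bool TreatAsLive(void* obj) const {
    return purpleBuffer.count(obj) != 0;
  }
};
```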
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
//  - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//    this operation loses ISupports identity (like nsIClassInfo).
//  - Additionally, the operation |traverse| on the resulting
//    nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//    adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
//  - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
#if !defined(__MINGW32__)
#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/HoldDropJSObjects.h"
/* This must occur *after* base/process_util.h to avoid typedef conflicts. */
#include "mozilla/MemoryReporting.h"
#include "mozilla/LinkedList.h"

#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollectionNoteRootCallback.h"

#include "nsCycleCollector.h"
#include "nsThreadUtils.h"
#include "nsXULAppAPI.h"

#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsIConsoleService.h"
#include "mozilla/Attributes.h"
#include "nsICycleCollectorListener.h"
#include "nsIMemoryReporter.h"

#include "nsDumpUtils.h"
#include "xpcpublic.h"
#include "GeckoProfiler.h"
#include "js/SliceBudget.h"

#include "mozilla/Likely.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"
using namespace mozilla;
//#define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph
// construction.
//#define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to "all",
// log either. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to
// "plugins", only automatically log plugin CCs. If set to "all", log
// everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector
// logging done will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap. If set to "shutdown", only shutdown logging will be WantAllTraces.
// The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set,
// run cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener).
//
// Various parameters of this collector can be tuned using environment
// variables.
struct nsCycleCollectorParams
{
  bool mLogAll;
  bool mLogShutdown;
  bool mAllTracesAll;
  bool mAllTracesShutdown;
  bool mLogThisThread;

  nsCycleCollectorParams() :
    mLogAll(PR_GetEnv("MOZ_CC_LOG_ALL") != nullptr),
    mLogShutdown(PR_GetEnv("MOZ_CC_LOG_SHUTDOWN") != nullptr),
    mAllTracesAll(false),
    mAllTracesShutdown(false)
  {
    const char* logThreadEnv = PR_GetEnv("MOZ_CC_LOG_THREAD");
    bool threadLogging = true;
    if (logThreadEnv && !!strcmp(logThreadEnv, "all")) {
      if (NS_IsMainThread()) {
        threadLogging = !strcmp(logThreadEnv, "main");
      } else {
        threadLogging = !strcmp(logThreadEnv, "worker");
      }
    }

    const char* logProcessEnv = PR_GetEnv("MOZ_CC_LOG_PROCESS");
    bool processLogging = true;
    if (logProcessEnv && !!strcmp(logProcessEnv, "all")) {
      switch (XRE_GetProcessType()) {
        case GeckoProcessType_Default:
          processLogging = !strcmp(logProcessEnv, "main");
          break;
        case GeckoProcessType_Plugin:
          processLogging = !strcmp(logProcessEnv, "plugins");
          break;
        case GeckoProcessType_Content:
          processLogging = !strcmp(logProcessEnv, "content");
          break;
        default:
          processLogging = false;
          break;
      }
    }
    mLogThisThread = threadLogging && processLogging;

    const char* allTracesEnv = PR_GetEnv("MOZ_CC_ALL_TRACES");
    if (allTracesEnv) {
      if (!strcmp(allTracesEnv, "all")) {
        mAllTracesAll = true;
      } else if (!strcmp(allTracesEnv, "shutdown")) {
        mAllTracesShutdown = true;
      }
    }
  }

  bool LogThisCC(bool aIsShutdown)
  {
    return (mLogAll || (aIsShutdown && mLogShutdown)) && mLogThisThread;
  }

  bool AllTracesThisCC(bool aIsShutdown)
  {
    return mAllTracesAll || (aIsShutdown && mAllTracesShutdown);
  }
};
#ifdef COLLECT_TIME_DEBUG
class TimeLog
{
public:
  TimeLog() : mLastCheckpoint(TimeStamp::Now())
  {
  }

  void
  Checkpoint(const char* aEvent)
  {
    TimeStamp now = TimeStamp::Now();
    double dur = (now - mLastCheckpoint).ToMilliseconds();
    printf("cc: %s took %.1fms\n", aEvent, dur);
    mLastCheckpoint = now;
  }

private:
  TimeStamp mLastCheckpoint;
};
#else
class TimeLog
{
public:
  TimeLog()
  {
  }

  void Checkpoint(const char* aEvent)
  {
  }
};
#endif
////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////

// EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
// However, at the end of a block, the last two pointers are a null
// and then a void** pointing to the next block. This allows
// EdgePool::Iterators to be a single word but still capable of crossing
// block boundaries.
class EdgePool
{
public:
  EdgePool()
  {
    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

  ~EdgePool()
  {
    MOZ_ASSERT(!mSentinelAndBlocks[0].block &&
               !mSentinelAndBlocks[1].block,
               "Didn't call Clear()?");
  }

  void Clear()
  {
    Block* b = Blocks();
    while (b) {
      Block* next = b->Next();
      delete b;
      b = next;
    }

    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return !mSentinelAndBlocks[0].block &&
           !mSentinelAndBlocks[1].block;
  }
#endif

private:
  struct Block;
  union PtrInfoOrBlock
  {
    // Use a union to avoid reinterpret_cast and the ensuing
    // potential aliasing bugs.
    PtrInfo* ptrInfo;
    Block* block;
  };
  struct Block
  {
    enum { BlockSize = 16 * 1024 };

    PtrInfoOrBlock mPointers[BlockSize];
    Block()
    {
      mPointers[BlockSize - 2].block = nullptr; // sentinel
      mPointers[BlockSize - 1].block = nullptr; // next block pointer
    }
    Block*& Next()
    {
      return mPointers[BlockSize - 1].block;
    }
    PtrInfoOrBlock* Start()
    {
      return &mPointers[0];
    }
    PtrInfoOrBlock* End()
    {
      return &mPointers[BlockSize - 2];
    }
  };

  // Store the null sentinel so that we can have valid iterators
  // before adding any edges and without adding any blocks.
  PtrInfoOrBlock mSentinelAndBlocks[2];

  Block*& Blocks()
  {
    return mSentinelAndBlocks[1].block;
  }
  Block* Blocks() const
  {
    return mSentinelAndBlocks[1].block;
  }

public:
  class Iterator
  {
  public:
    Iterator() : mPointer(nullptr) {}
    explicit Iterator(PtrInfoOrBlock* aPointer) : mPointer(aPointer) {}
    Iterator(const Iterator& aOther) : mPointer(aOther.mPointer) {}

    Iterator& operator++()
    {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        mPointer = (mPointer + 1)->block->mPointers;
      }
      ++mPointer;
      return *this;
    }

    PtrInfo* operator*() const
    {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        return (mPointer + 1)->block->mPointers->ptrInfo;
      }
      return mPointer->ptrInfo;
    }
    bool operator==(const Iterator& aOther) const
    {
      return mPointer == aOther.mPointer;
    }
    bool operator!=(const Iterator& aOther) const
    {
      return mPointer != aOther.mPointer;
    }

#ifdef DEBUG_CC_GRAPH
    bool Initialized() const
    {
      return mPointer != nullptr;
    }
#endif

  private:
    PtrInfoOrBlock* mPointer;
  };

  class Builder;
  friend class Builder;

  class Builder
  {
  public:
    explicit Builder(EdgePool& aPool)
      : mCurrent(&aPool.mSentinelAndBlocks[0])
      , mBlockEnd(&aPool.mSentinelAndBlocks[0])
      , mNextBlockPtr(&aPool.Blocks())
    {
    }

    Iterator Mark()
    {
      return Iterator(mCurrent);
    }

    void Add(PtrInfo* aEdge)
    {
      if (mCurrent == mBlockEnd) {
        Block* b = new Block();
        *mNextBlockPtr = b;
        mCurrent = b->Start();
        mBlockEnd = b->End();
        mNextBlockPtr = &b->Next();
      }
      (mCurrent++)->ptrInfo = aEdge;
    }

  private:
    // mBlockEnd points to space for null sentinel
    PtrInfoOrBlock* mCurrent;
    PtrInfoOrBlock* mBlockEnd;
    Block** mNextBlockPtr;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    size_t n = 0;
    Block* b = Blocks();
    while (b) {
      n += aMallocSizeOf(b);
      b = b->Next();
    }
    return n;
  }
};
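// The block-crossing trick above can be shown in a standalone miniature:
// each fixed-size block of a pointer/block union ends with a null sentinel
// followed by a next-block pointer, so an iterator stays a single word yet
// still hops between blocks. The names below are hypothetical, not the
// real EdgePool types, and the block size is shrunk for illustration.

```cpp
#include <cassert>
#include <cstddef>

struct MiniBlock;

union Slot {
  int* item;        // stands in for PtrInfo*
  MiniBlock* block; // stands in for the next-block link
};

struct MiniBlock {
  static const size_t kSize = 4; // tiny for illustration; EdgePool uses 16K
  Slot slots[kSize];
  MiniBlock() {
    slots[kSize - 2].item = nullptr;  // null sentinel
    slots[kSize - 1].block = nullptr; // next-block pointer
  }
};

struct MiniIter {
  Slot* p;
  int* operator*() const {
    // A null item is the sentinel: the real value is the first slot of
    // the next block.
    if (!p->item) return (p + 1)->block->slots[0].item;
    return p->item;
  }
  MiniIter& operator++() {
    if (!p->item) p = (p + 1)->block->slots; // hop to the next block first
    ++p;
    return *this;
  }
  bool operator!=(const MiniIter& o) const { return p != o.p; }
};
```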
#ifdef DEBUG_CC_GRAPH
#define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#define CC_GRAPH_ASSERT(b)
#endif

#define CC_TELEMETRY(_name, _value)                                            \
  do {                                                                         \
    if (NS_IsMainThread()) {                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);        \
    } else {                                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
    }                                                                          \
  } while (0)
enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.
struct PtrInfo
{
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  uint32_t mColor : 2;
  uint32_t mInternalRefs : 30;
  uint32_t mRefCount;

private:
  EdgePool::Iterator mFirstChild;

  static const uint32_t kInitialRefCount = UINT32_MAX - 1;

public:
  PtrInfo(void* aPointer, nsCycleCollectionParticipant* aParticipant)
    : mPointer(aPointer),
      mParticipant(aParticipant),
      mColor(grey),
      mInternalRefs(0),
      mRefCount(kInitialRefCount),
      mFirstChild()
  {
    MOZ_ASSERT(aParticipant);

    // We initialize mRefCount to a large non-zero value so
    // that it doesn't look like a JS object to the cycle collector
    // in the case where the object dies before being traversed.
    MOZ_ASSERT(!IsGrayJS() && !IsBlackJS());
  }

  // Allow NodePool::Block's constructor to compile.
  PtrInfo()
  {
    NS_NOTREACHED("should never be called");
  }

  bool IsGrayJS() const
  {
    return mRefCount == 0;
  }

  bool IsBlackJS() const
  {
    return mRefCount == UINT32_MAX;
  }

  bool WasTraversed() const
  {
    return mRefCount != kInitialRefCount;
  }

  EdgePool::Iterator FirstChild() const
  {
    CC_GRAPH_ASSERT(mFirstChild.Initialized());
    return mFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  EdgePool::Iterator LastChild() const
  {
    CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
    return (this + 1)->mFirstChild;
  }

  void SetFirstChild(EdgePool::Iterator aFirstChild)
  {
    CC_GRAPH_ASSERT(aFirstChild.Initialized());
    mFirstChild = aFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  void SetLastChild(EdgePool::Iterator aLastChild)
  {
    CC_GRAPH_ASSERT(aLastChild.Initialized());
    (this + 1)->mFirstChild = aLastChild;
  }
};
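// The refcount sentinel scheme above can be demonstrated on its own: a
// freshly created node gets a large non-zero count, so an untraversed node
// can never be mistaken for a gray JS object (count 0) or a black JS one
// (UINT32_MAX). This sketch uses hypothetical names, not the real PtrInfo.

```cpp
#include <cassert>
#include <cstdint>

struct MiniPtrInfo {
  static const uint32_t kInitialRefCount = UINT32_MAX - 1;
  uint32_t mRefCount = kInitialRefCount;

  // Count 0 means "gray JS object" once traversal has run.
  bool IsGrayJS() const { return mRefCount == 0; }
  // UINT32_MAX is reserved for black JS objects.
  bool IsBlackJS() const { return mRefCount == UINT32_MAX; }
  // Any value other than the initial sentinel means traversal set it.
  bool WasTraversed() const { return mRefCount != kInitialRefCount; }
};
```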
/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * that it allocates the PtrInfo 32K at a time.
 */
class NodePool
{
private:
  // The -2 allows us to use |BlockSize + 1| for |mEntries|, and fit |mNext|,
  // all without causing slop.
  enum { BlockSize = 8 * 1024 - 2 };

  struct Block
  {
    // We create and destroy Block using NS_Alloc/NS_Free rather
    // than new and delete to avoid calling its constructor and
    // destructor.
    Block()
    {
      NS_NOTREACHED("should never be called");

      // Ensure Block is the right size (see the comment on BlockSize above).
      static_assert(
        sizeof(Block) == 163824 ||  // 32-bit; equals 39.997 pages
        sizeof(Block) == 262120,    // 64-bit; equals 63.994 pages
        "ill-sized NodePool::Block");
    }
    ~Block()
    {
      NS_NOTREACHED("should never be called");
    }

    Block* mNext;
    PtrInfo mEntries[BlockSize + 1]; // +1 to store last child of last node
  };

public:
  ~NodePool()
  {
    MOZ_ASSERT(!mBlocks, "Didn't call Clear()?");
  }

  void Clear()
  {
    Block* b = mBlocks;
    while (b) {
      Block* n = b->mNext;
      NS_Free(b);
      b = n;
    }

    mBlocks = nullptr;
    mLast = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return !mBlocks && !mLast;
  }
#endif

  class Builder;
  friend class Builder;

  class Builder
  {
  public:
    explicit Builder(NodePool& aPool)
      : mNextBlock(&aPool.mBlocks)
      , mNext(aPool.mLast)
      , mBlockEnd(nullptr)
    {
      MOZ_ASSERT(!aPool.mBlocks && !aPool.mLast, "pool not empty");
    }

    PtrInfo* Add(void* aPointer, nsCycleCollectionParticipant* aParticipant)
    {
      if (mNext == mBlockEnd) {
        Block* block = static_cast<Block*>(NS_Alloc(sizeof(Block)));
        *mNextBlock = block;
        mNext = block->mEntries;
        mBlockEnd = block->mEntries + BlockSize;
        block->mNext = nullptr;
        mNextBlock = &block->mNext;
      }
      return new (mNext++) PtrInfo(aPointer, aParticipant);
    }

  private:
    Block** mNextBlock;
    PtrInfo*& mNext;
    PtrInfo* mBlockEnd;
  };

  class Enumerator;
  friend class Enumerator;

  class Enumerator
  {
  public:
    explicit Enumerator(NodePool& aPool)
      : mFirstBlock(aPool.mBlocks)
      , mCurBlock(nullptr)
      , mNext(nullptr)
      , mBlockEnd(nullptr)
      , mLast(aPool.mLast)
    {
    }

    bool IsDone() const
    {
      return mNext == mLast;
    }

    bool AtBlockEnd() const
    {
      return mNext == mBlockEnd;
    }

    PtrInfo* GetNext()
    {
      MOZ_ASSERT(!IsDone(), "calling GetNext when done");
      if (mNext == mBlockEnd) {
        Block* nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
        mNext = nextBlock->mEntries;
        mBlockEnd = mNext + BlockSize;
        mCurBlock = nextBlock;
      }
      return mNext++;
    }

  private:
    // mFirstBlock is a reference to allow an Enumerator to be constructed
    // for an empty graph.
    Block*& mFirstBlock;
    Block* mCurBlock;
    // mNext is the next value we want to return, unless mNext == mBlockEnd
    // NB: mLast is a reference to allow enumerating while building!
    PtrInfo* mNext;
    PtrInfo* mBlockEnd;
    PtrInfo*& mLast;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    // We don't measure the things pointed to by mEntries[] because those
    // pointers are non-owning.
    size_t n = 0;
    Block* b = mBlocks;
    while (b) {
      n += aMallocSizeOf(b);
      b = b->mNext;
    }
    return n;
  }

private:
  Block* mBlocks;
  PtrInfo* mLast;
};
// Declarations for mPtrToNodeMap.

struct PtrToNodeEntry : public PLDHashEntryHdr
{
  // The key is mNode->mPointer
  PtrInfo* mNode;
};

static bool
PtrToNodeMatchEntry(PLDHashTable* aTable,
                    const PLDHashEntryHdr* aEntry,
                    const void* aKey)
{
  const PtrToNodeEntry* n = static_cast<const PtrToNodeEntry*>(aEntry);
  return n->mNode->mPointer == aKey;
}

static PLDHashTableOps PtrNodeOps = {
  PL_DHashAllocTable,
  PL_DHashFreeTable,
  PL_DHashVoidPtrKeyStub,
  PtrToNodeMatchEntry,
  PL_DHashMoveEntryStub,
  PL_DHashClearEntryStub,
  PL_DHashFinalizeStub,
  nullptr
};

struct WeakMapping
{
  // map and key will be null if the corresponding objects are GC marked
  PtrInfo* mMap;
  PtrInfo* mKey;
  PtrInfo* mKeyDelegate;
  PtrInfo* mVal;
};
class CCGraphBuilder;

struct CCGraph
{
  NodePool mNodes;
  EdgePool mEdges;
  nsTArray<WeakMapping> mWeakMaps;
  uint32_t mRootCount;

private:
  PLDHashTable mPtrToNodeMap;

public:
  CCGraph() : mRootCount(0)
  {
    mPtrToNodeMap.ops = nullptr;
  }

  ~CCGraph()
  {
    if (mPtrToNodeMap.ops) {
      PL_DHashTableFinish(&mPtrToNodeMap);
    }
  }

  void Init()
  {
    MOZ_ASSERT(IsEmpty(), "Failed to call CCGraph::Clear");
    PL_DHashTableInit(&mPtrToNodeMap, &PtrNodeOps, nullptr,
                      sizeof(PtrToNodeEntry), 16384);
  }

  void Clear()
  {
    mNodes.Clear();
    mEdges.Clear();
    mWeakMaps.Clear();
    mRootCount = 0;
    PL_DHashTableFinish(&mPtrToNodeMap);
    mPtrToNodeMap.ops = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return mNodes.IsEmpty() && mEdges.IsEmpty() &&
           mWeakMaps.IsEmpty() && mRootCount == 0 &&
           !mPtrToNodeMap.ops;
  }
#endif

  PtrInfo* FindNode(void* aPtr);
  PtrToNodeEntry* AddNodeToMap(void* aPtr);
  void RemoveNodeFromMap(void* aPtr);

  uint32_t MapCount() const
  {
    return mPtrToNodeMap.EntryCount();
  }

  void SizeOfExcludingThis(MallocSizeOf aMallocSizeOf,
                           size_t* aNodesSize, size_t* aEdgesSize,
                           size_t* aWeakMapsSize) const
  {
    *aNodesSize = mNodes.SizeOfExcludingThis(aMallocSizeOf);
    *aEdgesSize = mEdges.SizeOfExcludingThis(aMallocSizeOf);

    // We don't measure what the WeakMappings point to, because the
    // pointers are non-owning.
    *aWeakMapsSize = mWeakMaps.SizeOfExcludingThis(aMallocSizeOf);
  }
};
PtrInfo*
CCGraph::FindNode(void* aPtr)
{
  PtrToNodeEntry* e =
    static_cast<PtrToNodeEntry*>(PL_DHashTableOperate(&mPtrToNodeMap, aPtr,
                                                      PL_DHASH_LOOKUP));
  if (!PL_DHASH_ENTRY_IS_BUSY(e)) {
    return nullptr;
  }
  return e->mNode;
}

PtrToNodeEntry*
CCGraph::AddNodeToMap(void* aPtr)
{
  PtrToNodeEntry* e =
    static_cast<PtrToNodeEntry*>(PL_DHashTableOperate(&mPtrToNodeMap, aPtr,
                                                      PL_DHASH_ADD));
  if (!e) {
    // Caller should track OOMs
    return nullptr;
  }
  return e;
}

void
CCGraph::RemoveNodeFromMap(void* aPtr)
{
  PL_DHashTableOperate(&mPtrToNodeMap, aPtr, PL_DHASH_REMOVE);
}
static nsISupports*
CanonicalizeXPCOMParticipant(nsISupports* aIn)
{
  nsISupports* out = nullptr;
  aIn->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
                      reinterpret_cast<void**>(&out));
  return out;
}

static inline void
ToParticipant(nsISupports* aPtr, nsXPCOMCycleCollectionParticipant** aCp);

static void
CanonicalizeParticipant(void** aParti, nsCycleCollectionParticipant** aCp)
{
  // If the participant is null, this is an nsISupports participant,
  // so we must QI to get the real participant.

  if (!*aCp) {
    nsISupports* nsparti = static_cast<nsISupports*>(*aParti);
    nsparti = CanonicalizeXPCOMParticipant(nsparti);
    NS_ASSERTION(nsparti,
                 "Don't add objects that don't participate in collection!");
    nsXPCOMCycleCollectionParticipant* xcp;
    ToParticipant(nsparti, &xcp);
    *aParti = nsparti;
    *aCp = xcp;
  }
}
struct nsPurpleBufferEntry
{
  union
  {
    void* mObject;                        // when low bit unset
    nsPurpleBufferEntry* mNextInFreeList; // when low bit set
  };

  nsCycleCollectingAutoRefCnt* mRefCnt;

  nsCycleCollectionParticipant* mParticipant; // nullptr for nsISupports
};

class nsCycleCollector;

struct nsPurpleBuffer
{
private:
  struct Block
  {
    Block* mNext;
    // Try to match the size of a jemalloc bucket, to minimize slop bytes.
    // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries
    //   is 16,380 bytes, which leaves 4 bytes for mNext.
    // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries
    //   is 32,544 bytes, which leaves 8 bytes for mNext.
    nsPurpleBufferEntry mEntries[1365];

    Block() : mNext(nullptr)
    {
      // Ensure Block is the right size (see above).
      static_assert(
        sizeof(Block) == 16384 ||  // 32-bit
        sizeof(Block) == 32768,    // 64-bit
        "ill-sized nsPurpleBuffer::Block");
    }

    template<class PurpleVisitor>
    void VisitEntries(nsPurpleBuffer& aBuffer, PurpleVisitor& aVisitor)
    {
      nsPurpleBufferEntry* eEnd = ArrayEnd(mEntries);
      for (nsPurpleBufferEntry* e = mEntries; e != eEnd; ++e) {
        MOZ_ASSERT(e->mObject,
                   "There should be no null mObject when we iterate over the purple buffer");
        if (!(uintptr_t(e->mObject) & uintptr_t(1)) && e->mObject) {
          aVisitor.Visit(aBuffer, e);
        }
      }
    }
  };

  // This class wraps a linked list of the elements in the purple
  // buffer.

  uint32_t mCount;
  Block mFirstBlock;
  nsPurpleBufferEntry* mFreeList;

public:
  template<class PurpleVisitor>
  void VisitEntries(PurpleVisitor& aVisitor)
  {
    for (Block* b = &mFirstBlock; b; b = b->mNext) {
      b->VisitEntries(*this, aVisitor);
    }
  }

  void InitBlocks()
  {
    mCount = 0;
    mFreeList = nullptr;
    StartBlock(&mFirstBlock);
  }

  void StartBlock(Block* aBlock)
  {
    NS_ABORT_IF_FALSE(!mFreeList, "should not have free list");

    // Put all the entries in the block on the free list.
    nsPurpleBufferEntry* entries = aBlock->mEntries;
    mFreeList = entries;
    for (uint32_t i = 1; i < ArrayLength(aBlock->mEntries); ++i) {
      entries[i - 1].mNextInFreeList =
        (nsPurpleBufferEntry*)(uintptr_t(entries + i) | 1);
    }
    entries[ArrayLength(aBlock->mEntries) - 1].mNextInFreeList =
      (nsPurpleBufferEntry*)1;
  }

  void FreeBlocks()
  {
    if (mCount > 0) {
      UnmarkRemainingPurple(&mFirstBlock);
    }
    Block* b = mFirstBlock.mNext;
    while (b) {
      if (mCount > 0) {
        UnmarkRemainingPurple(b);
      }
      Block* next = b->mNext;
      delete b;
      b = next;
    }
    mFirstBlock.mNext = nullptr;
  }

  struct UnmarkRemainingPurpleVisitor
  {
    void
    Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
    {
      if (aEntry->mRefCnt) {
        aEntry->mRefCnt->RemoveFromPurpleBuffer();
        aEntry->mRefCnt = nullptr;
      }
      aEntry->mObject = nullptr;
      --aBuffer.mCount;
    }
  };

  void UnmarkRemainingPurple(Block* aBlock)
  {
    UnmarkRemainingPurpleVisitor visitor;
    aBlock->VisitEntries(*this, visitor);
  }

  void SelectPointers(CCGraphBuilder& aBuilder);

  // RemoveSkippable removes entries from the purple buffer synchronously
  // (1) if aAsyncSnowWhiteFreeing is false and nsPurpleBufferEntry::mRefCnt is 0 or
  // (2) if the object's nsXPCOMCycleCollectionParticipant::CanSkip() returns true or
  // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
  // (4) If aRemoveChildlessNodes is true, then any nodes in the purple buffer
  //     that will have no children in the cycle collector graph will also be
  //     removed. CanSkip() may be run on these children.
  void RemoveSkippable(nsCycleCollector* aCollector,
                       bool aRemoveChildlessNodes,
                       bool aAsyncSnowWhiteFreeing,
                       CC_ForgetSkippableCallback aCb);

  MOZ_ALWAYS_INLINE nsPurpleBufferEntry* NewEntry()
  {
    if (MOZ_UNLIKELY(!mFreeList)) {
      Block* b = new Block;
      StartBlock(b);

      // Add the new block as the second block in the list.
      b->mNext = mFirstBlock.mNext;
      mFirstBlock.mNext = b;
    }

    nsPurpleBufferEntry* e = mFreeList;
    mFreeList = (nsPurpleBufferEntry*)
      (uintptr_t(mFreeList->mNextInFreeList) & ~uintptr_t(1));
    return e;
  }

  MOZ_ALWAYS_INLINE void Put(void* aObject, nsCycleCollectionParticipant* aCp,
                             nsCycleCollectingAutoRefCnt* aRefCnt)
  {
    nsPurpleBufferEntry* e = NewEntry();

    ++mCount;

    e->mObject = aObject;
    e->mRefCnt = aRefCnt;
    e->mParticipant = aCp;
  }

  void Remove(nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(mCount != 0, "must have entries");

    if (aEntry->mRefCnt) {
      aEntry->mRefCnt->RemoveFromPurpleBuffer();
      aEntry->mRefCnt = nullptr;
    }
    aEntry->mNextInFreeList =
      (nsPurpleBufferEntry*)(uintptr_t(mFreeList) | uintptr_t(1));
    mFreeList = aEntry;

    --mCount;
  }

  uint32_t Count() const
  {
    return mCount;
  }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    size_t n = 0;

    // Don't measure mFirstBlock because it's within |this|.
    const Block* block = mFirstBlock.mNext;
    while (block) {
      n += aMallocSizeOf(block);
      block = block->mNext;
    }

    // mFreeList is deliberately not measured because it points into
    // the purple buffer, which is within mFirstBlock and thus within |this|.
    //
    // We also don't measure the things pointed to by mEntries[] because
    // those pointers are non-owning.

    return n;
  }
};
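// The purple buffer's tagged free list can be demonstrated standalone: a
// slot holds either a live object pointer (low bit clear) or a link to the
// next free slot with the low bit set, so free entries need no separate
// storage. This is a sketch with hypothetical names and a shrunken block,
// not the real nsPurpleBuffer.

```cpp
#include <cassert>
#include <cstdint>

struct Slot {
  void* mObject; // live pointer when low bit clear; tagged free link when set
};

struct MiniBuffer {
  static const int kCount = 8;
  Slot mEntries[kCount];
  Slot* mFreeList = nullptr;

  MiniBuffer() {
    // Thread every slot onto the free list, tagging each link's low bit.
    for (int i = 0; i + 1 < kCount; ++i) {
      mEntries[i].mObject =
        reinterpret_cast<void*>(uintptr_t(&mEntries[i + 1]) | 1);
    }
    mEntries[kCount - 1].mObject = reinterpret_cast<void*>(uintptr_t(1));
    mFreeList = &mEntries[0];
  }

  Slot* Put(void* aObject) {
    if (!mFreeList) return nullptr; // a real buffer would allocate a block
    Slot* e = mFreeList;
    // Strip the tag bit to recover the next free slot (null when exhausted).
    mFreeList = reinterpret_cast<Slot*>(uintptr_t(e->mObject) & ~uintptr_t(1));
    e->mObject = aObject;
    return e;
  }

  void Remove(Slot* aEntry) {
    // Push the slot back on the free list with the tag bit set.
    aEntry->mObject = reinterpret_cast<void*>(uintptr_t(mFreeList) | 1);
    mFreeList = aEntry;
  }

  static bool IsFreeLink(const Slot& aSlot) {
    return (uintptr_t(aSlot.mObject) & 1) != 0;
  }
};
```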
static bool
AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
              nsCycleCollectionParticipant* aParti);
struct SelectPointersVisitor
{
  explicit SelectPointersVisitor(CCGraphBuilder& aBuilder)
    : mBuilder(aBuilder)
  {
  }

  void
  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "SelectPointersVisitor: snow-white object in the purple buffer");
    if (!aEntry->mRefCnt->IsPurple() ||
        AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
      aBuffer.Remove(aEntry);
    }
  }

private:
  CCGraphBuilder& mBuilder;
};
void
nsPurpleBuffer::SelectPointers(CCGraphBuilder& aBuilder)
{
  SelectPointersVisitor visitor(aBuilder);
  VisitEntries(visitor);

  NS_ASSERTION(mCount == 0, "AddPurpleRoot failed");
}
enum ccPhase
{
  IdlePhase,
  GraphBuildingPhase,
  ScanAndCollectWhitePhase,
  CleanupPhase
};

enum ccType
{
  SliceCC,    /* If a CC is in progress, continue it. Otherwise, start a new one. */
  ManualCC,   /* Explicitly triggered. */
  ShutdownCC  /* Shutdown CC, used for finding leaks. */
};
#ifdef MOZ_NUWA_PROCESS
#include "ipc/Nuwa.h"
#endif

////////////////////////////////////////////////////////////////////////
// Top level structure for the cycle collector.
////////////////////////////////////////////////////////////////////////

typedef js::SliceBudget SliceBudget;

class JSPurpleBuffer;
class nsCycleCollector : public nsIMemoryReporter
{
public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIMEMORYREPORTER

private:
  bool mActivelyCollecting;
  bool mFreeingSnowWhite;
  // mScanInProgress should be false when we're collecting white objects.
  bool mScanInProgress;
  CycleCollectorResults mResults;
  TimeStamp mCollectionStart;

  CycleCollectedJSRuntime* mJSRuntime;

  ccPhase mIncrementalPhase;
  CCGraph mGraph;
  nsAutoPtr<CCGraphBuilder> mBuilder;
  nsAutoPtr<NodePool::Enumerator> mCurrNode;
  nsCOMPtr<nsICycleCollectorListener> mListener;

  nsCycleCollectorParams mParams;

  uint32_t mWhiteNodeCount;

  CC_BeforeUnlinkCallback mBeforeUnlinkCB;
  CC_ForgetSkippableCallback mForgetSkippableCB;

  nsPurpleBuffer mPurpleBuf;

  uint32_t mUnmergedNeeded;
  uint32_t mMergedInARow;

  JSPurpleBuffer* mJSPurpleBuffer;

protected:
  virtual ~nsCycleCollector();

public:
  void RegisterJSRuntime(CycleCollectedJSRuntime* aJSRuntime);
  void ForgetJSRuntime();

  void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB)
  {
    CheckThreadSafety();
    mBeforeUnlinkCB = aBeforeUnlinkCB;
  }

  void SetForgetSkippableCallback(CC_ForgetSkippableCallback aForgetSkippableCB)
  {
    CheckThreadSafety();
    mForgetSkippableCB = aForgetSkippableCB;
  }

  void Suspect(void* aPtr, nsCycleCollectionParticipant* aCp,
               nsCycleCollectingAutoRefCnt* aRefCnt);
  uint32_t SuspectedCount();
  void ForgetSkippable(bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing);
  bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);

  // This method assumes its argument is already canonicalized.
  void RemoveObjectFromGraph(void* aPtr);

  void PrepareForGarbageCollection();
  void FinishAnyCurrentCollection();

  bool Collect(ccType aCCType,
               SliceBudget& aBudget,
               nsICycleCollectorListener* aManualListener);

  void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                           size_t* aObjectSize,
                           size_t* aGraphNodesSize,
                           size_t* aGraphEdgesSize,
                           size_t* aWeakMapsSize,
                           size_t* aPurpleBufferSize) const;

  JSPurpleBuffer* GetJSPurpleBuffer();

private:
  void CheckThreadSafety();
  void ShutdownCollect();

  void FixGrayBits(bool aForceGC);
  bool ShouldMergeZones(ccType aCCType);

  void BeginCollection(ccType aCCType, nsICycleCollectorListener* aManualListener);
  void MarkRoots(SliceBudget& aBudget);
  void ScanRoots(bool aFullySynchGraphBuild);
  void ScanIncrementalRoots();
  void ScanWhiteNodes(bool aFullySynchGraphBuild);
  void ScanBlackNodes();
  void ScanWeakMaps();

  // returns whether anything was collected
  bool CollectWhite();

  void CleanupAfterCollection();
};

NS_IMPL_ISUPPORTS(nsCycleCollector, nsIMemoryReporter)
/**
 * GraphWalker is templatized over a Visitor class that must provide
 * the following two methods:
 *
 * bool ShouldVisitNode(PtrInfo const *pi);
 * void VisitNode(PtrInfo *pi);
 */
template<class Visitor>
class GraphWalker
{
private:
  Visitor mVisitor;

  void DoWalk(nsDeque& aQueue);

  void CheckedPush(nsDeque& aQueue, PtrInfo* aPi)
  {
    if (!aQueue.Push(aPi, fallible_t())) {
      mVisitor.Failed();
    }
  }

public:
  void Walk(PtrInfo* aPi);
  void WalkFromRoots(CCGraph& aGraph);
  // copy-constructing the visitor should be cheap, and less
  // indirection than using a reference
  explicit GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor)
  {
  }
};
////////////////////////////////////////////////////////////////////////
// The static collector struct
////////////////////////////////////////////////////////////////////////

struct CollectorData
{
  nsRefPtr<nsCycleCollector> mCollector;
  CycleCollectedJSRuntime* mRuntime;
};

static mozilla::ThreadLocal<CollectorData*> sCollectorData;
////////////////////////////////////////////////////////////////////////
// Utility functions
////////////////////////////////////////////////////////////////////////

MOZ_NEVER_INLINE static void
Fault(const char* aMsg, const void* aPtr = nullptr)
{
  if (aPtr) {
    printf("Fault in cycle collector: %s (ptr: %p)\n", aMsg, aPtr);
  } else {
    printf("Fault in cycle collector: %s\n", aMsg);
  }

  NS_RUNTIMEABORT("cycle collector fault");
}

static void
Fault(const char* aMsg, PtrInfo* aPi)
{
  Fault(aMsg, aPi->mPointer);
}

static void
ToParticipant(nsISupports* aPtr, nsXPCOMCycleCollectionParticipant** aCp)
{
  // We use QI to move from an nsISupports to an
  // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
  // object that implements traversal and unlinking logic for the nsISupports
  // in question.
  CallQueryInterface(aPtr, aCp);
}
template<class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::Walk(PtrInfo* aPi)
{
  nsDeque queue;
  CheckedPush(queue, aPi);
  DoWalk(queue);
}

template<class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::WalkFromRoots(CCGraph& aGraph)
{
  nsDeque queue;
  NodePool::Enumerator etor(aGraph.mNodes);
  for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
    CheckedPush(queue, etor.GetNext());
  }
  DoWalk(queue);
}

template<class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::DoWalk(nsDeque& aQueue)
{
  // Use aQueue to match the breadth-first traversal used when we
  // built the graph, for hopefully-better locality.
  while (aQueue.GetSize() > 0) {
    PtrInfo* pi = static_cast<PtrInfo*>(aQueue.PopFront());

    if (pi->WasTraversed() && mVisitor.ShouldVisitNode(pi)) {
      mVisitor.VisitNode(pi);
      for (EdgePool::Iterator child = pi->FirstChild(),
                              child_end = pi->LastChild();
           child != child_end; ++child) {
        CheckedPush(aQueue, *child);
      }
    }
  }
}
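// The GraphWalker/Visitor pairing above can be sketched as a standalone toy.
// The `Node`, `ToyGraphWalker`, and `CountingVisitor` names below are
// inventions for illustration, not types from this file; the sketch only
// shows the shape of the pattern: the walker owns a copy of the visitor and
// drives a breadth-first traversal, while the visitor decides which nodes to
// visit and what to do at each one.

```cpp
#include <cassert>
#include <deque>
#include <set>
#include <vector>

// Toy graph node standing in for PtrInfo.
struct Node {
  std::vector<Node*> children;
};

template<class Visitor>
class ToyGraphWalker {
public:
  explicit ToyGraphWalker(const Visitor aVisitor) : mVisitor(aVisitor) {}

  void Walk(Node* aRoot) {
    std::deque<Node*> queue;
    queue.push_back(aRoot);
    // Breadth-first, mirroring DoWalk above.
    while (!queue.empty()) {
      Node* n = queue.front();
      queue.pop_front();
      if (mVisitor.ShouldVisitNode(n)) {
        mVisitor.VisitNode(n);
        for (Node* child : n->children) {
          queue.push_back(child);
        }
      }
    }
  }

private:
  Visitor mVisitor;
};

// A visitor that records each node once; ShouldVisitNode doubles as the
// "already seen" check, which keeps the walk from looping on cycles.
struct CountingVisitor {
  std::set<Node*>* seen;
  bool ShouldVisitNode(Node* n) { return seen->count(n) == 0; }
  void VisitNode(Node* n) { seen->insert(n); }
};
```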
struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber>
{
  CCGraphDescriber()
    : mAddress("0x"), mCnt(0), mType(eUnknown)
  {
  }

  enum Type
  {
    eRefCountedObject, eGCedObject, eGCMarkedObject,
    eEdge, eRoot, eGarbage, eUnknown
  };

  nsCString mAddress;
  nsCString mName;
  nsCString mCompartmentOrToAddress;
  uint32_t mCnt;
  Type mType;
};
class nsCycleCollectorLogSinkToFile MOZ_FINAL : public nsICycleCollectorLogSink
{
public:
  nsCycleCollectorLogSinkToFile()
    : mProcessIdentifier(base::GetCurrentProcId())
    , mGCLog("gc-edges")
    , mCCLog("cc-edges")
  {
  }

  NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier) MOZ_OVERRIDE
  {
    aIdentifier = mFilenameIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier) MOZ_OVERRIDE
  {
    mFilenameIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetProcessIdentifier(int32_t* aIdentifier) MOZ_OVERRIDE
  {
    *aIdentifier = mProcessIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetProcessIdentifier(int32_t aIdentifier) MOZ_OVERRIDE
  {
    mProcessIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetGcLog(nsIFile** aPath) MOZ_OVERRIDE
  {
    NS_IF_ADDREF(*aPath = mGCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD GetCcLog(nsIFile** aPath) MOZ_OVERRIDE
  {
    NS_IF_ADDREF(*aPath = mCCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD Open(FILE** aGCLog, FILE** aCCLog) MOZ_OVERRIDE
  {
    nsresult rv;

    if (mGCLog.mStream || mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }

    rv = OpenLog(&mGCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aGCLog = mGCLog.mStream;

    rv = OpenLog(&mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aCCLog = mCCLog.mStream;

    return NS_OK;
  }

  NS_IMETHOD CloseGCLog() MOZ_OVERRIDE
  {
    if (!mGCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mGCLog, NS_LITERAL_STRING("Garbage"));
    return NS_OK;
  }

  NS_IMETHOD CloseCCLog() MOZ_OVERRIDE
  {
    if (!mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mCCLog, NS_LITERAL_STRING("Cycle"));
    return NS_OK;
  }

private:
  ~nsCycleCollectorLogSinkToFile()
  {
    if (mGCLog.mStream) {
      MozillaUnRegisterDebugFILE(mGCLog.mStream);
      fclose(mGCLog.mStream);
    }
    if (mCCLog.mStream) {
      MozillaUnRegisterDebugFILE(mCCLog.mStream);
      fclose(mCCLog.mStream);
    }
  }

  struct FileInfo
  {
    const char* const mPrefix;
    nsCOMPtr<nsIFile> mFile;
    FILE* mStream;

    explicit FileInfo(const char* aPrefix) : mPrefix(aPrefix), mStream(nullptr) { }
  };
  /**
   * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
   * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
   * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
   * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
   * on.
   */
  already_AddRefed<nsIFile> CreateTempFile(const char* aPrefix)
  {
    nsPrintfCString filename("%s.%d%s%s.log", aPrefix,
                             mProcessIdentifier,
                             mFilenameIdentifier.IsEmpty() ? "" : ".",
                             NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());

    // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
    // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
    // here because OpenTempFile uses an in/out param and getter_AddRefs
    // wouldn't work.
    nsIFile* logFile = nullptr;
    if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
      NS_NewNativeLocalFile(nsCString(env), /* followLinks = */ true,
                            &logFile);
    }

    // On Android or B2G, this function will open a file named
    // aFilename under a memory-reporting-specific folder
    // (/data/local/tmp/memory-reports). Otherwise, it will open a
    // file named aFilename under "NS_OS_TEMP_DIR".
    nsresult rv = nsDumpUtils::OpenTempFile(filename, &logFile,
                                            NS_LITERAL_CSTRING("memory-reports"));
    if (NS_FAILED(rv)) {
      NS_IF_RELEASE(logFile);
      return nullptr;
    }

    return dont_AddRef(logFile);
  }
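// The naming scheme documented above can be sketched independently of XPCOM.
// `PickLogName` and its `taken` set (a stand-in for the filesystem existence
// check) are hypothetical helpers for illustration only:

```cpp
#include <cassert>
#include <set>
#include <string>

// Build "prefix.pid[.id].log", appending "-1", "-2", ... while the candidate
// name is already taken, matching the comment's collision-avoidance rule.
std::string PickLogName(const std::string& prefix, int pid,
                        const std::string& id,
                        const std::set<std::string>& taken) {
  std::string base = prefix + "." + std::to_string(pid) +
                     (id.empty() ? "" : "." + id);
  std::string candidate = base + ".log";
  for (int n = 1; taken.count(candidate); ++n) {
    candidate = base + "-" + std::to_string(n) + ".log";
  }
  return candidate;
}
```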
  nsresult OpenLog(FileInfo* aLog)
  {
    // Initially create the log in a file starting with "incomplete-".
    // We'll move the file and strip off the "incomplete-" once the dump
    // completes. (We do this because we don't want scripts which poll
    // the filesystem looking for GC/CC dumps to grab a file before we're
    // finished writing to it.)
    nsAutoCString incomplete;
    incomplete += "incomplete-";
    incomplete += aLog->mPrefix;
    MOZ_ASSERT(!aLog->mFile);
    aLog->mFile = CreateTempFile(incomplete.get());
    if (NS_WARN_IF(!aLog->mFile)) {
      return NS_ERROR_UNEXPECTED;
    }

    MOZ_ASSERT(!aLog->mStream);
    aLog->mFile->OpenANSIFileDesc("w", &aLog->mStream);
    if (NS_WARN_IF(!aLog->mStream)) {
      return NS_ERROR_UNEXPECTED;
    }
    MozillaRegisterDebugFILE(aLog->mStream);

    return NS_OK;
  }
  nsresult CloseLog(FileInfo* aLog, const nsAString& aCollectorKind)
  {
    MOZ_ASSERT(aLog->mStream);
    MOZ_ASSERT(aLog->mFile);

    MozillaUnRegisterDebugFILE(aLog->mStream);
    fclose(aLog->mStream);
    aLog->mStream = nullptr;

    // Strip off "incomplete-".
    nsCOMPtr<nsIFile> logFileFinalDestination =
      CreateTempFile(aLog->mPrefix);
    if (NS_WARN_IF(!logFileFinalDestination)) {
      return NS_ERROR_UNEXPECTED;
    }

    nsAutoString logFileFinalDestinationName;
    logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
    if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty())) {
      return NS_ERROR_UNEXPECTED;
    }

    aLog->mFile->MoveTo(/* directory */ nullptr, logFileFinalDestinationName);

    // Save the file path.
    aLog->mFile = logFileFinalDestination;

    // Log to the error console.
    nsCOMPtr<nsIConsoleService> cs =
      do_GetService(NS_CONSOLESERVICE_CONTRACTID);
    if (cs) {
      // Copy out the path.
      nsAutoString logPath;
      logFileFinalDestination->GetPath(logPath);

      nsString msg = aCollectorKind +
        NS_LITERAL_STRING(" Collector log dumped to ") + logPath;
      cs->LogStringMessage(msg.get());
    }

    return NS_OK;
  }

  int32_t mProcessIdentifier;
  nsString mFilenameIdentifier;
  FileInfo mGCLog;
  FileInfo mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogSinkToFile, nsICycleCollectorLogSink)
class nsCycleCollectorLogger MOZ_FINAL : public nsICycleCollectorListener
{
  ~nsCycleCollectorLogger()
  {
    ClearDescribers();
  }

public:
  nsCycleCollectorLogger()
    : mLogSink(nsCycleCollector_createLogSink())
    , mWantAllTraces(false)
    , mDisableLog(false)
    , mWantAfterProcessing(false)
  {
  }

  void SetAllTraces()
  {
    mWantAllTraces = true;
  }

  NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener)
  {
    SetAllTraces();
    NS_ADDREF(*aListener = this);
    return NS_OK;
  }

  NS_IMETHOD GetWantAllTraces(bool* aAllTraces)
  {
    *aAllTraces = mWantAllTraces;
    return NS_OK;
  }

  NS_IMETHOD GetDisableLog(bool* aDisableLog)
  {
    *aDisableLog = mDisableLog;
    return NS_OK;
  }

  NS_IMETHOD SetDisableLog(bool aDisableLog)
  {
    mDisableLog = aDisableLog;
    return NS_OK;
  }

  NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing)
  {
    *aWantAfterProcessing = mWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing)
  {
    mWantAfterProcessing = aWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD GetLogSink(nsICycleCollectorLogSink** aLogSink)
  {
    NS_ADDREF(*aLogSink = mLogSink);
    return NS_OK;
  }

  NS_IMETHOD SetLogSink(nsICycleCollectorLogSink* aLogSink)
  {
    if (!aLogSink) {
      return NS_ERROR_INVALID_ARG;
    }
    mLogSink = aLogSink;
    return NS_OK;
  }

  NS_IMETHOD Begin()
  {
    nsresult rv;

    mCurrentAddress.AssignLiteral("0x");

    FILE* gcLog;
    rv = mLogSink->Open(&gcLog, &mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    // Dump the JS heap.
    CollectorData* data = sCollectorData.get();
    if (data && data->mRuntime) {
      data->mRuntime->DumpJSHeap(gcLog);
    }
    rv = mLogSink->CloseGCLog();
    NS_ENSURE_SUCCESS(rv, rv);

    fprintf(mCCLog, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");
    return NS_OK;
  }
  NS_IMETHOD NoteRefCountedObject(uint64_t aAddress, uint32_t aRefCount,
                                  const char* aObjectDescription)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [rc=%u] %s\n", (void*)aAddress, aRefCount,
              aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = CCGraphDescriber::eRefCountedObject;
      d->mAddress = mCurrentAddress;
      d->mCnt = aRefCount;
      d->mName.Append(aObjectDescription);
    }
    return NS_OK;
  }
  NS_IMETHOD NoteGCedObject(uint64_t aAddress, bool aMarked,
                            const char* aObjectDescription,
                            uint64_t aCompartmentAddress)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [gc%s] %s\n", (void*)aAddress,
              aMarked ? ".marked" : "", aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject :
                           CCGraphDescriber::eGCedObject;
      d->mAddress = mCurrentAddress;
      d->mName.Append(aObjectDescription);
      if (aCompartmentAddress) {
        d->mCompartmentOrToAddress.AssignLiteral("0x");
        d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
      } else {
        d->mCompartmentOrToAddress.SetIsVoid(true);
      }
    }
    return NS_OK;
  }
  NS_IMETHOD NoteEdge(uint64_t aToAddress, const char* aEdgeName)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "> %p %s\n", (void*)aToAddress, aEdgeName);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eEdge;
      d->mAddress = mCurrentAddress;
      d->mCompartmentOrToAddress.AssignLiteral("0x");
      d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
      d->mName.Append(aEdgeName);
    }
    return NS_OK;
  }
  NS_IMETHOD NoteWeakMapEntry(uint64_t aMap, uint64_t aKey,
                              uint64_t aKeyDelegate, uint64_t aValue)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
              (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
    }
    // We don't support after-processing for weak map entries.
    return NS_OK;
  }

  NS_IMETHOD NoteIncrementalRoot(uint64_t aAddress)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "IncrementalRoot %p\n", (void*)aAddress);
    }
    // We don't support after-processing for incremental roots.
    return NS_OK;
  }
  NS_IMETHOD BeginResults()
  {
    if (!mDisableLog) {
      fputs("==========\n", mCCLog);
    }
    return NS_OK;
  }

  NS_IMETHOD DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eRoot;
      d->mAddress.AppendInt(aAddress, 16);
      d->mCnt = aKnownEdges;
    }
    return NS_OK;
  }

  NS_IMETHOD DescribeGarbage(uint64_t aAddress)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [garbage]\n", (void*)aAddress);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eGarbage;
      d->mAddress.AppendInt(aAddress, 16);
    }
    return NS_OK;
  }
  NS_IMETHOD End()
  {
    if (!mDisableLog) {
      nsresult rv = mLogSink->CloseCCLog();
      NS_ENSURE_SUCCESS(rv, rv);
    }
    return NS_OK;
  }

  NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
                         bool* aCanContinue)
  {
    if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing)) {
      return NS_ERROR_UNEXPECTED;
    }

    CCGraphDescriber* d = mDescribers.popFirst();
    if (d) {
      switch (d->mType) {
        case CCGraphDescriber::eRefCountedObject:
          aHandler->NoteRefCountedObject(d->mAddress,
                                         d->mCnt,
                                         d->mName);
          break;
        case CCGraphDescriber::eGCedObject:
        case CCGraphDescriber::eGCMarkedObject:
          aHandler->NoteGCedObject(d->mAddress,
                                   d->mType ==
                                     CCGraphDescriber::eGCMarkedObject,
                                   d->mName,
                                   d->mCompartmentOrToAddress);
          break;
        case CCGraphDescriber::eEdge:
          aHandler->NoteEdge(d->mAddress,
                             d->mCompartmentOrToAddress,
                             d->mName);
          break;
        case CCGraphDescriber::eRoot:
          aHandler->DescribeRoot(d->mAddress,
                                 d->mCnt);
          break;
        case CCGraphDescriber::eGarbage:
          aHandler->DescribeGarbage(d->mAddress);
          break;
        case CCGraphDescriber::eUnknown:
          NS_NOTREACHED("CCGraphDescriber::eUnknown");
          break;
      }
      delete d;
    }

    if (!(*aCanContinue = !mDescribers.isEmpty())) {
      mCurrentAddress.AssignLiteral("0x");
    }
    return NS_OK;
  }

private:
  void ClearDescribers()
  {
    CCGraphDescriber* d;
    while ((d = mDescribers.popFirst())) {
      delete d;
    }
  }

  nsCOMPtr<nsICycleCollectorLogSink> mLogSink;
  bool mWantAllTraces;
  bool mDisableLog;
  bool mWantAfterProcessing;
  nsCString mCurrentAddress;
  mozilla::LinkedList<CCGraphDescriber> mDescribers;
  FILE* mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogger, nsICycleCollectorListener)
nsresult
nsCycleCollectorLoggerConstructor(nsISupports* aOuter,
                                  const nsIID& aIID,
                                  void** aInstancePtr)
{
  if (NS_WARN_IF(aOuter)) {
    return NS_ERROR_NO_AGGREGATION;
  }

  nsISupports* logger = new nsCycleCollectorLogger();

  return logger->QueryInterface(aIID, aInstancePtr);
}

////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |MarkRoots| routine.
////////////////////////////////////////////////////////////////////////
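// The header comment of this file describes the coloring this routine feeds:
// nodes whose refcounts are fully accounted for by edges inside the candidate
// graph end up white (garbage cycles), everything reachable from an
// externally referenced node ends up black. As a hedged, standalone sketch of
// that scan (the `Node` type and `FindGarbage` helper below are toy
// inventions for illustration, not this collector's API):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

// Toy node: a refcount plus outgoing edges.
struct Node {
  int refCount;
  std::vector<Node*> children;
};

// Return the set of "white" nodes: those kept alive only by internal links.
std::set<Node*> FindGarbage(const std::vector<Node*>& aNodes) {
  // Count references coming from inside the candidate set.
  std::map<Node*, int> internal;
  for (Node* n : aNodes) {
    for (Node* child : n->children) {
      internal[child]++;
    }
  }
  // A node with more references than internal edges has an external
  // referent: it is a black root.
  std::vector<Node*> blackQueue;
  for (Node* n : aNodes) {
    if (n->refCount > internal[n]) {
      blackQueue.push_back(n);
    }
  }
  // Anything reachable from a black node is black too.
  std::set<Node*> black;
  while (!blackQueue.empty()) {
    Node* n = blackQueue.back();
    blackQueue.pop_back();
    if (!black.insert(n).second) {
      continue; // already marked
    }
    for (Node* child : n->children) {
      blackQueue.push_back(child);
    }
  }
  // Everything else is white: a pure garbage cycle.
  std::set<Node*> white;
  for (Node* n : aNodes) {
    if (!black.count(n)) {
      white.insert(n);
    }
  }
  return white;
}
```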
class CCGraphBuilder MOZ_FINAL : public nsCycleCollectionTraversalCallback,
                                 public nsCycleCollectionNoteRootCallback
{
private:
  CCGraph& mGraph;
  CycleCollectorResults& mResults;
  NodePool::Builder mNodeBuilder;
  EdgePool::Builder mEdgeBuilder;
  PtrInfo* mCurrPi;
  nsCycleCollectionParticipant* mJSParticipant;
  nsCycleCollectionParticipant* mJSZoneParticipant;
  nsCString mNextEdgeName;
  nsICycleCollectorListener* mListener;
  bool mMergeZones;
  bool mRanOutOfMemory;

public:
  CCGraphBuilder(CCGraph& aGraph,
                 CycleCollectorResults& aResults,
                 CycleCollectedJSRuntime* aJSRuntime,
                 nsICycleCollectorListener* aListener,
                 bool aMergeZones);
  virtual ~CCGraphBuilder();

  bool WantAllTraces() const
  {
    return nsCycleCollectionNoteRootCallback::WantAllTraces();
  }

  PtrInfo* AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant);
  PtrInfo* AddWeakMapNode(void* aNode);
  void Traverse(PtrInfo* aPtrInfo);
  void SetLastChild();

  bool RanOutOfMemory() const
  {
    return mRanOutOfMemory;
  }

private:
  void DescribeNode(uint32_t aRefCount, const char* aObjName)
  {
    mCurrPi->mRefCount = aRefCount;
  }

public:
  // nsCycleCollectionNoteRootCallback methods.
  NS_IMETHOD_(void) NoteXPCOMRoot(nsISupports* aRoot);
  NS_IMETHOD_(void) NoteJSRoot(void* aRoot);
  NS_IMETHOD_(void) NoteNativeRoot(void* aRoot,
                                   nsCycleCollectionParticipant* aParticipant);
  NS_IMETHOD_(void) NoteWeakMapping(void* aMap, void* aKey, void* aKdelegate,
                                    void* aVal);

  // nsCycleCollectionTraversalCallback methods.
  NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt aRefCount,
                                           const char* aObjName);
  NS_IMETHOD_(void) DescribeGCedNode(bool aIsMarked, const char* aObjName,
                                     uint64_t aCompartmentAddress);

  NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild);
  NS_IMETHOD_(void) NoteJSChild(void* aChild);
  NS_IMETHOD_(void) NoteNativeChild(void* aChild,
                                    nsCycleCollectionParticipant* aParticipant);
  NS_IMETHOD_(void) NoteNextEdgeName(const char* aName);

private:
  NS_IMETHOD_(void) NoteRoot(void* aRoot,
                             nsCycleCollectionParticipant* aParticipant)
  {
    MOZ_ASSERT(aParticipant);

    if (!aParticipant->CanSkipInCC(aRoot) || MOZ_UNLIKELY(WantAllTraces())) {
      AddNode(aRoot, aParticipant);
    }
  }

  NS_IMETHOD_(void) NoteChild(void* aChild, nsCycleCollectionParticipant* aCp,
                              nsCString aEdgeName)
  {
    PtrInfo* childPi = AddNode(aChild, aCp);
    if (!childPi) {
      return;
    }
    mEdgeBuilder.Add(childPi);
    if (mListener) {
      mListener->NoteEdge((uint64_t)aChild, aEdgeName.get());
    }
    ++childPi->mInternalRefs;
  }

  JS::Zone* MergeZone(void* aGcthing)
  {
    if (!mMergeZones) {
      return nullptr;
    }
    JS::Zone* zone = JS::GetTenuredGCThingZone(aGcthing);
    if (js::IsSystemZone(zone)) {
      return nullptr;
    }
    return zone;
  }
};
CCGraphBuilder::CCGraphBuilder(CCGraph& aGraph,
                               CycleCollectorResults& aResults,
                               CycleCollectedJSRuntime* aJSRuntime,
                               nsICycleCollectorListener* aListener,
                               bool aMergeZones)
  : mGraph(aGraph)
  , mResults(aResults)
  , mNodeBuilder(aGraph.mNodes)
  , mEdgeBuilder(aGraph.mEdges)
  , mJSParticipant(nullptr)
  , mJSZoneParticipant(nullptr)
  , mListener(aListener)
  , mMergeZones(aMergeZones)
  , mRanOutOfMemory(false)
{
  if (aJSRuntime) {
    mJSParticipant = aJSRuntime->GCThingParticipant();
    mJSZoneParticipant = aJSRuntime->ZoneParticipant();
  }

  uint32_t flags = 0;
  if (!flags && mListener) {
    flags = nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
    bool all = false;
    mListener->GetWantAllTraces(&all);
    if (all) {
      flags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
      mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
    }
  }

  mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());

  MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
             nsCycleCollectionTraversalCallback::WantAllTraces());
}
CCGraphBuilder::~CCGraphBuilder()
{
}

PtrInfo*
CCGraphBuilder::AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant)
{
  PtrToNodeEntry* e = mGraph.AddNodeToMap(aPtr);
  if (!e) {
    mRanOutOfMemory = true;
    return nullptr;
  }

  PtrInfo* result;
  if (!e->mNode) {
    // New entry.
    result = mNodeBuilder.Add(aPtr, aParticipant);
    e->mNode = result;
    NS_ASSERTION(result, "mNodeBuilder.Add returned null");
  } else {
    result = e->mNode;
    MOZ_ASSERT(result->mParticipant == aParticipant,
               "nsCycleCollectionParticipant shouldn't change!");
  }
  return result;
}
MOZ_NEVER_INLINE void
CCGraphBuilder::Traverse(PtrInfo* aPtrInfo)
{
  mCurrPi = aPtrInfo;

  mCurrPi->SetFirstChild(mEdgeBuilder.Mark());

  if (!aPtrInfo->mParticipant) {
    return;
  }

  nsresult rv = aPtrInfo->mParticipant->Traverse(aPtrInfo->mPointer, *this);
  if (NS_FAILED(rv)) {
    Fault("script pointer traversal failed", aPtrInfo);
  }
}

void
CCGraphBuilder::SetLastChild()
{
  mCurrPi->SetLastChild(mEdgeBuilder.Mark());
}
NS_IMETHODIMP_(void)
CCGraphBuilder::NoteXPCOMRoot(nsISupports* aRoot)
{
  aRoot = CanonicalizeXPCOMParticipant(aRoot);
  NS_ASSERTION(aRoot,
               "Don't add objects that don't participate in collection!");

  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aRoot, &cp);

  NoteRoot(aRoot, cp);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteJSRoot(void* aRoot)
{
  if (JS::Zone* zone = MergeZone(aRoot)) {
    NoteRoot(zone, mJSZoneParticipant);
  } else {
    NoteRoot(aRoot, mJSParticipant);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNativeRoot(void* aRoot,
                               nsCycleCollectionParticipant* aParticipant)
{
  NoteRoot(aRoot, aParticipant);
}
NS_IMETHODIMP_(void)
CCGraphBuilder::DescribeRefCountedNode(nsrefcnt aRefCount, const char* aObjName)
{
  if (aRefCount == 0) {
    Fault("zero refcount", mCurrPi);
  }
  if (aRefCount == UINT32_MAX) {
    Fault("overflowing refcount", mCurrPi);
  }
  mResults.mVisitedRefCounted++;

  if (mListener) {
    mListener->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, aRefCount,
                                    aObjName);
  }

  DescribeNode(aRefCount, aObjName);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::DescribeGCedNode(bool aIsMarked, const char* aObjName,
                                 uint64_t aCompartmentAddress)
{
  uint32_t refCount = aIsMarked ? UINT32_MAX : 0;
  mResults.mVisitedGCed++;

  if (mListener) {
    mListener->NoteGCedObject((uint64_t)mCurrPi->mPointer, aIsMarked,
                              aObjName, aCompartmentAddress);
  }

  DescribeNode(refCount, aObjName);
}
NS_IMETHODIMP_(void)
CCGraphBuilder::NoteXPCOMChild(nsISupports* aChild)
{
  nsCString edgeName;
  if (WantDebugInfo()) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }
  if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
    return;
  }

  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aChild, &cp);
  if (cp && (!cp->CanSkipThis(aChild) || WantAllTraces())) {
    NoteChild(aChild, cp, edgeName);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNativeChild(void* aChild,
                                nsCycleCollectionParticipant* aParticipant)
{
  nsCString edgeName;
  if (WantDebugInfo()) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }
  if (!aChild) {
    return;
  }

  MOZ_ASSERT(aParticipant, "Need a nsCycleCollectionParticipant!");
  NoteChild(aChild, aParticipant, edgeName);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteJSChild(void* aChild)
{
  if (!aChild) {
    return;
  }

  nsCString edgeName;
  if (MOZ_UNLIKELY(WantDebugInfo())) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }

  if (xpc_GCThingIsGrayCCThing(aChild) || MOZ_UNLIKELY(WantAllTraces())) {
    if (JS::Zone* zone = MergeZone(aChild)) {
      NoteChild(zone, mJSZoneParticipant, edgeName);
    } else {
      NoteChild(aChild, mJSParticipant, edgeName);
    }
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNextEdgeName(const char* aName)
{
  if (WantDebugInfo()) {
    mNextEdgeName = aName;
  }
}
PtrInfo*
CCGraphBuilder::AddWeakMapNode(void* aNode)
{
  MOZ_ASSERT(aNode, "Weak map node should be non-null.");

  if (!xpc_GCThingIsGrayCCThing(aNode) && !WantAllTraces()) {
    return nullptr;
  }

  if (JS::Zone* zone = MergeZone(aNode)) {
    return AddNode(zone, mJSZoneParticipant);
  }
  return AddNode(aNode, mJSParticipant);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteWeakMapping(void* aMap, void* aKey, void* aKdelegate,
                                void* aVal)
{
  // Don't try to optimize away the entry here, as we've already attempted to
  // do that in TraceWeakMapping in nsXPConnect.
  WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
  mapping->mMap = aMap ? AddWeakMapNode(aMap) : nullptr;
  mapping->mKey = aKey ? AddWeakMapNode(aKey) : nullptr;
  mapping->mKeyDelegate = aKdelegate ? AddWeakMapNode(aKdelegate) : mapping->mKey;
  mapping->mVal = aVal ? AddWeakMapNode(aVal) : nullptr;

  if (mListener) {
    mListener->NoteWeakMapEntry((uint64_t)aMap, (uint64_t)aKey,
                                (uint64_t)aKdelegate, (uint64_t)aVal);
  }
}
static bool
AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
              nsCycleCollectionParticipant* aParti)
{
  CanonicalizeParticipant(&aRoot, &aParti);

  if (aBuilder.WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
    PtrInfo* pinfo = aBuilder.AddNode(aRoot, aParti);
    if (!pinfo) {
      return false;
    }
  }

  return true;
}
// MayHaveChild() will be false after a Traverse if the object does
// not have any children the CC will visit.
class ChildFinder : public nsCycleCollectionTraversalCallback
{
public:
  ChildFinder() : mMayHaveChild(false)
  {
  }

  // The logic of the Note*Child functions must mirror that of their
  // respective functions in CCGraphBuilder.
  NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild);
  NS_IMETHOD_(void) NoteNativeChild(void* aChild,
                                    nsCycleCollectionParticipant* aHelper);
  NS_IMETHOD_(void) NoteJSChild(void* aChild);

  NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt aRefcount,
                                           const char* aObjname)
  {
  }
  NS_IMETHOD_(void) DescribeGCedNode(bool aIsMarked,
                                     const char* aObjname,
                                     uint64_t aCompartmentAddress)
  {
  }
  NS_IMETHOD_(void) NoteNextEdgeName(const char* aName)
  {
  }

  bool MayHaveChild()
  {
    return mMayHaveChild;
  }

private:
  bool mMayHaveChild;
};

NS_IMETHODIMP_(void)
ChildFinder::NoteXPCOMChild(nsISupports* aChild)
{
  if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
    return;
  }
  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aChild, &cp);
  if (cp && !cp->CanSkip(aChild, true)) {
    mMayHaveChild = true;
  }
}

NS_IMETHODIMP_(void)
ChildFinder::NoteNativeChild(void* aChild,
                             nsCycleCollectionParticipant* aHelper)
{
  if (aChild) {
    mMayHaveChild = true;
  }
}

NS_IMETHODIMP_(void)
ChildFinder::NoteJSChild(void* aChild)
{
  if (aChild && xpc_GCThingIsGrayCCThing(aChild)) {
    mMayHaveChild = true;
  }
}

static bool
MayHaveChild(void* aObj, nsCycleCollectionParticipant* aCp)
{
  ChildFinder cf;
  aCp->Traverse(aObj, cf);
  return cf.MayHaveChild();
}
template<class T>
class SegmentedArrayElement
  : public LinkedListElement<SegmentedArrayElement<T>>
  , public AutoFallibleTArray<T, 60>
{
};

template<class T>
class SegmentedArray
{
public:
  ~SegmentedArray()
  {
    MOZ_ASSERT(IsEmpty());
  }

  void AppendElement(T& aElement)
  {
    SegmentedArrayElement<T>* last = mSegments.getLast();
    if (!last || last->Length() == last->Capacity()) {
      last = new SegmentedArrayElement<T>();
      mSegments.insertBack(last);
    }
    last->AppendElement(aElement);
  }

  void Clear()
  {
    SegmentedArrayElement<T>* first;
    while ((first = mSegments.popFirst())) {
      delete first;
    }
  }

  SegmentedArrayElement<T>* GetFirstSegment()
  {
    return mSegments.getFirst();
  }

  bool IsEmpty()
  {
    return !GetFirstSegment();
  }

private:
  mozilla::LinkedList<SegmentedArrayElement<T>> mSegments;
};
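// A toy version of the segmented-storage idea above, assuming a fixed
// per-segment capacity like the AutoFallibleTArray<T, 60> segments
// (`ToySegmentedArray` is an invention for illustration, not this file's
// type): appending never reallocates an existing segment, so previously
// stored elements never move.

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <vector>

template<class T, size_t SegmentCapacity = 60>
class ToySegmentedArray {
public:
  void AppendElement(const T& aElement) {
    // Start a fresh segment when the last one is full (or none exists yet).
    if (mSegments.empty() || mSegments.back().size() == SegmentCapacity) {
      mSegments.emplace_back();
      mSegments.back().reserve(SegmentCapacity);
    }
    mSegments.back().push_back(aElement);
  }

  size_t Length() const {
    size_t n = 0;
    for (const auto& seg : mSegments) {
      n += seg.size();
    }
    return n;
  }

  size_t SegmentCount() const { return mSegments.size(); }

private:
  std::list<std::vector<T>> mSegments; // each vector is one segment
};
```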
// JSPurpleBuffer keeps references to GCThings which might affect the
// next cycle collection. It is owned only by itself and during unlink its
// self reference is broken down and the object ends up killing itself.
// If GC happens before CC, references to GCthings and the self reference are
// removed.
class JSPurpleBuffer
{
  ~JSPurpleBuffer()
  {
    MOZ_ASSERT(mValues.IsEmpty());
    MOZ_ASSERT(mObjects.IsEmpty());
  }

public:
  explicit JSPurpleBuffer(JSPurpleBuffer*& aReferenceToThis)
    : mReferenceToThis(aReferenceToThis)
  {
    mReferenceToThis = this;
    mozilla::HoldJSObjects(this);
  }

  void Destroy()
  {
    mReferenceToThis = nullptr;
    mValues.Clear();
    mObjects.Clear();
    mozilla::DropJSObjects(this);
  }

  NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
  NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)

  JSPurpleBuffer*& mReferenceToThis;

  // These are raw pointers instead of Heap<T> because we only need Heap<T> for
  // pointers which may point into the nursery. The purple buffer never contains
  // pointers to the nursery because nursery gcthings can never be gray and only
  // gray things can be inserted into the purple buffer.
  SegmentedArray<JS::Value> mValues;
  SegmentedArray<JSObject*> mObjects;
};

NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)

NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
  tmp->Destroy();
NS_IMPL_CYCLE_COLLECTION_UNLINK_END

NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
  CycleCollectionNoteChild(cb, tmp, "self");
  NS_IMPL_CYCLE_COLLECTION_TRAVERSE_SCRIPT_OBJECTS
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END

#define NS_TRACE_SEGMENTED_ARRAY(_field, _type)                               \
  {                                                                           \
    auto segment = tmp->_field.GetFirstSegment();                             \
    while (segment) {                                                         \
      for (uint32_t i = segment->Length(); i > 0;) {                          \
        js::gc::CallTraceCallbackOnNonHeap<_type, TraceCallbacks>(            \
          &segment->ElementAt(--i), aCallbacks, #_field, aClosure);           \
      }                                                                       \
      segment = segment->getNext();                                           \
    }                                                                         \
  }

NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
  NS_TRACE_SEGMENTED_ARRAY(mValues, JS::Value)
  NS_TRACE_SEGMENTED_ARRAY(mObjects, JSObject*)
NS_IMPL_CYCLE_COLLECTION_TRACE_END

NS_IMPL_CYCLE_COLLECTION_ROOT_NATIVE(JSPurpleBuffer, AddRef)
NS_IMPL_CYCLE_COLLECTION_UNROOT_NATIVE(JSPurpleBuffer, Release)
struct SnowWhiteObject
{
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  nsCycleCollectingAutoRefCnt* mRefCnt;
};

class SnowWhiteKiller : public TraceCallbacks
{
public:
  SnowWhiteKiller(nsCycleCollector* aCollector, uint32_t aMaxCount)
    : mCollector(aCollector)
  {
    MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
    while (true) {
      if (mObjects.SetCapacity(aMaxCount)) {
        break;
      }
      if (aMaxCount == 1) {
        NS_RUNTIMEABORT("Not enough memory to even delete objects!");
      }
      aMaxCount /= 2;
    }
  }

  ~SnowWhiteKiller()
  {
    for (uint32_t i = 0; i < mObjects.Length(); ++i) {
      SnowWhiteObject& o = mObjects[i];
      if (!o.mRefCnt->get() && !o.mRefCnt->IsInPurpleBuffer()) {
        mCollector->RemoveObjectFromGraph(o.mPointer);
        o.mRefCnt->stabilizeForDeletion();
        o.mParticipant->Trace(o.mPointer, *this, nullptr);
        o.mParticipant->DeleteCycleCollectable(o.mPointer);
      }
    }
  }

  void
  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    if (!aEntry->mRefCnt->get()) {
      void* o = aEntry->mObject;
      nsCycleCollectionParticipant* cp = aEntry->mParticipant;
      CanonicalizeParticipant(&o, &cp);
      SnowWhiteObject swo = { o, cp, aEntry->mRefCnt };
      if (mObjects.AppendElement(swo)) {
        aBuffer.Remove(aEntry);
      }
    }
  }

  bool HasSnowWhiteObjects() const
  {
    return mObjects.Length() > 0;
  }

  virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
                     void* aClosure) const
  {
    JS::Value val = *aValue;
    if (val.isMarkable()) {
      void* thing = val.toGCThing();
      if (thing && xpc_GCThingIsGrayCCThing(thing)) {
        MOZ_ASSERT(!js::gc::IsInsideNursery((js::gc::Cell*)thing));
        mCollector->GetJSPurpleBuffer()->mValues.AppendElement(val);
      }
    }
  }

  virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
                     void* aClosure) const
  {
  }

  void AppendJSObjectToPurpleBuffer(JSObject* obj) const
  {
    if (obj && xpc_GCThingIsGrayCCThing(obj)) {
      MOZ_ASSERT(!js::gc::IsInsideNursery(JS::AsCell(obj)));
      mCollector->GetJSPurpleBuffer()->mObjects.AppendElement(obj);
    }
  }

  virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
                     void* aClosure) const
  {
    AppendJSObjectToPurpleBuffer(*aObject);
  }

  virtual void Trace(JS::TenuredHeap<JSObject*>* aObject, const char* aName,
                     void* aClosure) const
  {
    AppendJSObjectToPurpleBuffer(*aObject);
  }

  virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
                     void* aClosure) const
  {
  }

  virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
                     void* aClosure) const
  {
  }

  virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
                     void* aClosure) const
  {
  }

private:
  nsCycleCollector* mCollector;
  FallibleTArray<SnowWhiteObject> mObjects;
};
2708 class RemoveSkippableVisitor
: public SnowWhiteKiller
2711 RemoveSkippableVisitor(nsCycleCollector
* aCollector
,
2712 uint32_t aMaxCount
, bool aRemoveChildlessNodes
,
2713 bool aAsyncSnowWhiteFreeing
,
2714 CC_ForgetSkippableCallback aCb
)
2715 : SnowWhiteKiller(aCollector
, aAsyncSnowWhiteFreeing
? 0 : aMaxCount
)
2716 , mRemoveChildlessNodes(aRemoveChildlessNodes
)
2717 , mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing
)
2718 , mDispatchedDeferredDeletion(false)
  ~RemoveSkippableVisitor()
    // Note, we must call the callback before SnowWhiteKiller calls
    // DeleteCycleCollectable!
    if (HasSnowWhiteObjects()) {
      // Effectively a continuation.
      nsCycleCollector_dispatchDeferredDeletion(true);
  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
    MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
    if (!aEntry->mRefCnt->get()) {
      if (!mAsyncSnowWhiteFreeing) {
        SnowWhiteKiller::Visit(aBuffer, aEntry);
      } else if (!mDispatchedDeferredDeletion) {
        mDispatchedDeferredDeletion = true;
        nsCycleCollector_dispatchDeferredDeletion(false);

    void* o = aEntry->mObject;
    nsCycleCollectionParticipant* cp = aEntry->mParticipant;
    CanonicalizeParticipant(&o, &cp);
    if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
        (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {

    aBuffer.Remove(aEntry);

  bool mRemoveChildlessNodes;
  bool mAsyncSnowWhiteFreeing;
  bool mDispatchedDeferredDeletion;
  CC_ForgetSkippableCallback mCallback;
nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
                                bool aRemoveChildlessNodes,
                                bool aAsyncSnowWhiteFreeing,
                                CC_ForgetSkippableCallback aCb)
  RemoveSkippableVisitor visitor(aCollector, Count(), aRemoveChildlessNodes,
                                 aAsyncSnowWhiteFreeing, aCb);
  VisitEntries(visitor);
nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer)
  CheckThreadSafety();

  if (mFreeingSnowWhite) {

  AutoRestore<bool> ar(mFreeingSnowWhite);
  mFreeingSnowWhite = true;

  bool hadSnowWhiteObjects = false;
    SnowWhiteKiller visitor(this, mPurpleBuf.Count());
    mPurpleBuf.VisitEntries(visitor);
    hadSnowWhiteObjects = hadSnowWhiteObjects ||
                          visitor.HasSnowWhiteObjects();
    if (!visitor.HasSnowWhiteObjects()) {
  } while (aUntilNoSWInPurpleBuffer);
  return hadSnowWhiteObjects;
nsCycleCollector::ForgetSkippable(bool aRemoveChildlessNodes,
                                  bool aAsyncSnowWhiteFreeing)
  CheckThreadSafety();

  // If we remove things from the purple buffer during graph building, we may
  // lose track of an object that was mutated during graph building.
  MOZ_ASSERT(mIncrementalPhase == IdlePhase);

  mJSRuntime->PrepareForForgetSkippable();

  MOZ_ASSERT(!mScanInProgress,
             "Don't forget skippable or free snow-white while scan is in progress.");
  mPurpleBuf.RemoveSkippable(this, aRemoveChildlessNodes,
                             aAsyncSnowWhiteFreeing, mForgetSkippableCB);
MOZ_NEVER_INLINE void
nsCycleCollector::MarkRoots(SliceBudget& aBudget)
  const intptr_t kNumNodesBetweenTimeChecks = 1000;
  const intptr_t kStep = SliceBudget::CounterReset / kNumNodesBetweenTimeChecks;

  AutoRestore<bool> ar(mScanInProgress);
  MOZ_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);
  MOZ_ASSERT(mCurrNode);

  while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
    PtrInfo* pi = mCurrNode->GetNext();

    // We need to call the builder's Traverse() method on deleted nodes, to
    // set their firstChild() that may be read by a prior non-deleted
    // node.
    mBuilder->Traverse(pi);
    if (mCurrNode->AtBlockEnd()) {
      mBuilder->SetLastChild();
    aBudget.step(kStep);

  if (!mCurrNode->IsDone()) {
    timeLog.Checkpoint("MarkRoots()");

  if (mGraph.mRootCount > 0) {
    mBuilder->SetLastChild();

  if (mBuilder->RanOutOfMemory()) {
    MOZ_ASSERT(false, "Ran out of memory while building cycle collector graph");
    CC_TELEMETRY(_OOM, true);

  mCurrNode = nullptr;
  mIncrementalPhase = ScanAndCollectWhitePhase;
  timeLog.Checkpoint("MarkRoots()");
////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |ScanRoots| routine.
////////////////////////////////////////////////////////////////////////

struct ScanBlackVisitor

  ScanBlackVisitor(uint32_t& aWhiteNodeCount, bool& aFailed)
    : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed)

  bool ShouldVisitNode(PtrInfo const* aPi)
    return aPi->mColor != black;

  MOZ_NEVER_INLINE void VisitNode(PtrInfo* aPi)
    if (aPi->mColor == white) {
    aPi->mColor = black;

  uint32_t& mWhiteNodeCount;

FloodBlackNode(uint32_t& aWhiteNodeCount, bool& aFailed, PtrInfo* aPi)
  GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(aWhiteNodeCount,
                                                 aFailed)).Walk(aPi);
  MOZ_ASSERT(aPi->mColor == black || !aPi->WasTraversed(),
             "FloodBlackNode should make aPi black");
// Iterate over the WeakMaps. If we mark anything while iterating
// over the WeakMaps, we must iterate over all of the WeakMaps again.
nsCycleCollector::ScanWeakMaps()
  bool failed = false;
    for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
      WeakMapping* wm = &mGraph.mWeakMaps[i];

      // If any of these are null, the original object was marked black.
      uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
      uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
      uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
      uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;

      MOZ_ASSERT(mColor != grey, "Uncolored weak map");
      MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
      MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
      MOZ_ASSERT(vColor != grey, "Uncolored weak map value");

      if (mColor == black && kColor != black && kdColor == black) {
        FloodBlackNode(mWhiteNodeCount, failed, wm->mKey);

      if (mColor == black && kColor == black && vColor != black) {
        FloodBlackNode(mWhiteNodeCount, failed, wm->mVal);
  } while (anyChanged);

    MOZ_ASSERT(false, "Ran out of memory in ScanWeakMaps");
    CC_TELEMETRY(_OOM, true);
// Flood black from any objects in the purple buffer that are in the CC graph.
class PurpleScanBlackVisitor

  PurpleScanBlackVisitor(CCGraph& aGraph, nsICycleCollectorListener* aListener,
                         uint32_t& aCount, bool& aFailed)
    : mGraph(aGraph), mListener(aListener), mCount(aCount), mFailed(aFailed)

  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
    MOZ_ASSERT(aEntry->mObject,
               "Entries with null mObject shouldn't be in the purple buffer.");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "Snow-white objects shouldn't be in the purple buffer.");

    void* obj = aEntry->mObject;
    if (!aEntry->mParticipant) {
      obj = CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj));
      MOZ_ASSERT(obj, "Don't add objects that don't participate in collection!");

    PtrInfo* pi = mGraph.FindNode(obj);

    MOZ_ASSERT(pi->mParticipant, "No dead objects should be in the purple buffer.");
    if (MOZ_UNLIKELY(mListener)) {
      mListener->NoteIncrementalRoot((uint64_t)pi->mPointer);

    if (pi->mColor == black) {

    FloodBlackNode(mCount, mFailed, pi);

  nsICycleCollectorListener* mListener;
// Objects that have been stored somewhere since the start of incremental
// graph building must be treated as live for this cycle collection, because
// we may not have accurate information about who holds references to them.
nsCycleCollector::ScanIncrementalRoots()
  // Reference counted objects:
  // We cleared the purple buffer at the start of the current ICC, so if a
  // refcounted object is purple, it may have been AddRef'd during the current
  // ICC. (It may also have only been released.) If that is the case, we cannot
  // be sure that the set of things pointing to the object in the CC graph
  // is accurate. Therefore, for safety, we treat any purple objects as being
  // live during the current CC. We don't remove anything from the purple
  // buffer here, so these objects will be suspected and freed in the next CC
  // if they are garbage.
  bool failed = false;
  PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mListener,
                                                mWhiteNodeCount, failed);
  mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
  timeLog.Checkpoint("ScanIncrementalRoots::fix purple");

  bool hasJSRuntime = !!mJSRuntime;
  nsCycleCollectionParticipant* jsParticipant =
    hasJSRuntime ? mJSRuntime->GCThingParticipant() : nullptr;
  nsCycleCollectionParticipant* zoneParticipant =
    hasJSRuntime ? mJSRuntime->ZoneParticipant() : nullptr;
  bool hasListener = !!mListener;

  NodePool::Enumerator etor(mGraph.mNodes);
  while (!etor.IsDone()) {
    PtrInfo* pi = etor.GetNext();

    // As an optimization, if an object has already been determined to be live,
    // don't consider it further. We can't do this if there is a listener,
    // because the listener wants to know the complete set of incremental roots.
    if (pi->mColor == black && MOZ_LIKELY(!hasListener)) {

    // Garbage collected objects:
    // If a GCed object was added to the graph with a refcount of zero, and is
    // now marked black by the GC, it was probably gray before and was exposed
    // to active JS, so it may have been stored somewhere, so it needs to be
    // treated as live.
    if (pi->IsGrayJS() && MOZ_LIKELY(hasJSRuntime)) {
      // If the object is still marked gray by the GC, nothing could have gotten
      // hold of it, so it isn't an incremental root.
      if (pi->mParticipant == jsParticipant) {
        if (xpc_GCThingIsGrayCCThing(pi->mPointer)) {
      } else if (pi->mParticipant == zoneParticipant) {
        JS::Zone* zone = static_cast<JS::Zone*>(pi->mPointer);
        if (js::ZoneGlobalsAreAllGray(zone)) {
        MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
    } else if (!pi->mParticipant && pi->WasTraversed()) {
      // Dead traversed refcounted objects:
      // If the object was traversed, it must have been alive at the start of
      // the CC, and thus had a positive refcount. It is dead now, so its
      // refcount must have decreased at some point during the CC. Therefore,
      // it would be in the purple buffer if it wasn't dead, so treat it as an
      // incremental root.
      //
      // This should not cause leaks because as the object died it should have
      // released anything it held onto, which will add them to the purple
      // buffer, which will cause them to be considered in the next CC.

    // At this point, pi must be an incremental root.

    // If there's a listener, tell it about this root. We don't bother with the
    // optimization of skipping the Walk() if pi is black: it will just return
    // without doing anything and there's no need to make this case faster.
    if (MOZ_UNLIKELY(hasListener) && pi->mPointer) {
      // Dead objects aren't logged. See bug 1031370.
      mListener->NoteIncrementalRoot((uint64_t)pi->mPointer);

    FloodBlackNode(mWhiteNodeCount, failed, pi);

  timeLog.Checkpoint("ScanIncrementalRoots::fix nodes");

    NS_ASSERTION(false, "Ran out of memory in ScanIncrementalRoots");
    CC_TELEMETRY(_OOM, true);
// Mark nodes white and make sure their refcounts are ok.
// No nodes are marked black during this pass to ensure that refcount
// checking is run on all nodes not marked black by ScanIncrementalRoots.
nsCycleCollector::ScanWhiteNodes(bool aFullySynchGraphBuild)
  NodePool::Enumerator nodeEnum(mGraph.mNodes);
  while (!nodeEnum.IsDone()) {
    PtrInfo* pi = nodeEnum.GetNext();
    if (pi->mColor == black) {
      // Incremental roots can be in a nonsensical state, so don't
      // check them. This will miss checking nodes that are merely
      // reachable from incremental roots.
      MOZ_ASSERT(!aFullySynchGraphBuild,
                 "In a synch CC, no nodes should be marked black early on.");

    MOZ_ASSERT(pi->mColor == grey);

    if (!pi->WasTraversed()) {
      // This node was deleted before it was traversed, so there's no reason
      // to look at it.
      MOZ_ASSERT(!pi->mParticipant, "Live nodes should all have been traversed");

    if (pi->mInternalRefs == pi->mRefCount || pi->IsGrayJS()) {

    if (MOZ_LIKELY(pi->mInternalRefs < pi->mRefCount)) {
      // This node will get marked black in the next pass.

    Fault("Traversed refs exceed refcount", pi);
// Any remaining grey nodes that haven't already been deleted must be alive,
// so mark them and their children black. Any nodes that are black must have
// already had their children marked black, so there's no need to look at them
// again. This pass may turn some white nodes to black.
nsCycleCollector::ScanBlackNodes()
  bool failed = false;
  NodePool::Enumerator nodeEnum(mGraph.mNodes);
  while (!nodeEnum.IsDone()) {
    PtrInfo* pi = nodeEnum.GetNext();
    if (pi->mColor == grey && pi->WasTraversed()) {
      FloodBlackNode(mWhiteNodeCount, failed, pi);

    NS_ASSERTION(false, "Ran out of memory in ScanBlackNodes");
    CC_TELEMETRY(_OOM, true);
nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild)
  AutoRestore<bool> ar(mScanInProgress);
  MOZ_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mWhiteNodeCount = 0;
  MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

  if (!aFullySynchGraphBuild) {
    ScanIncrementalRoots();

  ScanWhiteNodes(aFullySynchGraphBuild);
  timeLog.Checkpoint("ScanRoots::ScanWhiteNodes");

  timeLog.Checkpoint("ScanRoots::ScanBlackNodes");

  // Scanning weak maps must be done last.
  timeLog.Checkpoint("ScanRoots::ScanWeakMaps");

    mListener->BeginResults();

    NodePool::Enumerator etor(mGraph.mNodes);
    while (!etor.IsDone()) {
      PtrInfo* pi = etor.GetNext();
      if (!pi->WasTraversed()) {
      switch (pi->mColor) {
          if (!pi->IsGrayJS() && !pi->IsBlackJS() &&
              pi->mInternalRefs != pi->mRefCount) {
            mListener->DescribeRoot((uint64_t)pi->mPointer,
          mListener->DescribeGarbage((uint64_t)pi->mPointer);
          MOZ_ASSERT(false, "All traversed objects should be black or white");

    mListener = nullptr;
    timeLog.Checkpoint("ScanRoots::listener");
////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |CollectWhite| routine, somewhat modified.
////////////////////////////////////////////////////////////////////////

nsCycleCollector::CollectWhite()
  // Explanation of "somewhat modified": we have no way to collect the
  // set of whites "all at once", we have to ask each of them to drop
  // their outgoing links and assume this will cause the garbage cycle
  // to *mostly* self-destruct (except for the reference we continue
  // to hold).
  //
  // To do this "safely" we must make sure that the white nodes we're
  // operating on are stable for the duration of our operation. So we
  // make 3 sets of calls to language runtimes:
  //
  //   - Root(whites), which should pin the whites in memory.
  //   - Unlink(whites), which drops outgoing links on each white.
  //   - Unroot(whites), which returns the whites to normal GC.
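  //
  // An illustrative sketch (not part of the original code) of those three
  // passes over a hypothetical |whites| array, using the participant calls
  // made by the loops below. Each phase runs to completion over all whites
  // before the next begins, which is what keeps the nodes stable:
  //
  //   for (pi : whites) pi->mParticipant->Root(pi->mPointer);    // pin
  //   for (pi : whites) pi->mParticipant->Unlink(pi->mPointer);  // cut edges
  //   for (pi : whites) pi->mParticipant->Unroot(pi->mPointer);  // release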
  nsAutoTArray<PtrInfo*, 4000> whiteNodes;

  MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

  whiteNodes.SetCapacity(mWhiteNodeCount);
  uint32_t numWhiteGCed = 0;

  NodePool::Enumerator etor(mGraph.mNodes);
  while (!etor.IsDone()) {
    PtrInfo* pinfo = etor.GetNext();
    if (pinfo->mColor == white && pinfo->mParticipant) {
      whiteNodes.AppendElement(pinfo);
      pinfo->mParticipant->Root(pinfo->mPointer);
      if (pinfo->IsGrayJS()) {

  uint32_t count = whiteNodes.Length();
  MOZ_ASSERT(numWhiteGCed <= count,
             "More freed GCed nodes than total freed nodes.");
  mResults.mFreedRefCounted += count - numWhiteGCed;
  mResults.mFreedGCed += numWhiteGCed;

  timeLog.Checkpoint("CollectWhite::Root");

  if (mBeforeUnlinkCB) {
    timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");

  for (uint32_t i = 0; i < count; ++i) {
    PtrInfo* pinfo = whiteNodes.ElementAt(i);
    MOZ_ASSERT(pinfo->mParticipant,
               "Unlink shouldn't see objects removed from graph.");
    pinfo->mParticipant->Unlink(pinfo->mPointer);
      mJSRuntime->AssertNoObjectsToTrace(pinfo->mPointer);
  timeLog.Checkpoint("CollectWhite::Unlink");

  for (uint32_t i = 0; i < count; ++i) {
    PtrInfo* pinfo = whiteNodes.ElementAt(i);
    MOZ_ASSERT(pinfo->mParticipant,
               "Unroot shouldn't see objects removed from graph.");
    pinfo->mParticipant->Unroot(pinfo->mPointer);
  timeLog.Checkpoint("CollectWhite::Unroot");

  nsCycleCollector_dispatchDeferredDeletion(false);
  timeLog.Checkpoint("CollectWhite::dispatchDeferredDeletion");

  mIncrementalPhase = CleanupPhase;
////////////////////////
// Memory reporting
////////////////////////

MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)

nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
                                 nsISupports* aData, bool aAnonymize)
  size_t objectSize, graphNodesSize, graphEdgesSize, weakMapsSize,
  SizeOfIncludingThis(CycleCollectorMallocSizeOf,
                      &graphNodesSize, &graphEdgesSize,

#define REPORT(_path, _amount, _desc)                                        \
    size_t amount = _amount;  /* evaluate |_amount| only once */             \
    rv = aHandleReport->Callback(EmptyCString(),                             \
                                 NS_LITERAL_CSTRING(_path),                  \
                                 KIND_HEAP, UNITS_BYTES, _amount,            \
                                 NS_LITERAL_CSTRING(_desc),                  \
    if (NS_WARN_IF(NS_FAILED(rv)))                                           \

  REPORT("explicit/cycle-collector/collector-object", objectSize,
         "Memory used for the cycle collector object itself.");

  REPORT("explicit/cycle-collector/graph-nodes", graphNodesSize,
         "Memory used for the nodes of the cycle collector's graph. "
         "This should be zero when the collector is idle.");

  REPORT("explicit/cycle-collector/graph-edges", graphEdgesSize,
         "Memory used for the edges of the cycle collector's graph. "
         "This should be zero when the collector is idle.");

  REPORT("explicit/cycle-collector/weak-maps", weakMapsSize,
         "Memory used for the representation of weak maps in the "
         "cycle collector's graph. "
         "This should be zero when the collector is idle.");

  REPORT("explicit/cycle-collector/purple-buffer", purpleBufferSize,
         "Memory used for the cycle collector's purple buffer.");
////////////////////////////////////////////////////////////////////////
// Collector implementation
////////////////////////////////////////////////////////////////////////

nsCycleCollector::nsCycleCollector() :
  mActivelyCollecting(false),
  mFreeingSnowWhite(false),
  mScanInProgress(false),
  mJSRuntime(nullptr),
  mIncrementalPhase(IdlePhase),
  mThread(NS_GetCurrentThread()),
  mBeforeUnlinkCB(nullptr),
  mForgetSkippableCB(nullptr),
  mJSPurpleBuffer(nullptr)
nsCycleCollector::~nsCycleCollector()
  UnregisterWeakMemoryReporter(this);

nsCycleCollector::RegisterJSRuntime(CycleCollectedJSRuntime* aJSRuntime)
    Fault("multiple registrations of cycle collector JS runtime", aJSRuntime);

  mJSRuntime = aJSRuntime;

  // We can't register as a reporter in nsCycleCollector() because that runs
  // before the memory reporter manager is initialized. So we do it here
  // instead.
  static bool registered = false;
    RegisterWeakMemoryReporter(this);

nsCycleCollector::ForgetJSRuntime()
    Fault("forgetting non-registered cycle collector JS runtime");

  mJSRuntime = nullptr;
HasParticipant(void* aPtr, nsCycleCollectionParticipant* aParti)
  nsXPCOMCycleCollectionParticipant* xcp;
  ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
  return xcp != nullptr;

MOZ_ALWAYS_INLINE void
nsCycleCollector::Suspect(void* aPtr, nsCycleCollectionParticipant* aParti,
                          nsCycleCollectingAutoRefCnt* aRefCnt)
  CheckThreadSafety();

  // Re-entering ::Suspect during collection used to be a fault, but
  // we are canonicalizing nsISupports pointers using QI, so we will
  // see some spurious refcount traffic here.
  if (MOZ_UNLIKELY(mScanInProgress)) {

  MOZ_ASSERT(aPtr, "Don't suspect null pointers");

  MOZ_ASSERT(HasParticipant(aPtr, aParti),
             "Suspected nsISupports pointer must QI to nsXPCOMCycleCollectionParticipant");

  mPurpleBuf.Put(aPtr, aParti, aRefCnt);
nsCycleCollector::CheckThreadSafety()
  nsIThread* currentThread = NS_GetCurrentThread();
  // XXXkhuey we can be called so late in shutdown that NS_GetCurrentThread
  // returns null (after the thread manager has shut down)
  MOZ_ASSERT(mThread == currentThread || !currentThread);

// The cycle collector uses the mark bitmap to discover what JS objects
// were reachable only from XPConnect roots that might participate in
// cycles. We ask the JS runtime whether we need to force a GC before
// this CC. It returns true on startup (before the mark bits have been set),
// and also when UnmarkGray has run out of stack. We also force GCs on
// shutdown to collect cycles involving both DOM and JS.
nsCycleCollector::FixGrayBits(bool aForceGC)
  CheckThreadSafety();

  mJSRuntime->FixWeakMappingGrayBits();

  bool needGC = !mJSRuntime->AreGCGrayBitsValid();
  // Only do a telemetry ping for non-shutdown CCs.
  CC_TELEMETRY(_NEED_GC, needGC);

  mResults.mForcedGC = true;

  mJSRuntime->GarbageCollect(aForceGC ? JS::gcreason::SHUTDOWN_CC :
                                        JS::gcreason::CC_FORCED);
  timeLog.Checkpoint("GC()");
nsCycleCollector::CleanupAfterCollection()
  MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
  timeLog.Checkpoint("CleanupAfterCollection::mGraph.Clear()");

    (uint32_t)((TimeStamp::Now() - mCollectionStart).ToMilliseconds());
#ifdef COLLECT_TIME_DEBUG
  printf("cc: total cycle collector time was %ums in %u slices\n", interval,
         mResults.mNumSlices);
  printf("cc: visited %u ref counted and %u GCed objects, "
         "freed %d ref counted and %d GCed objects",
         mResults.mVisitedRefCounted, mResults.mVisitedGCed,
         mResults.mFreedRefCounted, mResults.mFreedGCed);
  uint32_t numVisited = mResults.mVisitedRefCounted + mResults.mVisitedGCed;
  if (numVisited > 1000) {
    uint32_t numFreed = mResults.mFreedRefCounted + mResults.mFreedGCed;
    printf(" (%d%%)", 100 * numFreed / numVisited);
  printf(".\ncc: \n");

  CC_TELEMETRY( , interval);
  CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
  CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
  CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);
  timeLog.Checkpoint("CleanupAfterCollection::telemetry");

  mJSRuntime->EndCycleCollectionCallback(mResults);
  timeLog.Checkpoint("CleanupAfterCollection::EndCycleCollectionCallback()");

  mIncrementalPhase = IdlePhase;
nsCycleCollector::ShutdownCollect()
  SliceBudget unlimitedBudget;
  for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS; ++i) {
    if (!Collect(ShutdownCC, unlimitedBudget, nullptr)) {
  NS_WARN_IF_FALSE(i < NORMAL_SHUTDOWN_COLLECTIONS, "Extra shutdown CC");

PrintPhase(const char* aPhase)
  printf("cc: begin %s on %s\n", aPhase,
         NS_IsMainThread() ? "mainthread" : "worker");
nsCycleCollector::Collect(ccType aCCType,
                          SliceBudget& aBudget,
                          nsICycleCollectorListener* aManualListener)
  CheckThreadSafety();

  // This can legitimately happen in a few cases. See bug 383651.
  if (mActivelyCollecting || mFreeingSnowWhite) {

  mActivelyCollecting = true;

  bool startedIdle = (mIncrementalPhase == IdlePhase);
  bool collectedAny = false;

  // If the CC started idle, it will call BeginCollection, which
  // will do FreeSnowWhite, so it doesn't need to be done here.
    FreeSnowWhite(true);
    timeLog.Checkpoint("Collect::FreeSnowWhite");

    ++mResults.mNumSlices;

    bool continueSlice = true;
    switch (mIncrementalPhase) {
        PrintPhase("BeginCollection");
        BeginCollection(aCCType, aManualListener);
      case GraphBuildingPhase:
        PrintPhase("MarkRoots");

        // Only continue this slice if we're running synchronously or the
        // next phase will probably be short, to reduce the max pause for this
        // collection.
        // (There's no need to check if we've finished graph building, because
        // if we haven't, we've already exceeded our budget, and will finish
        // this slice anyways.)
        continueSlice = aBudget.isUnlimited() || mResults.mNumSlices < 3;
      case ScanAndCollectWhitePhase:
        // We do ScanRoots and CollectWhite in a single slice to ensure
        // that we won't unlink a live object if a weak reference is
        // promoted to a strong reference after ScanRoots has finished.
        PrintPhase("ScanRoots");
        ScanRoots(startedIdle);
        PrintPhase("CollectWhite");
        collectedAny = CollectWhite();
        PrintPhase("CleanupAfterCollection");
        CleanupAfterCollection();
        continueSlice = false;
    if (continueSlice) {
      continueSlice = !aBudget.checkOverBudget();
  } while (continueSlice);

  // Clear mActivelyCollecting here to ensure that a recursive call to
  // Collect() does something.
  mActivelyCollecting = false;

  if (aCCType != SliceCC && !startedIdle) {
    // We were in the middle of an incremental CC (using its own listener).
    // Somebody has forced a CC, so after having finished out the current CC,
    // run the CC again using the new listener.
    MOZ_ASSERT(mIncrementalPhase == IdlePhase);
    if (Collect(aCCType, aBudget, aManualListener)) {
      collectedAny = true;

  MOZ_ASSERT_IF(aCCType != SliceCC, mIncrementalPhase == IdlePhase);

  return collectedAny;
// Any JS objects we have in the graph could die when we GC, but we
// don't want to abandon the current CC, because the graph contains
// information about purple roots. So we synchronously finish off
// the current CC.
nsCycleCollector::PrepareForGarbageCollection()
  if (mIncrementalPhase == IdlePhase) {
    MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
    MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
    if (mJSPurpleBuffer) {
      mJSPurpleBuffer->Destroy();

  FinishAnyCurrentCollection();

nsCycleCollector::FinishAnyCurrentCollection()
  if (mIncrementalPhase == IdlePhase) {

  SliceBudget unlimitedBudget;
  PrintPhase("FinishAnyCurrentCollection");
  // Use SliceCC because we only want to finish the CC in progress.
  Collect(SliceCC, unlimitedBudget, nullptr);
  MOZ_ASSERT(mIncrementalPhase == IdlePhase);
// Don't merge too many times in a row, and do at least a minimum
// number of unmerged CCs in a row.
static const uint32_t kMinConsecutiveUnmerged = 3;
static const uint32_t kMaxConsecutiveMerged = 3;

nsCycleCollector::ShouldMergeZones(ccType aCCType)
  MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
  MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);

  if (mMergedInARow == kMaxConsecutiveMerged) {
    MOZ_ASSERT(mUnmergedNeeded == 0);
    mUnmergedNeeded = kMinConsecutiveUnmerged;

  if (mUnmergedNeeded > 0) {

  if (aCCType == SliceCC && mJSRuntime->UsefulToMergeZones()) {
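// Illustrative sketch (not part of the original code) of the throttling
// policy above: at most kMaxConsecutiveMerged merged CCs in a row, then
// at least kMinConsecutiveUnmerged unmerged ones before merging is allowed
// again. With both constants set to 3, a long run of slice CCs where
// merging is otherwise useful alternates like this:
//
//   merged:   M M M U U U M M M U U U ...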
nsCycleCollector::BeginCollection(ccType aCCType,
                                  nsICycleCollectorListener* aManualListener)
  MOZ_ASSERT(mIncrementalPhase == IdlePhase);

  mCollectionStart = TimeStamp::Now();

  mJSRuntime->BeginCycleCollectionCallback();
  timeLog.Checkpoint("BeginCycleCollectionCallback()");

  bool isShutdown = (aCCType == ShutdownCC);

  // Set up the listener for this CC.
  MOZ_ASSERT_IF(isShutdown, !aManualListener);
  MOZ_ASSERT(!mListener, "Forgot to clear a previous listener?");
  mListener = aManualListener;
  aManualListener = nullptr;
  if (!mListener && mParams.LogThisCC(isShutdown)) {
    nsRefPtr<nsCycleCollectorLogger> logger = new nsCycleCollectorLogger();
    if (mParams.AllTracesThisCC(isShutdown)) {
      logger->SetAllTraces();
    mListener = logger.forget();

  bool forceGC = isShutdown;
  if (!forceGC && mListener) {
    // On a WantAllTraces CC, force a synchronous global GC to prevent
    // hijinks from ForgetSkippable and compartmental GCs.
    mListener->GetWantAllTraces(&forceGC);
  FixGrayBits(forceGC);

  FreeSnowWhite(true);

  if (mListener && NS_FAILED(mListener->Begin())) {
    mListener = nullptr;

  // Set up the data structures for building the graph.
  bool mergeZones = ShouldMergeZones(aCCType);
  mResults.mMergedZones = mergeZones;

  MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
  mBuilder = new CCGraphBuilder(mGraph, mResults, mJSRuntime, mListener,

  mJSRuntime->TraverseRoots(*mBuilder);
  timeLog.Checkpoint("mJSRuntime->TraverseRoots()");

  AutoRestore<bool> ar(mScanInProgress);
  MOZ_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mPurpleBuf.SelectPointers(*mBuilder);
  timeLog.Checkpoint("SelectPointers()");

  // We've finished adding roots, and everything in the graph is a root.
  mGraph.mRootCount = mGraph.MapCount();

  mCurrNode = new NodePool::Enumerator(mGraph.mNodes);
  mIncrementalPhase = GraphBuildingPhase;
nsCycleCollector::SuspectedCount()
  CheckThreadSafety();
  return mPurpleBuf.Count();

nsCycleCollector::Shutdown()
  CheckThreadSafety();

  // Always delete snow white objects.
  FreeSnowWhite(true);

  if (PR_GetEnv("MOZ_CC_RUN_DURING_SHUTDOWN"))

nsCycleCollector::RemoveObjectFromGraph(void* aObj)
  if (mIncrementalPhase == IdlePhase) {

  if (PtrInfo* pinfo = mGraph.FindNode(aObj)) {
    mGraph.RemoveNodeFromMap(aObj);
    pinfo->mPointer = nullptr;
    pinfo->mParticipant = nullptr;
nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                                      size_t* aObjectSize,
                                      size_t* aGraphNodesSize,
                                      size_t* aGraphEdgesSize,
                                      size_t* aWeakMapsSize,
                                      size_t* aPurpleBufferSize) const
  *aObjectSize = aMallocSizeOf(this);

  mGraph.SizeOfExcludingThis(aMallocSizeOf, aGraphNodesSize, aGraphEdgesSize,

  *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);

  // These fields are deliberately not measured:
  // - mJSRuntime: because it's non-owning and measured by JS reporters.
  // - mParams: because it only contains scalars.
nsCycleCollector::GetJSPurpleBuffer()
  if (!mJSPurpleBuffer) {
    // The Release call here confuses the GC analysis.
    JS::AutoSuppressGCAnalysis nogc;
    // JSPurpleBuffer keeps itself alive, but we need to create it in such a
    // way that it ends up in the normal purple buffer. That happens when
    // nsRefPtr goes out of the scope and calls Release.
    nsRefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
  return mJSPurpleBuffer;
////////////////////////////////////////////////////////////////////////
// Module public API (exported in nsCycleCollector.h)
// Just functions that redirect into the singleton, once it's built.
////////////////////////////////////////////////////////////////////////

nsCycleCollector_registerJSRuntime(CycleCollectedJSRuntime* aRt)
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data->mCollector);
  // But we shouldn't already have a runtime.
  MOZ_ASSERT(!data->mRuntime);

  data->mRuntime = aRt;
  data->mCollector->RegisterJSRuntime(aRt);
nsCycleCollector_forgetJSRuntime()
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  // And we shouldn't have already forgotten our runtime.
  MOZ_ASSERT(data->mRuntime);

  // But it may have shut down already.
  if (data->mCollector) {
    data->mCollector->ForgetJSRuntime();
    data->mRuntime = nullptr;
  } else {
    data->mRuntime = nullptr;
    sCollectorData.set(nullptr);
/* static */ CycleCollectedJSRuntime*
CycleCollectedJSRuntime::Get()
{
  CollectorData* data = sCollectorData.get();
  if (data) {
    return data->mRuntime;
  }
  return nullptr;
}
namespace mozilla {
namespace cyclecollector {
void
HoldJSObjectsImpl(void* aHolder, nsScriptObjectTracer* aTracer)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  // And we should have a runtime.
  MOZ_ASSERT(data->mRuntime);

  data->mRuntime->AddJSHolder(aHolder, aTracer);
}
void
HoldJSObjectsImpl(nsISupports* aHolder)
{
  nsXPCOMCycleCollectionParticipant* participant;
  CallQueryInterface(aHolder, &participant);
  MOZ_ASSERT(participant, "Failed to QI to nsXPCOMCycleCollectionParticipant!");
  MOZ_ASSERT(participant->CheckForRightISupports(aHolder),
             "The result of QIing a JS holder should be the same as ToSupports");

  HoldJSObjectsImpl(aHolder, participant);
}
void
DropJSObjectsImpl(void* aHolder)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now, and not completely
  // shut down.
  MOZ_ASSERT(data);
  // And we should have a runtime.
  MOZ_ASSERT(data->mRuntime);

  data->mRuntime->RemoveJSHolder(aHolder);
}
void
DropJSObjectsImpl(nsISupports* aHolder)
{
#ifdef DEBUG
  nsXPCOMCycleCollectionParticipant* participant;
  CallQueryInterface(aHolder, &participant);
  MOZ_ASSERT(participant, "Failed to QI to nsXPCOMCycleCollectionParticipant!");
  MOZ_ASSERT(participant->CheckForRightISupports(aHolder),
             "The result of QIing a JS holder should be the same as ToSupports");
#endif
  DropJSObjectsImpl(static_cast<void*>(aHolder));
}
bool
IsJSHolder(void* aHolder)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now, and not completely
  // shut down.
  MOZ_ASSERT(data);
  // And we should have a runtime.
  MOZ_ASSERT(data->mRuntime);

  return data->mRuntime->IsJSHolder(aHolder);
}
void
DeferredFinalize(nsISupports* aSupports)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now, and not completely
  // shut down.
  MOZ_ASSERT(data);
  // And we should have a runtime.
  MOZ_ASSERT(data->mRuntime);

  data->mRuntime->DeferredFinalize(aSupports);
}
void
DeferredFinalize(DeferredFinalizeAppendFunction aAppendFunc,
                 DeferredFinalizeFunction aFunc,
                 void* aThing)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now, and not completely
  // shut down.
  MOZ_ASSERT(data);
  // And we should have a runtime.
  MOZ_ASSERT(data->mRuntime);

  data->mRuntime->DeferredFinalize(aAppendFunc, aFunc, aThing);
}
} // namespace cyclecollector
} // namespace mozilla
MOZ_NEVER_INLINE static void
SuspectAfterShutdown(void* aPtr, nsCycleCollectionParticipant* aCp,
                     nsCycleCollectingAutoRefCnt* aRefCnt,
                     bool* aShouldDelete)
{
  if (aRefCnt->get() == 0) {
    if (!aShouldDelete) {
      // The CC is shut down, so we can't be in the middle of an ICC.
      CanonicalizeParticipant(&aPtr, &aCp);
      aRefCnt->stabilizeForDeletion();
      aCp->DeleteCycleCollectable(aPtr);
    } else {
      *aShouldDelete = true;
    }
  } else {
    // Make sure we'll get called again.
    aRefCnt->RemoveFromPurpleBuffer();
  }
}
void
NS_CycleCollectorSuspect3(void* aPtr, nsCycleCollectionParticipant* aCp,
                          nsCycleCollectingAutoRefCnt* aRefCnt,
                          bool* aShouldDelete)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  if (MOZ_LIKELY(data->mCollector)) {
    data->mCollector->Suspect(aPtr, aCp, aRefCnt);
    return;
  }
  SuspectAfterShutdown(aPtr, aCp, aRefCnt, aShouldDelete);
}
uint32_t
nsCycleCollector_suspectedCount()
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return 0;
  }

  return data->mCollector->SuspectedCount();
}
bool
nsCycleCollector_init()
{
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(!sCollectorData.initialized(), "Called twice!?");

  return sCollectorData.init();
}
void
nsCycleCollector_startup()
{
  MOZ_ASSERT(sCollectorData.initialized(),
             "Forgot to call nsCycleCollector_init!");
  if (sCollectorData.get()) {
    MOZ_CRASH();
  }

  CollectorData* data = new CollectorData;
  data->mCollector = new nsCycleCollector();
  data->mRuntime = nullptr;

  sCollectorData.set(data);
}
void
nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetBeforeUnlinkCallback(aCB);
}
void
nsCycleCollector_setForgetSkippableCallback(CC_ForgetSkippableCallback aCB)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetForgetSkippableCallback(aCB);
}
void
nsCycleCollector_forgetSkippable(bool aRemoveChildlessNodes,
                                 bool aAsyncSnowWhiteFreeing)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  PROFILER_LABEL("nsCycleCollector", "forgetSkippable",
                 js::ProfileEntry::Category::CC);

  TimeLog timeLog;
  data->mCollector->ForgetSkippable(aRemoveChildlessNodes,
                                    aAsyncSnowWhiteFreeing);
  timeLog.Checkpoint("ForgetSkippable()");
}
void
nsCycleCollector_dispatchDeferredDeletion(bool aContinuation)
{
  CollectorData* data = sCollectorData.get();

  if (!data || !data->mRuntime) {
    return;
  }

  data->mRuntime->DispatchDeferredDeletion(aContinuation);
}
bool
nsCycleCollector_doDeferredDeletion()
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mRuntime);

  return data->mCollector->FreeSnowWhite(false);
}
already_AddRefed<nsICycleCollectorLogSink>
nsCycleCollector_createLogSink()
{
  nsCOMPtr<nsICycleCollectorLogSink> sink = new nsCycleCollectorLogSinkToFile();
  return sink.forget();
}
void
nsCycleCollector_collect(nsICycleCollectorListener* aManualListener)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  PROFILER_LABEL("nsCycleCollector", "collect",
                 js::ProfileEntry::Category::CC);

  SliceBudget unlimitedBudget;
  data->mCollector->Collect(ManualCC, unlimitedBudget, aManualListener);
}
void
nsCycleCollector_collectSlice(int64_t aSliceTime)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  PROFILER_LABEL("nsCycleCollector", "collectSlice",
                 js::ProfileEntry::Category::CC);

  SliceBudget budget;
  if (aSliceTime >= 0) {
    budget = SliceBudget(SliceBudget::TimeBudget(aSliceTime));
  }
  data->mCollector->Collect(SliceCC, budget, nullptr);
}
void
nsCycleCollector_collectSliceWork(int64_t aSliceWork)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  PROFILER_LABEL("nsCycleCollector", "collectSliceWork",
                 js::ProfileEntry::Category::CC);

  SliceBudget budget;
  if (aSliceWork >= 0) {
    budget = SliceBudget(SliceBudget::WorkBudget(aSliceWork));
  }
  data->mCollector->Collect(SliceCC, budget, nullptr);
}
void
nsCycleCollector_prepareForGarbageCollection()
{
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->PrepareForGarbageCollection();
}
void
nsCycleCollector_finishAnyCurrentCollection()
{
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->FinishAnyCurrentCollection();
}
void
nsCycleCollector_shutdown()
{
  CollectorData* data = sCollectorData.get();

  if (data) {
    MOZ_ASSERT(data->mCollector);
    PROFILER_LABEL("nsCycleCollector", "shutdown",
                   js::ProfileEntry::Category::CC);

    data->mCollector->Shutdown();
    data->mCollector = nullptr;
    if (!data->mRuntime) {
      delete data;
      sCollectorData.set(nullptr);