/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 * vim: set ts=8 sts=2 et sw=2 tw=80:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

#ifndef gc_Barrier_h
#define gc_Barrier_h

#include <type_traits>  // std::true_type

#include "NamespaceImports.h"

#include "gc/Cell.h"
#include "gc/GCContext.h"
#include "gc/StoreBuffer.h"
#include "js/ComparisonOperators.h"     // JS::detail::DefineComparisonOps
#include "js/experimental/TypedData.h"  // js::EnableIfABOVType
#include "js/HeapAPI.h"
#include "js/Id.h"
#include "js/RootingAPI.h"
#include "js/Value.h"
#include "util/Poison.h"
/*
 * [SMDOC] GC Barriers
 *
 * Several kinds of barrier are necessary to allow the GC to function correctly.
 * These are triggered by reading or writing to GC pointers in the heap and
 * serve to tell the collector about changes to the graph of reachable GC
 * things.
 *
 * Since it would be awkward to change every write to memory into a function
 * call, this file contains a bunch of C++ classes and templates that use
 * operator overloading to take care of barriers automatically. In most cases,
 * all that's necessary is to replace:
 *
 *     Type* field;
 *
 * with:
 *
 *     HeapPtr<Type> field;
 *
 * All heap-based GC pointers and tagged pointers must use one of these classes,
 * except in a couple of exceptional cases.
 *
 * These classes are designed to be used by the internals of the JS engine.
 * Barriers designed to be used externally are provided in js/RootingAPI.h.
 *
 * Overview
 * ========
 *
 * This file implements the following concrete classes:
 *
 * HeapPtr       General wrapper for heap-based pointers that provides pre- and
 *               post-write barriers. Most clients should use this.
 *
 * GCPtr         An optimisation of HeapPtr for objects which are only destroyed
 *               by GC finalization (this rules out use in Vector, for example).
 *
 * PreBarriered  Provides a pre-barrier but not a post-barrier. Necessary when
 *               generational GC updates are handled manually, e.g. for hash
 *               table keys that don't use StableCellHasher.
 *
 * HeapSlot      Provides pre- and post-barriers, optimised for use in JSObject
 *               slots and elements.
 *
 * WeakHeapPtr   Provides read and post-write barriers, for use with weak
 *               pointers.
 *
 * UnsafeBarePtr Provides no barriers. Don't add new uses of this unless you
 *               really know what you are doing.
 *
 * The following classes are implemented in js/RootingAPI.h (in the JS
 * namespace):
 *
 * Heap          General wrapper for external clients. Like HeapPtr but also
 *               handles cycle collector concerns. Most external clients should
 *               use this.
 *
 * Heap::Tenured Like Heap but doesn't allow nursery pointers. Allows storing
 *               flags in unused lower bits of the pointer.
 *
 * Which class to use?
 * -------------------
 *
 * Answer the following questions to decide which barrier class is right for
 * your use case:
 *
 * Is your code part of the JS engine?
 *   Yes, it's internal =>
 *     Is your pointer weak or strong?
 *       Strong =>
 *         Do you want automatic handling of nursery pointers?
 *           Yes, of course =>
 *             Can your object be destroyed outside of a GC?
 *               Yes => Use HeapPtr<T>
 *               No => Use GCPtr<T> (optimization)
 *           No, I'll do this myself =>
 *             Do you want pre-barriers so incremental marking works?
 *               Yes, of course => Use PreBarriered<T>
 *               No, and I'll fix all the bugs myself => Use UnsafeBarePtr<T>
 *       Weak => Use WeakHeapPtr<T>
 *   No, it's external =>
 *     Can your pointer refer to nursery objects?
 *       Yes => Use JS::Heap<T>
 *       Never => Use JS::Heap::Tenured<T> (optimization)
 *
 * If in doubt, use HeapPtr<T>.
 *
 * Write barriers
 * ==============
 *
 * A write barrier is a mechanism used by incremental or generational GCs to
 * ensure that every value that needs to be marked is marked. In general, the
 * write barrier should be invoked whenever a write can cause the set of things
 * traced through by the GC to change. This includes:
 *
 *   - writes to object properties
 *   - writes to array slots
 *   - writes to fields like JSObject::shape_ that we trace through
 *   - writes to fields in private data
 *   - writes to non-markable fields like JSObject::private that point to
 *     markable data
 *
 * The last category is the trickiest. Even though the private pointer does not
 * point to a GC thing, changing the private pointer may change the set of
 * objects that are traced by the GC. Therefore it needs a write barrier.
 *
 * Every barriered write should have the following form:
 *
 *   <pre-barrier>
 *   obj->field = value; // do the actual write
 *   <post-barrier>
 *
 * The pre-barrier is used for incremental GC and the post-barrier is for
 * generational GC.
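 *
 * For example, an assignment through a HeapPtr field expands to roughly the
 * following (a simplified sketch; the actual sequence is in
 * WriteBarriered<T>::pre/post and HeapPtr<T>::setUnchecked below):
 *
 *   this->pre();                 // pre-barrier: may mark the old value
 *   this->value = newValue;     // do the actual write
 *   this->post(old, newValue);  // post-barrier: may update the store buffer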
 *
 * Pre-write barrier
 * -----------------
 *
 * To understand the pre-barrier, let's consider how incremental GC works. The
 * GC itself is divided into "slices". Between each slice, JS code is allowed to
 * run. Each slice should be short so that the user doesn't notice the
 * interruptions. In our GC, the structure of the slices is as follows:
 *
 * 1. ... JS work, which leads to a request to do GC ...
 * 2. [first GC slice, which performs all root marking and (maybe) more marking]
 * 3. ... more JS work is allowed to run ...
 * 4. [GC mark slice, which runs entirely in
 *    GCRuntime::markUntilBudgetExhausted]
 * 5. ... more JS work ...
 * 6. [GC mark slice, which runs entirely in
 *    GCRuntime::markUntilBudgetExhausted]
 * 7. ... more JS work ...
 * 8. [GC marking finishes; sweeping done non-incrementally; GC is done]
 * 9. ... JS continues uninterrupted now that the GC is finished ...
 *
 * Of course, there may be a different number of slices depending on how much
 * marking is to be done.
 *
 * The danger inherent in this scheme is that the JS code in steps 3, 5, and 7
 * might change the heap in a way that causes the GC to collect an object that
 * is actually reachable. The write barrier prevents this from happening. We use
 * a variant of incremental GC called "snapshot at the beginning." This approach
 * guarantees the invariant that if an object is reachable in step 2, then we
 * will mark it eventually. The name comes from the idea that we take a
 * theoretical "snapshot" of all reachable objects in step 2; all objects in
 * that snapshot should eventually be marked. (Note that the write barrier
 * verifier code takes an actual snapshot.)
 *
 * The basic correctness invariant of a snapshot-at-the-beginning collector is
 * that any object reachable at the end of the GC (step 9) must either:
 *   (1) have been reachable at the beginning (step 2) and thus in the snapshot
 *   (2) or must have been newly allocated, in steps 3, 5, or 7.
 * To deal with case (2), any objects allocated during an incremental GC are
 * automatically marked black.
 *
 * This strategy is actually somewhat conservative: if an object becomes
 * unreachable between steps 2 and 8, it would be safe to collect it. We won't,
 * mainly for simplicity. (Also, note that the snapshot is entirely
 * theoretical. We don't actually do anything special in step 2 that we wouldn't
 * do in a non-incremental GC.)
 *
 * It's the pre-barrier's job to maintain the snapshot invariant. Consider the
 * write "obj->field = value". Let the prior value of obj->field be
 * value0. Since it's possible that value0 may have been what obj->field
 * contained in step 2, when the snapshot was taken, the barrier marks
 * value0. Note that it only does this if we're in the middle of an incremental
 * GC. Since this is rare, the cost of the write barrier is usually just an
 * extra branch.
 *
 * In practice, we implement the pre-barrier differently based on the type of
 * value0. E.g., see JSObject::preWriteBarrier, which is used if obj->field is
 * a JSObject*. It takes value0 as a parameter.
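 *
 * In pseudocode, the pre-barrier amounts to something like the following
 * (a sketch only, not the exact implementation; see Cell::preWriteBarrier):
 *
 *   if (zone->needsIncrementalBarrier()) {  // only during incremental marking
 *     mark(value0);                         // preserve the snapshot invariant
 *   }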
 *
 * Post-write barrier
 * ------------------
 *
 * For generational GC, we want to be able to quickly collect the nursery in a
 * minor collection. Part of the way this is achieved is to only mark the
 * nursery itself; tenured things, which may form the majority of the heap, are
 * not traced through or marked. This leads to the problem of what to do about
 * tenured objects that have pointers into the nursery: if such things are not
 * marked, they may be discarded while there are still live objects which
 * reference them. The solution is to maintain information about these pointers,
 * and mark their targets when we start a minor collection.
 *
 * The pointers can be thought of as edges in an object graph, and the set of
 * edges from the tenured generation into the nursery is known as the remembered
 * set. Post barriers are used to track this remembered set.
 *
 * Whenever a slot which could contain such a pointer is written, we check
 * whether the pointed-to thing is in the nursery (if storeBuffer() returns a
 * buffer). If so we add the cell into the store buffer, which is the
 * collector's representation of the remembered set. This means that when we
 * come to do a minor collection we can examine the contents of the store buffer
 * and mark any edge targets that are in the nursery.
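 *
 * In pseudocode, the post-barrier for a write of |next| over |prev| does
 * roughly the following (compare InternalBarrierMethods<Value>::postBarrier
 * below):
 *
 *   if (next is in the nursery)       add this edge to the store buffer;
 *   else if (prev was in the nursery) remove the stale edge from it;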
 *
 * Read barriers
 * =============
 *
 * Weak pointer read barrier
 * -------------------------
 *
 * Weak pointers must have a read barrier to prevent the referent from being
 * collected if it is read after the start of an incremental GC.
 *
 * The problem happens when, during an incremental GC, some code reads a weak
 * pointer and writes it somewhere on the heap that has been marked black in a
 * previous slice. Since the weak pointer will not otherwise be marked and will
 * be swept and finalized in the last slice, this will leave the pointer just
 * written dangling after the GC. To solve this, we immediately mark black all
 * weak pointers that get read between slices so that it is safe to store them
 * in an already marked part of the heap, e.g. in Rooted.
 *
 * Cycle collector read barrier
 * ----------------------------
 *
 * Heap pointers external to the engine may be marked gray. The JS API has an
 * invariant that no gray pointers may be passed, and this is maintained by a
 * read barrier that calls ExposeGCThingToActiveJS on such pointers. This is
 * implemented by JS::Heap<T> in js/RootingAPI.h.
 *
 * Implementation Details
 * ======================
 *
 * One additional note: not all object writes need to be pre-barriered. Writes
 * to newly allocated objects do not need a pre-barrier. In these cases, we use
 * the "obj->field.init(value)" method instead of "obj->field = value". We use
 * the init naming idiom in many places to signify that a field is being
 * assigned for the first time.
 *
 * This file implements the following hierarchy of classes:
 *
 * BarrieredBase             base class of all barriers
 *  |  |
 *  | WriteBarriered         base class which provides common write operations
 *  |  |  |  |  |
 *  |  |  |  | PreBarriered  provides pre-barriers only
 *  |  |  |  |
 *  |  |  | GCPtr            provides pre- and post-barriers
 *  |  |  |
 *  |  | HeapPtr             provides pre- and post-barriers; is relocatable
 *  |  |                     and deletable for use inside C++ managed memory
 *  |  |
 *  | HeapSlot               similar to GCPtr, but tailored to slots storage
 *  |
 * ReadBarriered             base class which provides common read operations
 *  |
 * WeakHeapPtr               provides read barriers only
 *
 * The barrier logic is implemented in the Cell/TenuredCell base classes,
 * which are called via:
 *
 *   WriteBarriered<T>::pre
 *    -> InternalBarrierMethods<T*>::preBarrier
 *        -> Cell::preWriteBarrier
 *    -> InternalBarrierMethods<Value>::preBarrier
 *    -> InternalBarrierMethods<jsid>::preBarrier
 *        -> InternalBarrierMethods<T*>::preBarrier
 *            -> Cell::preWriteBarrier
 *
 *   GCPtr<T>::post and HeapPtr<T>::post
 *    -> InternalBarrierMethods<T*>::postBarrier
 *        -> gc::PostWriteBarrierImpl
 *    -> InternalBarrierMethods<Value>::postBarrier
 *        -> StoreBuffer::put
 *
 * Barriers for use outside of the JS engine call into the same barrier
 * implementations at InternalBarrierMethods<T>::post via an indirect call to
 * Heap(.+)PostWriteBarrier.
 *
 * These classes are designed to be used to wrap GC thing pointers or values
 * that act like them (i.e. JS::Value and jsid). It is possible to use them for
 * other types by supplying the necessary barrier implementations but this
 * is not usually necessary and should be done with caution.
 */
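 *
 * For example, for a freshly allocated object:
 *
 *   obj->field.init(value); // first assignment: no pre-barrier needed
 *   obj->field = value2;    // subsequent writes use the barriered form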
namespace js {

class NativeObject;

namespace gc {

inline void ValueReadBarrier(const Value& v) {
  MOZ_ASSERT(v.isGCThing());
  ReadBarrierImpl(v.toGCThing());
}

inline void ValuePreWriteBarrier(const Value& v) {
  MOZ_ASSERT(v.isGCThing());
  PreWriteBarrierImpl(v.toGCThing());
}

inline void IdPreWriteBarrier(jsid id) {
  MOZ_ASSERT(id.isGCThing());
  PreWriteBarrierImpl(&id.toGCThing()->asTenured());
}

inline void CellPtrPreWriteBarrier(JS::GCCellPtr thing) {
  MOZ_ASSERT(thing);
  PreWriteBarrierImpl(thing.asCell());
}

}  // namespace gc

#ifdef DEBUG

bool CurrentThreadIsTouchingGrayThings();

bool IsMarkedBlack(JSObject* obj);

#endif

template <typename T, typename Enable = void>
struct InternalBarrierMethods {};

template <typename T>
struct InternalBarrierMethods<T*> {
  static_assert(std::is_base_of_v<gc::Cell, T>, "Expected a GC thing type");

  static bool isMarkable(const T* v) { return v != nullptr; }

  static void preBarrier(T* v) { gc::PreWriteBarrier(v); }

  static void postBarrier(T** vp, T* prev, T* next) {
    gc::PostWriteBarrier(vp, prev, next);
  }

  static void readBarrier(T* v) { gc::ReadBarrier(v); }

#ifdef DEBUG
  static void assertThingIsNotGray(T* v) { return T::assertThingIsNotGray(v); }
#endif
};

template <>
struct InternalBarrierMethods<Value> {
  static bool isMarkable(const Value& v) { return v.isGCThing(); }

  static void preBarrier(const Value& v) {
    if (v.isGCThing()) {
      gc::ValuePreWriteBarrier(v);
    }
  }

  static MOZ_ALWAYS_INLINE void postBarrier(Value* vp, const Value& prev,
                                            const Value& next) {
    MOZ_ASSERT(!CurrentThreadIsIonCompiling());
    MOZ_ASSERT(vp);

    // If the target needs an entry, add it.
    js::gc::StoreBuffer* sb;
    if (next.isGCThing() && (sb = next.toGCThing()->storeBuffer())) {
      // If we know that the prev has already inserted an entry, we can
      // skip doing the lookup to add the new entry. Note that we cannot
      // safely assert the presence of the entry because it may have been
      // added via a different store buffer.
      if (prev.isGCThing() && prev.toGCThing()->storeBuffer()) {
        return;
      }
      sb->putValue(vp);
      return;
    }
    // Remove the prev entry if the new value does not need it.
    if (prev.isGCThing() && (sb = prev.toGCThing()->storeBuffer())) {
      sb->unputValue(vp);
    }
  }

  static void readBarrier(const Value& v) {
    if (v.isGCThing()) {
      gc::ValueReadBarrier(v);
    }
  }

#ifdef DEBUG
  static void assertThingIsNotGray(const Value& v) {
    JS::AssertValueIsNotGray(v);
  }
#endif
};

template <>
struct InternalBarrierMethods<jsid> {
  static bool isMarkable(jsid id) { return id.isGCThing(); }
  static void preBarrier(jsid id) {
    if (id.isGCThing()) {
      gc::IdPreWriteBarrier(id);
    }
  }
  static void postBarrier(jsid* idp, jsid prev, jsid next) {}
#ifdef DEBUG
  static void assertThingIsNotGray(jsid id) { JS::AssertIdIsNotGray(id); }
#endif
};

// Specialization for JS::ArrayBufferOrView subclasses.
template <typename T>
struct InternalBarrierMethods<T, EnableIfABOVType<T>> {
  using BM = BarrierMethods<T>;

  static bool isMarkable(const T& thing) { return bool(thing); }
  static void preBarrier(const T& thing) {
    gc::PreWriteBarrier(thing.asObjectUnbarriered());
  }
  static void postBarrier(T* tp, const T& prev, const T& next) {
    BM::postWriteBarrier(tp, prev, next);
  }
  static void readBarrier(const T& thing) { BM::readBarrier(thing); }
#ifdef DEBUG
  static void assertThingIsNotGray(const T& thing) {
    JSObject* obj = thing.asObjectUnbarriered();
    if (obj) {
      JS::AssertValueIsNotGray(JS::ObjectValue(*obj));
    }
  }
#endif
};

template <typename T>
static inline void AssertTargetIsNotGray(const T& v) {
#ifdef DEBUG
  if (!CurrentThreadIsTouchingGrayThings()) {
    InternalBarrierMethods<T>::assertThingIsNotGray(v);
  }
#endif
}

// Base class of all barrier types.
//
// This is marked non-memmovable since post barriers added by derived classes
// can add pointers to class instances to the store buffer.
template <typename T>
class MOZ_NON_MEMMOVABLE BarrieredBase {
 protected:
  // BarrieredBase is not directly instantiable.
  explicit BarrieredBase(const T& v) : value(v) {}

  // BarrieredBase subclasses cannot be copy constructed by default.
  BarrieredBase(const BarrieredBase<T>& other) = default;

  // Storage for all barrier classes. |value| must be a GC thing reference
  // type: either a direct pointer to a GC thing or a supported tagged
  // pointer that can reference GC things, such as JS::Value or jsid. Nested
  // barrier types are NOT supported. See assertTypeConstraints.
  T value;

 public:
  using ElementType = T;

  // Note: this is public because C++ cannot friend to a specific template
  // instantiation. Friending to the generic template leads to a number of
  // unintended consequences, including template resolution ambiguity and a
  // circular dependency with Tracing.h.
  T* unbarrieredAddress() const { return const_cast<T*>(&value); }
};

// Base class for barriered pointer types that intercept only writes.
template <class T>
class WriteBarriered : public BarrieredBase<T>,
                       public WrappedPtrOperations<T, WriteBarriered<T>> {
 protected:
  using BarrieredBase<T>::value;

  // WriteBarriered is not directly instantiable.
  explicit WriteBarriered(const T& v) : BarrieredBase<T>(v) {}

 public:
  DECLARE_POINTER_CONSTREF_OPS(T);

  // Use this if the automatic coercion to T isn't working.
  const T& get() const { return this->value; }

  // Use this if you want to change the value without invoking barriers.
  // Obviously this is dangerous unless you know the barrier is not needed.
  void unbarrieredSet(const T& v) { this->value = v; }

  // For users who need to manually barrier the raw types.
  static void preWriteBarrier(const T& v) {
    InternalBarrierMethods<T>::preBarrier(v);
  }

 protected:
  void pre() { InternalBarrierMethods<T>::preBarrier(this->value); }
  MOZ_ALWAYS_INLINE void post(const T& prev, const T& next) {
    InternalBarrierMethods<T>::postBarrier(&this->value, prev, next);
  }
};

#define DECLARE_POINTER_ASSIGN_AND_MOVE_OPS(Wrapper, T) \
  DECLARE_POINTER_ASSIGN_OPS(Wrapper, T)                \
  Wrapper<T>& operator=(Wrapper<T>&& other) noexcept {  \
    setUnchecked(other.release());                      \
    return *this;                                       \
  }

/*
 * PreBarriered only automatically handles pre-barriers. Post-barriers must be
 * manually implemented when using this class. GCPtr and HeapPtr should be used
 * in all cases that do not require explicit low-level control of moving
 * behavior.
 *
 * This class is useful for example for HashMap keys where automatically
 * updating a moved nursery pointer would break the hash table.
 */
template <class T>
class PreBarriered : public WriteBarriered<T> {
 public:
  PreBarriered() : WriteBarriered<T>(JS::SafelyInitialized<T>::create()) {}

  /*
   * Allow implicit construction for use in generic contexts.
   */
  MOZ_IMPLICIT PreBarriered(const T& v) : WriteBarriered<T>(v) {}

  explicit PreBarriered(const PreBarriered<T>& other)
      : WriteBarriered<T>(other.value) {}

  PreBarriered(PreBarriered<T>&& other) noexcept
      : WriteBarriered<T>(other.release()) {}

  ~PreBarriered() { this->pre(); }

  void init(const T& v) { this->value = v; }

  /* Use to set the pointer to nullptr. */
  void clear() { set(JS::SafelyInitialized<T>::create()); }

  DECLARE_POINTER_ASSIGN_AND_MOVE_OPS(PreBarriered, T);

  void set(const T& v) {
    AssertTargetIsNotGray(v);
    setUnchecked(v);
  }

 private:
  void setUnchecked(const T& v) {
    this->pre();
    this->value = v;
  }

  T release() {
    T tmp = this->value;
    this->value = JS::SafelyInitialized<T>::create();
    return tmp;
  }
};

}  // namespace js

namespace JS::detail {
template <typename T>
struct DefineComparisonOps<js::PreBarriered<T>> : std::true_type {
  static const T& get(const js::PreBarriered<T>& v) { return v.get(); }
};
}  // namespace JS::detail

namespace js {

/*
 * A pre- and post-barriered heap pointer, for use inside the JS engine.
 *
 * It must only be stored in memory that has GC lifetime. GCPtr must not be
 * used in contexts where it may be implicitly moved or deleted, e.g. most
 * containers.
 *
 * The post-barriers implemented by this class are faster than those
 * implemented by js::HeapPtr<T> or JS::Heap<T> at the cost of not
 * automatically handling deletion or movement.
 */
template <class T>
class GCPtr : public WriteBarriered<T> {
 public:
  GCPtr() : WriteBarriered<T>(JS::SafelyInitialized<T>::create()) {}

  explicit GCPtr(const T& v) : WriteBarriered<T>(v) {
    this->post(JS::SafelyInitialized<T>::create(), v);
  }

  explicit GCPtr(const GCPtr<T>& v) : WriteBarriered<T>(v) {
    this->post(JS::SafelyInitialized<T>::create(), v);
  }

#ifdef DEBUG
  ~GCPtr() {
    // No barriers are necessary as this only happens when the GC is sweeping.
    //
    // If this assertion fails you may need to make the containing object use a
    // HeapPtr instead, as this can be deleted from outside of GC.
    MOZ_ASSERT(CurrentThreadIsGCSweeping() || CurrentThreadIsGCFinalizing());

    Poison(this, JS_FREED_HEAP_PTR_PATTERN, sizeof(*this),
           MemCheckKind::MakeNoAccess);
  }
#endif

  void init(const T& v) {
    AssertTargetIsNotGray(v);
    this->value = v;
    this->post(JS::SafelyInitialized<T>::create(), v);
  }

  DECLARE_POINTER_ASSIGN_OPS(GCPtr, T);

  void set(const T& v) {
    AssertTargetIsNotGray(v);
    setUnchecked(v);
  }

 private:
  void setUnchecked(const T& v) {
    this->pre();
    T tmp = this->value;
    this->value = v;
    this->post(tmp, this->value);
  }

  /*
   * Unlike HeapPtr<T>, GCPtr<T> must be managed with GC lifetimes.
   * Specifically, the memory used by the pointer itself must be live until
   * at least the next minor GC. For that reason, move semantics are invalid
   * and are deleted here. Please note that not all containers support move
   * semantics, so this does not completely prevent invalid uses.
   */
  GCPtr(GCPtr<T>&&) = delete;
  GCPtr<T>& operator=(GCPtr<T>&&) = delete;
};

}  // namespace js

namespace JS::detail {
template <typename T>
struct DefineComparisonOps<js::GCPtr<T>> : std::true_type {
  static const T& get(const js::GCPtr<T>& v) { return v.get(); }
};
}  // namespace JS::detail

namespace js {

/*
 * A pre- and post-barriered heap pointer, for use inside the JS engine. These
 * heap pointers can be stored in C++ containers like GCVector and GCHashMap.
 *
 * The GC sometimes keeps pointers to pointers to GC things --- for example, to
 * track references into the nursery. However, C++ containers like GCVector and
 * GCHashMap usually reserve the right to relocate their elements any time
 * they're modified, invalidating all pointers to the elements. HeapPtr
 * has a move constructor which knows how to keep the GC up to date if it is
 * moved to a new location.
 *
 * However, because of this additional communication with the GC, HeapPtr
 * is somewhat slower, so it should only be used in contexts where this ability
 * is necessary.
 *
 * Obviously, JSObjects, JSStrings, and the like get tenured and compacted, so
 * whatever pointers they contain get relocated, in the sense used here.
 * However, since the GC itself is moving those values, it takes care of its
 * internal pointers to those pointers itself. HeapPtr is only necessary
 * when the relocation would otherwise occur without the GC's knowledge.
 */
template <class T>
class HeapPtr : public WriteBarriered<T> {
 public:
  HeapPtr() : WriteBarriered<T>(JS::SafelyInitialized<T>::create()) {}

  // Implicitly adding barriers is a reasonable default.
  MOZ_IMPLICIT HeapPtr(const T& v) : WriteBarriered<T>(v) {
    this->post(JS::SafelyInitialized<T>::create(), this->value);
  }

  MOZ_IMPLICIT HeapPtr(const HeapPtr<T>& other) : WriteBarriered<T>(other) {
    this->post(JS::SafelyInitialized<T>::create(), this->value);
  }

  HeapPtr(HeapPtr<T>&& other) noexcept : WriteBarriered<T>(other.release()) {
    this->post(JS::SafelyInitialized<T>::create(), this->value);
  }

  ~HeapPtr() {
    this->pre();
    this->post(this->value, JS::SafelyInitialized<T>::create());
  }

  void init(const T& v) {
    MOZ_ASSERT(this->value == JS::SafelyInitialized<T>::create());
    AssertTargetIsNotGray(v);
    this->value = v;
    this->post(JS::SafelyInitialized<T>::create(), this->value);
  }

  DECLARE_POINTER_ASSIGN_AND_MOVE_OPS(HeapPtr, T);

  void set(const T& v) {
    AssertTargetIsNotGray(v);
    setUnchecked(v);
  }

  /* Make this friend so it can access pre() and post(). */
  template <class T1, class T2>
  friend inline void BarrieredSetPair(Zone* zone, HeapPtr<T1*>& v1, T1* val1,
                                      HeapPtr<T2*>& v2, T2* val2);

 protected:
  void setUnchecked(const T& v) {
    this->pre();
    postBarrieredSet(v);
  }

  void postBarrieredSet(const T& v) {
    T tmp = this->value;
    this->value = v;
    this->post(tmp, this->value);
  }

  T release() {
    T tmp = this->value;
    postBarrieredSet(JS::SafelyInitialized<T>::create());
    return tmp;
  }
};

/*
 * A pre-barriered heap pointer, for use inside the JS engine.
 *
 * Similar to GCPtr, but used for a pointer to a malloc-allocated structure
 * containing GC thing pointers.
 *
 * It must only be stored in memory that has GC lifetime. It must not be used in
 * contexts where it may be implicitly moved or deleted, e.g. most containers.
 *
 * A post-barrier is unnecessary since malloc-allocated structures cannot be in
 * the nursery.
 */
template <class T>
class GCStructPtr : public BarrieredBase<T> {
 public:
  // This is sometimes used to hold tagged pointers.
  static constexpr uintptr_t MaxTaggedPointer = 0x2;

  GCStructPtr() : BarrieredBase<T>(JS::SafelyInitialized<T>::create()) {}

  // Implicitly adding barriers is a reasonable default.
  MOZ_IMPLICIT GCStructPtr(const T& v) : BarrieredBase<T>(v) {}

  GCStructPtr(const GCStructPtr<T>& other) : BarrieredBase<T>(other) {}

  GCStructPtr(GCStructPtr<T>&& other) noexcept
      : BarrieredBase<T>(other.release()) {}

  ~GCStructPtr() {
    // No barriers are necessary as this only happens when the GC is sweeping.
    MOZ_ASSERT_IF(isTraceable(),
                  CurrentThreadIsGCSweeping() || CurrentThreadIsGCFinalizing());
  }

  void init(const T& v) {
    MOZ_ASSERT(this->get() == JS::SafelyInitialized<T>::create());
    AssertTargetIsNotGray(v);
    this->value = v;
  }

  void set(JS::Zone* zone, const T& v) {
    pre(zone);
    this->value = v;
  }

  T get() const { return this->value; }
  operator T() const { return get(); }
  T operator->() const { return get(); }

 protected:
  bool isTraceable() const { return uintptr_t(get()) > MaxTaggedPointer; }

  void pre(JS::Zone* zone) {
    if (isTraceable()) {
      PreWriteBarrier(zone, get());
    }
  }
};

}  // namespace js

namespace JS::detail {
template <typename T>
struct DefineComparisonOps<js::HeapPtr<T>> : std::true_type {
  static const T& get(const js::HeapPtr<T>& v) { return v.get(); }
};
}  // namespace JS::detail

namespace js {

// Base class for barriered pointer types that intercept reads and writes.
template <typename T>
class ReadBarriered : public BarrieredBase<T> {
 protected:
  // ReadBarriered is not directly instantiable.
  explicit ReadBarriered(const T& v) : BarrieredBase<T>(v) {}

  void read() const { InternalBarrierMethods<T>::readBarrier(this->value); }
  void post(const T& prev, const T& next) {
    InternalBarrierMethods<T>::postBarrier(&this->value, prev, next);
  }
};

// Incremental GC requires that weak pointers have read barriers. See the block
// comment at the top of Barrier.h for a complete discussion of why.
//
// Note that this class also has post-barriers, so is safe to use with nursery
// pointers. However, when used as a hashtable key, care must still be taken to
// insert manual post-barriers on the table for rekeying if the key is based in
// any way on the address of the object.
template <typename T>
class WeakHeapPtr : public ReadBarriered<T>,
                    public WrappedPtrOperations<T, WeakHeapPtr<T>> {
 protected:
  using ReadBarriered<T>::value;

 public:
  WeakHeapPtr() : ReadBarriered<T>(JS::SafelyInitialized<T>::create()) {}

  // It is okay to add barriers implicitly.
  MOZ_IMPLICIT WeakHeapPtr(const T& v) : ReadBarriered<T>(v) {
    this->post(JS::SafelyInitialized<T>::create(), v);
  }

  // The copy constructor creates a new weak edge but the wrapped pointer does
  // not escape, so no read barrier is necessary.
  explicit WeakHeapPtr(const WeakHeapPtr& other) : ReadBarriered<T>(other) {
    this->post(JS::SafelyInitialized<T>::create(), value);
  }

  // Move retains the lifetime status of the source edge, so does not fire
  // the read barrier of the defunct edge.
  WeakHeapPtr(WeakHeapPtr&& other) noexcept
      : ReadBarriered<T>(other.release()) {
    this->post(JS::SafelyInitialized<T>::create(), value);
  }

  ~WeakHeapPtr() {
    this->post(this->value, JS::SafelyInitialized<T>::create());
  }

  WeakHeapPtr& operator=(const WeakHeapPtr& v) {
    AssertTargetIsNotGray(v.value);
    T prior = this->value;
    this->value = v.value;
    this->post(prior, v.value);
    return *this;
  }

  const T& get() const {
    if (InternalBarrierMethods<T>::isMarkable(this->value)) {
      this->read();
    }
    return this->value;
  }

  const T& unbarrieredGet() const { return this->value; }

  explicit operator bool() const { return bool(this->value); }

  operator const T&() const { return get(); }

  const T& operator->() const { return get(); }

  void set(const T& v) {
    AssertTargetIsNotGray(v);
    setUnchecked(v);
  }

  void unbarrieredSet(const T& v) {
    AssertTargetIsNotGray(v);
    this->value = v;
  }

 private:
  void setUnchecked(const T& v) {
    T tmp = this->value;
    this->value = v;
    this->post(tmp, v);
  }

  T release() {
    T tmp = value;
    set(JS::SafelyInitialized<T>::create());
    return tmp;
  }
};

// A wrapper for a bare pointer, with no barriers.
//
// This should only be necessary in a limited number of cases. Please don't add
// more uses of this if at all possible.
template <typename T>
class UnsafeBarePtr : public BarrieredBase<T> {
 public:
  UnsafeBarePtr() : BarrieredBase<T>(JS::SafelyInitialized<T>::create()) {}
  MOZ_IMPLICIT UnsafeBarePtr(T v) : BarrieredBase<T>(v) {}
  const T& get() const { return this->value; }
  void set(T newValue) { this->value = newValue; }
  DECLARE_POINTER_CONSTREF_OPS(T);
};

}  // namespace js

namespace JS::detail {
template <typename T>
struct DefineComparisonOps<js::WeakHeapPtr<T>> : std::true_type {
  static const T& get(const js::WeakHeapPtr<T>& v) {
    return v.unbarrieredGet();
  }
};
}  // namespace JS::detail

namespace js {

// A pre- and post-barriered Value that is specialized to be aware that it
// resides in a slots or elements vector. This allows it to be relocated in
// memory, but with substantially less overhead than a HeapPtr.
class HeapSlot : public WriteBarriered<Value> {
 public:
  enum Kind { Slot = 0, Element = 1 };

  void init(NativeObject* owner, Kind kind, uint32_t slot, const Value& v) {
    value = v;
    post(owner, kind, slot, v);
  }

  void initAsUndefined() { value.setUndefined(); }

  void destroy() { pre(); }

  void setUndefinedUnchecked() {
    pre();
    value.setUndefined();
  }

#ifdef DEBUG
  bool preconditionForSet(NativeObject* owner, Kind kind, uint32_t slot) const;
  void assertPreconditionForPostWriteBarrier(NativeObject* obj, Kind kind,
                                             uint32_t slot,
                                             const Value& target) const;
#endif

  MOZ_ALWAYS_INLINE void set(NativeObject* owner, Kind kind, uint32_t slot,
                             const Value& v) {
    MOZ_ASSERT(preconditionForSet(owner, kind, slot));
    pre();
    value = v;
    post(owner, kind, slot, v);
  }

 private:
  void post(NativeObject* owner, Kind kind, uint32_t slot,
            const Value& target) {
#ifdef DEBUG
    assertPreconditionForPostWriteBarrier(owner, kind, slot, target);
#endif
    if (this->value.isGCThing()) {
      gc::Cell* cell = this->value.toGCThing();
      if (cell->storeBuffer()) {
        cell->storeBuffer()->putSlot(owner, kind, slot, 1);
      }
    }
  }
};

}  // namespace js

namespace JS::detail {
template <>
struct DefineComparisonOps<js::HeapSlot> : std::true_type {
  static const Value& get(const js::HeapSlot& v) { return v.get(); }
};
}  // namespace JS::detail

namespace js {

class HeapSlotArray {
  HeapSlot* array;

 public:
  explicit HeapSlotArray(HeapSlot* array) : array(array) {}

  HeapSlot* begin() const { return array; }

  operator const Value*() const {
    static_assert(sizeof(GCPtr<Value>) == sizeof(Value));
    static_assert(sizeof(HeapSlot) == sizeof(Value));
    return reinterpret_cast<const Value*>(array);
  }
  operator HeapSlot*() const { return begin(); }

  HeapSlotArray operator+(int offset) const {
    return HeapSlotArray(array + offset);
  }
  HeapSlotArray operator+(uint32_t offset) const {
    return HeapSlotArray(array + offset);
  }
};

/*
 * This is a hack for RegExpStatics::updateFromMatch. It allows us to do two
 * barriers with only one branch to check if we're in an incremental GC.
 */
template <class T1, class T2>
static inline void BarrieredSetPair(Zone* zone, HeapPtr<T1*>& v1, T1* val1,
                                    HeapPtr<T2*>& v2, T2* val2) {
  AssertTargetIsNotGray(val1);
  AssertTargetIsNotGray(val2);
  if (T1::needPreWriteBarrier(zone)) {
    v1.pre();
    v2.pre();
  }
  v1.postBarrieredSet(val1);
  v2.postBarrieredSet(val2);
}

/*
 * ImmutableTenuredPtr is designed for one very narrow case: replacing
 * immutable raw pointers to GC-managed things, implicitly converting to a
 * handle type for ease of use. Pointers encapsulated by this type must:
 *
 *   be immutable (no incremental write barriers),
 *   never point into the nursery (no generational write barriers), and
 *   be traced via MarkRuntime (we use fromMarkedLocation).
 *
 * In short: you *really* need to know what you're doing before you use this
 * class!
 */
template <typename T>
class MOZ_HEAP_CLASS ImmutableTenuredPtr {
  T value;

 public:
  operator T() const { return value; }
  T operator->() const { return value; }

  // `ImmutableTenuredPtr<T>` is implicitly convertible to `Handle<T>`.
  //
  // In case you need to convert to `Handle<U>` where `U` is a base class of
  // `T`, convert this to `Handle<T>` with `toHandle()` and then use the
  // implicit conversion from `Handle<T>` to `Handle<U>`.
  operator Handle<T>() const { return toHandle(); }
  Handle<T> toHandle() const { return Handle<T>::fromMarkedLocation(&value); }

  void init(T ptr) {
    MOZ_ASSERT(ptr->isTenured());
    AssertTargetIsNotGray(ptr);
    value = ptr;
  }

  T get() const { return value; }
  const T* address() { return &value; }
};
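
// Illustrative sketch (hypothetical use, not from this file): a pointer that
// is set once at startup to a tenured, runtime-lifetime GC thing can be held
// as an ImmutableTenuredPtr and passed directly where a Handle is expected:
//
//   ImmutableTenuredPtr<JSAtom*> emptyString;
//   emptyString.init(atom);                 // asserts atom->isTenured()
//   Handle<JSAtom*> handle = emptyString;   // no explicit rooting required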

// Template to remove any barrier wrapper and get the underlying type.
template <typename T>
struct RemoveBarrier {
  using Type = T;
};
template <typename T>
struct RemoveBarrier<HeapPtr<T>> {
  using Type = T;
};
template <typename T>
struct RemoveBarrier<GCPtr<T>> {
  using Type = T;
};
template <typename T>
struct RemoveBarrier<PreBarriered<T>> {
  using Type = T;
};
template <typename T>
struct RemoveBarrier<WeakHeapPtr<T>> {
  using Type = T;
};
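
// For example, RemoveBarrier maps each barrier wrapper back to the bare type
// and leaves unwrapped types unchanged:
//
//   static_assert(
//       std::is_same_v<RemoveBarrier<HeapPtr<JSObject*>>::Type, JSObject*>);
//   static_assert(std::is_same_v<RemoveBarrier<JSObject*>::Type, JSObject*>);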

#if MOZ_IS_GCC
template struct JS_PUBLIC_API StableCellHasher<JSObject*>;
#endif

template <typename T>
struct StableCellHasher<PreBarriered<T>> {
  using Key = PreBarriered<T>;
  using Lookup = T;

  static bool maybeGetHash(const Lookup& l, HashNumber* hashOut) {
    return StableCellHasher<T>::maybeGetHash(l, hashOut);
  }
  static bool ensureHash(const Lookup& l, HashNumber* hashOut) {
    return StableCellHasher<T>::ensureHash(l, hashOut);
  }
  static HashNumber hash(const Lookup& l) {
    return StableCellHasher<T>::hash(l);
  }
  static bool match(const Key& k, const Lookup& l) {
    return StableCellHasher<T>::match(k, l);
  }
};

template <typename T>
struct StableCellHasher<HeapPtr<T>> {
  using Key = HeapPtr<T>;
  using Lookup = T;

  static bool maybeGetHash(const Lookup& l, HashNumber* hashOut) {
    return StableCellHasher<T>::maybeGetHash(l, hashOut);
  }
  static bool ensureHash(const Lookup& l, HashNumber* hashOut) {
    return StableCellHasher<T>::ensureHash(l, hashOut);
  }
  static HashNumber hash(const Lookup& l) {
    return StableCellHasher<T>::hash(l);
  }
  static bool match(const Key& k, const Lookup& l) {
    return StableCellHasher<T>::match(k, l);
  }
};

template <typename T>
struct StableCellHasher<WeakHeapPtr<T>> {
  using Key = WeakHeapPtr<T>;
  using Lookup = T;

  static bool maybeGetHash(const Lookup& l, HashNumber* hashOut) {
    return StableCellHasher<T>::maybeGetHash(l, hashOut);
  }
  static bool ensureHash(const Lookup& l, HashNumber* hashOut) {
    return StableCellHasher<T>::ensureHash(l, hashOut);
  }
  static HashNumber hash(const Lookup& l) {
    return StableCellHasher<T>::hash(l);
  }
  static bool match(const Key& k, const Lookup& l) {
    return StableCellHasher<T>::match(k.unbarrieredGet(), l);
  }
};

/* Useful for hashtables with a HeapPtr as key. */
template <class T>
struct HeapPtrHasher {
  using Key = HeapPtr<T>;
  using Lookup = T;

  static HashNumber hash(Lookup obj) { return DefaultHasher<T>::hash(obj); }
  static bool match(const Key& k, Lookup l) { return k.get() == l; }
  static void rekey(Key& k, const Key& newKey) { k.unbarrieredSet(newKey); }
};

template <class T>
struct PreBarrieredHasher {
  using Key = PreBarriered<T>;
  using Lookup = T;

  static HashNumber hash(Lookup obj) { return DefaultHasher<T>::hash(obj); }
  static bool match(const Key& k, Lookup l) { return k.get() == l; }
  static void rekey(Key& k, const Key& newKey) { k.unbarrieredSet(newKey); }
};

/* Useful for hashtables with a WeakHeapPtr as key. */
template <class T>
struct WeakHeapPtrHasher {
  using Key = WeakHeapPtr<T>;
  using Lookup = T;

  static HashNumber hash(Lookup obj) { return DefaultHasher<T>::hash(obj); }
  static bool match(const Key& k, Lookup l) { return k.unbarrieredGet() == l; }
  static void rekey(Key& k, const Key& newKey) {
    k.set(newKey.unbarrieredGet());
  }
};

template <class T>
struct UnsafeBarePtrHasher {
  using Key = UnsafeBarePtr<T>;
  using Lookup = T;

  static HashNumber hash(const Lookup& l) { return DefaultHasher<T>::hash(l); }
  static bool match(const Key& k, Lookup l) { return k.get() == l; }
  static void rekey(Key& k, const Key& newKey) { k.set(newKey.get()); }
};
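
// Illustrative sketch (the set alias is hypothetical): these hasher policies
// let barriered pointers act as hash table keys with the same hashing
// behavior as the underlying bare pointer, e.g.:
//
//   using ObjectSet =
//       HashSet<WeakHeapPtr<JSObject*>, WeakHeapPtrHasher<JSObject*>>;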

}  // namespace js

namespace mozilla {

template <class T>
struct DefaultHasher<js::HeapPtr<T>> : js::HeapPtrHasher<T> {};

template <class T>
struct DefaultHasher<js::GCPtr<T>> {
  // Not implemented. GCPtr can't be used as a hash table key because it has a
  // post barrier but doesn't support relocation.
};

template <class T>
struct DefaultHasher<js::PreBarriered<T>> : js::PreBarrieredHasher<T> {};

template <class T>
struct DefaultHasher<js::WeakHeapPtr<T>> : js::WeakHeapPtrHasher<T> {};

template <class T>
struct DefaultHasher<js::UnsafeBarePtr<T>> : js::UnsafeBarePtrHasher<T> {};

}  // namespace mozilla

#endif /* gc_Barrier_h */