// Copyright (c) 2006-2008 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_TRACKED_OBJECTS_H_
#define BASE_TRACKED_OBJECTS_H_

#include <map>
#include <string>
#include <vector>

#include "base/lock.h"
#include "base/task.h"
#include "base/thread_local_storage.h"
#include "base/tracked.h"

// TrackedObjects provides a database of stats about objects (generally Tasks)
// that are tracked. Tracking means their birth, death, duration, birth thread,
// death thread, and birth place are recorded. This data is carefully spread
// across a series of objects so that the counts and times can be rapidly
// updated without (usually) having to lock the data, and hence there is usually
// very little contention caused by the tracking. The data can be viewed via
// the about:objects URL, with a variety of sorting and filtering choices.

// These classes serve as the basis of a profiler of sorts for the Tasks
// system. As a result, design decisions were made to maximize speed, by
// minimizing recurring allocation/deallocation, lock contention and data
// copying. In the "stable" state, which is reached relatively quickly, there
// is no separate marginal allocation cost associated with construction or
// destruction of tracked objects, no locks are generally employed, and probably
// the largest computational cost is associated with obtaining start and stop
// times for instances as they are created and destroyed. The introduction of
// worker threads had a slight impact on this approach, and required use of some
// locks when accessing data from the worker threads.

// The following describes the lifecycle of tracking an instance.

// First off, when the instance is created, the FROM_HERE macro is expanded
// to specify the birth place (file, line, function) where the instance was
// created. That data is used to create a transient Location instance
// encapsulating the above triple of information. The strings (like __FILE__)
// are passed around by reference, with the assumption that they are static, and
// will never go away. This ensures that the strings can be dealt with as atoms
// with great efficiency (i.e., copying of strings is never needed, and
// comparisons for equality can be based on pointer comparisons).

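// As an illustrative sketch (the posting API comes from base/task.h and the
// message loop code, not this file):
//
//   message_loop->PostTask(FROM_HERE, new SomeTask());
//
// Here FROM_HERE expands, at the call site, into a Location built from
// __FUNCTION__, __FILE__, and __LINE__.
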
// Next, a Births instance is created for use ONLY on the thread where this
// instance was created. That Births instance records (in a base class
// BirthOnThread) references to the static data provided in a Location instance,
// as well as a pointer specifying the thread on which the birth takes place.
// Hence there is at most one Births instance for each Location on each thread.
// The derived Births class contains slots for recording statistics about all
// instances born at the same location. Statistics currently include only the
// count of instances constructed.
// Since the base class BirthOnThread contains only constant data, it can be
// freely accessed by any thread at any time (i.e., only the statistic needs to
// be handled carefully, and it is ONLY read or written by the birth thread).

// Having now either constructed or found the Births instance described above, a
// pointer to the Births instance is then embedded in a base class of the
// instance we're tracking (usually a Task). This fact alone is very useful in
// debugging, when there is a question of where an instance came from. In
// addition, the birth time is also embedded in the base class Tracked (see
// tracked.h), and used to later evaluate the lifetime duration.
// As a result of the above embedding, we can (for any tracked instance) find
// out its location of birth, and thread of birth, without using any locks, as
// all that data is constant across the life of the process.

// The amount of memory used in the above data structures depends on how many
// threads there are, and how many Locations of construction there are.
// Fortunately, we don't use memory that is the product of those two counts, but
// rather we only need one Births instance for each thread that constructs an
// instance at a Location. In many cases, instances (such as Tasks) are only
// created on one thread, so the memory utilization is actually fairly
// restrained.

// Lastly, when an instance is deleted, the final tallies of statistics are
// carefully accumulated. That tallying writes into slots (members) in a
// collection of DeathData instances. For each birth place Location that is
// destroyed on a thread, there is a DeathData instance to record the additional
// death count, as well as accumulate the lifetime duration of the instance as
// it is destroyed (dies). By maintaining a single place to aggregate this
// addition *only* for the given thread, we avoid the need to lock such
// DeathData instances.

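// That lifecycle can be sketched, illustratively, in terms of the ThreadData
// methods declared below:
//
//   // On the birth (construction) thread:
//   Births* births = ThreadData::current()->TallyABirth(FROM_HERE);
//
//   // Later, on whatever thread the instance is destroyed:
//   ThreadData::current()->TallyADeath(*births, lifetime_duration);
//
// In practice these calls are made by the tracking machinery in the Tracked
// base class (see tracked.h), not directly by client code.
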
// With the above lifecycle description complete, the major remaining detail is
// explaining how each thread maintains a list of DeathData instances, and of
// Births instances, and is able to avoid additional (redundant/unnecessary)
// allocations.

// Each thread maintains a list of data items specific to that thread in a
// ThreadData instance (for that specific thread only). The two critical items
// are lists of DeathData and Births instances. These lists are maintained in
// STL maps, which are indexed by Location. As noted earlier, we can compare
// locations very efficiently as we consider the underlying data (file,
// function, line) to be atoms, and hence pointer comparison is used rather than
// (slow) string comparisons.

// To provide a mechanism for iterating over all "known threads," which means
// threads that have recorded a birth or a death, we create a singly linked list
// of ThreadData instances. Each such instance maintains a pointer to the next
// one. A static member of ThreadData provides a pointer to the first_ item on
// this global list, and access to that first_ item requires the use of a lock_.
// When a new ThreadData instance is added to the global list, it is prepended,
// which ensures that any prior acquisition of the list is valid (i.e., the
// holder can iterate over it without fear of it changing, or the necessity of
// using an additional lock). Iterations are actually pretty rare (used
// primarily for cleanup, or snapshotting data for display), so this lock has
// very little global performance impact.

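// An illustrative sketch of such an iteration (ThreadData::first() is the
// locked entry point; traversal of the prepend-only list is then safe):
//
//   for (ThreadData* thread_data = ThreadData::first();
//        thread_data != NULL;
//        thread_data = thread_data->next()) {
//     // ... snapshot or reset this thread's data ...
//   }
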
// The above description tries to define the high performance (run time)
// portions of these classes. After gathering statistics, calls instigated
// by visiting about:objects will assemble and aggregate data for display. The
// following data structures are used for producing such displays. They are
// not performance critical, and their only major constraint is that they should
// be able to run concurrently with ongoing augmentation of the birth and death
// data.

// For a given birth location, information about births is spread across data
// structures that are asynchronously changing on various threads. For display
// purposes, we need to construct Snapshot instances for each combination of
// birth thread, death thread, and location, along with the count of such
// lifetimes. We gather such data into Snapshot instances, so that such
// instances can be sorted and aggregated (and remain frozen during our
// processing). Snapshot instances use pointers to constant portions of the
// birth and death data structures, but have local (frozen) copies of the actual
// statistics (birth count, durations, etc.).

// A DataCollector is a container object that holds a set of Snapshots. A
// DataCollector can be passed from thread to thread, and each thread
// contributes to it by adding or updating Snapshot instances. DataCollector
// instances are thread safe containers which are passed to various threads to
// accumulate all Snapshot instances.

// After an array of Snapshot instances is collected into a DataCollector, it
// needs to be sorted, and possibly aggregated (example: how many threads are in
// a specific consecutive set of Snapshots? What was the total birth count for
// that set? etc.). Aggregation instances collect running sums of any set of
// snapshot instances, and are used to print sub-totals in an about:objects
// page.

// TODO(jar): I need to store DataCollections, and provide facilities for taking
// the difference between two gathered DataCollections. For now, I'm just
// adding a hack that Reset()'s to zero all counts and stats. This is also
// done in a slightly thread-unsafe fashion, as the resetting is done
// asynchronously relative to ongoing updates, and worse yet, some data fields
// are 64-bit quantities, and are not atomically accessed (reset or incremented
// etc.). For basic profiling, this will work "most of the time," and should be
// sufficient... but storing away DataCollections is the "right way" to do this.

class MessageLoop;

namespace tracked_objects {

//------------------------------------------------------------------------------
// For a specific thread, and a specific birth place, the collection of all
// death info (with tallies for each death thread, to prevent access conflicts).
class ThreadData;
class BirthOnThread {
 public:
  explicit BirthOnThread(const Location& location);

  const Location location() const { return location_; }
  const ThreadData* birth_thread() const { return birth_thread_; }

 private:
  // File/lineno of birth. This defines the essence of the type, as the context
  // of the birth (construction) often tells what the item is for. This field
  // is const, and hence safe to access from any thread.
  const Location location_;

  // The thread that records births into this object. Only this thread is
  // allowed to access birth_count_ (which changes over time).
  const ThreadData* birth_thread_;  // The thread this birth took place on.

  DISALLOW_COPY_AND_ASSIGN(BirthOnThread);
};

//------------------------------------------------------------------------------
// A class for accumulating counts of births (without bothering with a map<>).

class Births : public BirthOnThread {
 public:
  explicit Births(const Location& location);

  int birth_count() const { return birth_count_; }

  // When we have a birth, we update the count for this birthplace.
  void RecordBirth() { ++birth_count_; }

  // When a birthplace is changed (updated), we need to decrement the counter
  // for the old instance.
  void ForgetBirth() { --birth_count_; }  // We corrected a birth place.

  // Hack to quickly reset all counts to zero.
  void Clear() { birth_count_ = 0; }

 private:
  // The number of births on this thread for our location_.
  int birth_count_;

  DISALLOW_COPY_AND_ASSIGN(Births);
};

//------------------------------------------------------------------------------
// Basic info summarizing multiple destructions of an object with a single
// birthplace (fixed Location). Used both on specific threads, and also used
// in snapshots when integrating assembled data.

class DeathData {
 public:
  // Default initializer.
  DeathData() : count_(0), square_duration_(0) {}

  // When deaths have not yet taken place, and we gather data from all the
  // threads, we create DeathData stats that tally the number of births without
  // a corresponding death.
  explicit DeathData(int count) : count_(count), square_duration_(0) {}

  void RecordDeath(const base::TimeDelta& duration);

  // Metrics accessors.
  int count() const { return count_; }
  base::TimeDelta life_duration() const { return life_duration_; }
  int64 square_duration() const { return square_duration_; }
  int AverageMsDuration() const;
  double StandardDeviation() const;

  // Accumulate metrics from other into this.
  void AddDeathData(const DeathData& other);

  // Simple print of internal state.
  void Write(std::string* output) const;

  // Reset all tallies to zero.
  void Clear();

 private:
  int count_;                      // Number of destructions.
  base::TimeDelta life_duration_;  // Sum of all lifetime durations.
  int64 square_duration_;          // Sum of squared durations, in milliseconds.
};

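// Note: count_, life_duration_, and square_duration_ are sufficient statistics
// for both the mean and the standard deviation of the lifetimes, so no
// per-instance durations need to be stored. With n = count_, S = the sum of
// durations in ms, and Q = square_duration_, presumably:
//
//   mean     = S / n
//   variance = Q / n - (S / n)^2
//   stddev   = sqrt(variance)
//
// which is how StandardDeviation() can be computed from the tallies above.
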
//------------------------------------------------------------------------------
// A temporary collection of data that can be sorted and summarized. It is
// gathered (carefully) from many threads. Instances are held in arrays and
// processed, filtered, and rendered.
// The source of this data was collected on many threads, and is asynchronously
// changing. The data in this instance is not asynchronously changing.

class Snapshot {
 public:
  // When snapshotting a full life cycle set (birth-to-death), use this:
  Snapshot(const BirthOnThread& birth_on_thread, const ThreadData& death_thread,
           const DeathData& death_data);

  // When snapshotting a birth, with no death yet, use this:
  Snapshot(const BirthOnThread& birth_on_thread, int count);

  const ThreadData* birth_thread() const { return birth_->birth_thread(); }
  const Location location() const { return birth_->location(); }
  const BirthOnThread& birth() const { return *birth_; }
  const ThreadData* death_thread() const { return death_thread_; }
  const DeathData& death_data() const { return death_data_; }
  const std::string DeathThreadName() const;

  int count() const { return death_data_.count(); }
  base::TimeDelta life_duration() const { return death_data_.life_duration(); }
  int64 square_duration() const { return death_data_.square_duration(); }
  int AverageMsDuration() const { return death_data_.AverageMsDuration(); }

  void Write(std::string* output) const;

  void Add(const Snapshot& other);

 private:
  const BirthOnThread* birth_;  // Includes Location and birth_thread.
  const ThreadData* death_thread_;
  DeathData death_data_;
};

//------------------------------------------------------------------------------
// DataCollector is a container class for Snapshot and BirthOnThread count
// items. It protects the gathering under locks, so that it could be called via
// PostTask on any thread, or passed to all the target threads in parallel.

class DataCollector {
 public:
  typedef std::vector<Snapshot> Collection;

  // Construct with a count of how many threads should contribute. This helps
  // us determine (in the async case) when we are done with all contributions.
  DataCollector();

  // Add all stats from the indicated thread into our arrays. This function is
  // mutex protected, and *could* be called from any thread (although the
  // current implementation serializes calls to Append).
  void Append(const ThreadData& thread_data);

  // After the accumulation phase, the following accessor is used to process the
  // data.
  Collection* collection();

  // After collection of death data is complete, we can add entries for all the
  // remaining living objects.
  void AddListOfLivingObjects();

 private:
  // This instance may be provided to several threads to contribute data. The
  // following counter tracks how many more threads will contribute. When it is
  // zero, then all asynchronous contributions are complete, and locked access
  // is no longer needed.
  int count_of_contributing_threads_;

  // The array that we collect data into.
  Collection collection_;

  // The total number of births recorded at each location for which we have not
  // seen a death count.
  typedef std::map<const BirthOnThread*, int> BirthCount;
  BirthCount global_birth_count_;

  Lock accumulation_lock_;  // Protects access during accumulation phase.

  DISALLOW_COPY_AND_ASSIGN(DataCollector);
};

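// An illustrative sketch of the accumulation phase, in the simple synchronous
// (single-threaded) case, using only names declared in this file:
//
//   DataCollector collector;
//   for (ThreadData* thread_data = ThreadData::first();
//        thread_data != NULL;
//        thread_data = thread_data->next()) {
//     collector.Append(*thread_data);
//   }
//   collector.AddListOfLivingObjects();
//   DataCollector::Collection* collection = collector.collection();
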
//------------------------------------------------------------------------------
// Aggregation contains summaries (totals and subtotals) of groups of Snapshot
// instances to provide printing of these collections on a single line.

class Aggregation : public DeathData {
 public:
  Aggregation() : birth_count_(0) {}

  void AddDeathSnapshot(const Snapshot& snapshot);
  void AddBirths(const Births& births);
  void AddBirth(const BirthOnThread& birth);
  void AddBirthPlace(const Location& location);
  void Write(std::string* output) const;
  void Clear();

 private:
  int birth_count_;
  std::map<std::string, int> birth_files_;
  std::map<Location, int> locations_;
  std::map<const ThreadData*, int> birth_threads_;
  DeathData death_data_;
  std::map<const ThreadData*, int> death_threads_;

  DISALLOW_COPY_AND_ASSIGN(Aggregation);
};

//------------------------------------------------------------------------------
// Comparator is a class that supports the comparison of Snapshot instances.
// An instance is actually a list of chained Comparators, that can provide for
// arbitrary ordering. The path portion of an about:objects URL is translated
// into such a chain, which is then used to order Snapshot instances in a
// vector. It orders them into groups (for aggregation), and can also order
// instances within the groups (for detailed rendering of the instances in an
// aggregation).

class Comparator {
 public:
  // Selector enum is the token identifier for each parsed keyword, most of
  // which specify a sort order.
  // Since it is not meaningful to sort more than once on a specific key, we
  // use bitfields to accumulate what we have sorted on so far.
  enum Selector {
    // Sort orders.
    NIL = 0,
    BIRTH_THREAD = 1,
    DEATH_THREAD = 2,
    BIRTH_FILE = 4,
    BIRTH_FUNCTION = 8,
    BIRTH_LINE = 16,
    COUNT = 32,
    AVERAGE_DURATION = 64,
    TOTAL_DURATION = 128,

    // Immediate action keywords.
    RESET_ALL_DATA = -1,
  };

  explicit Comparator();

  // Clear() resets the comparator to a NIL selector, and recursively deletes
  // any tiebreaker_ entries. NOTE: We can't use a standard destructor, because
  // the sort algorithm makes copies of this object, and then deletes them,
  // which would cause problems (either we'd make expensive deep copies, or we'd
  // do more than one delete on a tiebreaker_).
  void Clear();

  // The less() operator for sorting the array via std::sort().
  bool operator()(const Snapshot& left, const Snapshot& right) const;

  void Sort(DataCollector::Collection* collection) const;

  // Check to see if the items are sort equivalents (should be aggregated).
  bool Equivalent(const Snapshot& left, const Snapshot& right) const;

  // Check to see if all required fields are present in the given sample.
  bool Acceptable(const Snapshot& sample) const;

  // A comparator can be refined by specifying what to do if the selected basis
  // for comparison is insufficient to establish an ordering. This call adds
  // the indicated attribute as the new "least significant" basis of comparison.
  void SetTiebreaker(Selector selector, const std::string& required);

  // Indicate if this instance is set up to sort by the given Selector, thereby
  // putting that information in the SortGrouping, so it is not needed in each
  // printed line.
  bool IsGroupedBy(Selector selector) const;

  // Using the tiebreakers as set above, we mostly get an ordering, with
  // equivalent groups. If those groups are displayed (rather than just being
  // aggregated), then the following is used to order them (within the group).
  void SetSubgroupTiebreaker(Selector selector);

  // Translate a keyword and restriction in URL path to a selector for sorting.
  void ParseKeyphrase(const std::string& key_phrase);

  // Parse a query in an about:objects URL to decide on sort ordering.
  bool ParseQuery(const std::string& query);

  // Output a header line that can be used to indicate what items will be
  // collected in the group. It lists all (potentially) tested attributes and
  // their values (in the sample item).
  bool WriteSortGrouping(const Snapshot& sample, std::string* output) const;

  // Output a sample, with SortGroup details not displayed.
  void WriteSnapshot(const Snapshot& sample, std::string* output) const;

 private:
  // The selector directs this instance to compare based on the specified
  // members of the tested elements.
  enum Selector selector_;

  // For filtering into acceptable and unacceptable snapshot instances, the
  // following is required to be a substring of the selector_ field.
  std::string required_;

  // If this instance can't decide on an ordering, we can consult a tie-breaker
  // which may have a different basis of comparison.
  Comparator* tiebreaker_;

  // We OR together all the selectors we sort on (not counting sub-group
  // selectors), so that we can tell if we've decided to group on any given
  // criteria.
  int combined_selectors_;

  // Some tiebreakers are for subgroup ordering, and not for basic ordering (in
  // preparation for aggregation). The subgroup tiebreakers are not consulted
  // when deciding if two items are in equivalent groups. This flag tells us
  // to ignore the tiebreaker when doing Equivalent() testing.
  bool use_tiebreaker_for_sort_only_;
};

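// An illustrative sketch of building a chained ordering by hand ("collector"
// is a DataCollector from above; the "required" filter strings are left empty
// here):
//
//   Comparator comparator;
//   comparator.SetTiebreaker(Comparator::BIRTH_THREAD, "");
//   comparator.SetTiebreaker(Comparator::COUNT, "");
//   comparator.Sort(collector.collection());
//
// In normal use the chain is instead constructed by ParseQuery() from the
// about:objects URL.
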
//------------------------------------------------------------------------------
// For each thread, we have a ThreadData that stores all tracking info generated
// on this thread. This prevents the need for locking as data accumulates.

class ThreadData {
 public:
  typedef std::map<Location, Births*> BirthMap;
  typedef std::map<const Births*, DeathData> DeathMap;

  ThreadData();

  // Using Thread Local Store, find the current instance for collecting data.
  // If an instance does not exist, construct one (and remember it for use on
  // this thread).
  // If shutdown has already started, and we don't yet have an instance, then
  // return null.
  static ThreadData* current();

  // For a given about:objects URL, develop resulting HTML, and append to
  // output.
  static void WriteHTML(const std::string& query, std::string* output);

  // For a given accumulated array of results, use the comparator to sort and
  // subtotal, writing the results to the output.
  static void WriteHTMLTotalAndSubtotals(
      const DataCollector::Collection& match_array,
      const Comparator& comparator, std::string* output);

  // In this thread's data, record a new birth.
  Births* TallyABirth(const Location& location);

  // Find a place to record a death on this thread.
  void TallyADeath(const Births& lifetimes, const base::TimeDelta& duration);

  // (Thread safe) Get start of list of instances.
  static ThreadData* first();
  // Iterate through the null terminated list of instances.
  ThreadData* next() const { return next_; }

  MessageLoop* message_loop() const { return message_loop_; }
  const std::string ThreadName() const;

  // Using our lock, make a copy of the specified maps. These calls may arrive
  // from non-local threads, and are used to quickly scan data from all threads
  // in order to build an HTML page for about:objects.
  void SnapshotBirthMap(BirthMap* output) const;
  void SnapshotDeathMap(DeathMap* output) const;

  // Hack: asynchronously clear all birth counts and death tallies data values
  // in all ThreadData instances. The numerical (zeroing) part is done without
  // the use of locks or atomic exchanges, and may (for int64 values) produce
  // bogus counts VERY rarely.
  static void ResetAllThreadData();

  // Using our lock to protect the iteration, clear all birth and death data.
  void Reset();

  // Using the "known list of threads" gathered during births and deaths, the
  // following attempts to run the given function on all such threads. Note
  // that the function can only be run on threads which have a message loop!
  static void RunOnAllThreads(void (*Func)());

  // Set internal status_ to either become ACTIVE, or later, to be SHUTDOWN,
  // based on the argument being true or false respectively.
  // If tracking is not compiled in, this function will return false.
  static bool StartTracking(bool status);
  static bool IsActive();

#ifdef OS_WIN
  // WARNING: ONLY call this function when all MessageLoops are still intact for
  // all registered threads. If you call it later, you will crash.
  // Note: You don't need to call it at all, and you can wait till you are
  // single threaded (again) to do the cleanup via
  // ShutdownSingleThreadedCleanup().
  // Start the teardown (shutdown) process in a multi-thread mode by disabling
  // further additions to the thread database on all threads. First it makes a
  // local (locked) change to prevent any more threads from registering. Then
  // it posts a Task to all registered threads to be sure they are aware that no
  // more accumulation can take place.
  static void ShutdownMultiThreadTracking();
#endif

  // WARNING: ONLY call this function when you are running single threaded
  // (again) and all message loops and threads have terminated. Until that
  // point some threads may still attempt to write into our data structures.
  // Delete recursively all data structures, starting with the list of
  // ThreadData instances.
  static void ShutdownSingleThreadedCleanup();

 private:
  // Current allowable states of the tracking system. The states always
  // proceed towards SHUTDOWN, and never go backwards.
  enum Status {
    UNINITIALIZED,
    ACTIVE,
    SHUTDOWN,
  };

  // A class used to count down, which is accessed by several threads. This is
  // used to make sure RunOnAllThreads() actually runs a task on the expected
  // count of threads.
  class ThreadSafeDownCounter {
   public:
    // Constructor sets the count, once and for all.
    explicit ThreadSafeDownCounter(size_t count);

    // Decrement the count, and return true if we hit zero. Also delete this
    // instance automatically when we hit zero.
    bool LastCaller();

   private:
    size_t remaining_count_;
    Lock lock_;  // Protects access to remaining_count_.
  };

#ifdef OS_WIN
  // A Task class that runs a supplied static method, and checks to see if this
  // is the last task instance (on the last thread) that will run the method.
  // If this is the last run, then the supplied event is signaled.
  class RunTheStatic : public Task {
   public:
    typedef void (*FunctionPointer)();
    RunTheStatic(FunctionPointer function,
                 HANDLE completion_handle,
                 ThreadSafeDownCounter* counter);
    // Run the supplied static method, and optionally set the event.
    void Run();

   private:
    FunctionPointer function_;
    HANDLE completion_handle_;
    // Make sure enough tasks are called before completion is signaled.
    ThreadSafeDownCounter* counter_;

    DISALLOW_COPY_AND_ASSIGN(RunTheStatic);
  };
#endif

  // Each registered thread is called to set status_ to SHUTDOWN.
  // This is done redundantly on every registered thread because it is not
  // protected by a mutex. Running on all threads guarantees we get the
  // notification into the memory cache of all possible threads.
  static void ShutdownDisablingFurtherTracking();

  // We use thread local store to identify which ThreadData to interact with.
  static TLSSlot tls_index_;

  // Link to the most recently created instance (starts a null terminated list).
  static ThreadData* first_;
  // Protection for access to first_.
  static Lock list_lock_;

  // We set status_ to SHUTDOWN when we shut down the tracking service. This
  // setting is redundantly established by all participating threads so that we
  // are *guaranteed* (without locking) that all threads can "see" the status
  // and avoid additional calls into the service.
  static Status status_;

  // Link to next instance (null terminated list). Used to globally track all
  // registered instances (corresponds to all registered threads where we keep
  // data).
  ThreadData* next_;

  // The message loop where tasks needing to access this instance's private data
  // should be directed. Since some threads have no message loop, some
  // instances have data that can't be (safely) modified externally.
  MessageLoop* message_loop_;

  // A map used on each thread to keep track of Births on this thread.
  // This map should only be accessed on the thread it was constructed on.
  // When a snapshot is needed, this structure can be locked in place for the
  // duration of the snapshotting activity.
  BirthMap birth_map_;

  // Similar to birth_map_, this records information about deaths of tracked
  // instances (i.e., when a tracked instance was destroyed on this thread).
  // It is locked before changing, and hence other threads may access it by
  // locking before reading it.
  DeathMap death_map_;

  // Lock to protect *some* access to BirthMap and DeathMap. The maps are
  // regularly read and written on this thread, but may only be read from other
  // threads. To support this, we acquire this lock if we are writing from this
  // thread, or reading from another thread. For reading from this thread we
  // don't need a lock, as there is no potential for a conflict since the
  // writing is only done from this thread.
  mutable Lock lock_;

  DISALLOW_COPY_AND_ASSIGN(ThreadData);
};

//------------------------------------------------------------------------------
// Provide a simple way to start global tracking, and to tear down tracking
// when done. Note that construction and destruction of this object must be
// done when running in single threaded mode (before spawning a lot of threads
// for construction, and after shutting down all the threads for destruction).

// To prevent grabbing thread local store resources time and again if someone
// chooses to try to re-run the browser many times, we maintain global state and
// only allow the tracking system to be started up at most once, and shut down
// at most once. See bug 31344 for an example.

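// Illustratively, a single long-lived instance is all that is needed to turn
// tracking on for the life of the process, for example:
//
//   static AutoTracking auto_tracking;  // Constructor calls StartTracking().
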
class AutoTracking {
 public:
  AutoTracking() {
    if (state_ != kNeverBeenRun)
      return;
    ThreadData::StartTracking(true);
    state_ = kRunning;
  }

  ~AutoTracking() {
#ifndef NDEBUG
    if (state_ != kRunning)
      return;
    // Don't call these in a Release build: they just waste time.
    // The following should ONLY be called when in single threaded mode. It is
    // unsafe to do this cleanup if other threads are still active.
    // It is also very unnecessary, so I'm only doing this in debug to satisfy
    // purify (if we need to!).
    ThreadData::ShutdownSingleThreadedCleanup();
    state_ = kTornDownAndStopped;
#endif
  }

 private:
  enum State {
    kNeverBeenRun,
    kRunning,
    kTornDownAndStopped,
  };
  static State state_;

  DISALLOW_COPY_AND_ASSIGN(AutoTracking);
};

}  // namespace tracked_objects

#endif  // BASE_TRACKED_OBJECTS_H_