// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_TRACKED_OBJECTS_H_
#define BASE_TRACKED_OBJECTS_H_

#include <map>
#include <set>
#include <stack>
#include <string>
#include <utility>
#include <vector>

#include "base/base_export.h"
#include "base/basictypes.h"
#include "base/containers/hash_tables.h"
#include "base/gtest_prod_util.h"
#include "base/lazy_instance.h"
#include "base/location.h"
#include "base/profiler/alternate_timer.h"
#include "base/profiler/tracked_time.h"
#include "base/synchronization/lock.h"
#include "base/threading/thread_local_storage.h"
namespace base {
struct TrackingInfo;
}

// TrackedObjects provides a database of stats about objects (generally Tasks)
// that are tracked. Tracking means their birth, death, duration, birth thread,
// death thread, and birth place are recorded. This data is carefully spread
// across a series of objects so that the counts and times can be rapidly
// updated without (usually) having to lock the data, and hence there is usually
// very little contention caused by the tracking. The data can be viewed via
// the about:profiler URL, with a variety of sorting and filtering choices.
//
// These classes serve as the basis of a profiler of sorts for the Tasks system.
// As a result, design decisions were made to maximize speed, by minimizing
// recurring allocation/deallocation, lock contention and data copying. In the
// "stable" state, which is reached relatively quickly, there is no separate
// marginal allocation cost associated with construction or destruction of
// tracked objects, no locks are generally employed, and probably the largest
// computational cost is associated with obtaining start and stop times for
// instances as they are created and destroyed.
//
// The following describes the life cycle of tracking an instance.
//
// First off, when the instance is created, the FROM_HERE macro is expanded
// to specify the birth place (file, line, function) where the instance was
// created. That data is used to create a transient Location instance
// encapsulating the above triple of information. The strings (like __FILE__)
// are passed around by reference, with the assumption that they are static, and
// will never go away. This ensures that the strings can be dealt with as atoms
// with great efficiency (i.e., copying of strings is never needed, and
// comparisons for equality can be based on pointer comparisons).
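//
// A minimal sketch of that first step (illustrative only; the real FROM_HERE
// macro and the Location accessors are defined in base/location.h):
//
//   // The macro captures the call site as a Location value:
//   tracked_objects::Location here = FROM_HERE;
//   // here.file_name(), here.function_name() and here.line_number() now
//   // describe the birth place as static-string atoms plus a line number.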
//
// Next, a Births instance is created for use ONLY on the thread where this
// instance was created. That Births instance records (in a base class
// BirthOnThread) references to the static data provided in a Location instance,
// as well as a pointer specifying the thread on which the birth takes place.
// Hence there is at most one Births instance for each Location on each thread.
// The derived Births class contains slots for recording statistics about all
// instances born at the same location. Statistics currently include only the
// count of instances constructed.
//
// Since the base class BirthOnThread contains only constant data, it can be
// freely accessed by any thread at any time (i.e., only the statistic needs to
// be handled carefully, and stats are updated exclusively on the birth thread).
//
// For Tasks, having now either constructed or found the Births instance
// described above, a pointer to the Births instance is then recorded into the
// PendingTask structure in MessageLoop. This fact alone is very useful in
// debugging, when there is a question of where an instance came from. In
// addition, the birth time is also recorded and used to later evaluate the
// lifetime duration of the whole Task. As a result of the above embedding, we
// can find out a Task's location of birth, and thread of birth, without using
// any locks, as all that data is constant across the life of the process.
//
// The above work *could* also be done for any other object as well by calling
// TallyABirthIfActive() and TallyRunOnNamedThreadIfTracking() as appropriate.
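//
// A hedged sketch of such manual instrumentation, using the worker-thread
// variant TallyRunOnWorkerThreadIfTracking() for simplicity (DoTrackedWork()
// is a hypothetical stand-in for the object's actual work; the ThreadData and
// TaskStopwatch APIs used here are declared below):
//
//   tracked_objects::Births* birth =
//       tracked_objects::ThreadData::TallyABirthIfActive(FROM_HERE);
//   tracked_objects::TrackedTime post_time =
//       tracked_objects::ThreadData::Now();
//   ...
//   tracked_objects::TaskStopwatch stopwatch;
//   stopwatch.Start();
//   DoTrackedWork();
//   stopwatch.Stop();
//   tracked_objects::ThreadData::TallyRunOnWorkerThreadIfTracking(
//       birth, post_time, stopwatch);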
//
// The amount of memory used in the above data structures depends on how many
// threads there are, and how many Locations of construction there are.
// Fortunately, we don't use memory that is the product of those two counts, but
// rather we only need one Births instance for each thread that constructs an
// instance at a Location. In many cases, instances are only created on one
// thread, so the memory utilization is actually fairly restrained.
//
// Lastly, when an instance is deleted, the final tallies of statistics are
// carefully accumulated. That tallying writes into slots (members) in a
// collection of DeathData instances. For each birth place Location that is
// destroyed on a thread, there is a DeathData instance to record the additional
// death count, as well as accumulate the run-time and queue-time durations for
// the instance as it is destroyed (dies). By maintaining a single place to
// aggregate this running sum *only* for the given thread, we avoid the need to
// lock such DeathData instances (i.e., these accumulated stats in a DeathData
// instance are exclusively updated by the singular owning thread).
//
// With the above life cycle description complete, the major remaining detail
// is explaining how each thread maintains a list of DeathData instances, and
// of Births instances, and is able to avoid additional (redundant/unnecessary)
// allocations.
//
// Each thread maintains a list of data items specific to that thread in a
// ThreadData instance (for that specific thread only). The two critical items
// are lists of DeathData and Births instances. These lists are maintained in
// STL maps, which are indexed by Location. As noted earlier, we can compare
// locations very efficiently as we consider the underlying data (file,
// function, line) to be atoms, and hence pointer comparison is used rather than
// (slow) string comparisons.
//
// To provide a mechanism for iterating over all "known threads," which means
// threads that have recorded a birth or a death, we create a singly linked list
// of ThreadData instances. Each such instance maintains a pointer to the next
// one. A static member of ThreadData provides a pointer to the first item on
// this global list, and access via that all_thread_data_list_head_ item
// requires the use of the list_lock_.
// When a new ThreadData instance is added to the global list, it is prepended,
// which ensures that any prior acquisition of the list is valid (i.e., the
// holder can iterate over it without fear of it changing, or the necessity of
// using an additional lock). Iterations are actually pretty rare (used
// primarily for cleanup, or snapshotting data for display), so this lock has
// very little global performance impact.
//
// The above description tries to define the high performance (run time)
// portions of these classes. After gathering statistics, calls instigated
// by visiting about:profiler will assemble and aggregate data for display. The
// following data structures are used for producing such displays. They are
// not performance critical, and their only major constraint is that they should
// be able to run concurrently with ongoing augmentation of the birth and death
// data.
//
// This header also exports a collection of classes that provide "snapshotted"
// representations of the core tracked_objects:: classes. These snapshotted
// representations are designed for safe transmission of the tracked_objects::
// data across process boundaries. Each consists of:
// (1) a default constructor, to support the IPC serialization macros,
// (2) a constructor that extracts data from the type being snapshotted, and
// (3) the snapshotted data.
//
// For a given birth location, information about births is spread across data
// structures that are asynchronously changing on various threads. For
// serialization and display purposes, we need to construct TaskSnapshot
// instances for each combination of birth thread, death thread, and location,
// along with the count of such lifetimes. We gather such data into
// TaskSnapshot instances, so that such instances can be sorted and
// aggregated (and remain frozen during our processing).
//
// The ProcessDataSnapshot struct is a serialized representation of the list
// of ThreadData objects for a process. It holds a set of TaskSnapshots
// and tracks parent/child relationships for the executed tasks. The statistics
// in a snapshot are gathered asynchronously relative to their ongoing updates.
// It is possible, though highly unlikely, that stats could be incorrectly
// recorded by this process (all data is held in 32 bit ints, but we are not
// atomically collecting all data, so we could have a count that does not, for
// example, match with the number of durations we accumulated). The advantage
// to having fast (non-atomic) updates of the data outweighs the minimal risk of
// a singular corrupt statistic snapshot (only the snapshot could be corrupt,
// not the underlying and ongoing statistic). In contrast, pointer data that
// is accessed during snapshotting is completely invariant, and hence is
// perfectly acquired (i.e., no potential corruption, and no risk of a bad
// memory reference).
//
// TODO(jar): We can implement a Snapshot system that *tries* to grab the
// snapshots on the source threads *when* they have MessageLoops available
// (worker threads don't have message loops generally, and hence gathering from
// them will continue to be asynchronous). We had an implementation of this in
// the past, but the difficulty is dealing with message loops being terminated.
// We can *try* to spam the available threads via some message loop proxy to
// achieve this feat, and it *might* be valuable when we are collecting data
// for upload via UMA (where correctness of data may be more significant than
// for a single screen of about:profiler).
//
// TODO(jar): We should support (optionally) the recording of parent-child
// relationships for tasks. This should be done by detecting what tasks are
// Born during the running of a parent task. The resulting data can be used by
// a smarter profiler to aggregate the cost of a series of child tasks into
// the ancestor task. It can also be used to illuminate what child or parent is
// related to each task.
//
// TODO(jar): We need to store DataCollections, and provide facilities for
// taking the difference between two gathered DataCollections. For now, we're
// just adding a hack that Reset()s to zero all counts and stats. This is also
// done in a slightly thread-unsafe fashion, as the resetting is done
// asynchronously relative to ongoing updates (but all data is 32 bit in size).
// For basic profiling, this will work "most of the time," and should be
// sufficient... but storing away DataCollections is the "right way" to do this.
// We'll accomplish this via JavaScript storage of snapshots, and then we'll
// remove the Reset() methods. We may also need a short-term-max value in
// DeathData that is reset (as synchronously as possible) during each snapshot.
// This will facilitate displaying a max value for each snapshot period.

namespace tracked_objects {

//------------------------------------------------------------------------------
// For a specific thread, and a specific birth place, the collection of all
// death info (with tallies for each death thread, to prevent access conflicts).
class ThreadData;
class BASE_EXPORT BirthOnThread {
 public:
  BirthOnThread(const Location& location, const ThreadData& current);

  const Location location() const { return location_; }
  const ThreadData* birth_thread() const { return birth_thread_; }

 private:
  // File/lineno of birth. This defines the essence of the task, as the context
  // of the birth (construction) often tells what the item is for. This field
  // is const, and hence safe to access from any thread.
  const Location location_;

  // The thread that records births into this object. Only this thread is
  // allowed to update birth_count_ (which changes over time).
  const ThreadData* const birth_thread_;

  DISALLOW_COPY_AND_ASSIGN(BirthOnThread);
};

//------------------------------------------------------------------------------
// A "snapshotted" representation of the BirthOnThread class.

struct BASE_EXPORT BirthOnThreadSnapshot {
  BirthOnThreadSnapshot();
  explicit BirthOnThreadSnapshot(const BirthOnThread& birth);
  ~BirthOnThreadSnapshot();

  LocationSnapshot location;
  std::string thread_name;
};

//------------------------------------------------------------------------------
// A class for accumulating counts of births (without bothering with a map<>).

class BASE_EXPORT Births: public BirthOnThread {
 public:
  Births(const Location& location, const ThreadData& current);

  int birth_count() const;

  // When we have a birth we update the count for this birthplace.
  void RecordBirth();

 private:
  // The number of births on this thread for our location_.
  int birth_count_;

  DISALLOW_COPY_AND_ASSIGN(Births);
};

//------------------------------------------------------------------------------
// Basic info summarizing multiple destructions of a tracked object with a
// single birthplace (fixed Location). Used both on specific threads, and also
// in snapshots when integrating assembled data.

class BASE_EXPORT DeathData {
 public:
  // Default initializer.
  DeathData();

  // When deaths have not yet taken place, and we gather data from all the
  // threads, we create DeathData stats that tally the number of births without
  // a corresponding death.
  explicit DeathData(int count);

  // Update stats for a task destruction (death) that had a Run() time of
  // |run_duration|, and has had a queueing delay of |queue_duration|.
  void RecordDeath(const int32 queue_duration,
                   const int32 run_duration,
                   const uint32 random_number);

  // Metrics accessors, used only for serialization and in tests.
  int count() const;
  int32 run_duration_sum() const;
  int32 run_duration_max() const;
  int32 run_duration_sample() const;
  int32 queue_duration_sum() const;
  int32 queue_duration_max() const;
  int32 queue_duration_sample() const;

  // Reset all tallies to zero. This is used as a hack on realtime data.
  void Clear();

 private:
  // Members are ordered from most regularly read and updated, to least
  // frequently used. This might help a bit with cache lines.
  // Number of runs seen (divisor for calculating averages).
  int count_;
  // Basic tallies, used to compute averages.
  int32 run_duration_sum_;
  int32 queue_duration_sum_;
  // Max values, used by local visualization routines. These are often read,
  // but rarely updated.
  int32 run_duration_max_;
  int32 queue_duration_max_;
  // Samples, used by crowd sourcing gatherers. These are almost never read,
  // and rarely updated.
  int32 run_duration_sample_;
  int32 queue_duration_sample_;
};

//------------------------------------------------------------------------------
// A "snapshotted" representation of the DeathData class.

struct BASE_EXPORT DeathDataSnapshot {
  DeathDataSnapshot();
  explicit DeathDataSnapshot(const DeathData& death_data);
  ~DeathDataSnapshot();

  int count;
  int32 run_duration_sum;
  int32 run_duration_max;
  int32 run_duration_sample;
  int32 queue_duration_sum;
  int32 queue_duration_max;
  int32 queue_duration_sample;
};

//------------------------------------------------------------------------------
// A temporary collection of data that can be sorted and summarized. It is
// gathered (carefully) from many threads. Instances are held in arrays and
// processed, filtered, and rendered.
// The source of this data was collected on many threads, and is asynchronously
// changing. The data in this instance is not asynchronously changing.

struct BASE_EXPORT TaskSnapshot {
  TaskSnapshot();
  TaskSnapshot(const BirthOnThread& birth,
               const DeathData& death_data,
               const std::string& death_thread_name);
  ~TaskSnapshot();

  BirthOnThreadSnapshot birth;
  DeathDataSnapshot death_data;
  std::string death_thread_name;
};

//------------------------------------------------------------------------------
// For each thread, we have a ThreadData that stores all tracking info generated
// on this thread. This prevents the need for locking as data accumulates.
// We use ThreadLocalStorage to quickly identify the current ThreadData context.
// We also have a linked list of ThreadData instances, and that list is used to
// harvest data from all existing instances.

struct ProcessDataSnapshot;
class BASE_EXPORT TaskStopwatch;

class BASE_EXPORT ThreadData {
 public:
  // Current allowable states of the tracking system. The states can vary
  // between the PROFILING_* states and DEACTIVATED, but can never go back to
  // UNINITIALIZED.
  enum Status {
    UNINITIALIZED,              // Pristine, link-time state before running.
    DORMANT_DURING_TESTS,       // Only used during testing.
    DEACTIVATED,                // No longer recording profiling.
    PROFILING_ACTIVE,           // Recording profiles (no parent-child links).
    PROFILING_CHILDREN_ACTIVE,  // Fully active, recording parent-child links.
    STATUS_LAST = PROFILING_CHILDREN_ACTIVE
  };

  typedef base::hash_map<Location, Births*, Location::Hash> BirthMap;
  typedef std::map<const Births*, DeathData> DeathMap;
  typedef std::pair<const Births*, const Births*> ParentChildPair;
  typedef std::set<ParentChildPair> ParentChildSet;
  typedef std::stack<const Births*> ParentStack;

  // Initialize the current thread context with a new instance of ThreadData.
  // This is used by all threads that have names, and should be explicitly
  // set *before* any births on the threads have taken place. It is generally
  // only used by the message loop, which has a well defined thread name.
  static void InitializeThreadContext(const std::string& suggested_name);
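
  // A hedged usage sketch ("CrBrowserMain" is only an illustrative name; real
  // callers pass the name of the thread being set up, before it records any
  // births):
  //
  //   tracked_objects::ThreadData::InitializeThreadContext("CrBrowserMain");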

  // Using Thread Local Store, find the current instance for collecting data.
  // If an instance does not exist, construct one (and remember it for use on
  // this thread).
  // This may return NULL if the system is disabled for any reason.
  static ThreadData* Get();

  // Fills |process_data| with all the recursive results in our process.
  static void Snapshot(ProcessDataSnapshot* process_data);
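
  // A minimal usage sketch (assumes profiling was already enabled, e.g. via
  // InitializeAndSetTrackingStatus(PROFILING_ACTIVE), declared below):
  //
  //   tracked_objects::ProcessDataSnapshot snapshot;
  //   tracked_objects::ThreadData::Snapshot(&snapshot);
  //   // snapshot.tasks now holds one TaskSnapshot per tracked combination of
  //   // birth thread, death thread, and location.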

  // Finds (or creates) a place to count births from the given location in this
  // thread, and increments that tally.
  // TallyABirthIfActive will return NULL if the birth cannot be tallied.
  static Births* TallyABirthIfActive(const Location& location);

  // Records the end of a timed run of an object. The |completed_task| contains
  // a pointer to a Births, the time_posted, and a delayed_start_time if any.
  // The |stopwatch| indicates when we started to perform the run of the task,
  // and its end time was just obtained by a call to Now() (just after the task
  // finished). The delayed_start_time is non-null for tasks that were posted
  // as delayed tasks, and it indicates when the task should have run (i.e.,
  // when it should have posted out of the timer queue, and into the work
  // queue).
  static void TallyRunOnNamedThreadIfTracking(
      const base::TrackingInfo& completed_task,
      const TaskStopwatch& stopwatch);

  // Records the end of a timed run of an object. The |birth| is the record for
  // the instance, the |time_posted| records that instant, which is presumed to
  // be when the task was posted into a queue to run on a worker thread.
  // The |stopwatch| records when the worker thread started to perform the run
  // of the task, and its end time was just obtained by a call to Now() (just
  // after the task finished).
  static void TallyRunOnWorkerThreadIfTracking(
      const Births* birth,
      const TrackedTime& time_posted,
      const TaskStopwatch& stopwatch);

  // Records the end of execution in a region, generally corresponding to a
  // scope being exited.
  static void TallyRunInAScopedRegionIfTracking(
      const Births* birth,
      const TaskStopwatch& stopwatch);

  const std::string& thread_name() const { return thread_name_; }

  // Initializes all statics if needed (this initialization call should be made
  // while we are single threaded). Returns false if unable to initialize.
  static bool Initialize();

  // Sets internal status_.
  // If |status| is DEACTIVATED, then status_ is set to DEACTIVATED.
  // Otherwise, status_ is set to PROFILING_ACTIVE or
  // PROFILING_CHILDREN_ACTIVE.
  // If tracking is not compiled in, this function will return false.
  // If parent-child tracking is not compiled in, then an attempt to set the
  // status to PROFILING_CHILDREN_ACTIVE will only result in a status of
  // PROFILING_ACTIVE (i.e., it can't be set to a higher level than what is
  // compiled into the binary, and parent-child tracking at the
  // PROFILING_CHILDREN_ACTIVE level might not be compiled in).
  static bool InitializeAndSetTrackingStatus(Status status);
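
  // A hedged usage sketch (per the Initialize() comment above, this should
  // happen while the process is still single threaded):
  //
  //   bool ok = tracked_objects::ThreadData::InitializeAndSetTrackingStatus(
  //       tracked_objects::ThreadData::PROFILING_ACTIVE);
  //   // ok is false if tracking support is not compiled into this binary.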

  static Status status();

  // Indicate if any sort of profiling is being done (i.e., we are more than
  // DEACTIVATED).
  static bool TrackingStatus();

  // For testing only, indicate if the status of parent-child tracking is turned
  // on. This is currently a compiled option, atop TrackingStatus().
  static bool TrackingParentChildStatus();

  // Marks the start of a tracked run. It's super fast when tracking is
  // disabled, and has some internal side effects when we are tracking, so that
  // we can deduce the amount of time accumulated outside of execution of
  // tracked runs. The task that will be tracked is passed in as |parent| so
  // that parent-child relationships can be (optionally) calculated.
  static void PrepareForStartOfRun(const Births* parent);

  // Enables profiler timing.
  static void EnableProfilerTiming();

  // Provide a time function that does nothing (runs fast) when we don't have
  // the profiler enabled. It will generally be optimized away when it is
  // ifdef'ed to be small enough (allowing the profiler to be "compiled out" of
  // the code).
  static TrackedTime Now();

  // Use the function |now| to provide current times, instead of calling the
  // TrackedTime::Now() function. Since this alternate function is being used,
  // the other time arguments (used for calculating queueing delay) will be
  // ignored.
  static void SetAlternateTimeSource(NowFunction* now);

  // This function can be called at process termination to validate that thread
  // cleanup routines have been called for at least some number of named
  // threads.
  static void EnsureCleanupWasCalled(int major_threads_shutdown_count);

 private:
  friend class TaskStopwatch;
  // Allow only tests to call ShutdownSingleThreadedCleanup. We NEVER call it
  // in production code.
  // TODO(jar): Make this a friend in DEBUG only, so that the optimizer has a
  // better chance of optimizing (inlining? etc.) private methods (knowing that
  // there will be no need for an external entry point).
  friend class TrackedObjectsTest;
  FRIEND_TEST_ALL_PREFIXES(TrackedObjectsTest, MinimalStartupShutdown);
  FRIEND_TEST_ALL_PREFIXES(TrackedObjectsTest, TinyStartupShutdown);
  FRIEND_TEST_ALL_PREFIXES(TrackedObjectsTest, ParentChildTest);

  typedef std::map<const BirthOnThread*, int> BirthCountMap;

  // Worker thread construction creates a name since there is none.
  explicit ThreadData(int thread_number);

  // Message loop based construction should provide a name.
  explicit ThreadData(const std::string& suggested_name);

  ~ThreadData();

  // Push this instance to the head of all_thread_data_list_head_, linking it to
  // the previous head. This is performed after each construction, and leaves
  // the instance permanently on that list.
  void PushToHeadOfList();

  // (Thread safe) Get start of list of all ThreadData instances using the lock.
  static ThreadData* first();

  // Iterate through the null terminated list of ThreadData instances.
  ThreadData* next() const;

  // In this thread's data, record a new birth.
  Births* TallyABirth(const Location& location);

  // Find a place to record a death on this thread.
  void TallyADeath(const Births& birth,
                   int32 queue_duration,
                   const TaskStopwatch& stopwatch);

  // Snapshot (under a lock) the profiled data for the tasks in each ThreadData
  // instance. Also updates the |birth_counts| tally for each task to keep
  // track of the number of living instances of the task.
  static void SnapshotAllExecutedTasks(ProcessDataSnapshot* process_data,
                                       BirthCountMap* birth_counts);

  // Snapshots (under a lock) the profiled data for the tasks for this thread
  // and writes all of the executed tasks' data -- i.e. the data for the tasks
  // with entries in the death_map_ -- into |process_data|. Also updates
  // the |birth_counts| tally for each task to keep track of the number of
  // living instances of the task -- that is, each task maps to the number of
  // births for the task that have not yet been balanced by a death.
  void SnapshotExecutedTasks(ProcessDataSnapshot* process_data,
                             BirthCountMap* birth_counts);

  // Using our lock, make a copy of the specified maps. This call may be made
  // on non-local threads, which necessitates the use of the lock to prevent
  // the map(s) from being reallocated while they are copied.
  void SnapshotMaps(BirthMap* birth_map,
                    DeathMap* death_map,
                    ParentChildSet* parent_child_set);

  // This method is called by the TLS system when a thread terminates.
  // The argument may be NULL if this thread has never tracked a birth or death.
  static void OnThreadTermination(void* thread_data);

  // This method should be called when a worker thread terminates, so that we
  // can save all the thread data into a cache of reusable ThreadData instances.
  void OnThreadTerminationCleanup();

  // Cleans up data structures, and returns statics to near pristine (mostly
  // uninitialized) state. If there is any chance that other threads are still
  // using the data structures, then the |leak| argument should be passed in as
  // true, and the data structures (birth maps, death maps, ThreadData
  // instances, etc.) will be leaked and not deleted. If you have joined all
  // threads since the time that InitializeAndSetTrackingStatus() was called,
  // then you can pass in a |leak| value of false, and this function will
  // delete recursively all data structures, starting with the list of
  // ThreadData instances.
  static void ShutdownSingleThreadedCleanup(bool leak);

  // When non-null, this specifies an external function that supplies
  // monotonically increasing time values.
  static NowFunction* now_function_;

  // If true, now_function_ returns values that can be used to calculate queue
  // time.
  static bool now_function_is_time_;

  // We use thread local store to identify which ThreadData to interact with.
  static base::ThreadLocalStorage::StaticSlot tls_index_;

  // List of ThreadData instances for use with worker threads. When a worker
  // thread is done (terminated), we push it onto this list. When a new worker
  // thread is created, we first try to re-use a ThreadData instance from the
  // list, and if none are available, construct a new one.
  // This is only accessed while list_lock_ is held.
  static ThreadData* first_retired_worker_;

  // Link to the most recently created instance (starts a null terminated list).
  // The list is traversed by about:profiler when it needs to snapshot data.
  // This is only accessed while list_lock_ is held.
  static ThreadData* all_thread_data_list_head_;

  // The next available worker thread number. This should only be accessed when
  // the list_lock_ is held.
  static int worker_thread_data_creation_count_;

  // The number of times TLS has called us back to cleanup a ThreadData
  // instance. This is only accessed while list_lock_ is held.
  static int cleanup_count_;

  // Incarnation sequence number, indicating how many times (during unittests)
  // we've either transitioned out of UNINITIALIZED, or into that state. This
  // value is only accessed while the list_lock_ is held.
  static int incarnation_counter_;

  // Protection for access to all_thread_data_list_head_, and to
  // unregistered_thread_data_pool_. This lock is leaked at shutdown.
  // The lock is very infrequently used, so we can afford to just make a lazy
  // instance and be safe.
  static base::LazyInstance<base::Lock>::Leaky list_lock_;

  // We set status_ to DEACTIVATED when we shut down the tracking service.
  static Status status_;

  // Link to next instance (null terminated list). Used to globally track all
  // registered instances (corresponds to all registered threads where we keep
  // data).
  ThreadData* next_;

  // Pointer to another ThreadData instance for a Worker-Thread that has been
  // retired (its thread was terminated). This value is non-NULL only for a
  // retired ThreadData associated with a Worker-Thread.
  ThreadData* next_retired_worker_;

  // The name of the thread that is being recorded. If this thread has no
  // message_loop, then this is a worker thread, with a sequence number postfix.
  std::string thread_name_;

  // Indicates if this is a worker thread, and whether the ThreadData contexts
  // should be stored in the unregistered_thread_data_pool_ when not in use.
  // Value is zero when it is not a worker thread. Value is a positive integer
  // corresponding to the created thread name if it is a worker thread.
  int worker_thread_number_;

  // A map used on each thread to keep track of Births on this thread.
  // This map should only be accessed on the thread it was constructed on.
  // When a snapshot is needed, this structure can be locked in place for the
  // duration of the snapshotting activity.
  BirthMap birth_map_;

  // Similar to birth_map_, this records information about deaths of tracked
  // instances (i.e., when a tracked instance was destroyed on this thread).
  // It is locked before changing, and hence other threads may access it by
  // locking before reading it.
  DeathMap death_map_;

  // A set of parents that created children tasks on this thread. Each pair
  // corresponds to potentially non-local Births (location and thread), and a
  // local Births (that took place on this thread).
  ParentChildSet parent_child_set_;

  // Lock to protect *some* access to BirthMap and DeathMap. The maps are
  // regularly read and written on this thread, but may only be read from other
  // threads. To support this, we acquire this lock if we are writing from this
  // thread, or reading from another thread. For reading from this thread we
  // don't need a lock, as there is no potential for a conflict since the
  // writing is only done from this thread.
  mutable base::Lock map_lock_;

  // The stack of parents that are currently being profiled. This includes only
  // tasks that have started a timer recently via PrepareForStartOfRun(), but
  // not yet concluded with a NowForEndOfRun(). Usually this stack is one deep,
  // but if a scoped region is profiled, or <sigh> a task runs a nested message
  // loop, then the stack can grow larger. Note that we don't try to deduct
  // time in nested profiles, as our current timer is based on wall-clock time,
  // and not CPU time (and we're hopeful that nested timing won't be a
  // significant additional cost).
  ParentStack parent_stack_;

  // A random number that we use to decide which sample to keep as a
  // representative sample in each DeathData instance. We can't start off with
  // much randomness (because we can't call RandInt() on all our threads), so
  // we stir in more and more as we go.
  uint32 random_number_;

  // Record of what the incarnation_counter_ was when this instance was created.
  // If the incarnation_counter_ has changed, then we avoid pushing into the
  // pool (this is only critical in tests which go through multiple
  // incarnations).
  int incarnation_count_for_pool_;

  // Most recently started (i.e. most nested) stopwatch on the current thread,
  // if it exists; NULL otherwise.
  TaskStopwatch* current_stopwatch_;

  DISALLOW_COPY_AND_ASSIGN(ThreadData);
};

//------------------------------------------------------------------------------
// Stopwatch to measure task run time or simply create a time interval that will
// be subtracted from the current most nested task's run time. Stopwatches
// coordinate with the stopwatches in which they are nested to avoid
// double-counting nested tasks' run times.
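//
// A hedged sketch of that nesting behavior (OuterWork() and InnerWork() are
// hypothetical stand-ins for real work):
//
//   tracked_objects::TaskStopwatch outer;
//   outer.Start();
//   OuterWork();
//   {
//     tracked_objects::TaskStopwatch inner;
//     inner.Start();
//     InnerWork();
//     inner.Stop();
//   }
//   outer.Stop();
//   // outer.RunDurationMs() now excludes the wallclock time spent while
//   // |inner| was running.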

class BASE_EXPORT TaskStopwatch {
 public:
  // Starts the stopwatch.
  TaskStopwatch();
  ~TaskStopwatch();

  // Starts stopwatch.
  void Start();

  // Stops stopwatch.
  void Stop();

  // Returns the start time.
  TrackedTime StartTime() const;

  // Task's duration is calculated as the wallclock duration between starting
  // and stopping this stopwatch, minus the wallclock durations of any other
  // instances that are immediately nested in this one, started and stopped on
  // this thread during that period.
  int32 RunDurationMs() const;

  // Returns tracking info for the current thread.
  ThreadData* GetThreadData() const;

 private:
  // Time when the stopwatch was started.
  TrackedTime start_time_;

  // Wallclock duration of the task.
  int32 wallclock_duration_ms_;

  // Tracking info for the current thread.
  ThreadData* current_thread_data_;

  // Sum of wallclock durations of all stopwatches that were directly nested in
  // this one.
  int32 excluded_duration_ms_;

  // Stopwatch which was running on our thread when this stopwatch was started.
  // That preexisting stopwatch must be adjusted to exclude the wallclock
  // duration of this stopwatch.
  TaskStopwatch* parent_;

#if DCHECK_IS_ON()
  // State of the stopwatch. The stopwatch is first constructed in the CREATED
  // state, then is optionally started/stopped, then destructed.
  enum { CREATED, RUNNING, STOPPED } state_;

  // Currently running stopwatch that is directly nested in this one, if such
  // stopwatch exists. NULL otherwise.
  TaskStopwatch* child_;
#endif
};

//------------------------------------------------------------------------------
// A snapshotted representation of a (parent, child) task pair, for tracking
// hierarchical profiles.

struct BASE_EXPORT ParentChildPairSnapshot {
 public:
  ParentChildPairSnapshot();
  explicit ParentChildPairSnapshot(
      const ThreadData::ParentChildPair& parent_child);
  ~ParentChildPairSnapshot();

  BirthOnThreadSnapshot parent;
  BirthOnThreadSnapshot child;
};

//------------------------------------------------------------------------------
// A snapshotted representation of the list of ThreadData objects for a process.

struct BASE_EXPORT ProcessDataSnapshot {
 public:
  ProcessDataSnapshot();
  ~ProcessDataSnapshot();

  std::vector<TaskSnapshot> tasks;
  std::vector<ParentChildPairSnapshot> descendants;
  int process_id;
};

}  // namespace tracked_objects

#endif  // BASE_TRACKED_OBJECTS_H_