"allocator.mt.example_multi"></a>Multiple Thread Example
</h2></div></div></div><p>
In the ST example we never used the thread_id variable present in each block. Let's start by explaining the purpose of this in an MT application.

The concept of "ownership" was introduced because many MT applications allocate and deallocate memory to shared containers from different threads (such as a cache shared amongst all threads). This causes a problem if the allocator only returns memory to the current thread's freelist: there might be one thread doing all the allocation, and thus obtaining ever more memory from the system, while another thread builds up a longer and longer freelist; in the end this consumes all available memory.

Each time a block is moved from the global list (where ownership is irrelevant) to a thread's freelist, when a new freelist is built from a chunk directly onto a thread's freelist, or when a deallocation occurs on a block which was not allocated by the same thread id as the one doing the deallocation, the thread id is set to the current one.

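As a rough sketch of the idea (hypothetical names; the real record lives in <ext/mt_allocator.h> and its exact layout may differ):

    #include <cstddef>

    // Simplified block record: while the block sits on a freelist, 'next'
    // links it to the following free block; 'thread_id' records which
    // thread currently owns it.
    struct block_record
    {
        block_record* next;
        std::size_t   thread_id;
    };

    // Stamp a block with its new owner. This happens when blocks move from
    // the global list to a thread's freelist, when a fresh chunk is carved
    // directly onto a thread's freelist, and when a block is freed by a
    // thread other than the one that allocated it.
    inline void take_ownership(block_record* block, std::size_t current_thread)
    {
        block->thread_id = current_thread;
    }
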
What's the use? Well, when a deallocation occurs we can now look at the thread id, find out whether the block was allocated by another thread, and decrease the used counter of that thread instead, thus keeping the free and used counters correct. And keeping the free and used counters correct is very important, since the relationship between these two variables decides whether memory should be returned to the global pool or not when a deallocation occurs.

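The bookkeeping these counters live in can be pictured per bin, with one slot per thread plus the global freelist at index 0. This is only a hypothetical layout used by the sketches below, not the data members of the actual pool:

    #include <cstddef>
    #include <mutex>

    struct block_record;   // as sketched above

    // Hypothetical per-bin bookkeeping for one block size. Index 0 of each
    // array is the global freelist; indexes 1..N belong to the threads.
    struct bin_record
    {
        block_record** first;   // head of the freelist for each thread
        std::size_t*   free;    // blocks currently sitting on each freelist
        std::size_t*   used;    // blocks currently in use by each thread
        std::mutex     mutex;   // guards this bin's global freelist
    };
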
When the application requests memory (calling allocate()) we first look at the requested size, and if this is > _S_max_bytes we call new() directly and return.

If the requested size is within limits, we start by finding out from which bin we should serve this request by looking in _S_binmap.

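These first two steps might look roughly as follows; max_bytes and binmap are hypothetical stand-ins for _S_max_bytes and _S_binmap, and the rest of the algorithm is only hinted at:

    #include <cstddef>
    #include <new>

    const std::size_t max_bytes = 128;   // stand-in for _S_max_bytes
    std::size_t binmap[max_bytes + 1];   // stand-in for _S_binmap, filled
                                         // at pool initialisation

    void* allocate_entry(std::size_t bytes)
    {
        // Requests larger than the pool handles go straight to new().
        if (bytes > max_bytes)
            return ::operator new(bytes);

        // Otherwise look up which bin serves this request size.
        const std::size_t bin = binmap[bytes];

        // ... continue with the thread id lookup and the freelist handling
        // sketched in the following examples ...
        (void)bin;
        return nullptr;   // placeholder for the rest of the algorithm
    }
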
A call to _S_get_thread_id() returns the thread id for the calling thread (and if no value has been set in _S_thread_key yet, a new id is assigned and returned).

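The real allocator keeps the id in a gthread key (_S_thread_key); a thread_local with lazy assignment shows the same "first use assigns an id" behaviour. This is a substitute mechanism for illustration only:

    #include <atomic>
    #include <cstddef>

    // Assign each thread a small id the first time it calls the allocator.
    // Id 0 is reserved for the global freelist.
    std::size_t get_thread_id()
    {
        static std::atomic<std::size_t> next_id{1};
        thread_local std::size_t my_id = next_id.fetch_add(1);
        return my_id;
    }
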
A quick look at _S_bin[ bin ].first[ thread_id ] tells us if there are any blocks of this size on the current thread's freelist. If this pointer is not NULL - fine, just remove the block that _S_bin[ bin ].first[ thread_id ] points to from the list, update _S_bin[ bin ].first[ thread_id ], update the free and used counters and return a pointer to that block's data.

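This fast path takes no lock at all. A sketch, assuming (as in the single thread example) that the data area follows the block record; the names and signature are illustrative, not the allocator's own:

    #include <cstddef>

    struct block_record { block_record* next; std::size_t thread_id; };

    // Pop the head block of the calling thread's freelist for this bin.
    // 'first' plays the role of _S_bin[ bin ].first[ thread_id ].
    void* pop_from_thread_freelist(block_record*& first,
                                   std::size_t& free_count,
                                   std::size_t& used_count)
    {
        block_record* block = first;   // head of this thread's freelist
        first = block->next;           // unlink it
        --free_count;                  // one block less on the freelist
        ++used_count;                  // one block more in use by this thread
        return block + 1;              // data area begins after the record
    }
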
If the freelist is empty (the pointer is NULL) we start by looking at the global freelist (index 0). If there are blocks available on the global freelist we lock this bin's mutex and move up to block_count blocks (the number of blocks of this bin's size that will fit into a _S_chunk_size) - or until the end of the list, whichever comes first - to the current thread's freelist, changing the thread_id ownership and updating the counters and pointers at the same time. Once the bin's mutex has been unlocked, we remove the block that _S_bin[ bin ].first[ thread_id ] points to from the list, update _S_bin[ bin ].first[ thread_id ], update the free and used counters, and return a pointer to that block's data.

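The refill step under the bin's mutex could be sketched like this; the parameter names are illustrative and the batch size corresponds to block_count:

    #include <cstddef>
    #include <mutex>

    struct block_record { block_record* next; std::size_t thread_id; };

    // Move up to block_count blocks from the global freelist (index 0) to
    // the calling thread's freelist, re-stamping ownership as they move.
    void refill_from_global(block_record*& global_first,
                            block_record*& thread_first,
                            std::size_t&   global_free,
                            std::size_t&   thread_free,
                            std::size_t    thread_id,
                            std::size_t    block_count,
                            std::mutex&    bin_mutex)
    {
        std::lock_guard<std::mutex> lock(bin_mutex);
        for (std::size_t i = 0; i < block_count && global_first != nullptr; ++i)
        {
            block_record* block = global_first;   // head of the global list
            global_first = block->next;
            block->thread_id = thread_id;         // ownership moves here
            block->next = thread_first;           // push onto this thread's list
            thread_first = block;
            --global_free;
            ++thread_free;
        }
    }
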
The reason that the number of blocks moved to the current thread's freelist is limited to block_count is to minimize the chance that a subsequent deallocate() call will return the excess blocks to the global freelist (based on the _S_freelist_headroom calculation, see below).

However, if there isn't any memory in the global pool we need to get memory from the system - this is done in exactly the same way as in a single threaded application, with one major difference: the list built in the newly allocated memory (of _S_chunk_size size) is added to the current thread's freelist instead of to the global one.

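Carving a fresh chunk onto the calling thread's freelist might look roughly like this; chunk_size and block_size stand in for _S_chunk_size and the bin's block size, and the helper name is hypothetical:

    #include <cstddef>
    #include <new>

    struct block_record { block_record* next; std::size_t thread_id; };

    // Get a chunk from the system, carve it into blocks of this bin's size
    // and link them all onto the calling thread's freelist (not the global
    // one), stamping each block with the calling thread's id.
    void add_chunk_to_thread_freelist(block_record*& thread_first,
                                      std::size_t&   thread_free,
                                      std::size_t    thread_id,
                                      std::size_t    chunk_size,
                                      std::size_t    block_size)
    {
        char* chunk = static_cast<char*>(::operator new(chunk_size));
        const std::size_t count = chunk_size / block_size;

        for (std::size_t i = 0; i < count; ++i)
        {
            block_record* block =
                reinterpret_cast<block_record*>(chunk + i * block_size);
            block->thread_id = thread_id;
            block->next = thread_first;
            thread_first = block;
            ++thread_free;
        }
    }
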
The basic process of a deallocation call is simple: always add the block to the front of the current thread's freelist and update the counters and pointers (as described earlier, with the specific check of ownership that causes the used counter of the thread that originally allocated the block to be decreased instead of the current thread's counter).

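A sketch of this, again assuming the data area directly follows the block record; 'used' plays the role of the per-thread used counters of the bin and the signature is illustrative only:

    #include <cstddef>

    struct block_record { block_record* next; std::size_t thread_id; };

    void deallocate_block(void*          ptr,
                          block_record*& thread_first,
                          std::size_t&   thread_free,
                          std::size_t*   used,
                          std::size_t    current_thread)
    {
        // Recover the block record that precedes the data area.
        block_record* block = reinterpret_cast<block_record*>(ptr) - 1;

        // Whoever allocated the block is the thread whose "in use" count drops.
        --used[block->thread_id];

        // The block itself now belongs to the deallocating thread and goes
        // to the front of that thread's freelist.
        block->thread_id = current_thread;
        block->next = thread_first;
        thread_first = block;
        ++thread_free;
    }
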
And this is where the free and used counters come into service: each time a deallocate() call is made, the length of the current thread's freelist is compared to the amount of memory in use by this thread.

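Expressed as a predicate, the check amounts to something like the following (an illustrative formulation of the _S_freelist_headroom rule, not the exact source expression):

    #include <cstddef>

    // Return true when the calling thread's freelist for this bin has grown
    // past freelist_headroom percent of what that thread has in use, i.e.
    // when excess blocks should be handed back to the global pool.
    bool should_return_to_global(std::size_t thread_free,
                                 std::size_t thread_used,
                                 std::size_t freelist_headroom /* e.g. 10 */)
    {
        return thread_free > thread_used * freelist_headroom / 100;
    }
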
Let's go back to the example of an application that has one thread that does all the allocations and one that deallocates. Both these threads use, say, 516 32-byte blocks each that were allocated during thread creation. Their used counters will both say 516 at this point. The allocation thread now grabs 1000 32-byte blocks and puts them in a shared container. The used counter for this thread is now 1516.

The deallocation thread now deallocates 500 of these blocks. For each deallocation made, the used counter of the allocating thread is decreased and the freelist of the deallocation thread gets longer and longer. But the calculation made in deallocate() will limit the length of the freelist in the deallocation thread to _S_freelist_headroom % of its used counter. In this case, once the freelist exceeds about 52 blocks (516/10, given that _S_freelist_headroom is at its default value of 10%), blocks will be returned to the global pool, where the allocating thread may pick them up and reuse them.

In order to reduce lock contention (since this requires the bin's mutex to be locked), this operation is also done in chunks of blocks (just like when chunks of blocks are moved from the global freelist to a thread's freelist, as mentioned above). The "formula" used can probably be improved to further reduce the risk of blocks being "bounced back and forth" between freelists.
