/**
 * \file
 * Simple generational GC.
 *
 * Author:
 * 	Paolo Molaro (lupus@ximian.com)
 * 	Rodrigo Kumpera (kumpera@gmail.com)
 *
 * Copyright 2005-2011 Novell, Inc (http://www.novell.com)
 * Copyright 2011 Xamarin Inc (http://www.xamarin.com)
 *
 * Thread start/stop adapted from Boehm's GC:
 * Copyright (c) 1994 by Xerox Corporation. All rights reserved.
 * Copyright (c) 1996 by Silicon Graphics. All rights reserved.
 * Copyright (c) 1998 by Fergus Henderson. All rights reserved.
 * Copyright (c) 2000-2004 by Hewlett-Packard Company. All rights reserved.
 * Copyright 2001-2003 Ximian, Inc
 * Copyright 2003-2010 Novell, Inc.
 * Copyright 2011 Xamarin, Inc.
 * Copyright (C) 2012 Xamarin Inc
 *
 * Licensed under the MIT license. See LICENSE file in the project root for full license information.
 *
 * Important: allocation always provides zeroed memory; having to do
 * a memset after allocation is deadly for performance.
 * Memory usage at startup is currently as follows:
 * 64 KB pinned space
 * 64 KB internal space
 * size of nursery
 * We should provide a small memory config with half the sizes.
 *
 * We currently try to make as few mono assumptions as possible:
 * 1) 2-word header with no GC pointers in it (first vtable, second to store the
 *    forwarding ptr)
 * 2) gc descriptor is the second word in the vtable (first word in the class)
 * 3) 8 byte alignment is the minimum and enough (not true for special structures (SIMD), FIXME)
 * 4) there is a function to get an object's size and the number of
 *    elements in an array.
 * 5) we know the special way bounds are allocated for complex arrays
 * 6) we know about proxies and how to treat them when domains are unloaded
 *
 * Always try to keep stack usage to a minimum: no recursive behaviour
 * and no large stack allocs.
 *
 * General description.
 * Objects are initially allocated in a nursery using a fast bump-pointer technique.
 * When the nursery is full we start a nursery collection: this is performed with a
 * copying GC.
 * When the old generation is full we start a copying GC of the old generation as well:
 * this will be changed to mark&sweep with copying when fragmentation becomes too severe
 * in the future. Maybe we'll even do both during the same collection like IMMIX.
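 *
 * As an illustration only (a sketch, not the real allocator: the actual fast
 * path lives in sgen-alloc.c and must also deal with TLABs, races and
 * degraded mode), the bump-pointer allocation described above amounts to
 * roughly the following, with alloc_ptr/alloc_end as hypothetical per-thread
 * bounds:
 *
 *	static inline void*
 *	bump_alloc (char **alloc_ptr, char *alloc_end, size_t size)
 *	{
 *		char *obj = *alloc_ptr;
 *		size = ALIGN_UP (size);
 *		if (obj + size > alloc_end)
 *			return NULL;		// slow path: new TLAB or collect
 *		*alloc_ptr = obj + size;
 *		return (void*)obj;		// memory is already zeroed
 *	}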
 *
 * The things that complicate this description are:
 * *) pinned objects: we can't move them so we need to keep track of them
 * *) no precise info of the thread stacks and registers: we need to be able to
 *    quickly find the objects that may be referenced conservatively and pin them
 *    (this makes the first issue more important)
 * *) large objects are too expensive to be dealt with using copying GC: we handle them
 *    with mark/sweep during major collections
 * *) some objects need to not move even if they are small (interned strings, Type handles):
 *    we use mark/sweep for them, too: they are not allocated in the nursery, but inside
 *    PinnedChunks regions
 *
 *
 * TODO:

 *) we could have a function pointer in MonoClass to implement
  customized write barriers for value types

 *) investigate the stuff needed to advance a thread to a GC-safe
  point (single-stepping, read from unmapped memory etc) and implement it.
  This would enable us to inline allocations and write barriers, for example,
  or at least parts of them, like the write barrier checks.
  We may need this also for handling precise info on stacks, even simple things
  as having uninitialized data on the stack and having to wait for the prolog
  to zero it. Not an issue for the last frame that we scan conservatively.
  We could always not trust the value in the slots anyway.

 *) modify the jit to save info about references in stack locations:
  this can be done just for locals as a start, so that at least
  part of the stack is handled precisely.

 *) test/fix endianness issues

 *) Implement a card table as the write barrier instead of remembered
  sets? Card tables are not easy to implement with our current
  memory layout. We have several different kinds of major heap
  objects: Small objects in regular blocks, small objects in pinned
  chunks and LOS objects. If we just have a pointer we have no way
  to tell which kind of object it points into, therefore we cannot
  know where its card table is. The least we have to do to make
  this happen is to get rid of write barriers for indirect stores.
  (See next item; a sketch of the barrier itself follows below.)
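
  For reference, the card-marking barrier itself would be tiny -- a sketch
  assuming a hypothetical global card table base and a CARD_BITS granularity
  shift, which is exactly the layout decision that is still open here:

	static void
	wbarrier_card_mark (guint8 *card_table, void *field_addr)
	{
		// one card byte covers 2^CARD_BITS bytes of heap; dirty the
		// card containing the field that was just stored to
		card_table [(mword)field_addr >> CARD_BITS] = 1;
	}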

 *) Get rid of write barriers for indirect stores. We can do this by
  telling the GC to wbarrier-register an object once we do an ldloca
  or ldelema on it, and to unregister it once it's not used anymore
  (it can only travel downwards on the stack). The problem with
  unregistering is that it needs to happen eventually no matter
  what, even if exceptions are thrown, the thread aborts, etc.
  Rodrigo suggested that we could do only the registering part and
  let the collector find out (pessimistically) when it's safe to
  unregister, namely when the stack pointer of the thread that
  registered the object is higher than it was when the registering
  happened. This might make for a good first implementation to get
  some data on performance; a sketch follows below.
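
  A possible shape for that pessimistic unregistering, with an entirely
  hypothetical record type and helpers (on downward-growing stacks a current
  stack pointer above the recorded one means the registering frame is gone):

	typedef struct {
		gpointer thread;	// thread that did the ldloca/ldelema
		GCObject *obj;		// object registered for wbarriers
		char *stack_addr;	// SP at registration time
	} WBarrierRegistration;

	// collector side, during a pause:
	if (current_sp_of_thread (reg->thread) > reg->stack_addr)
		unregister_wbarrier_object (reg);	// frame is dead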

 *) Some sort of blacklist support? Blacklists are a concept from the
  Boehm GC: if during a conservative scan we find pointers to an
  area which we might use as heap, we mark that area as unusable, so
  pointer retention by random pinning pointers is reduced.

 *) experiment with max small object size (very small right now - 2kb,
  because it's tied to the max freelist size)

 *) add an option to mmap the whole heap in one chunk: it makes for many
  simplifications in the checks (put the nursery at the top and just use a single
  check for inclusion/exclusion): the issue this has is that on 32 bit systems it's
  not flexible (too much of the address space may be used by default or we can't
  increase the heap as needed) and we'd need a race-free mechanism to return memory
  back to the system (mprotect(PROT_NONE) will still keep the memory allocated if it
  was written to, munmap is needed, but the following mmap may not find the same segment
  free...)

 *) memzero the major fragments after restarting the world and optionally a smaller
  chunk at a time

 *) investigate having fragment zeroing threads

 *) separate locks for finalization and other minor stuff to reduce
  lock contention

 *) try a different copying order to improve memory locality

 *) a thread abort after a store but before the write barrier will
  prevent the write barrier from executing

 *) specialized dynamically generated markers/copiers

 *) Dynamically adjust TLAB size to the number of threads. If we have
  too many threads that do allocation, we might need smaller TLABs,
  and we might get better performance with larger TLABs if we only
  have a handful of threads. We could sum up the space left in all
  assigned TLABs and if that's more than some percentage of the
  nursery size, reduce the TLAB size. A sketch of this heuristic
  follows below.
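
  Illustrative pseudo-C for that heuristic; sum_tlab_free_space () and
  SGEN_TLAB_MIN_SIZE are assumptions, not existing API:

	size_t free_in_tlabs = sum_tlab_free_space ();	// across all threads
	if (free_in_tlabs > sgen_nursery_size / 10)	// e.g. >10% sitting idle
		tlab_size = MAX (tlab_size / 2, SGEN_TLAB_MIN_SIZE);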

 *) Explore placing unreachable objects on unused nursery memory.
  Instead of memset'ing a region to zero, place an int[] covering it.
  A good place to start is add_nursery_frag. The tricky thing here is
  placing those objects atomically outside of a collection.

 *) Allocation should use asymmetric Dekker synchronization:
  http://blogs.oracle.com/dave/resource/Asymmetric-Dekker-Synchronization.txt
  This should help weakly consistent archs.
 */
#include "config.h"
#ifdef HAVE_SGEN_GC

#ifdef __MACH__
#undef _XOPEN_SOURCE
#define _XOPEN_SOURCE
#define _DARWIN_C_SOURCE
#endif

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef HAVE_PTHREAD_H
#include <pthread.h>
#endif
#ifdef HAVE_PTHREAD_NP_H
#include <pthread_np.h>
#endif
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <assert.h>
#include <stdlib.h>

#include "mono/sgen/sgen-gc.h"
#include "mono/sgen/sgen-cardtable.h"
#include "mono/sgen/sgen-protocol.h"
#include "mono/sgen/sgen-memory-governor.h"
#include "mono/sgen/sgen-hash-table.h"
#include "mono/sgen/sgen-pinning.h"
#include "mono/sgen/sgen-workers.h"
#include "mono/sgen/sgen-client.h"
#include "mono/sgen/sgen-pointer-queue.h"
#include "mono/sgen/gc-internal-agnostic.h"
#include "mono/utils/mono-proclib.h"
#include "mono/utils/mono-memory-model.h"
#include "mono/utils/hazard-pointer.h"

#include <mono/utils/memcheck.h>

#undef pthread_create
#undef pthread_join
#undef pthread_detach

/*
 * ######################################################################
 * ########  Types and constants used by the GC.
 * ######################################################################
 */

/* 0 means not initialized, 1 is initialized, -1 means in progress */
static int gc_initialized = 0;
/* If set, check if we need to do something every X allocations */
gboolean has_per_allocation_action;
/* If set, do a heap check every X allocations */
guint32 verify_before_allocs = 0;
/* If set, do a minor collection before every X allocations */
guint32 collect_before_allocs = 0;
/* If set, do a whole heap check before each collection */
static gboolean whole_heap_check_before_collection = FALSE;
/* If set, do a remset consistency check at various opportunities */
static gboolean remset_consistency_checks = FALSE;
/* If set, do a mod union consistency check before each finishing collection pause */
static gboolean mod_union_consistency_check = FALSE;
/* If set, check whether mark bits are consistent after major collections */
static gboolean check_mark_bits_after_major_collection = FALSE;
/* If set, check that all nursery objects are pinned/not pinned, depending on context */
static gboolean check_nursery_objects_pinned = FALSE;
/* If set, do a few checks when the concurrent collector is used */
static gboolean do_concurrent_checks = FALSE;
/* If set, do a plausibility check on the scan_starts before and after
   each collection */
static gboolean do_scan_starts_check = FALSE;

static gboolean disable_minor_collections = FALSE;
static gboolean disable_major_collections = FALSE;
static gboolean do_verify_nursery = FALSE;
static gboolean do_dump_nursery_content = FALSE;
static gboolean enable_nursery_canaries = FALSE;

static gboolean precleaning_enabled = TRUE;

#ifdef HEAVY_STATISTICS
guint64 stat_objects_alloced_degraded = 0;
guint64 stat_bytes_alloced_degraded = 0;

guint64 stat_copy_object_called_nursery = 0;
guint64 stat_objects_copied_nursery = 0;
guint64 stat_copy_object_called_major = 0;
guint64 stat_objects_copied_major = 0;

guint64 stat_scan_object_called_nursery = 0;
guint64 stat_scan_object_called_major = 0;

guint64 stat_slots_allocated_in_vain;

guint64 stat_nursery_copy_object_failed_from_space = 0;
guint64 stat_nursery_copy_object_failed_forwarded = 0;
guint64 stat_nursery_copy_object_failed_pinned = 0;
guint64 stat_nursery_copy_object_failed_to_space = 0;

static guint64 stat_wbarrier_add_to_global_remset = 0;
static guint64 stat_wbarrier_arrayref_copy = 0;
static guint64 stat_wbarrier_generic_store = 0;
static guint64 stat_wbarrier_generic_store_atomic = 0;
static guint64 stat_wbarrier_set_root = 0;
#endif

static guint64 stat_pinned_objects = 0;

static guint64 time_minor_pre_collection_fragment_clear = 0;
static guint64 time_minor_pinning = 0;
static guint64 time_minor_scan_remsets = 0;
static guint64 time_minor_scan_pinned = 0;
static guint64 time_minor_scan_roots = 0;
static guint64 time_minor_finish_gray_stack = 0;
static guint64 time_minor_fragment_creation = 0;

static guint64 time_major_pre_collection_fragment_clear = 0;
static guint64 time_major_pinning = 0;
static guint64 time_major_scan_pinned = 0;
static guint64 time_major_scan_roots = 0;
static guint64 time_major_scan_mod_union = 0;
static guint64 time_major_finish_gray_stack = 0;
static guint64 time_major_free_bigobjs = 0;
static guint64 time_major_los_sweep = 0;
static guint64 time_major_sweep = 0;
static guint64 time_major_fragment_creation = 0;

static guint64 time_max = 0;

static SGEN_TV_DECLARE (time_major_conc_collection_start);
static SGEN_TV_DECLARE (time_major_conc_collection_end);

int gc_debug_level = 0;
FILE* gc_debug_file;
static char* gc_params_options;
static char* gc_debug_options;

void
mono_gc_flush_info (void)
{
	fflush (gc_debug_file);
}

#define TV_DECLARE SGEN_TV_DECLARE
#define TV_GETTIME SGEN_TV_GETTIME
#define TV_ELAPSED SGEN_TV_ELAPSED

static SGEN_TV_DECLARE (sgen_init_timestamp);

NurseryClearPolicy nursery_clear_policy = CLEAR_AT_TLAB_CREATION;

#define object_is_forwarded	SGEN_OBJECT_IS_FORWARDED
#define object_is_pinned	SGEN_OBJECT_IS_PINNED
#define pin_object		SGEN_PIN_OBJECT

#define ptr_in_nursery sgen_ptr_in_nursery

#define LOAD_VTABLE	SGEN_LOAD_VTABLE

gboolean
nursery_canaries_enabled (void)
{
	return enable_nursery_canaries;
}

#define safe_object_get_size	sgen_safe_object_get_size

#if defined(HAVE_CONC_GC_AS_DEFAULT)
/* Use concurrent major on desktop platforms */
#define DEFAULT_MAJOR_INIT sgen_marksweep_conc_init
#define DEFAULT_MAJOR_NAME "marksweep-conc"
#else
#define DEFAULT_MAJOR_INIT sgen_marksweep_init
#define DEFAULT_MAJOR_NAME "marksweep"
#endif

/*
 * ######################################################################
 * ########  Global data.
 * ######################################################################
 */
MonoCoopMutex gc_mutex;

#define SCAN_START_SIZE	SGEN_SCAN_START_SIZE

size_t degraded_mode = 0;

static mword bytes_pinned_from_failed_allocation = 0;

GCMemSection *nursery_section = NULL;
static volatile mword lowest_heap_address = ~(mword)0;
static volatile mword highest_heap_address = 0;

MonoCoopMutex sgen_interruption_mutex;

int current_collection_generation = -1;
static volatile gboolean concurrent_collection_in_progress = FALSE;

/* objects that are ready to be finalized */
static SgenPointerQueue fin_ready_queue = SGEN_POINTER_QUEUE_INIT (INTERNAL_MEM_FINALIZE_READY);
static SgenPointerQueue critical_fin_queue = SGEN_POINTER_QUEUE_INIT (INTERNAL_MEM_FINALIZE_READY);

/* registered roots: the key to the hash is the root start address */
/*
 * Different kinds of roots are kept separate to speed up pin_from_roots () for example.
 */
SgenHashTable roots_hash [ROOT_TYPE_NUM] = {
	SGEN_HASH_TABLE_INIT (INTERNAL_MEM_ROOTS_TABLE, INTERNAL_MEM_ROOT_RECORD, sizeof (RootRecord), sgen_aligned_addr_hash, NULL),
	SGEN_HASH_TABLE_INIT (INTERNAL_MEM_ROOTS_TABLE, INTERNAL_MEM_ROOT_RECORD, sizeof (RootRecord), sgen_aligned_addr_hash, NULL),
	SGEN_HASH_TABLE_INIT (INTERNAL_MEM_ROOTS_TABLE, INTERNAL_MEM_ROOT_RECORD, sizeof (RootRecord), sgen_aligned_addr_hash, NULL)
};
static mword roots_size = 0; /* amount of memory in the root set */

/* The size of a TLAB */
/* The bigger the value, the less often we have to go to the slow path to allocate a new
 * one, but the more space is wasted by threads not allocating much memory.
 * FIXME: Tune this.
 * FIXME: Make this self-tuning for each thread.
 */
guint32 tlab_size = (1024 * 4);

#define MAX_SMALL_OBJ_SIZE	SGEN_MAX_SMALL_OBJ_SIZE

#define ALLOC_ALIGN		SGEN_ALLOC_ALIGN

#define ALIGN_UP		SGEN_ALIGN_UP

#ifdef SGEN_DEBUG_INTERNAL_ALLOC
MonoNativeThreadId main_gc_thread = NULL;
#endif

/*Object was pinned during the current collection*/
static mword objects_pinned;

/*
 * ######################################################################
 * ########  Macros and function declarations.
 * ######################################################################
 */

/* forward declarations */
static void scan_from_registered_roots (char *addr_start, char *addr_end, int root_type, ScanCopyContext ctx);

static void pin_from_roots (void *start_nursery, void *end_nursery, ScanCopyContext ctx);
static void finish_gray_stack (int generation, ScanCopyContext ctx);


SgenMajorCollector major_collector;
SgenMinorCollector sgen_minor_collector;

static SgenRememberedSet remset;

/*
 * The gray queue a worker job must use.  If we're not parallel or
 * concurrent, we use the main gray queue.
 */
static SgenGrayQueue*
sgen_workers_get_job_gray_queue (WorkerData *worker_data, SgenGrayQueue *default_gray_queue)
{
	if (worker_data)
		return &worker_data->private_gray_queue;
	SGEN_ASSERT (0, default_gray_queue, "Why don't we have a default gray queue when we're not running in a worker thread?");
	return default_gray_queue;
}

static void
gray_queue_redirect (SgenGrayQueue *queue)
{
	SGEN_ASSERT (0, concurrent_collection_in_progress, "Where are we redirecting the gray queue to, without a concurrent collection?");

	sgen_workers_take_from_queue (queue);
}

void
sgen_scan_area_with_callback (char *start, char *end, IterateObjectCallbackFunc callback, void *data, gboolean allow_flags, gboolean fail_on_canaries)
{
	while (start < end) {
		size_t size;
		char *obj;

		if (!*(void**)start) {
			start += sizeof (void*); /* should be ALLOC_ALIGN, really */
			continue;
		}

		if (allow_flags) {
			if (!(obj = (char *)SGEN_OBJECT_IS_FORWARDED (start)))
				obj = start;
		} else {
			obj = start;
		}

		if (!sgen_client_object_is_array_fill ((GCObject*)obj)) {
			CHECK_CANARY_FOR_OBJECT ((GCObject*)obj, fail_on_canaries);
			size = ALIGN_UP (safe_object_get_size ((GCObject*)obj));
			callback ((GCObject*)obj, size, data);
			CANARIFY_SIZE (size);
		} else {
			size = ALIGN_UP (safe_object_get_size ((GCObject*)obj));
		}

		start += size;
	}
}

/*
 * sgen_add_to_global_remset:
 *
 *   The global remset contains locations which point into newspace after
 * a minor collection. This can happen if the objects they point to are pinned.
 *
 * LOCKING: If called from a parallel collector, the global remset
 * lock must be held.  For serial collectors that is not necessary.
 */
void
sgen_add_to_global_remset (gpointer ptr, GCObject *obj)
{
	SGEN_ASSERT (5, sgen_ptr_in_nursery (obj), "Target pointer of global remset must be in the nursery");

	HEAVY_STAT (++stat_wbarrier_add_to_global_remset);

	if (!major_collector.is_concurrent) {
		SGEN_ASSERT (5, current_collection_generation != -1, "Global remsets can only be added during collections");
	} else {
		if (current_collection_generation == -1)
			SGEN_ASSERT (5, sgen_concurrent_collection_in_progress (), "Global remsets outside of collection pauses can only be added by the concurrent collector");
	}

	if (!object_is_pinned (obj))
		SGEN_ASSERT (5, sgen_minor_collector.is_split || sgen_concurrent_collection_in_progress (), "Non-pinned objects can only remain in nursery if it is a split nursery");
	else if (sgen_cement_lookup_or_register (obj))
		return;

	remset.record_pointer (ptr);

	sgen_pin_stats_register_global_remset (obj);

	SGEN_LOG (8, "Adding global remset for %p", ptr);
	binary_protocol_global_remset (ptr, obj, (gpointer)SGEN_LOAD_VTABLE (obj));
}

/*
 * sgen_drain_gray_stack:
 *
 *   Scan objects in the gray stack until the stack is empty. This should be called
 * frequently after each object is copied, to achieve better locality and cache
 * usage.
 */
gboolean
sgen_drain_gray_stack (ScanCopyContext ctx)
{
	SGEN_ASSERT (0, ctx.ops->drain_gray_stack, "Why do we have a scan/copy context with a missing drain gray stack function?");

	return ctx.ops->drain_gray_stack (ctx.queue);
}

/*
 * Addresses in the pin queue are already sorted. This function finds
 * the object header for each address and pins the object. The
 * addresses must be inside the nursery section. The (start of the)
 * address array is overwritten with the addresses of the actually
 * pinned objects. Return the number of pinned objects.
 */
static int
pin_objects_from_nursery_pin_queue (gboolean do_scan_objects, ScanCopyContext ctx)
{
	GCMemSection *section = nursery_section;
	void **start = sgen_pinning_get_entry (section->pin_queue_first_entry);
	void **end = sgen_pinning_get_entry (section->pin_queue_last_entry);
	void *start_nursery = section->data;
	void *end_nursery = section->next_data;
	void *last = NULL;
	int count = 0;
	void *search_start;
	void *addr;
	void *pinning_front = start_nursery;
	size_t idx;
	void **definitely_pinned = start;
	ScanObjectFunc scan_func = ctx.ops->scan_object;
	SgenGrayQueue *queue = ctx.queue;

	sgen_nursery_allocator_prepare_for_pinning ();

	while (start < end) {
		GCObject *obj_to_pin = NULL;
		size_t obj_to_pin_size = 0;
		SgenDescriptor desc;

		addr = *start;

		SGEN_ASSERT (0, addr >= start_nursery && addr < end_nursery, "Potential pinning address out of range");
		SGEN_ASSERT (0, addr >= last, "Pin queue not sorted");

		if (addr == last) {
			++start;
			continue;
		}

		SGEN_LOG (5, "Considering pinning addr %p", addr);
		/* We've already processed everything up to pinning_front. */
		if (addr < pinning_front) {
			start++;
			continue;
		}

		/*
		 * Find the closest scan start <= addr.  We might search backward in the
		 * scan_starts array because entries might be NULL.  In the worst case we
		 * start at start_nursery.
		 */
		idx = ((char*)addr - (char*)section->data) / SCAN_START_SIZE;
		SGEN_ASSERT (0, idx < section->num_scan_start, "Scan start index out of range");
		search_start = (void*)section->scan_starts [idx];
		if (!search_start || search_start > addr) {
			while (idx) {
				--idx;
				search_start = section->scan_starts [idx];
				if (search_start && search_start <= addr)
					break;
			}
			if (!search_start || search_start > addr)
				search_start = start_nursery;
		}

		/*
		 * If the pinning front is closer than the scan start we found, start
		 * searching at the front.
		 */
		if (search_start < pinning_front)
			search_start = pinning_front;

		/*
		 * Now addr should be in an object a short distance from search_start.
		 *
		 * search_start must point to zeroed mem or point to an object.
		 */
		do {
			size_t obj_size, canarified_obj_size;

			/* Skip zeros. */
			if (!*(void**)search_start) {
				search_start = (void*)ALIGN_UP ((mword)search_start + sizeof (gpointer));
				/* The loop condition makes sure we don't overrun addr. */
				continue;
			}

			canarified_obj_size = obj_size = ALIGN_UP (safe_object_get_size ((GCObject*)search_start));

			/*
			 * Filler arrays are marked by an invalid sync word.  We don't
			 * consider them for pinning.  They are not delimited by canaries,
			 * either.
			 */
			if (!sgen_client_object_is_array_fill ((GCObject*)search_start)) {
				CHECK_CANARY_FOR_OBJECT (search_start, TRUE);
				CANARIFY_SIZE (canarified_obj_size);

				if (addr >= search_start && (char*)addr < (char*)search_start + obj_size) {
					/* This is the object we're looking for. */
					obj_to_pin = (GCObject*)search_start;
					obj_to_pin_size = canarified_obj_size;
					break;
				}
			}

			/* Skip to the next object */
			search_start = (void*)((char*)search_start + canarified_obj_size);
		} while (search_start <= addr);

		/* We've searched past the address we were looking for. */
		if (!obj_to_pin) {
			pinning_front = search_start;
			goto next_pin_queue_entry;
		}

		/*
		 * We've found an object to pin.  It might still be a dummy array, but we
		 * can advance the pinning front in any case.
		 */
		pinning_front = (char*)obj_to_pin + obj_to_pin_size;

		/*
		 * If this is a dummy array marking the beginning of a nursery
		 * fragment, we don't pin it.
		 */
		if (sgen_client_object_is_array_fill (obj_to_pin))
			goto next_pin_queue_entry;

		/*
		 * Finally - pin the object!
		 */
		desc = sgen_obj_get_descriptor_safe (obj_to_pin);
		if (do_scan_objects) {
			scan_func (obj_to_pin, desc, queue);
		} else {
			SGEN_LOG (4, "Pinned object %p, vtable %p (%s), count %d\n",
					obj_to_pin, *(void**)obj_to_pin, sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (obj_to_pin)), count);
			binary_protocol_pin (obj_to_pin,
					(gpointer)LOAD_VTABLE (obj_to_pin),
					safe_object_get_size (obj_to_pin));

			pin_object (obj_to_pin);
			GRAY_OBJECT_ENQUEUE_SERIAL (queue, obj_to_pin, desc);
			sgen_pin_stats_register_object (obj_to_pin, GENERATION_NURSERY);
			definitely_pinned [count] = obj_to_pin;
			count++;
		}
		if (concurrent_collection_in_progress)
			sgen_pinning_register_pinned_in_nursery (obj_to_pin);

	next_pin_queue_entry:
		last = addr;
		++start;
	}
	sgen_client_nursery_objects_pinned (definitely_pinned, count);
	stat_pinned_objects += count;
	return count;
}

static void
pin_objects_in_nursery (gboolean do_scan_objects, ScanCopyContext ctx)
{
	size_t reduced_to;

	if (nursery_section->pin_queue_first_entry == nursery_section->pin_queue_last_entry)
		return;

	reduced_to = pin_objects_from_nursery_pin_queue (do_scan_objects, ctx);
	nursery_section->pin_queue_last_entry = nursery_section->pin_queue_first_entry + reduced_to;
}

/*
 * This function is only ever called (via `collector_pin_object()` in `sgen-copy-object.h`)
 * when we can't promote an object because we're out of memory.
 */
void
sgen_pin_object (GCObject *object, SgenGrayQueue *queue)
{
	SGEN_ASSERT (0, sgen_ptr_in_nursery (object), "We're only supposed to use this for pinning nursery objects when out of memory.");

	/*
	 * All pinned objects are assumed to have been staged, so we need to stage as well.
	 * Also, the count of staged objects shows that "late pinning" happened.
	 */
	sgen_pin_stage_ptr (object);

	SGEN_PIN_OBJECT (object);
	binary_protocol_pin (object, (gpointer)LOAD_VTABLE (object), safe_object_get_size (object));

	++objects_pinned;
	sgen_pin_stats_register_object (object, GENERATION_NURSERY);

	GRAY_OBJECT_ENQUEUE_SERIAL (queue, object, sgen_obj_get_descriptor_safe (object));
}

/* Sort the addresses in array in increasing order.
 * Done using a by-the-book heap sort, which has decent and stable performance
 * and is pretty cache efficient.
 */
void
sgen_sort_addresses (void **array, size_t size)
{
	size_t i;
	void *tmp;

	for (i = 1; i < size; ++i) {
		size_t child = i;
		while (child > 0) {
			size_t parent = (child - 1) / 2;

			if (array [parent] >= array [child])
				break;

			tmp = array [parent];
			array [parent] = array [child];
			array [child] = tmp;

			child = parent;
		}
	}

	for (i = size - 1; i > 0; --i) {
		size_t end, root;
		tmp = array [i];
		array [i] = array [0];
		array [0] = tmp;

		end = i - 1;
		root = 0;

		while (root * 2 + 1 <= end) {
			size_t child = root * 2 + 1;

			if (child < end && array [child] < array [child + 1])
				++child;
			if (array [root] >= array [child])
				break;

			tmp = array [root];
			array [root] = array [child];
			array [child] = tmp;

			root = child;
		}
	}
}

/*
 * Scan the memory between start and end and queue values which could be pointers
 * to the area between start_nursery and end_nursery for later consideration.
 * Typically used for thread stacks.
 */
void
sgen_conservatively_pin_objects_from (void **start, void **end, void *start_nursery, void *end_nursery, int pin_type)
{
	int count = 0;

	SGEN_ASSERT (0, ((mword)start & (SIZEOF_VOID_P - 1)) == 0, "Why are we scanning for references in unaligned memory ?");

#if defined(VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE) && !defined(_WIN64)
	VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE (start, (char*)end - (char*)start);
#endif

	while (start < end) {
		/*
		 * *start can point to the middle of an object
		 * note: should we handle pointing at the end of an object?
		 * pinning in C# code disallows pointing at the end of an object
		 * but there is some small chance that an optimizing C compiler
		 * may keep the only reference to an object by pointing
		 * at the end of it. We ignore this small chance for now.
		 * Pointers to the end of an object are indistinguishable
		 * from pointers to the start of the next object in memory
		 * so if we allow that we'd need to pin two objects...
		 * We queue the pointer in an array, the
		 * array will then be sorted and uniqued. This way
		 * we can coalesce several pinning pointers and it should
		 * be faster since we'd do a memory scan with increasing
		 * addresses. Note: we can align the address to the allocation
		 * alignment, so the unique process is more effective.
		 */
		mword addr = (mword)*start;
		addr &= ~(ALLOC_ALIGN - 1);
		if (addr >= (mword)start_nursery && addr < (mword)end_nursery) {
			SGEN_LOG (6, "Pinning address %p from %p", (void*)addr, start);
			sgen_pin_stage_ptr ((void*)addr);
			binary_protocol_pin_stage (start, (void*)addr);
			sgen_pin_stats_register_address ((char*)addr, pin_type);
			count++;
		}
		start++;
	}
	if (count)
		SGEN_LOG (7, "found %d potential pinned heap pointers", count);
}

/*
 * The first thing we do in a collection is to identify pinned objects.
 * This function considers all the areas of memory that need to be
 * conservatively scanned.
 */
static void
pin_from_roots (void *start_nursery, void *end_nursery, ScanCopyContext ctx)
{
	void **start_root;
	RootRecord *root;
	SGEN_LOG (2, "Scanning pinned roots (%d bytes, %d/%d entries)", (int)roots_size, roots_hash [ROOT_TYPE_NORMAL].num_entries, roots_hash [ROOT_TYPE_PINNED].num_entries);
	/* objects pinned from the API are inside these roots */
	SGEN_HASH_TABLE_FOREACH (&roots_hash [ROOT_TYPE_PINNED], void **, start_root, RootRecord *, root) {
		SGEN_LOG (6, "Pinned roots %p-%p", start_root, root->end_root);
		sgen_conservatively_pin_objects_from (start_root, (void**)root->end_root, start_nursery, end_nursery, PIN_TYPE_OTHER);
	} SGEN_HASH_TABLE_FOREACH_END;
	/* now deal with the thread stacks
	 * in the future we should be able to conservatively scan only:
	 * *) the cpu registers
	 * *) the unmanaged stack frames
	 * *) the _last_ managed stack frame
	 * *) pointers slots in managed frames
	 */
	sgen_client_scan_thread_data (start_nursery, end_nursery, FALSE, ctx);
}

static void
single_arg_user_copy_or_mark (GCObject **obj, void *gc_data)
{
	ScanCopyContext *ctx = (ScanCopyContext *)gc_data;
	ctx->ops->copy_or_mark_object (obj, ctx->queue);
}

/*
 * The memory area from start_root to end_root contains pointers to objects.
 * Their position is precisely described by @desc (this means that the pointer
 * can be either NULL or the pointer to the start of an object).
 * This function copies them to to_space and updates them.
 *
 * This function is not thread-safe!
 */
static void
precisely_scan_objects_from (void** start_root, void** end_root, char* n_start, char *n_end, SgenDescriptor desc, ScanCopyContext ctx)
{
	CopyOrMarkObjectFunc copy_func = ctx.ops->copy_or_mark_object;
	ScanPtrFieldFunc scan_field_func = ctx.ops->scan_ptr_field;
	SgenGrayQueue *queue = ctx.queue;

	switch (desc & ROOT_DESC_TYPE_MASK) {
	case ROOT_DESC_BITMAP:
		desc >>= ROOT_DESC_TYPE_SHIFT;
		while (desc) {
			if ((desc & 1) && *start_root) {
				copy_func ((GCObject**)start_root, queue);
				SGEN_LOG (9, "Overwrote root at %p with %p", start_root, *start_root);
			}
			desc >>= 1;
			start_root++;
		}
		return;
	case ROOT_DESC_COMPLEX: {
		gsize *bitmap_data = (gsize *)sgen_get_complex_descriptor_bitmap (desc);
		gsize bwords = (*bitmap_data) - 1;
		void **start_run = start_root;
		bitmap_data++;
		while (bwords-- > 0) {
			gsize bmap = *bitmap_data++;
			void **objptr = start_run;
			while (bmap) {
				if ((bmap & 1) && *objptr) {
					copy_func ((GCObject**)objptr, queue);
					SGEN_LOG (9, "Overwrote root at %p with %p", objptr, *objptr);
				}
				bmap >>= 1;
				++objptr;
			}
			start_run += GC_BITS_PER_WORD;
		}
		break;
	}
	case ROOT_DESC_VECTOR: {
		void **p;

		for (p = start_root; p < end_root; p++) {
			if (*p)
				scan_field_func (NULL, (GCObject**)p, queue);
		}
		break;
	}
	case ROOT_DESC_USER: {
		SgenUserRootMarkFunc marker = sgen_get_user_descriptor_func (desc);
		marker (start_root, single_arg_user_copy_or_mark, &ctx);
		break;
	}
	case ROOT_DESC_RUN_LEN:
		g_assert_not_reached ();
	default:
		g_assert_not_reached ();
	}
}

static void
reset_heap_boundaries (void)
{
	lowest_heap_address = ~(mword)0;
	highest_heap_address = 0;
}

void
sgen_update_heap_boundaries (mword low, mword high)
{
	mword old;

	do {
		old = lowest_heap_address;
		if (low >= old)
			break;
	} while (SGEN_CAS_PTR ((gpointer*)&lowest_heap_address, (gpointer)low, (gpointer)old) != (gpointer)old);

	do {
		old = highest_heap_address;
		if (high <= old)
			break;
	} while (SGEN_CAS_PTR ((gpointer*)&highest_heap_address, (gpointer)high, (gpointer)old) != (gpointer)old);
}

/*
 * Allocate and setup the data structures needed to be able to allocate objects
 * in the nursery. The nursery is stored in nursery_section.
 */
static void
alloc_nursery (void)
{
	GCMemSection *section;
	char *data;
	size_t scan_starts;
	size_t alloc_size;

	if (nursery_section)
		return;
	SGEN_LOG (2, "Allocating nursery size: %zu", (size_t)sgen_nursery_size);
	/* later we will alloc a larger area for the nursery but only activate
	 * what we need. The rest will be used as expansion if we have too many pinned
	 * objects in the existing nursery.
	 */
	/* FIXME: handle OOM */
	section = (GCMemSection *)sgen_alloc_internal (INTERNAL_MEM_SECTION);

	alloc_size = sgen_nursery_size;

	/* If there isn't enough space even for the nursery we should simply abort. */
	g_assert (sgen_memgov_try_alloc_space (alloc_size, SPACE_NURSERY));

	data = (char *)major_collector.alloc_heap (alloc_size, alloc_size, DEFAULT_NURSERY_BITS);
	sgen_update_heap_boundaries ((mword)data, (mword)(data + sgen_nursery_size));
	SGEN_LOG (4, "Expanding nursery size (%p-%p): %lu, total: %lu", data, data + alloc_size, (unsigned long)sgen_nursery_size, (unsigned long)sgen_gc_get_total_heap_allocation ());
	section->data = section->next_data = data;
	section->size = alloc_size;
	section->end_data = data + sgen_nursery_size;
	scan_starts = (alloc_size + SCAN_START_SIZE - 1) / SCAN_START_SIZE;
	section->scan_starts = (char **)sgen_alloc_internal_dynamic (sizeof (char*) * scan_starts, INTERNAL_MEM_SCAN_STARTS, TRUE);
	section->num_scan_start = scan_starts;

	nursery_section = section;

	sgen_nursery_allocator_set_nursery_bounds (data, data + sgen_nursery_size);
}

FILE *
mono_gc_get_logfile (void)
{
	return gc_debug_file;
}

void
mono_gc_params_set (const char* options)
{
	if (gc_params_options)
		g_free (gc_params_options);

	gc_params_options = g_strdup (options);
}

void
mono_gc_debug_set (const char* options)
{
	if (gc_debug_options)
		g_free (gc_debug_options);

	gc_debug_options = g_strdup (options);
}

static void
scan_finalizer_entries (SgenPointerQueue *fin_queue, ScanCopyContext ctx)
{
	CopyOrMarkObjectFunc copy_func = ctx.ops->copy_or_mark_object;
	SgenGrayQueue *queue = ctx.queue;
	size_t i;

	for (i = 0; i < fin_queue->next_slot; ++i) {
		GCObject *obj = (GCObject *)fin_queue->data [i];
		if (!obj)
			continue;
		SGEN_LOG (5, "Scan of fin ready object: %p (%s)\n", obj, sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (obj)));
		copy_func ((GCObject**)&fin_queue->data [i], queue);
	}
}

static const char*
generation_name (int generation)
{
	switch (generation) {
	case GENERATION_NURSERY: return "nursery";
	case GENERATION_OLD: return "old";
	default: g_assert_not_reached ();
	}
}

const char*
sgen_generation_name (int generation)
{
	return generation_name (generation);
}

static void
finish_gray_stack (int generation, ScanCopyContext ctx)
{
	TV_DECLARE (atv);
	TV_DECLARE (btv);
	int done_with_ephemerons, ephemeron_rounds = 0;
	char *start_addr = generation == GENERATION_NURSERY ? sgen_get_nursery_start () : NULL;
	char *end_addr = generation == GENERATION_NURSERY ? sgen_get_nursery_end () : (char*)-1;
	SgenGrayQueue *queue = ctx.queue;

	binary_protocol_finish_gray_stack_start (sgen_timestamp (), generation);
	/*
	 * We copied all the reachable objects. Now it's the time to copy
	 * the objects that were not referenced by the roots, but by the copied objects.
	 * we built a stack of objects pointed to by gray_start: they are
	 * additional roots and we may add more items as we go.
	 * We loop until gray_start == gray_objects which means no more objects have
	 * been added. Note this is iterative: no recursion is involved.
	 * We need to walk the LO list as well in search of marked big objects
	 * (use a flag since this is needed only on major collections). We need to loop
	 * here as well, so keep a counter of marked LO (increasing it in copy_object).
	 * To achieve better cache locality and cache usage, we drain the gray stack
	 * frequently, after each object is copied, and just finish the work here.
	 */
	sgen_drain_gray_stack (ctx);
	TV_GETTIME (atv);
	SGEN_LOG (2, "%s generation done", generation_name (generation));

	/*
	Reset bridge data, we might have lingering data from a previous collection if this is a major
	collection triggered by minor overflow.

	We must reset the gathered bridges since their original block might be evacuated due to major
	fragmentation in the meanwhile and the bridge code should not have to deal with that.
	*/
	if (sgen_client_bridge_need_processing ())
		sgen_client_bridge_reset_data ();

	/*
	 * Mark all strong toggleref objects. This must be done before we walk ephemerons or finalizers
	 * to ensure they see the full set of live objects.
	 */
	sgen_client_mark_togglerefs (start_addr, end_addr, ctx);

	/*
	 * Walk the ephemeron tables marking all values with reachable keys. This must be completely done
	 * before processing finalizable objects and non-tracking weak links to avoid finalizing/clearing
	 * objects that are in fact reachable.
	 */
	done_with_ephemerons = 0;
	do {
		done_with_ephemerons = sgen_client_mark_ephemerons (ctx);
		sgen_drain_gray_stack (ctx);
		++ephemeron_rounds;
	} while (!done_with_ephemerons);

	if (sgen_client_bridge_need_processing ()) {
		/*Make sure the gray stack is empty before we process bridge objects so we get liveness right*/
		sgen_drain_gray_stack (ctx);
		sgen_collect_bridge_objects (generation, ctx);
		if (generation == GENERATION_OLD)
			sgen_collect_bridge_objects (GENERATION_NURSERY, ctx);

		/*
		Do the first bridge step here, as the collector liveness state will become useless after that.

		An important optimization is to only process the possibly dead part of the object graph and skip
		over all live objects as we transitively know everything they point to must be alive too.

		The above invariant is completely wrong if we let the gray queue be drained and mark/copy everything.

		This has the unfortunate side effect of making overflow collections perform the first step twice, but
		given we now have heuristics that perform major GC in anticipation of minor overflows this should not
		be a big deal.
		*/
		sgen_client_bridge_processing_stw_step ();
	}

	/*
	Make sure we drain the gray stack before processing disappearing links and finalizers.
	If we don't make sure it is empty we might wrongly see a live object as dead.
	*/
	sgen_drain_gray_stack (ctx);

	/*
	We must clear weak links that don't track resurrection before processing objects ready for
	finalization so they can be cleared before that.
	*/
	sgen_null_link_in_range (generation, ctx, FALSE);
	if (generation == GENERATION_OLD)
		sgen_null_link_in_range (GENERATION_NURSERY, ctx, FALSE);


	/* walk the finalization queue and move also the objects that need to be
	 * finalized: use the finalized objects as new roots so the objects they depend
	 * on are also not reclaimed. As with the roots above, only objects in the nursery
	 * are marked/copied.
	 */
	sgen_finalize_in_range (generation, ctx);
	if (generation == GENERATION_OLD)
		sgen_finalize_in_range (GENERATION_NURSERY, ctx);
	/* drain the new stack that might have been created */
	SGEN_LOG (6, "Precise scan of gray area post fin");
	sgen_drain_gray_stack (ctx);

	/*
	 * This must be done again after processing finalizable objects since CWL slots are cleared only after the key is finalized.
	 */
	done_with_ephemerons = 0;
	do {
		done_with_ephemerons = sgen_client_mark_ephemerons (ctx);
		sgen_drain_gray_stack (ctx);
		++ephemeron_rounds;
	} while (!done_with_ephemerons);

	sgen_client_clear_unreachable_ephemerons (ctx);

	/*
	 * We clear togglerefs only after all possible chances of revival are done.
	 * This is semantically more in line with what users expect and it allows for
	 * user finalizers to correctly interact with TR objects.
	 */
	sgen_client_clear_togglerefs (start_addr, end_addr, ctx);

	TV_GETTIME (btv);
	SGEN_LOG (2, "Finalize queue handling scan for %s generation: %lld usecs %d ephemeron rounds", generation_name (generation), (long long)TV_ELAPSED (atv, btv), ephemeron_rounds);

	/*
	 * handle disappearing links
	 * Note we do this after checking the finalization queue because if an object
	 * survives (at least long enough to be finalized) we don't clear the link.
	 * This also deals with a possible issue with the monitor reclamation: with the Boehm
	 * GC a finalized object may lose the monitor because it is cleared before the finalizer is
	 * called.
	 */
	g_assert (sgen_gray_object_queue_is_empty (queue));
	for (;;) {
		sgen_null_link_in_range (generation, ctx, TRUE);
		if (generation == GENERATION_OLD)
			sgen_null_link_in_range (GENERATION_NURSERY, ctx, TRUE);
		if (sgen_gray_object_queue_is_empty (queue))
			break;
		sgen_drain_gray_stack (ctx);
	}

	g_assert (sgen_gray_object_queue_is_empty (queue));

	binary_protocol_finish_gray_stack_end (sgen_timestamp (), generation);
}

void
sgen_check_section_scan_starts (GCMemSection *section)
{
	size_t i;
	for (i = 0; i < section->num_scan_start; ++i) {
		if (section->scan_starts [i]) {
			mword size = safe_object_get_size ((GCObject*) section->scan_starts [i]);
			SGEN_ASSERT (0, size >= SGEN_CLIENT_MINIMUM_OBJECT_SIZE && size <= MAX_SMALL_OBJ_SIZE, "Weird object size at scan starts.");
		}
	}
}

static void
check_scan_starts (void)
{
	if (!do_scan_starts_check)
		return;
	sgen_check_section_scan_starts (nursery_section);
	major_collector.check_scan_starts ();
}

static void
scan_from_registered_roots (char *addr_start, char *addr_end, int root_type, ScanCopyContext ctx)
{
	void **start_root;
	RootRecord *root;
	SGEN_HASH_TABLE_FOREACH (&roots_hash [root_type], void **, start_root, RootRecord *, root) {
		SGEN_LOG (6, "Precise root scan %p-%p (desc: %p)", start_root, root->end_root, (void*)root->root_desc);
		precisely_scan_objects_from (start_root, (void**)root->end_root, addr_start, addr_end, root->root_desc, ctx);
	} SGEN_HASH_TABLE_FOREACH_END;
}

static void
init_stats (void)
{
	static gboolean inited = FALSE;

	if (inited)
		return;

	mono_counters_register ("Collection max time", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME | MONO_COUNTER_MONOTONIC, &time_max);

	mono_counters_register ("Minor fragment clear", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_pre_collection_fragment_clear);
	mono_counters_register ("Minor pinning", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_pinning);
	mono_counters_register ("Minor scan remembered set", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_scan_remsets);
	mono_counters_register ("Minor scan pinned", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_scan_pinned);
	mono_counters_register ("Minor scan roots", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_scan_roots);
	mono_counters_register ("Minor fragment creation", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_fragment_creation);

	mono_counters_register ("Major fragment clear", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_pre_collection_fragment_clear);
	mono_counters_register ("Major pinning", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_pinning);
	mono_counters_register ("Major scan pinned", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_scan_pinned);
	mono_counters_register ("Major scan roots", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_scan_roots);
	mono_counters_register ("Major scan mod union", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_scan_mod_union);
	mono_counters_register ("Major finish gray stack", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_finish_gray_stack);
	mono_counters_register ("Major free big objects", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_free_bigobjs);
	mono_counters_register ("Major LOS sweep", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_los_sweep);
	mono_counters_register ("Major sweep", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_sweep);
	mono_counters_register ("Major fragment creation", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_fragment_creation);

	mono_counters_register ("Number of pinned objects", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_pinned_objects);

#ifdef HEAVY_STATISTICS
	mono_counters_register ("WBarrier remember pointer", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_add_to_global_remset);
	mono_counters_register ("WBarrier arrayref copy", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_arrayref_copy);
	mono_counters_register ("WBarrier generic store called", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_generic_store);
	mono_counters_register ("WBarrier generic atomic store called", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_generic_store_atomic);
	mono_counters_register ("WBarrier set root", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_set_root);

	mono_counters_register ("# objects allocated degraded", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_objects_alloced_degraded);
	mono_counters_register ("bytes allocated degraded", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_bytes_alloced_degraded);

	mono_counters_register ("# copy_object() called (nursery)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_copy_object_called_nursery);
	mono_counters_register ("# objects copied (nursery)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_objects_copied_nursery);
	mono_counters_register ("# copy_object() called (major)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_copy_object_called_major);
	mono_counters_register ("# objects copied (major)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_objects_copied_major);

	mono_counters_register ("# scan_object() called (nursery)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_scan_object_called_nursery);
	mono_counters_register ("# scan_object() called (major)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_scan_object_called_major);

	mono_counters_register ("Slots allocated in vain", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_slots_allocated_in_vain);

	mono_counters_register ("# nursery copy_object() failed from space", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_from_space);
	mono_counters_register ("# nursery copy_object() failed forwarded", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_forwarded);
	mono_counters_register ("# nursery copy_object() failed pinned", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_pinned);
	mono_counters_register ("# nursery copy_object() failed to space", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_to_space);

	sgen_nursery_allocator_init_heavy_stats ();
#endif

	inited = TRUE;
}

static void
reset_pinned_from_failed_allocation (void)
{
	bytes_pinned_from_failed_allocation = 0;
}

void
sgen_set_pinned_from_failed_allocation (mword objsize)
{
	bytes_pinned_from_failed_allocation += objsize;
}

gboolean
sgen_collection_is_concurrent (void)
{
	switch (current_collection_generation) {
	case GENERATION_NURSERY:
		return FALSE;
	case GENERATION_OLD:
		return concurrent_collection_in_progress;
	default:
		g_error ("Invalid current generation %d", current_collection_generation);
	}
	return FALSE;
}

gboolean
sgen_concurrent_collection_in_progress (void)
{
	return concurrent_collection_in_progress;
}

typedef struct {
	SgenThreadPoolJob job;
	SgenObjectOperations *ops;
	SgenGrayQueue *gc_thread_gray_queue;
} ScanJob;

typedef struct {
	ScanJob scan_job;
	int job_index;
} ParallelScanJob;

static ScanCopyContext
scan_copy_context_for_scan_job (void *worker_data_untyped, ScanJob *job)
{
	WorkerData *worker_data = (WorkerData *)worker_data_untyped;

	return CONTEXT_FROM_OBJECT_OPERATIONS (job->ops, sgen_workers_get_job_gray_queue (worker_data, job->gc_thread_gray_queue));
}

static void
job_remembered_set_scan (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	remset.scan_remsets (scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job));
}

typedef struct {
	ScanJob scan_job;
	char *heap_start;
	char *heap_end;
	int root_type;
} ScanFromRegisteredRootsJob;

static void
job_scan_from_registered_roots (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanFromRegisteredRootsJob *job_data = (ScanFromRegisteredRootsJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, &job_data->scan_job);

	scan_from_registered_roots (job_data->heap_start, job_data->heap_end, job_data->root_type, ctx);
}

typedef struct {
	ScanJob scan_job;
	char *heap_start;
	char *heap_end;
} ScanThreadDataJob;

static void
job_scan_thread_data (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanThreadDataJob *job_data = (ScanThreadDataJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, &job_data->scan_job);

	sgen_client_scan_thread_data (job_data->heap_start, job_data->heap_end, TRUE, ctx);
}

typedef struct {
	ScanJob scan_job;
	SgenPointerQueue *queue;
} ScanFinalizerEntriesJob;

static void
job_scan_finalizer_entries (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanFinalizerEntriesJob *job_data = (ScanFinalizerEntriesJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, &job_data->scan_job);

	scan_finalizer_entries (job_data->queue, ctx);
}

static void
job_scan_major_mod_union_card_table (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);
	major_collector.scan_card_table (CARDTABLE_SCAN_MOD_UNION, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_scan_los_mod_union_card_table (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);
	sgen_los_scan_card_table (CARDTABLE_SCAN_MOD_UNION, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_major_mod_union_preclean (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);

	major_collector.scan_card_table (CARDTABLE_SCAN_MOD_UNION_PRECLEAN, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_los_mod_union_preclean (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);

	sgen_los_scan_card_table (CARDTABLE_SCAN_MOD_UNION_PRECLEAN, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_scan_last_pinned (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanJob *job_data = (ScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, job_data);

	g_assert (concurrent_collection_in_progress);

	sgen_scan_pin_queue_objects (ctx);
}

static void
workers_finish_callback (void)
{
	ParallelScanJob *psj;
	ScanJob *sj;
	int split_count = sgen_workers_get_job_split_count ();
	int i;
	/* Mod union preclean jobs */
	for (i = 0; i < split_count; i++) {
		psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("preclean major mod union cardtable", job_major_mod_union_preclean, sizeof (ParallelScanJob));
		psj->scan_job.ops = sgen_workers_get_idle_func_object_ops ();
		psj->scan_job.gc_thread_gray_queue = NULL;
		psj->job_index = i;
		sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);
	}

	for (i = 0; i < split_count; i++) {
		psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("preclean los mod union cardtable", job_los_mod_union_preclean, sizeof (ParallelScanJob));
		psj->scan_job.ops = sgen_workers_get_idle_func_object_ops ();
		psj->scan_job.gc_thread_gray_queue = NULL;
		psj->job_index = i;
		sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);
	}

	sj = (ScanJob*)sgen_thread_pool_job_alloc ("scan last pinned", job_scan_last_pinned, sizeof (ScanJob));
	sj->ops = sgen_workers_get_idle_func_object_ops ();
	sj->gc_thread_gray_queue = NULL;
	sgen_workers_enqueue_job (&sj->job, TRUE);
}

static void
init_gray_queue (SgenGrayQueue *gc_thread_gray_queue, gboolean use_workers)
{
	if (use_workers)
		sgen_workers_init_distribute_gray_queue ();
	sgen_gray_object_queue_init (gc_thread_gray_queue, NULL, TRUE);
}

static void
enqueue_scan_from_roots_jobs (SgenGrayQueue *gc_thread_gray_queue, char *heap_start, char *heap_end, SgenObjectOperations *ops, gboolean enqueue)
{
	ScanFromRegisteredRootsJob *scrrj;
	ScanThreadDataJob *stdj;
	ScanFinalizerEntriesJob *sfej;

	/* registered roots, this includes static fields */

	scrrj = (ScanFromRegisteredRootsJob*)sgen_thread_pool_job_alloc ("scan from registered roots normal", job_scan_from_registered_roots, sizeof (ScanFromRegisteredRootsJob));
	scrrj->scan_job.ops = ops;
	scrrj->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
	scrrj->heap_start = heap_start;
	scrrj->heap_end = heap_end;
	scrrj->root_type = ROOT_TYPE_NORMAL;
	sgen_workers_enqueue_job (&scrrj->scan_job.job, enqueue);

	if (current_collection_generation == GENERATION_OLD) {
		/* During minors we scan the cardtable for these roots instead */
		scrrj = (ScanFromRegisteredRootsJob*)sgen_thread_pool_job_alloc ("scan from registered roots wbarrier", job_scan_from_registered_roots, sizeof (ScanFromRegisteredRootsJob));
		scrrj->scan_job.ops = ops;
		scrrj->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
		scrrj->heap_start = heap_start;
		scrrj->heap_end = heap_end;
		scrrj->root_type = ROOT_TYPE_WBARRIER;
		sgen_workers_enqueue_job (&scrrj->scan_job.job, enqueue);
	}

	/* Threads */

	stdj = (ScanThreadDataJob*)sgen_thread_pool_job_alloc ("scan thread data", job_scan_thread_data, sizeof (ScanThreadDataJob));
	stdj->scan_job.ops = ops;
	stdj->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
	stdj->heap_start = heap_start;
	stdj->heap_end = heap_end;
	sgen_workers_enqueue_job (&stdj->scan_job.job, enqueue);

	/* Scan the list of objects ready for finalization. */

	sfej = (ScanFinalizerEntriesJob*)sgen_thread_pool_job_alloc ("scan finalizer entries", job_scan_finalizer_entries, sizeof (ScanFinalizerEntriesJob));
	sfej->scan_job.ops = ops;
	sfej->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
	sfej->queue = &fin_ready_queue;
	sgen_workers_enqueue_job (&sfej->scan_job.job, enqueue);

	sfej = (ScanFinalizerEntriesJob*)sgen_thread_pool_job_alloc ("scan critical finalizer entries", job_scan_finalizer_entries, sizeof (ScanFinalizerEntriesJob));
	sfej->scan_job.ops = ops;
	sfej->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
	sfej->queue = &critical_fin_queue;
	sgen_workers_enqueue_job (&sfej->scan_job.job, enqueue);
}

/*
 * Perform a nursery collection.
 *
 * Return whether any objects were late-pinned due to being out of memory.
 */
static gboolean
collect_nursery (const char *reason, gboolean is_overflow, SgenGrayQueue *unpin_queue)
{
	gboolean needs_major;
	size_t max_garbage_amount;
	char *nursery_next;
	mword fragment_total;
	ScanJob *sj;
	SgenGrayQueue gc_thread_gray_queue;
	SgenObjectOperations *object_ops;
	ScanCopyContext ctx;
	TV_DECLARE (atv);
	TV_DECLARE (btv);
	SGEN_TV_DECLARE (last_minor_collection_start_tv);
	SGEN_TV_DECLARE (last_minor_collection_end_tv);

	if (disable_minor_collections)
		return TRUE;

	TV_GETTIME (last_minor_collection_start_tv);
	atv = last_minor_collection_start_tv;

	binary_protocol_collection_begin (gc_stats.minor_gc_count, GENERATION_NURSERY);

	if (sgen_concurrent_collection_in_progress ())
		object_ops = &sgen_minor_collector.serial_ops_with_concurrent_major;
	else
		object_ops = &sgen_minor_collector.serial_ops;

	if (do_verify_nursery || do_dump_nursery_content)
		sgen_debug_verify_nursery (do_dump_nursery_content);

	current_collection_generation = GENERATION_NURSERY;

	SGEN_ASSERT (0, !sgen_collection_is_concurrent (), "Why is the nursery collection concurrent?");

	reset_pinned_from_failed_allocation ();

	check_scan_starts ();

	sgen_nursery_alloc_prepare_for_minor ();

	degraded_mode = 0;
	objects_pinned = 0;
	nursery_next = sgen_nursery_alloc_get_upper_alloc_bound ();
	/* FIXME: optimize later to use the higher address where an object can be present */
	nursery_next = MAX (nursery_next, sgen_get_nursery_end ());

	SGEN_LOG (1, "Start nursery collection %d %p-%p, size: %d", gc_stats.minor_gc_count, sgen_get_nursery_start (), nursery_next, (int)(nursery_next - sgen_get_nursery_start ()));
	max_garbage_amount = nursery_next - sgen_get_nursery_start ();
	g_assert (nursery_section->size >= max_garbage_amount);

	/* world must be stopped already */
	TV_GETTIME (btv);
	time_minor_pre_collection_fragment_clear += TV_ELAPSED (atv, btv);

	sgen_client_pre_collection_checks ();

	nursery_section->next_data = nursery_next;

	major_collector.start_nursery_collection ();

	sgen_memgov_minor_collection_start ();

	init_gray_queue (&gc_thread_gray_queue, FALSE);
	ctx = CONTEXT_FROM_OBJECT_OPERATIONS (object_ops, &gc_thread_gray_queue);

	gc_stats.minor_gc_count ++;

	sgen_process_fin_stage_entries ();

	/* pin from pinned handles */
	sgen_init_pinning ();
	sgen_client_binary_protocol_mark_start (GENERATION_NURSERY);
	pin_from_roots (sgen_get_nursery_start (), nursery_next, ctx);
	/* pin cemented objects */
	sgen_pin_cemented_objects ();
	/* identify pinned objects */
	sgen_optimize_pin_queue ();
	sgen_pinning_setup_section (nursery_section);

	pin_objects_in_nursery (FALSE, ctx);
	sgen_pinning_trim_queue_to_section (nursery_section);

	if (remset_consistency_checks)
		sgen_check_remset_consistency ();

	if (whole_heap_check_before_collection) {
		sgen_clear_nursery_fragments ();
		sgen_check_whole_heap (FALSE);
	}

	TV_GETTIME (atv);
	time_minor_pinning += TV_ELAPSED (btv, atv);
	SGEN_LOG (2, "Finding pinned pointers: %zd in %lld usecs", sgen_get_pinned_count (), (long long)TV_ELAPSED (btv, atv));
	SGEN_LOG (4, "Start scan with %zd pinned objects", sgen_get_pinned_count ());

	sj = (ScanJob*)sgen_thread_pool_job_alloc ("scan remset", job_remembered_set_scan, sizeof (ScanJob));
	sj->ops = object_ops;
	sj->gc_thread_gray_queue = &gc_thread_gray_queue;
	sgen_workers_enqueue_job (&sj->job, FALSE);

	/* we don't have a complete write barrier yet, so we scan all the old generation sections */
	TV_GETTIME (btv);
	time_minor_scan_remsets += TV_ELAPSED (atv, btv);
	SGEN_LOG (2, "Old generation scan: %lld usecs", (long long)TV_ELAPSED (atv, btv));

	sgen_pin_stats_report ();

	/* FIXME: Why do we do this at this specific, seemingly random, point? */
	sgen_client_collecting_minor (&fin_ready_queue, &critical_fin_queue);

	TV_GETTIME (atv);
	time_minor_scan_pinned += TV_ELAPSED (btv, atv);

	enqueue_scan_from_roots_jobs (&gc_thread_gray_queue, sgen_get_nursery_start (), nursery_next, object_ops, FALSE);

	TV_GETTIME (btv);
	time_minor_scan_roots += TV_ELAPSED (atv, btv);

	finish_gray_stack (GENERATION_NURSERY, ctx);

	TV_GETTIME (atv);
	time_minor_finish_gray_stack += TV_ELAPSED (btv, atv);
	sgen_client_binary_protocol_mark_end (GENERATION_NURSERY);

	if (objects_pinned) {
		sgen_optimize_pin_queue ();
		sgen_pinning_setup_section (nursery_section);
	}

	/*
	 * This is the latest point at which we can do this check, because
	 * sgen_build_nursery_fragments() unpins nursery objects again.
	 */
	if (remset_consistency_checks)
		sgen_check_remset_consistency ();

	/* walk the pin_queue, build up the fragment list of free memory, unmark
	 * pinned objects as we go, memzero() the empty fragments so they are ready for the
	 * next allocations.
	 */
	sgen_client_binary_protocol_reclaim_start (GENERATION_NURSERY);
	fragment_total = sgen_build_nursery_fragments (nursery_section, unpin_queue);
	if (!fragment_total)
		degraded_mode = 1;

	/* Clear TLABs for all threads */
	sgen_clear_tlabs ();

	sgen_client_binary_protocol_reclaim_end (GENERATION_NURSERY);
	TV_GETTIME (btv);
	time_minor_fragment_creation += TV_ELAPSED (atv, btv);
	SGEN_LOG (2, "Fragment creation: %lld usecs, %lu bytes available", (long long)TV_ELAPSED (atv, btv), (unsigned long)fragment_total);

	if (remset_consistency_checks)
		sgen_check_major_refs ();

	major_collector.finish_nursery_collection ();

	TV_GETTIME (last_minor_collection_end_tv);
	gc_stats.minor_gc_time += TV_ELAPSED (last_minor_collection_start_tv, last_minor_collection_end_tv);

	sgen_debug_dump_heap ("minor", gc_stats.minor_gc_count - 1, NULL);

	/* prepare the pin queue for the next collection */
	sgen_finish_pinning ();
	if (sgen_have_pending_finalizers ()) {
		SGEN_LOG (4, "Finalizer-thread wakeup");
		sgen_client_finalize_notify ();
	}
	sgen_pin_stats_reset ();
	/* clear cemented hash */
	sgen_cement_clear_below_threshold ();

	sgen_gray_object_queue_dispose (&gc_thread_gray_queue);

	remset.finish_minor_collection ();

	check_scan_starts ();

	binary_protocol_flush_buffers (FALSE);

	sgen_memgov_minor_collection_end (reason, is_overflow);

	/* objects are late pinned because of lack of memory, so a major is a good call */
	needs_major = objects_pinned > 0;
	current_collection_generation = -1;
	objects_pinned = 0;

	binary_protocol_collection_end (gc_stats.minor_gc_count - 1, GENERATION_NURSERY, 0, 0);

	if (check_nursery_objects_pinned && !sgen_minor_collector.is_split)
		sgen_check_nursery_objects_pinned (unpin_queue != NULL);

	return needs_major;
}

typedef enum {
	COPY_OR_MARK_FROM_ROOTS_SERIAL,
	COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT,
	COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT
} CopyOrMarkFromRootsMode;

static void
major_copy_or_mark_from_roots (SgenGrayQueue *gc_thread_gray_queue, size_t *old_next_pin_slot, CopyOrMarkFromRootsMode mode, SgenObjectOperations *object_ops_nopar, SgenObjectOperations *object_ops_par)
{
	LOSObject *bigobj;
	TV_DECLARE (atv);
	TV_DECLARE (btv);
	/* FIXME: only use these values for the precise scan
	 * note that to_space pointers should be excluded anyway...
	 */
	char *heap_start = NULL;
	char *heap_end = (char*)-1;
	ScanCopyContext ctx = CONTEXT_FROM_OBJECT_OPERATIONS (object_ops_nopar, gc_thread_gray_queue);
	gboolean concurrent = mode != COPY_OR_MARK_FROM_ROOTS_SERIAL;

	SGEN_ASSERT (0, !!concurrent == !!concurrent_collection_in_progress, "We've been called with the wrong mode.");

	if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
		/* This cleans up unused fragments */
		sgen_nursery_allocator_prepare_for_pinning ();

		if (do_concurrent_checks)
			sgen_debug_check_nursery_is_clean ();
	} else {
		/* The concurrent collector doesn't touch the nursery. */
		sgen_nursery_alloc_prepare_for_major ();
	}

	TV_GETTIME (atv);

	/* Pinning depends on this */
	sgen_clear_nursery_fragments ();

	if (whole_heap_check_before_collection)
		sgen_check_whole_heap (TRUE);

	TV_GETTIME (btv);
	time_major_pre_collection_fragment_clear += TV_ELAPSED (atv, btv);

	if (!sgen_collection_is_concurrent ())
		nursery_section->next_data = sgen_get_nursery_end ();
	/* we should also coalesce scanning from sections close to each other
	 * and deal with pointers outside of the sections later.
	 */

	objects_pinned = 0;

	sgen_client_pre_collection_checks ();

	if (mode != COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
		/* Remsets are not useful for a major collection */
		remset.clear_cards ();
	}

	sgen_process_fin_stage_entries ();

	TV_GETTIME (atv);
	sgen_init_pinning ();
	SGEN_LOG (6, "Collecting pinned addresses");
	pin_from_roots ((void*)lowest_heap_address, (void*)highest_heap_address, ctx);
	if (mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT) {
		/* Pin cemented objects that were forced */
		sgen_pin_cemented_objects ();
	}
	sgen_optimize_pin_queue ();
	if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
		/*
		 * Cemented objects that are in the pinned list will be marked. When
		 * marking concurrently we won't mark mod-union cards for these objects.
		 * Instead they will remain cemented until the next major collection,
		 * when we will recheck if they are still pinned in the roots.
		 */
		sgen_cement_force_pinned ();
	}

	sgen_client_collecting_major_1 ();

	/*
	 * pin_queue now contains all candidate pointers, sorted and
	 * uniqued. We must do two passes now to figure out which
	 * objects are pinned.
	 *
	 * The first is to find within the pin_queue the area for each
	 * section. This requires that the pin_queue be sorted. We
	 * also process the LOS objects and pinned chunks here.
	 *
	 * The second, destructive, pass is to reduce the section
	 * areas to pointers to the actually pinned objects.
	 */
	SGEN_LOG (6, "Pinning from sections");
	/* first pass for the sections */
	sgen_find_section_pin_queue_start_end (nursery_section);
	/* identify possible pointers to the inside of large objects */
	SGEN_LOG (6, "Pinning from large objects");
	for (bigobj = los_object_list; bigobj; bigobj = bigobj->next) {
		size_t dummy;
		if (sgen_find_optimized_pin_queue_area ((char*)bigobj->data, (char*)bigobj->data + sgen_los_object_size (bigobj), &dummy, &dummy)) {
			binary_protocol_pin (bigobj->data, (gpointer)LOAD_VTABLE (bigobj->data), safe_object_get_size (bigobj->data));

			if (sgen_los_object_is_pinned (bigobj->data)) {
				SGEN_ASSERT (0, mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT, "LOS objects can only be pinned here after concurrent marking.");
				continue;
			}
			sgen_los_pin_object (bigobj->data);
			if (SGEN_OBJECT_HAS_REFERENCES (bigobj->data))
				GRAY_OBJECT_ENQUEUE_SERIAL (gc_thread_gray_queue, bigobj->data, sgen_obj_get_descriptor ((GCObject*)bigobj->data));
			sgen_pin_stats_register_object (bigobj->data, GENERATION_OLD);
			SGEN_LOG (6, "Marked large object %p (%s) size: %lu from roots", bigobj->data,
					sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (bigobj->data)),
					(unsigned long)sgen_los_object_size (bigobj));

			sgen_client_pinned_los_object (bigobj->data);
		}
	}

	pin_objects_in_nursery (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT, ctx);
	if (check_nursery_objects_pinned && !sgen_minor_collector.is_split)
		sgen_check_nursery_objects_pinned (mode != COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT);

	major_collector.pin_objects (gc_thread_gray_queue);
	if (old_next_pin_slot)
		*old_next_pin_slot = sgen_get_pinned_count ();

	TV_GETTIME (btv);
	time_major_pinning += TV_ELAPSED (atv, btv);
	SGEN_LOG (2, "Finding pinned pointers: %zd in %lld usecs", sgen_get_pinned_count (), (long long)TV_ELAPSED (atv, btv));
	SGEN_LOG (4, "Start scan with %zd pinned objects", sgen_get_pinned_count ());

	major_collector.init_to_space ();

	SGEN_ASSERT (0, sgen_workers_all_done (), "Why are the workers not done when we start or finish a major collection?");
	if (mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT) {
		sgen_workers_set_num_active_workers (0);
		if (sgen_workers_have_idle_work ()) {
			/*
			 * We force the finish of the worker with the new object ops context
			 * which can also do copying. We need to have finished pinning.
			 */
			sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, NULL);

			sgen_workers_join ();
		}
	}

#ifdef SGEN_DEBUG_INTERNAL_ALLOC
	main_gc_thread = mono_native_thread_self ();
#endif

	sgen_client_collecting_major_2 ();

	TV_GETTIME (atv);
	time_major_scan_pinned += TV_ELAPSED (btv, atv);

	sgen_client_collecting_major_3 (&fin_ready_queue, &critical_fin_queue);

	enqueue_scan_from_roots_jobs (gc_thread_gray_queue, heap_start, heap_end, object_ops_nopar, FALSE);

	TV_GETTIME (btv);
	time_major_scan_roots += TV_ELAPSED (atv, btv);

	/*
	 * We start the concurrent worker after pinning and after we scanned the roots
	 * in order to make sure that the worker does not finish before handling all
	 * the roots.
	 */
	if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
		sgen_workers_set_num_active_workers (1);
		gray_queue_redirect (gc_thread_gray_queue);
		if (precleaning_enabled) {
			sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, workers_finish_callback);
		} else {
			sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, NULL);
		}
	}

	if (mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT) {
		int i, split_count = sgen_workers_get_job_split_count ();

		gray_queue_redirect (gc_thread_gray_queue);

		/* Mod union card table */
		for (i = 0; i < split_count; i++) {
			ParallelScanJob *psj;

			psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("scan mod union cardtable", job_scan_major_mod_union_card_table, sizeof (ParallelScanJob));
			psj->scan_job.ops = object_ops_par ? object_ops_par : object_ops_nopar;
			psj->scan_job.gc_thread_gray_queue = NULL;
			psj->job_index = i;
			sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);

			psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("scan LOS mod union cardtable", job_scan_los_mod_union_card_table, sizeof (ParallelScanJob));
			psj->scan_job.ops = object_ops_par ? object_ops_par : object_ops_nopar;
			psj->scan_job.gc_thread_gray_queue = NULL;
			psj->job_index = i;
			sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);
		}

		/*
		 * If we enqueue a job while workers are running we need to sgen_workers_ensure_awake
		 * in order to make sure that we are running the idle func and draining all worker
		 * gray queues. Starting the workers implies this, so we start them afterwards in
		 * order to avoid doing that operation twice. The workers will drain the main gray
		 * stack that contained roots and pinned objects and also scan the mod union card
		 * table.
		 */
		sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, NULL);
		sgen_workers_join ();
	}

	sgen_pin_stats_report ();

	if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
		sgen_finish_pinning ();

		sgen_pin_stats_reset ();

		if (do_concurrent_checks)
			sgen_debug_check_nursery_is_clean ();
	}
}

static void
major_start_collection (SgenGrayQueue *gc_thread_gray_queue, const char *reason, gboolean concurrent, size_t *old_next_pin_slot)
{
	SgenObjectOperations *object_ops_nopar, *object_ops_par = NULL;

	binary_protocol_collection_begin (gc_stats.major_gc_count, GENERATION_OLD);

	current_collection_generation = GENERATION_OLD;

	sgen_workers_assert_gray_queue_is_empty ();

	if (!concurrent)
		sgen_cement_reset ();

	if (concurrent) {
		g_assert (major_collector.is_concurrent);
		concurrent_collection_in_progress = TRUE;

		object_ops_nopar = &major_collector.major_ops_concurrent_start;
		if (major_collector.is_parallel)
			object_ops_par = &major_collector.major_ops_conc_par_start;

	} else {
		object_ops_nopar = &major_collector.major_ops_serial;
	}

	reset_pinned_from_failed_allocation ();

	sgen_memgov_major_collection_start (concurrent, reason);

	//count_ref_nonref_objs ();
	//consistency_check ();

	check_scan_starts ();

	degraded_mode = 0;
	SGEN_LOG (1, "Start major collection %d", gc_stats.major_gc_count);
	gc_stats.major_gc_count ++;

	if (major_collector.start_major_collection)
		major_collector.start_major_collection ();

	major_copy_or_mark_from_roots (gc_thread_gray_queue, old_next_pin_slot, concurrent ? COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT : COPY_OR_MARK_FROM_ROOTS_SERIAL, object_ops_nopar, object_ops_par);
}

static void
major_finish_collection (SgenGrayQueue *gc_thread_gray_queue, const char *reason, gboolean is_overflow, size_t old_next_pin_slot, gboolean forced)
{
	ScannedObjectCounts counts;
	SgenObjectOperations *object_ops_nopar;
	mword fragment_total;
	TV_DECLARE (atv);
	TV_DECLARE (btv);

	TV_GETTIME (btv);

	if (concurrent_collection_in_progress) {
		SgenObjectOperations *object_ops_par = NULL;

		object_ops_nopar = &major_collector.major_ops_concurrent_finish;
		if (major_collector.is_parallel)
			object_ops_par = &major_collector.major_ops_conc_par_finish;

		major_copy_or_mark_from_roots (gc_thread_gray_queue, NULL, COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT, object_ops_nopar, object_ops_par);

#ifdef SGEN_DEBUG_INTERNAL_ALLOC
		main_gc_thread = NULL;
#endif
	} else {
		object_ops_nopar = &major_collector.major_ops_serial;
	}

	sgen_workers_assert_gray_queue_is_empty ();

	finish_gray_stack (GENERATION_OLD, CONTEXT_FROM_OBJECT_OPERATIONS (object_ops_nopar, gc_thread_gray_queue));
	TV_GETTIME (atv);
	time_major_finish_gray_stack += TV_ELAPSED (btv, atv);

	SGEN_ASSERT (0, sgen_workers_all_done (), "Can't have workers working after joining");

	if (objects_pinned) {
		g_assert (!concurrent_collection_in_progress);

		/*
		 * This is slow, but we just OOM'd.
		 *
		 * See comment at `sgen_pin_queue_clear_discarded_entries` for how the pin
		 * queue is laid out at this point.
		 */
		sgen_pin_queue_clear_discarded_entries (nursery_section, old_next_pin_slot);
		/*
		 * We need to reestablish all pinned nursery objects in the pin queue
		 * because they're needed for fragment creation. Unpinning happens by
		 * walking the whole queue, so it's not necessary to reestablish where major
		 * heap block pins are - all we care is that they're still in there
		 * somewhere.
		 */
		sgen_optimize_pin_queue ();
		sgen_find_section_pin_queue_start_end (nursery_section);
		objects_pinned = 0;
	}

	reset_heap_boundaries ();
	sgen_update_heap_boundaries ((mword)sgen_get_nursery_start (), (mword)sgen_get_nursery_end ());

	/* walk the pin_queue, build up the fragment list of free memory, unmark
	 * pinned objects as we go, memzero() the empty fragments so they are ready for the
	 * next allocations.
	 */
	fragment_total = sgen_build_nursery_fragments (nursery_section, NULL);
	if (!fragment_total)
		degraded_mode = 1;
	SGEN_LOG (4, "Free space in nursery after major %ld", (long)fragment_total);

	if (do_concurrent_checks && concurrent_collection_in_progress)
		sgen_debug_check_nursery_is_clean ();

	/* prepare the pin queue for the next collection */
	sgen_finish_pinning ();

	/* Clear TLABs for all threads */
	sgen_clear_tlabs ();

	sgen_pin_stats_reset ();

	sgen_cement_clear_below_threshold ();

	if (check_mark_bits_after_major_collection)
		sgen_check_heap_marked (concurrent_collection_in_progress);

	TV_GETTIME (btv);
	time_major_fragment_creation += TV_ELAPSED (atv, btv);

	binary_protocol_sweep_begin (GENERATION_OLD, !major_collector.sweeps_lazily);
	sgen_memgov_major_pre_sweep ();

	TV_GETTIME (atv);
	time_major_free_bigobjs += TV_ELAPSED (btv, atv);

	sgen_los_sweep ();

	TV_GETTIME (btv);
	time_major_los_sweep += TV_ELAPSED (atv, btv);

	major_collector.sweep ();

	binary_protocol_sweep_end (GENERATION_OLD, !major_collector.sweeps_lazily);

	TV_GETTIME (atv);
	time_major_sweep += TV_ELAPSED (btv, atv);

	sgen_debug_dump_heap ("major", gc_stats.major_gc_count - 1, reason);

	if (sgen_have_pending_finalizers ()) {
		SGEN_LOG (4, "Finalizer-thread wakeup");
		sgen_client_finalize_notify ();
	}

	sgen_memgov_major_collection_end (forced, concurrent_collection_in_progress, reason, is_overflow);
	current_collection_generation = -1;

	memset (&counts, 0, sizeof (ScannedObjectCounts));
	major_collector.finish_major_collection (&counts);

	sgen_workers_assert_gray_queue_is_empty ();

	SGEN_ASSERT (0, sgen_workers_all_done (), "Can't have workers working after major collection has finished");
	if (concurrent_collection_in_progress)
		concurrent_collection_in_progress = FALSE;

	check_scan_starts ();

	binary_protocol_flush_buffers (FALSE);

	//consistency_check ();

	binary_protocol_collection_end (gc_stats.major_gc_count - 1, GENERATION_OLD, counts.num_scanned_objects, counts.num_unique_scanned_objects);
}

static gboolean
major_do_collection (const char *reason, gboolean is_overflow, gboolean forced)
{
	TV_DECLARE (time_start);
	TV_DECLARE (time_end);
	size_t old_next_pin_slot;
	SgenGrayQueue gc_thread_gray_queue;

	if (disable_major_collections)
		return FALSE;

	if (major_collector.get_and_reset_num_major_objects_marked) {
		long long num_marked = major_collector.get_and_reset_num_major_objects_marked ();
		g_assert (!num_marked);
	}

	/* world must be stopped already */
	TV_GETTIME (time_start);

	init_gray_queue (&gc_thread_gray_queue, FALSE);
	major_start_collection (&gc_thread_gray_queue, reason, FALSE, &old_next_pin_slot);
	major_finish_collection (&gc_thread_gray_queue, reason, is_overflow, old_next_pin_slot, forced);
	sgen_gray_object_queue_dispose (&gc_thread_gray_queue);

	TV_GETTIME (time_end);
	gc_stats.major_gc_time += TV_ELAPSED (time_start, time_end);

	/* FIXME: also report this to the user, preferably in gc-end. */
	if (major_collector.get_and_reset_num_major_objects_marked)
		major_collector.get_and_reset_num_major_objects_marked ();

	return bytes_pinned_from_failed_allocation > 0;
}

static void
major_start_concurrent_collection (const char *reason)
{
	TV_DECLARE (time_start);
	TV_DECLARE (time_end);
	long long num_objects_marked;
	SgenGrayQueue gc_thread_gray_queue;

	if (disable_major_collections)
		return;

	TV_GETTIME (time_start);
	SGEN_TV_GETTIME (time_major_conc_collection_start);

	num_objects_marked = major_collector.get_and_reset_num_major_objects_marked ();
	g_assert (num_objects_marked == 0);

	binary_protocol_concurrent_start ();

	init_gray_queue (&gc_thread_gray_queue, TRUE);
	// FIXME: store reason and pass it when finishing
	major_start_collection (&gc_thread_gray_queue, reason, TRUE, NULL);
	sgen_gray_object_queue_dispose (&gc_thread_gray_queue);

	num_objects_marked = major_collector.get_and_reset_num_major_objects_marked ();

	TV_GETTIME (time_end);
	gc_stats.major_gc_time += TV_ELAPSED (time_start, time_end);

	current_collection_generation = -1;
}

/*
 * Returns whether the major collection has finished.
 */
static gboolean
major_should_finish_concurrent_collection (void)
{
	return sgen_workers_all_done ();
}

static void
major_update_concurrent_collection (void)
{
	TV_DECLARE (total_start);
	TV_DECLARE (total_end);

	TV_GETTIME (total_start);

	binary_protocol_concurrent_update ();

	major_collector.update_cardtable_mod_union ();
	sgen_los_update_cardtable_mod_union ();

	TV_GETTIME (total_end);
	gc_stats.major_gc_time += TV_ELAPSED (total_start, total_end);
}

static void
major_finish_concurrent_collection (gboolean forced)
{
	SgenGrayQueue gc_thread_gray_queue;
	TV_DECLARE (total_start);
	TV_DECLARE (total_end);

	TV_GETTIME (total_start);

	binary_protocol_concurrent_finish ();

	/*
	 * We need to stop all workers since we're updating the cardtable below.
	 * The workers will be resumed with a finishing pause context to avoid
	 * additional cardtable and object scanning.
	 */
	sgen_workers_stop_all_workers ();

	SGEN_TV_GETTIME (time_major_conc_collection_end);
	gc_stats.major_gc_time_concurrent += SGEN_TV_ELAPSED (time_major_conc_collection_start, time_major_conc_collection_end);

	major_collector.update_cardtable_mod_union ();
	sgen_los_update_cardtable_mod_union ();

	if (mod_union_consistency_check)
		sgen_check_mod_union_consistency ();

	current_collection_generation = GENERATION_OLD;
	sgen_cement_reset ();
	init_gray_queue (&gc_thread_gray_queue, FALSE);
	major_finish_collection (&gc_thread_gray_queue, "finishing", FALSE, -1, forced);
	sgen_gray_object_queue_dispose (&gc_thread_gray_queue);

	TV_GETTIME (total_end);
	gc_stats.major_gc_time += TV_ELAPSED (total_start, total_end);

	current_collection_generation = -1;
}

/*
 * Ensure an allocation request for @size will succeed by freeing enough memory.
 *
 * LOCKING: The GC lock MUST be held.
 */
void
sgen_ensure_free_space (size_t size, int generation)
{
	int generation_to_collect = -1;
	const char *reason = NULL;

	if (generation == GENERATION_OLD) {
		if (sgen_need_major_collection (size)) {
			reason = "LOS overflow";
			generation_to_collect = GENERATION_OLD;
		}
	} else {
		if (degraded_mode) {
			if (sgen_need_major_collection (size)) {
				reason = "Degraded mode overflow";
				generation_to_collect = GENERATION_OLD;
			}
		} else if (sgen_need_major_collection (size)) {
			reason = concurrent_collection_in_progress ? "Forced finish concurrent collection" : "Minor allowance";
			generation_to_collect = GENERATION_OLD;
		} else {
			generation_to_collect = GENERATION_NURSERY;
			reason = "Nursery full";
		}
	}

	if (generation_to_collect == -1) {
		if (concurrent_collection_in_progress && sgen_workers_all_done ()) {
			generation_to_collect = GENERATION_OLD;
			reason = "Finish concurrent collection";
		}
	}

	if (generation_to_collect == -1)
		return;
	sgen_perform_collection (size, generation_to_collect, reason, FALSE, TRUE);
}
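
/*
 * Illustrative only: a minimal sketch of how a caller inside this file would
 * use sgen_ensure_free_space, assuming the usual LOCK_GC/UNLOCK_GC macros
 * (see sgen_gc_collect below for the same pattern). The function name is
 * hypothetical; it is not part of SGen.
 */
G_GNUC_UNUSED static void
example_ensure_los_space (size_t size)
{
	/* The GC lock must be held around the call (see the LOCKING note above). */
	LOCK_GC;
	sgen_ensure_free_space (size, GENERATION_OLD);
	UNLOCK_GC;
}
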
/*
 * LOCKING: Assumes the GC lock is held.
 */
void
sgen_perform_collection (size_t requested_size, int generation_to_collect, const char *reason, gboolean wait_to_finish, gboolean stw)
{
	TV_DECLARE (gc_total_start);
	TV_DECLARE (gc_total_end);
	int overflow_generation_to_collect = -1;
	int oldest_generation_collected = generation_to_collect;
	const char *overflow_reason = NULL;
	gboolean finish_concurrent = concurrent_collection_in_progress && (major_should_finish_concurrent_collection () || generation_to_collect == GENERATION_OLD);

	binary_protocol_collection_requested (generation_to_collect, requested_size, wait_to_finish ? 1 : 0);

	SGEN_ASSERT (0, generation_to_collect == GENERATION_NURSERY || generation_to_collect == GENERATION_OLD, "What generation is this?");

	if (stw)
		sgen_stop_world (generation_to_collect);
	else
		SGEN_ASSERT (0, sgen_is_world_stopped (), "We can only collect if the world is stopped");

	TV_GETTIME (gc_total_start);

	// FIXME: extract overflow reason
	// FIXME: minor overflow for concurrent case
	if (generation_to_collect == GENERATION_NURSERY && !finish_concurrent) {
		if (concurrent_collection_in_progress)
			major_update_concurrent_collection ();

		if (collect_nursery (reason, FALSE, NULL) && !concurrent_collection_in_progress) {
			overflow_generation_to_collect = GENERATION_OLD;
			overflow_reason = "Minor overflow";
		}
	} else if (finish_concurrent) {
		major_finish_concurrent_collection (wait_to_finish);
		oldest_generation_collected = GENERATION_OLD;
	} else {
		SGEN_ASSERT (0, generation_to_collect == GENERATION_OLD, "We should have handled nursery collections above");
		if (major_collector.is_concurrent && !wait_to_finish) {
			collect_nursery ("Concurrent start", FALSE, NULL);
			major_start_concurrent_collection (reason);
			oldest_generation_collected = GENERATION_NURSERY;
		} else if (major_do_collection (reason, FALSE, wait_to_finish)) {
			overflow_generation_to_collect = GENERATION_NURSERY;
			overflow_reason = "Excessive pinning";
		}
	}

	if (overflow_generation_to_collect != -1) {
		SGEN_ASSERT (0, !concurrent_collection_in_progress, "We don't yet support overflow collections with the concurrent collector");

		/*
		 * We need to do an overflow collection, either because we ran out of memory
		 * or the nursery is fully pinned.
		 */

		if (overflow_generation_to_collect == GENERATION_NURSERY)
			collect_nursery (overflow_reason, TRUE, NULL);
		else
			major_do_collection (overflow_reason, TRUE, wait_to_finish);

		oldest_generation_collected = MAX (oldest_generation_collected, overflow_generation_to_collect);
	}

	SGEN_LOG (2, "Heap size: %lu, LOS size: %lu", (unsigned long)sgen_gc_get_total_heap_allocation (), (unsigned long)los_memory_usage);

	/* this also sets the proper pointers for the next allocation */
	if (generation_to_collect == GENERATION_NURSERY && !sgen_can_alloc_size (requested_size)) {
		/* TypeBuilder and MonoMethod are killing mcs with fragmentation */
		SGEN_LOG (1, "nursery collection didn't find enough room for %zd alloc (%zd pinned)", requested_size, sgen_get_pinned_count ());
		sgen_dump_pin_queue ();
		degraded_mode = 1;
	}

	TV_GETTIME (gc_total_end);
	time_max = MAX (time_max, TV_ELAPSED (gc_total_start, gc_total_end));

	if (stw)
		sgen_restart_world (oldest_generation_collected);
}

/*
 * ######################################################################
 * ########  Memory allocation from the OS
 * ######################################################################
 * This section of code deals with getting memory from the OS and
 * allocating memory for GC-internal data structures.
 * Internal memory can be handled with a freelist for small objects.
 */

/*
 * Debug reporting.
 */
G_GNUC_UNUSED static void
report_internal_mem_usage (void)
{
	printf ("Internal memory usage:\n");
	sgen_report_internal_mem_usage ();
	printf ("Pinned memory usage:\n");
	major_collector.report_pinned_memory_usage ();
}

/*
 * ######################################################################
 * ########  Finalization support
 * ######################################################################
 */

/*
 * If the object has been forwarded it means it's still referenced from a root.
 * If it is pinned it's still alive as well.
 * A LOS object is only alive if we have pinned it.
 * An object that is not alive by these criteria is ready to be finalized.
 */
static inline gboolean
sgen_is_object_alive (GCObject *object)
{
	if (ptr_in_nursery (object))
		return sgen_nursery_is_object_alive (object);

	return sgen_major_is_object_alive (object);
}

/*
 * This function returns true if @object is alive and belongs to the
 * current collection - major collections are full heap, so old gen objects
 * are never alive during a minor collection.
 */
static inline int
sgen_is_object_alive_and_on_current_collection (GCObject *object)
{
	if (ptr_in_nursery (object))
		return sgen_nursery_is_object_alive (object);

	if (current_collection_generation == GENERATION_NURSERY)
		return FALSE;

	return sgen_major_is_object_alive (object);
}

gboolean
sgen_gc_is_object_ready_for_finalization (GCObject *object)
{
	return !sgen_is_object_alive (object);
}

void
sgen_queue_finalization_entry (GCObject *obj)
{
	gboolean critical = sgen_client_object_has_critical_finalizer (obj);

	sgen_pointer_queue_add (critical ? &critical_fin_queue : &fin_ready_queue, obj);

	sgen_client_object_queued_for_finalization (obj);
}

gboolean
sgen_object_is_live (GCObject *obj)
{
	return sgen_is_object_alive_and_on_current_collection (obj);
}

/*
 * `System.GC.WaitForPendingFinalizers` first checks `sgen_have_pending_finalizers()` to
 * determine whether it can exit quickly. The latter must therefore only return FALSE if
 * all finalizers have really finished running.
 *
 * `sgen_gc_invoke_finalizers()` first dequeues a finalizable object, and then finalizes it.
 * This means that just checking whether the queues are empty leaves the possibility that an
 * object might have been dequeued but not yet finalized. That's why we need the additional
 * flag `pending_unqueued_finalizer`.
 */

static volatile gboolean pending_unqueued_finalizer = FALSE;
volatile gboolean sgen_suspend_finalizers = FALSE;

void
sgen_set_suspend_finalizers (void)
{
	sgen_suspend_finalizers = TRUE;
}

int
sgen_gc_invoke_finalizers (void)
{
	int count = 0;

	g_assert (!pending_unqueued_finalizer);

	/* FIXME: batch to reduce lock contention */
	while (sgen_have_pending_finalizers ()) {
		GCObject *obj;

		LOCK_GC;

		/*
		 * We need to set `pending_unqueued_finalizer` before dequeuing the
		 * finalizable object.
		 */
		if (!sgen_pointer_queue_is_empty (&fin_ready_queue)) {
			pending_unqueued_finalizer = TRUE;
			mono_memory_write_barrier ();
			obj = (GCObject *)sgen_pointer_queue_pop (&fin_ready_queue);
		} else if (!sgen_pointer_queue_is_empty (&critical_fin_queue)) {
			pending_unqueued_finalizer = TRUE;
			mono_memory_write_barrier ();
			obj = (GCObject *)sgen_pointer_queue_pop (&critical_fin_queue);
		} else {
			obj = NULL;
		}

		if (obj)
			SGEN_LOG (7, "Finalizing object %p (%s)", obj, sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (obj)));

		UNLOCK_GC;

		if (!obj)
			break;

		count++;
		/* the object is on the stack so it is pinned */
		/*g_print ("Calling finalizer for object: %p (%s)\n", obj, sgen_client_object_safe_name (obj));*/
		sgen_client_run_finalize (obj);
	}

	if (pending_unqueued_finalizer) {
		mono_memory_write_barrier ();
		pending_unqueued_finalizer = FALSE;
	}

	return count;
}

gboolean
sgen_have_pending_finalizers (void)
{
	if (sgen_suspend_finalizers)
		return FALSE;
	return pending_unqueued_finalizer || !sgen_pointer_queue_is_empty (&fin_ready_queue) || !sgen_pointer_queue_is_empty (&critical_fin_queue);
}
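
/*
 * Illustrative only: a minimal sketch of the drain loop a finalizer thread
 * could run on top of the two functions above. The function name is
 * hypothetical; the real driver lives in the client (e.g. the Mono runtime).
 */
G_GNUC_UNUSED static void
example_drain_finalizers (void)
{
	/* sgen_gc_invoke_finalizers () runs queued finalizers until both queues
	 * are empty; sgen_have_pending_finalizers () stays TRUE while an object
	 * is dequeued but not yet finalized, which is what makes a
	 * WaitForPendingFinalizers-style check safe. */
	while (sgen_have_pending_finalizers ())
		sgen_gc_invoke_finalizers ();
}
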
/*
 * ######################################################################
 * ########  registered roots support
 * ######################################################################
 */

/*
 * We do not coalesce roots.
 */
gboolean
sgen_register_root (char *start, size_t size, SgenDescriptor descr, int root_type, int source, const char *msg)
{
	RootRecord new_root;
	int i;
	LOCK_GC;
	for (i = 0; i < ROOT_TYPE_NUM; ++i) {
		RootRecord *root = (RootRecord *)sgen_hash_table_lookup (&roots_hash [i], start);
		/* we allow changing the size and the descriptor (for thread statics etc) */
		if (root) {
			size_t old_size = root->end_root - start;
			root->end_root = start + size;
			SGEN_ASSERT (0, !!root->root_desc == !!descr, "Can't change whether a root is precise or conservative.");
			SGEN_ASSERT (0, root->source == source, "Can't change a root's source identifier.");
			SGEN_ASSERT (0, !!root->msg == !!msg, "Can't change a root's message.");
			root->root_desc = descr;
			roots_size += size;
			roots_size -= old_size;
			UNLOCK_GC;
			return TRUE;
		}
	}

	new_root.end_root = start + size;
	new_root.root_desc = descr;
	new_root.source = source;
	new_root.msg = msg;

	sgen_hash_table_replace (&roots_hash [root_type], start, &new_root, NULL);
	roots_size += size;

	SGEN_LOG (3, "Added root for range: %p-%p, descr: %llx  (%d/%d bytes)", start, new_root.end_root, (long long)descr, (int)size, (int)roots_size);

	UNLOCK_GC;
	return TRUE;
}

void
sgen_deregister_root (char* addr)
{
	int root_type;
	RootRecord root;

	LOCK_GC;
	for (root_type = 0; root_type < ROOT_TYPE_NUM; ++root_type) {
		if (sgen_hash_table_remove (&roots_hash [root_type], addr, &root))
			roots_size -= (root.end_root - addr);
	}
	UNLOCK_GC;
}
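
/*
 * Illustrative only: a minimal sketch of registering and unregistering a
 * conservatively scanned root. The global, the function and the source
 * identifier (0) are hypothetical; a descriptor of 0 is assumed to mean
 * "scan conservatively", per the precise-vs-conservative assert in
 * sgen_register_root above.
 */
G_GNUC_UNUSED static GCObject *example_root_slot;

G_GNUC_UNUSED static void
example_register_root_slot (void)
{
	/* 0 descriptor: conservative scan; ROOT_TYPE_NORMAL: scanned from the
	 * registered-roots job during collections (see enqueue_scan_from_roots_jobs). */
	sgen_register_root ((char*)&example_root_slot, sizeof (example_root_slot), 0, ROOT_TYPE_NORMAL, 0, "example root");
	/* ... example_root_slot now keeps its referent alive ... */
	sgen_deregister_root ((char*)&example_root_slot);
}
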
void
sgen_wbroots_iterate_live_block_ranges (sgen_cardtable_block_callback cb)
{
	void **start_root;
	RootRecord *root;
	SGEN_HASH_TABLE_FOREACH (&roots_hash [ROOT_TYPE_WBARRIER], void **, start_root, RootRecord *, root) {
		cb ((mword)start_root, (mword)root->end_root - (mword)start_root);
	} SGEN_HASH_TABLE_FOREACH_END;
}

/* Root equivalent of sgen_client_cardtable_scan_object */
static void
sgen_wbroot_scan_card_table (void** start_root, mword size, ScanCopyContext ctx)
{
	ScanPtrFieldFunc scan_field_func = ctx.ops->scan_ptr_field;
	guint8 *card_data = sgen_card_table_get_card_scan_address ((mword)start_root);
	guint8 *card_base = card_data;
	mword card_count = sgen_card_table_number_of_cards_in_range ((mword)start_root, size);
	guint8 *card_data_end = card_data + card_count;
	mword extra_idx = 0;
	char *obj_start = sgen_card_table_align_pointer (start_root);
	char *obj_end = (char*)start_root + size;
#ifdef SGEN_HAVE_OVERLAPPING_CARDS
	guint8 *overflow_scan_end = NULL;
#endif

#ifdef SGEN_HAVE_OVERLAPPING_CARDS
	/* Check for overflow and if so, set up to scan in two steps */
	if (card_data_end >= SGEN_SHADOW_CARDTABLE_END) {
		overflow_scan_end = sgen_shadow_cardtable + (card_data_end - SGEN_SHADOW_CARDTABLE_END);
		card_data_end = SGEN_SHADOW_CARDTABLE_END;
	}

LOOP_HEAD:
#endif

	card_data = sgen_find_next_card (card_data, card_data_end);

	for (; card_data < card_data_end; card_data = sgen_find_next_card (card_data + 1, card_data_end)) {
		size_t idx = (card_data - card_base) + extra_idx;
		char *start = (char*)(obj_start + idx * CARD_SIZE_IN_BYTES);
		char *card_end = start + CARD_SIZE_IN_BYTES;
		char *elem = start, *first_elem = start;

		/*
		 * Don't clean first and last card on 32bit systems since they
		 * may also be part of other roots.
		 */
		if (card_data != card_base && card_data != (card_data_end - 1))
			sgen_card_table_prepare_card_for_scanning (card_data);

		card_end = MIN (card_end, obj_end);

		if (elem < (char*)start_root)
			first_elem = elem = (char*)start_root;

		for (; elem < card_end; elem += SIZEOF_VOID_P) {
			if (*(GCObject**)elem)
				scan_field_func (NULL, (GCObject**)elem, ctx.queue);
		}

		binary_protocol_card_scan (first_elem, elem - first_elem);
	}

#ifdef SGEN_HAVE_OVERLAPPING_CARDS
	if (overflow_scan_end) {
		extra_idx = card_data - card_base;
		card_base = card_data = sgen_shadow_cardtable;
		card_data_end = overflow_scan_end;
		overflow_scan_end = NULL;
		goto LOOP_HEAD;
	}
#endif
}

void
sgen_wbroots_scan_card_table (ScanCopyContext ctx)
{
	void **start_root;
	RootRecord *root;

	SGEN_HASH_TABLE_FOREACH (&roots_hash [ROOT_TYPE_WBARRIER], void **, start_root, RootRecord *, root) {
		SGEN_ASSERT (0, (root->root_desc & ROOT_DESC_TYPE_MASK) == ROOT_DESC_VECTOR, "Unsupported root type");

		sgen_wbroot_scan_card_table (start_root, (mword)root->end_root - (mword)start_root, ctx);
	} SGEN_HASH_TABLE_FOREACH_END;
}

/*
 * ######################################################################
 * ########  Thread handling (stop/start code)
 * ######################################################################
 */

int
sgen_get_current_collection_generation (void)
{
	return current_collection_generation;
}

void*
sgen_thread_register (SgenThreadInfo* info, void *stack_bottom_fallback)
{
	info->tlab_start = info->tlab_next = info->tlab_temp_end = info->tlab_real_end = NULL;

	sgen_client_thread_register (info, stack_bottom_fallback);

	return info;
}

void
sgen_thread_unregister (SgenThreadInfo *p)
{
	sgen_client_thread_unregister (p);
}

/*
 * ######################################################################
 * ########  Write barriers
 * ######################################################################
 */

/*
 * Note: the write barriers first do the needed GC work and then do the actual store:
 * this way the value is visible to the conservative GC scan after the write barrier
 * itself. If a GC interrupts the barrier in the middle, value will be kept alive by
 * the conservative scan, otherwise by the remembered set scan.
 */

/**
 * mono_gc_wbarrier_arrayref_copy:
 */
void
mono_gc_wbarrier_arrayref_copy (gpointer dest_ptr, gpointer src_ptr, int count)
{
	HEAVY_STAT (++stat_wbarrier_arrayref_copy);
	/* This check can be done without taking a lock since dest_ptr array is pinned */
	if (ptr_in_nursery (dest_ptr) || count <= 0) {
		mono_gc_memmove_aligned (dest_ptr, src_ptr, count * sizeof (gpointer));
		return;
	}

#ifdef SGEN_HEAVY_BINARY_PROTOCOL
	if (binary_protocol_is_heavy_enabled ()) {
		int i;
		for (i = 0; i < count; ++i) {
			gpointer dest = (gpointer*)dest_ptr + i;
			gpointer obj = *((gpointer*)src_ptr + i);
			if (obj)
				binary_protocol_wbarrier (dest, obj, (gpointer)LOAD_VTABLE (obj));
		}
	}
#endif

	remset.wbarrier_arrayref_copy (dest_ptr, src_ptr, count);
}

/**
 * mono_gc_wbarrier_generic_nostore:
 */
void
mono_gc_wbarrier_generic_nostore (gpointer ptr)
{
	gpointer obj;

	HEAVY_STAT (++stat_wbarrier_generic_store);

	sgen_client_wbarrier_generic_nostore_check (ptr);

	obj = *(gpointer*)ptr;
	if (obj)
		binary_protocol_wbarrier (ptr, obj, (gpointer)LOAD_VTABLE (obj));

	/*
	 * We need to record old->old pointer locations for the
	 * concurrent collector.
	 */
	if (!ptr_in_nursery (obj) && !concurrent_collection_in_progress) {
		SGEN_LOG (8, "Skipping remset at %p", ptr);
		return;
	}

	SGEN_LOG (8, "Adding remset at %p", ptr);

	remset.wbarrier_generic_nostore (ptr);
}

/**
 * mono_gc_wbarrier_generic_store:
 */
void
mono_gc_wbarrier_generic_store (gpointer ptr, GCObject* value)
{
	SGEN_LOG (8, "Wbarrier store at %p to %p (%s)", ptr, value, value ? sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (value)) : "null");
	SGEN_UPDATE_REFERENCE_ALLOW_NULL (ptr, value);
	if (ptr_in_nursery (value) || concurrent_collection_in_progress)
		mono_gc_wbarrier_generic_nostore (ptr);
	sgen_dummy_use (value);
}
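
/*
 * Illustrative only: a minimal sketch of a store that must go through the
 * write barrier above. The struct and function are hypothetical; any
 * reference field written into a heap object needs this treatment so the
 * remembered set (or the concurrent mod-union) sees the cross-generation edge.
 */
typedef struct {
	GCObject *field;
} ExampleHolder;

G_GNUC_UNUSED static void
example_store_with_barrier (ExampleHolder *holder, GCObject *value)
{
	/* Never write `holder->field = value` directly for heap objects:
	 * the barrier both performs the store and records the remset entry. */
	mono_gc_wbarrier_generic_store (&holder->field, value);
}
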
/**
 * mono_gc_wbarrier_generic_store_atomic:
 * Same as \c mono_gc_wbarrier_generic_store but performs the store
 * as an atomic operation with release semantics.
 */
void
mono_gc_wbarrier_generic_store_atomic (gpointer ptr, GCObject *value)
{
	HEAVY_STAT (++stat_wbarrier_generic_store_atomic);

	SGEN_LOG (8, "Wbarrier atomic store at %p to %p (%s)", ptr, value, value ? sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (value)) : "null");

	InterlockedWritePointer ((volatile gpointer *)ptr, value);

	if (ptr_in_nursery (value) || concurrent_collection_in_progress)
		mono_gc_wbarrier_generic_nostore (ptr);

	sgen_dummy_use (value);
}

void
sgen_wbarrier_value_copy_bitmap (gpointer _dest, gpointer _src, int size, unsigned bitmap)
{
	GCObject **dest = (GCObject **)_dest;
	GCObject **src = (GCObject **)_src;

	while (size) {
		if (bitmap & 0x1)
			mono_gc_wbarrier_generic_store (dest, *src);
		else
			*dest = *src;
		++src;
		++dest;
		size -= SIZEOF_VOID_P;
		bitmap >>= 1;
	}
}
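
/*
 * Illustrative only: a hypothetical value type with one reference field,
 * showing what the bitmap argument above encodes - bit i set means word i
 * of the value is a reference and must go through the write barrier.
 */
typedef struct {
	GCObject *obj;	/* word 0: reference -> bit 0 set */
	mword counter;	/* word 1: plain data -> bit 1 clear */
} ExampleValue;

G_GNUC_UNUSED static void
example_copy_value (ExampleValue *dest, ExampleValue *src)
{
	/* bitmap 0x1: only the first pointer-sized word is a reference */
	sgen_wbarrier_value_copy_bitmap (dest, src, sizeof (ExampleValue), 0x1);
}
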
/*
 * ######################################################################
 * ########  Other mono public interface functions.
 * ######################################################################
 */

void
sgen_gc_collect (int generation)
{
	LOCK_GC;
	if (generation > 1)
		generation = 1;
	sgen_perform_collection (0, generation, "user request", TRUE, TRUE);
	UNLOCK_GC;
}
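
/*
 * Illustrative only: a minimal sketch of forcing a full, blocking major
 * collection through the public entry point above. The function name is
 * hypothetical; generation 0 is the nursery and anything larger is clamped
 * to the old generation by sgen_gc_collect.
 */
G_GNUC_UNUSED static void
example_force_major_collection (void)
{
	sgen_gc_collect (GENERATION_OLD);
}
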
int
sgen_gc_collection_count (int generation)
{
	if (generation == 0)
		return gc_stats.minor_gc_count;
	return gc_stats.major_gc_count;
}

size_t
sgen_gc_get_used_size (void)
{
	gint64 tot = 0;
	LOCK_GC;
	tot = los_memory_usage;
	tot += nursery_section->next_data - nursery_section->data;
	tot += major_collector.get_used_size ();
	/* FIXME: account for pinned objects */
	UNLOCK_GC;
	return tot;
}

void
sgen_env_var_error (const char *env_var, const char *fallback, const char *description_format, ...)
{
	va_list ap;

	va_start (ap, description_format);

	fprintf (stderr, "Warning: In environment variable `%s': ", env_var);
	vfprintf (stderr, description_format, ap);
	if (fallback)
		fprintf (stderr, " - %s", fallback);
	fprintf (stderr, "\n");

	va_end (ap);
}

static gboolean
parse_double_in_interval (const char *env_var, const char *opt_name, const char *opt, double min, double max, double *result)
{
	char *endptr;
	double val = strtod (opt, &endptr);
	if (endptr == opt) {
		sgen_env_var_error (env_var, "Using default value.", "`%s` must be a number.", opt_name);
		return FALSE;
	}
	else if (val < min || val > max) {
		sgen_env_var_error (env_var, "Using default value.", "`%s` must be between %.2f - %.2f.", opt_name, min, max);
		return FALSE;
	}
	*result = val;
	return TRUE;
}

void
sgen_gc_init (void)
{
	const char *env;
	char **opts, **ptr;
	char *major_collector_opt = NULL;
	char *minor_collector_opt = NULL;
	char *params_opts = NULL;
	char *debug_opts = NULL;
	size_t max_heap = 0;
	size_t soft_limit = 0;
	int result;
	gboolean debug_print_allowance = FALSE;
	double allowance_ratio = 0, save_target = 0;
	gboolean cement_enabled = TRUE;

	do {
		result = InterlockedCompareExchange (&gc_initialized, -1, 0);
		switch (result) {
		case 1:
			/* already inited */
			return;
		case -1:
			/* being inited by another thread */
			mono_thread_info_usleep (1000);
			break;
		case 0:
			/* we will init it */
			break;
		default:
			g_assert_not_reached ();
		}
	} while (result != 0);

	SGEN_TV_GETTIME (sgen_init_timestamp);

#ifdef SGEN_WITHOUT_MONO
	mono_thread_smr_init ();
#endif

	mono_coop_mutex_init (&gc_mutex);

	gc_debug_file = stderr;

	mono_coop_mutex_init (&sgen_interruption_mutex);

	if ((env = g_getenv (MONO_GC_PARAMS_NAME)) || gc_params_options) {
		params_opts = g_strdup_printf ("%s,%s", gc_params_options ? gc_params_options : "", env ? env : "");
	}

	if (params_opts) {
		opts = g_strsplit (params_opts, ",", -1);
		for (ptr = opts; *ptr; ++ptr) {
			char *opt = *ptr;
			if (g_str_has_prefix (opt, "major=")) {
				opt = strchr (opt, '=') + 1;
				major_collector_opt = g_strdup (opt);
			} else if (g_str_has_prefix (opt, "minor=")) {
				opt = strchr (opt, '=') + 1;
				minor_collector_opt = g_strdup (opt);
			}
		}
	} else {
		opts = NULL;
	}

	init_stats ();
	sgen_init_internal_allocator ();
	sgen_init_nursery_allocator ();
	sgen_init_fin_weak_hash ();
	sgen_init_hash_table ();
	sgen_init_descriptors ();
	sgen_init_gray_queues ();
	sgen_init_allocator ();
	sgen_init_gchandles ();

	sgen_register_fixed_internal_mem_type (INTERNAL_MEM_SECTION, SGEN_SIZEOF_GC_MEM_SECTION);
	sgen_register_fixed_internal_mem_type (INTERNAL_MEM_GRAY_QUEUE, sizeof (GrayQueueSection));

	sgen_client_init ();

	if (!minor_collector_opt) {
		sgen_simple_nursery_init (&sgen_minor_collector, FALSE);
	} else {
		if (!strcmp (minor_collector_opt, "simple")) {
		use_simple_nursery:
			sgen_simple_nursery_init (&sgen_minor_collector, FALSE);
		} else if (!strcmp (minor_collector_opt, "simple-par")) {
			sgen_simple_nursery_init (&sgen_minor_collector, TRUE);
		} else if (!strcmp (minor_collector_opt, "split")) {
			sgen_split_nursery_init (&sgen_minor_collector);
		} else {
			sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using `simple` instead.", "Unknown minor collector `%s'.", minor_collector_opt);
			goto use_simple_nursery;
		}
	}

	if (!major_collector_opt) {
	use_default_major:
		DEFAULT_MAJOR_INIT (&major_collector);
	} else if (!strcmp (major_collector_opt, "marksweep")) {
		sgen_marksweep_init (&major_collector);
	} else if (!strcmp (major_collector_opt, "marksweep-conc")) {
		sgen_marksweep_conc_init (&major_collector);
	} else if (!strcmp (major_collector_opt, "marksweep-conc-par")) {
		sgen_marksweep_conc_par_init (&major_collector);
	} else {
		sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using `" DEFAULT_MAJOR_NAME "` instead.", "Unknown major collector `%s'.", major_collector_opt);
		goto use_default_major;
	}

	sgen_nursery_size = DEFAULT_NURSERY_SIZE;

	if (opts) {
		gboolean usage_printed = FALSE;

		for (ptr = opts; *ptr; ++ptr) {
			char *opt = *ptr;
			if (!strcmp (opt, ""))
				continue;
			if (g_str_has_prefix (opt, "major="))
				continue;
			if (g_str_has_prefix (opt, "minor="))
				continue;
			if (g_str_has_prefix (opt, "max-heap-size=")) {
				size_t page_size = mono_pagesize ();
				size_t max_heap_candidate = 0;
				opt = strchr (opt, '=') + 1;
				if (*opt && mono_gc_parse_environment_string_extract_number (opt, &max_heap_candidate)) {
					max_heap = (max_heap_candidate + page_size - 1) & ~(size_t)(page_size - 1);
					if (max_heap != max_heap_candidate)
						sgen_env_var_error (MONO_GC_PARAMS_NAME, "Rounding up.", "`max-heap-size` must be a multiple of %d.", (int)page_size);
				} else {
					sgen_env_var_error (MONO_GC_PARAMS_NAME, NULL, "`max-heap-size` must be an integer.");
				}
				continue;
			}
			if (g_str_has_prefix (opt, "soft-heap-limit=")) {
				opt = strchr (opt, '=') + 1;
				if (*opt && mono_gc_parse_environment_string_extract_number (opt, &soft_limit)) {
					if (soft_limit <= 0) {
						sgen_env_var_error (MONO_GC_PARAMS_NAME, NULL, "`soft-heap-limit` must be positive.");
						soft_limit = 0;
					}
				} else {
					sgen_env_var_error (MONO_GC_PARAMS_NAME, NULL, "`soft-heap-limit` must be an integer.");
				}
				continue;
			}

#ifdef USER_CONFIG
			if (g_str_has_prefix (opt, "nursery-size=")) {
				size_t val;
				opt = strchr (opt, '=') + 1;
				if (*opt && mono_gc_parse_environment_string_extract_number (opt, &val)) {
					if ((val & (val - 1))) {
						sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using default value.", "`nursery-size` must be a power of two.");
						continue;
					}

					if (val < SGEN_MAX_NURSERY_WASTE) {
						sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using default value.",
								"`nursery-size` must be at least %d bytes.", SGEN_MAX_NURSERY_WASTE);
						continue;
					}

					sgen_nursery_size = val;
					sgen_nursery_bits = 0;
					while (ONE_P << (++ sgen_nursery_bits) != sgen_nursery_size)
						;
				} else {
					sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using default value.", "`nursery-size` must be an integer.");
					continue;
				}
				continue;
			}
#endif
			if (g_str_has_prefix (opt, "save-target-ratio=")) {
				double val;
				opt = strchr (opt, '=') + 1;
				if (parse_double_in_interval (MONO_GC_PARAMS_NAME, "save-target-ratio", opt,
						SGEN_MIN_SAVE_TARGET_RATIO, SGEN_MAX_SAVE_TARGET_RATIO, &val)) {
					save_target = val;
				}
				continue;
			}
			if (g_str_has_prefix (opt, "default-allowance-ratio=")) {
				double val;
				opt = strchr (opt, '=') + 1;
				if (parse_double_in_interval (MONO_GC_PARAMS_NAME, "default-allowance-ratio", opt,
						SGEN_MIN_ALLOWANCE_NURSERY_SIZE_RATIO, SGEN_MAX_ALLOWANCE_NURSERY_SIZE_RATIO, &val)) {
					allowance_ratio = val;
				}
				continue;
			}

			if (!strcmp (opt, "cementing")) {
				cement_enabled = TRUE;
				continue;
			}
			if (!strcmp (opt, "no-cementing")) {
				cement_enabled = FALSE;
				continue;
			}

			if (!strcmp (opt, "precleaning")) {
				precleaning_enabled = TRUE;
				continue;
			}
			if (!strcmp (opt, "no-precleaning")) {
				precleaning_enabled = FALSE;
				continue;
			}

			if (major_collector.handle_gc_param && major_collector.handle_gc_param (opt))
				continue;

			if (sgen_minor_collector.handle_gc_param && sgen_minor_collector.handle_gc_param (opt))
				continue;

			if (sgen_client_handle_gc_param (opt))
				continue;

			sgen_env_var_error (MONO_GC_PARAMS_NAME, "Ignoring.", "Unknown option `%s`.", opt);

			if (usage_printed)
				continue;

			fprintf (stderr, "\n%s must be a comma-delimited list of one or more of the following:\n", MONO_GC_PARAMS_NAME);
			fprintf (stderr, "  max-heap-size=N (where N is an integer, possibly with a k, m or a g suffix)\n");
			fprintf (stderr, "  soft-heap-limit=N (where N is an integer, possibly with a k, m or a g suffix)\n");
			fprintf (stderr, "  nursery-size=N (where N is an integer, possibly with a k, m or a g suffix)\n");
			fprintf (stderr, "  major=COLLECTOR (where COLLECTOR is `marksweep', `marksweep-conc' or `marksweep-conc-par')\n");
			fprintf (stderr, "  minor=COLLECTOR (where COLLECTOR is `simple', `simple-par' or `split')\n");
			fprintf (stderr, "  [no-]cementing\n");
			fprintf (stderr, "  [no-]precleaning\n");
			if (major_collector.print_gc_param_usage)
				major_collector.print_gc_param_usage ();
			if (sgen_minor_collector.print_gc_param_usage)
				sgen_minor_collector.print_gc_param_usage ();
			sgen_client_print_gc_params_usage ();
			fprintf (stderr, " Experimental options:\n");
			fprintf (stderr, "  save-target-ratio=R (where R must be between %.2f - %.2f).\n", SGEN_MIN_SAVE_TARGET_RATIO, SGEN_MAX_SAVE_TARGET_RATIO);
			fprintf (stderr, "  default-allowance-ratio=R (where R must be between %.2f - %.2f).\n", SGEN_MIN_ALLOWANCE_NURSERY_SIZE_RATIO, SGEN_MAX_ALLOWANCE_NURSERY_SIZE_RATIO);
			fprintf (stderr, "\n");

			usage_printed = TRUE;
		}
		g_strfreev (opts);
	}

	if (major_collector_opt)
		g_free (major_collector_opt);

	if (minor_collector_opt)
		g_free (minor_collector_opt);

	if (params_opts)
		g_free (params_opts);

	alloc_nursery ();

	sgen_pinning_init ();
	sgen_cement_init (cement_enabled);

	if ((env = g_getenv (MONO_GC_DEBUG_NAME)) || gc_debug_options) {
		debug_opts = g_strdup_printf ("%s,%s", gc_debug_options ? gc_debug_options : "", env ? env : "");
	}

	if (debug_opts) {
		gboolean usage_printed = FALSE;

		opts = g_strsplit (debug_opts, ",", -1);
		for (ptr = opts; ptr && *ptr; ptr ++) {
			char *opt = *ptr;
			if (!strcmp (opt, ""))
				continue;
			if (opt [0] >= '0' && opt [0] <= '9') {
				gc_debug_level = atoi (opt);
				opt++;
				if (opt [0] == ':')
					opt++;
				if (opt [0]) {
					char *rf = g_strdup_printf ("%s.%d", opt, mono_process_current_pid ());
					gc_debug_file = fopen (rf, "wb");
					if (!gc_debug_file)
						gc_debug_file = stderr;
					g_free (rf);
				}
			} else if (!strcmp (opt, "print-allowance")) {
				debug_print_allowance = TRUE;
			} else if (!strcmp (opt, "print-pinning")) {
				sgen_pin_stats_enable ();
			} else if (!strcmp (opt, "verify-before-allocs")) {
				verify_before_allocs = 1;
				has_per_allocation_action = TRUE;
			} else if (g_str_has_prefix (opt, "verify-before-allocs=")) {
				char *arg = strchr (opt, '=') + 1;
				verify_before_allocs = atoi (arg);
				has_per_allocation_action = TRUE;
			} else if (!strcmp (opt, "collect-before-allocs")) {
				collect_before_allocs = 1;
				has_per_allocation_action = TRUE;
			} else if (g_str_has_prefix (opt, "collect-before-allocs=")) {
				char *arg = strchr (opt, '=') + 1;
				has_per_allocation_action = TRUE;
				collect_before_allocs = atoi (arg);
			} else if (!strcmp (opt, "verify-before-collections")) {
				whole_heap_check_before_collection = TRUE;
			} else if (!strcmp (opt, "check-remset-consistency")) {
				remset_consistency_checks = TRUE;
				nursery_clear_policy = CLEAR_AT_GC;
			} else if (!strcmp (opt, "mod-union-consistency-check")) {
				if (!major_collector.is_concurrent) {
					sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring.", "`mod-union-consistency-check` only works with concurrent major collector.");
					continue;
				}
				mod_union_consistency_check = TRUE;
			} else if (!strcmp (opt, "check-mark-bits")) {
				check_mark_bits_after_major_collection = TRUE;
			} else if (!strcmp (opt, "check-nursery-pinned")) {
				check_nursery_objects_pinned = TRUE;
			} else if (!strcmp (opt, "clear-at-gc")) {
				nursery_clear_policy = CLEAR_AT_GC;
			} else if (!strcmp (opt, "clear-nursery-at-gc")) {
				nursery_clear_policy = CLEAR_AT_GC;
			} else if (!strcmp (opt, "clear-at-tlab-creation")) {
				nursery_clear_policy = CLEAR_AT_TLAB_CREATION;
			} else if (!strcmp (opt, "debug-clear-at-tlab-creation")) {
				nursery_clear_policy = CLEAR_AT_TLAB_CREATION_DEBUG;
			} else if (!strcmp (opt, "check-scan-starts")) {
				do_scan_starts_check = TRUE;
			} else if (!strcmp (opt, "verify-nursery-at-minor-gc")) {
				do_verify_nursery = TRUE;
			} else if (!strcmp (opt, "check-concurrent")) {
				if (!major_collector.is_concurrent) {
					sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring.", "`check-concurrent` only works with concurrent major collectors.");
					continue;
				}
				nursery_clear_policy = CLEAR_AT_GC;
				do_concurrent_checks = TRUE;
			} else if (!strcmp (opt, "dump-nursery-at-minor-gc")) {
				do_dump_nursery_content = TRUE;
			} else if (!strcmp (opt, "disable-minor")) {
				disable_minor_collections = TRUE;
			} else if (!strcmp (opt, "disable-major")) {
				disable_major_collections = TRUE;
			} else if (g_str_has_prefix (opt, "heap-dump=")) {
				char *filename = strchr (opt, '=') + 1;
				nursery_clear_policy = CLEAR_AT_GC;
				sgen_debug_enable_heap_dump (filename);
			} else if (g_str_has_prefix (opt, "binary-protocol=")) {
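				/*
				 * binary-protocol=<filename>[:<file-size-limit>] -- anything
				 * after the last ':' is parsed as the size limit.
				 */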
				char *filename = strchr (opt, '=') + 1;
				char *colon = strrchr (filename, ':');
				size_t limit = 0;
				if (colon) {
					if (!mono_gc_parse_environment_string_extract_number (colon + 1, &limit)) {
						sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring limit.", "Binary protocol file size limit must be an integer.");
						limit = -1;
					}
					*colon = '\0';
				}
				binary_protocol_init (filename, (long long)limit);
			} else if (!strcmp (opt, "nursery-canaries")) {
				do_verify_nursery = TRUE;
				enable_nursery_canaries = TRUE;
			} else if (!sgen_client_handle_gc_debug (opt)) {
				sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring.", "Unknown option `%s`.", opt);

				if (usage_printed)
					continue;

				fprintf (stderr, "\n%s must be of the format [<l>[:<filename>]|<option>]+ where <l> is a debug level 0-9.\n", MONO_GC_DEBUG_NAME);
				fprintf (stderr, "Valid <option>s are:\n");
				fprintf (stderr, "  collect-before-allocs[=<n>]\n");
				fprintf (stderr, "  verify-before-allocs[=<n>]\n");
				fprintf (stderr, "  check-remset-consistency\n");
				fprintf (stderr, "  check-mark-bits\n");
				fprintf (stderr, "  check-nursery-pinned\n");
				fprintf (stderr, "  verify-before-collections\n");
				fprintf (stderr, "  verify-nursery-at-minor-gc\n");
				fprintf (stderr, "  dump-nursery-at-minor-gc\n");
				fprintf (stderr, "  disable-minor\n");
				fprintf (stderr, "  disable-major\n");
				fprintf (stderr, "  check-concurrent\n");
				fprintf (stderr, "  clear-[nursery-]at-gc\n");
				fprintf (stderr, "  clear-at-tlab-creation\n");
				fprintf (stderr, "  debug-clear-at-tlab-creation\n");
				fprintf (stderr, "  check-scan-starts\n");
				fprintf (stderr, "  print-allowance\n");
				fprintf (stderr, "  print-pinning\n");
				fprintf (stderr, "  heap-dump=<filename>\n");
				fprintf (stderr, "  binary-protocol=<filename>[:<file-size-limit>]\n");
				fprintf (stderr, "  nursery-canaries\n");
				sgen_client_print_gc_debug_usage ();
				fprintf (stderr, "\n");

				usage_printed = TRUE;
			}
		}
		g_strfreev (opts);
	}

	if (debug_opts)
		g_free (debug_opts);

	if (check_mark_bits_after_major_collection)
		nursery_clear_policy = CLEAR_AT_GC;

	if (major_collector.post_param_init)
		major_collector.post_param_init (&major_collector);
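
	/*
	 * Size the worker thread pool: parallel major collectors get one worker
	 * per two logical cores as a rough stand-in for the physical core count
	 * (see the FIXME below); any other collector that needs the pool gets a
	 * single worker.
	 */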
	if (major_collector.needs_thread_pool) {
		int num_workers = 1;
		if (major_collector.is_parallel) {
			/* FIXME Detect the number of physical cores, instead of logical */
			num_workers = mono_cpu_count () / 2;
			if (num_workers < 1)
				num_workers = 1;
		}
		sgen_workers_init (num_workers, (SgenWorkerCallback) major_collector.worker_init_cb);
	}

	sgen_memgov_init (max_heap, soft_limit, debug_print_allowance, allowance_ratio, save_target);

	memset (&remset, 0, sizeof (remset));

	sgen_card_table_init (&remset);

	sgen_register_root (NULL, 0, sgen_make_user_root_descriptor (sgen_mark_normal_gc_handles), ROOT_TYPE_NORMAL, MONO_ROOT_SOURCE_GC_HANDLE, "normal gc handles");

	gc_initialized = 1;

	sgen_init_bridge ();
}
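
/*
 * What follows are small accessors plus the stop-the-world entry points:
 * the surface other sgen modules and the client use to query and drive the
 * collector.
 */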

gboolean
sgen_gc_initialized (void)
{
	return gc_initialized > 0;
}

NurseryClearPolicy
sgen_get_nursery_clear_policy (void)
{
	return nursery_clear_policy;
}
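
/*
 * The GC lock serializes collections and mutation of shared GC state.  A
 * minimal, illustrative usage pattern:
 *
 *	sgen_gc_lock ();
 *	... read or update shared GC state ...
 *	sgen_gc_unlock ();
 */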
void
sgen_gc_lock (void)
{
	mono_coop_mutex_lock (&gc_mutex);
}

void
sgen_gc_unlock (void)
{
	mono_coop_mutex_unlock (&gc_mutex);
}

void
sgen_major_collector_iterate_live_block_ranges (sgen_cardtable_block_callback callback)
{
	major_collector.iterate_live_block_ranges (callback);
}

void
sgen_major_collector_iterate_block_ranges (sgen_cardtable_block_callback callback)
{
	major_collector.iterate_block_ranges (callback);
}

SgenMajorCollector*
sgen_get_major_collector (void)
{
	return &major_collector;
}

SgenRememberedSet*
sgen_get_remset (void)
{
	return &remset;
}

static void
count_cards (long long *major_total, long long *major_marked, long long *los_total, long long *los_marked)
{
	sgen_get_major_collector ()->count_cards (major_total, major_marked);
	sgen_los_count_cards (los_total, los_marked);
}

static gboolean world_is_stopped = FALSE;
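
/*
 * Stop-the-world protocol: sgen_stop_world () asks the client to suspend all
 * managed threads, sgen_restart_world () resumes them and reports the pause
 * time to the memory governor.  Card counts are gathered only when heavy
 * binary protocol logging is enabled, since counting them is comparatively
 * expensive.
 */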

/* LOCKING: assumes the GC lock is held */
void
sgen_stop_world (int generation)
{
	long long major_total = -1, major_marked = -1, los_total = -1, los_marked = -1;

	SGEN_ASSERT (0, !world_is_stopped, "Why are we stopping a stopped world?");

	binary_protocol_world_stopping (generation, sgen_timestamp (), (gpointer) (gsize) mono_native_thread_id_get ());

	sgen_client_stop_world (generation);

	world_is_stopped = TRUE;

	if (binary_protocol_is_heavy_enabled ())
		count_cards (&major_total, &major_marked, &los_total, &los_marked);
	binary_protocol_world_stopped (generation, sgen_timestamp (), major_total, major_marked, los_total, los_marked);
}

/* LOCKING: assumes the GC lock is held */
void
sgen_restart_world (int generation)
{
	long long major_total = -1, major_marked = -1, los_total = -1, los_marked = -1;
	gint64 stw_time;

	SGEN_ASSERT (0, world_is_stopped, "Why are we restarting a running world?");

	if (binary_protocol_is_heavy_enabled ())
		count_cards (&major_total, &major_marked, &los_total, &los_marked);
	binary_protocol_world_restarting (generation, sgen_timestamp (), major_total, major_marked, los_total, los_marked);

	world_is_stopped = FALSE;

	sgen_client_restart_world (generation, &stw_time);

	binary_protocol_world_restarted (generation, sgen_timestamp ());

	if (sgen_client_bridge_need_processing ())
		sgen_client_bridge_processing_finish (generation);

	sgen_memgov_collection_end (generation, stw_time);
}

gboolean
sgen_is_world_stopped (void)
{
	return world_is_stopped;
}
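
/*
 * Debug helper: stops the world, clears nursery fragments so the nursery can
 * be walked consistently, verifies the whole heap, then resumes.
 */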
void
sgen_check_whole_heap_stw (void)
{
	sgen_stop_world (0);
	sgen_clear_nursery_fragments ();
	sgen_check_whole_heap (TRUE);
	sgen_restart_world (0);
}
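
/* Timestamps are measured relative to sgen_init_timestamp, i.e. to GC start-up. */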
gint64
sgen_timestamp (void)
{
	SGEN_TV_DECLARE (timestamp);
	SGEN_TV_GETTIME (timestamp);
	return SGEN_TV_ELAPSED (sgen_init_timestamp, timestamp);
}

#endif /* HAVE_SGEN_GC */