/**
 * \file
 * Simple generational GC.
 *
 * Authors:
 *	Paolo Molaro (lupus@ximian.com)
 *	Rodrigo Kumpera (kumpera@gmail.com)
 *
 * Copyright 2005-2011 Novell, Inc (http://www.novell.com)
 * Copyright 2011 Xamarin Inc (http://www.xamarin.com)
 *
 * Thread start/stop adapted from Boehm's GC:
 * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
 * Copyright (c) 1996 by Silicon Graphics.  All rights reserved.
 * Copyright (c) 1998 by Fergus Henderson.  All rights reserved.
 * Copyright (c) 2000-2004 by Hewlett-Packard Company.  All rights reserved.
 * Copyright 2001-2003 Ximian, Inc
 * Copyright 2003-2010 Novell, Inc.
 * Copyright 2011 Xamarin, Inc.
 * Copyright (C) 2012 Xamarin Inc
 *
 * Licensed under the MIT license. See LICENSE file in the project root for full license information.
 *
 * Important: allocation always provides zeroed memory; having to do
 * a memset after allocation is deadly for performance.
 * Memory usage at startup is currently as follows:
 * 64 KB pinned space
 * 64 KB internal space
 * size of nursery
 * We should provide a small memory config with half the sizes.
 *
 * We currently try to make as few mono assumptions as possible:
 * 1) 2-word header with no GC pointers in it (first vtable, second to store the
 *    forwarding ptr)
 * 2) gc descriptor is the second word in the vtable (first word in the class)
 * 3) 8 byte alignment is the minimum and enough (not true for special structures (SIMD), FIXME)
 * 4) there is a function to get an object's size and the number of
 *    elements in an array
 * 5) we know the special way bounds are allocated for complex arrays
 * 6) we know about proxies and how to treat them when domains are unloaded
 *
 * Always try to keep stack usage to a minimum: no recursive behaviour
 * and no large stack allocs.
 *
 * General description.
 * Objects are initially allocated in a nursery using a fast bump-pointer technique.
 * When the nursery is full we start a nursery collection: this is performed with a
 * copying GC.
 * When the old generation is full we start a copying GC of the old generation as well:
 * this will be changed to mark&sweep with copying when fragmentation becomes too severe
 * in the future.  Maybe we'll even do both during the same collection like IMMIX.
 *
 * The things that complicate this description are:
 * *) pinned objects: we can't move them, so we need to keep track of them
 * *) no precise info of the thread stacks and registers: we need to be able to
 *    quickly find the objects that may be referenced conservatively and pin them
 *    (this makes the first issue more important)
 * *) large objects are too expensive to be dealt with using copying GC: we handle them
 *    with mark/sweep during major collections
 * *) some objects need to not move even if they are small (interned strings, Type handles):
 *    we use mark/sweep for them, too: they are not allocated in the nursery, but inside
 *    PinnedChunks regions
 *
 *
 * TODO:
 *
 * *) we could have a function pointer in MonoClass to implement
 *    customized write barriers for value types
 *
 * *) investigate the stuff needed to advance a thread to a GC-safe
 *    point (single-stepping, read from unmapped memory etc) and implement it.
 *    This would enable us to inline allocations and write barriers, for example,
 *    or at least parts of them, like the write barrier checks.
 *    We may need this also for handling precise info on stacks, even simple things
 *    like having uninitialized data on the stack and having to wait for the prolog
 *    to zero it.  Not an issue for the last frame that we scan conservatively.
 *    We could always not trust the value in the slots anyway.
 *
 * *) modify the jit to save info about references in stack locations:
 *    this can be done just for locals as a start, so that at least
 *    part of the stack is handled precisely.
 *
 * *) test/fix endianness issues
 *
 * *) Implement a card table as the write barrier instead of remembered
 *    sets?  Card tables are not easy to implement with our current
 *    memory layout.  We have several different kinds of major heap
 *    objects: small objects in regular blocks, small objects in pinned
 *    chunks and LOS objects.  If we just have a pointer we have no way
 *    to tell which kind of object it points into, therefore we cannot
 *    know where its card table is.  The least we have to do to make
 *    this happen is to get rid of write barriers for indirect stores.
 *    (See next item)
 *
 * *) Get rid of write barriers for indirect stores.  We can do this by
 *    telling the GC to wbarrier-register an object once we do an ldloca
 *    or ldelema on it, and to unregister it once it's not used anymore
 *    (it can only travel downwards on the stack).  The problem with
 *    unregistering is that it needs to happen eventually no matter
 *    what, even if exceptions are thrown, the thread aborts, etc.
 *    Rodrigo suggested that we could do only the registering part and
 *    let the collector find out (pessimistically) when it's safe to
 *    unregister, namely when the stack pointer of the thread that
 *    registered the object is higher than it was when the registering
 *    happened.  This might make for a good first implementation to get
 *    some data on performance.
 *
 * *) Some sort of blacklist support?  Blacklisting is a concept from the
 *    Boehm GC: if during a conservative scan we find pointers to an
 *    area which we might use as heap, we mark that area as unusable, so
 *    pointer retention by random pinning pointers is reduced.
 *
 * *) experiment with max small object size (very small right now - 2kb,
 *    because it's tied to the max freelist size)
 *
 * *) add an option to mmap the whole heap in one chunk: it makes for many
 *    simplifications in the checks (put the nursery at the top and just use a single
 *    check for inclusion/exclusion).  The issue this has is that on 32 bit systems it's
 *    not flexible (too much of the address space may be used by default or we can't
 *    increase the heap as needed) and we'd need a race-free mechanism to return memory
 *    back to the system (mprotect(PROT_NONE) will still keep the memory allocated if it
 *    was written to; munmap is needed, but the following mmap may not find the same segment
 *    free...)
 *
 * *) memzero the major fragments after restarting the world and optionally a smaller
 *    chunk at a time
 *
 * *) investigate having fragment zeroing threads
 *
 * *) separate locks for finalization and other minor stuff to reduce
 *    lock contention
 *
 * *) try a different copying order to improve memory locality
 *
 * *) a thread abort after a store but before the write barrier will
 *    prevent the write barrier from executing
 *
 * *) specialized dynamically generated markers/copiers
 *
 * *) Dynamically adjust TLAB size to the number of threads.  If we have
 *    too many threads that do allocation, we might need smaller TLABs,
 *    and we might get better performance with larger TLABs if we only
 *    have a handful of threads.  We could sum up the space left in all
 *    assigned TLABs and if that's more than some percentage of the
 *    nursery size, reduce the TLAB size.
 *
 * *) Explore placing unreachable objects on unused nursery memory.
 *    Instead of memset'ing a region to zero, place an int[] covering it.
 *    A good place to start is add_nursery_frag.  The tricky thing here is
 *    placing those objects atomically outside of a collection.
 *
 * *) Allocation should use asymmetric Dekker synchronization:
 *    http://blogs.oracle.com/dave/resource/Asymmetric-Dekker-Synchronization.txt
 *    This should help weakly ordered archs.
 */
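/*
 * Illustrative sketch only (hypothetical names, not actual sgen definitions):
 * the 2-word object header assumed above.  The first word holds the vtable,
 * whose second word is the GC descriptor; the second header word contains no
 * GC pointers and is used to store the forwarding pointer when an object is
 * moved.
 */
#if 0
typedef struct {
	gpointer vtable;	/* GC descriptor is the second word in the vtable */
	gpointer sync;		/* no GC pointers; stores the forwarding ptr during copying */
} ExampleObjectHeader;	/* hypothetical, for illustration only */
#endif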
#include "config.h"
#ifdef HAVE_SGEN_GC

#ifdef __MACH__
#undef _XOPEN_SOURCE
#define _XOPEN_SOURCE
#define _DARWIN_C_SOURCE
#endif

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef HAVE_PTHREAD_H
#include <pthread.h>
#endif
#ifdef HAVE_PTHREAD_NP_H
#include <pthread_np.h>
#endif
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <assert.h>
#include <stdlib.h>

#include "mono/sgen/sgen-gc.h"
#include "mono/sgen/sgen-cardtable.h"
#include "mono/sgen/sgen-protocol.h"
#include "mono/sgen/sgen-memory-governor.h"
#include "mono/sgen/sgen-hash-table.h"
#include "mono/sgen/sgen-pinning.h"
#include "mono/sgen/sgen-workers.h"
#include "mono/sgen/sgen-client.h"
#include "mono/sgen/sgen-pointer-queue.h"
#include "mono/sgen/gc-internal-agnostic.h"
#include "mono/utils/mono-proclib.h"
#include "mono/utils/mono-memory-model.h"
#include "mono/utils/hazard-pointer.h"

#include <mono/utils/memcheck.h>
#include <mono/utils/mono-mmap-internals.h>

#undef pthread_create
#undef pthread_join
#undef pthread_detach
/*
 * ######################################################################
 * ######## Types and constants used by the GC.
 * ######################################################################
 */

/* 0 means not initialized, 1 is initialized, -1 means in progress */
static int gc_initialized = 0;
/* If set, check if we need to do something every X allocations */
gboolean has_per_allocation_action;
/* If set, do a heap check every X allocations */
guint32 verify_before_allocs = 0;
/* If set, do a minor collection before every X allocations */
guint32 collect_before_allocs = 0;
/* If set, do a whole heap check before each collection */
static gboolean whole_heap_check_before_collection = FALSE;
/* If set, do a remset consistency check at various opportunities */
static gboolean remset_consistency_checks = FALSE;
/* If set, do a mod union consistency check before each finishing collection pause */
static gboolean mod_union_consistency_check = FALSE;
/* If set, check whether mark bits are consistent after major collections */
static gboolean check_mark_bits_after_major_collection = FALSE;
/* If set, check that all nursery objects are pinned/not pinned, depending on context */
static gboolean check_nursery_objects_pinned = FALSE;
/* If set, do a few checks when the concurrent collector is used */
static gboolean do_concurrent_checks = FALSE;
/* If set, do a plausibility check on the scan_starts before and after
   each collection */
static gboolean do_scan_starts_check = FALSE;

static gboolean disable_minor_collections = FALSE;
static gboolean disable_major_collections = FALSE;
static gboolean do_verify_nursery = FALSE;
static gboolean do_dump_nursery_content = FALSE;
static gboolean enable_nursery_canaries = FALSE;

static gboolean precleaning_enabled = TRUE;

#ifdef HEAVY_STATISTICS
guint64 stat_objects_alloced_degraded = 0;
guint64 stat_bytes_alloced_degraded = 0;

guint64 stat_copy_object_called_nursery = 0;
guint64 stat_objects_copied_nursery = 0;
guint64 stat_copy_object_called_major = 0;
guint64 stat_objects_copied_major = 0;

guint64 stat_scan_object_called_nursery = 0;
guint64 stat_scan_object_called_major = 0;

guint64 stat_slots_allocated_in_vain;

guint64 stat_nursery_copy_object_failed_from_space = 0;
guint64 stat_nursery_copy_object_failed_forwarded = 0;
guint64 stat_nursery_copy_object_failed_pinned = 0;
guint64 stat_nursery_copy_object_failed_to_space = 0;

static guint64 stat_wbarrier_add_to_global_remset = 0;
static guint64 stat_wbarrier_arrayref_copy = 0;
static guint64 stat_wbarrier_generic_store = 0;
static guint64 stat_wbarrier_generic_store_atomic = 0;
static guint64 stat_wbarrier_set_root = 0;
#endif

static guint64 stat_pinned_objects = 0;

static guint64 time_minor_pre_collection_fragment_clear = 0;
static guint64 time_minor_pinning = 0;
static guint64 time_minor_scan_remsets = 0;
static guint64 time_minor_scan_pinned = 0;
static guint64 time_minor_scan_roots = 0;
static guint64 time_minor_finish_gray_stack = 0;
static guint64 time_minor_fragment_creation = 0;

static guint64 time_major_pre_collection_fragment_clear = 0;
static guint64 time_major_pinning = 0;
static guint64 time_major_scan_pinned = 0;
static guint64 time_major_scan_roots = 0;
static guint64 time_major_scan_mod_union = 0;
static guint64 time_major_finish_gray_stack = 0;
static guint64 time_major_free_bigobjs = 0;
static guint64 time_major_los_sweep = 0;
static guint64 time_major_sweep = 0;
static guint64 time_major_fragment_creation = 0;

static guint64 time_max = 0;

static SGEN_TV_DECLARE (time_major_conc_collection_start);
static SGEN_TV_DECLARE (time_major_conc_collection_end);

int gc_debug_level = 0;
FILE* gc_debug_file;
static char* gc_params_options;
static char* gc_debug_options;
void
mono_gc_flush_info (void)
{
	fflush (gc_debug_file);
}

#define TV_DECLARE SGEN_TV_DECLARE
#define TV_GETTIME SGEN_TV_GETTIME
#define TV_ELAPSED SGEN_TV_ELAPSED

static SGEN_TV_DECLARE (sgen_init_timestamp);

NurseryClearPolicy nursery_clear_policy = CLEAR_AT_TLAB_CREATION;

#define object_is_forwarded	SGEN_OBJECT_IS_FORWARDED
#define object_is_pinned	SGEN_OBJECT_IS_PINNED
#define pin_object		SGEN_PIN_OBJECT

#define ptr_in_nursery sgen_ptr_in_nursery

#define LOAD_VTABLE	SGEN_LOAD_VTABLE

gboolean
nursery_canaries_enabled (void)
{
	return enable_nursery_canaries;
}

#define safe_object_get_size	sgen_safe_object_get_size

#if defined(HAVE_CONC_GC_AS_DEFAULT)
/* Use concurrent major on desktop platforms */
#define DEFAULT_MAJOR_INIT	sgen_marksweep_conc_init
#define DEFAULT_MAJOR_NAME	"marksweep-conc"
#else
#define DEFAULT_MAJOR_INIT	sgen_marksweep_init
#define DEFAULT_MAJOR_NAME	"marksweep"
#endif
/*
 * ######################################################################
 * ######## Global data.
 * ######################################################################
 */
MonoCoopMutex gc_mutex;

#define SCAN_START_SIZE	SGEN_SCAN_START_SIZE

size_t degraded_mode = 0;

static mword bytes_pinned_from_failed_allocation = 0;

GCMemSection *nursery_section = NULL;
static volatile mword lowest_heap_address = ~(mword)0;
static volatile mword highest_heap_address = 0;

MonoCoopMutex sgen_interruption_mutex;

int current_collection_generation = -1;
static volatile gboolean concurrent_collection_in_progress = FALSE;

/* objects that are ready to be finalized */
static SgenPointerQueue fin_ready_queue = SGEN_POINTER_QUEUE_INIT (INTERNAL_MEM_FINALIZE_READY);
static SgenPointerQueue critical_fin_queue = SGEN_POINTER_QUEUE_INIT (INTERNAL_MEM_FINALIZE_READY);

/* registered roots: the key to the hash is the root start address */
/*
 * Different kinds of roots are kept separate to speed up pin_from_roots () for example.
 */
SgenHashTable roots_hash [ROOT_TYPE_NUM] = {
	SGEN_HASH_TABLE_INIT (INTERNAL_MEM_ROOTS_TABLE, INTERNAL_MEM_ROOT_RECORD, sizeof (RootRecord), sgen_aligned_addr_hash, NULL),
	SGEN_HASH_TABLE_INIT (INTERNAL_MEM_ROOTS_TABLE, INTERNAL_MEM_ROOT_RECORD, sizeof (RootRecord), sgen_aligned_addr_hash, NULL),
	SGEN_HASH_TABLE_INIT (INTERNAL_MEM_ROOTS_TABLE, INTERNAL_MEM_ROOT_RECORD, sizeof (RootRecord), sgen_aligned_addr_hash, NULL)
};
static mword roots_size = 0; /* amount of memory in the root set */

/* The size of a TLAB */
/* The bigger the value, the less often we have to go to the slow path to allocate a new
 * one, but the more space is wasted by threads not allocating much memory.
 * FIXME: Tune this.
 * FIXME: Make this self-tuning for each thread.
 */
guint32 tlab_size = (1024 * 4);
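
/*
 * A minimal sketch of the bump-pointer TLAB fast path that tlab_size controls.
 * The real fast path lives in the client allocator; `tlab_next` and
 * `tlab_real_end` below are illustrative names, not the actual variables.
 * Each thread allocates by bumping a pointer inside its own TLAB and only
 * takes the slow path (a new TLAB of tlab_size bytes, or a collection) when
 * the TLAB is exhausted.
 */
#if 0
static __thread char *tlab_next, *tlab_real_end;	/* hypothetical */

static void*
tlab_alloc_sketch (size_t size)
{
	size = ALIGN_UP (size);
	if (tlab_next + size > tlab_real_end)
		return NULL;		/* slow path: new TLAB or collection */
	void *obj = tlab_next;		/* memory is already zeroed */
	tlab_next += size;
	return obj;
}
#endif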
#define MAX_SMALL_OBJ_SIZE	SGEN_MAX_SMALL_OBJ_SIZE

#define ALLOC_ALIGN		SGEN_ALLOC_ALIGN

#define ALIGN_UP		SGEN_ALIGN_UP

#ifdef SGEN_DEBUG_INTERNAL_ALLOC
MonoNativeThreadId main_gc_thread = NULL;
#endif

/* Object was pinned during the current collection */
static mword objects_pinned;

/*
 * ######################################################################
 * ######## Macros and function declarations.
 * ######################################################################
 */
/* forward declarations */
static void scan_from_registered_roots (char *addr_start, char *addr_end, int root_type, ScanCopyContext ctx);

static void pin_from_roots (void *start_nursery, void *end_nursery, ScanCopyContext ctx);
static void finish_gray_stack (int generation, ScanCopyContext ctx);

SgenMajorCollector major_collector;
SgenMinorCollector sgen_minor_collector;

static SgenRememberedSet remset;
/*
 * The gray queue a worker job must use.  If we're not parallel or
 * concurrent, we use the main gray queue.
 */
static SgenGrayQueue*
sgen_workers_get_job_gray_queue (WorkerData *worker_data, SgenGrayQueue *default_gray_queue)
{
	if (worker_data)
		return &worker_data->private_gray_queue;
	SGEN_ASSERT (0, default_gray_queue, "Why don't we have a default gray queue when we're not running in a worker thread?");
	return default_gray_queue;
}

static void
gray_queue_redirect (SgenGrayQueue *queue)
{
	SGEN_ASSERT (0, concurrent_collection_in_progress, "Where are we redirecting the gray queue to, without a concurrent collection?");

	sgen_workers_take_from_queue (queue);
}

void
sgen_scan_area_with_callback (char *start, char *end, IterateObjectCallbackFunc callback, void *data, gboolean allow_flags, gboolean fail_on_canaries)
{
	while (start < end) {
		size_t size;
		char *obj;

		if (!*(void**)start) {
			start += sizeof (void*); /* should be ALLOC_ALIGN, really */
			continue;
		}

		if (allow_flags) {
			if (!(obj = (char *)SGEN_OBJECT_IS_FORWARDED (start)))
				obj = start;
		} else {
			obj = start;
		}

		if (!sgen_client_object_is_array_fill ((GCObject*)obj)) {
			CHECK_CANARY_FOR_OBJECT ((GCObject*)obj, fail_on_canaries);
			size = ALIGN_UP (safe_object_get_size ((GCObject*)obj));
			callback ((GCObject*)obj, size, data);
			CANARIFY_SIZE (size);
		} else {
			size = ALIGN_UP (safe_object_get_size ((GCObject*)obj));
		}

		start += size;
	}
}
/*
 * sgen_add_to_global_remset:
 *
 *   The global remset contains locations which point into newspace after
 * a minor collection.  This can happen if the objects they point to are pinned.
 *
 * LOCKING: If called from a parallel collector, the global remset
 * lock must be held.  For serial collectors that is not necessary.
 */
void
sgen_add_to_global_remset (gpointer ptr, GCObject *obj)
{
	SGEN_ASSERT (5, sgen_ptr_in_nursery (obj), "Target pointer of global remset must be in the nursery");

	HEAVY_STAT (++stat_wbarrier_add_to_global_remset);

	if (!major_collector.is_concurrent) {
		SGEN_ASSERT (5, current_collection_generation != -1, "Global remsets can only be added during collections");
	} else {
		if (current_collection_generation == -1)
			SGEN_ASSERT (5, sgen_concurrent_collection_in_progress (), "Global remsets outside of collection pauses can only be added by the concurrent collector");
	}

	if (!object_is_pinned (obj))
		SGEN_ASSERT (5, sgen_minor_collector.is_split || sgen_concurrent_collection_in_progress (), "Non-pinned objects can only remain in nursery if it is a split nursery");
	else if (sgen_cement_lookup_or_register (obj))
		return;

	remset.record_pointer (ptr);

	sgen_pin_stats_register_global_remset (obj);

	SGEN_LOG (8, "Adding global remset for %p", ptr);
	binary_protocol_global_remset (ptr, obj, (gpointer)SGEN_LOAD_VTABLE (obj));
}
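
/*
 * Illustrative sketch only (hypothetical helper, not an sgen API): the kind
 * of situation that produces a global remset entry.  During a collection a
 * location outside the nursery is found to still point at a nursery object
 * that could not be promoted, so the location is recorded and will be
 * rescanned at the next minor collection.
 */
#if 0
static void
example_record_old_to_new (gpointer *slot, GCObject *val)
{
	if (sgen_ptr_in_nursery (val) && !sgen_ptr_in_nursery (slot))
		sgen_add_to_global_remset (slot, val);
}
#endif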
/*
 * sgen_drain_gray_stack:
 *
 *   Scan objects in the gray stack until the stack is empty. This should be called
 * frequently after each object is copied, to achieve better locality and cache
 * usage.
 */
gboolean
sgen_drain_gray_stack (ScanCopyContext ctx)
{
	SGEN_ASSERT (0, ctx.ops->drain_gray_stack, "Why do we have a scan/copy context with a missing drain gray stack function?");

	return ctx.ops->drain_gray_stack (ctx.queue);
}

/*
 * Addresses in the pin queue are already sorted. This function finds
 * the object header for each address and pins the object. The
 * addresses must be inside the nursery section. The (start of the)
 * address array is overwritten with the addresses of the actually
 * pinned objects. Return the number of pinned objects.
 */
static int
pin_objects_from_nursery_pin_queue (gboolean do_scan_objects, ScanCopyContext ctx)
{
	GCMemSection *section = nursery_section;
	void **start = sgen_pinning_get_entry (section->pin_queue_first_entry);
	void **end = sgen_pinning_get_entry (section->pin_queue_last_entry);
	void *start_nursery = section->data;
	void *end_nursery = section->next_data;
	void *last = NULL;
	int count = 0;
	void *search_start;
	void *addr;
	void *pinning_front = start_nursery;
	size_t idx;
	void **definitely_pinned = start;
	ScanObjectFunc scan_func = ctx.ops->scan_object;
	SgenGrayQueue *queue = ctx.queue;

	sgen_nursery_allocator_prepare_for_pinning ();

	while (start < end) {
		GCObject *obj_to_pin = NULL;
		size_t obj_to_pin_size = 0;
		SgenDescriptor desc;

		addr = *start;

		SGEN_ASSERT (0, addr >= start_nursery && addr < end_nursery, "Potential pinning address out of range");
		SGEN_ASSERT (0, addr >= last, "Pin queue not sorted");

		if (addr == last) {
			++start;
			continue;
		}

		SGEN_LOG (5, "Considering pinning addr %p", addr);
		/* We've already processed everything up to pinning_front. */
		if (addr < pinning_front) {
			start++;
			continue;
		}

		/*
		 * Find the closest scan start <= addr.  We might search backward in the
		 * scan_starts array because entries might be NULL.  In the worst case we
		 * start at start_nursery.
		 */
		idx = ((char*)addr - (char*)section->data) / SCAN_START_SIZE;
		SGEN_ASSERT (0, idx < section->num_scan_start, "Scan start index out of range");
		search_start = (void*)section->scan_starts [idx];
		if (!search_start || search_start > addr) {
			while (idx) {
				--idx;
				search_start = section->scan_starts [idx];
				if (search_start && search_start <= addr)
					break;
			}
			if (!search_start || search_start > addr)
				search_start = start_nursery;
		}

		/*
		 * If the pinning front is closer than the scan start we found, start
		 * searching at the front.
		 */
		if (search_start < pinning_front)
			search_start = pinning_front;

		/*
		 * Now addr should be in an object a short distance from search_start.
		 *
		 * search_start must point to zeroed mem or point to an object.
		 */
		do {
			size_t obj_size, canarified_obj_size;

			/* Skip zeros. */
			if (!*(void**)search_start) {
				search_start = (void*)ALIGN_UP ((mword)search_start + sizeof (gpointer));
				/* The loop condition makes sure we don't overrun addr. */
				continue;
			}

			canarified_obj_size = obj_size = ALIGN_UP (safe_object_get_size ((GCObject*)search_start));

			/*
			 * Filler arrays are marked by an invalid sync word.  We don't
			 * consider them for pinning.  They are not delimited by canaries,
			 * either.
			 */
			if (!sgen_client_object_is_array_fill ((GCObject*)search_start)) {
				CHECK_CANARY_FOR_OBJECT (search_start, TRUE);
				CANARIFY_SIZE (canarified_obj_size);

				if (addr >= search_start && (char*)addr < (char*)search_start + obj_size) {
					/* This is the object we're looking for. */
					obj_to_pin = (GCObject*)search_start;
					obj_to_pin_size = canarified_obj_size;
					break;
				}
			}

			/* Skip to the next object */
			search_start = (void*)((char*)search_start + canarified_obj_size);
		} while (search_start <= addr);

		/* We've searched past the address we were looking for. */
		if (!obj_to_pin) {
			pinning_front = search_start;
			goto next_pin_queue_entry;
		}

		/*
		 * We've found an object to pin.  It might still be a dummy array, but we
		 * can advance the pinning front in any case.
		 */
		pinning_front = (char*)obj_to_pin + obj_to_pin_size;

		/*
		 * If this is a dummy array marking the beginning of a nursery
		 * fragment, we don't pin it.
		 */
		if (sgen_client_object_is_array_fill (obj_to_pin))
			goto next_pin_queue_entry;

		/*
		 * Finally - pin the object!
		 */
		desc = sgen_obj_get_descriptor_safe (obj_to_pin);
		if (do_scan_objects) {
			scan_func (obj_to_pin, desc, queue);
		} else {
			SGEN_LOG (4, "Pinned object %p, vtable %p (%s), count %d\n",
					obj_to_pin, *(void**)obj_to_pin, sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (obj_to_pin)), count);
			binary_protocol_pin (obj_to_pin,
					(gpointer)LOAD_VTABLE (obj_to_pin),
					safe_object_get_size (obj_to_pin));

			pin_object (obj_to_pin);
			GRAY_OBJECT_ENQUEUE_SERIAL (queue, obj_to_pin, desc);
			sgen_pin_stats_register_object (obj_to_pin, GENERATION_NURSERY);
			definitely_pinned [count] = obj_to_pin;
			count++;
		}
		if (concurrent_collection_in_progress)
			sgen_pinning_register_pinned_in_nursery (obj_to_pin);

	next_pin_queue_entry:
		last = addr;
		++start;
	}
	sgen_client_nursery_objects_pinned (definitely_pinned, count);
	stat_pinned_objects += count;
	return count;
}
static void
pin_objects_in_nursery (gboolean do_scan_objects, ScanCopyContext ctx)
{
	size_t reduced_to;

	if (nursery_section->pin_queue_first_entry == nursery_section->pin_queue_last_entry)
		return;

	reduced_to = pin_objects_from_nursery_pin_queue (do_scan_objects, ctx);
	nursery_section->pin_queue_last_entry = nursery_section->pin_queue_first_entry + reduced_to;
}

/*
 * This function is only ever called (via `collector_pin_object()` in `sgen-copy-object.h`)
 * when we can't promote an object because we're out of memory.
 */
void
sgen_pin_object (GCObject *object, SgenGrayQueue *queue)
{
	SGEN_ASSERT (0, sgen_ptr_in_nursery (object), "We're only supposed to use this for pinning nursery objects when out of memory.");

	/*
	 * All pinned objects are assumed to have been staged, so we need to stage as well.
	 * Also, the count of staged objects shows that "late pinning" happened.
	 */
	sgen_pin_stage_ptr (object);

	SGEN_PIN_OBJECT (object);
	binary_protocol_pin (object, (gpointer)LOAD_VTABLE (object), safe_object_get_size (object));

	++objects_pinned;
	sgen_pin_stats_register_object (object, GENERATION_NURSERY);

	GRAY_OBJECT_ENQUEUE_SERIAL (queue, object, sgen_obj_get_descriptor_safe (object));
}
/* Sort the addresses in array in increasing order.
 * Done using a by-the-book heap sort, which has decent and stable performance
 * and is pretty cache efficient.
 */
void
sgen_sort_addresses (void **array, size_t size)
{
	size_t i;
	void *tmp;

	for (i = 1; i < size; ++i) {
		size_t child = i;
		while (child > 0) {
			size_t parent = (child - 1) / 2;

			if (array [parent] >= array [child])
				break;

			tmp = array [parent];
			array [parent] = array [child];
			array [child] = tmp;

			child = parent;
		}
	}

	for (i = size - 1; i > 0; --i) {
		size_t end, root;
		tmp = array [i];
		array [i] = array [0];
		array [0] = tmp;

		end = i - 1;
		root = 0;

		while (root * 2 + 1 <= end) {
			size_t child = root * 2 + 1;

			if (child < end && array [child] < array [child + 1])
				++child;
			if (array [root] >= array [child])
				break;

			tmp = array [root];
			array [root] = array [child];
			array [child] = tmp;

			root = child;
		}
	}
}
/*
 * Scan the memory between start and end and queue values which could be pointers
 * to the area between start_nursery and end_nursery for later consideration.
 * Typically used for thread stacks.
 */
void
sgen_conservatively_pin_objects_from (void **start, void **end, void *start_nursery, void *end_nursery, int pin_type)
{
	int count = 0;

	SGEN_ASSERT (0, ((mword)start & (SIZEOF_VOID_P - 1)) == 0, "Why are we scanning for references in unaligned memory ?");

#if defined(VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE) && !defined(_WIN64)
	VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE (start, (char*)end - (char*)start);
#endif

	while (start < end) {
		/*
		 * *start can point to the middle of an object
		 * note: should we handle pointing at the end of an object?
		 * pinning in C# code disallows pointing at the end of an object
		 * but there is some small chance that an optimizing C compiler
		 * may keep the only reference to an object by pointing
		 * at the end of it. We ignore this small chance for now.
		 * Pointers to the end of an object are indistinguishable
		 * from pointers to the start of the next object in memory
		 * so if we allow that we'd need to pin two objects...
		 * We queue the pointer in an array, the
		 * array will then be sorted and uniqued. This way
		 * we can coalesce several pinning pointers and it should
		 * be faster since we'd do a memory scan with increasing
		 * addresses. Note: we can align the address to the allocation
		 * alignment, so the unique process is more effective.
		 */
		mword addr = (mword)*start;
		addr &= ~(ALLOC_ALIGN - 1);
		if (addr >= (mword)start_nursery && addr < (mword)end_nursery) {
			SGEN_LOG (6, "Pinning address %p from %p", (void*)addr, start);
			sgen_pin_stage_ptr ((void*)addr);
			binary_protocol_pin_stage (start, (void*)addr);
			sgen_pin_stats_register_address ((char*)addr, pin_type);
			count++;
		}
		start++;
	}
	if (count)
		SGEN_LOG (7, "found %d potential pinned heap pointers", count);
}
/*
 * The first thing we do in a collection is to identify pinned objects.
 * This function considers all the areas of memory that need to be
 * conservatively scanned.
 */
static void
pin_from_roots (void *start_nursery, void *end_nursery, ScanCopyContext ctx)
{
	void **start_root;
	RootRecord *root;
	SGEN_LOG (2, "Scanning pinned roots (%d bytes, %d/%d entries)", (int)roots_size, roots_hash [ROOT_TYPE_NORMAL].num_entries, roots_hash [ROOT_TYPE_PINNED].num_entries);
	/* objects pinned from the API are inside these roots */
	SGEN_HASH_TABLE_FOREACH (&roots_hash [ROOT_TYPE_PINNED], void **, start_root, RootRecord *, root) {
		SGEN_LOG (6, "Pinned roots %p-%p", start_root, root->end_root);
		sgen_conservatively_pin_objects_from (start_root, (void**)root->end_root, start_nursery, end_nursery, PIN_TYPE_OTHER);
	} SGEN_HASH_TABLE_FOREACH_END;
	/* now deal with the thread stacks
	 * in the future we should be able to conservatively scan only:
	 * *) the cpu registers
	 * *) the unmanaged stack frames
	 * *) the _last_ managed stack frame
	 * *) pointer slots in managed frames
	 */
	sgen_client_scan_thread_data (start_nursery, end_nursery, FALSE, ctx);
}
static void
single_arg_user_copy_or_mark (GCObject **obj, void *gc_data)
{
	ScanCopyContext *ctx = (ScanCopyContext *)gc_data;
	ctx->ops->copy_or_mark_object (obj, ctx->queue);
}

/*
 * The memory area from start_root to end_root contains pointers to objects.
 * Their position is precisely described by @desc (this means that the pointer
 * can be either NULL or the pointer to the start of an object).
 * This function copies them to to_space and updates them.
 *
 * This function is not thread-safe!
 */
static void
precisely_scan_objects_from (void** start_root, void** end_root, char* n_start, char *n_end, SgenDescriptor desc, ScanCopyContext ctx)
{
	CopyOrMarkObjectFunc copy_func = ctx.ops->copy_or_mark_object;
	ScanPtrFieldFunc scan_field_func = ctx.ops->scan_ptr_field;
	SgenGrayQueue *queue = ctx.queue;

	switch (desc & ROOT_DESC_TYPE_MASK) {
	case ROOT_DESC_BITMAP:
		desc >>= ROOT_DESC_TYPE_SHIFT;
		while (desc) {
			if ((desc & 1) && *start_root) {
				copy_func ((GCObject**)start_root, queue);
				SGEN_LOG (9, "Overwrote root at %p with %p", start_root, *start_root);
			}
			desc >>= 1;
			start_root++;
		}
		return;
	case ROOT_DESC_COMPLEX: {
		gsize *bitmap_data = (gsize *)sgen_get_complex_descriptor_bitmap (desc);
		gsize bwords = (*bitmap_data) - 1;
		void **start_run = start_root;
		bitmap_data++;
		while (bwords-- > 0) {
			gsize bmap = *bitmap_data++;
			void **objptr = start_run;
			while (bmap) {
				if ((bmap & 1) && *objptr) {
					copy_func ((GCObject**)objptr, queue);
					SGEN_LOG (9, "Overwrote root at %p with %p", objptr, *objptr);
				}
				bmap >>= 1;
				++objptr;
			}
			start_run += GC_BITS_PER_WORD;
		}
		break;
	}
	case ROOT_DESC_VECTOR: {
		void **p;

		for (p = start_root; p < end_root; p++) {
			if (*p)
				scan_field_func (NULL, (GCObject**)p, queue);
		}
		break;
	}
	case ROOT_DESC_USER: {
		SgenUserRootMarkFunc marker = sgen_get_user_descriptor_func (desc);
		marker (start_root, single_arg_user_copy_or_mark, &ctx);
		break;
	}
	case ROOT_DESC_RUN_LEN:
		g_assert_not_reached ();
	default:
		g_assert_not_reached ();
	}
}
static void
reset_heap_boundaries (void)
{
	lowest_heap_address = ~(mword)0;
	highest_heap_address = 0;
}

void
sgen_update_heap_boundaries (mword low, mword high)
{
	mword old;

	do {
		old = lowest_heap_address;
		if (low >= old)
			break;
	} while (SGEN_CAS_PTR ((gpointer*)&lowest_heap_address, (gpointer)low, (gpointer)old) != (gpointer)old);

	do {
		old = highest_heap_address;
		if (high <= old)
			break;
	} while (SGEN_CAS_PTR ((gpointer*)&highest_heap_address, (gpointer)high, (gpointer)old) != (gpointer)old);
}
/*
 * Allocate and setup the data structures needed to be able to allocate objects
 * in the nursery. The nursery is stored in nursery_section.
 */
static void
alloc_nursery (void)
{
	GCMemSection *section;
	char *data;
	size_t scan_starts;
	size_t alloc_size;

	if (nursery_section)
		return;
	SGEN_LOG (2, "Allocating nursery size: %zu", (size_t)sgen_nursery_size);
	/* later we will alloc a larger area for the nursery but only activate
	 * what we need. The rest will be used as expansion if we have too many pinned
	 * objects in the existing nursery.
	 */
	/* FIXME: handle OOM */
	section = (GCMemSection *)sgen_alloc_internal (INTERNAL_MEM_SECTION);

	alloc_size = sgen_nursery_size;

	/* If there isn't enough space even for the nursery we should simply abort. */
	g_assert (sgen_memgov_try_alloc_space (alloc_size, SPACE_NURSERY));

	data = (char *)major_collector.alloc_heap (alloc_size, alloc_size, DEFAULT_NURSERY_BITS);
	sgen_update_heap_boundaries ((mword)data, (mword)(data + sgen_nursery_size));
	SGEN_LOG (4, "Expanding nursery size (%p-%p): %lu, total: %lu", data, data + alloc_size, (unsigned long)sgen_nursery_size, (unsigned long)sgen_gc_get_total_heap_allocation ());
	section->data = section->next_data = data;
	section->size = alloc_size;
	section->end_data = data + sgen_nursery_size;
	scan_starts = (alloc_size + SCAN_START_SIZE - 1) / SCAN_START_SIZE;
	section->scan_starts = (char **)sgen_alloc_internal_dynamic (sizeof (char*) * scan_starts, INTERNAL_MEM_SCAN_STARTS, TRUE);
	section->num_scan_start = scan_starts;

	nursery_section = section;

	sgen_nursery_allocator_set_nursery_bounds (data, data + sgen_nursery_size);
}
FILE *
mono_gc_get_logfile (void)
{
	return gc_debug_file;
}

void
mono_gc_params_set (const char* options)
{
	if (gc_params_options)
		g_free (gc_params_options);

	gc_params_options = g_strdup (options);
}

void
mono_gc_debug_set (const char* options)
{
	if (gc_debug_options)
		g_free (gc_debug_options);

	gc_debug_options = g_strdup (options);
}
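
/*
 * Usage sketch (illustrative, not part of this file): an embedder can hand
 * the GC its parameter and debug strings before runtime initialization,
 * using the same syntax as the MONO_GC_PARAMS and MONO_GC_DEBUG environment
 * variables.  The concrete option strings below are examples; see the sgen
 * option parsing for the full set.
 *
 *	mono_gc_params_set ("major=marksweep-conc,nursery-size=64m");
 *	mono_gc_debug_set ("2");
 */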
static void
scan_finalizer_entries (SgenPointerQueue *fin_queue, ScanCopyContext ctx)
{
	CopyOrMarkObjectFunc copy_func = ctx.ops->copy_or_mark_object;
	SgenGrayQueue *queue = ctx.queue;
	size_t i;

	for (i = 0; i < fin_queue->next_slot; ++i) {
		GCObject *obj = (GCObject *)fin_queue->data [i];
		if (!obj)
			continue;
		SGEN_LOG (5, "Scan of fin ready object: %p (%s)\n", obj, sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (obj)));
		copy_func ((GCObject**)&fin_queue->data [i], queue);
	}
}

static const char*
generation_name (int generation)
{
	switch (generation) {
	case GENERATION_NURSERY: return "nursery";
	case GENERATION_OLD: return "old";
	default: g_assert_not_reached ();
	}
}

const char*
sgen_generation_name (int generation)
{
	return generation_name (generation);
}
static void
finish_gray_stack (int generation, ScanCopyContext ctx)
{
	TV_DECLARE (atv);
	TV_DECLARE (btv);
	int done_with_ephemerons, ephemeron_rounds = 0;
	char *start_addr = generation == GENERATION_NURSERY ? sgen_get_nursery_start () : NULL;
	char *end_addr = generation == GENERATION_NURSERY ? sgen_get_nursery_end () : (char*)-1;
	SgenGrayQueue *queue = ctx.queue;

	binary_protocol_finish_gray_stack_start (sgen_timestamp (), generation);
	/*
	 * We copied all the reachable objects. Now it's the time to copy
	 * the objects that were not referenced by the roots, but by the copied objects.
	 * we built a stack of objects pointed to by gray_start: they are
	 * additional roots and we may add more items as we go.
	 * We loop until gray_start == gray_objects which means no more objects have
	 * been added. Note this is iterative: no recursion is involved.
	 * We need to walk the LO list as well in search of marked big objects
	 * (use a flag since this is needed only on major collections). We need to loop
	 * here as well, so keep a counter of marked LO (increasing it in copy_object).
	 * To achieve better cache locality and cache usage, we drain the gray stack
	 * frequently, after each object is copied, and just finish the work here.
	 */
	sgen_drain_gray_stack (ctx);
	TV_GETTIME (atv);
	SGEN_LOG (2, "%s generation done", generation_name (generation));

	/*
	Reset bridge data: we might have lingering data from a previous collection if this is a major
	collection triggered by minor overflow.

	We must reset the gathered bridges since their original block might be evacuated due to major
	fragmentation in the meanwhile and the bridge code should not have to deal with that.
	*/
	if (sgen_client_bridge_need_processing ())
		sgen_client_bridge_reset_data ();

	/*
	 * Mark all strong toggleref objects. This must be done before we walk ephemerons or finalizers
	 * to ensure they see the full set of live objects.
	 */
	sgen_client_mark_togglerefs (start_addr, end_addr, ctx);

	/*
	 * Walk the ephemeron tables marking all values with reachable keys. This must be completely done
	 * before processing finalizable objects and non-tracking weak links to avoid finalizing/clearing
	 * objects that are in fact reachable.
	 */
	done_with_ephemerons = 0;
	do {
		done_with_ephemerons = sgen_client_mark_ephemerons (ctx);
		sgen_drain_gray_stack (ctx);
		++ephemeron_rounds;
	} while (!done_with_ephemerons);

	if (sgen_client_bridge_need_processing ()) {
		/* Make sure the gray stack is empty before we process bridge objects so we get liveness right */
		sgen_drain_gray_stack (ctx);
		sgen_collect_bridge_objects (generation, ctx);
		if (generation == GENERATION_OLD)
			sgen_collect_bridge_objects (GENERATION_NURSERY, ctx);

		/*
		Do the first bridge step here, as the collector liveness state will become useless after that.

		An important optimization is to only process the possibly dead part of the object graph and skip
		over all live objects, as we transitively know everything they point to must be alive too.

		The above invariant is completely wrong if we let the gray queue be drained and mark/copy everything.

		This has the unfortunate side effect of making overflow collections perform the first step twice, but
		given we now have heuristics that perform major GC in anticipation of minor overflows this should not
		be a big deal.
		*/
		sgen_client_bridge_processing_stw_step ();
	}

	/*
	Make sure we drain the gray stack before processing disappearing links and finalizers.
	If we don't make sure it is empty we might wrongly see a live object as dead.
	*/
	sgen_drain_gray_stack (ctx);

	/*
	We must clear weak links that don't track resurrection before processing objects ready for
	finalization so they can be cleared before that.
	*/
	sgen_null_link_in_range (generation, ctx, FALSE);
	if (generation == GENERATION_OLD)
		sgen_null_link_in_range (GENERATION_NURSERY, ctx, FALSE);


	/* walk the finalization queue and move also the objects that need to be
	 * finalized: use the finalized objects as new roots so the objects they depend
	 * on are also not reclaimed. As with the roots above, only objects in the nursery
	 * are marked/copied.
	 */
	sgen_finalize_in_range (generation, ctx);
	if (generation == GENERATION_OLD)
		sgen_finalize_in_range (GENERATION_NURSERY, ctx);
	/* drain the new stack that might have been created */
	SGEN_LOG (6, "Precise scan of gray area post fin");
	sgen_drain_gray_stack (ctx);

	/*
	 * This must be done again after processing finalizable objects since CWL slots are cleared only after the key is finalized.
	 */
	done_with_ephemerons = 0;
	do {
		done_with_ephemerons = sgen_client_mark_ephemerons (ctx);
		sgen_drain_gray_stack (ctx);
		++ephemeron_rounds;
	} while (!done_with_ephemerons);

	sgen_client_clear_unreachable_ephemerons (ctx);

	/*
	 * We clear togglerefs only after all possible chances of revival are done.
	 * This is semantically more in line with what users expect and it allows for
	 * user finalizers to correctly interact with TR objects.
	 */
	sgen_client_clear_togglerefs (start_addr, end_addr, ctx);

	TV_GETTIME (btv);
	SGEN_LOG (2, "Finalize queue handling scan for %s generation: %lld usecs %d ephemeron rounds", generation_name (generation), (long long)TV_ELAPSED (atv, btv), ephemeron_rounds);

	/*
	 * handle disappearing links
	 * Note we do this after checking the finalization queue because if an object
	 * survives (at least long enough to be finalized) we don't clear the link.
	 * This also deals with a possible issue with the monitor reclamation: with the Boehm
	 * GC a finalized object may lose the monitor because it is cleared before the finalizer is
	 * called.
	 */
	g_assert (sgen_gray_object_queue_is_empty (queue));
	for (;;) {
		sgen_null_link_in_range (generation, ctx, TRUE);
		if (generation == GENERATION_OLD)
			sgen_null_link_in_range (GENERATION_NURSERY, ctx, TRUE);
		if (sgen_gray_object_queue_is_empty (queue))
			break;
		sgen_drain_gray_stack (ctx);
	}

	g_assert (sgen_gray_object_queue_is_empty (queue));

	binary_protocol_finish_gray_stack_end (sgen_timestamp (), generation);
}
void
sgen_check_section_scan_starts (GCMemSection *section)
{
	size_t i;
	for (i = 0; i < section->num_scan_start; ++i) {
		if (section->scan_starts [i]) {
			mword size = safe_object_get_size ((GCObject*) section->scan_starts [i]);
			SGEN_ASSERT (0, size >= SGEN_CLIENT_MINIMUM_OBJECT_SIZE && size <= MAX_SMALL_OBJ_SIZE, "Weird object size at scan starts.");
		}
	}
}

static void
check_scan_starts (void)
{
	if (!do_scan_starts_check)
		return;
	sgen_check_section_scan_starts (nursery_section);
	major_collector.check_scan_starts ();
}

static void
scan_from_registered_roots (char *addr_start, char *addr_end, int root_type, ScanCopyContext ctx)
{
	void **start_root;
	RootRecord *root;
	SGEN_HASH_TABLE_FOREACH (&roots_hash [root_type], void **, start_root, RootRecord *, root) {
		SGEN_LOG (6, "Precise root scan %p-%p (desc: %p)", start_root, root->end_root, (void*)root->root_desc);
		precisely_scan_objects_from (start_root, (void**)root->end_root, addr_start, addr_end, root->root_desc, ctx);
	} SGEN_HASH_TABLE_FOREACH_END;
}
static void
init_stats (void)
{
	static gboolean inited = FALSE;

	if (inited)
		return;

	mono_counters_register ("Collection max time", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME | MONO_COUNTER_MONOTONIC, &time_max);

	mono_counters_register ("Minor fragment clear", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_pre_collection_fragment_clear);
	mono_counters_register ("Minor pinning", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_pinning);
	mono_counters_register ("Minor scan remembered set", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_scan_remsets);
	mono_counters_register ("Minor scan pinned", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_scan_pinned);
	mono_counters_register ("Minor scan roots", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_scan_roots);
	mono_counters_register ("Minor fragment creation", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_minor_fragment_creation);

	mono_counters_register ("Major fragment clear", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_pre_collection_fragment_clear);
	mono_counters_register ("Major pinning", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_pinning);
	mono_counters_register ("Major scan pinned", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_scan_pinned);
	mono_counters_register ("Major scan roots", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_scan_roots);
	mono_counters_register ("Major scan mod union", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_scan_mod_union);
	mono_counters_register ("Major finish gray stack", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_finish_gray_stack);
	mono_counters_register ("Major free big objects", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_free_bigobjs);
	mono_counters_register ("Major LOS sweep", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_los_sweep);
	mono_counters_register ("Major sweep", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_sweep);
	mono_counters_register ("Major fragment creation", MONO_COUNTER_GC | MONO_COUNTER_ULONG | MONO_COUNTER_TIME, &time_major_fragment_creation);

	mono_counters_register ("Number of pinned objects", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_pinned_objects);

#ifdef HEAVY_STATISTICS
	mono_counters_register ("WBarrier remember pointer", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_add_to_global_remset);
	mono_counters_register ("WBarrier arrayref copy", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_arrayref_copy);
	mono_counters_register ("WBarrier generic store called", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_generic_store);
	mono_counters_register ("WBarrier generic atomic store called", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_generic_store_atomic);
	mono_counters_register ("WBarrier set root", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_wbarrier_set_root);

	mono_counters_register ("# objects allocated degraded", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_objects_alloced_degraded);
	mono_counters_register ("bytes allocated degraded", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_bytes_alloced_degraded);

	mono_counters_register ("# copy_object() called (nursery)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_copy_object_called_nursery);
	mono_counters_register ("# objects copied (nursery)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_objects_copied_nursery);
	mono_counters_register ("# copy_object() called (major)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_copy_object_called_major);
	mono_counters_register ("# objects copied (major)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_objects_copied_major);

	mono_counters_register ("# scan_object() called (nursery)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_scan_object_called_nursery);
	mono_counters_register ("# scan_object() called (major)", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_scan_object_called_major);

	mono_counters_register ("Slots allocated in vain", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_slots_allocated_in_vain);

	mono_counters_register ("# nursery copy_object() failed from space", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_from_space);
	mono_counters_register ("# nursery copy_object() failed forwarded", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_forwarded);
	mono_counters_register ("# nursery copy_object() failed pinned", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_pinned);
	mono_counters_register ("# nursery copy_object() failed to space", MONO_COUNTER_GC | MONO_COUNTER_ULONG, &stat_nursery_copy_object_failed_to_space);

	sgen_nursery_allocator_init_heavy_stats ();
#endif

	inited = TRUE;
}
static void
reset_pinned_from_failed_allocation (void)
{
	bytes_pinned_from_failed_allocation = 0;
}

void
sgen_set_pinned_from_failed_allocation (mword objsize)
{
	bytes_pinned_from_failed_allocation += objsize;
}

gboolean
sgen_collection_is_concurrent (void)
{
	switch (current_collection_generation) {
	case GENERATION_NURSERY:
		return FALSE;
	case GENERATION_OLD:
		return concurrent_collection_in_progress;
	default:
		g_error ("Invalid current generation %d", current_collection_generation);
	}
	return FALSE;
}

gboolean
sgen_concurrent_collection_in_progress (void)
{
	return concurrent_collection_in_progress;
}
typedef struct {
	SgenThreadPoolJob job;
	SgenObjectOperations *ops;
	SgenGrayQueue *gc_thread_gray_queue;
} ScanJob;

typedef struct {
	ScanJob scan_job;
	int job_index;
} ParallelScanJob;

static ScanCopyContext
scan_copy_context_for_scan_job (void *worker_data_untyped, ScanJob *job)
{
	WorkerData *worker_data = (WorkerData *)worker_data_untyped;

	return CONTEXT_FROM_OBJECT_OPERATIONS (job->ops, sgen_workers_get_job_gray_queue (worker_data, job->gc_thread_gray_queue));
}

static void
job_remembered_set_scan (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	remset.scan_remsets (scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job));
}

typedef struct {
	ScanJob scan_job;
	char *heap_start;
	char *heap_end;
	int root_type;
} ScanFromRegisteredRootsJob;

static void
job_scan_from_registered_roots (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanFromRegisteredRootsJob *job_data = (ScanFromRegisteredRootsJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, &job_data->scan_job);

	scan_from_registered_roots (job_data->heap_start, job_data->heap_end, job_data->root_type, ctx);
}

typedef struct {
	ScanJob scan_job;
	char *heap_start;
	char *heap_end;
} ScanThreadDataJob;

static void
job_scan_thread_data (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanThreadDataJob *job_data = (ScanThreadDataJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, &job_data->scan_job);

	sgen_client_scan_thread_data (job_data->heap_start, job_data->heap_end, TRUE, ctx);
}

typedef struct {
	ScanJob scan_job;
	SgenPointerQueue *queue;
} ScanFinalizerEntriesJob;

static void
job_scan_finalizer_entries (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanFinalizerEntriesJob *job_data = (ScanFinalizerEntriesJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, &job_data->scan_job);

	scan_finalizer_entries (job_data->queue, ctx);
}

static void
job_scan_major_mod_union_card_table (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);
	major_collector.scan_card_table (CARDTABLE_SCAN_MOD_UNION, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_scan_los_mod_union_card_table (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);
	sgen_los_scan_card_table (CARDTABLE_SCAN_MOD_UNION, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_major_mod_union_preclean (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);

	major_collector.scan_card_table (CARDTABLE_SCAN_MOD_UNION_PRECLEAN, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_los_mod_union_preclean (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ParallelScanJob *job_data = (ParallelScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, (ScanJob*)job_data);

	g_assert (concurrent_collection_in_progress);

	sgen_los_scan_card_table (CARDTABLE_SCAN_MOD_UNION_PRECLEAN, ctx, job_data->job_index, sgen_workers_get_job_split_count ());
}

static void
job_scan_last_pinned (void *worker_data_untyped, SgenThreadPoolJob *job)
{
	ScanJob *job_data = (ScanJob*)job;
	ScanCopyContext ctx = scan_copy_context_for_scan_job (worker_data_untyped, job_data);

	g_assert (concurrent_collection_in_progress);

	sgen_scan_pin_queue_objects (ctx);
}
static void
workers_finish_callback (void)
{
	ParallelScanJob *psj;
	ScanJob *sj;
	int split_count = sgen_workers_get_job_split_count ();
	int i;
	/* Mod union preclean jobs */
	for (i = 0; i < split_count; i++) {
		psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("preclean major mod union cardtable", job_major_mod_union_preclean, sizeof (ParallelScanJob));
		psj->scan_job.ops = sgen_workers_get_idle_func_object_ops ();
		psj->scan_job.gc_thread_gray_queue = NULL;
		psj->job_index = i;
		sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);
	}

	for (i = 0; i < split_count; i++) {
		psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("preclean los mod union cardtable", job_los_mod_union_preclean, sizeof (ParallelScanJob));
		psj->scan_job.ops = sgen_workers_get_idle_func_object_ops ();
		psj->scan_job.gc_thread_gray_queue = NULL;
		psj->job_index = i;
		sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);
	}

	sj = (ScanJob*)sgen_thread_pool_job_alloc ("scan last pinned", job_scan_last_pinned, sizeof (ScanJob));
	sj->ops = sgen_workers_get_idle_func_object_ops ();
	sj->gc_thread_gray_queue = NULL;
	sgen_workers_enqueue_job (&sj->job, TRUE);
}

static void
init_gray_queue (SgenGrayQueue *gc_thread_gray_queue, gboolean use_workers)
{
	if (use_workers)
		sgen_workers_init_distribute_gray_queue ();
	sgen_gray_object_queue_init (gc_thread_gray_queue, NULL, TRUE);
}
1479 static void
1480 enqueue_scan_from_roots_jobs (SgenGrayQueue *gc_thread_gray_queue, char *heap_start, char *heap_end, SgenObjectOperations *ops, gboolean enqueue)
1482 ScanFromRegisteredRootsJob *scrrj;
1483 ScanThreadDataJob *stdj;
1484 ScanFinalizerEntriesJob *sfej;
1486 /* registered roots, this includes static fields */
1488 scrrj = (ScanFromRegisteredRootsJob*)sgen_thread_pool_job_alloc ("scan from registered roots normal", job_scan_from_registered_roots, sizeof (ScanFromRegisteredRootsJob));
1489 scrrj->scan_job.ops = ops;
1490 scrrj->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
1491 scrrj->heap_start = heap_start;
1492 scrrj->heap_end = heap_end;
1493 scrrj->root_type = ROOT_TYPE_NORMAL;
1494 sgen_workers_enqueue_job (&scrrj->scan_job.job, enqueue);
1496 if (current_collection_generation == GENERATION_OLD) {
1497 /* During minors we scan the cardtable for these roots instead */
1498 scrrj = (ScanFromRegisteredRootsJob*)sgen_thread_pool_job_alloc ("scan from registered roots wbarrier", job_scan_from_registered_roots, sizeof (ScanFromRegisteredRootsJob));
1499 scrrj->scan_job.ops = ops;
1500 scrrj->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
1501 scrrj->heap_start = heap_start;
1502 scrrj->heap_end = heap_end;
1503 scrrj->root_type = ROOT_TYPE_WBARRIER;
1504 sgen_workers_enqueue_job (&scrrj->scan_job.job, enqueue);
1507 /* Threads */
1509 stdj = (ScanThreadDataJob*)sgen_thread_pool_job_alloc ("scan thread data", job_scan_thread_data, sizeof (ScanThreadDataJob));
1510 stdj->scan_job.ops = ops;
1511 stdj->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
1512 stdj->heap_start = heap_start;
1513 stdj->heap_end = heap_end;
1514 sgen_workers_enqueue_job (&stdj->scan_job.job, enqueue);
1516 /* Scan the list of objects ready for finalization. */
1518 sfej = (ScanFinalizerEntriesJob*)sgen_thread_pool_job_alloc ("scan finalizer entries", job_scan_finalizer_entries, sizeof (ScanFinalizerEntriesJob));
1519 sfej->scan_job.ops = ops;
1520 sfej->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
1521 sfej->queue = &fin_ready_queue;
1522 sgen_workers_enqueue_job (&sfej->scan_job.job, enqueue);
1524 sfej = (ScanFinalizerEntriesJob*)sgen_thread_pool_job_alloc ("scan critical finalizer entries", job_scan_finalizer_entries, sizeof (ScanFinalizerEntriesJob));
1525 sfej->scan_job.ops = ops;
1526 sfej->scan_job.gc_thread_gray_queue = gc_thread_gray_queue;
1527 sfej->queue = &critical_fin_queue;
1528 sgen_workers_enqueue_job (&sfej->scan_job.job, enqueue);
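/*
 * The pattern above generalizes to any new root kind: allocate a job with a
 * debug label and a scan function, point it at the shared object ops and the
 * GC thread's gray queue, then enqueue it.  A hedged sketch, where
 * job_scan_extra_roots is hypothetical:
 *
 *   ScanJob *j = (ScanJob*)sgen_thread_pool_job_alloc ("scan extra roots",
 *                   job_scan_extra_roots, sizeof (ScanJob));
 *   j->ops = ops;
 *   j->gc_thread_gray_queue = gc_thread_gray_queue;
 *   sgen_workers_enqueue_job (&j->job, enqueue);
 */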
1532 * Perform a nursery collection.
1534 * Return whether any objects were late-pinned due to being out of memory.
1536 static gboolean
1537 collect_nursery (const char *reason, gboolean is_overflow, SgenGrayQueue *unpin_queue)
1539 gboolean needs_major;
1540 size_t max_garbage_amount;
1541 char *nursery_next;
1542 mword fragment_total;
1543 ScanJob *sj;
1544 SgenGrayQueue gc_thread_gray_queue;
1545 SgenObjectOperations *object_ops;
1546 ScanCopyContext ctx;
1547 TV_DECLARE (atv);
1548 TV_DECLARE (btv);
1549 SGEN_TV_DECLARE (last_minor_collection_start_tv);
1550 SGEN_TV_DECLARE (last_minor_collection_end_tv);
1552 if (disable_minor_collections)
1553 return TRUE;
1555 TV_GETTIME (last_minor_collection_start_tv);
1556 atv = last_minor_collection_start_tv;
1558 binary_protocol_collection_begin (gc_stats.minor_gc_count, GENERATION_NURSERY);
1560 if (sgen_concurrent_collection_in_progress ())
1561 object_ops = &sgen_minor_collector.serial_ops_with_concurrent_major;
1562 else
1563 object_ops = &sgen_minor_collector.serial_ops;
1565 if (do_verify_nursery || do_dump_nursery_content)
1566 sgen_debug_verify_nursery (do_dump_nursery_content);
1568 current_collection_generation = GENERATION_NURSERY;
1570 SGEN_ASSERT (0, !sgen_collection_is_concurrent (), "Why is the nursery collection concurrent?");
1572 reset_pinned_from_failed_allocation ();
1574 check_scan_starts ();
1576 sgen_nursery_alloc_prepare_for_minor ();
1578 degraded_mode = 0;
1579 objects_pinned = 0;
1580 nursery_next = sgen_nursery_alloc_get_upper_alloc_bound ();
1581 /* FIXME: optimize later to use the higher address where an object can be present */
1582 nursery_next = MAX (nursery_next, sgen_get_nursery_end ());
1584 SGEN_LOG (1, "Start nursery collection %d %p-%p, size: %d", gc_stats.minor_gc_count, sgen_get_nursery_start (), nursery_next, (int)(nursery_next - sgen_get_nursery_start ()));
1585 max_garbage_amount = nursery_next - sgen_get_nursery_start ();
1586 g_assert (nursery_section->size >= max_garbage_amount);
1588 /* world must be stopped already */
1589 TV_GETTIME (btv);
1590 time_minor_pre_collection_fragment_clear += TV_ELAPSED (atv, btv);
1592 sgen_client_pre_collection_checks ();
1594 nursery_section->next_data = nursery_next;
1596 major_collector.start_nursery_collection ();
1598 sgen_memgov_minor_collection_start ();
1600 init_gray_queue (&gc_thread_gray_queue, FALSE);
1601 ctx = CONTEXT_FROM_OBJECT_OPERATIONS (object_ops, &gc_thread_gray_queue);
1603 gc_stats.minor_gc_count ++;
1605 sgen_process_fin_stage_entries ();
1607 /* pin from pinned handles */
1608 sgen_init_pinning ();
1609 sgen_client_binary_protocol_mark_start (GENERATION_NURSERY);
1610 pin_from_roots (sgen_get_nursery_start (), nursery_next, ctx);
1611 /* pin cemented objects */
1612 sgen_pin_cemented_objects ();
1613 /* identify pinned objects */
1614 sgen_optimize_pin_queue ();
1615 sgen_pinning_setup_section (nursery_section);
1617 pin_objects_in_nursery (FALSE, ctx);
1618 sgen_pinning_trim_queue_to_section (nursery_section);
1620 if (remset_consistency_checks)
1621 sgen_check_remset_consistency ();
1623 if (whole_heap_check_before_collection) {
1624 sgen_clear_nursery_fragments ();
1625 sgen_check_whole_heap (FALSE);
1628 TV_GETTIME (atv);
1629 time_minor_pinning += TV_ELAPSED (btv, atv);
1630 SGEN_LOG (2, "Finding pinned pointers: %zd in %lld usecs", sgen_get_pinned_count (), (long long)TV_ELAPSED (btv, atv));
1631 SGEN_LOG (4, "Start scan with %zd pinned objects", sgen_get_pinned_count ());
1633 sj = (ScanJob*)sgen_thread_pool_job_alloc ("scan remset", job_remembered_set_scan, sizeof (ScanJob));
1634 sj->ops = object_ops;
1635 sj->gc_thread_gray_queue = &gc_thread_gray_queue;
1636 sgen_workers_enqueue_job (&sj->job, FALSE);
1638 /* We don't have a complete write barrier yet, so we scan all the old generation sections. */
1639 TV_GETTIME (btv);
1640 time_minor_scan_remsets += TV_ELAPSED (atv, btv);
1641 SGEN_LOG (2, "Old generation scan: %lld usecs", (long long)TV_ELAPSED (atv, btv));
1643 sgen_pin_stats_report ();
1645 /* FIXME: Why do we do this at this specific, seemingly random, point? */
1646 sgen_client_collecting_minor (&fin_ready_queue, &critical_fin_queue);
1648 TV_GETTIME (atv);
1649 time_minor_scan_pinned += TV_ELAPSED (btv, atv);
1651 enqueue_scan_from_roots_jobs (&gc_thread_gray_queue, sgen_get_nursery_start (), nursery_next, object_ops, FALSE);
1653 TV_GETTIME (btv);
1654 time_minor_scan_roots += TV_ELAPSED (atv, btv);
1656 finish_gray_stack (GENERATION_NURSERY, ctx);
1658 TV_GETTIME (atv);
1659 time_minor_finish_gray_stack += TV_ELAPSED (btv, atv);
1660 sgen_client_binary_protocol_mark_end (GENERATION_NURSERY);
1662 if (objects_pinned) {
1663 sgen_optimize_pin_queue ();
1664 sgen_pinning_setup_section (nursery_section);
1668 * This is the latest point at which we can do this check, because
1669 * sgen_build_nursery_fragments() unpins nursery objects again.
1671 if (remset_consistency_checks)
1672 sgen_check_remset_consistency ();
1674 /* walk the pin_queue, build up the fragment list of free memory, unmark
1675 * pinned objects as we go, memzero() the empty fragments so they are ready for the
1676 * next allocations.
1678 sgen_client_binary_protocol_reclaim_start (GENERATION_NURSERY);
1679 fragment_total = sgen_build_nursery_fragments (nursery_section, unpin_queue);
1680 if (!fragment_total)
1681 degraded_mode = 1;
1683 /* Clear TLABs for all threads */
1684 sgen_clear_tlabs ();
1686 sgen_client_binary_protocol_reclaim_end (GENERATION_NURSERY);
1687 TV_GETTIME (btv);
1688 time_minor_fragment_creation += TV_ELAPSED (atv, btv);
1689 SGEN_LOG (2, "Fragment creation: %lld usecs, %lu bytes available", (long long)TV_ELAPSED (atv, btv), (unsigned long)fragment_total);
1691 if (remset_consistency_checks)
1692 sgen_check_major_refs ();
1694 major_collector.finish_nursery_collection ();
1696 TV_GETTIME (last_minor_collection_end_tv);
1697 gc_stats.minor_gc_time += TV_ELAPSED (last_minor_collection_start_tv, last_minor_collection_end_tv);
1699 sgen_debug_dump_heap ("minor", gc_stats.minor_gc_count - 1, NULL);
1701 /* prepare the pin queue for the next collection */
1702 sgen_finish_pinning ();
1703 if (sgen_have_pending_finalizers ()) {
1704 SGEN_LOG (4, "Finalizer-thread wakeup");
1705 sgen_client_finalize_notify ();
1707 sgen_pin_stats_reset ();
1708 /* clear cemented hash */
1709 sgen_cement_clear_below_threshold ();
1711 sgen_gray_object_queue_dispose (&gc_thread_gray_queue);
1713 remset.finish_minor_collection ();
1715 check_scan_starts ();
1717 binary_protocol_flush_buffers (FALSE);
1719 sgen_memgov_minor_collection_end (reason, is_overflow);
1721 /* Objects are late pinned because of lack of memory, so a major collection is a good call. */
1722 needs_major = objects_pinned > 0;
1723 current_collection_generation = -1;
1724 objects_pinned = 0;
1726 binary_protocol_collection_end (gc_stats.minor_gc_count - 1, GENERATION_NURSERY, 0, 0);
1728 if (check_nursery_objects_pinned && !sgen_minor_collector.is_split)
1729 sgen_check_nursery_objects_pinned (unpin_queue != NULL);
1731 return needs_major;
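/*
 * Simplified sketch of the calling convention (the real caller,
 * sgen_perform_collection below, also folds in concurrency checks):
 *
 *   if (collect_nursery (reason, FALSE, NULL))
 *           major_do_collection ("Minor overflow", TRUE, FALSE);
 */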
1734 typedef enum {
1735 COPY_OR_MARK_FROM_ROOTS_SERIAL,
1736 COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT,
1737 COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT
1738 } CopyOrMarkFromRootsMode;
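/*
 * Mode summary (descriptive, inferred from the uses below):
 *   COPY_OR_MARK_FROM_ROOTS_SERIAL: a fully stop-the-world major collection.
 *   COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT: the initial pause of a
 *   concurrent major; marking then continues on worker threads while the
 *   mutator runs.
 *   COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT: the final pause that joins
 *   the workers and finishes marking/copying.
 */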
1740 static void
1741 major_copy_or_mark_from_roots (SgenGrayQueue *gc_thread_gray_queue, size_t *old_next_pin_slot, CopyOrMarkFromRootsMode mode, SgenObjectOperations *object_ops_nopar, SgenObjectOperations *object_ops_par)
1743 LOSObject *bigobj;
1744 TV_DECLARE (atv);
1745 TV_DECLARE (btv);
1746 /* FIXME: only use these values for the precise scan
1747 * note that to_space pointers should be excluded anyway...
1749 char *heap_start = NULL;
1750 char *heap_end = (char*)-1;
1751 ScanCopyContext ctx = CONTEXT_FROM_OBJECT_OPERATIONS (object_ops_nopar, gc_thread_gray_queue);
1752 gboolean concurrent = mode != COPY_OR_MARK_FROM_ROOTS_SERIAL;
1754 SGEN_ASSERT (0, !!concurrent == !!concurrent_collection_in_progress, "We've been called with the wrong mode.");
1756 if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
1757 /* This cleans up unused fragments. */
1758 sgen_nursery_allocator_prepare_for_pinning ();
1760 if (do_concurrent_checks)
1761 sgen_debug_check_nursery_is_clean ();
1762 } else {
1763 /* The concurrent collector doesn't touch the nursery. */
1764 sgen_nursery_alloc_prepare_for_major ();
1767 TV_GETTIME (atv);
1769 /* Pinning depends on this */
1770 sgen_clear_nursery_fragments ();
1772 if (whole_heap_check_before_collection)
1773 sgen_check_whole_heap (TRUE);
1775 TV_GETTIME (btv);
1776 time_major_pre_collection_fragment_clear += TV_ELAPSED (atv, btv);
1778 if (!sgen_collection_is_concurrent ())
1779 nursery_section->next_data = sgen_get_nursery_end ();
1780 /* we should also coalesce scanning from sections close to each other
1781 * and deal with pointers outside of the sections later.
1784 objects_pinned = 0;
1786 sgen_client_pre_collection_checks ();
1788 if (mode != COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
1789 /* Remsets are not useful for a major collection */
1790 remset.clear_cards ();
1793 sgen_process_fin_stage_entries ();
1795 TV_GETTIME (atv);
1796 sgen_init_pinning ();
1797 SGEN_LOG (6, "Collecting pinned addresses");
1798 pin_from_roots ((void*)lowest_heap_address, (void*)highest_heap_address, ctx);
1799 if (mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT) {
1800 /* Pin cemented objects that were forced */
1801 sgen_pin_cemented_objects ();
1803 sgen_optimize_pin_queue ();
1804 if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
1806 * Cemented objects that are in the pinned list will be marked. When
1807 * marking concurrently we won't mark mod-union cards for these objects.
1808 * Instead they will remain cemented until the next major collection,
1809 * when we will recheck if they are still pinned in the roots.
1811 sgen_cement_force_pinned ();
1814 sgen_client_collecting_major_1 ();
1817 * pin_queue now contains all candidate pointers, sorted and
1818 * uniqued. We must do two passes now to figure out which
1819 * objects are pinned.
1821 * The first is to find within the pin_queue the area for each
1822 * section. This requires that the pin_queue be sorted. We
1823 * also process the LOS objects and pinned chunks here.
1825 * The second, destructive, pass is to reduce the section
1826 * areas to pointers to the actually pinned objects.
1828 SGEN_LOG (6, "Pinning from sections");
1829 /* first pass for the sections */
1830 sgen_find_section_pin_queue_start_end (nursery_section);
1831 /* identify possible pointers to the inside of large objects */
1832 SGEN_LOG (6, "Pinning from large objects");
1833 for (bigobj = los_object_list; bigobj; bigobj = bigobj->next) {
1834 size_t dummy;
1835 if (sgen_find_optimized_pin_queue_area ((char*)bigobj->data, (char*)bigobj->data + sgen_los_object_size (bigobj), &dummy, &dummy)) {
1836 binary_protocol_pin (bigobj->data, (gpointer)LOAD_VTABLE (bigobj->data), safe_object_get_size (bigobj->data));
1838 if (sgen_los_object_is_pinned (bigobj->data)) {
1839 SGEN_ASSERT (0, mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT, "LOS objects can only be pinned here after concurrent marking.");
1840 continue;
1842 sgen_los_pin_object (bigobj->data);
1843 if (SGEN_OBJECT_HAS_REFERENCES (bigobj->data))
1844 GRAY_OBJECT_ENQUEUE_SERIAL (gc_thread_gray_queue, bigobj->data, sgen_obj_get_descriptor ((GCObject*)bigobj->data));
1845 sgen_pin_stats_register_object (bigobj->data, GENERATION_OLD);
1846 SGEN_LOG (6, "Marked large object %p (%s) size: %lu from roots", bigobj->data,
1847 sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (bigobj->data)),
1848 (unsigned long)sgen_los_object_size (bigobj));
1850 sgen_client_pinned_los_object (bigobj->data);
1854 pin_objects_in_nursery (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT, ctx);
1855 if (check_nursery_objects_pinned && !sgen_minor_collector.is_split)
1856 sgen_check_nursery_objects_pinned (mode != COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT);
1858 major_collector.pin_objects (gc_thread_gray_queue);
1859 if (old_next_pin_slot)
1860 *old_next_pin_slot = sgen_get_pinned_count ();
1862 TV_GETTIME (btv);
1863 time_major_pinning += TV_ELAPSED (atv, btv);
1864 SGEN_LOG (2, "Finding pinned pointers: %zd in %lld usecs", sgen_get_pinned_count (), (long long)TV_ELAPSED (atv, btv));
1865 SGEN_LOG (4, "Start scan with %zd pinned objects", sgen_get_pinned_count ());
1867 major_collector.init_to_space ();
1869 SGEN_ASSERT (0, sgen_workers_all_done (), "Why are the workers not done when we start or finish a major collection?");
1870 if (mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT) {
1871 sgen_workers_set_num_active_workers (0);
1872 if (sgen_workers_have_idle_work ()) {
1874 * We force the finish of the worker with the new object ops context
1875 * which can also do copying. We need to have finished pinning.
1877 sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, NULL);
1879 sgen_workers_join ();
1883 #ifdef SGEN_DEBUG_INTERNAL_ALLOC
1884 main_gc_thread = mono_native_thread_self ();
1885 #endif
1887 sgen_client_collecting_major_2 ();
1889 TV_GETTIME (atv);
1890 time_major_scan_pinned += TV_ELAPSED (btv, atv);
1892 sgen_client_collecting_major_3 (&fin_ready_queue, &critical_fin_queue);
1894 enqueue_scan_from_roots_jobs (gc_thread_gray_queue, heap_start, heap_end, object_ops_nopar, FALSE);
1896 TV_GETTIME (btv);
1897 time_major_scan_roots += TV_ELAPSED (atv, btv);
1900 * We start the concurrent worker after pinning and after we scanned the roots
1901 * in order to make sure that the worker does not finish before handling all
1902 * the roots.
1904 if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
1905 sgen_workers_set_num_active_workers (1);
1906 gray_queue_redirect (gc_thread_gray_queue);
1907 if (precleaning_enabled) {
1908 sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, workers_finish_callback);
1909 } else {
1910 sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, NULL);
1914 if (mode == COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT) {
1915 int i, split_count = sgen_workers_get_job_split_count ();
1917 gray_queue_redirect (gc_thread_gray_queue);
1919 /* Mod union card table */
1920 for (i = 0; i < split_count; i++) {
1921 ParallelScanJob *psj;
1923 psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("scan mod union cardtable", job_scan_major_mod_union_card_table, sizeof (ParallelScanJob));
1924 psj->scan_job.ops = object_ops_par ? object_ops_par : object_ops_nopar;
1925 psj->scan_job.gc_thread_gray_queue = NULL;
1926 psj->job_index = i;
1927 sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);
1929 psj = (ParallelScanJob*)sgen_thread_pool_job_alloc ("scan LOS mod union cardtable", job_scan_los_mod_union_card_table, sizeof (ParallelScanJob));
1930 psj->scan_job.ops = object_ops_par ? object_ops_par : object_ops_nopar;
1931 psj->scan_job.gc_thread_gray_queue = NULL;
1932 psj->job_index = i;
1933 sgen_workers_enqueue_job (&psj->scan_job.job, TRUE);
1937 * If we enqueue a job while workers are running we need to call sgen_workers_ensure_awake
1938 * in order to make sure that we are running the idle func and draining all worker
1939 * gray queues. Starting the workers implies this, so we start them afterwards
1940 * in order to avoid doing this operation twice. The workers will drain the main gray
1941 * stack that contained roots and pinned objects and also scan the mod union card
1942 * table.
1944 sgen_workers_start_all_workers (object_ops_nopar, object_ops_par, NULL);
1945 sgen_workers_join ();
1948 sgen_pin_stats_report ();
1950 if (mode == COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT) {
1951 sgen_finish_pinning ();
1953 sgen_pin_stats_reset ();
1955 if (do_concurrent_checks)
1956 sgen_debug_check_nursery_is_clean ();
1960 static void
1961 major_start_collection (SgenGrayQueue *gc_thread_gray_queue, const char *reason, gboolean concurrent, size_t *old_next_pin_slot)
1963 SgenObjectOperations *object_ops_nopar, *object_ops_par = NULL;
1965 binary_protocol_collection_begin (gc_stats.major_gc_count, GENERATION_OLD);
1967 current_collection_generation = GENERATION_OLD;
1969 sgen_workers_assert_gray_queue_is_empty ();
1971 if (!concurrent)
1972 sgen_cement_reset ();
1974 if (concurrent) {
1975 g_assert (major_collector.is_concurrent);
1976 concurrent_collection_in_progress = TRUE;
1978 object_ops_nopar = &major_collector.major_ops_concurrent_start;
1979 if (major_collector.is_parallel)
1980 object_ops_par = &major_collector.major_ops_conc_par_start;
1982 } else {
1983 object_ops_nopar = &major_collector.major_ops_serial;
1986 reset_pinned_from_failed_allocation ();
1988 sgen_memgov_major_collection_start (concurrent, reason);
1990 //count_ref_nonref_objs ();
1991 //consistency_check ();
1993 check_scan_starts ();
1995 degraded_mode = 0;
1996 SGEN_LOG (1, "Start major collection %d", gc_stats.major_gc_count);
1997 gc_stats.major_gc_count ++;
1999 if (major_collector.start_major_collection)
2000 major_collector.start_major_collection ();
2002 major_copy_or_mark_from_roots (gc_thread_gray_queue, old_next_pin_slot, concurrent ? COPY_OR_MARK_FROM_ROOTS_START_CONCURRENT : COPY_OR_MARK_FROM_ROOTS_SERIAL, object_ops_nopar, object_ops_par);
2005 static void
2006 major_finish_collection (SgenGrayQueue *gc_thread_gray_queue, const char *reason, gboolean is_overflow, size_t old_next_pin_slot, gboolean forced)
2008 ScannedObjectCounts counts;
2009 SgenObjectOperations *object_ops_nopar;
2010 mword fragment_total;
2011 TV_DECLARE (atv);
2012 TV_DECLARE (btv);
2014 TV_GETTIME (btv);
2016 if (concurrent_collection_in_progress) {
2017 SgenObjectOperations *object_ops_par = NULL;
2019 object_ops_nopar = &major_collector.major_ops_concurrent_finish;
2020 if (major_collector.is_parallel)
2021 object_ops_par = &major_collector.major_ops_conc_par_finish;
2023 major_copy_or_mark_from_roots (gc_thread_gray_queue, NULL, COPY_OR_MARK_FROM_ROOTS_FINISH_CONCURRENT, object_ops_nopar, object_ops_par);
2025 #ifdef SGEN_DEBUG_INTERNAL_ALLOC
2026 main_gc_thread = NULL;
2027 #endif
2028 } else {
2029 object_ops_nopar = &major_collector.major_ops_serial;
2032 sgen_workers_assert_gray_queue_is_empty ();
2034 finish_gray_stack (GENERATION_OLD, CONTEXT_FROM_OBJECT_OPERATIONS (object_ops_nopar, gc_thread_gray_queue));
2035 TV_GETTIME (atv);
2036 time_major_finish_gray_stack += TV_ELAPSED (btv, atv);
2038 SGEN_ASSERT (0, sgen_workers_all_done (), "Can't have workers working after joining");
2040 if (objects_pinned) {
2041 g_assert (!concurrent_collection_in_progress);
2044 * This is slow, but we just OOM'd.
2046 * See comment at `sgen_pin_queue_clear_discarded_entries` for how the pin
2047 * queue is laid out at this point.
2049 sgen_pin_queue_clear_discarded_entries (nursery_section, old_next_pin_slot);
2051 * We need to reestablish all pinned nursery objects in the pin queue
2052 * because they're needed for fragment creation. Unpinning happens by
2053 * walking the whole queue, so it's not necessary to reestablish where major
2054 * heap block pins are - all we care about is that they're still in there
2055 * somewhere.
2057 sgen_optimize_pin_queue ();
2058 sgen_find_section_pin_queue_start_end (nursery_section);
2059 objects_pinned = 0;
2062 reset_heap_boundaries ();
2063 sgen_update_heap_boundaries ((mword)sgen_get_nursery_start (), (mword)sgen_get_nursery_end ());
2065 /* walk the pin_queue, build up the fragment list of free memory, unmark
2066 * pinned objects as we go, memzero() the empty fragments so they are ready for the
2067 * next allocations.
2069 fragment_total = sgen_build_nursery_fragments (nursery_section, NULL);
2070 if (!fragment_total)
2071 degraded_mode = 1;
2072 SGEN_LOG (4, "Free space in nursery after major %ld", (long)fragment_total);
2074 if (do_concurrent_checks && concurrent_collection_in_progress)
2075 sgen_debug_check_nursery_is_clean ();
2077 /* prepare the pin queue for the next collection */
2078 sgen_finish_pinning ();
2080 /* Clear TLABs for all threads */
2081 sgen_clear_tlabs ();
2083 sgen_pin_stats_reset ();
2085 sgen_cement_clear_below_threshold ();
2087 if (check_mark_bits_after_major_collection)
2088 sgen_check_heap_marked (concurrent_collection_in_progress);
2090 TV_GETTIME (btv);
2091 time_major_fragment_creation += TV_ELAPSED (atv, btv);
2093 binary_protocol_sweep_begin (GENERATION_OLD, !major_collector.sweeps_lazily);
2094 sgen_memgov_major_pre_sweep ();
2096 TV_GETTIME (atv);
2097 time_major_free_bigobjs += TV_ELAPSED (btv, atv);
2099 sgen_los_sweep ();
2101 TV_GETTIME (btv);
2102 time_major_los_sweep += TV_ELAPSED (atv, btv);
2104 major_collector.sweep ();
2106 binary_protocol_sweep_end (GENERATION_OLD, !major_collector.sweeps_lazily);
2108 TV_GETTIME (atv);
2109 time_major_sweep += TV_ELAPSED (btv, atv);
2111 sgen_debug_dump_heap ("major", gc_stats.major_gc_count - 1, reason);
2113 if (sgen_have_pending_finalizers ()) {
2114 SGEN_LOG (4, "Finalizer-thread wakeup");
2115 sgen_client_finalize_notify ();
2118 sgen_memgov_major_collection_end (forced, concurrent_collection_in_progress, reason, is_overflow);
2119 current_collection_generation = -1;
2121 memset (&counts, 0, sizeof (ScannedObjectCounts));
2122 major_collector.finish_major_collection (&counts);
2124 sgen_workers_assert_gray_queue_is_empty ();
2126 SGEN_ASSERT (0, sgen_workers_all_done (), "Can't have workers working after major collection has finished");
2127 if (concurrent_collection_in_progress)
2128 concurrent_collection_in_progress = FALSE;
2130 check_scan_starts ();
2132 binary_protocol_flush_buffers (FALSE);
2134 //consistency_check ();
2136 binary_protocol_collection_end (gc_stats.major_gc_count - 1, GENERATION_OLD, counts.num_scanned_objects, counts.num_unique_scanned_objects);
2139 static gboolean
2140 major_do_collection (const char *reason, gboolean is_overflow, gboolean forced)
2142 TV_DECLARE (time_start);
2143 TV_DECLARE (time_end);
2144 size_t old_next_pin_slot;
2145 SgenGrayQueue gc_thread_gray_queue;
2147 if (disable_major_collections)
2148 return FALSE;
2150 if (major_collector.get_and_reset_num_major_objects_marked) {
2151 long long num_marked = major_collector.get_and_reset_num_major_objects_marked ();
2152 g_assert (!num_marked);
2155 /* world must be stopped already */
2156 TV_GETTIME (time_start);
2158 init_gray_queue (&gc_thread_gray_queue, FALSE);
2159 major_start_collection (&gc_thread_gray_queue, reason, FALSE, &old_next_pin_slot);
2160 major_finish_collection (&gc_thread_gray_queue, reason, is_overflow, old_next_pin_slot, forced);
2161 sgen_gray_object_queue_dispose (&gc_thread_gray_queue);
2163 TV_GETTIME (time_end);
2164 gc_stats.major_gc_time += TV_ELAPSED (time_start, time_end);
2166 /* FIXME: also report this to the user, preferably in gc-end. */
2167 if (major_collector.get_and_reset_num_major_objects_marked)
2168 major_collector.get_and_reset_num_major_objects_marked ();
2170 return bytes_pinned_from_failed_allocation > 0;
2173 static void
2174 major_start_concurrent_collection (const char *reason)
2176 TV_DECLARE (time_start);
2177 TV_DECLARE (time_end);
2178 long long num_objects_marked;
2179 SgenGrayQueue gc_thread_gray_queue;
2181 if (disable_major_collections)
2182 return;
2184 TV_GETTIME (time_start);
2185 SGEN_TV_GETTIME (time_major_conc_collection_start);
2187 num_objects_marked = major_collector.get_and_reset_num_major_objects_marked ();
2188 g_assert (num_objects_marked == 0);
2190 binary_protocol_concurrent_start ();
2192 init_gray_queue (&gc_thread_gray_queue, TRUE);
2193 // FIXME: store reason and pass it when finishing
2194 major_start_collection (&gc_thread_gray_queue, reason, TRUE, NULL);
2195 sgen_gray_object_queue_dispose (&gc_thread_gray_queue);
2197 num_objects_marked = major_collector.get_and_reset_num_major_objects_marked ();
2199 TV_GETTIME (time_end);
2200 gc_stats.major_gc_time += TV_ELAPSED (time_start, time_end);
2202 current_collection_generation = -1;
2206 * Returns whether the major collection has finished.
2208 static gboolean
2209 major_should_finish_concurrent_collection (void)
2211 return sgen_workers_all_done ();
2214 static void
2215 major_update_concurrent_collection (void)
2217 TV_DECLARE (total_start);
2218 TV_DECLARE (total_end);
2220 TV_GETTIME (total_start);
2222 binary_protocol_concurrent_update ();
2224 major_collector.update_cardtable_mod_union ();
2225 sgen_los_update_cardtable_mod_union ();
2227 TV_GETTIME (total_end);
2228 gc_stats.major_gc_time += TV_ELAPSED (total_start, total_end);
2231 static void
2232 major_finish_concurrent_collection (gboolean forced)
2234 SgenGrayQueue gc_thread_gray_queue;
2235 TV_DECLARE (total_start);
2236 TV_DECLARE (total_end);
2238 TV_GETTIME (total_start);
2240 binary_protocol_concurrent_finish ();
2243 * We need to stop all workers since we're updating the cardtable below.
2244 * The workers will be resumed with a finishing pause context to avoid
2245 * additional cardtable and object scanning.
2247 sgen_workers_stop_all_workers ();
2249 SGEN_TV_GETTIME (time_major_conc_collection_end);
2250 gc_stats.major_gc_time_concurrent += SGEN_TV_ELAPSED (time_major_conc_collection_start, time_major_conc_collection_end);
2252 major_collector.update_cardtable_mod_union ();
2253 sgen_los_update_cardtable_mod_union ();
2255 if (mod_union_consistency_check)
2256 sgen_check_mod_union_consistency ();
2258 current_collection_generation = GENERATION_OLD;
2259 sgen_cement_reset ();
2260 init_gray_queue (&gc_thread_gray_queue, FALSE);
2261 major_finish_collection (&gc_thread_gray_queue, "finishing", FALSE, -1, forced);
2262 sgen_gray_object_queue_dispose (&gc_thread_gray_queue);
2264 TV_GETTIME (total_end);
2265 gc_stats.major_gc_time += TV_ELAPSED (total_start, total_end);
2267 current_collection_generation = -1;
2271 * Ensure an allocation request for @size will succeed by freeing enough memory.
2273 * LOCKING: The GC lock MUST be held.
2275 void
2276 sgen_ensure_free_space (size_t size, int generation)
2278 int generation_to_collect = -1;
2279 const char *reason = NULL;
2281 if (generation == GENERATION_OLD) {
2282 if (sgen_need_major_collection (size)) {
2283 reason = "LOS overflow";
2284 generation_to_collect = GENERATION_OLD;
2286 } else {
2287 if (degraded_mode) {
2288 if (sgen_need_major_collection (size)) {
2289 reason = "Degraded mode overflow";
2290 generation_to_collect = GENERATION_OLD;
2292 } else if (sgen_need_major_collection (size)) {
2293 reason = concurrent_collection_in_progress ? "Forced finish concurrent collection" : "Minor allowance";
2294 generation_to_collect = GENERATION_OLD;
2295 } else {
2296 generation_to_collect = GENERATION_NURSERY;
2297 reason = "Nursery full";
2301 if (generation_to_collect == -1) {
2302 if (concurrent_collection_in_progress && sgen_workers_all_done ()) {
2303 generation_to_collect = GENERATION_OLD;
2304 reason = "Finish concurrent collection";
2308 if (generation_to_collect == -1)
2309 return;
2310 sgen_perform_collection (size, generation_to_collect, reason, FALSE, TRUE);
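/*
 * Illustrative call site (a plausible sketch, not quoted from the LOS
 * allocator): before carving out a large object the allocator would do
 *
 *   sgen_ensure_free_space (size, GENERATION_OLD);
 *
 * which triggers a major collection with reason "LOS overflow" once the
 * major allowance is exhausted.
 */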
2314 * LOCKING: Assumes the GC lock is held.
2316 void
2317 sgen_perform_collection (size_t requested_size, int generation_to_collect, const char *reason, gboolean wait_to_finish, gboolean stw)
2319 TV_DECLARE (gc_total_start);
2320 TV_DECLARE (gc_total_end);
2321 int overflow_generation_to_collect = -1;
2322 int oldest_generation_collected = generation_to_collect;
2323 const char *overflow_reason = NULL;
2324 gboolean finish_concurrent = concurrent_collection_in_progress && (major_should_finish_concurrent_collection () || generation_to_collect == GENERATION_OLD);
2326 binary_protocol_collection_requested (generation_to_collect, requested_size, wait_to_finish ? 1 : 0);
2328 SGEN_ASSERT (0, generation_to_collect == GENERATION_NURSERY || generation_to_collect == GENERATION_OLD, "What generation is this?");
2330 if (stw)
2331 sgen_stop_world (generation_to_collect);
2332 else
2333 SGEN_ASSERT (0, sgen_is_world_stopped (), "We can only collect if the world is stopped");
2336 TV_GETTIME (gc_total_start);
2338 // FIXME: extract overflow reason
2339 // FIXME: minor overflow for concurrent case
2340 if (generation_to_collect == GENERATION_NURSERY && !finish_concurrent) {
2341 if (concurrent_collection_in_progress)
2342 major_update_concurrent_collection ();
2344 if (collect_nursery (reason, FALSE, NULL) && !concurrent_collection_in_progress) {
2345 overflow_generation_to_collect = GENERATION_OLD;
2346 overflow_reason = "Minor overflow";
2348 } else if (finish_concurrent) {
2349 major_finish_concurrent_collection (wait_to_finish);
2350 oldest_generation_collected = GENERATION_OLD;
2351 } else {
2352 SGEN_ASSERT (0, generation_to_collect == GENERATION_OLD, "We should have handled nursery collections above");
2353 if (major_collector.is_concurrent && !wait_to_finish) {
2354 collect_nursery ("Concurrent start", FALSE, NULL);
2355 major_start_concurrent_collection (reason);
2356 oldest_generation_collected = GENERATION_NURSERY;
2357 } else if (major_do_collection (reason, FALSE, wait_to_finish)) {
2358 overflow_generation_to_collect = GENERATION_NURSERY;
2359 overflow_reason = "Excessive pinning";
2363 if (overflow_generation_to_collect != -1) {
2364 SGEN_ASSERT (0, !concurrent_collection_in_progress, "We don't yet support overflow collections with the concurrent collector");
2367 * We need to do an overflow collection, either because we ran out of memory
2368 * or the nursery is fully pinned.
2371 if (overflow_generation_to_collect == GENERATION_NURSERY)
2372 collect_nursery (overflow_reason, TRUE, NULL);
2373 else
2374 major_do_collection (overflow_reason, TRUE, wait_to_finish);
2376 oldest_generation_collected = MAX (oldest_generation_collected, overflow_generation_to_collect);
2379 SGEN_LOG (2, "Heap size: %lu, LOS size: %lu", (unsigned long)sgen_gc_get_total_heap_allocation (), (unsigned long)los_memory_usage);
2381 /* this also sets the proper pointers for the next allocation */
2382 if (generation_to_collect == GENERATION_NURSERY && !sgen_can_alloc_size (requested_size)) {
2383 /* TypeBuilder and MonoMethod are killing mcs with fragmentation */
2384 SGEN_LOG (1, "nursery collection didn't find enough room for %zd alloc (%zd pinned)", requested_size, sgen_get_pinned_count ());
2385 sgen_dump_pin_queue ();
2386 degraded_mode = 1;
2389 TV_GETTIME (gc_total_end);
2390 time_max = MAX (time_max, TV_ELAPSED (gc_total_start, gc_total_end));
2392 if (stw)
2393 sgen_restart_world (oldest_generation_collected);
2397 * ######################################################################
2398 * ######## Memory allocation from the OS
2399 * ######################################################################
2400 * This section of code deals with getting memory from the OS and
2401 * allocating memory for GC-internal data structures.
2402 * Internal memory can be handled with a freelist for small objects.
2406 * Debug reporting.
2408 G_GNUC_UNUSED static void
2409 report_internal_mem_usage (void)
2411 printf ("Internal memory usage:\n");
2412 sgen_report_internal_mem_usage ();
2413 printf ("Pinned memory usage:\n");
2414 major_collector.report_pinned_memory_usage ();
2418 * ######################################################################
2419 * ######## Finalization support
2420 * ######################################################################
2424 * If the object has been forwarded it means it's still referenced from a root.
2425 * If it is pinned it's still alive as well.
2426 * A LOS object is only alive if we have pinned it.
2427 * Return TRUE if @obj is ready to be finalized.
2429 static inline gboolean
2430 sgen_is_object_alive (GCObject *object)
2432 if (ptr_in_nursery (object))
2433 return sgen_nursery_is_object_alive (object);
2435 return sgen_major_is_object_alive (object);
2439 * This function returns TRUE if @object is alive and belongs to the
2440 * current collection: major collections are full heap, so old gen objects
2441 * are never alive during a minor collection.
2443 static inline int
2444 sgen_is_object_alive_and_on_current_collection (GCObject *object)
2446 if (ptr_in_nursery (object))
2447 return sgen_nursery_is_object_alive (object);
2449 if (current_collection_generation == GENERATION_NURSERY)
2450 return FALSE;
2452 return sgen_major_is_object_alive (object);
2456 gboolean
2457 sgen_gc_is_object_ready_for_finalization (GCObject *object)
2459 return !sgen_is_object_alive (object);
2462 void
2463 sgen_queue_finalization_entry (GCObject *obj)
2465 gboolean critical = sgen_client_object_has_critical_finalizer (obj);
2467 sgen_pointer_queue_add (critical ? &critical_fin_queue : &fin_ready_queue, obj);
2469 sgen_client_object_queued_for_finalization (obj);
2472 gboolean
2473 sgen_object_is_live (GCObject *obj)
2475 return sgen_is_object_alive_and_on_current_collection (obj);
2479 * `System.GC.WaitForPendingFinalizers` first checks `sgen_have_pending_finalizers()` to
2480 * determine whether it can exit quickly. The latter must therefore only return FALSE if
2481 * all finalizers have really finished running.
2483 * `sgen_gc_invoke_finalizers()` first dequeues a finalizable object, and then finalizes it.
2484 * This means that just checking whether the queues are empty leaves the possibility that an
2485 * object might have been dequeued but not yet finalized. That's why we need the additional
2486 * flag `pending_unqueued_finalizer`.
2489 static volatile gboolean pending_unqueued_finalizer = FALSE;
2490 volatile gboolean sgen_suspend_finalizers = FALSE;
2492 void
2493 sgen_set_suspend_finalizers (void)
2495 sgen_suspend_finalizers = TRUE;
2499 sgen_gc_invoke_finalizers (void)
2501 int count = 0;
2503 g_assert (!pending_unqueued_finalizer);
2505 /* FIXME: batch to reduce lock contention */
2506 while (sgen_have_pending_finalizers ()) {
2507 GCObject *obj;
2509 LOCK_GC;
2512 * We need to set `pending_unqueued_finalizer` before dequeuing the
2513 * finalizable object.
2515 if (!sgen_pointer_queue_is_empty (&fin_ready_queue)) {
2516 pending_unqueued_finalizer = TRUE;
2517 mono_memory_write_barrier ();
2518 obj = (GCObject *)sgen_pointer_queue_pop (&fin_ready_queue);
2519 } else if (!sgen_pointer_queue_is_empty (&critical_fin_queue)) {
2520 pending_unqueued_finalizer = TRUE;
2521 mono_memory_write_barrier ();
2522 obj = (GCObject *)sgen_pointer_queue_pop (&critical_fin_queue);
2523 } else {
2524 obj = NULL;
2527 if (obj)
2528 SGEN_LOG (7, "Finalizing object %p (%s)", obj, sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (obj)));
2530 UNLOCK_GC;
2532 if (!obj)
2533 break;
2535 count++;
2536 /* the object is on the stack so it is pinned */
2537 /*g_print ("Calling finalizer for object: %p (%s)\n", obj, sgen_client_object_safe_name (obj));*/
2538 sgen_client_run_finalize (obj);
2541 if (pending_unqueued_finalizer) {
2542 mono_memory_write_barrier ();
2543 pending_unqueued_finalizer = FALSE;
2546 return count;
2549 gboolean
2550 sgen_have_pending_finalizers (void)
2552 if (sgen_suspend_finalizers)
2553 return FALSE;
2554 return pending_unqueued_finalizer || !sgen_pointer_queue_is_empty (&fin_ready_queue) || !sgen_pointer_queue_is_empty (&critical_fin_queue);
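/*
 * Minimal sketch of the consumer side described above (illustrative only;
 * the real client blocks on finalizer-thread signaling rather than polling):
 *
 *   while (sgen_have_pending_finalizers ())
 *           mono_thread_info_usleep (100);
 */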
2558 * ######################################################################
2559 * ######## registered roots support
2560 * ######################################################################
2564 * We do not coalesce roots.
2567 sgen_register_root (char *start, size_t size, SgenDescriptor descr, int root_type, int source, const char *msg)
2569 RootRecord new_root;
2570 int i;
2571 LOCK_GC;
2572 for (i = 0; i < ROOT_TYPE_NUM; ++i) {
2573 RootRecord *root = (RootRecord *)sgen_hash_table_lookup (&roots_hash [i], start);
2574 /* we allow changing the size and the descriptor (for thread statics etc) */
2575 if (root) {
2576 size_t old_size = root->end_root - start;
2577 root->end_root = start + size;
2578 SGEN_ASSERT (0, !!root->root_desc == !!descr, "Can't change whether a root is precise or conservative.");
2579 SGEN_ASSERT (0, root->source == source, "Can't change a root's source identifier.");
2580 SGEN_ASSERT (0, !!root->msg == !!msg, "Can't change a root's message.");
2581 root->root_desc = descr;
2582 roots_size += size;
2583 roots_size -= old_size;
2584 UNLOCK_GC;
2585 return TRUE;
2589 new_root.end_root = start + size;
2590 new_root.root_desc = descr;
2591 new_root.source = source;
2592 new_root.msg = msg;
2594 sgen_hash_table_replace (&roots_hash [root_type], start, &new_root, NULL);
2595 roots_size += size;
2597 SGEN_LOG (3, "Added root for range: %p-%p, descr: %llx (%d/%d bytes)", start, new_root.end_root, (long long)descr, (int)size, (int)roots_size);
2599 UNLOCK_GC;
2600 return TRUE;
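/*
 * Hedged usage sketch: registering a conservatively scanned root for a block
 * of globals.  `my_globals' and `source_id' are hypothetical, and descriptor
 * 0 stands for a conservative root (matching the precise/conservative assert
 * above):
 *
 *   static void *my_globals [16];
 *   sgen_register_root ((char*)my_globals, sizeof (my_globals),
 *                   0, ROOT_TYPE_NORMAL, source_id, "my globals");
 */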
2603 void
2604 sgen_deregister_root (char* addr)
2606 int root_type;
2607 RootRecord root;
2609 LOCK_GC;
2610 for (root_type = 0; root_type < ROOT_TYPE_NUM; ++root_type) {
2611 if (sgen_hash_table_remove (&roots_hash [root_type], addr, &root))
2612 roots_size -= (root.end_root - addr);
2614 UNLOCK_GC;
2617 void
2618 sgen_wbroots_iterate_live_block_ranges (sgen_cardtable_block_callback cb)
2620 void **start_root;
2621 RootRecord *root;
2622 SGEN_HASH_TABLE_FOREACH (&roots_hash [ROOT_TYPE_WBARRIER], void **, start_root, RootRecord *, root) {
2623 cb ((mword)start_root, (mword)root->end_root - (mword)start_root);
2624 } SGEN_HASH_TABLE_FOREACH_END;
2627 /* Root equivalent of sgen_client_cardtable_scan_object */
2628 static void
2629 sgen_wbroot_scan_card_table (void** start_root, mword size, ScanCopyContext ctx)
2631 ScanPtrFieldFunc scan_field_func = ctx.ops->scan_ptr_field;
2632 guint8 *card_data = sgen_card_table_get_card_scan_address ((mword)start_root);
2633 guint8 *card_base = card_data;
2634 mword card_count = sgen_card_table_number_of_cards_in_range ((mword)start_root, size);
2635 guint8 *card_data_end = card_data + card_count;
2636 mword extra_idx = 0;
2637 char *obj_start = sgen_card_table_align_pointer (start_root);
2638 char *obj_end = (char*)start_root + size;
2639 #ifdef SGEN_HAVE_OVERLAPPING_CARDS
2640 guint8 *overflow_scan_end = NULL;
2641 #endif
2643 #ifdef SGEN_HAVE_OVERLAPPING_CARDS
2644 /* Check for overflow and, if so, set up to scan in two steps. */
2645 if (card_data_end >= SGEN_SHADOW_CARDTABLE_END) {
2646 overflow_scan_end = sgen_shadow_cardtable + (card_data_end - SGEN_SHADOW_CARDTABLE_END);
2647 card_data_end = SGEN_SHADOW_CARDTABLE_END;
2650 LOOP_HEAD:
2651 #endif
2653 card_data = sgen_find_next_card (card_data, card_data_end);
2655 for (; card_data < card_data_end; card_data = sgen_find_next_card (card_data + 1, card_data_end)) {
2656 size_t idx = (card_data - card_base) + extra_idx;
2657 char *start = (char*)(obj_start + idx * CARD_SIZE_IN_BYTES);
2658 char *card_end = start + CARD_SIZE_IN_BYTES;
2659 char *elem = start, *first_elem = start;
2662 * Don't clean the first and last cards on 32-bit systems, since they
2663 * may also be part of other roots.
2665 if (card_data != card_base && card_data != (card_data_end - 1))
2666 sgen_card_table_prepare_card_for_scanning (card_data);
2668 card_end = MIN (card_end, obj_end);
2670 if (elem < (char*)start_root)
2671 first_elem = elem = (char*)start_root;
2673 for (; elem < card_end; elem += SIZEOF_VOID_P) {
2674 if (*(GCObject**)elem)
2675 scan_field_func (NULL, (GCObject**)elem, ctx.queue);
2678 binary_protocol_card_scan (first_elem, elem - first_elem);
2681 #ifdef SGEN_HAVE_OVERLAPPING_CARDS
2682 if (overflow_scan_end) {
2683 extra_idx = card_data - card_base;
2684 card_base = card_data = sgen_shadow_cardtable;
2685 card_data_end = overflow_scan_end;
2686 overflow_scan_end = NULL;
2687 goto LOOP_HEAD;
2689 #endif
2692 void
2693 sgen_wbroots_scan_card_table (ScanCopyContext ctx)
2695 void **start_root;
2696 RootRecord *root;
2698 SGEN_HASH_TABLE_FOREACH (&roots_hash [ROOT_TYPE_WBARRIER], void **, start_root, RootRecord *, root) {
2699 SGEN_ASSERT (0, (root->root_desc & ROOT_DESC_TYPE_MASK) == ROOT_DESC_VECTOR, "Unsupported root type");
2701 sgen_wbroot_scan_card_table (start_root, (mword)root->end_root - (mword)start_root, ctx);
2702 } SGEN_HASH_TABLE_FOREACH_END;
2706 * ######################################################################
2707 * ######## Thread handling (stop/start code)
2708 * ######################################################################
2712 sgen_get_current_collection_generation (void)
2714 return current_collection_generation;
2717 void*
2718 sgen_thread_register (SgenThreadInfo* info, void *stack_bottom_fallback)
2720 info->tlab_start = info->tlab_next = info->tlab_temp_end = info->tlab_real_end = NULL;
2722 sgen_client_thread_register (info, stack_bottom_fallback);
2724 return info;
2727 void
2728 sgen_thread_unregister (SgenThreadInfo *p)
2730 sgen_client_thread_unregister (p);
2734 * ######################################################################
2735 * ######## Write barriers
2736 * ######################################################################
2740 * Note: the write barriers first do the needed GC work and then do the actual store:
2741 * this way the value is visible to the conservative GC scan after the write barrier
2742 * itself. If a GC interrupts the barrier in the middle, the value will be kept alive by
2743 * the conservative scan, otherwise by the remembered set scan.
2747 * mono_gc_wbarrier_arrayref_copy:
2749 void
2750 mono_gc_wbarrier_arrayref_copy (gpointer dest_ptr, gpointer src_ptr, int count)
2752 HEAVY_STAT (++stat_wbarrier_arrayref_copy);
2753 /* This check can be done without taking a lock, since the dest_ptr array is pinned. */
2754 if (ptr_in_nursery (dest_ptr) || count <= 0) {
2755 mono_gc_memmove_aligned (dest_ptr, src_ptr, count * sizeof (gpointer));
2756 return;
2759 #ifdef SGEN_HEAVY_BINARY_PROTOCOL
2760 if (binary_protocol_is_heavy_enabled ()) {
2761 int i;
2762 for (i = 0; i < count; ++i) {
2763 gpointer dest = (gpointer*)dest_ptr + i;
2764 gpointer obj = *((gpointer*)src_ptr + i);
2765 if (obj)
2766 binary_protocol_wbarrier (dest, obj, (gpointer)LOAD_VTABLE (obj));
2769 #endif
2771 remset.wbarrier_arrayref_copy (dest_ptr, src_ptr, count);
2775 * mono_gc_wbarrier_generic_nostore:
2777 void
2778 mono_gc_wbarrier_generic_nostore (gpointer ptr)
2780 gpointer obj;
2782 HEAVY_STAT (++stat_wbarrier_generic_store);
2784 sgen_client_wbarrier_generic_nostore_check (ptr);
2786 obj = *(gpointer*)ptr;
2787 if (obj)
2788 binary_protocol_wbarrier (ptr, obj, (gpointer)LOAD_VTABLE (obj));
2791 * We need to record old->old pointer locations for the
2792 * concurrent collector.
2794 if (!ptr_in_nursery (obj) && !concurrent_collection_in_progress) {
2795 SGEN_LOG (8, "Skipping remset at %p", ptr);
2796 return;
2799 SGEN_LOG (8, "Adding remset at %p", ptr);
2801 remset.wbarrier_generic_nostore (ptr);
2805 * mono_gc_wbarrier_generic_store:
2807 void
2808 mono_gc_wbarrier_generic_store (gpointer ptr, GCObject* value)
2810 SGEN_LOG (8, "Wbarrier store at %p to %p (%s)", ptr, value, value ? sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (value)) : "null");
2811 SGEN_UPDATE_REFERENCE_ALLOW_NULL (ptr, value);
2812 if (ptr_in_nursery (value) || concurrent_collection_in_progress)
2813 mono_gc_wbarrier_generic_nostore (ptr);
2814 sgen_dummy_use (value);
2818 * mono_gc_wbarrier_generic_store_atomic:
2819 * Same as \c mono_gc_wbarrier_generic_store but performs the store
2820 * as an atomic operation with release semantics.
2822 void
2823 mono_gc_wbarrier_generic_store_atomic (gpointer ptr, GCObject *value)
2825 HEAVY_STAT (++stat_wbarrier_generic_store_atomic);
2827 SGEN_LOG (8, "Wbarrier atomic store at %p to %p (%s)", ptr, value, value ? sgen_client_vtable_get_name (SGEN_LOAD_VTABLE (value)) : "null");
2829 InterlockedWritePointer ((volatile gpointer *)ptr, value);
2831 if (ptr_in_nursery (value) || concurrent_collection_in_progress)
2832 mono_gc_wbarrier_generic_nostore (ptr);
2834 sgen_dummy_use (value);
2837 void
2838 sgen_wbarrier_value_copy_bitmap (gpointer _dest, gpointer _src, int size, unsigned bitmap)
2840 GCObject **dest = (GCObject **)_dest;
2841 GCObject **src = (GCObject **)_src;
2843 while (size) {
2844 if (bitmap & 0x1)
2845 mono_gc_wbarrier_generic_store (dest, *src);
2846 else
2847 *dest = *src;
2848 ++src;
2849 ++dest;
2850 size -= SIZEOF_VOID_P;
2851 bitmap >>= 1;
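/*
 * Example: for a value type laid out as three pointer-sized slots where
 * slots 0 and 2 hold GC references and slot 1 is raw data, a caller would
 * pass size = 3 * SIZEOF_VOID_P and bitmap = 0x5 (binary 101): slots 0 and
 * 2 go through mono_gc_wbarrier_generic_store, slot 1 is copied verbatim.
 * (The layout is illustrative.)
 */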
2856 * ######################################################################
2857 * ######## Other mono public interface functions.
2858 * ######################################################################
2861 void
2862 sgen_gc_collect (int generation)
2864 LOCK_GC;
2865 if (generation > 1)
2866 generation = 1;
2867 sgen_perform_collection (0, generation, "user request", TRUE, TRUE);
2868 UNLOCK_GC;
2872 sgen_gc_collection_count (int generation)
2874 if (generation == 0)
2875 return gc_stats.minor_gc_count;
2876 return gc_stats.major_gc_count;
2879 size_t
2880 sgen_gc_get_used_size (void)
2882 gint64 tot = 0;
2883 LOCK_GC;
2884 tot = los_memory_usage;
2885 tot += nursery_section->next_data - nursery_section->data;
2886 tot += major_collector.get_used_size ();
2887 /* FIXME: account for pinned objects */
2888 UNLOCK_GC;
2889 return tot;
2892 void
2893 sgen_env_var_error (const char *env_var, const char *fallback, const char *description_format, ...)
2895 va_list ap;
2897 va_start (ap, description_format);
2899 fprintf (stderr, "Warning: In environment variable `%s': ", env_var);
2900 vfprintf (stderr, description_format, ap);
2901 if (fallback)
2902 fprintf (stderr, " - %s", fallback);
2903 fprintf (stderr, "\n");
2905 va_end (ap);
2908 static gboolean
2909 parse_double_in_interval (const char *env_var, const char *opt_name, const char *opt, double min, double max, double *result)
2911 char *endptr;
2912 double val = strtod (opt, &endptr);
2913 if (endptr == opt) {
2914 sgen_env_var_error (env_var, "Using default value.", "`%s` must be a number.", opt_name);
2915 return FALSE;
2917 else if (val < min || val > max) {
2918 sgen_env_var_error (env_var, "Using default value.", "`%s` must be between %.2f - %.2f.", opt_name, min, max);
2919 return FALSE;
2921 *result = val;
2922 return TRUE;
2925 void
2926 sgen_gc_init (void)
2928 const char *env;
2929 char **opts, **ptr;
2930 char *major_collector_opt = NULL;
2931 char *minor_collector_opt = NULL;
2932 char *params_opts = NULL;
2933 char *debug_opts = NULL;
2934 size_t max_heap = 0;
2935 size_t soft_limit = 0;
2936 int result;
2937 gboolean debug_print_allowance = FALSE;
2938 double allowance_ratio = 0, save_target = 0;
2939 gboolean cement_enabled = TRUE;
2941 do {
2942 result = InterlockedCompareExchange (&gc_initialized, -1, 0);
2943 switch (result) {
2944 case 1:
2945 /* already inited */
2946 return;
2947 case -1:
2948 /* being inited by another thread */
2949 mono_thread_info_usleep (1000);
2950 break;
2951 case 0:
2952 /* we will init it */
2953 break;
2954 default:
2955 g_assert_not_reached ();
2957 } while (result != 0);
2959 SGEN_TV_GETTIME (sgen_init_timestamp);
2961 #ifdef SGEN_WITHOUT_MONO
2962 mono_thread_smr_init ();
2963 #endif
2965 mono_coop_mutex_init (&gc_mutex);
2967 gc_debug_file = stderr;
2969 mono_coop_mutex_init (&sgen_interruption_mutex);
2971 if ((env = g_getenv (MONO_GC_PARAMS_NAME)) || gc_params_options) {
2972 params_opts = g_strdup_printf ("%s,%s", gc_params_options ? gc_params_options : "", env ? env : "");
2975 if (params_opts) {
2976 opts = g_strsplit (params_opts, ",", -1);
2977 for (ptr = opts; *ptr; ++ptr) {
2978 char *opt = *ptr;
2979 if (g_str_has_prefix (opt, "major=")) {
2980 opt = strchr (opt, '=') + 1;
2981 major_collector_opt = g_strdup (opt);
2982 } else if (g_str_has_prefix (opt, "minor=")) {
2983 opt = strchr (opt, '=') + 1;
2984 minor_collector_opt = g_strdup (opt);
2987 } else {
2988 opts = NULL;
2991 init_stats ();
2992 sgen_init_internal_allocator ();
2993 sgen_init_nursery_allocator ();
2994 sgen_init_fin_weak_hash ();
2995 sgen_init_hash_table ();
2996 sgen_init_descriptors ();
2997 sgen_init_gray_queues ();
2998 sgen_init_allocator ();
2999 sgen_init_gchandles ();
3001 sgen_register_fixed_internal_mem_type (INTERNAL_MEM_SECTION, SGEN_SIZEOF_GC_MEM_SECTION);
3002 sgen_register_fixed_internal_mem_type (INTERNAL_MEM_GRAY_QUEUE, sizeof (GrayQueueSection));
3004 sgen_client_init ();
3006 if (!minor_collector_opt) {
3007 sgen_simple_nursery_init (&sgen_minor_collector);
3008 } else {
3009 if (!strcmp (minor_collector_opt, "simple")) {
3010 use_simple_nursery:
3011 sgen_simple_nursery_init (&sgen_minor_collector);
3012 } else if (!strcmp (minor_collector_opt, "split")) {
3013 sgen_split_nursery_init (&sgen_minor_collector);
3014 } else {
3015 sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using `simple` instead.", "Unknown minor collector `%s'.", minor_collector_opt);
3016 goto use_simple_nursery;
3020 if (!major_collector_opt) {
3021 use_default_major:
3022 DEFAULT_MAJOR_INIT (&major_collector);
3023 } else if (!strcmp (major_collector_opt, "marksweep")) {
3024 sgen_marksweep_init (&major_collector);
3025 } else if (!strcmp (major_collector_opt, "marksweep-conc")) {
3026 sgen_marksweep_conc_init (&major_collector);
3027 } else if (!strcmp (major_collector_opt, "marksweep-conc-par")) {
3028 sgen_marksweep_conc_par_init (&major_collector);
3029 } else {
3030 sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using `" DEFAULT_MAJOR_NAME "` instead.", "Unknown major collector `%s'.", major_collector_opt);
3031 goto use_default_major;
3034 sgen_nursery_size = DEFAULT_NURSERY_SIZE;
3036 if (opts) {
3037 gboolean usage_printed = FALSE;
3039 for (ptr = opts; *ptr; ++ptr) {
3040 char *opt = *ptr;
3041 if (!strcmp (opt, ""))
3042 continue;
3043 if (g_str_has_prefix (opt, "major="))
3044 continue;
3045 if (g_str_has_prefix (opt, "minor="))
3046 continue;
3047 if (g_str_has_prefix (opt, "max-heap-size=")) {
3048 size_t page_size = mono_pagesize ();
3049 size_t max_heap_candidate = 0;
3050 opt = strchr (opt, '=') + 1;
3051 if (*opt && mono_gc_parse_environment_string_extract_number (opt, &max_heap_candidate)) {
3052 max_heap = (max_heap_candidate + page_size - 1) & ~(size_t)(page_size - 1);
3053 if (max_heap != max_heap_candidate)
3054 sgen_env_var_error (MONO_GC_PARAMS_NAME, "Rounding up.", "`max-heap-size` size must be a multiple of %d.", page_size);
3055 } else {
3056 sgen_env_var_error (MONO_GC_PARAMS_NAME, NULL, "`max-heap-size` must be an integer.");
3058 continue;
3060 if (g_str_has_prefix (opt, "soft-heap-limit=")) {
3061 opt = strchr (opt, '=') + 1;
3062 if (*opt && mono_gc_parse_environment_string_extract_number (opt, &soft_limit)) {
3063 if (soft_limit <= 0) {
3064 sgen_env_var_error (MONO_GC_PARAMS_NAME, NULL, "`soft-heap-limit` must be positive.");
3065 soft_limit = 0;
3067 } else {
3068 sgen_env_var_error (MONO_GC_PARAMS_NAME, NULL, "`soft-heap-limit` must be an integer.");
3070 continue;
3073 #ifdef USER_CONFIG
3074 if (g_str_has_prefix (opt, "nursery-size=")) {
3075 size_t val;
3076 opt = strchr (opt, '=') + 1;
3077 if (*opt && mono_gc_parse_environment_string_extract_number (opt, &val)) {
3078 if ((val & (val - 1))) {
3079 sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using default value.", "`nursery-size` must be a power of two.");
3080 continue;
3083 if (val < SGEN_MAX_NURSERY_WASTE) {
3084 sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using default value.",
3085 "`nursery-size` must be at least %d bytes.", SGEN_MAX_NURSERY_WASTE);
3086 continue;
3089 sgen_nursery_size = val;
3090 sgen_nursery_bits = 0;
3091 while (ONE_P << (++ sgen_nursery_bits) != sgen_nursery_size)
3093 } else {
3094 sgen_env_var_error (MONO_GC_PARAMS_NAME, "Using default value.", "`nursery-size` must be an integer.");
3095 continue;
3097 continue;
3099 #endif
3100 if (g_str_has_prefix (opt, "save-target-ratio=")) {
3101 double val;
3102 opt = strchr (opt, '=') + 1;
3103 if (parse_double_in_interval (MONO_GC_PARAMS_NAME, "save-target-ratio", opt,
3104 SGEN_MIN_SAVE_TARGET_RATIO, SGEN_MAX_SAVE_TARGET_RATIO, &val)) {
3105 save_target = val;
3107 continue;
3109 if (g_str_has_prefix (opt, "default-allowance-ratio=")) {
3110 double val;
3111 opt = strchr (opt, '=') + 1;
3112 if (parse_double_in_interval (MONO_GC_PARAMS_NAME, "default-allowance-ratio", opt,
3113 SGEN_MIN_ALLOWANCE_NURSERY_SIZE_RATIO, SGEN_MAX_ALLOWANCE_NURSERY_SIZE_RATIO, &val)) {
3114 allowance_ratio = val;
3116 continue;
3119 if (!strcmp (opt, "cementing")) {
3120 cement_enabled = TRUE;
3121 continue;
3123 if (!strcmp (opt, "no-cementing")) {
3124 cement_enabled = FALSE;
3125 continue;
3128 if (!strcmp (opt, "precleaning")) {
3129 precleaning_enabled = TRUE;
3130 continue;
3132 if (!strcmp (opt, "no-precleaning")) {
3133 precleaning_enabled = FALSE;
3134 continue;
3137 if (major_collector.handle_gc_param && major_collector.handle_gc_param (opt))
3138 continue;
3140 if (sgen_minor_collector.handle_gc_param && sgen_minor_collector.handle_gc_param (opt))
3141 continue;
3143 if (sgen_client_handle_gc_param (opt))
3144 continue;
3146 sgen_env_var_error (MONO_GC_PARAMS_NAME, "Ignoring.", "Unknown option `%s`.", opt);
3148 if (usage_printed)
3149 continue;
3151 fprintf (stderr, "\n%s must be a comma-delimited list of one or more of the following:\n", MONO_GC_PARAMS_NAME);
3152 fprintf (stderr, " max-heap-size=N (where N is an integer, possibly with a k, m or a g suffix)\n");
3153 fprintf (stderr, " soft-heap-limit=N (where N is an integer, possibly with a k, m or a g suffix)\n");
3154 fprintf (stderr, " nursery-size=N (where N is an integer, possibly with a k, m or a g suffix)\n");
3155 fprintf (stderr, " major=COLLECTOR (where COLLECTOR is `marksweep', `marksweep-conc', `marksweep-conc-par')\n");
3156 fprintf (stderr, " minor=COLLECTOR (where COLLECTOR is `simple' or `split')\n");
3157 fprintf (stderr, " wbarrier=WBARRIER (where WBARRIER is `remset' or `cardtable')\n");
3158 fprintf (stderr, " [no-]cementing\n");
3159 if (major_collector.print_gc_param_usage)
3160 major_collector.print_gc_param_usage ();
3161 if (sgen_minor_collector.print_gc_param_usage)
3162 sgen_minor_collector.print_gc_param_usage ();
3163 sgen_client_print_gc_params_usage ();
3164 fprintf (stderr, " Experimental options:\n");
3165 fprintf (stderr, " save-target-ratio=R (where R must be between %.2f - %.2f).\n", SGEN_MIN_SAVE_TARGET_RATIO, SGEN_MAX_SAVE_TARGET_RATIO);
3166 fprintf (stderr, " default-allowance-ratio=R (where R must be between %.2f - %.2f).\n", SGEN_MIN_ALLOWANCE_NURSERY_SIZE_RATIO, SGEN_MAX_ALLOWANCE_NURSERY_SIZE_RATIO);
3167 fprintf (stderr, "\n");
3169 usage_printed = TRUE;
3171 g_strfreev (opts);
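/*
 * Example (illustrative): the options parsed above come from MONO_GC_PARAMS,
 * e.g.
 *
 *   MONO_GC_PARAMS=major=marksweep-conc,minor=split,nursery-size=8m,soft-heap-limit=512m
 *
 * selects the concurrent major collector, the split nursery, an 8 MB nursery
 * (a power of two, as required above) and a 512 MB soft heap limit.
 */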
3174 if (major_collector_opt)
3175 g_free (major_collector_opt);
3177 if (minor_collector_opt)
3178 g_free (minor_collector_opt);
3180 if (params_opts)
3181 g_free (params_opts);
3183 alloc_nursery ();
3185 sgen_pinning_init ();
3186 sgen_cement_init (cement_enabled);
3188 if ((env = g_getenv (MONO_GC_DEBUG_NAME)) || gc_debug_options) {
3189 debug_opts = g_strdup_printf ("%s,%s", gc_debug_options ? gc_debug_options : "", env ? env : "");
3192 if (debug_opts) {
3193 gboolean usage_printed = FALSE;
3195 opts = g_strsplit (debug_opts, ",", -1);
3196 for (ptr = opts; ptr && *ptr; ptr ++) {
3197 char *opt = *ptr;
3198 if (!strcmp (opt, ""))
3199 continue;
3200 if (opt [0] >= '0' && opt [0] <= '9') {
3201 gc_debug_level = atoi (opt);
3202 opt++;
3203 if (opt [0] == ':')
3204 opt++;
3205 if (opt [0]) {
3206 char *rf = g_strdup_printf ("%s.%d", opt, mono_process_current_pid ());
3207 gc_debug_file = fopen (rf, "wb");
3208 if (!gc_debug_file)
3209 gc_debug_file = stderr;
3210 g_free (rf);
3212 } else if (!strcmp (opt, "print-allowance")) {
3213 debug_print_allowance = TRUE;
3214 } else if (!strcmp (opt, "print-pinning")) {
3215 sgen_pin_stats_enable ();
3216 } else if (!strcmp (opt, "verify-before-allocs")) {
3217 verify_before_allocs = 1;
3218 has_per_allocation_action = TRUE;
3219 } else if (g_str_has_prefix (opt, "max-valloc-size=")) {
3220 size_t max_valloc_size;
3221 char *arg = strchr (opt, '=') + 1;
3222 if (*arg && mono_gc_parse_environment_string_extract_number (arg, &max_valloc_size)) {
3223 mono_valloc_set_limit (max_valloc_size);
3224 } else {
3225 sgen_env_var_error (MONO_GC_DEBUG_NAME, NULL, "`max-valloc-size` must be an integer.");
3227 continue;
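/*
 * Example (illustrative): MONO_GC_DEBUG=max-valloc-size=256m caps virtual
 * allocations from the OS at 256 MB via mono_valloc_set_limit ().
 */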
3228 } else if (g_str_has_prefix (opt, "verify-before-allocs=")) {
3229 char *arg = strchr (opt, '=') + 1;
3230 verify_before_allocs = atoi (arg);
3231 has_per_allocation_action = TRUE;
3232 } else if (!strcmp (opt, "collect-before-allocs")) {
3233 collect_before_allocs = 1;
3234 has_per_allocation_action = TRUE;
3235 } else if (g_str_has_prefix (opt, "collect-before-allocs=")) {
3236 char *arg = strchr (opt, '=') + 1;
3237 has_per_allocation_action = TRUE;
3238 collect_before_allocs = atoi (arg);
3239 } else if (!strcmp (opt, "verify-before-collections")) {
3240 whole_heap_check_before_collection = TRUE;
3241 } else if (!strcmp (opt, "check-remset-consistency")) {
3242 remset_consistency_checks = TRUE;
3243 nursery_clear_policy = CLEAR_AT_GC;
3244 } else if (!strcmp (opt, "mod-union-consistency-check")) {
3245 if (!major_collector.is_concurrent) {
3246 sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring.", "`mod-union-consistency-check` only works with concurrent major collector.");
3247 continue;
3249 mod_union_consistency_check = TRUE;
3250 } else if (!strcmp (opt, "check-mark-bits")) {
3251 check_mark_bits_after_major_collection = TRUE;
3252 } else if (!strcmp (opt, "check-nursery-pinned")) {
3253 check_nursery_objects_pinned = TRUE;
3254 } else if (!strcmp (opt, "clear-at-gc")) {
3255 nursery_clear_policy = CLEAR_AT_GC;
3256 } else if (!strcmp (opt, "clear-nursery-at-gc")) {
3257 nursery_clear_policy = CLEAR_AT_GC;
3258 } else if (!strcmp (opt, "clear-at-tlab-creation")) {
3259 nursery_clear_policy = CLEAR_AT_TLAB_CREATION;
3260 } else if (!strcmp (opt, "debug-clear-at-tlab-creation")) {
3261 nursery_clear_policy = CLEAR_AT_TLAB_CREATION_DEBUG;
3262 } else if (!strcmp (opt, "check-scan-starts")) {
3263 do_scan_starts_check = TRUE;
3264 } else if (!strcmp (opt, "verify-nursery-at-minor-gc")) {
3265 do_verify_nursery = TRUE;
3266 } else if (!strcmp (opt, "check-concurrent")) {
3267 if (!major_collector.is_concurrent) {
3268 sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring.", "`check-concurrent` only works with concurrent major collectors.");
3269 continue;
3271 nursery_clear_policy = CLEAR_AT_GC;
3272 do_concurrent_checks = TRUE;
3273 } else if (!strcmp (opt, "dump-nursery-at-minor-gc")) {
3274 do_dump_nursery_content = TRUE;
3275 } else if (!strcmp (opt, "disable-minor")) {
3276 disable_minor_collections = TRUE;
3277 } else if (!strcmp (opt, "disable-major")) {
3278 disable_major_collections = TRUE;
3279 } else if (g_str_has_prefix (opt, "heap-dump=")) {
3280 char *filename = strchr (opt, '=') + 1;
3281 nursery_clear_policy = CLEAR_AT_GC;
3282 sgen_debug_enable_heap_dump (filename);
3283 } else if (g_str_has_prefix (opt, "binary-protocol=")) {
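				/* Format: binary-protocol=<filename>[:<file-size-limit>];
				 * the limit, if given, must parse as an integer. */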
				char *filename = strchr (opt, '=') + 1;
				char *colon = strrchr (filename, ':');
				size_t limit = 0;
				if (colon) {
					if (!mono_gc_parse_environment_string_extract_number (colon + 1, &limit)) {
						sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring limit.", "Binary protocol file size limit must be an integer.");
						limit = -1;
					}
					*colon = '\0';
				}
				binary_protocol_init (filename, (long long)limit);
			} else if (!strcmp (opt, "nursery-canaries")) {
				do_verify_nursery = TRUE;
				enable_nursery_canaries = TRUE;
			} else if (!sgen_client_handle_gc_debug (opt)) {
				sgen_env_var_error (MONO_GC_DEBUG_NAME, "Ignoring.", "Unknown option `%s`.", opt);

				if (usage_printed)
					continue;

				fprintf (stderr, "\n%s must be of the format [<l>[:<filename>]|<option>]+ where <l> is a debug level 0-9.\n", MONO_GC_DEBUG_NAME);
				fprintf (stderr, "Valid <option>s are:\n");
				fprintf (stderr, "  collect-before-allocs[=<n>]\n");
				fprintf (stderr, "  verify-before-allocs[=<n>]\n");
				fprintf (stderr, "  max-valloc-size=N (where N is an integer, possibly with a k, m or g suffix)\n");
				fprintf (stderr, "  check-remset-consistency\n");
				fprintf (stderr, "  check-mark-bits\n");
				fprintf (stderr, "  check-nursery-pinned\n");
				fprintf (stderr, "  verify-before-collections\n");
				fprintf (stderr, "  verify-nursery-at-minor-gc\n");
				fprintf (stderr, "  dump-nursery-at-minor-gc\n");
				fprintf (stderr, "  disable-minor\n");
				fprintf (stderr, "  disable-major\n");
				fprintf (stderr, "  check-concurrent\n");
				fprintf (stderr, "  clear-[nursery-]at-gc\n");
				fprintf (stderr, "  clear-at-tlab-creation\n");
				fprintf (stderr, "  debug-clear-at-tlab-creation\n");
				fprintf (stderr, "  check-scan-starts\n");
				fprintf (stderr, "  print-allowance\n");
				fprintf (stderr, "  print-pinning\n");
				fprintf (stderr, "  heap-dump=<filename>\n");
				fprintf (stderr, "  binary-protocol=<filename>[:<file-size-limit>]\n");
				fprintf (stderr, "  nursery-canaries\n");
				sgen_client_print_gc_debug_usage ();
				fprintf (stderr, "\n");

				usage_printed = TRUE;
			}
		}
		g_strfreev (opts);
	}

	if (debug_opts)
		g_free (debug_opts);

	if (check_mark_bits_after_major_collection)
		nursery_clear_policy = CLEAR_AT_GC;

	if (major_collector.post_param_init)
		major_collector.post_param_init (&major_collector);

	if (major_collector.needs_thread_pool) {
		int num_workers = 1;
		if (major_collector.is_parallel) {
			/* FIXME Detect the number of physical cores, instead of logical */
			num_workers = mono_cpu_count () / 2;
			if (num_workers < 1)
				num_workers = 1;
		}
		sgen_workers_init (num_workers, (SgenWorkerCallback) major_collector.worker_init_cb);
	}
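
	/* The memory governor consumes the heap sizing knobs parsed above from
	 * MONO_GC_PARAMS and MONO_GC_DEBUG. */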
	sgen_memgov_init (max_heap, soft_limit, debug_print_allowance, allowance_ratio, save_target);

	memset (&remset, 0, sizeof (remset));

	sgen_card_table_init (&remset);

	sgen_register_root (NULL, 0, sgen_make_user_root_descriptor (sgen_mark_normal_gc_handles), ROOT_TYPE_NORMAL, MONO_ROOT_SOURCE_GC_HANDLE, "normal gc handles");

	gc_initialized = 1;

	sgen_init_bridge ();
}

gboolean
sgen_gc_initialized (void)
{
	return gc_initialized > 0;
}

NurseryClearPolicy
sgen_get_nursery_clear_policy (void)
{
	return nursery_clear_policy;
}
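
/*
 * The GC lock serializes collections and other global SGen state changes.
 * It is a coop mutex, so a thread blocked on it is treated as GC-safe.
 */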
void
sgen_gc_lock (void)
{
	mono_coop_mutex_lock (&gc_mutex);
}

void
sgen_gc_unlock (void)
{
	mono_coop_mutex_unlock (&gc_mutex);
}

void
sgen_major_collector_iterate_live_block_ranges (sgen_cardtable_block_callback callback)
{
	major_collector.iterate_live_block_ranges (callback);
}

void
sgen_major_collector_iterate_block_ranges (sgen_cardtable_block_callback callback)
{
	major_collector.iterate_block_ranges (callback);
}

SgenMajorCollector*
sgen_get_major_collector (void)
{
	return &major_collector;
}

SgenRememberedSet*
sgen_get_remset (void)
{
	return &remset;
}
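
/* Card counts for the heavy binary protocol: total vs. marked cards in the
 * major heap and in the large object space. */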
static void
count_cards (long long *major_total, long long *major_marked, long long *los_total, long long *los_marked)
{
	sgen_get_major_collector ()->count_cards (major_total, major_marked);
	sgen_los_count_cards (los_total, los_marked);
}

static gboolean world_is_stopped = FALSE;

/* LOCKING: assumes the GC lock is held */
void
sgen_stop_world (int generation)
{
	long long major_total = -1, major_marked = -1, los_total = -1, los_marked = -1;

	SGEN_ASSERT (0, !world_is_stopped, "Why are we stopping a stopped world?");

	binary_protocol_world_stopping (generation, sgen_timestamp (), (gpointer) (gsize) mono_native_thread_id_get ());

	sgen_client_stop_world (generation);

	world_is_stopped = TRUE;

	if (binary_protocol_is_heavy_enabled ())
		count_cards (&major_total, &major_marked, &los_total, &los_marked);
	binary_protocol_world_stopped (generation, sgen_timestamp (), major_total, major_marked, los_total, los_marked);
}

/* LOCKING: assumes the GC lock is held */
void
sgen_restart_world (int generation)
{
	long long major_total = -1, major_marked = -1, los_total = -1, los_marked = -1;
	gint64 stw_time;

	SGEN_ASSERT (0, world_is_stopped, "Why are we restarting a running world?");

	if (binary_protocol_is_heavy_enabled ())
		count_cards (&major_total, &major_marked, &los_total, &los_marked);
	binary_protocol_world_restarting (generation, sgen_timestamp (), major_total, major_marked, los_total, los_marked);

	world_is_stopped = FALSE;

	sgen_client_restart_world (generation, &stw_time);

	binary_protocol_world_restarted (generation, sgen_timestamp ());

	if (sgen_client_bridge_need_processing ())
		sgen_client_bridge_processing_finish (generation);

	sgen_memgov_collection_end (generation, stw_time);
}

gboolean
sgen_is_world_stopped (void)
{
	return world_is_stopped;
}
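
/* Debugging helper: stop the world, get rid of nursery fragments and verify
 * the whole heap. */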
void
sgen_check_whole_heap_stw (void)
{
	sgen_stop_world (0);
	sgen_clear_nursery_fragments ();
	sgen_check_whole_heap (TRUE);
	sgen_restart_world (0);
}
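
/* Elapsed time since SGen initialization, in the units of the SGEN_TV timer
 * macros; used for binary protocol timestamps. */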
gint64
sgen_timestamp (void)
{
	SGEN_TV_DECLARE (timestamp);
	SGEN_TV_GETTIME (timestamp);
	return SGEN_TV_ELAPSED (sgen_init_timestamp, timestamp);
}

#endif /* HAVE_SGEN_GC */