// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

	"runtime/internal/atomic"
	"runtime/internal/sys"
// Functions called by C code.
//go:linkname goparkunlock
//go:linkname newextram
//go:linkname acquirep
//go:linkname releasep
//go:linkname incidlelocked
//go:linkname schedinit
//go:linkname handoffp
//go:linkname stoplockedm
//go:linkname schedule
//go:linkname reentersyscall
//go:linkname reentersyscallblock
//go:linkname exitsyscall
//go:linkname globrunqput
//go:linkname pidleget

// Exported for test (see runtime/testdata/testprogcgo/dropm_stub.go).

// Function called by misc/cgo/test.
//go:linkname lockedOSThread
// C functions for thread and context management.

func malg(bool, bool, *unsafe.Pointer, *uintptr) *g
func resetNewG(*g, *unsafe.Pointer, *uintptr)
func makeGContext(*g, unsafe.Pointer, uintptr)
func getTraceback(me, gp *g)

func _cgo_notify_runtime_init_done()
func alreadyInCallers() bool
// Functions created by the compiler.
//extern __go_init_main

// set using cmd/go/internal/modload.ModInfoProg
// Goroutine scheduler
// The scheduler's job is to distribute ready-to-run goroutines over worker threads.
//
// The main concepts are:
// G - goroutine.
// M - worker thread, or machine.
// P - processor, a resource that is required to execute Go code.
//     M must have an associated P to execute Go code, however it can be
//     blocked or in a syscall w/o an associated P.
//
// Design doc at https://golang.org/s/go11sched.
// Worker thread parking/unparking.
// We need to balance between keeping enough running worker threads to utilize
// available hardware parallelism and parking excessive running worker threads
// to conserve CPU resources and power. This is not simple for two reasons:
// (1) scheduler state is intentionally distributed (in particular, per-P work
// queues), so it is not possible to compute global predicates on fast paths;
// (2) for optimal thread management we would need to know the future (don't park
// a worker thread when a new goroutine will be readied in the near future).
// Three rejected approaches that would work badly:
// 1. Centralize all scheduler state (would inhibit scalability).
// 2. Direct goroutine handoff. That is, when we ready a new goroutine and there
//    is a spare P, unpark a thread and hand it the P and the goroutine.
//    This would lead to thread state thrashing, as the thread that readied the
//    goroutine can be out of work the very next moment, at which point we would
//    need to park it. It would also destroy locality of computation, as we want
//    to preserve dependent goroutines on the same thread, and it would introduce
//    additional latency.
// 3. Unpark an additional thread whenever we ready a goroutine and there is an
//    idle P, but don't do handoff. This would lead to excessive thread parking/
//    unparking as the additional threads will instantly park without discovering
//    any work to do.
// The current approach:
//
// This approach applies to three primary sources of potential work: readying a
// goroutine, new/modified-earlier timers, and idle-priority GC. See below for
// additional details.
//
// We unpark an additional thread when we submit work if (this is wakep()):
// 1. There is an idle P, and
// 2. There are no "spinning" worker threads.
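//
// As an illustrative sketch only (the real wakep later in this file takes no
// arguments and the exact atomics differ), the check-and-wake step amounts to:
//
//	// Sketch: wake a new spinning M only if a P is idle and no M is
//	// already searching for work.
//	if atomic.Load(&sched.npidle) != 0 && atomic.Load(&sched.nmspinning) == 0 {
//		startm(nil, true) // start an M in spinning mode
//	}
//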
// A worker thread is considered spinning if it is out of local work and did
// not find work in the global run queue or netpoller; the spinning state is
// denoted in m.spinning and in sched.nmspinning. Threads unparked this way are
// also considered spinning; we don't do goroutine handoff so such threads are
// out of work initially. Spinning threads spin on looking for work in per-P
// run queues and timer heaps or from the GC before parking. If a spinning
// thread finds work it takes itself out of the spinning state and proceeds to
// execution. If it does not find work it takes itself out of the spinning
// state and then parks.
//
// If there is at least one spinning thread (sched.nmspinning>0), we don't
// unpark new threads when submitting work. To compensate for that, if the last
// spinning thread finds work and stops spinning, it must unpark a new spinning
// thread. This approach smooths out unjustified spikes of thread unparking,
// but at the same time guarantees eventual maximal CPU parallelism
// utilization.
// The main implementation complication is that we need to be very careful
// during spinning->non-spinning thread transition. This transition can race
// with submission of new work, and one party or the other needs to unpark
// another worker thread. If they both fail to do that, we can end up with
// semi-persistent CPU underutilization.
// The general pattern for submission is:
// 1. Submit work to the local run queue, timer heap, or GC state.
// 2. #StoreLoad-style memory barrier.
// 3. Check sched.nmspinning.
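//
// An illustrative sketch of that pattern (not literal runtime code; on the
// real fast paths the barrier comes from the atomic operations themselves):
//
//	runqput(pp, gp, true)                    // 1. submit to the local run queue
//	// 2. #StoreLoad barrier, provided by the atomic load below
//	if atomic.Load(&sched.nmspinning) == 0 { // 3. check for spinning threads
//		wakep()
//	}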
// The general pattern for spinning->non-spinning transition is:
// 1. Decrement nmspinning.
// 2. #StoreLoad-style memory barrier.
// 3. Check all per-P work queues and GC for new work.
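//
// And the mirror-image sketch for the transition side (findRunnableOnAnyP is
// a hypothetical stand-in for the rechecking the real code performs):
//
//	if int32(atomic.Xadd(&sched.nmspinning, -1)) == 0 { // 1+2. atomic decrement doubles as the barrier
//		if gp := findRunnableOnAnyP(); gp != nil {  // 3. recheck all work sources
//			wakep() // last spinner found work: unpark a replacement
//		}
//	}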
// Note that all this complexity does not apply to the global run queue, as we
// are not sloppy about thread unparking when submitting to the global queue.
// Also see comments for nmspinning manipulation.
// How these different sources of work behave varies, though it doesn't affect
// the synchronization approach:
// * Ready goroutine: this is an obvious source of work; the goroutine is
//   immediately ready and must run on some thread eventually.
// * New/modified-earlier timer: The current timer implementation (see time.go)
//   uses netpoll in a thread with no work available to wait for the soonest
//   timer. If there is no thread waiting, we want a new spinning thread to go
//   wait.
// * Idle-priority GC: The GC wakes a stopped idle thread to contribute to
//   background GC work (note: currently disabled per golang.org/issue/19112).
//   Also see golang.org/issue/44313, as this should be extended to all GC
//   workers.
// main_init_done is a signal used by cgocallbackg that initialization
// has been completed. It is made before _cgo_notify_runtime_init_done,
// so all cgo calls can rely on it existing. When main_init is complete,
// it is closed, meaning cgocallbackg can reliably receive from it.
var main_init_done chan bool
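
// Because main_init_done is closed rather than sent on, a consumer can
// simply receive from it; illustratively (cgocallbackg's actual logic
// lives elsewhere):
//
//	<-main_init_done // blocks until main_init completes, then always proceeds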
// mainStarted indicates that the main M has started.

// runtimeInitTime is the nanotime() at which the runtime started.
var runtimeInitTime int64

// Value to use for signal mask for newly created M's.
var initSigmask sigset
// The main goroutine.
func main(unsafe.Pointer) {
	// Max stack size is 1 GB on 64-bit, 250 MB on 32-bit.
	// Using decimal instead of binary GB and MB because
	// they look nicer in the stack overflow failure message.
	if goarch.PtrSize == 8 {
		maxstacksize = 1000000000
	} else {
		maxstacksize = 250000000
	}

	// An upper limit for max stack size. Used to avoid random crashes
	// after calling SetMaxStack and trying to allocate a stack that is too big,
	// since stackalloc works with 32-bit sizes.
	// Not used by gofrontend.
	// maxstackceiling = 2 * maxstacksize

	// Allow newproc to start new Ms.
	mainStarted = true

	if GOARCH != "wasm" { // no threads on wasm yet, so no sysmon
		newm(sysmon, nil, -1)
	}
	// Lock the main goroutine onto this, the main OS thread,
	// during initialization. Most programs won't care, but a few
	// do require certain calls to be made by the main thread.
	// Those can arrange for main.main to run in the main thread
	// by calling runtime.LockOSThread during initialization
	// to preserve the lock.
	lockOSThread()

	if getg().m != &m0 {
		throw("runtime.main not on m0")
	}

	// Record when the world started.
	// Must be before doInit for tracing init.
	runtimeInitTime = nanotime()
	if runtimeInitTime == 0 {
		throw("nanotime returning zero")
	}

	if debug.inittrace != 0 {
		inittrace.id = getg().goid
		inittrace.active = true
	}
	// doInit(&runtime_inittask) // Must be before defer.

	// Defer unlock so that runtime.Goexit during init does the unlock too.

	main_init_done = make(chan bool)

	// Start the template thread in case we enter Go from
	// a C-created thread and need to create a new thread.
	startTemplateThread()
	_cgo_notify_runtime_init_done()

	fn := main_init // make an indirect call, as the linker doesn't know the address of the main package when laying down the runtime
	fn()

	// For gccgo we have to wait until after main is initialized
	// to enable GC, because initializing main registers the GC roots.
	gcenable()

	// Disable init tracing after main init done to avoid overhead
	// of collecting statistics in malloc and newproc.
	inittrace.active = false

	close(main_init_done)
	if isarchive || islibrary {
		// A program compiled with -buildmode=c-archive or c-shared
		// has a main, but it is not executed.
		return
	}
	fn = main_main // make an indirect call, as the linker doesn't know the address of the main package when laying down the runtime
	fn()

	// Make racy client program work: if panicking on
	// another goroutine at the same time as main returns,
	// let the other goroutine finish printing the panic trace.
	// Once it does, it will exit. See issues 3934 and 20018.
	if atomic.Load(&runningPanicDefers) != 0 {
		// Running deferred functions should not take long.
		for c := 0; c < 1000; c++ {
			if atomic.Load(&runningPanicDefers) == 0 {
				break
			}
			Gosched()
		}
	}
	if atomic.Load(&panicking) != 0 {
		gopark(nil, nil, waitReasonPanicWait, traceEvGoStop, 1)
	}

	exit(0)
}
// os_beforeExit is called from os.Exit(0).
//go:linkname os_beforeExit os.runtime__beforeExit
func os_beforeExit() {
}

// start forcegc helper goroutine
func init() {
	expectSystemGoroutine()
	go forcegchelper()
}
func forcegchelper() {
	lockInit(&forcegc.lock, lockRankForcegc)
	for {
		lock(&forcegc.lock)
		if forcegc.idle != 0 {
			throw("forcegc: phase error")
		}
		atomic.Store(&forcegc.idle, 1)
		goparkunlock(&forcegc.lock, waitReasonForceGCIdle, traceEvGoBlock, 1)
		// this goroutine is explicitly resumed by sysmon
		if debug.gctrace > 0 {
			println("GC forced")
		}
		// Time-triggered, fully concurrent.
		gcStart(gcTrigger{kind: gcTriggerTime, now: nanotime()})
	}
}
// Gosched yields the processor, allowing other goroutines to run. It does not
// suspend the current goroutine, so execution resumes automatically.
func Gosched() {
	checkTimeouts()
	mcall(gosched_m)
}

// goschedguarded yields the processor like gosched, but also checks
// for forbidden states and opts out of the yield in those cases.
func goschedguarded() {
	mcall(goschedguarded_m)
}
// Puts the current goroutine into a waiting state and calls unlockf on the
// system stack.
//
// If unlockf returns false, the goroutine is resumed.
//
// unlockf must not access this G's stack, as it may be moved between
// the call to gopark and the call to unlockf.
//
// Note that because unlockf is called after putting the G into a waiting
// state, the G may have already been readied by the time unlockf is called
// unless there is external synchronization preventing the G from being
// readied. If unlockf returns false, it must guarantee that the G cannot be
// externally readied.
//
// Reason explains why the goroutine has been parked. It is displayed in stack
// traces and heap dumps. Reasons should be unique and descriptive. Do not
// re-use reasons, add new ones.
func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int) {
	if reason != waitReasonSleep {
		checkTimeouts() // timeouts may expire while two goroutines keep the scheduler busy
	}
	mp := acquirem()
	gp := mp.curg
	status := readgstatus(gp)
	if status != _Grunning && status != _Gscanrunning {
		throw("gopark: bad g status")
	}
	mp.waitunlockf = unlockf
	gp.waitreason = reason
	mp.waittraceev = traceEv
	mp.waittraceskip = traceskip
	releasem(mp)
	// can't do anything that might move the G between Ms here.
	mcall(park_m)
}
// Puts the current goroutine into a waiting state and unlocks the lock.
// The goroutine can be made runnable again by calling goready(gp).
func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int) {
	gopark(parkunlock_c, unsafe.Pointer(lock), reason, traceEv, traceskip)
}
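
// An illustrative pairing of the two halves (a sketch, not code from this
// file; l, cond, and gp are assumed to exist): one goroutine parks on a
// mutex-protected condition, and another makes the condition true and
// readies it.
//
//	lock(&l)
//	for !cond {
//		// Sleeps with l released; parkunlock_c drops l for us.
//		goparkunlock(&l, waitReasonZero, traceEvGoBlock, 1)
//		lock(&l)
//	}
//	unlock(&l)
//
//	// Elsewhere, after making cond true while holding l:
//	goready(gp, 1) // marks gp runnable again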
func goready(gp *g, traceskip int) {
	systemstack(func() {
		ready(gp, traceskip, true)
	})
}
func acquireSudog() *sudog {
	// Delicate dance: the semaphore implementation calls
	// acquireSudog, acquireSudog calls new(sudog),
	// new calls malloc, malloc can call the garbage collector,
	// and the garbage collector calls the semaphore implementation
	// in stopTheWorld.
	// Break the cycle by doing acquirem/releasem around new(sudog).
	// The acquirem/releasem increments m.locks during new(sudog),
	// which keeps the garbage collector from being invoked.
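	//
	// A minimal sketch of that dance (the real cache code below does the
	// same thing; this just isolates the pattern):
	//
	//	mp := acquirem()  // m.locks++ keeps the GC from being invoked
	//	s := new(sudog)   // safe: cannot recurse into the semaphore code
	//	releasem(mp)      // m.locks--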
	mp := acquirem()
	pp := mp.p.ptr()
	if len(pp.sudogcache) == 0 {
		lock(&sched.sudoglock)
		// First, try to grab a batch from central cache.
		for len(pp.sudogcache) < cap(pp.sudogcache)/2 && sched.sudogcache != nil {
			s := sched.sudogcache
			sched.sudogcache = s.next
			s.next = nil
			pp.sudogcache = append(pp.sudogcache, s)
		}
		unlock(&sched.sudoglock)
		// If the central cache is empty, allocate a new one.
		if len(pp.sudogcache) == 0 {
			pp.sudogcache = append(pp.sudogcache, new(sudog))
		}
	}
	n := len(pp.sudogcache)
	s := pp.sudogcache[n-1]
	pp.sudogcache[n-1] = nil
	pp.sudogcache = pp.sudogcache[:n-1]
	if s.elem != nil {
		throw("acquireSudog: found s.elem != nil in cache")
	}
	releasem(mp)
	return s
}
func releaseSudog(s *sudog) {
	if s.elem != nil {
		throw("runtime: sudog with non-nil elem")
	}
	if s.isSelect {
		throw("runtime: sudog with non-false isSelect")
	}
	if s.next != nil {
		throw("runtime: sudog with non-nil next")
	}
	if s.prev != nil {
		throw("runtime: sudog with non-nil prev")
	}
	if s.waitlink != nil {
		throw("runtime: sudog with non-nil waitlink")
	}
	if s.c != nil {
		throw("runtime: sudog with non-nil c")
	}
	gp := getg()
	if gp.param != nil {
		throw("runtime: releaseSudog with non-nil gp.param")
	}
	mp := acquirem() // avoid rescheduling to another P
	pp := mp.p.ptr()
	if len(pp.sudogcache) == cap(pp.sudogcache) {
		// Transfer half of local cache to the central cache.
		var first, last *sudog
		for len(pp.sudogcache) > cap(pp.sudogcache)/2 {
			n := len(pp.sudogcache)
			p := pp.sudogcache[n-1]
			pp.sudogcache[n-1] = nil
			pp.sudogcache = pp.sudogcache[:n-1]
			if first == nil {
				first = p
			} else {
				last.next = p
			}
			last = p
		}
		lock(&sched.sudoglock)
		last.next = sched.sudogcache
		sched.sudogcache = first
		unlock(&sched.sudoglock)
	}
	pp.sudogcache = append(pp.sudogcache, s)
	releasem(mp)
}
func lockedOSThread() bool {
	gp := getg()
	return gp.lockedm != 0 && gp.m.lockedg != 0
}
// allgs contains all Gs ever created (including dead Gs), and thus
// never shrinks.
//
// Access via the slice is protected by allglock or stop-the-world.
// Readers that cannot take the lock may (carefully!) use the atomic
// variables below.
var (
	allglock mutex
	allgs    []*g

	// allglen and allgptr are atomic variables that contain len(allgs) and
	// &allgs[0] respectively. Proper ordering depends on totally-ordered
	// loads and stores. Writes are protected by allglock.
	//
	// allgptr is updated before allglen. Readers should read allglen
	// before allgptr to ensure that allglen is always <= len(allgptr). New
	// Gs appended during the race can be missed. For a consistent view of
	// all Gs, allglock must be held.
	//
	// allgptr copies should always be stored as a concrete type or
	// unsafe.Pointer, not uintptr, to ensure that GC can still reach it
	// even if it points to a stale array.
	allglen uintptr
	allgptr **g
)
func allgadd(gp *g) {
	if readgstatus(gp) == _Gidle {
		throw("allgadd: bad status Gidle")
	}

	lock(&allglock)
	allgs = append(allgs, gp)
	if &allgs[0] != allgptr {
		atomicstorep(unsafe.Pointer(&allgptr), unsafe.Pointer(&allgs[0]))
	}
	atomic.Storeuintptr(&allglen, uintptr(len(allgs)))
	unlock(&allglock)
}
// allGsSnapshot returns a snapshot of the slice of all Gs.
//
// The world must be stopped or allglock must be held.
func allGsSnapshot() []*g {
	assertWorldStoppedOrLockHeld(&allglock)

	// Because the world is stopped or allglock is held, allgadd
	// cannot happen concurrently with this. allgs grows
	// monotonically and existing entries never change, so we can
	// simply return a copy of the slice header. For added safety,
	// we trim everything past len because that can still change.
	return allgs[:len(allgs):len(allgs)]
}
// atomicAllG returns &allgs[0] and len(allgs) for use with atomicAllGIndex.
func atomicAllG() (**g, uintptr) {
	length := atomic.Loaduintptr(&allglen)
	ptr := (**g)(atomic.Loadp(unsafe.Pointer(&allgptr)))
	return ptr, length
}

// atomicAllGIndex returns ptr[i] with the allgptr returned from atomicAllG.
func atomicAllGIndex(ptr **g, i uintptr) *g {
	return *(**g)(add(unsafe.Pointer(ptr), i*goarch.PtrSize))
}
// forEachG calls fn on every G from allgs.
//
// forEachG takes a lock to exclude concurrent addition of new Gs.
func forEachG(fn func(gp *g)) {
	lock(&allglock)
	for _, gp := range allgs {
		fn(gp)
	}
	unlock(&allglock)
}
// forEachGRace calls fn on every G from allgs.
//
// forEachGRace avoids locking, but does not exclude addition of new Gs during
// execution, which may be missed.
func forEachGRace(fn func(gp *g)) {
	ptr, length := atomicAllG()
	for i := uintptr(0); i < length; i++ {
		gp := atomicAllGIndex(ptr, i)
		fn(gp)
	}
}
// Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
// 16 seems to provide enough amortization, but other than that it's a mostly
// arbitrary number.
const _GoidCacheBatch = 16
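
// An illustrative sketch of how newproc-style code refills the per-P cache
// using the constant above:
//
//	if pp.goidcache == pp.goidcacheend {
//		// Xadd64 returns the new end of our batch; subtract back to
//		// find its start. Skip goid 0, which is reserved.
//		pp.goidcache = atomic.Xadd64(&sched.goidgen, _GoidCacheBatch) - _GoidCacheBatch + 1
//		pp.goidcacheend = pp.goidcache + _GoidCacheBatch
//	}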
// cpuinit extracts the environment variable GODEBUG from the environment on
// Unix-like operating systems and calls internal/cpu.Initialize.
func cpuinit() {
	const prefix = "GODEBUG="
	var env string

	switch GOOS {
	case "aix", "darwin", "ios", "dragonfly", "freebsd", "netbsd", "openbsd", "illumos", "solaris", "linux":
		cpu.DebugOptions = true

		// Similar to goenv_unix but extracts the environment value for
		// GODEBUG directly.
		// TODO(moehrmann): remove when general goenvs() can be called before cpuinit()
		n := int32(0)
		for argv_index(argv, argc+1+n) != nil {
			n++
		}

		for i := int32(0); i < n; i++ {
			p := argv_index(argv, argc+1+i)
			s := *(*string)(unsafe.Pointer(&stringStruct{unsafe.Pointer(p), findnull(p)}))

			if hasPrefix(s, prefix) {
				env = gostring(p)[len(prefix):]
				break
			}
		}
	}

	cpu.Initialize(env)
}
// The bootstrap sequence is:
//
//	call osinit
//	call schedinit
//	make & queue new G
//	call runtime·mstart
//
// The new G calls runtime·main.
func schedinit() {
	_g_ := getg()
	lockInit(&sched.lock, lockRankSched)
	lockInit(&sched.sysmonlock, lockRankSysmon)
	lockInit(&sched.deferlock, lockRankDefer)
	lockInit(&sched.sudoglock, lockRankSudog)
	lockInit(&deadlock, lockRankDeadlock)
	lockInit(&paniclk, lockRankPanic)
	lockInit(&allglock, lockRankAllg)
	lockInit(&allpLock, lockRankAllp)
	// lockInit(&reflectOffs.lock, lockRankReflectOffs)
	lockInit(&finlock, lockRankFin)
	lockInit(&trace.bufLock, lockRankTraceBuf)
	lockInit(&trace.stringsLock, lockRankTraceStrings)
	lockInit(&trace.lock, lockRankTrace)
	lockInit(&cpuprof.lock, lockRankCpuprof)
	lockInit(&trace.stackTab.lock, lockRankTraceStackTab)
	// Enforce that this lock is always a leaf lock.
	// All of this lock's critical sections should be
	// extremely short.
	lockInit(&memstats.heapStats.noPLock, lockRankLeafRank)

	sched.maxmcount = 10000

	usestackmaps = probestackmaps()
	// The world starts stopped.
	worldStopped()

	cpuinit()      // must run before alginit
	alginit()      // maps, hash, fastrand must not be used before this call
	fastrandinit() // must run before mcommoninit
	mcommoninit(_g_.m, -1)

	sigsave(&_g_.m.sigmask)
	initSigmask = _g_.m.sigmask

	if offset := unsafe.Offsetof(sched.timeToRun); offset%8 != 0 {
		throw("sched.timeToRun not aligned to 8 bytes")
	}

	lock(&sched.lock)
	sched.lastpoll = uint64(nanotime())
	procs := ncpu
	// In 32-bit mode, we can burn a lot of memory on thread stacks.
	// Try to avoid this by limiting the number of threads we run
	// by default.
	if goarch.PtrSize == 4 && procs > 32 {
		procs = 32
	}

	if n, ok := atoi32(gogetenv("GOMAXPROCS")); ok && n > 0 {
		procs = n
	}
	if procresize(procs) != nil {
		throw("unknown runnable goroutine during bootstrap")
	}
	unlock(&sched.lock)
	// World is effectively started now, as P's can run.
	worldStarted()
	// For cgocheck > 1, we turn on the write barrier at all times
	// and check all pointer writes. We can't do this until after
	// procresize because the write barrier needs a P.
	if debug.cgocheck > 1 {
		writeBarrier.cgo = true
		writeBarrier.enabled = true
		for _, p := range allp {
			p.wbBuf.reset()
		}
	}
	if buildVersion == "" {
		// Condition should never trigger. This code just serves
		// to ensure runtime·buildVersion is kept in the resulting binary.
		buildVersion = "unknown"
	}
	if len(modinfo) == 1 {
		// Condition should never trigger. This code just serves
		// to ensure runtime·modinfo is kept in the resulting binary.
		modinfo = ""
	}
}
func dumpgstatus(gp *g) {
	_g_ := getg()
	print("runtime: gp: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", readgstatus(gp), "\n")
	print("runtime: g: g=", _g_, ", goid=", _g_.goid, ", g->atomicstatus=", readgstatus(_g_), "\n")
}
// sched.lock must be held.
func checkmcount() {
	assertLockHeld(&sched.lock)

	if mcount() > sched.maxmcount {
		print("runtime: program exceeds ", sched.maxmcount, "-thread limit\n")
		throw("thread exhaustion")
	}
}
// mReserveID returns the next ID to use for a new m. This new m is immediately
// considered 'running' by checkdead.
//
// sched.lock must be held.
func mReserveID() int64 {
	assertLockHeld(&sched.lock)

	if sched.mnext+1 < sched.mnext {
		throw("runtime: thread ID overflow")
	}
	id := sched.mnext
	sched.mnext++
	checkmcount()
	return id
}
// Pre-allocated ID may be passed as 'id', or omitted by passing -1.
func mcommoninit(mp *m, id int64) {
	_g_ := getg()

	// g0 stack won't make sense for user (and is not necessary unwindable).
	if _g_ != _g_.m.g0 {
		callers(1, mp.createstack[:])
	}

	lo := uint32(int64Hash(uint64(mp.id), fastrandseed))
	hi := uint32(int64Hash(uint64(cputicks()), ^fastrandseed))
	// Same behavior as for 1.17.
	// TODO: Simplify this.
	if goarch.BigEndian {
		mp.fastrand = uint64(lo)<<32 | uint64(hi)
	} else {
		mp.fastrand = uint64(hi)<<32 | uint64(lo)
	}
	// Add to allm so garbage collector doesn't free g->m
	// when it is just in a register or thread-local storage.
	mp.alllink = allm

	// NumCgoCall() iterates over allm w/o schedlock,
	// so we need to publish it safely.
	atomicstorep(unsafe.Pointer(&allm), unsafe.Pointer(mp))
	unlock(&sched.lock)
}
var fastrandseed uintptr

func fastrandinit() {
	s := (*[unsafe.Sizeof(fastrandseed)]byte)(unsafe.Pointer(&fastrandseed))[:]
	getRandomData(s)
}
// Mark gp ready to run.
func ready(gp *g, traceskip int, next bool) {
	if trace.enabled {
		traceGoUnpark(gp, traceskip)
	}

	status := readgstatus(gp)

	// Mark runnable.
	_g_ := getg()
	mp := acquirem() // disable preemption because it can be holding p in a local var
	if status&^_Gscan != _Gwaiting {
		dumpgstatus(gp)
		throw("bad g->status in ready")
	}

	// status is Gwaiting or Gscanwaiting, make Grunnable and put on runq
	casgstatus(gp, _Gwaiting, _Grunnable)
	runqput(_g_.m.p.ptr(), gp, next)
	wakep()
	releasem(mp)
}
// freezeStopWait is a large value that freezetheworld sets
// sched.stopwait to in order to request that all Gs permanently stop.
const freezeStopWait = 0x7fffffff
// freezing is set to non-zero if the runtime is trying to freeze the
// world.
var freezing uint32

// Similar to stopTheWorld but best-effort and can be called several times.
// There is no reverse operation, used during crashing.
// This function must not lock any mutexes.
func freezetheworld() {
	atomic.Store(&freezing, 1)
	// stopwait and preemption requests can be lost
	// due to races with concurrently executing threads,
	// so try several times
	for i := 0; i < 5; i++ {
		// this should tell the scheduler to not start any new goroutines
		sched.stopwait = freezeStopWait
		atomic.Store(&sched.gcwaiting, 1)
		// this should stop running goroutines
		if !preemptall() {
			break // no running goroutines
		}
		usleep(1000)
	}
	// to be sure
	usleep(1000)
	preemptall()
	usleep(1000)
}
// All reads and writes of g's status go through readgstatus, casgstatus,
// castogscanstatus, casfrom_Gscanstatus.
func readgstatus(gp *g) uint32 {
	return atomic.Load(&gp.atomicstatus)
}
// The Gscanstatuses are acting like locks and this releases them.
// If it proves to be a performance hit we should be able to make these
// simple atomic stores but for now we are going to throw if
// we see an inconsistent state.
func casfrom_Gscanstatus(gp *g, oldval, newval uint32) {
	success := false

	// Check that transition is valid.
	switch oldval {
	default:
		print("runtime: casfrom_Gscanstatus bad oldval gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
		throw("casfrom_Gscanstatus:top gp->status is not in scan state")
	case _Gscanrunnable, _Gscanwaiting, _Gscanrunning, _Gscansyscall, _Gscanpreempted:
		if newval == oldval&^_Gscan {
			success = atomic.Cas(&gp.atomicstatus, oldval, newval)
		}
	}
	if !success {
		print("runtime: casfrom_Gscanstatus failed gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
		throw("casfrom_Gscanstatus: gp->status is not in scan state")
	}
	releaseLockRank(lockRankGscan)
}
// This will return false if the gp is not in the expected status and the cas fails.
// This acts like a lock acquire while the casfromgstatus acts like a lock release.
func castogscanstatus(gp *g, oldval, newval uint32) bool {
	switch oldval {
	case _Grunnable,
		_Grunning,
		_Gwaiting,
		_Gsyscall:
		if newval == oldval|_Gscan {
			r := atomic.Cas(&gp.atomicstatus, oldval, newval)
			if r {
				acquireLockRank(lockRankGscan)
			}
			return r
		}
	}
	print("runtime: castogscanstatus oldval=", hex(oldval), " newval=", hex(newval), "\n")
	throw("castogscanstatus")
	panic("not reached")
}
// If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus
// and casfrom_Gscanstatus instead.
// casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
// put it in the Gscan state is finished.
func casgstatus(gp *g, oldval, newval uint32) {
	if (oldval&_Gscan != 0) || (newval&_Gscan != 0) || oldval == newval {
		systemstack(func() {
			print("runtime: casgstatus: oldval=", hex(oldval), " newval=", hex(newval), "\n")
			throw("casgstatus: bad incoming values")
		})
	}

	acquireLockRank(lockRankGscan)
	releaseLockRank(lockRankGscan)
	// See https://golang.org/cl/21503 for justification of the yield delay.
	const yieldDelay = 5 * 1000
	var nextYield int64

	// loop if gp->atomicstatus is in a scan state giving
	// GC time to finish and change the state to oldval.
	for i := 0; !atomic.Cas(&gp.atomicstatus, oldval, newval); i++ {
		if oldval == _Gwaiting && gp.atomicstatus == _Grunnable {
			throw("casgstatus: waiting for Gwaiting but is Grunnable")
		}
		if i == 0 {
			nextYield = nanotime() + yieldDelay
		}
		if nanotime() < nextYield {
			for x := 0; x < 10 && gp.atomicstatus != oldval; x++ {
				procyield(1)
			}
		} else {
			osyield()
			nextYield = nanotime() + yieldDelay/2
		}
	}
	// Handle tracking for scheduling latencies.
	if oldval == _Grunning {
		// Track every 8th time a goroutine transitions out of running.
		if gp.trackingSeq%gTrackingPeriod == 0 {
			gp.tracking = true
		}
		gp.trackingSeq++
	}
	if gp.tracking {
		if oldval == _Grunnable {
			// We transitioned out of runnable, so measure how much
			// time we spent in this state and add it to
			// runnableTime.
			now := nanotime()
			gp.runnableTime += now - gp.runnableStamp
			gp.runnableStamp = 0
		}
		if newval == _Grunnable {
			// We just transitioned into runnable, so record what
			// time that happened.
			now := nanotime()
			gp.runnableStamp = now
		} else if newval == _Grunning {
			// We're transitioning into running, so turn off
			// tracking and record how much time we spent in
			// runnable.
			gp.tracking = false
			sched.timeToRun.record(gp.runnableTime)
			gp.runnableTime = 0
		}
	}
// casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
//
// TODO(austin): This is the only status operation that both changes
// the status and locks the _Gscan bit. Rethink this.
func casGToPreemptScan(gp *g, old, new uint32) {
	if old != _Grunning || new != _Gscan|_Gpreempted {
		throw("bad g transition")
	}
	acquireLockRank(lockRankGscan)
	for !atomic.Cas(&gp.atomicstatus, _Grunning, _Gscan|_Gpreempted) {
	}
}
// casGFromPreempted attempts to transition gp from _Gpreempted to
// _Gwaiting. If successful, the caller is responsible for
// re-scheduling gp.
func casGFromPreempted(gp *g, old, new uint32) bool {
	if old != _Gpreempted || new != _Gwaiting {
		throw("bad g transition")
	}
	return atomic.Cas(&gp.atomicstatus, _Gpreempted, _Gwaiting)
}
// stopTheWorld stops all P's from executing goroutines, interrupting
// all goroutines at GC safe points and records reason as the reason
// for the stop. On return, only the current goroutine's P is running.
// stopTheWorld must not be called from a system stack and the caller
// must not hold worldsema. The caller must call startTheWorld when
// other P's should resume execution.
//
// stopTheWorld is safe for multiple goroutines to call at the
// same time. Each will execute its own stop, and the stops will
// be serialized.
//
// This is also used by routines that do stack dumps. If the system is
// in panic or being exited, this may not reliably stop all
// goroutines.
func stopTheWorld(reason string) {
)
1052 gp
.m
.preemptoff
= reason
1053 systemstack(func() {
1054 // Mark the goroutine which called stopTheWorld preemptible so its
1055 // stack may be scanned.
1056 // This lets a mark worker scan us while we try to stop the world
1057 // since otherwise we could get in a mutual preemption deadlock.
1058 // We must not modify anything on the G stack because a stack shrink
1059 // may occur. A stack shrink is otherwise OK though because in order
1060 // to return from this function (and to leave the system stack) we
1061 // must have preempted all goroutines, including any attempting
1062 // to scan our stack, in which case, any stack shrinking will
1063 // have already completed by the time we exit.
1064 casgstatus(gp
, _Grunning
, _Gwaiting
)
1065 stopTheWorldWithSema()
1066 casgstatus(gp
, _Gwaiting
, _Grunning
)
// startTheWorld undoes the effects of stopTheWorld.
func startTheWorld() {
	systemstack(func() { startTheWorldWithSema(false) })

	// worldsema must be held over startTheWorldWithSema to ensure
	// gomaxprocs cannot change while worldsema is held.
	//
	// Release worldsema with direct handoff to the next waiter, but
	// acquirem so that semrelease1 doesn't try to yield our time.
	//
	// Otherwise if e.g. ReadMemStats is being called in a loop,
	// it might stomp on other attempts to stop the world, such as
	// for starting or ending GC. The operation this blocks is
	// so heavy-weight that we should just try to be as fair as
	// possible here.
	//
	// We don't want to just allow us to get preempted between now
	// and releasing the semaphore because then we keep everyone
	// (including, for example, GCs) waiting longer.
	mp := acquirem()
	mp.preemptoff = ""
	semrelease1(&worldsema, true, 0)
	releasem(mp)
}
// stopTheWorldGC has the same effect as stopTheWorld, but blocks
// until the GC is not running. It also blocks a GC from starting
// until startTheWorldGC is called.
func stopTheWorldGC(reason string) {
	semacquire(&gcsema)
	stopTheWorld(reason)
}

// startTheWorldGC undoes the effects of stopTheWorldGC.
func startTheWorldGC() {
	startTheWorld()
	semrelease(&gcsema)
}
// Holding worldsema grants an M the right to try to stop the world.
var worldsema uint32 = 1

// Holding gcsema grants the M the right to block a GC, and blocks
// until the current GC is done. In particular, it prevents gomaxprocs
// from changing concurrently.
//
// TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
// being changed/enabled during a GC, remove this.
var gcsema uint32 = 1
// stopTheWorldWithSema is the core implementation of stopTheWorld.
// The caller is responsible for acquiring worldsema and disabling
// preemption first, and then should call stopTheWorldWithSema on the
// system stack:
//
//	semacquire(&worldsema, 0)
//	m.preemptoff = "reason"
//	systemstack(stopTheWorldWithSema)
//
// When finished, the caller must either call startTheWorld or undo
// these three operations separately:
//
//	m.preemptoff = ""
//	systemstack(startTheWorldWithSema)
//	semrelease(&worldsema)
//
// It is allowed to acquire worldsema once and then execute multiple
// startTheWorldWithSema/stopTheWorldWithSema pairs.
// Other P's are able to execute between successive calls to
// startTheWorldWithSema and stopTheWorldWithSema.
// Holding worldsema causes any other goroutines invoking
// stopTheWorld to block.
func stopTheWorldWithSema() {
	_g_ := getg()

	// If we hold a lock, then we won't be able to stop another M
	// that is blocked trying to acquire the lock.
	if _g_.m.locks > 0 {
		throw("stopTheWorld: holding locks")
	}

	lock(&sched.lock)
	sched.stopwait = gomaxprocs
	atomic.Store(&sched.gcwaiting, 1)
	preemptall()
	// stop current P
	_g_.m.p.ptr().status = _Pgcstop // Pgcstop is only diagnostic.
	sched.stopwait--
	// try to retake all P's in Psyscall status
	for _, p := range allp {
		s := p.status
		if s == _Psyscall && atomic.Cas(&p.status, s, _Pgcstop) {
			if trace.enabled {
				traceGoSysBlock(p)
				traceProcStop(p)
			}
			p.syscalltick++
			sched.stopwait--
		}
	}
	wait := sched.stopwait > 0
	unlock(&sched.lock)

	// wait for remaining P's to stop voluntarily
	if wait {
		for {
			// wait for 100us, then try to re-preempt in case of any races
			if notetsleep(&sched.stopnote, 100*1000) {
				noteclear(&sched.stopnote)
				break
			}
			preemptall()
		}
	}

	// sanity checks
	bad := ""
	if sched.stopwait != 0 {
		bad = "stopTheWorld: not stopped (stopwait != 0)"
	} else {
		for _, p := range allp {
			if p.status != _Pgcstop {
				bad = "stopTheWorld: not stopped (status != _Pgcstop)"
			}
		}
	}
	if atomic.Load(&freezing) != 0 {
		// Some other thread is panicking. This can cause the
		// sanity checks above to fail if the panic happens in
		// the signal handler on a stopped thread. Either way,
		// we should halt this thread.
		lock(&deadlock)
		lock(&deadlock)
	}
	if bad != "" {
		throw(bad)
	}

	worldStopped()
}
func startTheWorldWithSema(emitTraceEvent bool) int64 {
	assertWorldStopped()

	mp := acquirem() // disable preemption because it can be holding p in a local var
	if netpollinited() {
		list := netpoll(0) // non-blocking
		injectglist(&list)
	}
	lock(&sched.lock)

	procs := gomaxprocs
	p1 := procresize(procs)
	sched.gcwaiting = 0
	if sched.sysmonwait != 0 {
		sched.sysmonwait = 0
		notewakeup(&sched.sysmonnote)
	}
	unlock(&sched.lock)

	for p1 != nil {
		p := p1
		p1 = p1.link.ptr()
		if p.m != 0 {
			mp := p.m.ptr()
			p.m = 0
			if mp.nextp != 0 {
				throw("startTheWorld: inconsistent mp->nextp")
			}
			mp.nextp.set(p)
			notewakeup(&mp.park)
		} else {
			// Start M to run P. Do not start another M below.
			newm(nil, p, -1)
		}
	}

	// Capture start-the-world time before doing clean-up tasks.
	startTime := nanotime()

	// Wakeup an additional proc in case we have excessive runnable goroutines
	// in local queues or in the global queue. If we don't, the proc will park itself.
	// If we have lots of excessive work, resetspinning will unpark additional procs as necessary.
	wakep()

	releasem(mp)

	return startTime
}
// First function run by a new goroutine.
// This is passed to makecontext.
func kickoff() {
	gp := getg()

	if gp.traceback != 0 {
		traceback(gp.traceback)
	}

	// When running on the g0 stack we can wind up here without a p,
	// for example from mcall(exitsyscall0) in exitsyscall, in
	// which case we can not run a write barrier.
	// It is also possible for us to get here from the systemstack
	// call in wbBufFlush, at which point the write barrier buffer
	// is full and we can not run a write barrier.
	// Setting gp.entry = nil or gp.param = nil will try to run a
	// write barrier, so if we are on the g0 stack due to mcall
	// (systemstack calls mcall) then clear the field using uintptr.
	// This is OK when gp.param is gp.m.curg, as curg will be kept
	// alive elsewhere, and gp.entry always points into g, or
	// to a statically allocated value, or (in the case of mcall)
	// to the stack.
	if gp == gp.m.g0 && gp.param == unsafe.Pointer(gp.m.curg) {
		*(*uintptr)(unsafe.Pointer(&gp.entry)) = 0
		*(*uintptr)(unsafe.Pointer(&gp.param)) = 0
	} else if gp.m.p == 0 {
		throw("no p in kickoff")
	} else {
		gp.entry = nil
		gp.param = nil
	}

	// Record the entry SP to help stack scan.
	gp.entrysp = getsp()
// The go:noinline is to guarantee the getcallerpc/getcallersp below are safe,
// so that we can set up g0.sched to return to the call of mstart1 above.
//go:noinline
func mstart1() {
	_g_ := getg()

	if _g_ != _g_.m.g0 {
		throw("bad runtime·mstart")
	}

	// Install signal handlers; after minit so that minit can
	// prepare the thread to be able to handle the signals.
	// For gccgo minit was called by C code.
	if _g_.m == &m0 {
		mstartm0()
	}

	if fn := _g_.m.mstartfn; fn != nil {
		fn()
	}

	if _g_.m != &m0 {
		acquirep(_g_.m.nextp.ptr())
	}
// mstartm0 implements part of mstart1 that only runs on the m0.
//
// Write barriers are allowed here because we know the GC can't be
// running yet, so they'll be no-ops.
//
//go:yeswritebarrierrec
func mstartm0() {
	// Create an extra M for callbacks on threads not created by Go.
	// An extra M is also needed on Windows for callbacks created by
	// syscall.NewCallback. See issue #6751 for details.
	if (iscgo || GOOS == "windows") && !cgoHasExtraM {
		cgoHasExtraM = true
		newextram()
	}
	initsig(false)
}
// mPark causes a thread to park itself, returning once woken.
func mPark() {
	gp := getg()
	notesleep(&gp.m.park)
	noteclear(&gp.m.park)
}
// mexit tears down and exits the current thread.
//
// Don't call this directly to exit the thread, since it must run at
// the top of the thread stack. Instead, use gogo(&_g_.m.g0.sched) to
// unwind the stack to the point that exits the thread.
//
// It is entered with m.p != nil, so write barriers are allowed. It
// will release the P before exiting.
//
//go:yeswritebarrierrec
func mexit(osStack bool) {
	g := getg()
	m := g.m

	if m == &m0 {
		// This is the main thread. Just wedge it.
		//
		// On Linux, exiting the main thread puts the process
		// into a non-waitable zombie state. On Plan 9,
		// exiting the main thread unblocks wait even though
		// other threads are still running. On Solaris we can
		// neither exitThread nor return from mstart. Other
		// bad things probably happen on other platforms.
		//
		// We could try to clean up this M more before wedging
		// it, but that complicates signal handling.
		handoffp(releasep())
		lock(&sched.lock)
		sched.nmfreed++
		checkdead()
		unlock(&sched.lock)
		mPark()
		throw("locked m0 woke up")
	}
	// Free the gsignal stack.
	if m.gsignal != nil {
		stackfree(m.gsignal)
		// On some platforms, when calling into VDSO (e.g. nanotime)
		// we store our g on the gsignal stack, if there is one.
		// Now the stack is freed, unlink it from the m, so we
		// won't write to it when calling VDSO code.
		m.gsignal = nil
	}
	// Remove m from allm.
	lock(&sched.lock)
	for pprev := &allm; *pprev != nil; pprev = &(*pprev).alllink {
		if *pprev == m {
			*pprev = m.alllink
			goto found
		}
	}
	throw("m not found in allm")
found:
	if !osStack {
		// Delay reaping m until it's done with the stack.
		//
		// If this is using an OS stack, the OS will free it
		// so there's no need for reaping.
		atomic.Store(&m.freeWait, 1)
		// Put m on the free list, though it will not be reaped until
		// freeWait is 0. Note that the free list must not be linked
		// through alllink because some functions walk allm without
		// locking, so may be using alllink.
		m.freelink = sched.freem
		sched.freem = m
	}
	unlock(&sched.lock)
	atomic.Xadd64(&ncgocall, int64(m.ncgocall))

	// Release the P.
	handoffp(releasep())
	// After this point we must not have write barriers.

	// Invoke the deadlock detector. This must happen after
	// handoffp because it may have started a new M to take our
	// place.
	lock(&sched.lock)
	sched.nmfreed++
	checkdead()
	unlock(&sched.lock)

	if GOOS == "darwin" || GOOS == "ios" {
		// Make sure pendingPreemptSignals is correct when an M exits.
		if atomic.Load(&m.signalPending) != 0 {
			atomic.Xadd(&pendingPreemptSignals, -1)
		}
	}

	// Destroy all allocated resources. After this is called, we may no
	// longer take any locks.
	mdestroy(m)

	if osStack {
		// Return from mstart and let the system thread
		// library free the g0 stack and terminate the thread.
		return
	}

	// mstart is the thread's entry point, so there's nothing to
	// return to. Exit the thread directly. exitThread will clear
	// m.freeWait when it's done with the stack and the m can be
	// reaped.
	exitThread(&m.freeWait)
}
// forEachP calls fn(p) for every P p when p reaches a GC safe point.
// If a P is currently executing code, this will bring the P to a GC
// safe point and execute fn on that P. If the P is not executing code
// (it is idle or in a syscall), this will call fn(p) directly while
// preventing the P from exiting its state. This does not ensure that
// fn will run on every CPU executing Go code, but it acts as a global
// memory barrier. GC uses this as a "ragged barrier."
//
// The caller must hold worldsema.
//
//go:systemstack
func forEachP(fn func(*p)) {
	mp := acquirem()
	_p_ := getg().m.p.ptr()

	lock(&sched.lock)
	if sched.safePointWait != 0 {
		throw("forEachP: sched.safePointWait != 0")
	}
	sched.safePointWait = gomaxprocs - 1
	sched.safePointFn = fn

	// Ask all Ps to run the safe point function.
	for _, p := range allp {
		if p != _p_ {
			atomic.Store(&p.runSafePointFn, 1)
		}
	}
	preemptall()

	// Any P entering _Pidle or _Psyscall from now on will observe
	// p.runSafePointFn == 1 and will call runSafePointFn when
	// changing its status to _Pidle/_Psyscall.

	// Run safe point function for all idle Ps. sched.pidle will
	// not change because we hold sched.lock.
	for p := sched.pidle.ptr(); p != nil; p = p.link.ptr() {
		if atomic.Cas(&p.runSafePointFn, 1, 0) {
			fn(p)
			sched.safePointWait--
		}
	}
	wait := sched.safePointWait > 0
	unlock(&sched.lock)

	// Run fn for the current P.
	fn(_p_)

	// Force Ps currently in _Psyscall into _Pidle and hand them
	// off to induce safe point function execution.
	for _, p := range allp {
		s := p.status
		if s == _Psyscall && p.runSafePointFn == 1 && atomic.Cas(&p.status, s, _Pidle) {
			if trace.enabled {
				traceGoSysBlock(p)
				traceProcStop(p)
			}
			p.syscalltick++
			handoffp(p)
		}
	}

	// Wait for remaining Ps to run fn.
	if wait {
		for {
			// Wait for 100us, then try to re-preempt in
			// case of any races.
			//
			// Requires system stack.
			if notetsleep(&sched.safePointNote, 100*1000) {
				noteclear(&sched.safePointNote)
				break
			}
			preemptall()
		}
	}
	if sched.safePointWait != 0 {
		throw("forEachP: not done")
	}
	for _, p := range allp {
		if p.runSafePointFn != 0 {
			throw("forEachP: P did not run fn")
		}
	}

	lock(&sched.lock)
	sched.safePointFn = nil
	unlock(&sched.lock)
	releasem(mp)
}
// runSafePointFn runs the safe point function, if any, for this P.
// This should be called like
//
//	if getg().m.p.runSafePointFn != 0 {
//		runSafePointFn()
//	}
//
// runSafePointFn must be checked on any transition in to _Pidle or
// _Psyscall to avoid a race where forEachP sees that the P is running
// just before the P goes into _Pidle/_Psyscall and neither forEachP
// nor the P run the safe-point function.
func runSafePointFn() {
	p := getg().m.p.ptr()
	// Resolve the race between forEachP running the safe-point
	// function on this P's behalf and this P running the
	// safe-point function directly.
	if !atomic.Cas(&p.runSafePointFn, 1, 0) {
		return
	}
	sched.safePointFn(p)
	lock(&sched.lock)
	sched.safePointWait--
	if sched.safePointWait == 0 {
		notewakeup(&sched.safePointNote)
	}
	unlock(&sched.lock)
}
// Allocate a new m unassociated with any thread.
// Can use p for allocation context if needed.
// fn is recorded as the new m's m.mstartfn.
// id is optional pre-allocated m ID. Omit by passing -1.
//
// This function is allowed to have write barriers even if the caller
// isn't because it borrows _p_.
//
//go:yeswritebarrierrec
func allocm(_p_ *p, fn func(), id int64, allocatestack bool) (mp *m, g0Stack unsafe.Pointer, g0StackSize uintptr) {
	allocmLock.rlock()

	// The caller owns _p_, but we may borrow (i.e., acquirep) it. We must
	// disable preemption to ensure it is not stolen, which would make the
	// caller lose ownership.
	acquirem()

	_g_ := getg()
	if _g_.m.p == 0 {
		acquirep(_p_) // temporarily borrow p for mallocs in this function
	}

	// Release the free M list. We need to do this somewhere and
	// this may free up a stack we can use.
	if sched.freem != nil {
		lock(&sched.lock)
		var newList *m
		for freem := sched.freem; freem != nil; {
			if freem.freeWait != 0 {
				next := freem.freelink
				freem.freelink = newList
				newList = freem
				freem = next
				continue
			}
			// stackfree must be on the system stack, but allocm is
			// reachable off the system stack transitively from
			// startm.
			systemstack(func() {
				stackfree(freem.g0)
			})
			freem = freem.freelink
		}
		sched.freem = newList
		unlock(&sched.lock)
	}

	mp = new(m)
	mp.mstartfn = fn
	mcommoninit(mp, id)

	mp.g0 = malg(allocatestack, false, &g0Stack, &g0StackSize)
	mp.g0.m = mp

	if _p_ == _g_.m.p.ptr() {
		releasep()
	}

	releasem(_g_.m)
	allocmLock.runlock()
	return mp, g0Stack, g0StackSize
}
// needm is called when a cgo callback happens on a
// thread without an m (a thread not created by Go).
// In this case, needm is expected to find an m to use
// and return with m, g initialized correctly.
// Since m and g are not set now (likely nil, but see below)
// needm is limited in what routines it can call. In particular
// it can only call nosplit functions (textflag 7) and cannot
// do any scheduling that requires an m.
//
// In order to avoid needing heavy lifting here, we adopt
// the following strategy: there is a stack of available m's
// that can be stolen. Using compare-and-swap
// to pop from the stack has ABA races, so we simulate
// a lock by doing an exchange (via Casuintptr) to steal the stack
// head and replace the top pointer with MLOCKED (1).
// This serves as a simple spin lock that we can use even
// without an m. The thread that locks the stack in this way
// unlocks the stack by storing a valid stack head pointer.
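//
// Sketched (illustrative only; lockextra and unlockextra later in this file
// are the real implementation; locked stands for MLOCKED, i.e. 1, and
// newHead is a stand-in for the updated list head):
//
//	for {
//		old := atomic.Loaduintptr(&extram)
//		if old == locked {
//			continue // someone else holds the list; spin
//		}
//		if atomic.Casuintptr(&extram, old, locked) {
//			break // we now own the list headed at old
//		}
//	}
//	// ... take an m from the list ...
//	atomic.Storeuintptr(&extram, newHead) // storing a valid head unlocks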
//
// In order to make sure that there is always an m structure
// available to be stolen, we maintain the invariant that there
// is always one more than needed. At the beginning of the
// program (if cgo is in use) the list is seeded with a single m.
// If needm finds that it has taken the last m off the list, its job
// is - once it has installed its own m so that it can do things like
// allocate memory - to create a spare m and put it on the list.
//
// Each of these extra m's also has a g0 and a curg that are
// pressed into service as the scheduling stack and current
// goroutine for the duration of the cgo callback.
//
// When the callback is done with the m, it calls dropm to
// put the m back on the list.
func needm() {
	if (iscgo || GOOS == "windows") && !cgoHasExtraM {
		// Can happen if C/C++ code calls Go from a global ctor.
		// Can also happen on Windows if a global ctor uses a
		// callback created by syscall.NewCallback. See issue #6751
		// for details.
		//
		// Can not throw, because scheduler is not initialized yet.
		write(2, unsafe.Pointer(&earlycgocallback[0]), int32(len(earlycgocallback)))
		exit(1)
	}

	// Save and block signals before getting an M.
	// The signal handler may call needm itself,
	// and we must avoid a deadlock. Also, once g is installed,
	// any incoming signals will try to execute,
	// but we won't have the sigaltstack settings and other data
	// set up appropriately until the end of minit, which will
	// unblock the signals. This is the same dance as when
	// starting a new m to run Go code via newosproc.
	var sigmask sigset
	sigsave(&sigmask)
	sigblock(false)
	// Lock extra list, take head, unlock popped list.
	// nilokay=false is safe here because of the invariant above,
	// that the extra list always contains or will soon contain
	// at least one m.
	mp := lockextra(false)

	// Set needextram when we've just emptied the list,
	// so that the eventual call into cgocallbackg will
	// allocate a new m for the extra list. We delay the
	// allocation until then so that it can be done
	// after exitsyscall makes sure it is okay to be
	// running at all (that is, there's no garbage collection
	// running right now).
	mp.needextram = mp.schedlink == 0
	extraMCount--
	unlockextra(mp.schedlink.ptr())

	// Store the original signal mask for use by minit.
	mp.sigmask = sigmask

	// Install TLS on some platforms (previously setg
	// would do this if necessary).
	osSetupTLS(mp)

	// Install g (= m->curg).
	setg(mp.curg)

	// Initialize this thread to use the m.
	minit()

	// mp.curg is now a real goroutine.
	casgstatus(mp.curg, _Gdead, _Gsyscall)
	atomic.Xadd(&sched.ngsys, -1)
}
var earlycgocallback = []byte("fatal error: cgo callback before cgo call\n")
// newextram allocates m's and puts them on the extra list.
// It is called with a working local m, so that it can do things
// like call schedlock and allocate.
func newextram() {
	c := atomic.Xchg(&extraMWaiters, 0)
	if c > 0 {
		for i := uint32(0); i < c; i++ {
			oneNewExtraM()
		}
	} else {
		// Make sure there is at least one extra M.
		mp := lockextra(true)
		unlockextra(mp)
		if mp == nil {
			oneNewExtraM()
		}
	}
}
// oneNewExtraM allocates an m and puts it on the extra list.
func oneNewExtraM() {
	// Create extra goroutine locked to extra m.
	// The goroutine is the context in which the cgo callback will run.
	// The sched.pc will never be returned to, but setting it to
	// goexit makes clear to the traceback routines where
	// the goroutine stack ends.
	mp, g0SP, g0SPSize := allocm(nil, nil, -1, true)
	gp := malg(true, false, nil, nil)
	// malg returns status as _Gidle. Change to _Gdead before
	// adding to allg where GC can see it. We use _Gdead to hide
	// this from tracebacks and stack scans since it isn't a
	// "real" goroutine until needm grabs it.
	casgstatus(gp, _Gidle, _Gdead)
	gp.m = mp
	mp.curg = gp
	mp.lockedg.set(gp)
	gp.lockedm.set(mp)
	gp.goid = int64(atomic.Xadd64(&sched.goidgen, 1))
	// put on allg for garbage collector
	allgadd(gp)

	// The context for gp will be set up in needm.
	// Here we need to set the context for g0.
	makeGContext(mp.g0, g0SP, g0SPSize)

	// gp is now on the allg list, but we don't want it to be
	// counted by gcount. It would be more "proper" to increment
	// sched.ngfree, but that requires locking. Incrementing ngsys
	// has the same effect.
	atomic.Xadd(&sched.ngsys, +1)

	// Add m to the extra list.
	mnext := lockextra(true)
	mp.schedlink.set(mnext)
	extraMCount++
	unlockextra(mp)
}
// dropm is called when a cgo callback has called needm but is now
// done with the callback and returning back into the non-Go thread.
// It puts the current m back onto the extra list.
//
// The main expense here is the call to signalstack to release the
// m's signal stack, and then the call to needm on the next callback
// from this thread. It is tempting to try to save the m for next time,
// which would eliminate both these costs, but there might not be
// a next time: the current thread (which Go does not control) might exit.
// If we saved the m for that thread, there would be an m leak each time
// such a thread exited. Instead, we acquire and release an m on each
// call. These should typically not be scheduling operations, just a few
// atomics, so the cost should be small.
//
// TODO(rsc): An alternative would be to allocate a dummy pthread per-thread
// variable using pthread_key_create. Unlike the pthread keys we already use
// on OS X, this dummy key would never be read by Go code. It would exist
// only so that we could register a thread-exit-time destructor.
// That destructor would put the m back onto the extra list.
// This is purely a performance optimization. The current version,
// in which dropm happens on each cgo call, is still correct too.
// We may have to keep the current version on systems with cgo
// but without pthreads, like Windows.
//
// CgocallBackDone calls this after releasing p, so no write barriers.
//go:nowritebarrierrec
func dropm() {
	// Clear m and g, and return m to the extra list.
	// After the call to setg we can only call nosplit functions
	// with no pointer manipulation.
	mp := getg().m

	// Return mp.curg to dead state.
	casgstatus(mp.curg, _Gsyscall, _Gdead)
	mp.curg.preemptStop = false
	atomic.Xadd(&sched.ngsys, +1)

	// Block signals before unminit.
	// Unminit unregisters the signal handling stack (but needs g on some systems).
	// Setg(nil) clears g, which is the signal handler's cue not to run Go handlers.
	// It's important not to try to handle a signal between those two steps.
	sigmask := mp.sigmask
	sigblock(false)
	unminit()

	// gccgo sets the stack to Gdead here, because the splitstack
	// context is not initialized.
	atomic.Store(&mp.curg.atomicstatus, _Gdead)
	mp.curg.gcnextsp = 0

	mnext := lockextra(true)
	extraMCount++
	mp.schedlink.set(mnext)

	setg(nil)

	// Commit the release of mp.
	unlockextra(mp)

	msigrestore(sigmask)
}
// A helper function for EnsureDropM.
func getm() uintptr {
	return uintptr(unsafe.Pointer(getg().m))
}
var extram uintptr
var extraMCount uint32 // Protected by lockextra
var extraMWaiters uint32
// lockextra locks the extra list and returns the list head.
// The caller must unlock the list by storing a new list head
// to extram. If nilokay is true, then lockextra will
// return a nil list head if that's what it finds. If nilokay is false,
// lockextra will keep waiting until the list head is no longer nil.
//go:nowritebarrierrec
func lockextra(nilokay bool) *m {
	const locked = 1

	incr := false
	for {
		old := atomic.Loaduintptr(&extram)
		if old == locked {
			osyield()
			continue
		}
		if old == 0 && !nilokay {
			if !incr {
				// Add 1 to the number of threads
				// waiting for an M.
				// This is cleared by newextram.
				atomic.Xadd(&extraMWaiters, 1)
				incr = true
			}
			usleep(1)
			continue
		}
		if atomic.Casuintptr(&extram, old, locked) {
			return (*m)(unsafe.Pointer(old))
		}
		osyield()
		continue
	}
}
//go:nowritebarrierrec
func unlockextra(mp *m) {
	atomic.Storeuintptr(&extram, uintptr(unsafe.Pointer(mp)))
}
// allocmLock is locked for read when creating new Ms in allocm and their
// addition to allm. Thus acquiring this lock for write blocks the
// creation of new Ms.
var allocmLock rwmutex

// execLock serializes exec and clone to avoid bugs or unspecified
// behaviour around exec'ing while creating/destroying threads. See
// issue #19546.
var execLock rwmutex
// newmHandoff contains a list of m structures that need new OS threads.
// This is used by newm in situations where newm itself can't safely
// start an OS thread.
var newmHandoff struct {
	lock mutex

	// newm points to a list of M structures that need new OS
	// threads. The list is linked through m.schedlink.
	newm muintptr

	// waiting indicates that wake needs to be notified when an m
	// is put on the list.
	waiting bool
	wake    note

	// haveTemplateThread indicates that the templateThread has
	// been started. This is not protected by lock. Use cas to set
	// to 1.
	haveTemplateThread uint32
}
// Create a new m. It will start off with a call to fn, or else the scheduler.
// fn needs to be static and not a heap allocated closure.
// May run with m.p==nil, so write barriers are not allowed.
//
// id is optional pre-allocated m ID. Omit by passing -1.
//
//go:nowritebarrierrec
func newm(fn func(), _p_ *p, id int64) {
	// allocm adds a new M to allm, but they do not start until created by
	// the OS in newm1 or the template thread.
	//
	// doAllThreadsSyscall requires that every M in allm will eventually
	// start and be signal-able, even with a STW.
	//
	// Disable preemption here until we start the thread to ensure that
	// newm is not preempted between allocm and starting the new thread,
	// ensuring that anything added to allm is guaranteed to eventually
	// start.
	acquirem()

	mp, _, _ := allocm(_p_, fn, id, false)
	mp.nextp.set(_p_)
	mp.sigmask = initSigmask
	if gp := getg(); gp != nil && gp.m != nil && (gp.m.lockedExt != 0 || gp.m.incgo) && GOOS != "plan9" {
		// We're on a locked M or a thread that may have been
		// started by C. The kernel state of this thread may
		// be strange (the user may have locked it for that
		// purpose). We don't want to clone that into another
		// thread. Instead, ask a known-good thread to create
		// the thread for us.
		//
		// This is disabled on Plan 9. See golang.org/issue/22227.
		//
		// TODO: This may be unnecessary on Windows, which
		// doesn't model thread creation off fork.
		lock(&newmHandoff.lock)
		if newmHandoff.haveTemplateThread == 0 {
			throw("on a locked thread with no template thread")
		}
		mp.schedlink = newmHandoff.newm
		newmHandoff.newm.set(mp)
		if newmHandoff.waiting {
			newmHandoff.waiting = false
			notewakeup(&newmHandoff.wake)
		}
		unlock(&newmHandoff.lock)
		// The M has not started yet, but the template thread does not
		// participate in STW, so it will always process queued Ms and
		// it is safe to releasem.
		releasem(getg().m)
		return
	}
	newm1(mp)
	releasem(getg().m)
}
func newm1(mp *m) {
	execLock.rlock() // Prevent process clone.
	newosproc(mp)
	execLock.runlock()
}
// startTemplateThread starts the template thread if it is not already
// running.
//
// The calling thread must itself be in a known-good state.
func startTemplateThread() {
	if GOARCH == "wasm" { // no threads on wasm yet
		return
	}

	// Disable preemption to guarantee that the template thread will be
	// created before a park once haveTemplateThread is set.
	mp := acquirem()
	if !atomic.Cas(&newmHandoff.haveTemplateThread, 0, 1) {
		releasem(mp)
		return
	}
	newm(templateThread, nil, -1)
	releasem(mp)
}
// templateThread is a thread in a known-good state that exists solely
// to start new threads in known-good states when the calling thread
// may not be in a good state.
//
// Many programs never need this, so templateThread is started lazily
// when we first enter a state that might lead to running on a thread
// in an unknown state.
//
// templateThread runs on an M without a P, so it must not have write
// barriers.
//
//go:nowritebarrierrec
func templateThread() {
	lock(&sched.lock)
	sched.nmsys++
	checkdead()
	unlock(&sched.lock)

	for {
		lock(&newmHandoff.lock)
		for newmHandoff.newm != 0 {
			newm := newmHandoff.newm.ptr()
			newmHandoff.newm = 0
			unlock(&newmHandoff.lock)
			for newm != nil {
				next := newm.schedlink.ptr()
				newm.schedlink = 0
				newm1(newm)
				newm = next
			}
			lock(&newmHandoff.lock)
		}
		newmHandoff.waiting = true
		noteclear(&newmHandoff.wake)
		unlock(&newmHandoff.lock)
		notesleep(&newmHandoff.wake)
	}
}
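// The wait/wake protocol above is the runtime's standard note pattern:
// the sleeper publishes waiting=true and clears the note before
// sleeping, so a producer that enqueues work and observes waiting=true
// can call notewakeup without ever losing a wakeup. As a simplified
// sketch of the same shape (hypothetical queue type, not runtime code):
//
//	func consumerLoop(q *workQueue) {
//		for {
//			lock(&q.lock)
//			for q.head != nil {
//				w := q.head
//				q.head = nil
//				unlock(&q.lock)
//				process(w)
//				lock(&q.lock)
//			}
//			q.waiting = true
//			noteclear(&q.wake)
//			unlock(&q.lock)
//			notesleep(&q.wake)
//		}
//	}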
// Stops execution of the current m until new work is available.
// Returns with acquired P.
func stopm() {
	_g_ := getg()

	if _g_.m.locks != 0 {
		throw("stopm holding locks")
	}
	if _g_.m.p != 0 {
		throw("stopm holding p")
	}
	if _g_.m.spinning {
		throw("stopm spinning")
	}

	lock(&sched.lock)
	mput(_g_.m)
	unlock(&sched.lock)
	mPark()
	acquirep(_g_.m.nextp.ptr())
	_g_.m.nextp = 0
}

func mspinning() {
	// startm's caller incremented nmspinning. Set the new M's spinning.
	getg().m.spinning = true
}
// Schedules some M to run the p (creates an M if necessary).
// If p==nil, tries to get an idle P, if no idle P's does nothing.
// May run with m.p==nil, so write barriers are not allowed.
// If spinning is set, the caller has incremented nmspinning and startm will
// either decrement nmspinning or set m.spinning in the newly started M.
//
// Callers passing a non-nil P must call from a non-preemptible context. See
// comment on acquirem below.
//
// Must not have write barriers because this may be called without a P.
//go:nowritebarrierrec
func startm(_p_ *p, spinning bool) {
	// Disable preemption.
	//
	// Every owned P must have an owner that will eventually stop it in the
	// event of a GC stop request. startm takes transient ownership of a P
	// (either from argument or pidleget below) and transfers ownership to
	// a started M, which will be responsible for performing the stop.
	//
	// Preemption must be disabled during this transient ownership,
	// otherwise the P this is running on may enter GC stop while still
	// holding the transient P, leaving that P in limbo and deadlocking the
	// stop.
	//
	// Callers passing a non-nil P must already be in non-preemptible
	// context, otherwise such preemption could occur on function entry to
	// startm. Callers passing a nil P may be preemptible, so we must
	// disable preemption before acquiring a P from pidleget below.
	mp := acquirem()
	lock(&sched.lock)
	if _p_ == nil {
		_p_ = pidleget()
		if _p_ == nil {
			unlock(&sched.lock)
			if spinning {
				// The caller incremented nmspinning, but there are no idle Ps,
				// so it's okay to just undo the increment and give up.
				if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
					throw("startm: negative nmspinning")
				}
			}
			releasem(mp)
			return
		}
	}
	nmp := mget()
	if nmp == nil {
		// No M is available, we must drop sched.lock and call newm.
		// However, we already own a P to assign to the M.
		//
		// Once sched.lock is released, another G (e.g., in a syscall),
		// could find no idle P while checkdead finds a runnable G but
		// no running M's because this new M hasn't started yet, thus
		// throwing in an apparent deadlock.
		//
		// Avoid this situation by pre-allocating the ID for the new M,
		// thus marking it as 'running' before we drop sched.lock. This
		// new M will eventually run the scheduler to execute any
		// queued G's.
		id := mReserveID()
		unlock(&sched.lock)

		var fn func()
		if spinning {
			// The caller incremented nmspinning, so set m.spinning in the new M.
			fn = mspinning
		}
		newm(fn, _p_, id)
		// Ownership transfer of _p_ committed by start in newm.
		// Preemption is now safe.
		releasem(mp)
		return
	}
	unlock(&sched.lock)
	if nmp.spinning {
		throw("startm: m is spinning")
	}
	if nmp.nextp != 0 {
		throw("startm: m has p")
	}
	if spinning && !runqempty(_p_) {
		throw("startm: p has runnable gs")
	}
	// The caller incremented nmspinning, so set m.spinning in the new M.
	nmp.spinning = spinning
	nmp.nextp.set(_p_)
	notewakeup(&nmp.park)
	// Ownership transfer of _p_ committed by wakeup. Preemption is now
	// safe.
	releasem(mp)
}
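// The nmspinning handshake above, reduced to its essentials: a caller
// that wants a spinning M pre-increments the counter, and startm
// consumes that increment exactly once, either by tagging the started
// M as spinning or by undoing the increment when it gives up. A sketch
// of the caller's side (illustrative only, not runtime code):
//
//	atomic.Xadd(&sched.nmspinning, 1) // caller's half of the contract
//	startm(nil, true)                 // sets m.spinning or decrements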
// Hands off P from syscall or locked M.
// Always runs without a P, so write barriers are not allowed.
//go:nowritebarrierrec
func handoffp(_p_ *p) {
	// handoffp must start an M in any situation where
	// findrunnable would return a G to run on _p_.

	// if it has local work, start it straight away
	if !runqempty(_p_) || sched.runqsize != 0 {
		startm(_p_, false)
		return
	}
	// if it has GC work, start it straight away
	if gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {
		startm(_p_, false)
		return
	}
	// no local work, check that there are no spinning/idle M's,
	// otherwise our help is not required
	if atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) == 0 && atomic.Cas(&sched.nmspinning, 0, 1) { // TODO: fast atomic
		startm(_p_, true)
		return
	}
	lock(&sched.lock)
	if sched.gcwaiting != 0 {
		_p_.status = _Pgcstop
		sched.stopwait--
		if sched.stopwait == 0 {
			notewakeup(&sched.stopnote)
		}
		unlock(&sched.lock)
		return
	}
	if _p_.runSafePointFn != 0 && atomic.Cas(&_p_.runSafePointFn, 1, 0) {
		sched.safePointFn(_p_)
		sched.safePointWait--
		if sched.safePointWait == 0 {
			notewakeup(&sched.safePointNote)
		}
	}
	if sched.runqsize != 0 {
		unlock(&sched.lock)
		startm(_p_, false)
		return
	}
	// If this is the last running P and nobody is polling network,
	// need to wakeup another M to poll network.
	if sched.npidle == uint32(gomaxprocs-1) && atomic.Load64(&sched.lastpoll) != 0 {
		unlock(&sched.lock)
		startm(_p_, false)
		return
	}

	// The scheduler lock cannot be held when calling wakeNetPoller below
	// because wakeNetPoller may call wakep which may call startm.
	when := nobarrierWakeTime(_p_)
	pidleput(_p_)
	unlock(&sched.lock)

	if when != 0 {
		wakeNetPoller(when)
	}
}
// Tries to add one more P to execute G's.
// Called when a G is made runnable (newproc, ready).
func wakep() {
	if atomic.Load(&sched.npidle) == 0 {
		return
	}
	// be conservative about spinning threads
	if atomic.Load(&sched.nmspinning) != 0 || !atomic.Cas(&sched.nmspinning, 0, 1) {
		return
	}
	startm(nil, true)
}
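// The CAS above keeps wakep cheap enough to call on every ready():
// when many goroutines become runnable at once, only the caller that
// wins Cas(&sched.nmspinning, 0, 1) starts an M; the others return
// immediately, and the spinning M that was started unparks further
// workers itself if it keeps finding work (see "Worker thread
// parking/unparking" at the top of the file).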
// Stops execution of the current m that is locked to a g until the g is runnable again.
// Returns with acquired P.
func stoplockedm() {
	_g_ := getg()

	if _g_.m.lockedg == 0 || _g_.m.lockedg.ptr().lockedm.ptr() != _g_.m {
		throw("stoplockedm: inconsistent locking")
	}
	if _g_.m.p != 0 {
		// Schedule another M to run this p.
		_p_ := releasep()
		handoffp(_p_)
	}
	incidlelocked(1)
	// Wait until another thread schedules lockedg again.
	mPark()
	status := readgstatus(_g_.m.lockedg.ptr())
	if status&^_Gscan != _Grunnable {
		print("runtime:stoplockedm: lockedg (atomicstatus=", status, ") is not Grunnable or Gscanrunnable\n")
		dumpgstatus(_g_.m.lockedg.ptr())
		throw("stoplockedm: not runnable")
	}
	acquirep(_g_.m.nextp.ptr())
	_g_.m.nextp = 0
}
// Schedules the locked m to run the locked gp.
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func startlockedm(gp *g) {
	_g_ := getg()

	mp := gp.lockedm.ptr()
	if mp == _g_.m {
		throw("startlockedm: locked to me")
	}
	if mp.nextp != 0 {
		throw("startlockedm: m has p")
	}
	// directly handoff current P to the locked m
	incidlelocked(-1)
	_p_ := releasep()
	mp.nextp.set(_p_)
	notewakeup(&mp.park)
	stopm()
}
// Stops the current m for stopTheWorld.
// Returns when the world is restarted.
func gcstopm() {
	_g_ := getg()

	if sched.gcwaiting == 0 {
		throw("gcstopm: not waiting for gc")
	}
	if _g_.m.spinning {
		_g_.m.spinning = false
		// OK to just drop nmspinning here,
		// startTheWorld will unpark threads as necessary.
		if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
			throw("gcstopm: negative nmspinning")
		}
	}
	_p_ := releasep()
	lock(&sched.lock)
	_p_.status = _Pgcstop
	sched.stopwait--
	if sched.stopwait == 0 {
		notewakeup(&sched.stopnote)
	}
	unlock(&sched.lock)
	stopm()
}
// Schedules gp to run on the current M.
// If inheritTime is true, gp inherits the remaining time in the
// current time slice. Otherwise, it starts a new time slice.
// Never returns.
//
// Write barriers are allowed because this is called immediately after
// acquiring a P in several places.
//
//go:yeswritebarrierrec
func execute(gp *g, inheritTime bool) {
	_g_ := getg()

	// Assign gp.m before entering _Grunning so running Gs have an
	// M.
	_g_.m.curg = gp
	gp.m = _g_.m
	casgstatus(gp, _Grunnable, _Grunning)
	gp.waitsince = 0
	gp.preempt = false
	if !inheritTime {
		_g_.m.p.ptr().schedtick++
	}

	// Check whether the profiler needs to be turned on or off.
	hz := sched.profilehz
	if _g_.m.profilehz != hz {
		setThreadCPUProfiler(hz)
	}

	if trace.enabled {
		// GoSysExit has to happen when we have a P, but before GoStart.
		// So we emit it here.
		if gp.syscallsp != 0 && gp.sysblocktraced {
			traceGoSysExit(gp.sysexitticks)
		}
		traceGoStart()
	}

	gogo(gp)
}
// Finds a runnable goroutine to execute.
// Tries to steal from other P's, get g from local or global queue, poll network.
func findrunnable() (gp *g, inheritTime bool) {
	_g_ := getg()

	// The conditions here and in handoffp must agree: if
	// findrunnable would return a G to run, handoffp must start
	// an M.

top:
	_p_ := _g_.m.p.ptr()
	if sched.gcwaiting != 0 {
		gcstopm()
		goto top
	}
	if _p_.runSafePointFn != 0 {
		runSafePointFn()
	}

	now, pollUntil, _ := checkTimers(_p_, 0)

	if fingwait && fingwake {
		if gp := wakefing(); gp != nil {
			ready(gp, 0, true)
		}
	}
	if *cgo_yield != nil {
		asmcgocall(*cgo_yield, nil)
	}

	// local runq
	if gp, inheritTime := runqget(_p_); gp != nil {
		return gp, inheritTime
	}

	// global runq
	if sched.runqsize != 0 {
		lock(&sched.lock)
		gp := globrunqget(_p_, 0)
		unlock(&sched.lock)
		if gp != nil {
			return gp, false
		}
	}

	// Poll network.
	// This netpoll is only an optimization before we resort to stealing.
	// We can safely skip it if there are no waiters or a thread is blocked
	// in netpoll already. If there is any kind of logical race with that
	// blocked thread (e.g. it has already returned from netpoll, but does
	// not set lastpoll yet), this thread will do blocking netpoll below
	// anyway.
	if netpollinited() && atomic.Load(&netpollWaiters) > 0 && atomic.Load64(&sched.lastpoll) != 0 {
		if list := netpoll(0); !list.empty() { // non-blocking
			gp := list.pop()
			injectglist(&list)
			casgstatus(gp, _Gwaiting, _Grunnable)
			if trace.enabled {
				traceGoUnpark(gp, 0)
			}
			return gp, false
		}
	}
	// Spinning Ms: steal work from other Ps.
	//
	// Limit the number of spinning Ms to half the number of busy Ps.
	// This is necessary to prevent excessive CPU consumption when
	// GOMAXPROCS>>1 but the program parallelism is low.
	procs := uint32(gomaxprocs)
	if _g_.m.spinning || 2*atomic.Load(&sched.nmspinning) < procs-atomic.Load(&sched.npidle) {
		if !_g_.m.spinning {
			_g_.m.spinning = true
			atomic.Xadd(&sched.nmspinning, 1)
		}

		gp, inheritTime, tnow, w, newWork := stealWork(now)
		now = tnow
		if gp != nil {
			// Successfully stole.
			return gp, inheritTime
		}
		if newWork {
			// There may be new timer or GC work; restart to
			// discover.
			goto top
		}
		if w != 0 && (pollUntil == 0 || w < pollUntil) {
			// Earlier timer to wait for.
			pollUntil = w
		}
	}
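	// A worked example of the spinning limit above: with GOMAXPROCS=8
	// and npidle=2 there are 6 busy Ps, so the test 2*nmspinning < 8-2
	// admits new spinners only while nmspinning <= 2, capping the pool
	// at 3 spinning Ms, i.e. half of the busy Ps.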
	// We have nothing to do.
	//
	// If we're in the GC mark phase, can safely scan and blacken objects,
	// and have work to do, run idle-time marking rather than give up the
	// P.
	if gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {
		node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
		if node != nil {
			_p_.gcMarkWorkerMode = gcMarkWorkerIdleMode
			gp := node.gp.ptr()
			casgstatus(gp, _Gwaiting, _Grunnable)
			if trace.enabled {
				traceGoUnpark(gp, 0)
			}
			return gp, false
		}
	}

	// If a callback returned and no other goroutine is awake,
	// then wake event handler goroutine which pauses execution
	// until a callback was triggered.
	gp, otherReady := beforeIdle(now, pollUntil)
	if gp != nil {
		casgstatus(gp, _Gwaiting, _Grunnable)
		if trace.enabled {
			traceGoUnpark(gp, 0)
		}
		return gp, false
	}
	if otherReady {
		goto top
	}
	// Before we drop our P, make a snapshot of the allp slice,
	// which can change underfoot once we no longer block
	// safe-points. We don't need to snapshot the contents because
	// everything up to cap(allp) is immutable.
	allpSnapshot := allp
	// Also snapshot masks. Value changes are OK, but we can't allow
	// len to change out from under us.
	idlepMaskSnapshot := idlepMask
	timerpMaskSnapshot := timerpMask

	// return P and block
	lock(&sched.lock)
	if sched.gcwaiting != 0 || _p_.runSafePointFn != 0 {
		unlock(&sched.lock)
		goto top
	}
	if sched.runqsize != 0 {
		gp := globrunqget(_p_, 0)
		unlock(&sched.lock)
		return gp, false
	}
	if releasep() != _p_ {
		throw("findrunnable: wrong p")
	}
	pidleput(_p_)
	unlock(&sched.lock)
	// Delicate dance: thread transitions from spinning to non-spinning
	// state, potentially concurrently with submission of new work. We must
	// drop nmspinning first and then check all sources again (with
	// #StoreLoad memory barrier in between). If we do it the other way
	// around, another thread can submit work after we've checked all
	// sources but before we drop nmspinning; as a result nobody will
	// unpark a thread to run the work.
	//
	// This applies to the following sources of work:
	//
	// * Goroutines added to a per-P run queue.
	// * New/modified-earlier timers on a per-P timer heap.
	// * Idle-priority GC work (barring golang.org/issue/19112).
	//
	// If we discover new work below, we need to restore m.spinning as a signal
	// for resetspinning to unpark a new worker thread (because there can be more
	// than one starving goroutine). However, if after discovering new work
	// we also observe no idle Ps it is OK to skip unparking a new worker
	// thread: the system is fully loaded so no spinning threads are required.
	// Also see "Worker thread parking/unparking" comment at the top of the file.
	wasSpinning := _g_.m.spinning
	if _g_.m.spinning {
		_g_.m.spinning = false
		if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
			throw("findrunnable: negative nmspinning")
		}
		// Note that for correctness, only the last M transitioning from
		// spinning to non-spinning must perform these rechecks to
		// ensure no missed work. We are performing it on every M that
		// transitions as a conservative change to monitor effects on
		// latency. See golang.org/issue/43997.

		// Check all runqueues once again.
		_p_ = checkRunqsNoP(allpSnapshot, idlepMaskSnapshot)
		if _p_ != nil {
			acquirep(_p_)
			_g_.m.spinning = true
			atomic.Xadd(&sched.nmspinning, 1)
			goto top
		}

		// Check for idle-priority GC work again.
		_p_, gp = checkIdleGCNoP()
		if _p_ != nil {
			acquirep(_p_)
			_g_.m.spinning = true
			atomic.Xadd(&sched.nmspinning, 1)

			// Run the idle worker.
			_p_.gcMarkWorkerMode = gcMarkWorkerIdleMode
			casgstatus(gp, _Gwaiting, _Grunnable)
			if trace.enabled {
				traceGoUnpark(gp, 0)
			}
			return gp, false
		}
		// Finally, check for timer creation or expiry concurrently with
		// transitioning from spinning to non-spinning.
		//
		// Note that we cannot use checkTimers here because it calls
		// adjusttimers which may need to allocate memory, and that isn't
		// allowed when we don't have an active P.
		pollUntil = checkTimersNoP(allpSnapshot, timerpMaskSnapshot, pollUntil)
	}

	// Poll network until next timer.
	if netpollinited() && (atomic.Load(&netpollWaiters) > 0 || pollUntil != 0) && atomic.Xchg64(&sched.lastpoll, 0) != 0 {
		atomic.Store64(&sched.pollUntil, uint64(pollUntil))
		if _g_.m.p != 0 {
			throw("findrunnable: netpoll with p")
		}
		if _g_.m.spinning {
			throw("findrunnable: netpoll with spinning")
		}
		delay := int64(-1)
		if pollUntil != 0 {
			if now == 0 {
				now = nanotime()
			}
			delay = pollUntil - now
			if delay < 0 {
				delay = 0
			}
		}
		if faketime != 0 {
			// When using fake time, just poll.
			delay = 0
		}
		list := netpoll(delay) // block until new work is available
		atomic.Store64(&sched.pollUntil, 0)
		atomic.Store64(&sched.lastpoll, uint64(nanotime()))
		if faketime != 0 && list.empty() {
			// Using fake time and nothing is ready; stop M.
			// When all M's stop, checkdead will call timejump.
			stopm()
			goto top
		}
		lock(&sched.lock)
		_p_ = pidleget()
		unlock(&sched.lock)
		if _p_ == nil {
			injectglist(&list)
		} else {
			acquirep(_p_)
			if !list.empty() {
				gp := list.pop()
				injectglist(&list)
				casgstatus(gp, _Gwaiting, _Grunnable)
				if trace.enabled {
					traceGoUnpark(gp, 0)
				}
				return gp, false
			}
			if wasSpinning {
				_g_.m.spinning = true
				atomic.Xadd(&sched.nmspinning, 1)
			}
			goto top
		}
	} else if pollUntil != 0 && netpollinited() {
		pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
		if pollerPollUntil == 0 || pollerPollUntil > pollUntil {
			netpollBreak()
		}
	}
	stopm()
	goto top
}
// pollWork reports whether there is non-background work this P could
// be doing. This is a fairly lightweight check to be used for
// background work loops, like idle GC. It checks a subset of the
// conditions checked by the actual scheduler.
func pollWork() bool {
	if sched.runqsize != 0 {
		return true
	}
	p := getg().m.p.ptr()
	if !runqempty(p) {
		return true
	}
	if netpollinited() && atomic.Load(&netpollWaiters) > 0 && sched.lastpoll != 0 {
		if list := netpoll(0); !list.empty() {
			injectglist(&list)
			return true
		}
	}
	return false
}
// stealWork attempts to steal a runnable goroutine or timer from any P.
//
// If newWork is true, new work may have been readied.
//
// If now is not 0 it is the current time. stealWork returns the passed time or
// the current time if now was passed as 0.
func stealWork(now int64) (gp *g, inheritTime bool, rnow, pollUntil int64, newWork bool) {
	pp := getg().m.p.ptr()

	ranTimer := false

	const stealTries = 4
	for i := 0; i < stealTries; i++ {
		stealTimersOrRunNextG := i == stealTries-1

		for enum := stealOrder.start(fastrand()); !enum.done(); enum.next() {
			if sched.gcwaiting != 0 {
				// GC work may be available.
				return nil, false, now, pollUntil, true
			}
			p2 := allp[enum.position()]
			if pp == p2 {
				continue
			}

			// Steal timers from p2. This call to checkTimers is the only place
			// where we might hold a lock on a different P's timers. We do this
			// once on the last pass before checking runnext because stealing
			// from the other P's runnext should be the last resort, so if there
			// are timers to steal do that first.
			//
			// We only check timers on one of the stealing iterations because
			// the time stored in now doesn't change in this loop and checking
			// the timers for each P more than once with the same value of now
			// is probably a waste of time.
			//
			// timerpMask tells us whether the P may have timers at all. If it
			// can't, no need to check at all.
			if stealTimersOrRunNextG && timerpMask.read(enum.position()) {
				tnow, w, ran := checkTimers(p2, now)
				now = tnow
				if w != 0 && (pollUntil == 0 || w < pollUntil) {
					pollUntil = w
				}
				if ran {
					// Running the timers may have
					// made an arbitrary number of G's
					// ready and added them to this P's
					// local run queue. That invalidates
					// the assumption of runqsteal
					// that it always has room to add
					// stolen G's. So check now if there
					// is a local G to run.
					if gp, inheritTime := runqget(pp); gp != nil {
						return gp, inheritTime, now, pollUntil, ranTimer
					}
					ranTimer = true
				}
			}

			// Don't bother to attempt to steal if p2 is idle.
			if !idlepMask.read(enum.position()) {
				if gp := runqsteal(pp, p2, stealTimersOrRunNextG); gp != nil {
					return gp, false, now, pollUntil, ranTimer
				}
			}
		}
	}

	// No goroutines found to steal. Regardless, running a timer may have
	// made some goroutine ready that we missed. Indicate the next timer to
	// wait for.
	return nil, false, now, pollUntil, ranTimer
}
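// stealOrder visits the Ps in a pseudo-random permutation so that
// victims are spread evenly across Ms. A simplified stand-in with a
// similar effect (not the runtime's coprime-stride enumerator) would
// be:
//
//	start := fastrand() % procs
//	for i := uint32(0); i < procs; i++ {
//		p2 := allp[(start+i)%procs]
//		// ... try to steal from p2 ...
//	}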
// Check all Ps for a runnable G to steal.
//
// On entry we have no P. If a G is available to steal and a P is available,
// the P is returned which the caller should acquire and attempt to steal the
// work to.
func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p {
	for id, p2 := range allpSnapshot {
		if !idlepMaskSnapshot.read(uint32(id)) && !runqempty(p2) {
			lock(&sched.lock)
			pp := pidleget()
			unlock(&sched.lock)
			if pp != nil {
				return pp
			}

			// Can't get a P, don't bother checking remaining Ps.
			break
		}
	}

	return nil
}
// Check all Ps for a timer expiring sooner than pollUntil.
//
// Returns updated pollUntil value.
func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64 {
	for id, p2 := range allpSnapshot {
		if timerpMaskSnapshot.read(uint32(id)) {
			w := nobarrierWakeTime(p2)
			if w != 0 && (pollUntil == 0 || w < pollUntil) {
				pollUntil = w
			}
		}
	}

	return pollUntil
}
// Check for idle-priority GC, without a P on entry.
//
// If some GC work, a P, and a worker G are all available, the P and G will be
// returned. The returned P has not been wired yet.
func checkIdleGCNoP() (*p, *g) {
	// N.B. Since we have no P, gcBlackenEnabled may change at any time; we
	// must check again after acquiring a P.
	if atomic.Load(&gcBlackenEnabled) == 0 {
		return nil, nil
	}
	if !gcMarkWorkAvailable(nil) {
		return nil, nil
	}

	// Work is available; we can start an idle GC worker only if there is
	// an available P and available worker G.
	//
	// We can attempt to acquire these in either order, though both have
	// synchronization concerns (see below). Workers are almost always
	// available (see comment in findRunnableGCWorker for the one case
	// there may be none). Since we're slightly less likely to find a P,
	// check for that first.
	//
	// Synchronization: note that we must hold sched.lock until we are
	// committed to keeping it. Otherwise we cannot put the unnecessary P
	// back in sched.pidle without performing the full set of idle
	// transition checks.
	//
	// If we were to check gcBgMarkWorkerPool first, we must somehow handle
	// the assumption in gcControllerState.findRunnableGCWorker that an
	// empty gcBgMarkWorkerPool is only possible if gcMarkDone is running.
	lock(&sched.lock)
	pp := pidleget()
	if pp == nil {
		unlock(&sched.lock)
		return nil, nil
	}

	// Now that we own a P, gcBlackenEnabled can't change (as it requires
	// STW).
	if gcBlackenEnabled == 0 {
		pidleput(pp)
		unlock(&sched.lock)
		return nil, nil
	}

	node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
	if node == nil {
		pidleput(pp)
		unlock(&sched.lock)
		return nil, nil
	}

	unlock(&sched.lock)

	return pp, node.gp.ptr()
}
// wakeNetPoller wakes up the thread sleeping in the network poller if it isn't
// going to wake up before the when argument; or it wakes an idle P to service
// timers and the network poller if there isn't one already.
func wakeNetPoller(when int64) {
	if atomic.Load64(&sched.lastpoll) == 0 {
		// In findrunnable we ensure that when polling the pollUntil
		// field is either zero or the time to which the current
		// poll is expected to run. This can have a spurious wakeup
		// but should never miss a wakeup.
		pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
		if pollerPollUntil == 0 || pollerPollUntil > when {
			netpollBreak()
		}
	} else {
		// There are no threads in the network poller, try to get
		// one there so it can handle new timers.
		if GOOS != "plan9" { // Temporary workaround - see issue #42303.
			wakep()
		}
	}
}
func resetspinning() {
	_g_ := getg()
	if !_g_.m.spinning {
		throw("resetspinning: not a spinning m")
	}
	_g_.m.spinning = false
	nmspinning := atomic.Xadd(&sched.nmspinning, -1)
	if int32(nmspinning) < 0 {
		throw("findrunnable: negative nmspinning")
	}
	// M wakeup policy is deliberately somewhat conservative, so check if we
	// need to wakeup another P here. See "Worker thread parking/unparking"
	// comment at the top of the file for details.
	wakep()
}
// injectglist adds each runnable G on the list to some run queue,
// and clears glist. If there is no current P, they are added to the
// global queue, and up to npidle M's are started to run them.
// Otherwise, for each idle P, this adds a G to the global queue
// and starts an M. Any remaining G's are added to the current P's
// run queue.
// This may temporarily acquire sched.lock.
// Can run concurrently with GC.
func injectglist(glist *gList) {
	if glist.empty() {
		return
	}
	if trace.enabled {
		for gp := glist.head.ptr(); gp != nil; gp = gp.schedlink.ptr() {
			traceGoUnpark(gp, 0)
		}
	}

	// Mark all the goroutines as runnable before we put them
	// on the run queues.
	head := glist.head.ptr()
	var tail *g
	qsize := 0
	for gp := head; gp != nil; gp = gp.schedlink.ptr() {
		tail = gp
		qsize++
		casgstatus(gp, _Gwaiting, _Grunnable)
	}

	// Turn the gList into a gQueue.
	var q gQueue
	q.head.set(head)
	q.tail.set(tail)
	*glist = gList{}

	startIdle := func(n int) {
		for ; n != 0 && sched.npidle != 0; n-- {
			startm(nil, false)
		}
	}

	pp := getg().m.p.ptr()
	if pp == nil {
		lock(&sched.lock)
		globrunqputbatch(&q, int32(qsize))
		unlock(&sched.lock)
		startIdle(qsize)
		return
	}

	npidle := int(atomic.Load(&sched.npidle))
	var globq gQueue
	var n int
	for n = 0; n < npidle && !q.empty(); n++ {
		g := q.pop()
		globq.pushBack(g)
	}
	if n > 0 {
		lock(&sched.lock)
		globrunqputbatch(&globq, int32(n))
		unlock(&sched.lock)
		startIdle(n)
		qsize -= n
	}

	if !q.empty() {
		runqputbatch(pp, &q, qsize)
	}
}
// One round of scheduler: find a runnable goroutine and execute it.
// Never returns.
func schedule() {
	_g_ := getg()

	if _g_.m.locks != 0 {
		throw("schedule: holding locks")
	}

	if _g_.m.lockedg != 0 {
		stoplockedm()
		execute(_g_.m.lockedg.ptr(), false) // Never returns.
	}

	// We should not schedule away from a g that is executing a cgo call,
	// since the cgo call is using the m's g0 stack.
	if _g_.m.incgo {
		throw("schedule: in cgo")
	}

top:
	pp := _g_.m.p.ptr()

	if sched.gcwaiting != 0 {
		gcstopm()
		goto top
	}
	if pp.runSafePointFn != 0 {
		runSafePointFn()
	}

	// Sanity check: if we are spinning, the run queue should be empty.
	// Check this before calling checkTimers, as that might call
	// goready to put a ready goroutine on the local run queue.
	if _g_.m.spinning && (pp.runnext != 0 || pp.runqhead != pp.runqtail) {
		throw("schedule: spinning with local work")
	}

	checkTimers(pp, 0)
	var gp *g
	var inheritTime bool

	// Normal goroutines will check for need to wakeP in ready,
	// but GCworkers and tracereaders will not, so the check must
	// be done here instead.
	tryWakeP := false
	if trace.enabled || trace.shutdown {
		gp = traceReader()
		if gp != nil {
			casgstatus(gp, _Gwaiting, _Grunnable)
			traceGoUnpark(gp, 0)
			tryWakeP = true
		}
	}
	if gp == nil && gcBlackenEnabled != 0 {
		gp = gcController.findRunnableGCWorker(_g_.m.p.ptr())
		if gp != nil {
			tryWakeP = true
		}
	}
	if gp == nil {
		// Check the global runnable queue once in a while to ensure fairness.
		// Otherwise two goroutines can completely occupy the local runqueue
		// by constantly respawning each other.
		if _g_.m.p.ptr().schedtick%61 == 0 && sched.runqsize > 0 {
			lock(&sched.lock)
			gp = globrunqget(_g_.m.p.ptr(), 1)
			unlock(&sched.lock)
		}
	}
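	// A worked example of the check above: two goroutines that
	// repeatedly ready each other never leave the local run queue, so
	// without this check a runnable G parked on the global queue could
	// starve forever. Sampling on schedtick%61 bounds the wait to 61
	// scheduling rounds, and 61 is prime, so the sampling does not
	// resonate with common power-of-two patterns in application
	// behavior.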
	if gp == nil {
		gp, inheritTime = runqget(_g_.m.p.ptr())
		// We can see gp != nil here even if the M is spinning,
		// if checkTimers added a local goroutine via goready.

		// Because gccgo does not implement preemption as a stack check,
		// we need to check for preemption here for fairness.
		// Otherwise goroutines on the local queue may starve
		// goroutines on the global queue.
		// Since we preempt by storing the goroutine on the global
		// queue, this is the only place we need to check preempt.
		// This does not call checkPreempt because gp is not running.
		if gp != nil && gp.preempt {
			gp.preempt = false
			lock(&sched.lock)
			globrunqput(gp)
			unlock(&sched.lock)
			goto top
		}
	}
	if gp == nil {
		gp, inheritTime = findrunnable() // blocks until work is available
	}
	// This thread is going to run a goroutine and is not spinning anymore,
	// so if it was marked as spinning we need to reset it now and potentially
	// start a new spinning M.
	if _g_.m.spinning {
		resetspinning()
	}

	if sched.disable.user && !schedEnabled(gp) {
		// Scheduling of this goroutine is disabled. Put it on
		// the list of pending runnable goroutines for when we
		// re-enable user scheduling and look again.
		lock(&sched.lock)
		if schedEnabled(gp) {
			// Something re-enabled scheduling while we
			// were acquiring the lock.
			unlock(&sched.lock)
		} else {
			sched.disable.runnable.pushBack(gp)
			sched.disable.n++
			unlock(&sched.lock)
			goto top
		}
	}

	// If about to schedule a not-normal goroutine (a GCworker or tracereader),
	// wake a P if there is one.
	if tryWakeP {
		wakep()
	}

	if gp.lockedm != 0 {
		// Hands off own p to the locked m,
		// then blocks waiting for a new p.
		startlockedm(gp)
		goto top
	}

	execute(gp, inheritTime)
}
// dropg removes the association between m and the current goroutine m->curg (gp for short).
// Typically a caller sets gp's status away from Grunning and then
// immediately calls dropg to finish the job. The caller is also responsible
// for arranging that gp will be restarted using ready at an
// appropriate time. After calling dropg and arranging for gp to be
// readied later, the caller can do other work but eventually should
// call schedule to restart the scheduling of goroutines on this m.
func dropg() {
	_g_ := getg()

	setMNoWB(&_g_.m.curg.m, nil)
	setGNoWB(&_g_.m.curg, nil)
}
// checkTimers runs any timers for the P that are ready.
// If now is not 0 it is the current time.
// It returns the passed time, or the current time if now was passed as 0;
// the time when the next timer should run, or 0 if there is no next timer;
// and reports whether it ran any timers.
// If the time when the next timer should run is not 0,
// it is always larger than the returned time.
// We pass now in and out to avoid extra calls of nanotime.
//go:yeswritebarrierrec
func checkTimers(pp *p, now int64) (rnow, pollUntil int64, ran bool) {
	// If it's not yet time for the first timer, or the first adjusted
	// timer, then there is nothing to do.
	next := int64(atomic.Load64(&pp.timer0When))
	nextAdj := int64(atomic.Load64(&pp.timerModifiedEarliest))
	if next == 0 || (nextAdj != 0 && nextAdj < next) {
		next = nextAdj
	}

	if next == 0 {
		// No timers to run or adjust.
		return now, 0, false
	}

	if now == 0 {
		now = nanotime()
	}
	if now < next {
		// Next timer is not ready to run, but keep going
		// if we would clear deleted timers.
		// This corresponds to the condition below where
		// we decide whether to call clearDeletedTimers.
		if pp != getg().m.p.ptr() || int(atomic.Load(&pp.deletedTimers)) <= int(atomic.Load(&pp.numTimers)/4) {
			return now, next, false
		}
	}

	lock(&pp.timersLock)

	if len(pp.timers) > 0 {
		adjusttimers(pp, now)
		for len(pp.timers) > 0 {
			// Note that runtimer may temporarily unlock
			// pp.timersLock.
			if tw := runtimer(pp, now); tw != 0 {
				if tw > 0 {
					pollUntil = tw
				}
				break
			}
			ran = true
		}
	}

	// If this is the local P, and there are a lot of deleted timers,
	// clear them out. We only do this for the local P to reduce
	// lock contention on timersLock.
	if pp == getg().m.p.ptr() && int(atomic.Load(&pp.deletedTimers)) > len(pp.timers)/4 {
		clearDeletedTimers(pp)
	}

	unlock(&pp.timersLock)

	return now, pollUntil, ran
}
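// The contract, with hypothetical values: a caller that passes now=0
// gets rnow sampled from nanotime(); if the earliest timer on pp fires
// at t=150 while rnow=100, checkTimers returns (100, 150, false)
// without running anything, and findrunnable can use 150 as its poll
// deadline:
//
//	rnow, next, ran := checkTimers(pp, 0)
//	// next == 0, or next > rnow: sleep until next at the earliest.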
func parkunlock_c(gp *g, lock unsafe.Pointer) bool {
	unlock((*mutex)(lock))
	return true
}
// park continuation on g0.
func park_m(gp *g) {
	_g_ := getg()

	if trace.enabled {
		traceGoPark(_g_.m.waittraceev, _g_.m.waittraceskip)
	}

	casgstatus(gp, _Grunning, _Gwaiting)
	dropg()

	if fn := _g_.m.waitunlockf; fn != nil {
		ok := fn(gp, _g_.m.waitlock)
		_g_.m.waitunlockf = nil
		_g_.m.waitlock = nil
		if !ok {
			if trace.enabled {
				traceGoUnpark(gp, 2)
			}
			casgstatus(gp, _Gwaiting, _Grunnable)
			execute(gp, true) // Schedule it back, never returns.
		}
	}
	schedule()
}
func goschedImpl(gp *g) {
	status := readgstatus(gp)
	if status&^_Gscan != _Grunning {
		dumpgstatus(gp)
		throw("bad g status")
	}
	casgstatus(gp, _Grunning, _Grunnable)
	dropg()
	lock(&sched.lock)
	globrunqput(gp)
	unlock(&sched.lock)

	schedule()
}
// Gosched continuation on g0.
func gosched_m(gp *g) {
	if trace.enabled {
		traceGoSched()
	}
	goschedImpl(gp)
}

// goschedguarded is a forbidden-states-avoided version of gosched_m
func goschedguarded_m(gp *g) {
	if !canPreemptM(gp.m) {
		gogo(gp) // never return
	}

	if trace.enabled {
		traceGoSched()
	}
	goschedImpl(gp)
}

func gopreempt_m(gp *g) {
	if trace.enabled {
		traceGoPreempt()
	}
	goschedImpl(gp)
}
// preemptPark parks gp and puts it in _Gpreempted.
func preemptPark(gp *g) {
	if trace.enabled {
		traceGoPark(traceEvGoBlock, 0)
	}
	status := readgstatus(gp)
	if status&^_Gscan != _Grunning {
		dumpgstatus(gp)
		throw("bad g status")
	}
	gp.waitreason = waitReasonPreempted

	// Transition from _Grunning to _Gscan|_Gpreempted. We can't
	// be in _Grunning when we dropg because then we'd be running
	// without an M, but the moment we're in _Gpreempted,
	// something could claim this G before we've fully cleaned it
	// up. Hence, we set the scan bit to lock down further
	// transitions until we can dropg.
	casGToPreemptScan(gp, _Grunning, _Gscan|_Gpreempted)
	dropg()
	casfrom_Gscanstatus(gp, _Gscan|_Gpreempted, _Gpreempted)
	schedule()
}
// goyield is like Gosched, but it:
// - emits a GoPreempt trace event instead of a GoSched trace event
// - puts the current G on the runq of the current P instead of the globrunq
func goyield() {
	checkTimeouts()
	mcall(goyield_m)
}

func goyield_m(gp *g) {
	if trace.enabled {
		traceGoPreempt()
	}
	pp := gp.m.p.ptr()
	casgstatus(gp, _Grunning, _Grunnable)
	dropg()
	runqput(pp, gp, false)
	schedule()
}
// Finishes execution of the current goroutine.
func goexit1() {
	if trace.enabled {
		traceGoEnd()
	}
	mcall(goexit0)
}

// goexit continuation on g0.
func goexit0(gp *g) {
	_g_ := getg()
	_p_ := _g_.m.p.ptr()

	casgstatus(gp, _Grunning, _Gdead)
	// gcController.addScannableStack(_p_, -int64(gp.stack.hi-gp.stack.lo))
	if isSystemGoroutine(gp, false) {
		atomic.Xadd(&sched.ngsys, -1)
		gp.isSystemGoroutine = false
	}
	gp.m = nil
	locked := gp.lockedm != 0
	gp.lockedm = 0
	_g_.m.lockedg = 0
	gp.preemptStop = false
	gp.paniconfault = false
	gp._defer = nil // should be true already but just in case.
	gp._panic = nil // non-nil for Goexit during panic. points at stack-allocated data.

	if gcBlackenEnabled != 0 && gp.gcAssistBytes > 0 {
		// Flush assist credit to the global pool. This gives
		// better information to pacing if the application is
		// rapidly creating and exiting goroutines.
		assistWorkPerByte := gcController.assistWorkPerByte.Load()
		scanCredit := int64(assistWorkPerByte * float64(gp.gcAssistBytes))
		atomic.Xaddint64(&gcController.bgScanCredit, scanCredit)
		gp.gcAssistBytes = 0
	}

	dropg()

	if GOARCH == "wasm" { // no threads yet on wasm
		gfput(_p_, gp)
		schedule() // never returns
	}

	if _g_.m.lockedInt != 0 {
		print("invalid m->lockedInt = ", _g_.m.lockedInt, "\n")
		throw("internal lockOSThread error")
	}
	gfput(_p_, gp)
	if locked {
		// The goroutine may have locked this thread because
		// it put it in an unusual kernel state. Kill it
		// rather than returning it to the thread pool.

		// Return to mstart, which will release the P and exit
		// the thread.
		if GOOS != "plan9" { // See golang.org/issue/22227.
			_g_.m.exiting = true
			gogo(_g_.m.g0)
		} else {
			// Clear lockedExt on plan9 since we may end up re-using
			// this thread.
			_g_.m.lockedExt = 0
		}
	}
	schedule()
}
// The goroutine g is about to enter a system call.
// Record that it's not using the cpu anymore.
// This is called only from the go syscall library and cgocall,
// not from the low-level system calls used by the runtime.
//
// The entersyscall function is written in C, so that it can save the
// current register context so that the GC will see them.
// It calls reentersyscall.
//
// Syscall tracing:
// At the start of a syscall we emit traceGoSysCall to capture the stack trace.
// If the syscall does not block, that is it, we do not emit any other events.
// If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;
// when syscall returns we emit traceGoSysExit and when the goroutine starts running
// (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.
// To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,
// we remember current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick),
// whoever emits traceGoSysBlock increments p.syscalltick afterwards;
// and we wait for the increment before emitting traceGoSysExit.
// Note that the increment is done even if tracing is not enabled,
// because tracing can be enabled in the middle of syscall. We don't want the wait to hang.
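//
// A worked example of the syscalltick handshake (hypothetical values):
// on entry the G records m.syscalltick = p.syscalltick = 5. If retake
// steals the P, it emits traceGoSysBlock and bumps p.syscalltick to 6;
// on exit the goroutine waits until oldp.syscalltick != 5 before
// emitting traceGoSysExit, so the two events cannot be reordered in
// the trace.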
func reentersyscall(pc, sp uintptr) {
	_g_ := getg()

	// Disable preemption because during this function g is in Gsyscall status,
	// but can have inconsistent g->sched, do not let GC observe it.
	_g_.m.locks++

	_g_.throwsplit = true
	_g_.syscallsp = sp
	_g_.syscallpc = pc
	casgstatus(_g_, _Grunning, _Gsyscall)

	if trace.enabled {
		systemstack(traceGoSysCall)
	}

	if atomic.Load(&sched.sysmonwait) != 0 {
		systemstack(entersyscall_sysmon)
	}

	if _g_.m.p.ptr().runSafePointFn != 0 {
		// runSafePointFn may stack split if run on this stack
		systemstack(runSafePointFn)
	}

	_g_.m.syscalltick = _g_.m.p.ptr().syscalltick
	_g_.sysblocktraced = true

	pp := _g_.m.p.ptr()
	pp.m = 0
	_g_.m.oldp.set(pp)
	_g_.m.p = 0
	atomic.Store(&pp.status, _Psyscall)
	if sched.gcwaiting != 0 {
		systemstack(entersyscall_gcwait)
	}

	_g_.m.locks--
}
func entersyscall_sysmon() {
	lock(&sched.lock)
	if atomic.Load(&sched.sysmonwait) != 0 {
		atomic.Store(&sched.sysmonwait, 0)
		notewakeup(&sched.sysmonnote)
	}
	unlock(&sched.lock)
}
func entersyscall_gcwait() {
	_g_ := getg()
	_p_ := _g_.m.oldp.ptr()

	lock(&sched.lock)
	if sched.stopwait > 0 && atomic.Cas(&_p_.status, _Psyscall, _Pgcstop) {
		if trace.enabled {
			traceGoSysBlock(_p_)
			traceProcStop(_p_)
		}
		_p_.syscalltick++
		if sched.stopwait--; sched.stopwait == 0 {
			notewakeup(&sched.stopnote)
		}
	}
	unlock(&sched.lock)
}
func reentersyscallblock(pc, sp uintptr) {
	_g_ := getg()

	_g_.m.locks++ // see comment in entersyscall
	_g_.throwsplit = true
	_g_.m.syscalltick = _g_.m.p.ptr().syscalltick
	_g_.sysblocktraced = true
	_g_.m.p.ptr().syscalltick++

	// Leave SP around for GC and traceback.
	_g_.syscallsp = sp
	_g_.syscallpc = pc
	casgstatus(_g_, _Grunning, _Gsyscall)
	systemstack(entersyscallblock_handoff)

	_g_.m.locks--
}

func entersyscallblock_handoff() {
	if trace.enabled {
		traceGoSysCall()
		traceGoSysBlock(getg().m.p.ptr())
	}
	handoffp(releasep())
}
// The goroutine g exited its system call.
// Arrange for it to run on a cpu again.
// This is called only from the go syscall library, not
// from the low-level system calls used by the runtime.
//
// Write barriers are not allowed because our P may have been stolen.
//
//go:nowritebarrierrec
func exitsyscall() {
	_g_ := getg()

	_g_.m.locks++ // see comment in entersyscall

	_g_.waitsince = 0
	oldp := _g_.m.oldp.ptr()
	_g_.m.oldp = 0
	if exitsyscallfast(oldp) {
		if trace.enabled {
			if oldp != _g_.m.p.ptr() || _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
				systemstack(traceGoStart)
			}
		}
		// There's a cpu for us, so we can run.
		_g_.m.p.ptr().syscalltick++
		// We need to cas the status and scan before resuming...
		casgstatus(_g_, _Gsyscall, _Grunning)

		exitsyscallclear(_g_)
		_g_.m.locks--
		_g_.throwsplit = false

		// Check preemption, since unlike gc we don't check on
		// every call.
		if getg().preempt {
			checkPreempt()
		}
		_g_.throwsplit = false

		if sched.disable.user && !schedEnabled(_g_) {
			// Scheduling of this goroutine is disabled.
			Gosched()
		}

		return
	}

	_g_.sysexitticks = 0
	if trace.enabled {
		// Wait till traceGoSysBlock event is emitted.
		// This ensures consistency of the trace (the goroutine is started after it is blocked).
		for oldp != nil && oldp.syscalltick == _g_.m.syscalltick {
			osyield()
		}
		// We can't trace syscall exit right now because we don't have a P.
		// Tracing code can invoke write barriers that cannot run without a P.
		// So instead we remember the syscall exit time and emit the event
		// in execute when we have a P.
		_g_.sysexitticks = cputicks()
	}

	_g_.m.locks--

	// Call the scheduler.
	mcall(exitsyscall0)

	// Scheduler returned, so we're allowed to run now.
	// Delete the syscallsp information that we left for
	// the garbage collector during the system call.
	// Must wait until now because until gosched returns
	// we don't know for sure that the garbage collector
	// is not running.
	exitsyscallclear(_g_)

	_g_.m.p.ptr().syscalltick++
	_g_.throwsplit = false
}
func exitsyscallfast(oldp *p) bool {
	_g_ := getg()

	// Freezetheworld sets stopwait but does not retake P's.
	if sched.stopwait == freezeStopWait {
		return false
	}

	// Try to re-acquire the last P.
	if oldp != nil && oldp.status == _Psyscall && atomic.Cas(&oldp.status, _Psyscall, _Pidle) {
		// There's a cpu for us, so we can run.
		wirep(oldp)
		exitsyscallfast_reacquired()
		return true
	}

	// Try to get any other idle P.
	if sched.pidle != 0 {
		var ok bool
		systemstack(func() {
			ok = exitsyscallfast_pidle()
			if ok && trace.enabled {
				if oldp != nil {
					// Wait till traceGoSysBlock event is emitted.
					// This ensures consistency of the trace (the goroutine is started after it is blocked).
					for oldp.syscalltick == _g_.m.syscalltick {
						osyield()
					}
				}
				traceGoSysExit(0)
			}
		})
		if ok {
			return true
		}
	}
	return false
}
// exitsyscallfast_reacquired is the exitsyscall path on which this G
// has successfully reacquired the P it was running on before the
// syscall.
func exitsyscallfast_reacquired() {
	_g_ := getg()
	if _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
		if trace.enabled {
			// The p was retaken and then entered a syscall again (since _g_.m.syscalltick has changed).
			// traceGoSysBlock for this syscall was already emitted,
			// but here we effectively retake the p from the new syscall running on the same p.
			systemstack(func() {
				// Denote blocking of the new syscall.
				traceGoSysBlock(_g_.m.p.ptr())
				// Denote completion of the current syscall.
				traceGoSysExit(0)
			})
		}
		_g_.m.p.ptr().syscalltick++
	}
}
func exitsyscallfast_pidle() bool {
	lock(&sched.lock)
	_p_ := pidleget()
	if _p_ != nil && atomic.Load(&sched.sysmonwait) != 0 {
		atomic.Store(&sched.sysmonwait, 0)
		notewakeup(&sched.sysmonnote)
	}
	unlock(&sched.lock)
	if _p_ != nil {
		acquirep(_p_)
		return true
	}
	return false
}
// exitsyscall slow path on g0.
// Failed to acquire P, enqueue gp as runnable.
//
// Called via mcall, so gp is the calling g from this M.
//
//go:nowritebarrierrec
func exitsyscall0(gp *g) {
	casgstatus(gp, _Gsyscall, _Gexitingsyscall)
	dropg()
	casgstatus(gp, _Gexitingsyscall, _Grunnable)

	lock(&sched.lock)
	var _p_ *p
	if schedEnabled(gp) {
		_p_ = pidleget()
	}
	var locked bool
	if _p_ == nil {
		globrunqput(gp)

		// Below, we stoplockedm if gp is locked. globrunqput releases
		// ownership of gp, so we must check if gp is locked prior to
		// committing the release by unlocking sched.lock, otherwise we
		// could race with another M transitioning gp from unlocked to
		// locked.
		locked = gp.lockedm != 0
	} else if atomic.Load(&sched.sysmonwait) != 0 {
		atomic.Store(&sched.sysmonwait, 0)
		notewakeup(&sched.sysmonnote)
	}
	unlock(&sched.lock)
	if _p_ != nil {
		acquirep(_p_)
		execute(gp, false) // Never returns.
	}
	if locked {
		// Wait until another thread schedules gp and so m again.
		//
		// N.B. lockedm must be this M, as this g was running on this M
		// before entersyscall.
		stoplockedm()
		execute(gp, false) // Never returns.
	}
	stopm()
	schedule() // Never returns.
}
// exitsyscallclear clears GC-related information that we only track
// during a syscall.
func exitsyscallclear(gp *g) {
	// Garbage collector isn't running (since we are), so okay to
	// clear syscallsp.
	gp.syscallsp = 0

	memclrNoHeapPointers(unsafe.Pointer(&gp.gcregs), unsafe.Sizeof(gp.gcregs))
}
// Code generated by cgo, and some library code, calls syscall.Entersyscall
// and syscall.Exitsyscall.
//
//go:linkname syscall_entersyscall syscall.Entersyscall
func syscall_entersyscall() {
	entersyscall()
}

//go:linkname syscall_exitsyscall syscall.Exitsyscall
func syscall_exitsyscall() {
	exitsyscall()
}
// Called from syscall package before fork.
//go:linkname syscall_runtime_BeforeFork syscall.runtime__BeforeFork
//go:nosplit
func syscall_runtime_BeforeFork() {
	gp := getg().m.curg

	// Block signals during a fork, so that the child does not run
	// a signal handler before exec if a signal is sent to the process
	// group. See issue #18600.
	gp.m.locks++
	sigsave(&gp.m.sigmask)
	sigblock(false)
}

// Called from syscall package after fork in parent.
//go:linkname syscall_runtime_AfterFork syscall.runtime__AfterFork
//go:nosplit
func syscall_runtime_AfterFork() {
	gp := getg().m.curg

	msigrestore(gp.m.sigmask)

	gp.m.locks--
}
// inForkedChild is true while manipulating signals in the child process.
// This is used to avoid calling libc functions in case we are using vfork.
var inForkedChild bool

// Called from syscall package after fork in child.
// It resets non-sigignored signals to the default handler, and
// restores the signal mask in preparation for the exec.
//
// Because this might be called during a vfork, and therefore may be
// temporarily sharing address space with the parent process, this must
// not change any global variables or call into C code that may do so.
//
//go:linkname syscall_runtime_AfterForkInChild syscall.runtime__AfterForkInChild
//go:nowritebarrierrec
func syscall_runtime_AfterForkInChild() {
	// It's OK to change the global variable inForkedChild here
	// because we are going to change it back. There is no race here,
	// because if we are sharing address space with the parent process,
	// then the parent process can not be running concurrently.
	inForkedChild = true

	clearSignalHandlers()

	// When we are the child we are the only thread running,
	// so we know that nothing else has changed gp.m.sigmask.
	msigrestore(getg().m.sigmask)

	inForkedChild = false
}
// pendingPreemptSignals is the number of preemption signals
// that have been sent but not received. This is only used on Darwin.
var pendingPreemptSignals uint32

// Called from syscall package before Exec.
//go:linkname syscall_runtime_BeforeExec syscall.runtime__BeforeExec
func syscall_runtime_BeforeExec() {
	// Prevent thread creation during exec.
	execLock.lock()

	// On Darwin, wait for all pending preemption signals to
	// be received. See issue #41702.
	if GOOS == "darwin" || GOOS == "ios" {
		for int32(atomic.Load(&pendingPreemptSignals)) > 0 {
			osyield()
		}
	}
}

// Called from syscall package after Exec.
//go:linkname syscall_runtime_AfterExec syscall.runtime__AfterExec
func syscall_runtime_AfterExec() {
	execLock.unlock()
}
// panicgonil is used for gccgo as we need to use a compiler check for
// a nil func, in case we have to build a thunk.
//go:linkname panicgonil
func panicgonil() {
	getg().m.throwing = -1 // do not dump full stacks
	throw("go of nil func value")
}
// Create a new g running fn passing arg as the single argument.
// Put it on the queue of g's waiting to run.
// The compiler turns a go statement into a call to this.
//go:linkname newproc __go_go
func newproc(fn uintptr, arg unsafe.Pointer) *g {
	_g_ := getg()

	if fn == 0 {
		_g_.m.throwing = -1 // do not dump full stacks
		throw("go of nil func value")
	}
	acquirem() // disable preemption because it can be holding p in a local var

	_p_ := _g_.m.p.ptr()
	newg := gfget(_p_)
	var sp unsafe.Pointer
	var spsize uintptr
	if newg == nil {
		newg = malg(true, false, &sp, &spsize)
		casgstatus(newg, _Gidle, _Gdead)
		allgadd(newg) // publishes with a g->status of Gdead so GC scanner doesn't look at uninitialized stack.
	} else {
		resetNewG(newg, &sp, &spsize)
	}

	if readgstatus(newg) != _Gdead {
		throw("newproc1: new g is not Gdead")
	}

	// Store the C function pointer into entryfn, take the address
	// of entryfn, convert it to a Go function value, and store
	// that in entry.
	newg.entryfn = fn
	var entry func(unsafe.Pointer)
	*(*unsafe.Pointer)(unsafe.Pointer(&entry)) = unsafe.Pointer(&newg.entryfn)
	newg.entry = entry
	newg.param = arg
	newg.gopc = getcallerpc()
	newg.ancestors = saveAncestors(_g_)
	newg.startpc = fn
	if _g_.m.curg != nil {
		newg.labels = _g_.m.curg.labels
	}
	if isSystemGoroutine(newg, false) {
		atomic.Xadd(&sched.ngsys, +1)
	} else {
		// Only user goroutines inherit pprof labels.
		if _g_.m.curg != nil {
			newg.labels = _g_.m.curg.labels
		}
	}
	// Track initial transition?
	newg.trackingSeq = uint8(fastrand())
	if newg.trackingSeq%gTrackingPeriod == 0 {
		newg.tracking = true
	}
	casgstatus(newg, _Gdead, _Grunnable)
	// gcController.addScannableStack(_p_, int64(newg.stack.hi-newg.stack.lo))

	if _p_.goidcache == _p_.goidcacheend {
		// Sched.goidgen is the last allocated id,
		// this batch must be [sched.goidgen+1, sched.goidgen+GoidCacheBatch].
		// At startup sched.goidgen=0, so main goroutine receives goid=1.
		_p_.goidcache = atomic.Xadd64(&sched.goidgen, _GoidCacheBatch)
		_p_.goidcache -= _GoidCacheBatch - 1
		_p_.goidcacheend = _p_.goidcache + _GoidCacheBatch
	}
	newg.goid = int64(_p_.goidcache)
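	// A worked example of the goid batching above, assuming
	// _GoidCacheBatch = 16: the first P to refill performs
	// Xadd64(&sched.goidgen, 16), which returns 16; goidcache becomes
	// 16-(16-1) = 1 and goidcacheend becomes 17, so this P hands out
	// goids 1 through 16 without touching the shared counter again.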
	_p_.goidcache++
	if trace.enabled {
		traceGoCreate(newg, newg.startpc)
	}

	makeGContext(newg, sp, spsize)

	releasem(_g_.m)

	runqput(_p_, newg, true)

	if mainStarted {
		wakep()
	}

	return newg
}
// expectedSystemGoroutines counts the number of goroutines expected
// to mark themselves as system goroutines. After they mark themselves
// by calling setSystemGoroutine, this is decremented. NumGoroutines
// uses this to wait for all system goroutines to mark themselves
// before it counts them.
var expectedSystemGoroutines uint32

// expectSystemGoroutine is called when starting a goroutine that will
// call setSystemGoroutine. It increments expectedSystemGoroutines.
func expectSystemGoroutine() {
	atomic.Xadd(&expectedSystemGoroutines, +1)
}

// waitForSystemGoroutines waits for all currently expected system
// goroutines to register themselves.
func waitForSystemGoroutines() {
	for atomic.Load(&expectedSystemGoroutines) > 0 {
		Gosched()
		osyield()
	}
}

// setSystemGoroutine marks this goroutine as a "system goroutine".
// In the gc toolchain this is done by comparing startpc to a list of
// saved special PCs. In gccgo that approach does not work as startpc
// is often a thunk that invokes the real function with arguments,
// so the thunk address never matches the saved special PCs. Instead,
// since there are only a limited number of "system goroutines",
// we force each one to mark itself as special.
func setSystemGoroutine() {
	getg().isSystemGoroutine = true
	atomic.Xadd(&sched.ngsys, +1)
	atomic.Xadd(&expectedSystemGoroutines, -1)
}
// saveAncestors copies previous ancestors of the given caller g and
// includes info for the current caller into a new set of tracebacks for
// a g being created.
func saveAncestors(callergp *g) *[]ancestorInfo {
	// Copy all prior info, except for the root goroutine (goid 0).
	if debug.tracebackancestors <= 0 || callergp.goid == 0 {
		return nil
	}
	var callerAncestors []ancestorInfo
	if callergp.ancestors != nil {
		callerAncestors = *callergp.ancestors
	}
	n := int32(len(callerAncestors)) + 1
	if n > debug.tracebackancestors {
		n = debug.tracebackancestors
	}
	ancestors := make([]ancestorInfo, n)
	copy(ancestors[1:], callerAncestors)

	var pcs [_TracebackMaxFrames]uintptr
	// FIXME: This should get a traceback of callergp.
	// npcs := gcallers(callergp, 0, pcs[:])
	npcs := 0
	ipcs := make([]uintptr, npcs)
	copy(ipcs, pcs[:npcs])
	ancestors[0] = ancestorInfo{
		pcs:  ipcs,
		goid: callergp.goid,
		gopc: callergp.gopc,
	}

	ancestorsp := new([]ancestorInfo)
	*ancestorsp = ancestors
	return ancestorsp
}
// Put on gfree list.
// If local list is too long, transfer a batch to the global list.
func gfput(_p_ *p, gp *g) {
	if readgstatus(gp) != _Gdead {
		throw("gfput: bad status (not Gdead)")
	}

	_p_.gFree.push(gp)
	_p_.gFree.n++
	if _p_.gFree.n >= 64 {
		var (
			inc      int32
			noStackQ gQueue
		)
		for _p_.gFree.n >= 32 {
			gp = _p_.gFree.pop()
			_p_.gFree.n--
			noStackQ.push(gp)
			inc++
		}
		lock(&sched.gFree.lock)
		sched.gFree.list.pushAll(noStackQ)
		sched.gFree.n += inc
		unlock(&sched.gFree.lock)
	}
}
// Get from gfree list.
// If local list is empty, grab a batch from global list.
func gfget(_p_ *p) *g {
retry:
	if _p_.gFree.empty() && !sched.gFree.list.empty() {
		lock(&sched.gFree.lock)
		// Move a batch of free Gs to the P.
		for _p_.gFree.n < 32 {
			gp := sched.gFree.list.pop()
			if gp == nil {
				break
			}
			sched.gFree.n--
			_p_.gFree.push(gp)
			_p_.gFree.n++
		}
		unlock(&sched.gFree.lock)
		goto retry
	}
	gp := _p_.gFree.pop()
	if gp == nil {
		return nil
	}
	_p_.gFree.n--
	return gp
}
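// The constants above give the free list simple hysteresis: gfput
// starts shedding once a P holds 64 dead Gs and stops at 32, while
// gfget refills an empty local list up to 32 from the global list.
// Steady-state churn therefore stays on the P-local list, and
// sched.gFree.lock is only taken for whole batches.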
// Purge all cached G's from gfree list to the global list.
func gfpurge(_p_ *p) {
	var (
		inc      int32
		noStackQ gQueue
	)
	for !_p_.gFree.empty() {
		gp := _p_.gFree.pop()
		_p_.gFree.n--
		noStackQ.push(gp)
		inc++
	}
	lock(&sched.gFree.lock)
	sched.gFree.list.pushAll(noStackQ)
	sched.gFree.n += inc
	unlock(&sched.gFree.lock)
}
// Breakpoint executes a breakpoint trap.
func Breakpoint() {
	breakpoint()
}

// dolockOSThread is called by LockOSThread and lockOSThread below
// after they modify m.locked. Do not allow preemption during this call,
// or else the m might be different in this function than in the caller.
//go:nosplit
func dolockOSThread() {
	if GOARCH == "wasm" {
		return // no threads on wasm yet
	}
	_g_ := getg()
	_g_.m.lockedg.set(_g_)
	_g_.lockedm.set(_g_.m)
}
// LockOSThread wires the calling goroutine to its current operating system thread.
// The calling goroutine will always execute in that thread,
// and no other goroutine will execute in it,
// until the calling goroutine has made as many calls to
// UnlockOSThread as to LockOSThread.
// If the calling goroutine exits without unlocking the thread,
// the thread will be terminated.
//
// All init functions are run on the startup thread. Calling LockOSThread
// from an init function will cause the main function to be invoked on
// that thread.
//
// A goroutine should call LockOSThread before calling OS services or
// non-Go library functions that depend on per-thread state.
func LockOSThread() {
	if atomic.Load(&newmHandoff.haveTemplateThread) == 0 && GOOS != "plan9" {
		// If we need to start a new thread from the locked
		// thread, we need the template thread. Start it now
		// while we're in a known-good state.
		startTemplateThread()
	}
	_g_ := getg()
	_g_.m.lockedExt++
	if _g_.m.lockedExt == 0 {
		_g_.m.lockedExt--
		panic("LockOSThread nesting overflow")
	}
	dolockOSThread()
}

//go:nosplit
func lockOSThread() {
	getg().m.lockedInt++
	dolockOSThread()
}
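// Typical use of the public API pairs the two calls for the lifetime
// of the thread-bound work, e.g. for code that relies on per-thread
// OS or C-library state (illustrative only):
//
//	func threadBoundWork() {
//		runtime.LockOSThread()
//		defer runtime.UnlockOSThread()
//		// per-thread state is now stable for this goroutine
//	}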
// dounlockOSThread is called by UnlockOSThread and unlockOSThread below
// after they update m->locked. Do not allow preemption during this call,
// or else the m might be different in this function than in the caller.
//go:nosplit
func dounlockOSThread() {
	if GOARCH == "wasm" {
		return // no threads on wasm yet
	}
	_g_ := getg()
	if _g_.m.lockedInt != 0 || _g_.m.lockedExt != 0 {
		return
	}
	_g_.m.lockedg = 0
	_g_.lockedm = 0
}
// UnlockOSThread undoes an earlier call to LockOSThread.
// If this drops the number of active LockOSThread calls on the
// calling goroutine to zero, it unwires the calling goroutine from
// its fixed operating system thread.
// If there are no active LockOSThread calls, this is a no-op.
//
// Before calling UnlockOSThread, the caller must ensure that the OS
// thread is suitable for running other goroutines. If the caller made
// any permanent changes to the state of the thread that would affect
// other goroutines, it should not call this function and thus leave
// the goroutine locked to the OS thread until the goroutine (and
// hence the thread) exits.
func UnlockOSThread() {
	_g_ := getg()
	if _g_.m.lockedExt == 0 {
		return
	}
	_g_.m.lockedExt--
	dounlockOSThread()
}

//go:nosplit
func unlockOSThread() {
	_g_ := getg()
	if _g_.m.lockedInt == 0 {
		systemstack(badunlockosthread)
	}
	_g_.m.lockedInt--
	dounlockOSThread()
}

func badunlockosthread() {
	throw("runtime: internal error: misuse of lockOSThread/unlockOSThread")
}
func gcount() int32 {
	n := int32(atomic.Loaduintptr(&allglen)) - sched.gFree.n - int32(atomic.Load(&sched.ngsys))
	for _, _p_ := range allp {
		n -= _p_.gFree.n
	}

	// All these variables can be changed concurrently, so the result can be inconsistent.
	// But at least the current goroutine is running.
	if n < 1 {
		n = 1
	}
	return n
}

func mcount() int32 {
	return int32(sched.mnext - sched.nmfreed)
}
func _System()                    { _System() }
func _ExternalCode()              { _ExternalCode() }
func _LostExternalCode()          { _LostExternalCode() }
func _GC()                        { _GC() }
func _LostSIGPROFDuringAtomic64() { _LostSIGPROFDuringAtomic64() }
func _VDSO()                      { _VDSO() }
// Called if we receive a SIGPROF signal.
// Called by the signal handler, may run during STW.
//go:nowritebarrierrec
func sigprof(pc uintptr, gp *g, mp *m) {
	if prof.hz == 0 {
		return
	}

	// If mp.profilehz is 0, then profiling is not enabled for this thread.
	// We must check this to avoid a deadlock between setcpuprofilerate
	// and the call to cpuprof.add, below.
	if mp != nil && mp.profilehz == 0 {
		return
	}

	// Profiling runs concurrently with GC, so it must not allocate.
	// Set a trap in case the code does allocate.
	// Note that on windows, one thread takes profiles of all the
	// other threads, so mp is usually not getg().m.
	// In fact mp may not even be stopped.
	// See golang.org/issue/17165.
	getg().m.mallocing++

	traceback := true

	// If SIGPROF arrived while already fetching runtime callers
	// we can have trouble on older systems because the unwind
	// library calls dl_iterate_phdr which was not reentrant in
	// the past. alreadyInCallers checks for that.
	if gp == nil || alreadyInCallers() {
		traceback = false
	}

	var stk [maxCPUProfStack]uintptr
	n := 0
	if traceback {
		var stklocs [maxCPUProfStack]location
		n = callers(1, stklocs[:])

		// Issue 26595: the stack trace we've just collected is going
		// to include frames that we don't want to report in the CPU
		// profile, including signal handler frames. Here is what we
		// might typically see at the point of "callers" above for a
		// signal delivered to the application routine "interesting"
		// called by "main".
		//
		//  0: runtime.sigprof
		//  1: runtime.sighandler
		//  2: runtime.sigtrampgo
		//  3: runtime.sigtramp
		//  4: <signal handler called>
		//  5: main.interesting_routine
		//  6: main.main
		//
		// To ensure a sane profile, walk through the frames in
		// "stklocs" until we find the "runtime.sigtramp" frame, then
		// report only those frames below the frame one down from
		// that. On systems that don't split stack, "sigtramp" can
		// do a sibling call to "sigtrampgo", so use "sigtrampgo"
		// if we don't find "sigtramp". If for some reason
		// neither "runtime.sigtramp" nor "runtime.sigtrampgo" is
		// present, don't make any changes.
		framesToDiscard := 0
		for i := 0; i < n; i++ {
			if stklocs[i].function == "runtime.sigtrampgo" && i+2 < n {
				framesToDiscard = i + 2
			}
			if stklocs[i].function == "runtime.sigtramp" && i+2 < n {
				framesToDiscard = i + 2
			}
		}
		n -= framesToDiscard
		for i := 0; i < n; i++ {
			stk[i] = stklocs[i+framesToDiscard].pc
		}
	}

	if n <= 0 {
		// Normal traceback is impossible or has failed.
		// Account it against abstract "System" or "GC".
		n = 2
		stk[0] = pc
		if mp.preemptoff != "" {
			stk[1] = abi.FuncPCABIInternal(_GC) + sys.PCQuantum
		} else {
			stk[1] = abi.FuncPCABIInternal(_System) + sys.PCQuantum
		}
	}

	if prof.hz != 0 {
		// Note: it can happen on Windows that we interrupted a system thread
		// with no g, so gp could be nil. The other nil checks are done out of
		// caution, but not expected to be nil in practice.
		var tagPtr *unsafe.Pointer
		if gp != nil && gp.m != nil && gp.m.curg != nil {
			tagPtr = &gp.m.curg.labels
		}
		cpuprof.add(tagPtr, stk[:n])
	}
	getg().m.mallocing--
}
// setcpuprofilerate sets the CPU profiling rate to hz times per second.
// If hz <= 0, setcpuprofilerate turns off CPU profiling.
func setcpuprofilerate(hz int32) {
	// Force sane arguments.
	if hz < 0 {
		hz = 0
	}

	// Disable preemption, otherwise we can be rescheduled to another thread
	// that has profiling enabled.
	_g_ := getg()
	_g_.m.locks++

	// Stop profiler on this thread so that it is safe to lock prof.
	// if a profiling signal came in while we had prof locked,
	// it would deadlock.
	setThreadCPUProfiler(0)

	for !atomic.Cas(&prof.signalLock, 0, 1) {
		osyield()
	}
	if prof.hz != hz {
		setProcessCPUProfiler(hz)
		prof.hz = hz
	}
	atomic.Store(&prof.signalLock, 0)

	lock(&sched.lock)
	sched.profilehz = hz
	unlock(&sched.lock)

	if hz != 0 {
		setThreadCPUProfiler(hz)
	}

	_g_.m.locks--
}
// init initializes pp, which may be a freshly allocated p or a
// previously destroyed p, and transitions it to status _Pgcstop.
func (pp *p) init(id int32) {
	pp.id = id
	pp.status = _Pgcstop
	pp.sudogcache = pp.sudogbuf[:0]
	pp.deferpool = pp.deferpoolbuf[:0]
	if pp.mcache == nil {
		if id == 0 {
			if mcache0 == nil {
				throw("missing mcache?")
			}
			// Use the bootstrap mcache0. Only one P will get
			// mcache0: the one with ID 0.
			pp.mcache = mcache0
		} else {
			pp.mcache = allocmcache()
		}
	}
	if raceenabled && pp.raceprocctx == 0 {
		if id == 0 {
			pp.raceprocctx = raceprocctx0
			raceprocctx0 = 0 // bootstrap
		} else {
			pp.raceprocctx = raceproccreate()
		}
	}
	lockInit(&pp.timersLock, lockRankTimers)

	// This P may get timers when it starts running. Set the mask here
	// since the P may not go through pidleget (notably P 0 on startup).
	timerpMask.set(id)
	// Similarly, we may not go through pidleget before this P starts
	// running if it is P 0 on startup.
	idlepMask.clear(id)
}

// destroy releases all of the resources associated with pp and
// transitions it to status _Pdead.
//
// sched.lock must be held and the world must be stopped.
func (pp *p) destroy() {
	assertLockHeld(&sched.lock)
	assertWorldStopped()

	// Move all runnable goroutines to the global queue
	for pp.runqhead != pp.runqtail {
		// Pop from tail of local queue
		pp.runqtail--
		gp := pp.runq[pp.runqtail%uint32(len(pp.runq))].ptr()
		// Push onto head of global queue
		globrunqputhead(gp)
	}
	if pp.runnext != 0 {
		globrunqputhead(pp.runnext.ptr())
		pp.runnext = 0
	}
	if len(pp.timers) > 0 {
		plocal := getg().m.p.ptr()
		// The world is stopped, but we acquire timersLock to
		// protect against sysmon calling timeSleepUntil.
		// This is the only case where we hold the timersLock of
		// more than one P, so there are no deadlock concerns.
		lock(&plocal.timersLock)
		lock(&pp.timersLock)
		moveTimers(plocal, pp.timers)
		pp.timers = nil
		pp.numTimers = 0
		pp.deletedTimers = 0
		atomic.Store64(&pp.timer0When, 0)
		unlock(&pp.timersLock)
		unlock(&plocal.timersLock)
	}
	// Flush p's write barrier buffer.
	if gcphase != _GCoff {
		wbBufFlush1(pp)
		pp.gcw.dispose()
	}
	for i := range pp.sudogbuf {
		pp.sudogbuf[i] = nil
	}
	pp.sudogcache = pp.sudogbuf[:0]
	for j := range pp.deferpoolbuf {
		pp.deferpoolbuf[j] = nil
	}
	pp.deferpool = pp.deferpoolbuf[:0]
	systemstack(func() {
		for i := 0; i < pp.mspancache.len; i++ {
			// Safe to call since the world is stopped.
			mheap_.spanalloc.free(unsafe.Pointer(pp.mspancache.buf[i]))
		}
		pp.mspancache.len = 0
		lock(&mheap_.lock)
		pp.pcache.flush(&mheap_.pages)
		unlock(&mheap_.lock)
	})
	freemcache(pp.mcache)
	pp.mcache = nil
	pp.status = _Pdead
}

// Change number of processors.
//
// sched.lock must be held, and the world must be stopped.
//
// gcworkbufs must not be being modified by either the GC or the write barrier
// code, so the GC must not be running if the number of Ps actually changes.
//
// Returns list of Ps with local work, they need to be scheduled by the caller.
func procresize(nprocs int32) *p {
	assertLockHeld(&sched.lock)
	assertWorldStopped()

	old := gomaxprocs
	if old < 0 || nprocs <= 0 {
		throw("procresize: invalid arg")
	}
	if trace.enabled {
		traceGomaxprocs(nprocs)
	}

	// update statistics
	now := nanotime()
	if sched.procresizetime != 0 {
		sched.totaltime += int64(old) * (now - sched.procresizetime)
	}
	sched.procresizetime = now

	maskWords := (nprocs + 31) / 32

	// Grow allp if necessary.
	if nprocs > int32(len(allp)) {
		// Synchronize with retake, which could be running
		// concurrently since it doesn't run on a P.
		lock(&allpLock)
		if nprocs <= int32(cap(allp)) {
			allp = allp[:nprocs]
		} else {
			nallp := make([]*p, nprocs)
			// Copy everything up to allp's cap so we
			// never lose old allocated Ps.
			copy(nallp, allp[:cap(allp)])
			allp = nallp
		}

		if maskWords <= int32(cap(idlepMask)) {
			idlepMask = idlepMask[:maskWords]
			timerpMask = timerpMask[:maskWords]
		} else {
			nidlepMask := make([]uint32, maskWords)
			// No need to copy beyond len, old Ps are irrelevant.
			copy(nidlepMask, idlepMask)
			idlepMask = nidlepMask

			ntimerpMask := make([]uint32, maskWords)
			copy(ntimerpMask, timerpMask)
			timerpMask = ntimerpMask
		}
		unlock(&allpLock)
	}

	// initialize new P's
	for i := old; i < nprocs; i++ {
		pp := allp[i]
		if pp == nil {
			pp = new(p)
		}
		pp.init(i)
		atomicstorep(unsafe.Pointer(&allp[i]), unsafe.Pointer(pp))
	}

	_g_ := getg()
	if _g_.m.p != 0 && _g_.m.p.ptr().id < nprocs {
		// continue to use the current P
		_g_.m.p.ptr().status = _Prunning
		_g_.m.p.ptr().mcache.prepareForSweep()
	} else {
		// release the current P and acquire allp[0].
		//
		// We must do this before destroying our current P
		// because p.destroy itself has write barriers, so we
		// need to do that from a valid P.
		if _g_.m.p != 0 {
			if trace.enabled {
				// Pretend that we were descheduled
				// and then scheduled again to keep
				// the trace sane.
				traceGoSched()
				traceProcStop(_g_.m.p.ptr())
			}
			_g_.m.p.ptr().m = 0
		}
		_g_.m.p = 0
		p := allp[0]
		p.m = 0
		p.status = _Pidle
		acquirep(p)
		if trace.enabled {
			traceGoStart()
		}
	}

	// g.m.p is now set, so we no longer need mcache0 for bootstrapping.
	mcache0 = nil

	// release resources from unused P's
	for i := nprocs; i < old; i++ {
		p := allp[i]
		p.destroy()
		// can't free P itself because it can be referenced by an M in syscall
	}

	// Trim allp.
	if int32(len(allp)) != nprocs {
		lock(&allpLock)
		allp = allp[:nprocs]
		idlepMask = idlepMask[:maskWords]
		timerpMask = timerpMask[:maskWords]
		unlock(&allpLock)
	}

	var runnablePs *p
	for i := nprocs - 1; i >= 0; i-- {
		p := allp[i]
		if _g_.m.p.ptr() == p {
			continue
		}
		p.status = _Pidle
		if runqempty(p) {
			pidleput(p)
		} else {
			p.m.set(mget())
			p.link.set(runnablePs)
			runnablePs = p
		}
	}
	stealOrder.reset(uint32(nprocs))
	var int32p *int32 = &gomaxprocs // make compiler check that gomaxprocs is an int32
	atomic.Store((*uint32)(unsafe.Pointer(int32p)), uint32(nprocs))
	return runnablePs
}
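
// An illustrative aside: note the copy(nallp, allp[:cap(allp)]) in the grow
// path above. It copies up to the old slice's capacity rather than its
// length, so Ps parked beyond len(allp) after a previous shrink are not lost.
// A standalone sketch of the same grow-preserving-spares pattern
// (hypothetical names; not runtime code):
//
//	package main
//
//	import "fmt"
//
//	func grow(s []*int, n int) []*int {
//		if n <= cap(s) {
//			return s[:n] // spares past len are still in the backing array
//		}
//		ns := make([]*int, n)
//		copy(ns, s[:cap(s)]) // keep everything ever allocated, not just s[:len(s)]
//		return ns
//	}
//
//	func main() {
//		x, y := 1, 2
//		s := []*int{&x, &y}
//		s = s[:1] // "shrink": the second element is beyond len but within cap
//		s = grow(s, 4)
//		fmt.Println(s[1] != nil) // true: the spare survived the grow
//	}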

// Associate p and the current m.
//
// This function is allowed to have write barriers even if the caller
// isn't because it immediately acquires _p_.
//
//go:yeswritebarrierrec
func acquirep(_p_ *p) {
	// Do the part that isn't allowed to have write barriers.
	wirep(_p_)

	// Have p; write barriers now allowed.

	// Perform deferred mcache flush before this P can allocate
	// from a potentially stale mcache.
	_p_.mcache.prepareForSweep()

	if trace.enabled {
		traceProcStart()
	}
}

// wirep is the first step of acquirep, which actually associates the
// current M to _p_. This is broken out so we can disallow write
// barriers for this part, since we don't yet have a P.
//
//go:nowritebarrierrec
func wirep(_p_ *p) {
	_g_ := getg()

	if _g_.m.p != 0 {
		throw("wirep: already in go")
	}
	if _p_.m != 0 || _p_.status != _Pidle {
		id := int64(0)
		if _p_.m != 0 {
			id = _p_.m.ptr().id
		}
		print("wirep: p->m=", _p_.m, "(", id, ") p->status=", _p_.status, "\n")
		throw("wirep: invalid p state")
	}
	_g_.m.p.set(_p_)
	_p_.m.set(_g_.m)
	_p_.status = _Prunning
}

// Disassociate p and the current m.
func releasep() *p {
	_g_ := getg()

	if _g_.m.p == 0 {
		throw("releasep: invalid arg")
	}
	_p_ := _g_.m.p.ptr()
	if _p_.m.ptr() != _g_.m || _p_.status != _Prunning {
		print("releasep: m=", _g_.m, " m->p=", _g_.m.p.ptr(), " p->m=", hex(_p_.m), " p->status=", _p_.status, "\n")
		throw("releasep: invalid p state")
	}
	if trace.enabled {
		traceProcStop(_g_.m.p.ptr())
	}
	_g_.m.p = 0
	_p_.m = 0
	_p_.status = _Pidle
	return _p_
}

func incidlelocked(v int32) {
	lock(&sched.lock)
	sched.nmidlelocked += v
	if v > 0 {
		checkdead()
	}
	unlock(&sched.lock)
}

// Check for deadlock situation.
// The check is based on number of running M's, if 0 -> deadlock.
// sched.lock must be held.
func checkdead() {
	assertLockHeld(&sched.lock)

	// For -buildmode=c-shared or -buildmode=c-archive it's OK if
	// there are no running goroutines. The calling program is
	// assumed to be running.
	if islibrary || isarchive {
		return
	}

	// If we are dying because of a signal caught on an already idle thread,
	// freezetheworld will cause all running threads to block.
	// And runtime will essentially enter into deadlock state,
	// except that there is a thread that will call exit soon.
	if panicking > 0 {
		return
	}

	// If we are not running under cgo, but we have an extra M then account
	// for it. (It is possible to have an extra M on Windows without cgo to
	// accommodate callbacks created by syscall.NewCallback. See issue #6751
	// for details.)
	var run0 int32
	if !iscgo && cgoHasExtraM {
		mp := lockextra(true)
		haveExtraM := extraMCount > 0
		unlockextra(mp)
		if haveExtraM {
			run0 = 1
		}
	}

	run := mcount() - sched.nmidle - sched.nmidlelocked - sched.nmsys
	if run > run0 {
		return
	}
	if run < 0 {
		print("runtime: checkdead: nmidle=", sched.nmidle, " nmidlelocked=", sched.nmidlelocked, " mcount=", mcount(), " nmsys=", sched.nmsys, "\n")
		throw("checkdead: inconsistent counts")
	}

	grunning := 0
	forEachG(func(gp *g) {
		if isSystemGoroutine(gp, false) {
			return
		}
		s := readgstatus(gp)
		switch s &^ _Gscan {
		case _Gwaiting, _Gpreempted:
			grunning++
		case _Grunnable, _Grunning, _Gsyscall:
			print("runtime: checkdead: find g ", gp.goid, " in status ", s, "\n")
			throw("checkdead: runnable g")
		}
	})
	if grunning == 0 { // possible if main goroutine calls runtime·Goexit()
		unlock(&sched.lock) // unlock so that GODEBUG=scheddetail=1 doesn't hang
		throw("no goroutines (main called runtime.Goexit) - deadlock!")
	}

	// Maybe jump time forward for playground.
	if faketime != 0 {
		when, _p_ := timeSleepUntil()
		if _p_ != nil {
			faketime = when
			for pp := &sched.pidle; *pp != 0; pp = &(*pp).ptr().link {
				if (*pp).ptr() == _p_ {
					*pp = _p_.link
					break
				}
			}
			mp := mget()
			if mp == nil {
				// There should always be a free M since
				// nothing is running.
				throw("checkdead: no m for timer")
			}
			mp.nextp.set(_p_)
			notewakeup(&mp.park)
			return
		}
	}

	// There are no goroutines running, so we can look at the P's.
	for _, _p_ := range allp {
		if len(_p_.timers) > 0 {
			return
		}
	}

	getg().m.throwing = -1 // do not dump full stacks
	unlock(&sched.lock)    // unlock so that GODEBUG=scheddetail=1 doesn't hang
	throw("all goroutines are asleep - deadlock!")
}

// forcegcperiod is the maximum time in nanoseconds between garbage
// collections. If we go this long without a garbage collection, one
// is forced to run.
//
// This is a variable for testing purposes. It normally doesn't change.
var forcegcperiod int64 = 2 * 60 * 1e9

// needSysmonWorkaround is true if the workaround for
// golang.org/issue/42515 is needed on NetBSD.
var needSysmonWorkaround bool = false

// Always runs without a P, so write barriers are not allowed.
//
//go:nowritebarrierrec
func sysmon() {
	lock(&sched.lock)
	sched.nmsys++
	checkdead()
	unlock(&sched.lock)

	lasttrace := int64(0)
	idle := 0 // how many cycles in succession we had not woken up anybody
	delay := uint32(0)

	for {
		if idle == 0 { // start with 20us sleep...
			delay = 20
		} else if idle > 50 { // start doubling the sleep after 1ms...
			delay *= 2
		}
		if delay > 10*1000 { // up to 10ms
			delay = 10 * 1000
		}
		usleep(delay)

		// sysmon should not enter deep sleep if schedtrace is enabled so that
		// it can print that information at the right time.
		//
		// It should also not enter deep sleep if there are any active P's so
		// that it can retake P's from syscalls, preempt long running G's, and
		// poll the network if all P's are busy for long stretches.
		//
		// It should wakeup from deep sleep if any P's become active either due
		// to exiting a syscall or waking up due to a timer expiring so that it
		// can resume performing those duties. If it wakes from a syscall it
		// resets idle and delay as a bet that since it had retaken a P from a
		// syscall before, it may need to do it again shortly after the
		// application starts work again. It does not reset idle when waking
		// from a timer to avoid adding system load to applications that spend
		// most of their time sleeping.
		now := nanotime()
		if debug.schedtrace <= 0 && (sched.gcwaiting != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs)) {
			lock(&sched.lock)
			if atomic.Load(&sched.gcwaiting) != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs) {
				syscallWake := false
				next, _ := timeSleepUntil()
				if next > now {
					atomic.Store(&sched.sysmonwait, 1)
					unlock(&sched.lock)
					// Make wake-up period small enough
					// for the sampling to be correct.
					sleep := forcegcperiod / 2
					if next-now < sleep {
						sleep = next - now
					}
					shouldRelax := sleep >= osRelaxMinNS
					if shouldRelax {
						osRelax(true)
					}
					syscallWake = notetsleep(&sched.sysmonnote, sleep)
					if shouldRelax {
						osRelax(false)
					}
					lock(&sched.lock)
					atomic.Store(&sched.sysmonwait, 0)
					noteclear(&sched.sysmonnote)
				}
				if syscallWake {
					idle = 0
					delay = 20
				}
			}
			unlock(&sched.lock)
		}

		lock(&sched.sysmonlock)
		// Update now in case we blocked on sysmonnote or spent a long time
		// blocked on schedlock or sysmonlock above.
		now = nanotime()

		// trigger libc interceptors if needed
		if *cgo_yield != nil {
			asmcgocall(*cgo_yield, nil)
		}
		// poll network if not polled for more than 10ms
		lastpoll := int64(atomic.Load64(&sched.lastpoll))
		if netpollinited() && lastpoll != 0 && lastpoll+10*1000*1000 < now {
			atomic.Cas64(&sched.lastpoll, uint64(lastpoll), uint64(now))
			list := netpoll(0) // non-blocking - returns list of goroutines
			if !list.empty() {
				// Need to decrement number of idle locked M's
				// (pretending that one more is running) before injectglist.
				// Otherwise it can lead to the following situation:
				// injectglist grabs all P's but before it starts M's to run the P's,
				// another M returns from syscall, finishes running its G,
				// observes that there is no work to do and no other running M's
				// and reports deadlock.
				incidlelocked(-1)
				injectglist(&list)
				incidlelocked(1)
			}
		}
		if GOOS == "netbsd" && needSysmonWorkaround {
			// netpoll is responsible for waiting for timer
			// expiration, so we typically don't have to worry
			// about starting an M to service timers. (Note that
			// sleep for timeSleepUntil above simply ensures sysmon
			// starts running again when that timer expiration may
			// cause Go code to run again).
			//
			// However, netbsd has a kernel bug that sometimes
			// misses netpollBreak wake-ups, which can lead to
			// unbounded delays servicing timers. If we detect this
			// overrun, then startm to get something to handle the
			// timer.
			//
			// See issue 42515 and
			// https://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=50094.
			if next, _ := timeSleepUntil(); next < now {
				startm(nil, false)
			}
		}
		if atomic.Load(&scavenge.sysmonWake) != 0 {
			// Kick the scavenger awake if someone requested it.
			wakeScavenger()
		}
		// retake P's blocked in syscalls
		// and preempt long running G's
		if retake(now) != 0 {
			idle = 0
		} else {
			idle++
		}
		// check if we need to force a GC
		if t := (gcTrigger{kind: gcTriggerTime, now: now}); t.test() && atomic.Load(&forcegc.idle) != 0 {
			lock(&forcegc.lock)
			forcegc.idle = 0
			var list gList
			list.push(forcegc.g)
			injectglist(&list)
			unlock(&forcegc.lock)
		}
		if debug.schedtrace > 0 && lasttrace+int64(debug.schedtrace)*1000000 <= now {
			lasttrace = now
			schedtrace(debug.scheddetail > 0)
		}
		unlock(&sched.sysmonlock)
	}
}
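
// An illustrative aside: the idle/delay dance at the top of sysmon's loop is
// a simple bounded exponential backoff: a flat 20us for the first ~50 quiet
// cycles, then doubling, capped at 10ms. A standalone sketch of the schedule
// (hypothetical nextDelay; not runtime code):
//
//	package main
//
//	import "fmt"
//
//	// nextDelay returns the sleep in microseconds for a given count of
//	// consecutive idle cycles, mirroring sysmon's backoff.
//	func nextDelay(idle int, delay uint32) uint32 {
//		if idle == 0 { // start with 20us sleep...
//			delay = 20
//		} else if idle > 50 { // start doubling the sleep after 1ms...
//			delay *= 2
//		}
//		if delay > 10*1000 { // up to 10ms
//			delay = 10 * 1000
//		}
//		return delay
//	}
//
//	func main() {
//		delay := uint32(0)
//		for idle := 0; idle < 70; idle++ {
//			delay = nextDelay(idle, delay)
//		}
//		fmt.Println(delay) // 10000: capped at 10ms
//	}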

type sysmontick struct {
	schedtick   uint32
	schedwhen   int64
	syscalltick uint32
	syscallwhen int64
}

// forcePreemptNS is the time slice given to a G before it is
// preempted.
const forcePreemptNS = 10 * 1000 * 1000 // 10ms

func retake(now int64) uint32 {
	n := 0
	// Prevent allp slice changes. This lock will be completely
	// uncontended unless we're already stopping the world.
	lock(&allpLock)
	// We can't use a range loop over allp because we may
	// temporarily drop the allpLock. Hence, we need to re-fetch
	// allp each time around the loop.
	for i := 0; i < len(allp); i++ {
		_p_ := allp[i]
		if _p_ == nil {
			// This can happen if procresize has grown
			// allp but not yet created new Ps.
			continue
		}
		pd := &_p_.sysmontick
		s := _p_.status
		sysretake := false
		if s == _Prunning || s == _Psyscall {
			// Preempt G if it's running for too long.
			t := int64(_p_.schedtick)
			if int64(pd.schedtick) != t {
				pd.schedtick = uint32(t)
				pd.schedwhen = now
			} else if pd.schedwhen+forcePreemptNS <= now {
				preemptone(_p_)
				// In case of syscall, preemptone() doesn't
				// work, because there is no M wired to P.
				sysretake = true
			}
		}
		if s == _Psyscall {
			// Retake P from syscall if it's there for more than 1 sysmon tick (at least 20us).
			t := int64(_p_.syscalltick)
			if !sysretake && int64(pd.syscalltick) != t {
				pd.syscalltick = uint32(t)
				pd.syscallwhen = now
				continue
			}
			// On the one hand we don't want to retake Ps if there is no other work to do,
			// but on the other hand we want to retake them eventually
			// because they can prevent the sysmon thread from deep sleep.
			if runqempty(_p_) && atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) > 0 && pd.syscallwhen+10*1000*1000 > now {
				continue
			}
			// Drop allpLock so we can take sched.lock.
			unlock(&allpLock)
			// Need to decrement number of idle locked M's
			// (pretending that one more is running) before the CAS.
			// Otherwise the M from which we retake can exit the syscall,
			// increment nmidle and report deadlock.
			incidlelocked(-1)
			if atomic.Cas(&_p_.status, s, _Pidle) {
				if trace.enabled {
					traceGoSysBlock(_p_)
					traceProcStop(_p_)
				}
				n++
				_p_.syscalltick++
				handoffp(_p_)
			}
			incidlelocked(1)
			lock(&allpLock)
		}
	}
	unlock(&allpLock)
	return uint32(n)
}
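
// An illustrative aside: retake's core trick is generation counting. It
// remembers a (tick, timestamp) pair per P and only acts when the tick has
// not advanced since the last observation and enough time has passed. A
// standalone sketch of the same idea (hypothetical names; not runtime code):
//
//	package main
//
//	import "fmt"
//
//	type watcher struct {
//		lastTick uint32
//		lastSeen int64
//	}
//
//	// stuck reports whether the observed tick has stayed unchanged for
//	// longer than limit nanoseconds.
//	func (w *watcher) stuck(tick uint32, now, limit int64) bool {
//		if tick != w.lastTick {
//			w.lastTick = tick
//			w.lastSeen = now
//			return false
//		}
//		return w.lastSeen+limit <= now
//	}
//
//	func main() {
//		var w watcher
//		fmt.Println(w.stuck(7, 0, 10e6))    // false: first observation
//		fmt.Println(w.stuck(7, 5e6, 10e6))  // false: unchanged, but only 5ms
//		fmt.Println(w.stuck(7, 12e6, 10e6)) // true: unchanged for 12ms
//	}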

// Tell all goroutines that they have been preempted and they should stop.
// This function is purely best-effort. It can fail to inform a goroutine if a
// processor just started running it.
// No locks need to be held.
// Returns true if preemption request was issued to at least one goroutine.
func preemptall() bool {
	res := false
	for _, _p_ := range allp {
		if _p_.status != _Prunning {
			continue
		}
		if preemptone(_p_) {
			res = true
		}
	}
	return res
}

// Tell the goroutine running on processor P to stop.
// This function is purely best-effort. It can incorrectly fail to inform the
// goroutine. It can inform the wrong goroutine. Even if it informs the
// correct goroutine, that goroutine might ignore the request if it is
// simultaneously executing newstack.
// No lock needs to be held.
// Returns true if preemption request was issued.
// The actual preemption will happen at some point in the future
// and will be indicated by the gp->status no longer being
// _Grunning.
func preemptone(_p_ *p) bool {
	mp := _p_.m.ptr()
	if mp == nil || mp == getg().m {
		return false
	}
	gp := mp.curg
	if gp == nil || gp == mp.g0 {
		return false
	}

	gp.preempt = true

	// At this point the gc implementation sets gp.stackguard0 to
	// a value that causes the goroutine to suspend itself.
	// gccgo has no support for this, and it's hard to support.
	// The split stack code reads a value from its TCB.
	// We have no way to set a value in the TCB of a different thread.
	// And, of course, not all systems support split stack anyhow.
	// Checking the field in the g is expensive, since it requires
	// loading the g from TLS. The best mechanism is likely to be
	// setting a global variable and figuring out a way to efficiently
	// check that global variable.
	//
	// For now we check gp.preempt in schedule, mallocgc, selectgo,
	// and a few other places, which is at least better than doing
	// nothing at all.

	// Request an async preemption of this P.
	if preemptMSupported && debug.asyncpreemptoff == 0 {
		_p_.preempt = true
		preemptM(mp)
	}

	return true
}

func schedtrace(detailed bool) {
	now := nanotime()
	if starttime == 0 {
		starttime = now
	}

	lock(&sched.lock)
	print("SCHED ", (now-starttime)/1e6, "ms: gomaxprocs=", gomaxprocs, " idleprocs=", sched.npidle, " threads=", mcount(), " spinningthreads=", sched.nmspinning, " idlethreads=", sched.nmidle, " runqueue=", sched.runqsize)
	if detailed {
		print(" gcwaiting=", sched.gcwaiting, " nmidlelocked=", sched.nmidlelocked, " stopwait=", sched.stopwait, " sysmonwait=", sched.sysmonwait, "\n")
	}
	// We must be careful while reading data from P's, M's and G's.
	// Even if we hold schedlock, most data can be changed concurrently.
	// E.g. (p->m ? p->m->id : -1) can crash if p->m changes from non-nil to nil.
	for i, _p_ := range allp {
		mp := _p_.m.ptr()
		h := atomic.Load(&_p_.runqhead)
		t := atomic.Load(&_p_.runqtail)
		if detailed {
			id := int64(-1)
			if mp != nil {
				id = mp.id
			}
			print("  P", i, ": status=", _p_.status, " schedtick=", _p_.schedtick, " syscalltick=", _p_.syscalltick, " m=", id, " runqsize=", t-h, " gfreecnt=", _p_.gFree.n, " timerslen=", len(_p_.timers), "\n")
		} else {
			// In non-detailed mode format lengths of per-P run queues as:
			// [len1 len2 len3 len4]
			print(" ")
			if i == 0 {
				print("[")
			}
			print(t - h)
			if i == len(allp)-1 {
				print("]\n")
			}
		}
	}

	if !detailed {
		unlock(&sched.lock)
		return
	}

	for mp := allm; mp != nil; mp = mp.alllink {
		_p_ := mp.p.ptr()
		gp := mp.curg
		lockedg := mp.lockedg.ptr()
		id1 := int32(-1)
		if _p_ != nil {
			id1 = _p_.id
		}
		id2 := int64(-1)
		if gp != nil {
			id2 = gp.goid
		}
		id3 := int64(-1)
		if lockedg != nil {
			id3 = lockedg.goid
		}
		print("  M", mp.id, ": p=", id1, " curg=", id2, " mallocing=", mp.mallocing, " throwing=", mp.throwing, " preemptoff=", mp.preemptoff, ""+" locks=", mp.locks, " dying=", mp.dying, " spinning=", mp.spinning, " blocked=", mp.blocked, " lockedg=", id3, "\n")
	}

	forEachG(func(gp *g) {
		mp := gp.m
		lockedm := gp.lockedm.ptr()
		id1 := int64(-1)
		if mp != nil {
			id1 = mp.id
		}
		id2 := int64(-1)
		if lockedm != nil {
			id2 = lockedm.id
		}
		print("  G", gp.goid, ": status=", readgstatus(gp), "(", gp.waitreason.String(), ") m=", id1, " lockedm=", id2, "\n")
	})
	unlock(&sched.lock)
}

// schedEnableUser enables or disables the scheduling of user
// goroutines.
//
// This does not stop already running user goroutines, so the caller
// should first stop the world when disabling user goroutines.
func schedEnableUser(enable bool) {
	lock(&sched.lock)
	if sched.disable.user == !enable {
		unlock(&sched.lock)
		return
	}
	sched.disable.user = !enable
	if enable {
		n := sched.disable.n
		sched.disable.n = 0
		globrunqputbatch(&sched.disable.runnable, n)
		unlock(&sched.lock)
		for ; n != 0 && sched.npidle != 0; n-- {
			startm(nil, false)
		}
	} else {
		unlock(&sched.lock)
	}
}

// schedEnabled reports whether gp should be scheduled. It returns
// false if scheduling of gp is disabled.
//
// sched.lock must be held.
func schedEnabled(gp *g) bool {
	assertLockHeld(&sched.lock)

	if sched.disable.user {
		return isSystemGoroutine(gp, true)
	}
	return true
}

// Put mp on midle list.
// sched.lock must be held.
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func mput(mp *m) {
	assertLockHeld(&sched.lock)

	mp.schedlink = sched.midle
	sched.midle.set(mp)
	sched.nmidle++
	checkdead()
}

// Try to get an m from midle list.
// sched.lock must be held.
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func mget() *m {
	assertLockHeld(&sched.lock)

	mp := sched.midle.ptr()
	if mp != nil {
		sched.midle = mp.schedlink
		sched.nmidle--
	}
	return mp
}

// Put gp on the global runnable queue.
// sched.lock must be held.
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func globrunqput(gp *g) {
	assertLockHeld(&sched.lock)

	sched.runq.pushBack(gp)
	sched.runqsize++
}

// Put gp at the head of the global runnable queue.
// sched.lock must be held.
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func globrunqputhead(gp *g) {
	assertLockHeld(&sched.lock)

	sched.runq.push(gp)
	sched.runqsize++
}

// Put a batch of runnable goroutines on the global runnable queue.
// This clears *batch.
// sched.lock must be held.
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func globrunqputbatch(batch *gQueue, n int32) {
	assertLockHeld(&sched.lock)

	sched.runq.pushBackAll(*batch)
	sched.runqsize += n
	*batch = gQueue{}
}

// Try to get a batch of G's from the global runnable queue.
// sched.lock must be held.
func globrunqget(_p_ *p, max int32) *g {
	assertLockHeld(&sched.lock)

	if sched.runqsize == 0 {
		return nil
	}

	n := sched.runqsize/gomaxprocs + 1
	if n > sched.runqsize {
		n = sched.runqsize
	}
	if max > 0 && n > max {
		n = max
	}
	if n > int32(len(_p_.runq))/2 {
		n = int32(len(_p_.runq)) / 2
	}

	sched.runqsize -= n

	gp := sched.runq.pop()
	n--
	for ; n > 0; n-- {
		gp1 := sched.runq.pop()
		runqput(_p_, gp1, false)
	}
	return gp
}

// pMask is an atomic bitstring with one bit per P.
type pMask []uint32

// read returns true if P id's bit is set.
func (p pMask) read(id uint32) bool {
	word := id / 32
	mask := uint32(1) << (id % 32)
	return (atomic.Load(&p[word]) & mask) != 0
}

// set sets P id's bit.
func (p pMask) set(id int32) {
	word := id / 32
	mask := uint32(1) << (id % 32)
	atomic.Or(&p[word], mask)
}

// clear clears P id's bit.
func (p pMask) clear(id int32) {
	word := id / 32
	mask := uint32(1) << (id % 32)
	atomic.And(&p[word], ^mask)
}
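
// An illustrative aside: pMask packs one bit per P into 32-bit words,
// addressed by id/32 (word) and id%32 (bit). The same shape in a standalone,
// testable form using sync/atomic (hypothetical bitmask type; the runtime
// uses its own atomic.Or/And on uint32 words, emulated here with a CAS loop):
//
//	package main
//
//	import (
//		"fmt"
//		"sync/atomic"
//	)
//
//	type bitmask []uint32
//
//	func (b bitmask) read(id uint32) bool {
//		word := id / 32
//		mask := uint32(1) << (id % 32)
//		return atomic.LoadUint32(&b[word])&mask != 0
//	}
//
//	func (b bitmask) set(id uint32) {
//		word := id / 32
//		mask := uint32(1) << (id % 32)
//		for {
//			old := atomic.LoadUint32(&b[word])
//			if atomic.CompareAndSwapUint32(&b[word], old, old|mask) {
//				return
//			}
//		}
//	}
//
//	func main() {
//		b := make(bitmask, 2) // room for 64 ids
//		b.set(37)
//		fmt.Println(b.read(37), b.read(38)) // true false
//	}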

// updateTimerPMask clears pp's timer mask if it has no timers on its heap.
//
// Ideally, the timer mask would be kept immediately consistent on any timer
// operations. Unfortunately, updating a shared global data structure in the
// timer hot path adds too much overhead in applications frequently switching
// between no timers and some timers.
//
// As a compromise, the timer mask is updated only on pidleget / pidleput. A
// running P (returned by pidleget) may add a timer at any time, so its mask
// must be set. An idle P (passed to pidleput) cannot add new timers while
// idle, so if it has no timers at that time, its mask may be cleared.
//
// Thus, we get the following effects on timer-stealing in findrunnable:
//
// * Idle Ps with no timers when they go idle are never checked in findrunnable
//   (for work- or timer-stealing; this is the ideal case).
// * Running Ps must always be checked.
// * Idle Ps whose timers are stolen must continue to be checked until they run
//   again, even after timer expiration.
//
// When the P starts running again, the mask should be set, as a timer may be
// added at any time.
//
// TODO(prattmic): Additional targeted updates may improve the above cases.
// e.g., updating the mask when stealing a timer.
func updateTimerPMask(pp *p) {
	if atomic.Load(&pp.numTimers) > 0 {
		return
	}

	// Looks like there are no timers, however another P may transiently
	// decrement numTimers when handling a timerModified timer in
	// checkTimers. We must take timersLock to serialize with these changes.
	lock(&pp.timersLock)
	if atomic.Load(&pp.numTimers) == 0 {
		timerpMask.clear(pp.id)
	}
	unlock(&pp.timersLock)
}
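
// An illustrative aside: updateTimerPMask is a classic check / lock /
// re-check: the unsynchronized fast path skips the lock when timers clearly
// exist, and the decision to clear is only made after re-reading the counter
// under the lock. A standalone sketch of the pattern using Go 1.19+ atomic
// types (hypothetical gate type; not runtime code):
//
//	package main
//
//	import (
//		"sync"
//		"sync/atomic"
//	)
//
//	type gate struct {
//		mu    sync.Mutex
//		count atomic.Int64
//		open  atomic.Bool
//	}
//
//	// maybeClose closes the gate only if count is still zero once we hold
//	// the lock; a transient nonzero observed outside the lock is harmless.
//	func (g *gate) maybeClose() {
//		if g.count.Load() > 0 {
//			return // fast path, no lock
//		}
//		g.mu.Lock()
//		if g.count.Load() == 0 {
//			g.open.Store(false)
//		}
//		g.mu.Unlock()
//	}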

// pidleput puts p on the _Pidle list.
//
// This releases ownership of p. Once sched.lock is released it is no longer
// safe to use p.
//
// sched.lock must be held.
//
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func pidleput(_p_ *p) {
	assertLockHeld(&sched.lock)

	if !runqempty(_p_) {
		throw("pidleput: P has non-empty run queue")
	}
	updateTimerPMask(_p_) // clear if there are no timers.
	idlepMask.set(_p_.id)
	_p_.link = sched.pidle
	sched.pidle.set(_p_)
	atomic.Xadd(&sched.npidle, 1) // TODO: fast atomic
}

// pidleget tries to get a p from the _Pidle list, acquiring ownership.
//
// sched.lock must be held.
//
// May run during STW, so write barriers are not allowed.
//go:nowritebarrierrec
func pidleget() *p {
	assertLockHeld(&sched.lock)

	_p_ := sched.pidle.ptr()
	if _p_ != nil {
		// Timer may get added at any time now.
		timerpMask.set(_p_.id)
		idlepMask.clear(_p_.id)
		sched.pidle = _p_.link
		atomic.Xadd(&sched.npidle, -1) // TODO: fast atomic
	}
	return _p_
}

// runqempty reports whether _p_ has no Gs on its local run queue.
// It never returns true spuriously.
func runqempty(_p_ *p) bool {
	// Defend against a race where 1) _p_ has G1 in runqnext but runqhead == runqtail,
	// 2) runqput on _p_ kicks G1 to the runq, 3) runqget on _p_ empties runqnext.
	// Simply observing that runqhead == runqtail and then observing that runqnext == nil
	// does not mean the queue is empty.
	for {
		head := atomic.Load(&_p_.runqhead)
		tail := atomic.Load(&_p_.runqtail)
		runnext := atomic.Loaduintptr((*uintptr)(unsafe.Pointer(&_p_.runnext)))
		if tail == atomic.Load(&_p_.runqtail) {
			return head == tail && runnext == 0
		}
	}
}

// To shake out latent assumptions about scheduling order,
// we introduce some randomness into scheduling decisions
// when running with the race detector.
// The need for this was made obvious by changing the
// (deterministic) scheduling order in Go 1.5 and breaking
// many poorly-written tests.
// With the randomness here, as long as the tests pass
// consistently with -race, they shouldn't have latent scheduling
// assumptions.
const randomizeScheduler = raceenabled

// runqput tries to put g on the local runnable queue.
// If next is false, runqput adds g to the tail of the runnable queue.
// If next is true, runqput puts g in the _p_.runnext slot.
// If the run queue is full, runqput puts g on the global queue.
// Executed only by the owner P.
func runqput(_p_ *p, gp *g, next bool) {
	if randomizeScheduler && next && fastrandn(2) == 0 {
		next = false
	}

	if next {
	retryNext:
		oldnext := _p_.runnext
		if !_p_.runnext.cas(oldnext, guintptr(unsafe.Pointer(gp))) {
			goto retryNext
		}
		if oldnext == 0 {
			return
		}
		// Kick the old runnext out to the regular run queue.
		gp = oldnext.ptr()
	}

retry:
	h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with consumers
	t := _p_.runqtail
	if t-h < uint32(len(_p_.runq)) {
		_p_.runq[t%uint32(len(_p_.runq))].set(gp)
		atomic.StoreRel(&_p_.runqtail, t+1) // store-release, makes the item available for consumption
		return
	}
	if runqputslow(_p_, gp, h, t) {
		return
	}
	// the queue is not full, now the put above must succeed
	goto retry
}

// Put g and a batch of work from local runnable queue on global queue.
// Executed only by the owner P.
func runqputslow(_p_ *p, gp *g, h, t uint32) bool {
	var batch [len(_p_.runq)/2 + 1]*g

	// First, grab a batch from local queue.
	n := t - h
	n = n / 2
	if n != uint32(len(_p_.runq)/2) {
		throw("runqputslow: queue is not full")
	}
	for i := uint32(0); i < n; i++ {
		batch[i] = _p_.runq[(h+i)%uint32(len(_p_.runq))].ptr()
	}
	if !atomic.CasRel(&_p_.runqhead, h, h+n) { // cas-release, commits consume
		return false
	}
	batch[n] = gp

	if randomizeScheduler {
		for i := uint32(1); i <= n; i++ {
			j := fastrandn(i + 1)
			batch[i], batch[j] = batch[j], batch[i]
		}
	}

	// Link the goroutines.
	for i := uint32(0); i < n; i++ {
		batch[i].schedlink.set(batch[i+1])
	}
	var q gQueue
	q.head.set(batch[0])
	q.tail.set(batch[n])

	// Now put the batch on global queue.
	lock(&sched.lock)
	globrunqputbatch(&q, int32(n+1))
	unlock(&sched.lock)
	return true
}
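
// An illustrative aside: the randomizeScheduler loop above is an inside-out
// Fisher-Yates shuffle: visiting i = 1..n and swapping with a random j in
// [0, i] yields a uniformly random permutation. A standalone sketch
// (hypothetical shuffle; fastrandn is runtime-internal, math/rand stands in):
//
//	package main
//
//	import (
//		"fmt"
//		"math/rand"
//	)
//
//	func shuffle(s []int) {
//		for i := 1; i < len(s); i++ {
//			j := rand.Intn(i + 1) // j in [0, i], like fastrandn(i+1)
//			s[i], s[j] = s[j], s[i]
//		}
//	}
//
//	func main() {
//		s := []int{0, 1, 2, 3, 4}
//		shuffle(s)
//		fmt.Println(s) // some permutation of 0..4
//	}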

// runqputbatch tries to put all the G's on q on the local runnable queue.
// If the queue is full, they are put on the global queue; in that case
// this will temporarily acquire the scheduler lock.
// Executed only by the owner P.
func runqputbatch(pp *p, q *gQueue, qsize int) {
	h := atomic.LoadAcq(&pp.runqhead)
	t := pp.runqtail
	n := uint32(0)
	for !q.empty() && t-h < uint32(len(pp.runq)) {
		gp := q.pop()
		pp.runq[t%uint32(len(pp.runq))].set(gp)
		t++
		n++
	}
	qsize -= int(n)

	if randomizeScheduler {
		off := func(o uint32) uint32 {
			return (pp.runqtail + o) % uint32(len(pp.runq))
		}
		for i := uint32(1); i < n; i++ {
			j := fastrandn(i + 1)
			pp.runq[off(i)], pp.runq[off(j)] = pp.runq[off(j)], pp.runq[off(i)]
		}
	}

	atomic.StoreRel(&pp.runqtail, t)
	if !q.empty() {
		lock(&sched.lock)
		globrunqputbatch(q, int32(qsize))
		unlock(&sched.lock)
	}
}

// Get g from local runnable queue.
// If inheritTime is true, gp should inherit the remaining time in the
// current time slice. Otherwise, it should start a new time slice.
// Executed only by the owner P.
func runqget(_p_ *p) (gp *g, inheritTime bool) {
	// If there's a runnext, it's the next G to run.
	next := _p_.runnext
	// If the runnext is non-0 and the CAS fails, it could only have been stolen by another P,
	// because other Ps can race to set runnext to 0, but only the current P can set it to non-0.
	// Hence, there's no need to retry this CAS if it fails.
	if next != 0 && _p_.runnext.cas(next, 0) {
		return next.ptr(), true
	}

	for {
		h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
		t := _p_.runqtail
		if t == h {
			return nil, false
		}
		gp := _p_.runq[h%uint32(len(_p_.runq))].ptr()
		if atomic.CasRel(&_p_.runqhead, h, h+1) { // cas-release, commits consume
			return gp, false
		}
	}
}
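
// An illustrative aside: runqput/runqget above form a fixed-size ring with a
// single producer (the owner P) and multiple consumers (the owner plus
// stealers). head is advanced by CAS because consumers race; tail is written
// only by the owner. A shrunken standalone sketch with sync/atomic, whose
// sequentially consistent operations are at least as strong as the runtime's
// LoadAcq/StoreRel (hypothetical ring type; not runtime code):
//
//	package main
//
//	import (
//		"fmt"
//		"sync/atomic"
//	)
//
//	type ring struct {
//		head, tail uint32 // accessed atomically; tail written only by the owner
//		buf        [8]int
//	}
//
//	// put is called only by the owner. It reports false when full.
//	func (r *ring) put(v int) bool {
//		h := atomic.LoadUint32(&r.head)
//		t := r.tail
//		if t-h >= uint32(len(r.buf)) {
//			return false
//		}
//		r.buf[t%uint32(len(r.buf))] = v
//		atomic.StoreUint32(&r.tail, t+1) // publish the item
//		return true
//	}
//
//	// get may be called by anyone; the CAS on head arbitrates racing consumers.
//	func (r *ring) get() (int, bool) {
//		for {
//			h := atomic.LoadUint32(&r.head)
//			t := atomic.LoadUint32(&r.tail)
//			if t == h {
//				return 0, false
//			}
//			v := r.buf[h%uint32(len(r.buf))]
//			if atomic.CompareAndSwapUint32(&r.head, h, h+1) {
//				return v, true
//			}
//		}
//	}
//
//	func main() {
//		var r ring
//		r.put(1)
//		r.put(2)
//		v, _ := r.get()
//		fmt.Println(v) // 1
//	}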

// runqdrain drains the local runnable queue of _p_ and returns all goroutines in it.
// Executed only by the owner P.
func runqdrain(_p_ *p) (drainQ gQueue, n uint32) {
	oldNext := _p_.runnext
	if oldNext != 0 && _p_.runnext.cas(oldNext, 0) {
		drainQ.pushBack(oldNext.ptr())
		n++
	}

retry:
	h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
	t := _p_.runqtail
	qn := t - h
	if qn == 0 {
		return
	}
	if qn > uint32(len(_p_.runq)) { // read inconsistent h and t
		goto retry
	}

	if !atomic.CasRel(&_p_.runqhead, h, h+qn) { // cas-release, commits consume
		goto retry
	}

	// Note the order: we advance the head pointer first and only then drain
	// the G's into drainQ. If we drained first, a concurrent runqsteal()
	// could grab the same G's while we are updating their gp.schedlink
	// fields. By committing the head advance first we take full ownership
	// of the G's before touching gp.schedlink; other P's can no longer see
	// or steal them.
	// See https://groups.google.com/g/golang-dev/c/0pTKxEKhHSc/m/6Q85QjdVBQAJ for more details.
	for i := uint32(0); i < qn; i++ {
		gp := _p_.runq[(h+i)%uint32(len(_p_.runq))].ptr()
		drainQ.pushBack(gp)
		n++
	}
	return
}

// Grabs a batch of goroutines from _p_'s runnable queue into batch.
// Batch is a ring buffer starting at batchHead.
// Returns number of grabbed goroutines.
// Can be executed by any P.
func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32 {
	for {
		h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
		t := atomic.LoadAcq(&_p_.runqtail) // load-acquire, synchronize with the producer
		n := t - h
		n = n - n/2
		if n == 0 {
			if stealRunNextG {
				// Try to steal from _p_.runnext.
				if next := _p_.runnext; next != 0 {
					if _p_.status == _Prunning {
						// Sleep to ensure that _p_ isn't about to run the g
						// we are about to steal.
						// The important use case here is when the g running
						// on _p_ ready()s another g and then almost
						// immediately blocks. Instead of stealing runnext
						// in this window, back off to give _p_ a chance to
						// schedule runnext. This will avoid thrashing gs
						// between different Ps.
						// A sync chan send/recv takes ~50ns as of time of
						// writing, so 3us gives ~50x overshoot.
						if GOOS != "windows" {
							usleep(3)
						} else {
							// On windows system timer granularity is
							// 1-15ms, which is way too much for this
							// optimization. So just yield.
							osyield()
						}
					}
					if !_p_.runnext.cas(next, 0) {
						continue
					}
					batch[batchHead%uint32(len(batch))] = next
					return 1
				}
			}
			return 0
		}
		if n > uint32(len(_p_.runq)/2) { // read inconsistent h and t
			continue
		}
		for i := uint32(0); i < n; i++ {
			g := _p_.runq[(h+i)%uint32(len(_p_.runq))]
			batch[(batchHead+i)%uint32(len(batch))] = g
		}
		if atomic.CasRel(&_p_.runqhead, h, h+n) { // cas-release, commits consume
			return n
		}
	}
}

// Steal half of elements from local runnable queue of p2
// and put onto local runnable queue of p.
// Returns one of the stolen elements (or nil if failed).
func runqsteal(_p_, p2 *p, stealRunNextG bool) *g {
	t := _p_.runqtail
	n := runqgrab(p2, &_p_.runq, t, stealRunNextG)
	if n == 0 {
		return nil
	}
	n--
	gp := _p_.runq[(t+n)%uint32(len(_p_.runq))].ptr()
	if n == 0 {
		return gp
	}
	h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with consumers
	if t-h+n >= uint32(len(_p_.runq)) {
		throw("runqsteal: runq overflow")
	}
	atomic.StoreRel(&_p_.runqtail, t+n) // store-release, makes the item available for consumption
	return gp
}

// A gQueue is a deque of Gs linked through g.schedlink. A G can only
// be on one gQueue or gList at a time.
type gQueue struct {
	head guintptr
	tail guintptr
}

// empty reports whether q is empty.
func (q *gQueue) empty() bool {
	return q.head == 0
}

// push adds gp to the head of q.
func (q *gQueue) push(gp *g) {
	gp.schedlink = q.head
	q.head.set(gp)
	if q.tail == 0 {
		q.tail.set(gp)
	}
}

// pushBack adds gp to the tail of q.
func (q *gQueue) pushBack(gp *g) {
	gp.schedlink = 0
	if q.tail != 0 {
		q.tail.ptr().schedlink.set(gp)
	} else {
		q.head.set(gp)
	}
	q.tail.set(gp)
}

// pushBackAll adds all Gs in q2 to the tail of q. After this q2 must
// not be used.
func (q *gQueue) pushBackAll(q2 gQueue) {
	if q2.tail == 0 {
		return
	}
	q2.tail.ptr().schedlink = 0
	if q.tail != 0 {
		q.tail.ptr().schedlink = q2.head
	} else {
		q.head = q2.head
	}
	q.tail = q2.tail
}

// pop removes and returns the head of queue q. It returns nil if
// q is empty.
func (q *gQueue) pop() *g {
	gp := q.head.ptr()
	if gp != nil {
		q.head = gp.schedlink
		if q.head == 0 {
			q.tail = 0
		}
	}
	return gp
}

// popList takes all Gs in q and returns them as a gList.
func (q *gQueue) popList() gList {
	stack := gList{q.head}
	*q = gQueue{}
	return stack
}
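
// An illustrative aside: gQueue is an intrusive queue: the link field lives
// inside the element itself (g.schedlink), so queuing needs no allocation, at
// the cost that an element can sit on only one such container at a time. A
// standalone sketch with an explicit next pointer (hypothetical node/queue
// types; the runtime packs its links into guintptr):
//
//	package main
//
//	import "fmt"
//
//	type node struct {
//		val  int
//		next *node // the intrusive link, like g.schedlink
//	}
//
//	type queue struct{ head, tail *node }
//
//	func (q *queue) pushBack(n *node) {
//		n.next = nil
//		if q.tail != nil {
//			q.tail.next = n
//		} else {
//			q.head = n
//		}
//		q.tail = n
//	}
//
//	func (q *queue) pop() *node {
//		n := q.head
//		if n != nil {
//			q.head = n.next
//			if q.head == nil {
//				q.tail = nil
//			}
//		}
//		return n
//	}
//
//	func main() {
//		var q queue
//		q.pushBack(&node{val: 1})
//		q.pushBack(&node{val: 2})
//		fmt.Println(q.pop().val, q.pop().val) // 1 2
//	}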

// A gList is a list of Gs linked through g.schedlink. A G can only be
// on one gQueue or gList at a time.
type gList struct {
	head guintptr
}

// empty reports whether l is empty.
func (l *gList) empty() bool {
	return l.head == 0
}

// push adds gp to the head of l.
func (l *gList) push(gp *g) {
	gp.schedlink = l.head
	l.head.set(gp)
}

// pushAll prepends all Gs in q to l.
func (l *gList) pushAll(q gQueue) {
	if !q.empty() {
		q.tail.ptr().schedlink = l.head
		l.head = q.head
	}
}

// pop removes and returns the head of l. If l is empty, it returns nil.
func (l *gList) pop() *g {
	gp := l.head.ptr()
	if gp != nil {
		l.head = gp.schedlink
	}
	return gp
}

//go:linkname setMaxThreads runtime_1debug.setMaxThreads
func setMaxThreads(in int) (out int) {
	lock(&sched.lock)
	out = int(sched.maxmcount)
	if in > 0x7fffffff { // MaxInt32
		sched.maxmcount = 0x7fffffff
	} else {
		sched.maxmcount = int32(in)
	}
	checkmcount()
	unlock(&sched.lock)
	return
}

//go:nosplit
func procPin() int {
	_g_ := getg()
	mp := _g_.m

	mp.locks++
	return int(mp.p.ptr().id)
}

//go:nosplit
func procUnpin() {
	_g_ := getg()
	_g_.m.locks--
}

//go:linkname sync_runtime_procPin sync.runtime__procPin
//go:nosplit
func sync_runtime_procPin() int {
	return procPin()
}

//go:linkname sync_runtime_procUnpin sync.runtime__procUnpin
//go:nosplit
func sync_runtime_procUnpin() {
	procUnpin()
}

//go:linkname sync_atomic_runtime_procPin sync_1atomic.runtime__procPin
//go:nosplit
func sync_atomic_runtime_procPin() int {
	return procPin()
}

//go:linkname sync_atomic_runtime_procUnpin sync_1atomic.runtime__procUnpin
//go:nosplit
func sync_atomic_runtime_procUnpin() {
	procUnpin()
}

// Active spinning for sync.Mutex.
//go:linkname sync_runtime_canSpin sync.runtime__canSpin
//go:nosplit
func sync_runtime_canSpin(i int) bool {
	// sync.Mutex is cooperative, so we are conservative with spinning.
	// Spin only few times and only if running on a multicore machine and
	// GOMAXPROCS>1 and there is at least one other running P and local runq is empty.
	// As opposed to runtime mutex we don't do passive spinning here,
	// because there can be work on global runq or on other Ps.
	if i >= active_spin || ncpu <= 1 || gomaxprocs <= int32(sched.npidle+sched.nmspinning)+1 {
		return false
	}
	if p := getg().m.p.ptr(); !runqempty(p) {
		return false
	}
	return true
}

//go:linkname sync_runtime_doSpin sync.runtime__doSpin
//go:nosplit
func sync_runtime_doSpin() {
	procyield(active_spin_cnt)
}

var stealOrder randomOrder

// randomOrder/randomEnum are helper types for randomized work stealing.
// They allow enumerating all Ps in different pseudo-random orders without repetitions.
// The algorithm is based on the fact that if we have X such that X and GOMAXPROCS
// are coprime, then the sequence (i + X) % GOMAXPROCS gives the required enumeration.
type randomOrder struct {
	count    uint32
	coprimes []uint32
}

type randomEnum struct {
	i     uint32
	count uint32
	pos   uint32
	inc   uint32
}

func (ord *randomOrder) reset(count uint32) {
	ord.count = count
	ord.coprimes = ord.coprimes[:0]
	for i := uint32(1); i <= count; i++ {
		if gcd(i, count) == 1 {
			ord.coprimes = append(ord.coprimes, i)
		}
	}
}

func (ord *randomOrder) start(i uint32) randomEnum {
	return randomEnum{
		count: ord.count,
		pos:   i % ord.count,
		inc:   ord.coprimes[i%uint32(len(ord.coprimes))],
	}
}

func (enum *randomEnum) done() bool {
	return enum.i == enum.count
}

func (enum *randomEnum) next() {
	enum.i++
	enum.pos = (enum.pos + enum.inc) % enum.count
}

func (enum *randomEnum) position() uint32 {
	return enum.pos
}

func gcd(a, b uint32) uint32 {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}
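
// An illustrative aside: the coprime-stride trick above is easy to check by
// hand. For any start and any inc with gcd(inc, count) == 1, the sequence
// (pos + inc) % count visits every value in [0, count) exactly once before
// repeating. A standalone demonstration (hypothetical walk; not runtime code):
//
//	package main
//
//	import "fmt"
//
//	func walk(count, start, inc uint32) []uint32 {
//		order := make([]uint32, 0, count)
//		pos := start % count
//		for i := uint32(0); i < count; i++ {
//			order = append(order, pos)
//			pos = (pos + inc) % count
//		}
//		return order
//	}
//
//	func main() {
//		// 5 and 8 are coprime: every value in 0..7 appears exactly once.
//		fmt.Println(walk(8, 3, 5)) // [3 0 5 2 7 4 1 6]
//	}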

// inittrace stores statistics for init functions which are
// updated by malloc and newproc when active is true.
var inittrace tracestat

type tracestat struct {
	active bool   // init tracing activation status
	id     int64  // init goroutine id
	allocs uint64 // heap allocations
	bytes  uint64 // heap allocated bytes
}