/* -*- mode: C; c-basic-offset: 3; -*- */

/*--------------------------------------------------------------------*/
/*--- Implementation of POSIX signals.                 m_signals.c ---*/
/*--------------------------------------------------------------------*/

/*
   This file is part of Valgrind, a dynamic binary instrumentation
   framework.

   Copyright (C) 2000-2017 Julian Seward
      jseward@acm.org

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, see <http://www.gnu.org/licenses/>.

   The GNU General Public License is contained in the file COPYING.
*/
/*
   Signal handling.

   There are 4 distinct classes of signal:

   1. Synchronous, instruction-generated (SIGILL, FPE, BUS, SEGV and
   TRAP): these are signals as a result of an instruction fault.  If
   we get one while running client code, then we just do the
   appropriate thing.  If it happens while running Valgrind code, then
   it indicates a Valgrind bug.  Note that we "manually" implement
   automatic stack growth, such that if a fault happens near the
   client process stack, it is extended in the same way the kernel
   would, and the fault is never reported to the client program.

   2. Asynchronous variants of the above signals: If the kernel tries
   to deliver a sync signal while it is blocked, it just kills the
   process.  Therefore, we can't block those signals if we want to be
   able to report on bugs in Valgrind.  This means that we're also
   open to receiving those signals from other processes, sent with
   kill.  We could get away with just dropping them, since they aren't
   really signals that processes send to each other.

   3. Synchronous, general signals.  If a thread/process sends itself
   a signal with kill, it's expected to be synchronous: ie, the signal
   will have been delivered by the time the syscall finishes.

   4. Asynchronous, general signals.  All other signals, sent by
   another process with kill.  These are generally blocked, except for
   two special cases: we poll for them each time we're about to run a
   thread for a time quantum, and while running blocking syscalls.

   In addition, we reserve one signal for internal use: SIGVGKILL.
   SIGVGKILL is used to terminate threads.  When one thread wants
   another to exit, it will set its exitreason and send it SIGVGKILL
   if it appears to be blocked in a syscall.

   We use a kernel thread for each application thread.  When the
   thread allows itself to be open to signals, it sets the thread
   signal mask to what the client application set it to.  This means
   that we get the kernel to do all signal routing: under Valgrind,
   signals get delivered in the same way as in the non-Valgrind case
   (the exception being for the sync signal set, since they're almost
   always unblocked).

   Some more details...

   First off, we take note of the client's requests (via sys_sigaction
   and sys_sigprocmask) to set the signal state (handlers for each
   signal, which are process-wide, + a mask for each signal, which is
   per-thread).  This info is duly recorded in the SCSS (static Client
   signal state) in m_signals.c, and if the client later queries what
   the state is, we merely fish the relevant info out of SCSS and give
   it back.

   However, we set the real signal state in the kernel to something
   entirely different.  This is recorded in SKSS, the static Kernel
   signal state.  What's nice (to the extent that anything is nice
   w.r.t signals) is that there's a pure function to calculate SKSS
   from SCSS, calculate_SKSS_from_SCSS.  So when the client changes
   SCSS then we recompute the associated SKSS and apply any changes
   from the previous SKSS through to the kernel.
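
   In outline, that flow looks like this (a simplified, illustrative
   sketch; the real work is done by VG_(do_sys_sigaction) and
   handle_SCSS_change further down in this file):

      // client calls sigaction(sig, &act, NULL):
      scss.scss_per_sig[sig].scss_handler = act.sa_handler; // record in SCSS
      calculate_SKSS_from_SCSS( &skss );                    // derive SKSS
      // ... then, for each sig whose SKSS entry changed:
      //    VG_(sigaction)( sig, &ksa, &ksa_old );          // tell the kernel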

   Now, that said, the general scheme we have now is, that regardless of
   what the client puts into the SCSS (viz, asks for), what we would
   like to do is as follows:

   (1) run code on the virtual CPU with all signals blocked

   (2) at convenient moments for us (that is, when the VCPU stops, and
       control is back with the scheduler), ask the kernel "do you have
       any signals for me?"  and if it does, collect up the info, and
       deliver them to the client (by building sigframes).

   And that's almost what we do.  The signal polling is done by
   VG_(poll_signals), which calls through to VG_(sigtimedwait_zero) to
   do the dirty work.  (of which more later).

   By polling signals, rather than catching them, we get to deal with
   them only at convenient moments, rather than having to recover from
   taking a signal while generated code is running.

   Now unfortunately .. the above scheme only works for so-called async
   signals.  An async signal is one which isn't associated with any
   particular instruction, eg Control-C (SIGINT).  For those, it doesn't
   matter if we don't deliver the signal to the client immediately; it
   only matters that we deliver it eventually.  Hence polling is OK.

   But the other group -- sync signals -- are all related by the fact
   that they are various ways for the host CPU to fail to execute an
   instruction: SIGILL, SIGSEGV, SIGFPE.  And they can't be deferred,
   because obviously if a host instruction can't execute, well then we
   have to immediately do Plan B, whatever that is.

   So the next approximation of what happens is:

   (1) run code on vcpu with all async signals blocked

   (2) at convenient moments (when NOT running the vcpu), poll for async
       signals.

   (1) and (2) together imply that if the host does deliver a signal to
       async_signalhandler while the VCPU is running, something's
       seriously wrong.

   (3) when running code on vcpu, don't block sync signals.  Instead
       register sync_signalhandler and catch any such via that.  Of
       course, that means an ugly recovery path if we do -- the
       sync_signalhandler has to longjmp, exiting out of the generated
       code, and the assembly-dispatcher thingy that runs it, and gets
       caught in m_scheduler, which then tells m_signals to deliver the
       signal.

   Now naturally (ha ha) even that might be tolerable, but there's
   something worse: dealing with signals delivered to threads in
   syscalls.

   Obviously from the above, SKSS's signal mask (viz, what we really run
   with) is way different from SCSS's signal mask (viz, what the client
   thread thought it asked for).  (eg) It may well be that the client
   did not block control-C, so that it just expects to drop dead if it
   receives ^C whilst blocked in a syscall, but by default we are
   running with all async signals blocked, and so that signal could be
   arbitrarily delayed, or perhaps even lost (not sure).

   So what we have to do, when doing any syscall which SfMayBlock, is to
   quickly switch in the SCSS-specified signal mask just before the
   syscall, and switch it back just afterwards, and hope that we don't
   get caught up in some weird race condition.  This is the primary
   purpose of the ultra-magical pieces of assembly code in
   coregrind/m_syswrap/syscall-<plat>.S
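
   In outline (a simplified sketch; the real sequence lives in those
   .S files and in m_syswrap):

      VG_(sigprocmask)(VKI_SIG_SETMASK, &scss_mask, &saved);  // open window
      // ... the blocking syscall itself; an async signal arriving
      //     here is delivered to async_signalhandler ...
      VG_(sigprocmask)(VKI_SIG_SETMASK, &saved, NULL);        // close window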

   -----------

   The ways in which V can come to hear of signals that need to be
   forwarded to the client are as follows:

    sync signals: can arrive at any time whatsoever.  These are caught
                  by sync_signalhandler

    async signals:

       if    running generated code
       then  these are blocked, so we don't expect to catch them in
             async_signalhandler

       else
       if    thread is blocked in a syscall marked SfMayBlock
       then  signals may be delivered to async_sighandler, since we
             temporarily unblocked them for the duration of the syscall,
             by using the real (SCSS) mask for this thread

       else  we're doing misc housekeeping activities (eg, making a
             translation, washing our hair, etc).  As in the normal
             case, these signals are blocked, but we can and do poll
             for them using VG_(poll_signals).

   Now, re VG_(poll_signals), it polls the kernel by doing
   VG_(sigtimedwait_zero).  This is trivial on Linux, since it's just a
   syscall.  But on Darwin and AIX, we have to cobble together the
   functionality in a tedious, longwinded and probably error-prone way.

   Finally, if a gdb is debugging the process under valgrind, the
   signal can be ignored if gdb tells us to ignore it.  So, before
   resuming the scheduler/delivering the signal, a call to
   VG_(gdbserver_report_signal) is done.  If this returns True, the
   signal is delivered.
*/

#include "pub_core_basics.h"
#include "pub_core_vki.h"
#include "pub_core_vkiscnums.h"
#include "pub_core_debuglog.h"
#include "pub_core_threadstate.h"
#include "pub_core_xarray.h"
#include "pub_core_clientstate.h"
#include "pub_core_aspacemgr.h"
#include "pub_core_errormgr.h"
#include "pub_core_gdbserver.h"
#include "pub_core_libcbase.h"
#include "pub_core_libcassert.h"
#include "pub_core_libcprint.h"
#include "pub_core_libcproc.h"
#include "pub_core_libcsignal.h"
#include "pub_core_machine.h"
#include "pub_core_mallocfree.h"
#include "pub_core_options.h"
#include "pub_core_scheduler.h"
#include "pub_core_signals.h"
#include "pub_core_sigframe.h"      // For VG_(sigframe_create)()
#include "pub_core_stacks.h"        // For VG_(change_stack)()
#include "pub_core_stacktrace.h"    // For VG_(get_and_pp_StackTrace)()
#include "pub_core_syscall.h"
#include "pub_core_syswrap.h"
#include "pub_core_tooliface.h"
#include "pub_core_coredump.h"


/* ---------------------------------------------------------------------
   Forward decls.
   ------------------------------------------------------------------ */

static void sync_signalhandler  ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void async_signalhandler ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void sigvgkill_handler   ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );

/* Maximum usable signal. */
Int VG_(max_signal) = _VKI_NSIG;

#define N_QUEUED_SIGNALS        8

typedef struct SigQueue {
   Int next;
   vki_siginfo_t sigs[N_QUEUED_SIGNALS];
} SigQueue;

/* ------ Macros for pulling stuff out of ucontexts ------ */

/* Q: what does VG_UCONTEXT_SYSCALL_SYSRES do?  A: let's suppose the
   machine context (uc) reflects the situation that a syscall had just
   completed, quite literally -- that is, that the program counter was
   now at the instruction following the syscall.  (or we're slightly
   downstream, but we're sure no relevant register has yet changed
   value.)  Then VG_UCONTEXT_SYSCALL_SYSRES returns a SysRes reflecting
   the result of the syscall; it does this by fishing relevant bits of
   the machine state out of the uc.  Of course if the program counter
   was somewhere else entirely then the result is likely to be
   meaningless, so the caller of VG_UCONTEXT_SYSCALL_SYSRES has to be
   very careful to pay attention to the results only when it is sure
   that the said constraint on the program counter is indeed valid. */
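
/* An illustrative (not literal) use, assuming the PC constraint above
   holds; sr_isError/sr_Err/sr_Res are the usual SysRes accessors:

      SysRes sres = VG_UCONTEXT_SYSCALL_SYSRES(uc);
      if (sr_isError(sres))
         ...the syscall failed, with error code sr_Err(sres)...
      else
         ...it succeeded, with result sr_Res(sres)...

   (On Darwin the macro additionally takes the syscall class.) */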

#if defined(VGP_x86_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.eip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.esp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.eax into a SysRes. */ \
      VG_(mk_SysRes_x86_linux)( (uc)->uc_mcontext.eax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.eip);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.esp);    \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.ebp;   \
      }

#elif defined(VGP_amd64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.rip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.rsp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.rax into a SysRes. */ \
      VG_(mk_SysRes_amd64_linux)( (uc)->uc_mcontext.rax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)          \
      { (srP)->r_pc = (uc)->uc_mcontext.rip;               \
        (srP)->r_sp = (uc)->uc_mcontext.rsp;               \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.rbp;   \
      }
289 #elif defined(VGP_ppc32_linux)
290 /* Comments from Paul Mackerras 25 Nov 05:
292 > I'm tracking down a problem where V's signal handling doesn't
293 > work properly on a ppc440gx running 2.4.20. The problem is that
294 > the ucontext being presented to V's sighandler seems completely
295 > bogus.
297 > V's kernel headers and hence ucontext layout are derived from
298 > 2.6.9. I compared include/asm-ppc/ucontext.h from 2.4.20 and
299 > 2.6.13.
301 > Can I just check my interpretation: the 2.4.20 one contains the
302 > uc_mcontext field in line, whereas the 2.6.13 one has a pointer
303 > to said struct? And so if V is using the 2.6.13 struct then a
304 > 2.4.20 one will make no sense to it.
306 Not quite... what is inline in the 2.4.20 version is a
307 sigcontext_struct, not an mcontext. The sigcontext looks like
308 this:
310 struct sigcontext_struct {
311 unsigned long _unused[4];
312 int signal;
313 unsigned long handler;
314 unsigned long oldmask;
315 struct pt_regs *regs;
318 The regs pointer of that struct ends up at the same offset as the
319 uc_regs of the 2.6 struct ucontext, and a struct pt_regs is the
320 same as the mc_gregs field of the mcontext. In fact the integer
321 regs are followed in memory by the floating point regs on 2.4.20.
323 Thus if you are using the 2.6 definitions, it should work on 2.4.20
324 provided that you go via uc->uc_regs rather than looking in
325 uc->uc_mcontext directly.
327 There is another subtlety: 2.4.20 doesn't save the vector regs when
328 delivering a signal, and 2.6.x only saves the vector regs if the
329 process has ever used an altivec instructions. If 2.6.x does save
330 the vector regs, it sets the MSR_VEC bit in
331 uc->uc_regs->mc_gregs[PT_MSR], otherwise it clears it. That bit
332 will always be clear under 2.4.20. So you can use that bit to tell
333 whether uc->uc_regs->mc_vregs is valid. */
334 # define VG_UCONTEXT_INSTR_PTR(uc) ((uc)->uc_regs->mc_gregs[VKI_PT_NIP])
335 # define VG_UCONTEXT_STACK_PTR(uc) ((uc)->uc_regs->mc_gregs[VKI_PT_R1])
336 # define VG_UCONTEXT_SYSCALL_SYSRES(uc) \
337 /* Convert the values in uc_mcontext r3,cr into a SysRes. */ \
338 VG_(mk_SysRes_ppc32_linux)( \
339 (uc)->uc_regs->mc_gregs[VKI_PT_R3], \
340 (((uc)->uc_regs->mc_gregs[VKI_PT_CCR] >> 28) & 1) \
342 # define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc) \
343 { (srP)->r_pc = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_NIP]); \
344 (srP)->r_sp = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_R1]); \
345 (srP)->misc.PPC32.r_lr = (uc)->uc_regs->mc_gregs[VKI_PT_LNK]; \

#elif defined(VGP_ppc64be_linux) || defined(VGP_ppc64le_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_NIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_R1])
   /* Dubious hack: if there is an error, only consider the lowest 8
      bits of r3.  memcheck/tests/post-syscall shows a case where an
      interrupted syscall should have produced a ucontext with 0x4
      (VKI_EINTR) in r3 but is in fact producing 0x204. */
   /* Awaiting clarification from PaulM.  Evidently 0x204 is
      ERESTART_RESTARTBLOCK, which shouldn't have made it into user
      space. */
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( struct vki_ucontext* uc )
   {
      ULong err = (uc->uc_mcontext.gp_regs[VKI_PT_CCR] >> 28) & 1;
      ULong r3  = uc->uc_mcontext.gp_regs[VKI_PT_R3];
      if (err) r3 &= 0xFF;
      return VG_(mk_SysRes_ppc64_linux)( r3, err );
   }
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                        \
      { (srP)->r_pc = (uc)->uc_mcontext.gp_regs[VKI_PT_NIP];             \
        (srP)->r_sp = (uc)->uc_mcontext.gp_regs[VKI_PT_R1];              \
        (srP)->misc.PPC64.r_lr = (uc)->uc_mcontext.gp_regs[VKI_PT_LNK];  \
      }

#elif defined(VGP_arm_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.arm_pc)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.arm_sp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                           \
      /* Convert the value in uc_mcontext.arm_r0 into a SysRes. */ \
      VG_(mk_SysRes_arm_linux)( (uc)->uc_mcontext.arm_r0 )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)       \
      { (srP)->r_pc = (uc)->uc_mcontext.arm_pc;         \
        (srP)->r_sp = (uc)->uc_mcontext.arm_sp;         \
        (srP)->misc.ARM.r14 = (uc)->uc_mcontext.arm_lr; \
        (srP)->misc.ARM.r12 = (uc)->uc_mcontext.arm_ip; \
        (srP)->misc.ARM.r11 = (uc)->uc_mcontext.arm_fp; \
        (srP)->misc.ARM.r7  = (uc)->uc_mcontext.arm_r7; \
      }

#elif defined(VGP_arm64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((UWord)((uc)->uc_mcontext.pc))
#  define VG_UCONTEXT_STACK_PTR(uc)       ((UWord)((uc)->uc_mcontext.sp))
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                            \
      /* Convert the value in uc_mcontext.regs[0] into a SysRes. */ \
      VG_(mk_SysRes_arm64_linux)( (uc)->uc_mcontext.regs[0] )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)           \
      { (srP)->r_pc = (uc)->uc_mcontext.pc;                 \
        (srP)->r_sp = (uc)->uc_mcontext.sp;                 \
        (srP)->misc.ARM64.x29 = (uc)->uc_mcontext.regs[29]; \
        (srP)->misc.ARM64.x30 = (uc)->uc_mcontext.regs[30]; \
      }

#elif defined(VGP_x86_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__eip;
   }

   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__esp;
   }

   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* this is complicated by the problem that there are 3 different
         kinds of syscalls, each with its own return convention.
         NB: scclass is a host word, hence UWord is good for both
         amd64-darwin and x86-darwin */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      UInt carry = 1 & ss->__eflags;
      UInt err = 0;
      UInt wLO = 0;
      UInt wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__eax;
            wHI = ss->__edx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__eax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__eax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_x86_darwin)( scclass, err ? True : False,
                                        wHI, wLO );
   }

   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)(ucV);
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__eip);
      srP->r_sp = (ULong)(ss->__esp);
      srP->misc.X86.r_ebp = (UInt)(ss->__ebp);
   }

#elif defined(VGP_amd64_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rip;
   }

   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rsp;
   }

   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* This is copied from the x86-darwin case.  I'm not sure if it
	 is correct. */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      ULong carry = 1 & ss->__rflags;
      ULong err = 0;
      ULong wLO = 0;
      ULong wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__rax;
            wHI = ss->__rdx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__rax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__rax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_amd64_darwin)( scclass, err ? True : False,
                                          wHI, wLO );
   }

   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__rip);
      srP->r_sp = (ULong)(ss->__rsp);
      srP->misc.AMD64.r_rbp = (ULong)(ss->__rbp);
   }

#elif defined(VGP_s390x_linux)

#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.regs.psw.addr)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[15])
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[11])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      VG_(mk_SysRes_s390x_linux)((uc)->uc_mcontext.regs.gprs[2])

#  define VG_UCONTEXT_LINK_REG(uc) ((uc)->uc_mcontext.regs.gprs[14])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                  \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.regs.psw.addr);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.regs.gprs[15]);    \
        (srP)->misc.S390X.r_fp = (uc)->uc_mcontext.regs.gprs[11];  \
        (srP)->misc.S390X.r_lr = (uc)->uc_mcontext.regs.gprs[14];  \
        (srP)->misc.S390X.r_f0 = (uc)->uc_mcontext.fpregs.fprs[0]; \
        (srP)->misc.S390X.r_f1 = (uc)->uc_mcontext.fpregs.fprs[1]; \
        (srP)->misc.S390X.r_f2 = (uc)->uc_mcontext.fpregs.fprs[2]; \
        (srP)->misc.S390X.r_f3 = (uc)->uc_mcontext.fpregs.fprs[3]; \
        (srP)->misc.S390X.r_f4 = (uc)->uc_mcontext.fpregs.fprs[4]; \
        (srP)->misc.S390X.r_f5 = (uc)->uc_mcontext.fpregs.fprs[5]; \
        (srP)->misc.S390X.r_f6 = (uc)->uc_mcontext.fpregs.fprs[6]; \
        (srP)->misc.S390X.r_f7 = (uc)->uc_mcontext.fpregs.fprs[7]; \
      }

#elif defined(VGP_mips32_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   ((UWord)(((uc)->uc_mcontext.sc_pc)))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((UWord)((uc)->uc_mcontext.sc_regs[29]))
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                             \
      /* Convert the values in uc_mcontext.sc_regs into a SysRes. */ \
      VG_(mk_SysRes_mips32_linux)( (uc)->uc_mcontext.sc_regs[2],     \
                                   (uc)->uc_mcontext.sc_regs[3],     \
                                   (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS32.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS32.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS32.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_mips64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc) (((uc)->uc_mcontext.sc_pc))
#  define VG_UCONTEXT_STACK_PTR(uc) ((uc)->uc_mcontext.sc_regs[29])
#  define VG_UCONTEXT_FRAME_PTR(uc) ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                             \
      /* Convert the values in uc_mcontext.sc_regs into a SysRes. */ \
      VG_(mk_SysRes_mips64_linux)((uc)->uc_mcontext.sc_regs[2],      \
                                  (uc)->uc_mcontext.sc_regs[3],      \
                                  (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS64.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS64.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS64.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_nanomips_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   ((UWord)(((uc)->uc_mcontext.sc_pc)))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((UWord)((uc)->uc_mcontext.sc_regs[29]))
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                       \
      VG_(mk_SysRes_nanomips_linux)((uc)->uc_mcontext.sc_regs[4])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS32.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS32.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS32.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_x86_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_EIP])
#  define VG_UCONTEXT_STACK_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_UESP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                               \
      VG_(mk_SysRes_x86_solaris)((uc)->uc_mcontext.gregs[VKI_EFL] & 1, \
                                 (uc)->uc_mcontext.gregs[VKI_EAX],     \
                                 (uc)->uc_mcontext.gregs[VKI_EFL] & 1  \
                                 ? 0 : (uc)->uc_mcontext.gregs[VKI_EDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                 \
      { (srP)->r_pc = (ULong)(uc)->uc_mcontext.gregs[VKI_EIP];    \
        (srP)->r_sp = (ULong)(uc)->uc_mcontext.gregs[VKI_UESP];   \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.gregs[VKI_EBP]; \
      }

#elif defined(VGP_amd64_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RIP])
#  define VG_UCONTEXT_STACK_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RSP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                                     \
      VG_(mk_SysRes_amd64_solaris)((uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1, \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RAX],     \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1  \
                                   ? 0 : (uc)->uc_mcontext.gregs[VKI_REG_RDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                        \
      { (srP)->r_pc = (uc)->uc_mcontext.gregs[VKI_REG_RIP];              \
        (srP)->r_sp = (uc)->uc_mcontext.gregs[VKI_REG_RSP];              \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.gregs[VKI_REG_RBP];  \
      }
#else
#  error Unknown platform
#endif


/* ------ Macros for pulling stuff out of siginfos ------ */

/* These macros allow use of uniform names when working with
   both the Linux and Darwin vki definitions. */
#if defined(VGO_linux)
#  define VKI_SIGINFO_si_addr  _sifields._sigfault._addr
#  define VKI_SIGINFO_si_pid   _sifields._kill._pid
#elif defined(VGO_darwin) || defined(VGO_solaris)
#  define VKI_SIGINFO_si_addr  si_addr
#  define VKI_SIGINFO_si_pid   si_pid
#else
#  error Unknown OS
#endif
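
/* For instance, code below can write info->VKI_SIGINFO_si_addr and
   have it resolve to _sifields._sigfault._addr on Linux but to
   si_addr on Darwin and Solaris. */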


/* ---------------------------------------------------------------------
   HIGH LEVEL STUFF TO DO WITH SIGNALS: POLICY (MOSTLY)
   ------------------------------------------------------------------ */

/* ---------------------------------------------------------------------
   Signal state for this process.
   ------------------------------------------------------------------ */


/* Base-ment of these arrays[_VKI_NSIG].

   Valid signal numbers are 1 .. _VKI_NSIG inclusive.
   Rather than subtracting 1 for indexing these arrays, which
   is tedious and error-prone, they are simply dimensioned 1 larger,
   and entry [0] is not used.
 */


/* -----------------------------------------------------
   Static client signal state (SCSS).  This is the state
   that the client thinks it has the kernel in.
   SCSS records verbatim the client's settings.  These
   are mashed around only when SKSS is calculated from it.
   -------------------------------------------------- */

typedef
   struct {
      void* scss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN or ptr to
                              client's handler */
      UInt  scss_flags;
      vki_sigset_t scss_mask;
      void* scss_restorer; /* where sigreturn goes */
      void* scss_sa_tramp; /* sa_tramp setting, Darwin only */
      /* re _restorer and _sa_tramp, we merely record the values
         supplied when the client does 'sigaction' and give them back
         when requested.  Otherwise they are simply ignored. */
   }
   SCSS_Per_Signal;

typedef
   struct {
      /* per-signal info */
      SCSS_Per_Signal scss_per_sig[1+_VKI_NSIG];

      /* Additional elements to SCSS not stored here:
         - for each thread, the thread's blocking mask
         - for each thread in WaitSIG, the set of waited-on sigs
      */
   }
   SCSS;

static SCSS scss;


/* -----------------------------------------------------
   Static kernel signal state (SKSS).  This is the state
   that we have the kernel in.  It is computed from SCSS.
   -------------------------------------------------- */

/* Let's do:
     sigprocmask assigns to all thread masks
     so that at least everything is always consistent
   Flags:
     SA_SIGINFO -- we always set it, and honour it for the client
     SA_NOCLDSTOP -- passed to kernel
     SA_ONESHOT or SA_RESETHAND -- pass through
     SA_RESTART -- we observe this but set our handlers to always restart
                   (this doesn't apply to the Solaris port)
     SA_NOMASK or SA_NODEFER -- we observe this, but our handlers block
                   everything
     SA_ONSTACK -- pass through
     SA_NOCLDWAIT -- pass through
*/

typedef
   struct {
      void* skss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN
                              or ptr to our handler */
      UInt skss_flags;
      /* There is no skss_mask, since we know that we will always ask
         for all signals to be blocked in our sighandlers. */
      /* Also there is no skss_restorer. */
   }
   SKSS_Per_Signal;

typedef
   struct {
      SKSS_Per_Signal skss_per_sig[1+_VKI_NSIG];
   }
   SKSS;

static SKSS skss;

/* Returns True if the signal is to be ignored.
   To check this, possibly call gdbserver with tid. */
static Bool is_sig_ign(vki_siginfo_t *info, ThreadId tid)
{
   vg_assert(info->si_signo >= 1 && info->si_signo <= _VKI_NSIG);

   /* If VG_(gdbserver_report_signal) says to report the signal, then
      check whether this signal is to be ignored.  GDB might have
      modified si_signo, so we check after the call to gdbserver. */
   return !VG_(gdbserver_report_signal) (info, tid)
      || scss.scss_per_sig[info->si_signo].scss_handler == VKI_SIG_IGN;
}

/* ---------------------------------------------------------------------
   Compute the SKSS required by the current SCSS.
   ------------------------------------------------------------------ */

static
void pp_SKSS ( void )
{
   Int sig;
   VG_(printf)("\n\nSKSS:\n");
   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      VG_(printf)("sig %d:  handler %p, flags 0x%x\n", sig,
                  skss.skss_per_sig[sig].skss_handler,
                  skss.skss_per_sig[sig].skss_flags );
   }
}

/* This is the core, clever bit.  Computation is as follows:

   For each signal
      handler = if client has a handler, then our handler
                else if client is DFL, then our handler as well
                else (client must be IGN)
                then handler is IGN
*/
static
void calculate_SKSS_from_SCSS ( SKSS* dst )
{
   Int   sig;
   UInt  scss_flags;
   UInt  skss_flags;

   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      void *skss_handler;
      void *scss_handler;

      scss_handler = scss.scss_per_sig[sig].scss_handler;
      scss_flags   = scss.scss_per_sig[sig].scss_flags;

      switch(sig) {
         case VKI_SIGSEGV:
         case VKI_SIGBUS:
         case VKI_SIGFPE:
         case VKI_SIGILL:
         case VKI_SIGTRAP:
            /* For these, we always want to catch them and report, even
               if the client code doesn't. */
            skss_handler = sync_signalhandler;
            break;

         case VKI_SIGCONT:
            /* Let the kernel handle SIGCONT unless the client is actually
               catching it. */
         case VKI_SIGCHLD:
         case VKI_SIGWINCH:
         case VKI_SIGURG:
            /* For signals which have a default action of Ignore,
               only set a handler if the client has set a signal handler.
               Otherwise the kernel will interrupt a syscall which
               wouldn't have otherwise been interrupted. */
            if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_DFL)
               skss_handler = VKI_SIG_DFL;
            else if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_IGN)
               skss_handler = VKI_SIG_IGN;
            else
               skss_handler = async_signalhandler;
            break;

         default:
            // VKI_SIGVG* are runtime variables, so we can't make them
            // cases in the switch, so we handle them in the 'default' case.
            if (sig == VG_SIGVGKILL)
               skss_handler = sigvgkill_handler;
            else {
               if (scss_handler == VKI_SIG_IGN)
                  skss_handler = VKI_SIG_IGN;
               else
                  skss_handler = async_signalhandler;
            }
            break;
      }

      /* Flags */

      skss_flags = 0;

      /* SA_NOCLDSTOP, SA_NOCLDWAIT: pass to kernel */
      skss_flags |= scss_flags & (VKI_SA_NOCLDSTOP | VKI_SA_NOCLDWAIT);

      /* SA_ONESHOT: ignore client setting */

#     if !defined(VGO_solaris)
      /* SA_RESTART: ignore client setting and always set it for us.
         Though we never rely on the kernel to restart a
         syscall, we observe whether it wanted to restart the syscall
         or not, which is needed by
         VG_(fixup_guest_state_after_syscall_interrupted) */
      skss_flags |= VKI_SA_RESTART;
#     else
      /* The above does not apply to the Solaris port, where the kernel does
         not directly restart syscalls, but instead it checks SA_RESTART flag
         and if it is set then it returns ERESTART to libc and the library
         actually restarts the syscall. */
      skss_flags |= scss_flags & VKI_SA_RESTART;
#     endif

      /* SA_NOMASK: ignore it */

      /* SA_ONSTACK: client setting is irrelevant here */
      /* We don't set a signal stack, so ignore */

      /* always ask for SA_SIGINFO */
      skss_flags |= VKI_SA_SIGINFO;

      /* use our own restorer */
      skss_flags |= VKI_SA_RESTORER;

      /* Create SKSS entry for this signal. */
      if (sig != VKI_SIGKILL && sig != VKI_SIGSTOP)
         dst->skss_per_sig[sig].skss_handler = skss_handler;
      else
         dst->skss_per_sig[sig].skss_handler = VKI_SIG_DFL;

      dst->skss_per_sig[sig].skss_flags = skss_flags;
   }

   /* Sanity checks. */
   vg_assert(dst->skss_per_sig[VKI_SIGKILL].skss_handler == VKI_SIG_DFL);
   vg_assert(dst->skss_per_sig[VKI_SIGSTOP].skss_handler == VKI_SIG_DFL);

   if (0)
      pp_SKSS();
}


/* ---------------------------------------------------------------------
   After a possible SCSS change, update SKSS and the kernel itself.
   ------------------------------------------------------------------ */

// We need two levels of macro-expansion here to convert __NR_rt_sigreturn
// to a number before converting it to a string... sigh.
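//
// An illustrative sketch of the effect (hypothetical helper macros,
// not used below): with only one level, #x stringifies the literal
// token; the extra level lets the argument expand first.
//
//    #define STR1(x) #x
//    #define STR2(x) STR1(x)
//    STR1(__NR_rt_sigreturn)  // -> "__NR_rt_sigreturn" (no expansion)
//    STR2(__NR_rt_sigreturn)  // -> its numeric value, e.g. "15" on
//                             //    amd64-linux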
extern void my_sigreturn(void);

#if defined(VGP_x86_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movl $" #name ", %eax\n" \
   "   int  $0x80\n" \
   ".previous\n"

#elif defined(VGP_amd64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movq $" #name ", %rax\n" \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_ppc32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   li 0, " #name "\n" \
   "   sc\n" \
   ".previous\n"

#elif defined(VGP_ppc64be_linux)
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".section \".opd\",\"aw\"\n" \
   ".align   3\n" \
   "my_sigreturn:\n" \
   ".quad    .my_sigreturn,.TOC.@tocbase,0\n" \
   ".previous\n" \
   ".type    .my_sigreturn,@function\n" \
   ".globl   .my_sigreturn\n" \
   ".my_sigreturn:\n" \
   "   li 0, " #name "\n" \
   "   sc\n"

#elif defined(VGP_ppc64le_linux)
/* Little Endian supports ELF version 2.  In the future, it may
 * support other versions.
 */
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".type    .my_sigreturn,@function\n" \
   "my_sigreturn:\n" \
   "#if _CALL_ELF == 2 \n" \
   "0: addis        2,12,.TOC.-0b@ha\n" \
   "   addi         2,2,.TOC.-0b@l\n" \
   "   .localentry my_sigreturn,.-my_sigreturn\n" \
   "#endif \n" \
   "   sc\n" \
   "   .size my_sigreturn,.-my_sigreturn\n"

#elif defined(VGP_arm_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "    mov  r7, #" #name "\n\t" \
   "    svc  0x00000000\n" \
   ".previous\n"

#elif defined(VGP_arm64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "    mov  x8, #" #name "\n\t" \
   "    svc  0x0\n" \
   ".previous\n"

#elif defined(VGP_x86_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movl $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%eax\n" \
   "    int $0x80\n"

#elif defined(VGP_amd64_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movq $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%rax\n" \
   "    syscall\n"

#elif defined(VGP_s390x_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   " svc " #name "\n" \
   ".previous\n"

#elif defined(VGP_mips32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $2, " #name "\n" /* apparently $2 is v0 */ \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_mips64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $2, " #name "\n" \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_nanomips_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $t4, " #name "\n" \
   "   syscall[32]\n" \
   ".previous\n"

#elif defined(VGP_x86_solaris) || defined(VGP_amd64_solaris)
/* Not used on Solaris. */
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "ud2\n" \
   ".previous\n"

#else
#  error Unknown platform
#endif

#define MY_SIGRETURN(name)  _MY_SIGRETURN(name)
asm(
   MY_SIGRETURN(__NR_rt_sigreturn)
);


static void handle_SCSS_change ( Bool force_update )
{
   Int  res, sig;
   SKSS skss_old;
   vki_sigaction_toK_t   ksa;
   vki_sigaction_fromK_t ksa_old;

   /* Remember old SKSS and calculate new one. */
   skss_old = skss;
   calculate_SKSS_from_SCSS ( &skss );

   /* Compare the new SKSS entries vs the old ones, and update kernel
      where they differ. */
   for (sig = 1; sig <= VG_(max_signal); sig++) {

      /* Trying to do anything with SIGKILL is pointless; just ignore
         it. */
      if (sig == VKI_SIGKILL || sig == VKI_SIGSTOP)
         continue;

      if (!force_update) {
         if ((skss_old.skss_per_sig[sig].skss_handler
              == skss.skss_per_sig[sig].skss_handler)
             && (skss_old.skss_per_sig[sig].skss_flags
                 == skss.skss_per_sig[sig].skss_flags))
            /* no difference */
            continue;
      }

      ksa.ksa_handler = skss.skss_per_sig[sig].skss_handler;
      ksa.sa_flags    = skss.skss_per_sig[sig].skss_flags;
#     if !defined(VGP_ppc32_linux) && \
         !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGP_mips32_linux) && !defined(VGO_solaris)
      ksa.sa_restorer = my_sigreturn;
#     endif
      /* Re above ifdef (also the assertion below), PaulM says:
         The sa_restorer field is not used at all on ppc.  Glibc
         converts the sigaction you give it into a kernel sigaction,
         but it doesn't put anything in the sa_restorer field.
      */

      /* block all signals in handler */
      VG_(sigfillset)( &ksa.sa_mask );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGKILL );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGSTOP );

      if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
         VG_(dmsg)("setting ksig %d to: hdlr %p, flags 0x%lx, "
                   "mask(msb..lsb) 0x%llx 0x%llx\n",
                   sig, ksa.ksa_handler,
                   (UWord)ksa.sa_flags,
                   _VKI_NSIG_WORDS > 1 ? (ULong)ksa.sa_mask.sig[1] : 0,
                   (ULong)ksa.sa_mask.sig[0]);

      res = VG_(sigaction)( sig, &ksa, &ksa_old );
      vg_assert(res == 0);

      /* Since we got the old sigaction more or less for free, might
         as well extract the maximum sanity-check value from it. */
      if (!force_update) {
         vg_assert(ksa_old.ksa_handler
                   == skss_old.skss_per_sig[sig].skss_handler);
#        if defined(VGO_solaris)
         if (ksa_old.ksa_handler == VKI_SIG_DFL
             || ksa_old.ksa_handler == VKI_SIG_IGN) {
            /* The Solaris kernel ignores signal flags (except SA_NOCLDWAIT
               and SA_NOCLDSTOP) and a signal mask if a handler is set to
               SIG_DFL or SIG_IGN. */
            skss_old.skss_per_sig[sig].skss_flags
               &= (VKI_SA_NOCLDWAIT | VKI_SA_NOCLDSTOP);
            vg_assert(VG_(isemptysigset)( &ksa_old.sa_mask ));
            VG_(sigfillset)( &ksa_old.sa_mask );
         }
#        endif
         vg_assert(ksa_old.sa_flags
                   == skss_old.skss_per_sig[sig].skss_flags);
#        if !defined(VGP_ppc32_linux) && \
            !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
            !defined(VGP_mips32_linux) && !defined(VGP_mips64_linux) && \
            !defined(VGP_nanomips_linux) && !defined(VGO_solaris)
         vg_assert(ksa_old.sa_restorer == my_sigreturn);
#        endif
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGKILL );
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGSTOP );
         vg_assert(VG_(isfullsigset)( &ksa_old.sa_mask ));
      }
   }
}


/* ---------------------------------------------------------------------
   Update/query SCSS in accordance with client requests.
   ------------------------------------------------------------------ */

/* Logic for this alt-stack stuff copied directly from do_sigaltstack
   in kernel/signal.[ch] */

/* True if we are on the alternate signal stack. */
static Bool on_sig_stack ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   return (m_SP - (Addr)tst->altstack.ss_sp < (Addr)tst->altstack.ss_size);
}

static Int sas_ss_flags ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   return (tst->altstack.ss_size == 0
              ? VKI_SS_DISABLE
              : on_sig_stack(tid, m_SP) ? VKI_SS_ONSTACK : 0);
}
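
/* Worked example of the above (illustrative numbers only): with
   altstack.ss_sp == 0x1000 and altstack.ss_size == 0x2000,

      sas_ss_flags(tid, 0x1800) == VKI_SS_ONSTACK   // SP inside the stack
      sas_ss_flags(tid, 0x3800) == 0                // SP outside it

   and with ss_size == 0 it returns VKI_SS_DISABLE regardless of SP. */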


SysRes VG_(do_sys_sigaltstack) ( ThreadId tid, vki_stack_t* ss, vki_stack_t* oss )
{
   Addr m_SP;

   vg_assert(VG_(is_valid_tid)(tid));
   m_SP = VG_(get_SP)(tid);

   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaltstack: tid %u, "
                "ss %p{%p,sz=%llu,flags=0x%llx}, oss %p (current SP %p)\n",
                tid, (void*)ss,
                ss ? ss->ss_sp : 0,
                (ULong)(ss ? ss->ss_size : 0),
                (ULong)(ss ? ss->ss_flags : 0),
                (void*)oss, (void*)m_SP);

   if (oss != NULL) {
      oss->ss_sp    = VG_(threads)[tid].altstack.ss_sp;
      oss->ss_size  = VG_(threads)[tid].altstack.ss_size;
      oss->ss_flags = VG_(threads)[tid].altstack.ss_flags
                      | sas_ss_flags(tid, m_SP);
   }

   if (ss != NULL) {
      if (on_sig_stack(tid, VG_(get_SP)(tid))) {
         return VG_(mk_SysRes_Error)( VKI_EPERM );
      }
      if (ss->ss_flags != VKI_SS_DISABLE
          && ss->ss_flags != VKI_SS_ONSTACK
          && ss->ss_flags != 0) {
         return VG_(mk_SysRes_Error)( VKI_EINVAL );
      }
      if (ss->ss_flags == VKI_SS_DISABLE) {
         VG_(threads)[tid].altstack.ss_flags = VKI_SS_DISABLE;
      } else {
         if (ss->ss_size < VKI_MINSIGSTKSZ) {
            return VG_(mk_SysRes_Error)( VKI_ENOMEM );
         }

         VG_(threads)[tid].altstack.ss_sp    = ss->ss_sp;
         VG_(threads)[tid].altstack.ss_size  = ss->ss_size;
         VG_(threads)[tid].altstack.ss_flags = 0;
      }
   }
   return VG_(mk_SysRes_Success)( 0 );
}


SysRes VG_(do_sys_sigaction) ( Int signo,
                               const vki_sigaction_toK_t* new_act,
                               vki_sigaction_fromK_t* old_act )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaction: sigNo %d, "
                "new %#lx, old %#lx, new flags 0x%llx\n",
                signo, (UWord)new_act, (UWord)old_act,
                (ULong)(new_act ? new_act->sa_flags : 0));

   /* Rule out various error conditions.  The aim is to ensure that
      if the call is passed to the kernel it will definitely
      succeed. */

   /* Reject out-of-range signal numbers. */
   if (signo < 1 || signo > VG_(max_signal)) goto bad_signo;

   /* don't let them use our signals */
   if ( (signo > VG_SIGVGRTUSERMAX)
        && new_act
        && !(new_act->ksa_handler == VKI_SIG_DFL
             || new_act->ksa_handler == VKI_SIG_IGN) )
      goto bad_signo_reserved;

   /* Reject attempts to set a handler (or set ignore) for SIGKILL. */
   if ( (signo == VKI_SIGKILL || signo == VKI_SIGSTOP)
       && new_act
       && new_act->ksa_handler != VKI_SIG_DFL)
      goto bad_sigkill_or_sigstop;

   /* If the client supplied non-NULL old_act, copy the relevant SCSS
      entry into it. */
   if (old_act) {
      old_act->ksa_handler = scss.scss_per_sig[signo].scss_handler;
      old_act->sa_flags    = scss.scss_per_sig[signo].scss_flags;
      old_act->sa_mask     = scss.scss_per_sig[signo].scss_mask;
#     if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGO_solaris)
      old_act->sa_restorer = scss.scss_per_sig[signo].scss_restorer;
#     endif
   }

   /* And now copy new SCSS entry from new_act. */
   if (new_act) {
      scss.scss_per_sig[signo].scss_handler  = new_act->ksa_handler;
      scss.scss_per_sig[signo].scss_flags    = new_act->sa_flags;
      scss.scss_per_sig[signo].scss_mask     = new_act->sa_mask;

      scss.scss_per_sig[signo].scss_restorer = NULL;
#     if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGO_solaris)
      scss.scss_per_sig[signo].scss_restorer = new_act->sa_restorer;
#     endif

      scss.scss_per_sig[signo].scss_sa_tramp = NULL;
#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
      scss.scss_per_sig[signo].scss_sa_tramp = new_act->sa_tramp;
#     endif

      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGKILL);
      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGSTOP);
   }

   /* All happy bunnies ... */
   if (new_act) {
      handle_SCSS_change( False /* lazy update */ );
   }
   return VG_(mk_SysRes_Success)( 0 );

  bad_signo:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: bad signal number %d in sigaction()\n", signo);
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_signo_reserved:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is used internally by Valgrind\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_sigkill_or_sigstop:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is uncatchable\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );
}

static
void do_sigprocmask_bitops ( Int vki_how,
                             vki_sigset_t* orig_set,
                             vki_sigset_t* modifier )
{
   switch (vki_how) {
      case VKI_SIG_BLOCK:
         VG_(sigaddset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_UNBLOCK:
         VG_(sigdelset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_SETMASK:
         *orig_set = *modifier;
         break;
      default:
         VG_(core_panic)("do_sigprocmask_bitops");
         break;
   }
}
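
/* For example (illustrative): if *orig_set is {SIGINT} and *modifier
   is {SIGTERM}, the resulting *orig_set is

      VKI_SIG_BLOCK    -> {SIGINT, SIGTERM}    (union)
      VKI_SIG_UNBLOCK  -> {SIGINT}             (set difference)
      VKI_SIG_SETMASK  -> {SIGTERM}            (replacement)          */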

static
HChar* format_sigset ( const vki_sigset_t* set )
{
   static HChar buf[_VKI_NSIG_WORDS * 16 + 1];
   int w;

   VG_(strcpy)(buf, "");

   for (w = _VKI_NSIG_WORDS - 1; w >= 0; w--)
   {
#     if _VKI_NSIG_BPW == 32
      VG_(sprintf)(buf + VG_(strlen)(buf), "%08llx",
                   set ? (ULong)set->sig[w] : 0);
#     elif _VKI_NSIG_BPW == 64
      VG_(sprintf)(buf + VG_(strlen)(buf), "%16llx",
                   set ? (ULong)set->sig[w] : 0);
#     else
#       error "Unsupported value for _VKI_NSIG_BPW"
#     endif
   }

   return buf;
}

/*
   This updates the thread's signal mask.  There's no such thing as a
   process-wide signal mask.

   Note that the thread signal masks are an implicit part of SCSS,
   which is why this routine is allowed to mess with them.
*/
static
void do_setmask ( ThreadId tid,
                  Int how,
                  vki_sigset_t* newset,
                  vki_sigset_t* oldset )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("do_setmask: tid = %u how = %d (%s), newset = %p (%s)\n",
                tid, how,
                how==VKI_SIG_BLOCK ? "SIG_BLOCK" : (
                   how==VKI_SIG_UNBLOCK ? "SIG_UNBLOCK" : (
                      how==VKI_SIG_SETMASK ? "SIG_SETMASK" : "???")),
                newset, newset ? format_sigset(newset) : "NULL" );

   /* Just do this thread. */
   vg_assert(VG_(is_valid_tid)(tid));
   if (oldset) {
      *oldset = VG_(threads)[tid].sig_mask;
      if (VG_(clo_trace_signals))
         VG_(dmsg)("\toldset=%p %s\n", oldset, format_sigset(oldset));
   }
   if (newset) {
      do_sigprocmask_bitops (how, &VG_(threads)[tid].sig_mask, newset );
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGKILL);
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGSTOP);
      VG_(threads)[tid].tmp_sig_mask = VG_(threads)[tid].sig_mask;
   }
}


SysRes VG_(do_sys_sigprocmask) ( ThreadId tid,
                                 Int how,
                                 vki_sigset_t* set,
                                 vki_sigset_t* oldset )
{
   /* When 'set' is NULL, the 'how' flag should be ignored, because we
      are only asking the kernel to put the current mask into
      'oldset'.  This is what the Linux sigprocmask man page says, and
      the same is specified for POSIX. */
   if (set != NULL) {
      switch(how) {
         case VKI_SIG_BLOCK:
         case VKI_SIG_UNBLOCK:
         case VKI_SIG_SETMASK:
            break;

         default:
            VG_(dmsg)("sigprocmask: unknown 'how' field %d\n", how);
            return VG_(mk_SysRes_Error)( VKI_EINVAL );
      }
   }

   vg_assert(VG_(is_valid_tid)(tid));
   do_setmask(tid, how, set, oldset);
   return VG_(mk_SysRes_Success)( 0 );
}


/* ---------------------------------------------------------------------
   LOW LEVEL STUFF TO DO WITH SIGNALS: IMPLEMENTATION
   ------------------------------------------------------------------ */

/* ---------------------------------------------------------------------
   Handy utilities to block/restore all host signals.
   ------------------------------------------------------------------ */

/* Block all host signals, dumping the old mask in *saved_mask. */
static void block_all_host_signals ( /* OUT */ vki_sigset_t* saved_mask )
{
   Int ret;
   vki_sigset_t block_procmask;
   VG_(sigfillset)(&block_procmask);
   ret = VG_(sigprocmask)
            (VKI_SIG_SETMASK, &block_procmask, saved_mask);
   vg_assert(ret == 0);
}

/* Restore the blocking mask using the supplied saved one. */
static void restore_all_host_signals ( /* IN */ vki_sigset_t* saved_mask )
{
   Int ret;
   ret = VG_(sigprocmask)(VKI_SIG_SETMASK, saved_mask, NULL);
   vg_assert(ret == 0);
}

void VG_(clear_out_queued_signals)( ThreadId tid, vki_sigset_t* saved_mask )
{
   block_all_host_signals(saved_mask);
   if (VG_(threads)[tid].sig_queue != NULL) {
      VG_(free)(VG_(threads)[tid].sig_queue);
      VG_(threads)[tid].sig_queue = NULL;
   }
   restore_all_host_signals(saved_mask);
}

/* ---------------------------------------------------------------------
   The signal simulation proper.  A simplified version of what the
   Linux kernel does.
   ------------------------------------------------------------------ */

/* Set up a stack frame (VgSigContext) for the client's signal
   handler. */
static
void push_signal_frame ( ThreadId tid, const vki_siginfo_t *siginfo,
                         const struct vki_ucontext *uc )
{
   Bool         on_altstack;
   Addr         esp_top_of_frame;
   ThreadState* tst;
   Int          sigNo = siginfo->si_signo;

   vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
   vg_assert(VG_(is_valid_tid)(tid));
   tst = & VG_(threads)[tid];

   if (VG_(clo_trace_signals)) {
      VG_(dmsg)("push_signal_frame (thread %u): signal %d\n", tid, sigNo);
      VG_(get_and_pp_StackTrace)(tid, 10);
   }

   if (/* this signal asked to run on an alt stack */
       (scss.scss_per_sig[sigNo].scss_flags & VKI_SA_ONSTACK )
       && /* there is a defined and enabled alt stack, which we're not
             already using.  Logic from get_sigframe in
             arch/i386/kernel/signal.c. */
          sas_ss_flags(tid, VG_(get_SP)(tid)) == 0
      ) {
      on_altstack = True;
      esp_top_of_frame
         = (Addr)(tst->altstack.ss_sp) + tst->altstack.ss_size;
      if (VG_(clo_trace_signals))
         VG_(dmsg)("delivering signal %d (%s) to thread %u: "
                   "on ALT STACK (%p-%p; %ld bytes)\n",
                   sigNo, VG_(signame)(sigNo), tid, tst->altstack.ss_sp,
                   (UChar *)tst->altstack.ss_sp + tst->altstack.ss_size,
                   (Word)tst->altstack.ss_size );
   } else {
      on_altstack = False;
      esp_top_of_frame = VG_(get_SP)(tid) - VG_STACK_REDZONE_SZB;
   }

   /* Signal delivery to tools */
   VG_TRACK( pre_deliver_signal, tid, sigNo, on_altstack );

   vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_IGN);
   vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_DFL);

   /* This may fail if the client stack is busted; if that happens,
      the whole process will exit rather than simply calling the
      signal handler. */
   VG_(sigframe_create) (tid, on_altstack, esp_top_of_frame, siginfo, uc,
                         scss.scss_per_sig[sigNo].scss_handler,
                         scss.scss_per_sig[sigNo].scss_flags,
                         &tst->sig_mask,
                         scss.scss_per_sig[sigNo].scss_restorer);
}


const HChar *VG_(signame)(Int sigNo)
{
   static HChar buf[20];  // large enough

   switch(sigNo) {
      case VKI_SIGHUP:    return "SIGHUP";
      case VKI_SIGINT:    return "SIGINT";
      case VKI_SIGQUIT:   return "SIGQUIT";
      case VKI_SIGILL:    return "SIGILL";
      case VKI_SIGTRAP:   return "SIGTRAP";
      case VKI_SIGABRT:   return "SIGABRT";
      case VKI_SIGBUS:    return "SIGBUS";
      case VKI_SIGFPE:    return "SIGFPE";
      case VKI_SIGKILL:   return "SIGKILL";
      case VKI_SIGUSR1:   return "SIGUSR1";
      case VKI_SIGUSR2:   return "SIGUSR2";
      case VKI_SIGSEGV:   return "SIGSEGV";
      case VKI_SIGSYS:    return "SIGSYS";
      case VKI_SIGPIPE:   return "SIGPIPE";
      case VKI_SIGALRM:   return "SIGALRM";
      case VKI_SIGTERM:   return "SIGTERM";
#     if defined(VKI_SIGSTKFLT)
      case VKI_SIGSTKFLT: return "SIGSTKFLT";
#     endif
      case VKI_SIGCHLD:   return "SIGCHLD";
      case VKI_SIGCONT:   return "SIGCONT";
      case VKI_SIGSTOP:   return "SIGSTOP";
      case VKI_SIGTSTP:   return "SIGTSTP";
      case VKI_SIGTTIN:   return "SIGTTIN";
      case VKI_SIGTTOU:   return "SIGTTOU";
      case VKI_SIGURG:    return "SIGURG";
      case VKI_SIGXCPU:   return "SIGXCPU";
      case VKI_SIGXFSZ:   return "SIGXFSZ";
      case VKI_SIGVTALRM: return "SIGVTALRM";
      case VKI_SIGPROF:   return "SIGPROF";
      case VKI_SIGWINCH:  return "SIGWINCH";
      case VKI_SIGIO:     return "SIGIO";
#     if defined(VKI_SIGPWR)
      case VKI_SIGPWR:    return "SIGPWR";
#     endif
#     if defined(VKI_SIGUNUSED) && (VKI_SIGUNUSED != VKI_SIGSYS)
      case VKI_SIGUNUSED: return "SIGUNUSED";
#     endif

      /* Solaris-specific signals. */
#     if defined(VKI_SIGEMT)
      case VKI_SIGEMT:    return "SIGEMT";
#     endif
#     if defined(VKI_SIGWAITING)
      case VKI_SIGWAITING: return "SIGWAITING";
#     endif
#     if defined(VKI_SIGLWP)
      case VKI_SIGLWP:    return "SIGLWP";
#     endif
#     if defined(VKI_SIGFREEZE)
      case VKI_SIGFREEZE: return "SIGFREEZE";
#     endif
#     if defined(VKI_SIGTHAW)
      case VKI_SIGTHAW:   return "SIGTHAW";
#     endif
#     if defined(VKI_SIGCANCEL)
      case VKI_SIGCANCEL: return "SIGCANCEL";
#     endif
#     if defined(VKI_SIGLOST)
      case VKI_SIGLOST:   return "SIGLOST";
#     endif
#     if defined(VKI_SIGXRES)
      case VKI_SIGXRES:   return "SIGXRES";
#     endif
#     if defined(VKI_SIGJVM1)
      case VKI_SIGJVM1:   return "SIGJVM1";
#     endif
#     if defined(VKI_SIGJVM2)
      case VKI_SIGJVM2:   return "SIGJVM2";
#     endif

#     if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
      case VKI_SIGRTMIN ... VKI_SIGRTMAX:
         VG_(sprintf)(buf, "SIGRT%d", sigNo-VKI_SIGRTMIN);
         return buf;
#     endif

      default:
         VG_(sprintf)(buf, "SIG%d", sigNo);
         return buf;
   }
}
1590 /* Hit ourselves with a signal using the default handler */
1591 void VG_(kill_self)(Int sigNo)
1593 Int r;
1594 vki_sigset_t mask, origmask;
1595 vki_sigaction_toK_t sa, origsa2;
1596 vki_sigaction_fromK_t origsa;
1598 sa.ksa_handler = VKI_SIG_DFL;
1599 sa.sa_flags = 0;
1600 # if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
1601 !defined(VGO_solaris)
1602 sa.sa_restorer = 0;
1603 # endif
1604 VG_(sigemptyset)(&sa.sa_mask);
1606 VG_(sigaction)(sigNo, &sa, &origsa);
1608 VG_(sigemptyset)(&mask);
1609 VG_(sigaddset)(&mask, sigNo);
1610 VG_(sigprocmask)(VKI_SIG_UNBLOCK, &mask, &origmask);
1612 r = VG_(kill)(VG_(getpid)(), sigNo);
1613 # if !defined(VGO_darwin)
1614 /* This sometimes fails with EPERM on Darwin. I don't know why. */
1615 vg_assert(r == 0);
1616 # endif
1618 VG_(convert_sigaction_fromK_to_toK)( &origsa, &origsa2 );
1619 VG_(sigaction)(sigNo, &origsa2, NULL);
1620 VG_(sigprocmask)(VKI_SIG_SETMASK, &origmask, NULL);
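/* For reference, the same dance in plain POSIX outside Valgrind (a
   minimal sketch; error checking elided): force the default
   disposition, unblock the signal, deliver it to ourselves, then
   restore the original handler and mask. */
#if 0
#include <signal.h>
#include <unistd.h>

static void kill_self_posix(int signo)
{
   struct sigaction sa, origsa;
   sigset_t mask, origmask;

   sa.sa_handler = SIG_DFL;               /* default action, as above */
   sa.sa_flags = 0;
   sigemptyset(&sa.sa_mask);
   sigaction(signo, &sa, &origsa);

   sigemptyset(&mask);
   sigaddset(&mask, signo);
   sigprocmask(SIG_UNBLOCK, &mask, &origmask);

   kill(getpid(), signo);       /* delivered before kill() returns */

   sigaction(signo, &origsa, NULL);           /* restore disposition */
   sigprocmask(SIG_SETMASK, &origmask, NULL); /* restore mask */
}
#endif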
1623 // The si_code describes where the signal came from. Some come from the
1624 // kernel, eg.: seg faults, illegal opcodes. Some come from the user, eg.:
1625 // from kill() (SI_USER), or timer_settime() (SI_TIMER), or an async I/O
1626 // request (SI_ASYNCIO). There's lots of implementation-defined leeway in
1627 // POSIX, but the user vs. kernel distinction is what we want here. We also
1628 // pass in some other details that can help when si_code is unreliable.
1629 static Bool is_signal_from_kernel(ThreadId tid, int signum, int si_code)
1631 # if defined(VGO_linux) || defined(VGO_solaris)
1632 // On Linux, SI_USER is zero, negative values are from the user, positive
1633 // values are from the kernel. There are SI_FROMUSER and SI_FROMKERNEL
1634 // macros but we don't use them here because other platforms don't have
1635 // them.
1636 return ( si_code > VKI_SI_USER ? True : False );
1638 # elif defined(VGO_darwin)
1639 // On Darwin 9.6.0, the si_code is completely unreliable. It should be the
1640 // case that 0 means "user", and >0 means "kernel". But:
1641 // - For SIGSEGV, it seems quite reliable.
1642 // - For SIGBUS, it's always 2.
1643 // - For SIGFPE, it's often 0, even for kernel ones (eg.
1644 // integer div-by-zero always gives zero).
1645 // - For SIGILL, it's unclear.
1646 // - For SIGTRAP, it's always 1.
1647 // You can see the "NOTIMP" (not implemented) status of a number of the
1648 // sub-cases in sys/signal.h. Hopefully future versions of Darwin will
1649 // get this right.
1651 // If we're blocked waiting on a syscall, it must be a user signal, because
1652 // the kernel won't generate sync signals within syscalls.
1653 if (VG_(threads)[tid].status == VgTs_WaitSys) {
1654 return False;
1656 // If it's a SIGSEGV, use the proper condition, since it's fairly reliable.
1657 } else if (VKI_SIGSEGV == signum) {
1658 return ( si_code > 0 ? True : False );
1660 // If it's anything else, assume it's kernel-generated. Reason being that
1661 // kernel-generated sync signals are more common, and it's probable that
1662 // misdiagnosing a user signal as a kernel signal is better than the
1663 // opposite.
1664 } else {
1665 return True;
1667 # else
1668 # error Unknown OS
1669 # endif
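/* For reference, a standalone Linux sketch of the si_code convention
   relied on above (hypothetical test program; printf in a handler is
   for demonstration only): */
#if 0
#include <signal.h>
#include <stdio.h>

static void show_origin(int sig, siginfo_t *si, void *ucontext)
{
   /* >0 (e.g. SEGV_MAPERR) means kernel-generated; SI_USER (0) or a
      negative code (e.g. SI_TKILL, which raise() produces) means
      user-sent. */
   printf("si_code=%d: from %s\n", si->si_code,
          si->si_code > SI_USER ? "kernel" : "user");
}

int main(void)
{
   struct sigaction sa;
   sa.sa_sigaction = show_origin;
   sa.sa_flags = SA_SIGINFO;
   sigemptyset(&sa.sa_mask);
   sigaction(SIGSEGV, &sa, NULL);
   raise(SIGSEGV);              /* user-sent: prints "from user" */
   return 0;
}
#endif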
1673 Perform the default action of a signal. If the signal is fatal, it
1674 terminates all other threads, but it doesn't actually kill
1675 the process or the calling thread.
1677 If we're not being quiet, then print out some more detail about
1678 fatal signals (esp. core dumping signals).
1680 static void default_action(const vki_siginfo_t *info, ThreadId tid)
1682 Int sigNo = info->si_signo;
1683 Bool terminate = False; /* kills process */
1684 Bool core = False; /* kills process w/ core */
1685 struct vki_rlimit corelim;
1686 Bool could_core;
1687 ThreadState* tst = VG_(get_ThreadState)(tid);
1689 vg_assert(VG_(is_running_thread)(tid));
1691 switch(sigNo) {
1692 case VKI_SIGQUIT: /* core */
1693 case VKI_SIGILL: /* core */
1694 case VKI_SIGABRT: /* core */
1695 case VKI_SIGFPE: /* core */
1696 case VKI_SIGSEGV: /* core */
1697 case VKI_SIGBUS: /* core */
1698 case VKI_SIGTRAP: /* core */
1699 case VKI_SIGSYS: /* core */
1700 case VKI_SIGXCPU: /* core */
1701 case VKI_SIGXFSZ: /* core */
1703 /* Solaris-specific signals. */
1704 # if defined(VKI_SIGEMT)
1705 case VKI_SIGEMT: /* core */
1706 # endif
1708 terminate = True;
1709 core = True;
1710 break;
1712 case VKI_SIGHUP: /* term */
1713 case VKI_SIGINT: /* term */
1714 case VKI_SIGKILL: /* term - we won't see this */
1715 case VKI_SIGPIPE: /* term */
1716 case VKI_SIGALRM: /* term */
1717 case VKI_SIGTERM: /* term */
1718 case VKI_SIGUSR1: /* term */
1719 case VKI_SIGUSR2: /* term */
1720 case VKI_SIGIO: /* term */
1721 # if defined(VKI_SIGPWR)
1722 case VKI_SIGPWR: /* term */
1723 # endif
1724 case VKI_SIGPROF: /* term */
1725 case VKI_SIGVTALRM: /* term */
1726 # if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
1727 case VKI_SIGRTMIN ... VKI_SIGRTMAX: /* term */
1728 # endif
1730 /* Solaris-specific signals. */
1731 # if defined(VKI_SIGLOST)
1732 case VKI_SIGLOST: /* term */
1733 # endif
1735 terminate = True;
1736 break;
1739 vg_assert(!core || (core && terminate));
1741 if (VG_(clo_trace_signals))
1742 VG_(dmsg)("delivering %d (code %d) to default handler; action: %s%s\n",
1743 sigNo, info->si_code, terminate ? "terminate" : "ignore",
1744 core ? "+core" : "");
1746 if (!terminate)
1747 return; /* nothing to do */
1749 #if defined(VGO_linux)
1750 if (terminate && (tst->ptrace & VKI_PT_PTRACED)
1751 && (sigNo != VKI_SIGKILL)) {
1752 VG_(kill)(VG_(getpid)(), VKI_SIGSTOP);
1753 return;
1755 #endif
1757 could_core = core;
1759 if (core) {
1760 /* If they set the core-size limit to zero, don't generate a
1761 core file */
1763 VG_(getrlimit)(VKI_RLIMIT_CORE, &corelim);
1765 if (corelim.rlim_cur == 0)
1766 core = False;
1769 if ( VG_(clo_verbosity) >= 1
1770 || (could_core && is_signal_from_kernel(tid, sigNo, info->si_code))
1771 || VG_(clo_xml) ) {
1772 if (VG_(clo_xml)) {
1773 VG_(printf_xml)("<fatal_signal>\n");
1774 VG_(printf_xml)(" <tid>%u</tid>\n", tid);
1775 if (tst->thread_name) {
1776 VG_(printf_xml)(" <threadname>%s</threadname>\n",
1777 tst->thread_name);
1779 VG_(printf_xml)(" <signo>%d</signo>\n", sigNo);
1780 VG_(printf_xml)(" <signame>%s</signame>\n", VG_(signame)(sigNo));
1781 VG_(printf_xml)(" <sicode>%d</sicode>\n", info->si_code);
1782 } else {
1783 VG_(umsg)(
1784 "\n"
1785 "Process terminating with default action of signal %d (%s)%s\n",
1786 sigNo, VG_(signame)(sigNo), core ? ": dumping core" : "");
1789 /* Be helpful - decode some more details about this fault */
1790 if (is_signal_from_kernel(tid, sigNo, info->si_code)) {
1791 const HChar *event = NULL;
1792 Bool haveaddr = True;
1794 switch(sigNo) {
1795 case VKI_SIGSEGV:
1796 switch(info->si_code) {
1797 case VKI_SEGV_MAPERR: event = "Access not within mapped region";
1798 break;
1799 case VKI_SEGV_ACCERR: event = "Bad permissions for mapped region";
1800 break;
1801 case VKI_SEGV_MADE_UP_GPF:
1802 /* General Protection Fault: The CPU/kernel
1803 isn't telling us anything useful, but this
1804 is commonly the result of exceeding a
1805 segment limit. */
1806 event = "General Protection Fault";
1807 haveaddr = False;
1808 break;
1810 #if 0
1812 HChar buf[50]; // large enough
1813 VG_(am_show_nsegments)(0,"post segfault");
1814 VG_(sprintf)(buf, "/bin/cat /proc/%d/maps", VG_(getpid)());
1815 VG_(system)(buf);
1817 #endif
1818 break;
1820 case VKI_SIGILL:
1821 switch(info->si_code) {
1822 case VKI_ILL_ILLOPC: event = "Illegal opcode"; break;
1823 case VKI_ILL_ILLOPN: event = "Illegal operand"; break;
1824 case VKI_ILL_ILLADR: event = "Illegal addressing mode"; break;
1825 case VKI_ILL_ILLTRP: event = "Illegal trap"; break;
1826 case VKI_ILL_PRVOPC: event = "Privileged opcode"; break;
1827 case VKI_ILL_PRVREG: event = "Privileged register"; break;
1828 case VKI_ILL_COPROC: event = "Coprocessor error"; break;
1829 case VKI_ILL_BADSTK: event = "Internal stack error"; break;
1831 break;
1833 case VKI_SIGFPE:
1834 switch (info->si_code) {
1835 case VKI_FPE_INTDIV: event = "Integer divide by zero"; break;
1836 case VKI_FPE_INTOVF: event = "Integer overflow"; break;
1837 case VKI_FPE_FLTDIV: event = "FP divide by zero"; break;
1838 case VKI_FPE_FLTOVF: event = "FP overflow"; break;
1839 case VKI_FPE_FLTUND: event = "FP underflow"; break;
1840 case VKI_FPE_FLTRES: event = "FP inexact"; break;
1841 case VKI_FPE_FLTINV: event = "FP invalid operation"; break;
1842 case VKI_FPE_FLTSUB: event = "FP subscript out of range"; break;
1844 /* Solaris-specific codes. */
1845 # if defined(VKI_FPE_FLTDEN)
1846 case VKI_FPE_FLTDEN: event = "FP denormalize"; break;
1847 # endif
1849 break;
1851 case VKI_SIGBUS:
1852 switch (info->si_code) {
1853 case VKI_BUS_ADRALN: event = "Invalid address alignment"; break;
1854 case VKI_BUS_ADRERR: event = "Non-existent physical address"; break;
1855 case VKI_BUS_OBJERR: event = "Hardware error"; break;
1857 break;
1858 } /* switch (sigNo) */
1860 if (VG_(clo_xml)) {
1861 if (event != NULL)
1862 VG_(printf_xml)(" <event>%s</event>\n", event);
1863 if (haveaddr)
1864 VG_(printf_xml)(" <siaddr>%p</siaddr>\n",
1865 info->VKI_SIGINFO_si_addr);
1866 } else {
1867 if (event != NULL) {
1868 if (haveaddr)
1869 VG_(umsg)(" %s at address %p\n",
1870 event, info->VKI_SIGINFO_si_addr);
1871 else
1872 VG_(umsg)(" %s\n", event);
1876 /* Print a stack trace. Be cautious if the thread's SP is in an
1877 obviously stupid place (not mapped readable) that would
1878 likely cause a segfault. */
1879 if (VG_(is_valid_tid)(tid)) {
1880 Word first_ip_delta = 0;
1881 #if defined(VGO_linux) || defined(VGO_solaris)
1882 /* Make sure that the address stored in the stack pointer is
1883 located in a mapped page. That is not necessarily so. E.g.
1884 consider the scenario where the stack pointer was decreased
1885 and now has a value that is just below the end of a page that has
1886 not been mapped yet. In that case VG_(am_is_valid_for_client)
1887 will consider the address of the stack pointer invalid and that
1888 would cause a back-trace of depth 1 to be printed, instead of a
1889 full back-trace. */
1890 if (tid == 1) { // main thread
1891 Addr esp = VG_(get_SP)(tid);
1892 Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
1893 if (VG_(am_addr_is_in_extensible_client_stack)(base)
1894 && VG_(extend_stack)(tid, base)) {
1895 if (VG_(clo_trace_signals))
1896 VG_(dmsg)(" -> extended stack base to %#lx\n",
1897 VG_PGROUNDDN(esp));
1900 #endif
1901 #if defined(VGA_s390x)
1902 if (sigNo == VKI_SIGILL) {
1903 /* The guest instruction address has been adjusted earlier to
1904 point to the insn following the one that could not be decoded.
1905 When printing the back-trace here we need to undo that
1906 adjustment so the first line in the back-trace reports the
1907 correct address. */
1908 Addr addr = (Addr)info->VKI_SIGINFO_si_addr;
1909 UChar byte = ((UChar *)addr)[0];
1910 Int insn_length = ((((byte >> 6) + 1) >> 1) + 1) << 1; /* 2, 4 or 6 bytes, decoded from the top two bits of the opcode */
1912 first_ip_delta = -insn_length;
1914 #endif
1915 ExeContext* ec = VG_(am_is_valid_for_client)
1916 (VG_(get_SP)(tid), sizeof(Addr), VKI_PROT_READ)
1917 ? VG_(record_ExeContext)( tid, first_ip_delta )
1918 : VG_(record_depth_1_ExeContext)( tid,
1919 first_ip_delta );
1920 vg_assert(ec);
1921 VG_(pp_ExeContext)( ec );
1923 if (sigNo == VKI_SIGSEGV
1924 && is_signal_from_kernel(tid, sigNo, info->si_code)
1925 && info->si_code == VKI_SEGV_MAPERR) {
1926 VG_(umsg)(" If you believe this happened as a result of a stack\n" );
1927 VG_(umsg)(" overflow in your program's main thread (unlikely but\n");
1928 VG_(umsg)(" possible), you can try to increase the size of the\n" );
1929 VG_(umsg)(" main thread stack using the --main-stacksize= flag.\n" );
1930 // FIXME: assumes main ThreadId == 1
1931 if (VG_(is_valid_tid)(1)) {
1932 VG_(umsg)(
1933 " The main thread stack size used in this run was %lu.\n",
1934 VG_(threads)[1].client_stack_szB);
1937 if (VG_(clo_xml)) {
1938 /* postamble */
1939 VG_(printf_xml)("</fatal_signal>\n");
1940 VG_(printf_xml)("\n");
1944 if (VG_(clo_vgdb) != Vg_VgdbNo
1945 && VG_(clo_vgdb_error) <= VG_(get_n_errs_shown)() + 1) {
1946 /* Note: we add + 1 to n_errs_shown as the fatal signal was not
1947 reported through error msg, and so was not counted. */
1948 VG_(gdbserver_report_fatal_signal) (info, tid);
1951 if (core) {
1952 static const struct vki_rlimit zero = { 0, 0 };
1954 VG_(make_coredump)(tid, info, corelim.rlim_cur);
1956 /* Make sure we don't get a confusing kernel-generated
1957 coredump when we finally exit */
1958 VG_(setrlimit)(VKI_RLIMIT_CORE, &zero);
1961 // what's this for?
1962 //VG_(threads)[VG_(master_tid)].os_state.fatalsig = sigNo;
1964 /* everyone but tid dies */
1965 VG_(nuke_all_threads_except)(tid, VgSrc_FatalSig);
1966 VG_(reap_threads)(tid);
1967 /* stash fatal signal in this thread */
1968 VG_(threads)[tid].exitreason = VgSrc_FatalSig;
1969 VG_(threads)[tid].os_state.fatalsig = sigNo;
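/* The RLIMIT_CORE gate above matches what an uninstrumented process
   sees: a "core" signal with a zero core-size limit still terminates
   the process, it just writes no dump.  A standalone sketch
   (illustrative only): */
#if 0
#include <signal.h>
#include <sys/resource.h>

int main(void)
{
   struct rlimit rl = { 0, RLIM_INFINITY };
   setrlimit(RLIMIT_CORE, &rl);  /* soft limit 0: suppress the dump */
   raise(SIGQUIT);               /* default action: terminate, no core */
   return 0;
}
#endif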
1973 This does the business of delivering a signal to a thread. It may
1974 be called from either a real signal handler, or from normal code to
1975 cause the thread to enter the signal handler.
1977 This updates the thread state, but it does not set it to be
1978 Runnable.
1980 static void deliver_signal ( ThreadId tid, const vki_siginfo_t *info,
1981 const struct vki_ucontext *uc )
1983 Int sigNo = info->si_signo;
1984 SCSS_Per_Signal *handler = &scss.scss_per_sig[sigNo];
1985 void *handler_fn;
1986 ThreadState *tst = VG_(get_ThreadState)(tid);
1988 if (VG_(clo_trace_signals))
1989 VG_(dmsg)("delivering signal %d (%s):%d to thread %u\n",
1990 sigNo, VG_(signame)(sigNo), info->si_code, tid );
1992 if (sigNo == VG_SIGVGKILL) {
1993 /* If this is a SIGVGKILL, we're expecting it to interrupt any
1994 blocked syscall. It doesn't matter whether the VCPU state is
1995 set to restart or not, because we don't expect it will
1996 execute any more client instructions. */
1997 vg_assert(VG_(is_exiting)(tid));
1998 return;
2001 /* If the client specifies SIG_IGN, treat it as SIG_DFL.
2003 If deliver_signal() is being called on a thread, we want
2004 the signal to get through no matter what; if they're ignoring
2005 it, then we do this override (this is so we can send it SIGSEGV,
2006 etc). */
2007 handler_fn = handler->scss_handler;
2008 if (handler_fn == VKI_SIG_IGN)
2009 handler_fn = VKI_SIG_DFL;
2011 vg_assert(handler_fn != VKI_SIG_IGN);
2013 if (handler_fn == VKI_SIG_DFL) {
2014 default_action(info, tid);
2015 } else {
2016 /* Create a signal delivery frame, and set the client's %ESP and
2017 %EIP so that when execution continues, we will enter the
2018 signal handler with the frame on top of the client's stack,
2019 as it expects.
2021 Signal delivery can fail if the client stack is too small or
2022 missing, and we can't push the frame. If that happens,
2023 push_signal_frame will cause the whole process to exit when
2024 we next hit the scheduler.
2026 vg_assert(VG_(is_valid_tid)(tid));
2028 push_signal_frame ( tid, info, uc );
2030 if (handler->scss_flags & VKI_SA_ONESHOT) {
2031 /* Do the ONESHOT thing. */
2032 handler->scss_handler = VKI_SIG_DFL;
2034 handle_SCSS_change( False /* lazy update */ );
2037 /* At this point:
2038 tst->sig_mask is the current signal mask
2039 tst->tmp_sig_mask is the same as sig_mask, unless we're in sigsuspend
2040 handler->scss_mask is the mask set by the handler
2042 Handler gets a mask of tmp_sig_mask|handler_mask|signo
2044 tst->sig_mask = tst->tmp_sig_mask;
2045 if (!(handler->scss_flags & VKI_SA_NOMASK)) {
2046 VG_(sigaddset_from_set)(&tst->sig_mask, &handler->scss_mask);
2047 VG_(sigaddset)(&tst->sig_mask, sigNo);
2048 tst->tmp_sig_mask = tst->sig_mask;
2052 /* Thread state is ready to go - just add Runnable */
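/* In plain POSIX terms the computation above is the standard
   sigaction() entry-mask rule (VKI_SA_NOMASK corresponds to
   SA_NODEFER, VKI_SA_ONESHOT to SA_RESETHAND).  A standalone sketch
   with libc types, illustrative only: */
#if 0
#include <signal.h>

/* Mask in force on handler entry, given the thread's mask 'blocked'
   at delivery time -- the SA_NODEFER-clear case above. */
static sigset_t handler_entry_mask(const sigset_t *blocked,
                                   const struct sigaction *sa,
                                   int signo)
{
   sigset_t m = *blocked;
   for (int s = 1; s < NSIG; s++)        /* union with sa_mask ... */
      if (sigismember(&sa->sa_mask, s))
         sigaddset(&m, s);
   sigaddset(&m, signo);                 /* ... plus the signal itself */
   return m;
}
#endif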
2055 static void resume_scheduler(ThreadId tid)
2057 ThreadState *tst = VG_(get_ThreadState)(tid);
2059 vg_assert(tst->os_state.lwpid == VG_(gettid)());
2061 if (tst->sched_jmpbuf_valid) {
2062 /* Can't continue; must longjmp back to the scheduler and thus
2063 enter the sighandler immediately. */
2064 VG_MINIMAL_LONGJMP(tst->sched_jmpbuf);
2068 static void synth_fault_common(ThreadId tid, Addr addr, Int si_code)
2070 vki_siginfo_t info;
2072 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2074 VG_(memset)(&info, 0, sizeof(info));
2075 info.si_signo = VKI_SIGSEGV;
2076 info.si_code = si_code;
2077 info.VKI_SIGINFO_si_addr = (void*)addr;
2079 /* Even if gdbserver indicates to ignore the signal, we must deliver it.
2080 So ignore the return value of VG_(gdbserver_report_signal). */
2081 (void) VG_(gdbserver_report_signal) (&info, tid);
2083 /* If they're trying to block the signal, force it to be delivered */
2084 if (VG_(sigismember)(&VG_(threads)[tid].sig_mask, VKI_SIGSEGV))
2085 VG_(set_default_handler)(VKI_SIGSEGV);
2087 deliver_signal(tid, &info, NULL);
2090 // Synthesize a fault where the address is OK, but the page
2091 // permissions are bad.
2092 void VG_(synth_fault_perms)(ThreadId tid, Addr addr)
2094 synth_fault_common(tid, addr, VKI_SEGV_ACCERR);
2097 // Synthesize a fault where there's nothing mapped at the address.
2098 void VG_(synth_fault_mapping)(ThreadId tid, Addr addr)
2100 synth_fault_common(tid, addr, VKI_SEGV_MAPERR);
2103 // Synthesize a misc memory fault.
2104 void VG_(synth_fault)(ThreadId tid)
2106 synth_fault_common(tid, 0, VKI_SEGV_MADE_UP_GPF);
2109 // Synthesise a SIGILL.
2110 void VG_(synth_sigill)(ThreadId tid, Addr addr)
2112 vki_siginfo_t info;
2114 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2116 VG_(memset)(&info, 0, sizeof(info));
2117 info.si_signo = VKI_SIGILL;
2118 info.si_code = VKI_ILL_ILLOPC; /* jrs: no idea what this should be */
2119 info.VKI_SIGINFO_si_addr = (void*)addr;
2121 if (VG_(gdbserver_report_signal) (&info, tid)) {
2122 resume_scheduler(tid);
2123 deliver_signal(tid, &info, NULL);
2125 else
2126 resume_scheduler(tid);
2129 // Synthesise a SIGBUS.
2130 void VG_(synth_sigbus)(ThreadId tid)
2132 vki_siginfo_t info;
2134 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2136 VG_(memset)(&info, 0, sizeof(info));
2137 info.si_signo = VKI_SIGBUS;
2138 /* There are several meanings to SIGBUS (as per POSIX, presumably),
2139 but the most widely understood is "invalid address alignment",
2140 so let's use that. */
2141 info.si_code = VKI_BUS_ADRALN;
2142 /* If we knew the invalid address in question, we could put it
2143 in .si_addr. Oh well. */
2144 /* info.VKI_SIGINFO_si_addr = (void*)addr; */
2146 if (VG_(gdbserver_report_signal) (&info, tid)) {
2147 resume_scheduler(tid);
2148 deliver_signal(tid, &info, NULL);
2150 else
2151 resume_scheduler(tid);
2154 // Synthesise a SIGTRAP.
2155 void VG_(synth_sigtrap)(ThreadId tid)
2157 vki_siginfo_t info;
2158 struct vki_ucontext uc;
2159 # if defined(VGP_x86_darwin)
2160 struct __darwin_mcontext32 mc;
2161 # elif defined(VGP_amd64_darwin)
2162 struct __darwin_mcontext64 mc;
2163 # endif
2165 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2167 VG_(memset)(&info, 0, sizeof(info));
2168 VG_(memset)(&uc, 0, sizeof(uc));
2169 info.si_signo = VKI_SIGTRAP;
2170 info.si_code = VKI_TRAP_BRKPT; /* tjh: only ever called for a brkpt ins */
2172 # if defined(VGP_x86_linux) || defined(VGP_amd64_linux)
2173 uc.uc_mcontext.trapno = 3; /* tjh: this is the x86 trap number
2174 for a breakpoint trap... */
2175 uc.uc_mcontext.err = 0; /* tjh: no error code for x86
2176 breakpoint trap... */
2177 # elif defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
2178 /* the same thing, but using Darwin field/struct names */
2179 VG_(memset)(&mc, 0, sizeof(mc));
2180 uc.uc_mcontext = &mc;
2181 uc.uc_mcontext->__es.__trapno = 3;
2182 uc.uc_mcontext->__es.__err = 0;
2183 # elif defined(VGP_x86_solaris)
2184 uc.uc_mcontext.gregs[VKI_ERR] = 0;
2185 uc.uc_mcontext.gregs[VKI_TRAPNO] = VKI_T_BPTFLT;
2186 # endif
2188 /* fixs390: do we need to do anything here for s390 ? */
2189 if (VG_(gdbserver_report_signal) (&info, tid)) {
2190 resume_scheduler(tid);
2191 deliver_signal(tid, &info, &uc);
2193 else
2194 resume_scheduler(tid);
2197 // Synthesise a SIGFPE.
2198 void VG_(synth_sigfpe)(ThreadId tid, UInt code)
2200 // Only tested on mips32, mips64, s390x and nanomips.
2201 #if !defined(VGA_mips32) && !defined(VGA_mips64) && !defined(VGA_s390x) && !defined(VGA_nanomips)
2202 vg_assert(0);
2203 #else
2204 vki_siginfo_t info;
2205 struct vki_ucontext uc;
2207 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2209 VG_(memset)(&info, 0, sizeof(info));
2210 VG_(memset)(&uc, 0, sizeof(uc));
2211 info.si_signo = VKI_SIGFPE;
2212 info.si_code = code;
2214 if (VG_(gdbserver_report_signal) (&info, tid)) {
2215 resume_scheduler(tid);
2216 deliver_signal(tid, &info, &uc);
2218 else
2219 resume_scheduler(tid);
2220 #endif
2223 /* Make a signal pending for a thread, for later delivery.
2224 VG_(poll_signals) will arrange for it to be delivered at the right
2225 time.
2227 tid==0 means add it to the process-wide queue, and not send it to a
2228 specific thread.
2230 static
2231 void queue_signal(ThreadId tid, const vki_siginfo_t *si)
2233 ThreadState *tst;
2234 SigQueue *sq;
2235 vki_sigset_t savedmask;
2237 tst = VG_(get_ThreadState)(tid);
2239 /* Protect the signal queue against async deliveries */
2240 block_all_host_signals(&savedmask);
2242 if (tst->sig_queue == NULL) {
2243 tst->sig_queue = VG_(malloc)("signals.qs.1", sizeof(*tst->sig_queue));
2244 VG_(memset)(tst->sig_queue, 0, sizeof(*tst->sig_queue));
2246 sq = tst->sig_queue;
2248 if (VG_(clo_trace_signals))
2249 VG_(dmsg)("Queueing signal %d (idx %d) to thread %u\n",
2250 si->si_signo, sq->next, tid);
2252 /* Add signal to the queue. If the queue gets overrun, then old
2253 queued signals may get lost.
2255 XXX We should also keep a sigset of pending signals, so that at
2256 least a non-siginfo signal gets delivered.
2258 if (sq->sigs[sq->next].si_signo != 0)
2259 VG_(umsg)("Signal %d being dropped from thread %u's queue\n",
2260 sq->sigs[sq->next].si_signo, tid);
2262 sq->sigs[sq->next] = *si;
2263 sq->next = (sq->next+1) % N_QUEUED_SIGNALS;
2265 restore_all_host_signals(&savedmask);
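/* The queue is a fixed-size ring: 'next' is both the insertion point
   and the oldest slot, so an overrun silently overwrites (drops) the
   oldest entry, as the message above warns.  A minimal standalone
   sketch of the same structure (ring size chosen here purely for
   illustration): */
#if 0
#define RING_SLOTS 8                 /* illustrative; the real code
                                        uses N_QUEUED_SIGNALS */
typedef struct {
   vki_siginfo_t sigs[RING_SLOTS];
   Int next;                         /* next slot to (over)write */
} SigRing;

static void ring_push(SigRing *sq, const vki_siginfo_t *si)
{
   /* si_signo != 0 marks an occupied slot; writing over it drops the
      oldest queued signal. */
   sq->sigs[sq->next] = *si;
   sq->next = (sq->next + 1) % RING_SLOTS;
}
#endif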
2269 Returns the next queued signal for thread tid which is in "set".
2270 tid==0 means process-wide signal. Set si_signo to 0 when the
2271 signal has been delivered.
2273 Must be called with all signals blocked, to protect against async
2274 deliveries.
2276 static vki_siginfo_t *next_queued(ThreadId tid, const vki_sigset_t *set)
2278 ThreadState *tst = VG_(get_ThreadState)(tid);
2279 SigQueue *sq;
2280 Int idx;
2281 vki_siginfo_t *ret = NULL;
2283 sq = tst->sig_queue;
2284 if (sq == NULL)
2285 goto out;
2287 idx = sq->next;
2288 do {
2289 if (0)
2290 VG_(printf)("idx=%d si_signo=%d inset=%d\n", idx,
2291 sq->sigs[idx].si_signo,
2292 VG_(sigismember)(set, sq->sigs[idx].si_signo));
2294 if (sq->sigs[idx].si_signo != 0
2295 && VG_(sigismember)(set, sq->sigs[idx].si_signo)) {
2296 if (VG_(clo_trace_signals))
2297 VG_(dmsg)("Returning queued signal %d (idx %d) for thread %u\n",
2298 sq->sigs[idx].si_signo, idx, tid);
2299 ret = &sq->sigs[idx];
2300 goto out;
2303 idx = (idx + 1) % N_QUEUED_SIGNALS;
2304 } while(idx != sq->next);
2305 out:
2306 return ret;
2309 static int sanitize_si_code(int si_code)
2311 #if defined(VGO_linux)
2312 /* The Linux kernel uses the top 16 bits of si_code for its own
2313 use and only exports the bottom 16 bits to user space - at least
2314 that is the theory, but it turns out that there are some kernels
2315 around that forget to mask out the top 16 bits so we do it here.
2317 The kernel treats the bottom 16 bits as signed and (when it does
2318 mask them off) sign extends them when exporting to user space so
2319 we do the same thing here. */
2320 return (Short)si_code;
2321 #elif defined(VGO_darwin) || defined(VGO_solaris)
2322 return si_code;
2323 #else
2324 # error Unknown OS
2325 #endif
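/* Worked examples of the cast above: (Short)x keeps the low 16 bits
   and sign-extends them (the values below are made up to show the
   arithmetic): */
#if 0
   /* A buggy kernel leaves junk in the top 16 bits: the low bits
      survive the cast. */
   vg_assert(sanitize_si_code(0x00010006) == 6);
   /* Low 16 bits 0xFFFA are the signed 16-bit value -6 (SI_TKILL on
      Linux); sign extension recovers it. */
   vg_assert(sanitize_si_code(0x0000FFFA) == -6);
#endif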
2328 #if defined(VGO_solaris)
2329 /* Following function is used to switch Valgrind from a client stack back onto
2330 a Valgrind stack. It is used only when the door_return call was invoked by
2331 the client because this is the only syscall which is executed directly on
2332 the client stack (see syscall-{x86,amd64}-solaris.S). The switch onto the
2333 Valgrind stack has to be made as soon as possible because there is no
2334 guarantee that there is enough space on the client stack to run the
2335 complete signal machinery. Also, Valgrind has to be switched back onto its
2336 stack before a simulated signal frame is created because that will
2337 overwrite the real sigframe built by the kernel. */
2338 static void async_signalhandler_solaris_preprocess(ThreadId tid, Int *signo,
2339 vki_siginfo_t *info,
2340 struct vki_ucontext *uc)
2342 # define RECURSION_BIT 0x1000
2343 Addr sp;
2344 vki_sigframe_t *frame;
2345 ThreadState *tst = VG_(get_ThreadState)(tid);
2346 Int rec_signo;
2348 /* If not doing door_return then return instantly. */
2349 if (!tst->os_state.in_door_return)
2350 return;
2352 /* Check for the recursion:
2353 v ...
2354 | async_signalhandler - executed on the client stack
2355 v async_signalhandler_solaris_preprocess - first call switches the
2356 | stacks and sets the RECURSION_BIT flag
2357 v async_signalhandler - executed on the Valgrind stack
2358 | async_signalhandler_solaris_preprocess - the RECURSION_BIT flag is
2359 v set, clear it and return
2361 if (*signo & RECURSION_BIT) {
2362 *signo &= ~RECURSION_BIT;
2363 return;
2366 rec_signo = *signo | RECURSION_BIT;
2368 # if defined(VGP_x86_solaris)
2369 /* Register %ebx/%rbx points to the top of the original V stack. */
2370 sp = uc->uc_mcontext.gregs[VKI_EBX];
2371 # elif defined(VGP_amd64_solaris)
2372 sp = uc->uc_mcontext.gregs[VKI_REG_RBX];
2373 # else
2374 # error "Unknown platform"
2375 # endif
2377 /* Build a fake signal frame, similar to the one in sigframe-solaris.c. */
2378 /* Calculate a new stack pointer. */
2379 sp -= sizeof(vki_sigframe_t);
2380 sp = VG_ROUNDDN(sp, 16) - sizeof(UWord);
2382 /* Fill in the frame. */
2383 frame = (vki_sigframe_t*)sp;
2384 /* Set a bogus return address. */
2385 frame->return_addr = (void*)~0UL;
2386 frame->a1_signo = rec_signo;
2387 /* The first parameter has to be 16-byte aligned, resembling a function
2388 call. */
2390 /* Using
2391 vg_assert(VG_IS_16_ALIGNED(&frame->a1_signo));
2392 seems to get miscompiled on amd64 with GCC 4.7.2. */
2393 Addr signo_addr = (Addr)&frame->a1_signo;
2394 vg_assert(VG_IS_16_ALIGNED(signo_addr));
2396 frame->a2_siginfo = &frame->siginfo;
2397 frame->siginfo = *info;
2398 frame->ucontext = *uc;
2400 # if defined(VGP_x86_solaris)
2401 frame->a3_ucontext = &frame->ucontext;
2403 /* Switch onto the V stack and restart the signal processing. */
2404 __asm__ __volatile__(
2405 "xorl %%ebp, %%ebp\n"
2406 "movl %[sp], %%esp\n"
2407 "jmp async_signalhandler\n"
2409 : [sp] "a" (sp)
2410 : /*"ebp"*/);
2412 # elif defined(VGP_amd64_solaris)
2413 __asm__ __volatile__(
2414 "xorq %%rbp, %%rbp\n"
2415 "movq %[sp], %%rsp\n"
2416 "jmp async_signalhandler\n"
2418 : [sp] "a" (sp), "D" (rec_signo), "S" (&frame->siginfo),
2419 "d" (&frame->ucontext)
2420 : /*"rbp"*/);
2421 # else
2422 # error "Unknown platform"
2423 # endif
2425 /* We should never get here. */
2426 vg_assert(0);
2428 # undef RECURSION_BIT
2430 #endif
2433 Receive an async signal from the kernel.
2435 This should only happen when the thread is blocked in a syscall,
2436 since that's the only time this set of signals is unblocked.
2438 static
2439 void async_signalhandler ( Int sigNo,
2440 vki_siginfo_t *info, struct vki_ucontext *uc )
2442 ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
2443 ThreadState* tst = VG_(get_ThreadState)(tid);
2444 SysRes sres;
2446 vg_assert(tst->status == VgTs_WaitSys);
2448 # if defined(VGO_solaris)
2449 async_signalhandler_solaris_preprocess(tid, &sigNo, info, uc);
2450 # endif
2452 /* The thread isn't currently running, make it so before going on */
2453 VG_(acquire_BigLock)(tid, "async_signalhandler");
2455 info->si_code = sanitize_si_code(info->si_code);
2457 if (VG_(clo_trace_signals))
2458 VG_(dmsg)("async signal handler: signal=%d, tid=%u, si_code=%d, "
2459 "exitreason %s\n",
2460 sigNo, tid, info->si_code,
2461 VG_(name_of_VgSchedReturnCode)(tst->exitreason));
2463 /* See similar logic in VG_(poll_signals). */
2464 if (tst->exitreason != VgSrc_None)
2465 resume_scheduler(tid);
2467 /* Update thread state properly. The signal can only have been
2468 delivered whilst we were in
2469 coregrind/m_syswrap/syscall-<PLAT>.S, and only then in the
2470 window between the two sigprocmask calls, since at all other
2471 times, we run with async signals on the host blocked. Hence
2472 make enquiries on the basis that we were in or very close to a
2473 syscall, and attempt to fix up the guest state accordingly.
2475 (normal async signals occurring during computation are blocked,
2476 but periodically polled for using VG_(sigtimedwait_zero), and
2477 delivered at a point convenient for us. Hence this routine only
2478 deals with signals that are delivered to a thread during a
2479 syscall.) */
2481 /* First, extract a SysRes from the ucontext_t* given to this
2482 handler. If it is subsequently established by
2483 VG_(fixup_guest_state_after_syscall_interrupted) that the
2484 syscall was complete but the results had not been committed yet
2485 to the guest state, then it'll have to commit the results itself
2486 "by hand", and so we need to extract the SysRes. Of course if
2487 the thread was not in that particular window then the
2488 SysRes will be meaningless, but that's OK too because
2489 VG_(fixup_guest_state_after_syscall_interrupted) will detect
2490 that the thread was not in said window and ignore the SysRes. */
2492 /* To make matters more complex still, on Darwin we need to know
2493 the "class" of the syscall under consideration in order to be
2494 able to extract the correct SysRes. The class will have been
2495 saved just before the syscall, by VG_(client_syscall), into this
2496 thread's tst->arch.vex.guest_SC_CLASS. Hence: */
2497 # if defined(VGO_darwin)
2498 sres = VG_UCONTEXT_SYSCALL_SYSRES(uc, tst->arch.vex.guest_SC_CLASS);
2499 # else
2500 sres = VG_UCONTEXT_SYSCALL_SYSRES(uc);
2501 # endif
2503 /* (1) */
2504 VG_(fixup_guest_state_after_syscall_interrupted)(
2505 tid,
2506 VG_UCONTEXT_INSTR_PTR(uc),
2507 sres,
2508 !!(scss.scss_per_sig[sigNo].scss_flags & VKI_SA_RESTART),
2509 uc
2510 );
2512 /* (2) */
2513 /* Set up the thread's state to deliver a signal.
2514 However, if exitreason is VgSrc_FatalSig, then thread tid was
2515 taken out of a syscall by VG_(nuke_all_threads_except).
2516 But after the emission of VKI_SIGKILL, another (fatal) async
2517 signal might be sent. In such a case, we must not handle this
2518 signal, as the thread is supposed to die first.
2519 => resume the scheduler for such a thread, so that the scheduler
2520 can let the thread die. */
2521 if (tst->exitreason != VgSrc_FatalSig
2522 && !is_sig_ign(info, tid))
2523 deliver_signal(tid, info, uc);
2525 /* It's crucial that (1) and (2) happen in the order (1) then (2)
2526 and not the other way around. (1) fixes up the guest thread
2527 state to reflect the fact that the syscall was interrupted --
2528 either to restart the syscall or to return EINTR. (2) then sets
2529 up the thread state to deliver the signal. Then we resume
2530 execution. First, the signal handler is run, since that's the
2531 second adjustment we made to the thread state. If that returns,
2532 then we resume at the guest state created by (1), viz, either
2533 the syscall returns EINTR or is restarted.
2535 If (2) was done before (1) the outcome would be completely
2536 different, and wrong. */
2538 /* longjmp back to the thread's main loop to start executing the
2539 handler. */
2540 resume_scheduler(tid);
2542 VG_(core_panic)("async_signalhandler: got unexpected signal "
2543 "while outside of scheduler");
2546 /* Extend the stack of thread #tid to cover addr. It is expected that
2547 addr either points into an already mapped anonymous segment or into a
2548 reservation segment abutting the stack segment. Everything else is a bug.
2550 Returns True on success, False on failure.
2552 Succeeds without doing anything if addr is already within a segment.
2554 Failure could be caused by:
2555 - addr not below a growable segment
2556 - new stack size would exceed the stack limit for the given thread
2557 - mmap failed for some other reason
2559 Bool VG_(extend_stack)(ThreadId tid, Addr addr)
2561 SizeT udelta;
2562 Addr new_stack_base;
2564 /* Get the segment containing addr. */
2565 const NSegment* seg = VG_(am_find_nsegment)(addr);
2566 vg_assert(seg != NULL);
2568 /* TODO: the test "seg->kind == SkAnonC" is really inadequate,
2569 because although it tests whether the segment is mapped
2570 _somehow_, it doesn't check that it has the right permissions
2571 (r,w, maybe x) ? */
2572 if (seg->kind == SkAnonC)
2573 /* addr is already mapped. Nothing to do. */
2574 return True;
2576 const NSegment* seg_next = VG_(am_next_nsegment)( seg, True/*fwds*/ );
2577 vg_assert(seg_next != NULL);
2579 udelta = VG_PGROUNDUP(seg_next->start - addr);
2580 new_stack_base = seg_next->start - udelta;
2582 VG_(debugLog)(1, "signals",
2583 "extending a stack base 0x%lx down by %lu"
2584 " new base 0x%lx to cover 0x%lx\n",
2585 seg_next->start, udelta, new_stack_base, addr);
2586 Bool overflow;
2587 if (! VG_(am_extend_into_adjacent_reservation_client)
2588 ( seg_next->start, -(SSizeT)udelta, &overflow )) {
2589 if (overflow)
2590 VG_(umsg)("Stack overflow in thread #%u: can't grow stack to %#lx\n",
2591 tid, new_stack_base);
2592 else
2593 VG_(umsg)("Cannot map memory to grow the stack for thread #%u "
2594 "to %#lx\n", tid, new_stack_base);
2595 return False;
2598 /* When we change the main stack, we have to let the stack handling
2599 code know about it. */
2600 VG_(change_stack)(VG_(clstk_id), new_stack_base, VG_(clstk_end));
2602 if (VG_(clo_sanity_level) > 2)
2603 VG_(sanity_check_general)(False);
2605 return True;
2608 static fault_catcher_t fault_catcher = NULL;
2610 fault_catcher_t VG_(set_fault_catcher)(fault_catcher_t catcher)
2612 fault_catcher_t prev_catcher = fault_catcher;
2613 fault_catcher = catcher;
2614 return prev_catcher;
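/* A hedged sketch of how the catcher is meant to be used (modelled
   on the leak-checker use mentioned below, but the helper here is
   made up): install a catcher around a risky access and have it
   longjmp past the faulting instruction. */
#if 0
#include <setjmp.h>

static jmp_buf probe_env;                  /* hypothetical */

static void probe_catcher(Int sigNo, Addr addr)
{
   longjmp(probe_env, 1);                  /* abandon the access */
}

static Bool is_mapped_readable(const UChar *p)
{
   Bool ok = False;
   fault_catcher_t prev = VG_(set_fault_catcher)(probe_catcher);
   if (setjmp(probe_env) == 0) {
      volatile UChar c = *p;               /* may fault */
      (void)c;
      ok = True;
   }
   VG_(set_fault_catcher)(prev);           /* always restore */
   return ok;
}
#endif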
2617 static
2618 void sync_signalhandler_from_user ( ThreadId tid,
2619 Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
2621 ThreadId qtid;
2623 /* If some user-process sent us a sync signal (ie. it's not the result
2624 of a faulting instruction), then how we treat it depends on when it
2625 arrives... */
2627 if (VG_(threads)[tid].status == VgTs_WaitSys
2628 # if defined(VGO_solaris)
2629 /* Check if the signal was really received while doing a blocking
2630 syscall. Only then the async_signalhandler() path can be used. */
2631 && VG_(is_ip_in_blocking_syscall)(tid, VG_UCONTEXT_INSTR_PTR(uc))
2632 # endif
2634 /* Signal arrived while we're blocked in a syscall. This means that
2635 the client's signal mask was applied. In other words, we can't
2636 get here unless the client wants this signal right now. This means
2637 we can simply use the async_signalhandler. */
2638 if (VG_(clo_trace_signals))
2639 VG_(dmsg)("Delivering user-sent sync signal %d as async signal\n",
2640 sigNo);
2642 async_signalhandler(sigNo, info, uc);
2643 VG_(core_panic)("async_signalhandler returned!?\n");
2645 } else {
2646 /* Signal arrived while in generated client code, or while running
2647 Valgrind core code. That means that every thread has these signals
2648 unblocked, so we can't rely on the kernel to route them properly, so
2649 we need to queue them manually. */
2650 if (VG_(clo_trace_signals))
2651 VG_(dmsg)("Routing user-sent sync signal %d via queue\n", sigNo);
2653 # if defined(VGO_linux)
2654 /* On Linux, first we have to do a sanity check of the siginfo. */
2655 if (info->VKI_SIGINFO_si_pid == 0) {
2656 /* There's a per-user limit of pending siginfo signals. If
2657 you exceed this, by having more than that number of
2658 pending signals with siginfo, then new signals are
2659 delivered without siginfo. This condition can be caused
2660 by any unrelated program you're running at the same time
2661 as Valgrind, if it has a large number of pending siginfo
2662 signals which it isn't taking delivery of.
2664 Since we depend on siginfo to work out why we were sent a
2665 signal and what we should do about it, we really can't
2666 continue unless we get it. */
2667 VG_(umsg)("Signal %d (%s) appears to have lost its siginfo; "
2668 "I can't go on.\n", sigNo, VG_(signame)(sigNo));
2669 VG_(printf)(
2670 " This may be because one of your programs has consumed your ration of\n"
2671 " siginfo structures. For more information, see:\n"
2672 " http://kerneltrap.org/mailarchive/1/message/25599/thread\n"
2673 " Basically, some program on your system is building up a large queue of\n"
2674 " pending signals, and this causes the siginfo data for other signals to\n"
2675 " be dropped because it's exceeding a system limit. However, Valgrind\n"
2676 " absolutely needs siginfo for SIGSEGV. A workaround is to track down the\n"
2677 " offending program and avoid running it while using Valgrind, but there\n"
2678 " is no easy way to do this. Apparently the problem was fixed in kernel\n"
2679 " 2.6.12.\n");
2681 /* It's a fatal signal, so we force the default handler. */
2682 VG_(set_default_handler)(sigNo);
2683 deliver_signal(tid, info, uc);
2684 resume_scheduler(tid);
2685 VG_(exit)(99); /* If we can't resume, then just exit */
2687 # endif
2689 qtid = 0; /* shared pending by default */
2690 # if defined(VGO_linux)
2691 if (info->si_code == VKI_SI_TKILL)
2692 qtid = tid; /* directed to us specifically */
2693 # endif
2694 queue_signal(qtid, info);
2698 /* Map an exact faulting address to the fault address the kernel reports. */
2699 static Addr fault_mask(Addr in)
2701 /* We have to use VG_PGROUNDDN because faults on s390x only deliver
2702 the page address but not the address within a page.
2704 # if defined(VGA_s390x)
2705 return VG_PGROUNDDN(in);
2706 # else
2707 return in;
2708 # endif
2711 /* Returns True if the sync signal was due to the stack requiring extension
2712 and the extension was successful.
2714 static Bool extend_stack_if_appropriate(ThreadId tid, vki_siginfo_t* info)
2716 Addr fault;
2717 Addr esp;
2718 NSegment const *seg, *seg_next;
2720 if (info->si_signo != VKI_SIGSEGV)
2721 return False;
2723 fault = (Addr)info->VKI_SIGINFO_si_addr;
2724 esp = VG_(get_SP)(tid);
2725 seg = VG_(am_find_nsegment)(fault);
2726 seg_next = seg ? VG_(am_next_nsegment)( seg, True/*fwds*/ )
2727 : NULL;
2729 if (VG_(clo_trace_signals)) {
2730 if (seg == NULL)
2731 VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
2732 "seg=NULL\n",
2733 info->si_code, fault, tid, esp);
2734 else
2735 VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
2736 "seg=%#lx-%#lx\n",
2737 info->si_code, fault, tid, esp, seg->start, seg->end);
2740 if (info->si_code == VKI_SEGV_MAPERR
2741 && seg
2742 && seg->kind == SkResvn
2743 && seg->smode == SmUpper
2744 && seg_next
2745 && seg_next->kind == SkAnonC
2746 && fault >= fault_mask(esp - VG_STACK_REDZONE_SZB)) {
2747 /* If the fault address is above esp but below the current known
2748 stack segment base, and it was a fault because there was
2749 nothing mapped there (as opposed to a permissions fault),
2750 then extend the stack segment.
2752 Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
2753 if (VG_(am_addr_is_in_extensible_client_stack)(base)
2754 && VG_(extend_stack)(tid, base)) {
2755 if (VG_(clo_trace_signals))
2756 VG_(dmsg)(" -> extended stack base to %#lx\n",
2757 VG_PGROUNDDN(fault));
2758 return True;
2759 } else {
2760 return False;
2762 } else {
2763 return False;
2767 static
2768 void sync_signalhandler_from_kernel ( ThreadId tid,
2769 Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
2771 /* Check to see if some part of Valgrind itself is interested in faults.
2772 The fault catcher should never be set whilst we're in generated code, so
2773 check for that. AFAIK the only use of the catcher right now is
2774 memcheck's leak detector. */
2775 if (fault_catcher) {
2776 vg_assert(VG_(in_generated_code) == False);
2778 (*fault_catcher)(sigNo, (Addr)info->VKI_SIGINFO_si_addr);
2779 /* If the catcher returns, then it didn't handle the fault,
2780 so carry on panicking. */
2783 if (extend_stack_if_appropriate(tid, info)) {
2784 /* Stack extension occurred, so we don't need to do anything else; upon
2785 returning from this function, we'll restart the host (hence guest)
2786 instruction. */
2787 } else {
2788 /* OK, this is a signal we really have to deal with. If it came
2789 from the client's code, then we can jump back into the scheduler
2790 and have it delivered. Otherwise it's a Valgrind bug. */
2791 ThreadState *tst = VG_(get_ThreadState)(tid);
2793 if (VG_(sigismember)(&tst->sig_mask, sigNo)) {
2794 /* signal is blocked, but they're not allowed to block faults */
2795 VG_(set_default_handler)(sigNo);
2798 if (VG_(in_generated_code)) {
2799 if (VG_(gdbserver_report_signal) (info, tid)
2800 || VG_(sigismember)(&tst->sig_mask, sigNo)) {
2801 /* Can't continue; must longjmp back to the scheduler and thus
2802 enter the sighandler immediately. */
2803 deliver_signal(tid, info, uc);
2804 resume_scheduler(tid);
2806 else
2807 resume_scheduler(tid);
2810 /* If resume_scheduler returns or it's our fault, it means we
2811 don't have longjmp set up, implying that we weren't running
2812 client code, and therefore it was actually generated by
2813 Valgrind internally.
2815 VG_(dmsg)("VALGRIND INTERNAL ERROR: Valgrind received "
2816 "a signal %d (%s) - exiting\n",
2817 sigNo, VG_(signame)(sigNo));
2819 VG_(dmsg)("si_code=%d; Faulting address: %p; sp: %#lx\n",
2820 info->si_code, info->VKI_SIGINFO_si_addr,
2821 (Addr)VG_UCONTEXT_STACK_PTR(uc));
2823 if (0)
2824 VG_(kill_self)(sigNo); /* generate a core dump */
2826 //if (tid == 0) /* could happen after everyone has exited */
2827 // tid = VG_(master_tid);
2828 vg_assert(tid != 0);
2830 UnwindStartRegs startRegs;
2831 VG_(memset)(&startRegs, 0, sizeof(startRegs));
2833 VG_UCONTEXT_TO_UnwindStartRegs(&startRegs, uc);
2834 VG_(core_panic_at)("Killed by fatal signal", &startRegs);
2839 Receive a sync signal from the host.
2841 static
2842 void sync_signalhandler ( Int sigNo,
2843 vki_siginfo_t *info, struct vki_ucontext *uc )
2845 ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
2846 Bool from_user;
2848 if (0)
2849 VG_(printf)("sync_sighandler(%d, %p, %p)\n", sigNo, info, uc);
2851 vg_assert(info != NULL);
2852 vg_assert(info->si_signo == sigNo);
2853 vg_assert(sigNo == VKI_SIGSEGV
2854 || sigNo == VKI_SIGBUS
2855 || sigNo == VKI_SIGFPE
2856 || sigNo == VKI_SIGILL
2857 || sigNo == VKI_SIGTRAP);
2859 info->si_code = sanitize_si_code(info->si_code);
2861 from_user = !is_signal_from_kernel(tid, sigNo, info->si_code);
2863 if (VG_(clo_trace_signals)) {
2864 VG_(dmsg)("sync signal handler: "
2865 "signal=%d, si_code=%d, EIP=%#lx, eip=%#lx, from %s\n",
2866 sigNo, info->si_code, VG_(get_IP)(tid),
2867 (Addr)VG_UCONTEXT_INSTR_PTR(uc),
2868 ( from_user ? "user" : "kernel" ));
2870 vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
2872 /* // debug code:
2873 if (0) {
2874 VG_(printf)("info->si_signo %d\n", info->si_signo);
2875 VG_(printf)("info->si_errno %d\n", info->si_errno);
2876 VG_(printf)("info->si_code %d\n", info->si_code);
2877 VG_(printf)("info->si_pid %d\n", info->si_pid);
2878 VG_(printf)("info->si_uid %d\n", info->si_uid);
2879 VG_(printf)("info->si_status %d\n", info->si_status);
2880 VG_(printf)("info->si_addr %p\n", info->si_addr);
2884 /* Figure out if the signal is being sent from outside the process.
2885 (Why do we care?) If the signal is from the user rather than the
2886 kernel, then treat it more like an async signal than a sync signal --
2887 that is, merely queue it for later delivery. */
2888 if (from_user) {
2889 sync_signalhandler_from_user( tid, sigNo, info, uc);
2890 } else {
2891 sync_signalhandler_from_kernel(tid, sigNo, info, uc);
2894 # if defined(VGO_solaris)
2895 /* On Solaris we have to return from signal handler manually. */
2896 VG_(do_syscall2)(__NR_context, VKI_SETCONTEXT, (UWord)uc);
2897 # endif
2902 Kill this thread. Makes it leave any syscall it might be currently
2903 blocked in, and return to the scheduler. This doesn't mark the thread
2904 as exiting; that's the caller's job.
2906 static void sigvgkill_handler(int signo, vki_siginfo_t *si,
2907 struct vki_ucontext *uc)
2909 ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
2910 ThreadStatus at_signal = VG_(threads)[tid].status;
2912 if (VG_(clo_trace_signals))
2913 VG_(dmsg)("sigvgkill for lwp %d tid %u\n", VG_(gettid)(), tid);
2915 VG_(acquire_BigLock)(tid, "sigvgkill_handler");
2917 vg_assert(signo == VG_SIGVGKILL);
2918 vg_assert(si->si_signo == signo);
2920 /* jrs 2006 August 3: the following assertion seems incorrect to
2921 me, and fails on AIX. sigvgkill could be sent to a thread which
2922 is runnable - see VG_(nuke_all_threads_except) in the scheduler.
2923 Hence comment these out ..
2925 vg_assert(VG_(threads)[tid].status == VgTs_WaitSys);
2926 VG_(post_syscall)(tid);
2928 and instead do:
2930 if (at_signal == VgTs_WaitSys)
2931 VG_(post_syscall)(tid);
2932 /* jrs 2006 August 3 ends */
2934 resume_scheduler(tid);
2936 VG_(core_panic)("sigvgkill_handler couldn't return to the scheduler\n");
2939 static __attribute((unused))
2940 void pp_ksigaction ( vki_sigaction_toK_t* sa )
2942 Int i;
2943 VG_(printf)("pp_ksigaction: handler %p, flags 0x%x, restorer %p\n",
2944 sa->ksa_handler,
2945 (UInt)sa->sa_flags,
2946 # if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
2947 !defined(VGO_solaris)
2948 sa->sa_restorer
2949 # else
2950 (void*)0
2951 # endif
2953 VG_(printf)("pp_ksigaction: { ");
2954 for (i = 1; i <= VG_(max_signal); i++)
2955 if (VG_(sigismember)(&(sa->sa_mask),i))
2956 VG_(printf)("%d ", i);
2957 VG_(printf)("}\n");
2961 Force signal handler to default
2963 void VG_(set_default_handler)(Int signo)
2965 vki_sigaction_toK_t sa;
2967 sa.ksa_handler = VKI_SIG_DFL;
2968 sa.sa_flags = 0;
2969 # if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
2970 !defined(VGO_solaris)
2971 sa.sa_restorer = 0;
2972 # endif
2973 VG_(sigemptyset)(&sa.sa_mask);
2975 VG_(do_sys_sigaction)(signo, &sa, NULL);
2979 Poll for pending signals, and set the next one up for delivery.
2981 void VG_(poll_signals)(ThreadId tid)
2983 vki_siginfo_t si, *sip;
2984 vki_sigset_t pollset;
2985 ThreadState *tst = VG_(get_ThreadState)(tid);
2986 vki_sigset_t saved_mask;
2988 if (tst->exitreason != VgSrc_None ) {
2989 /* This task has been requested to die (e.g. due to a fatal signal
2990 received by the process, or because of a call to the exit syscall).
2991 So, we cannot poll new signals, as we are supposed to die asap.
2992 If we were to poll and deliver
2993 a new (maybe fatal) signal, this could cause a deadlock, as
2994 this thread would believe it has to terminate the other threads
2995 and wait for them to die, while we already have a thread doing
2996 that. */
2997 if (VG_(clo_trace_signals))
2998 VG_(dmsg)("poll_signals: not polling as thread %u is exitreason %s\n",
2999 tid, VG_(name_of_VgSchedReturnCode)(tst->exitreason));
3000 return;
3003 /* look for all the signals this thread isn't blocking */
3004 /* pollset = ~tst->sig_mask */
3005 VG_(sigcomplementset)( &pollset, &tst->sig_mask );
3007 block_all_host_signals(&saved_mask); // protect signal queue
3009 /* First look for any queued pending signals */
3010 sip = next_queued(tid, &pollset); /* this thread */
3012 if (sip == NULL)
3013 sip = next_queued(0, &pollset); /* process-wide */
3015 /* If there was nothing queued, ask the kernel for a pending signal */
3016 if (sip == NULL && VG_(sigtimedwait_zero)(&pollset, &si) > 0) {
3017 if (VG_(clo_trace_signals))
3018 VG_(dmsg)("poll_signals: got signal %d for thread %u exitreason %s\n",
3019 si.si_signo, tid,
3020 VG_(name_of_VgSchedReturnCode)(tst->exitreason));
3021 sip = &si;
3024 if (sip != NULL) {
3025 /* OK, something to do; deliver it */
3026 if (VG_(clo_trace_signals))
3027 VG_(dmsg)("Polling found signal %d for tid %u exitreason %s\n",
3028 sip->si_signo, tid,
3029 VG_(name_of_VgSchedReturnCode)(tst->exitreason));
3030 if (!is_sig_ign(sip, tid))
3031 deliver_signal(tid, sip, NULL);
3032 else if (VG_(clo_trace_signals))
3033 VG_(dmsg)(" signal %d ignored\n", sip->si_signo);
3035 sip->si_signo = 0; /* remove from signal queue, if that's
3036 where it came from */
3039 restore_all_host_signals(&saved_mask);
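/* The kernel-side check above is essentially a zero-timeout
   sigtimedwait().  A standalone POSIX sketch of the idea
   (illustrative; the helper name is made up): */
#if 0
#include <signal.h>
#include <time.h>

/* Dequeue one pending signal in 'pollset' without blocking; returns
   the signal number, or -1 (EAGAIN) if nothing is pending. */
static int poll_one_signal(const sigset_t *pollset, siginfo_t *si)
{
   struct timespec zero = { 0, 0 };
   return sigtimedwait(pollset, si, &zero);
}
#endif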
3042 /* At startup, copy the process' real signal state to the SCSS.
3043 Whilst doing this, block all real signals. Then calculate SKSS and
3044 set the kernel to that. Also initialise DCSS.
3046 void VG_(sigstartup_actions) ( void )
3048 Int i, ret, vKI_SIGRTMIN;
3049 vki_sigset_t saved_procmask;
3050 vki_sigaction_fromK_t sa;
3052 VG_(memset)(&scss, 0, sizeof(scss));
3053 VG_(memset)(&skss, 0, sizeof(skss));
3055 # if defined(VKI_SIGRTMIN)
3056 vKI_SIGRTMIN = VKI_SIGRTMIN;
3057 # else
3058 vKI_SIGRTMIN = 0; /* eg Darwin */
3059 # endif
3061 /* VG_(printf)("SIGSTARTUP\n"); */
3062 /* Block all signals. saved_procmask remembers the previous mask,
3063 which the first thread inherits.
3065 block_all_host_signals( &saved_procmask );
3067 /* Copy per-signal settings to SCSS. */
3068 for (i = 1; i <= _VKI_NSIG; i++) {
3069 /* Get the old host action */
3070 ret = VG_(sigaction)(i, NULL, &sa);
3072 # if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin) \
3073 || defined(VGP_nanomips_linux)
3074 /* apparently we may not even ask about the disposition of these
3075 signals, let alone change them */
3076 if (ret != 0 && (i == VKI_SIGKILL || i == VKI_SIGSTOP))
3077 continue;
3078 # endif
3080 if (ret != 0)
3081 break;
3083 /* Try setting it back to see if this signal is really
3084 available */
3085 if (vKI_SIGRTMIN > 0 /* it actually exists on this platform */
3086 && i >= vKI_SIGRTMIN) {
3087 vki_sigaction_toK_t tsa, sa2;
3089 tsa.ksa_handler = (void *)sync_signalhandler;
3090 tsa.sa_flags = VKI_SA_SIGINFO;
3091 # if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
3092 !defined(VGO_solaris)
3093 tsa.sa_restorer = 0;
3094 # endif
3095 VG_(sigfillset)(&tsa.sa_mask);
3097 /* try setting it to some arbitrary handler */
3098 if (VG_(sigaction)(i, &tsa, NULL) != 0) {
3099 /* failed - not really usable */
3100 break;
3103 VG_(convert_sigaction_fromK_to_toK)( &sa, &sa2 );
3104 ret = VG_(sigaction)(i, &sa2, NULL);
3105 vg_assert(ret == 0);
3108 VG_(max_signal) = i;
3110 if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
3111 VG_(printf)("snaffling handler 0x%lx for signal %d\n",
3112 (Addr)(sa.ksa_handler), i );
3114 scss.scss_per_sig[i].scss_handler = sa.ksa_handler;
3115 scss.scss_per_sig[i].scss_flags = sa.sa_flags;
3116 scss.scss_per_sig[i].scss_mask = sa.sa_mask;
3118 scss.scss_per_sig[i].scss_restorer = NULL;
3119 # if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
3120 !defined(VGO_solaris)
3121 scss.scss_per_sig[i].scss_restorer = sa.sa_restorer;
3122 # endif
3124 scss.scss_per_sig[i].scss_sa_tramp = NULL;
3125 # if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
3126 scss.scss_per_sig[i].scss_sa_tramp = NULL;
3127 /*sa.sa_tramp;*/
3128 /* We can't know what it was, because Darwin's sys_sigaction
3129 doesn't tell us. */
3130 # endif
3133 if (VG_(clo_trace_signals))
3134 VG_(dmsg)("Max kernel-supported signal is %d, VG_SIGVGKILL is %d\n",
3135 VG_(max_signal), VG_SIGVGKILL);
3137 /* Our private internal signals are treated as ignored */
3138 scss.scss_per_sig[VG_SIGVGKILL].scss_handler = VKI_SIG_IGN;
3139 scss.scss_per_sig[VG_SIGVGKILL].scss_flags = VKI_SA_SIGINFO;
3140 VG_(sigfillset)(&scss.scss_per_sig[VG_SIGVGKILL].scss_mask);
3142 /* Copy the process' signal mask into the root thread. */
3143 vg_assert(VG_(threads)[1].status == VgTs_Init);
3144 for (i = 2; i < VG_N_THREADS; i++)
3145 vg_assert(VG_(threads)[i].status == VgTs_Empty);
3147 VG_(threads)[1].sig_mask = saved_procmask;
3148 VG_(threads)[1].tmp_sig_mask = saved_procmask;
3150 /* Calculate SKSS and apply it. This also sets the initial kernel
3151 mask we need to run with. */
3152 handle_SCSS_change( True /* forced update */ );
3154 /* Leave with all signals still blocked; the thread scheduler loop
3155 will set the appropriate mask at the appropriate time. */
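/* The availability probe in the loop above -- query the old action,
   try installing an arbitrary handler, then put the old action back
   -- can be sketched standalone like this (illustrative only): */
#if 0
#include <signal.h>

/* Nonzero if a handler can be installed for signo, i.e. the signal
   is usable; fails for SIGKILL/SIGSTOP and for realtime signals
   reserved by the threading library. */
static int signal_is_usable(int signo)
{
   struct sigaction old, probe;
   if (sigaction(signo, NULL, &old) != 0)
      return 0;                            /* cannot even query it */
   probe.sa_handler = SIG_IGN;
   probe.sa_flags = 0;
   sigemptyset(&probe.sa_mask);
   if (sigaction(signo, &probe, NULL) != 0)
      return 0;                            /* not really available */
   sigaction(signo, &old, NULL);           /* restore */
   return 1;
}
#endif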
3158 /*--------------------------------------------------------------------*/
3159 /*--- end ---*/
3160 /*--------------------------------------------------------------------*/