/* -*- mode: C; c-basic-offset: 3; -*- */

/*--------------------------------------------------------------------*/
/*--- Implementation of POSIX signals.                 m_signals.c ---*/
/*--------------------------------------------------------------------*/
/*
   This file is part of Valgrind, a dynamic binary instrumentation
   framework.

   Copyright (C) 2000-2017 Julian Seward
      jseward@acm.org

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, see <http://www.gnu.org/licenses/>.

   The GNU General Public License is contained in the file COPYING.
*/
/*
   Signal handling.

   There are 4 distinct classes of signal:

   1. Synchronous, instruction-generated (SIGILL, FPE, BUS, SEGV and
   TRAP): these are signals as a result of an instruction fault.  If
   we get one while running client code, then we just do the
   appropriate thing.  If it happens while running Valgrind code, then
   it indicates a Valgrind bug.  Note that we "manually" implement
   automatic stack growth, such that if a fault happens near the
   client process stack, it is extended in the same way the kernel
   would, and the fault is never reported to the client program.

   2. Asynchronous variants of the above signals: If the kernel tries
   to deliver a sync signal while it is blocked, it just kills the
   process.  Therefore, we can't block those signals if we want to be
   able to report on bugs in Valgrind.  This means that we're also
   open to receiving those signals from other processes, sent with
   kill.  We could get away with just dropping them, since they aren't
   really signals that processes send to each other.

   3. Synchronous, general signals.  If a thread/process sends itself
   a signal with kill, it's expected to be synchronous: ie, the signal
   will have been delivered by the time the syscall finishes.

   4. Asynchronous, general signals.  All other signals, sent by
   another process with kill.  These are generally blocked, except for
   two special cases: we poll for them each time we're about to run a
   thread for a time quantum, and while running blocking syscalls.

   In addition, we reserve one signal for internal use: SIGVGKILL.
   SIGVGKILL is used to terminate threads.  When one thread wants
   another to exit, it will set its exitreason and send it SIGVGKILL
   if it appears to be blocked in a syscall.

   We use a kernel thread for each application thread.  When the
   thread allows itself to be open to signals, it sets the thread
   signal mask to what the client application set it to.  This means
   that we get the kernel to do all signal routing: under Valgrind,
   signals get delivered in the same way as in the non-Valgrind case
   (the exception being for the sync signal set, since they're almost
   always unblocked).

   Some more details...

   First off, we take note of the client's requests (via sys_sigaction
   and sys_sigprocmask) to set the signal state (handlers for each
   signal, which are process-wide, + a mask for each signal, which is
   per-thread).  This info is duly recorded in the SCSS (static Client
   signal state) in m_signals.c, and if the client later queries what
   the state is, we merely fish the relevant info out of SCSS and give
   it back.

   However, we set the real signal state in the kernel to something
   entirely different.  This is recorded in SKSS, the static Kernel
   signal state.  What's nice (to the extent that anything is nice w.r.t
   signals) is that there's a pure function to calculate SKSS from SCSS,
   calculate_SKSS_from_SCSS.  So when the client changes SCSS then we
   recompute the associated SKSS and apply any changes from the previous
   SKSS through to the kernel.
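
   As an illustrative sketch only (the real update logic is
   handle_SCSS_change, further down in this file), that
   recompute-and-apply step amounts to:

      calculate_SKSS_from_SCSS( &skss_new );
      for (sig = 1; sig <= VG_(max_signal); sig++)
         if (skss_new differs from skss_old for sig)
            VG_(sigaction)( sig, ... );      -- push the change down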

   Now, that said, the general scheme we have now is, that regardless of
   what the client puts into the SCSS (viz, asks for), what we would
   like to do is as follows:

   (1) run code on the virtual CPU with all signals blocked

   (2) at convenient moments for us (that is, when the VCPU stops, and
       control is back with the scheduler), ask the kernel "do you have
       any signals for me?"  and if it does, collect up the info, and
       deliver them to the client (by building sigframes).

   And that's almost what we do.  The signal polling is done by
   VG_(poll_signals), which calls through to VG_(sigtimedwait_zero) to
   do the dirty work.  (of which more later).

   By polling signals, rather than catching them, we get to deal with
   them only at convenient moments, rather than having to recover from
   taking a signal while generated code is running.

   Now unfortunately .. the above scheme only works for so-called async
   signals.  An async signal is one which isn't associated with any
   particular instruction, eg Control-C (SIGINT).  For those, it doesn't
   matter if we don't deliver the signal to the client immediately; it
   only matters that we deliver it eventually.  Hence polling is OK.

   But the other group -- sync signals -- are all related by the fact
   that they are various ways for the host CPU to fail to execute an
   instruction: SIGILL, SIGSEGV, SIGFPE.  And they can't be deferred,
   because obviously if a host instruction can't execute, well then we
   have to immediately do Plan B, whatever that is.

   So the next approximation of what happens is:

   (1) run code on vcpu with all async signals blocked

   (2) at convenient moments (when NOT running the vcpu), poll for async
       signals.

       (1) and (2) together imply that if the host does deliver a signal
       to async_signalhandler while the VCPU is running, something's
       seriously wrong.

   (3) when running code on vcpu, don't block sync signals.  Instead
       register sync_signalhandler and catch any such via that.  Of
       course, that means an ugly recovery path if we do -- the
       sync_signalhandler has to longjmp, exiting out of the generated
       code, and the assembly-dispatcher thingy that runs it, and gets
       caught in m_scheduler, which then tells m_signals to deliver the
       signal.

   Now naturally (ha ha) even that might be tolerable, but there's
   something worse: dealing with signals delivered to threads in
   syscalls.

   Obviously from the above, SKSS's signal mask (viz, what we really run
   with) is way different from SCSS's signal mask (viz, what the client
   thread thought it asked for).  (eg) It may well be that the client
   did not block control-C, so that it just expects to drop dead if it
   receives ^C whilst blocked in a syscall, but by default we are
   running with all async signals blocked, and so that signal could be
   arbitrarily delayed, or perhaps even lost (not sure).

   So what we have to do, when doing any syscall which SfMayBlock, is to
   quickly switch in the SCSS-specified signal mask just before the
   syscall, and switch it back just afterwards, and hope that we don't
   get caught up in some weird race condition.  This is the primary
   purpose of the ultra-magical pieces of assembly code in
   coregrind/m_syswrap/syscall-<plat>.S
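
   In (non-atomic) C, that switch-around-the-syscall dance is roughly
   the following (a sketch; 'client_mask' stands for this thread's
   SCSS mask); the real assembly exists precisely because all three
   steps must be interruptible at any point without losing track of
   the mask:

      vki_sigset_t saved;
      VG_(sigprocmask)(VKI_SIG_SETMASK, &client_mask, &saved);
      ... do the blocking syscall ...
      VG_(sigprocmask)(VKI_SIG_SETMASK, &saved, NULL);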

   -----------

   The ways in which V can come to hear of signals that need to be
   forwarded to the client are as follows:

    sync signals: can arrive at any time whatsoever.  These are caught
                  by sync_signalhandler

    async signals:

       if    running generated code
       then  these are blocked, so we don't expect to catch them in
             async_signalhandler

       else
       if    thread is blocked in a syscall marked SfMayBlock
       then  signals may be delivered to async_signalhandler, since we
             temporarily unblocked them for the duration of the syscall,
             by using the real (SCSS) mask for this thread

       else  we're doing misc housekeeping activities (eg, making a
             translation, washing our hair, etc).  As in the normal case,
             these signals are blocked, but we can and do poll for them
             using VG_(poll_signals).

   Now, re VG_(poll_signals), it polls the kernel by doing
   VG_(sigtimedwait_zero).  This is trivial on Linux, since it's just a
   syscall.  But on Darwin and AIX, we have to cobble together the
   functionality in a tedious, longwinded and probably error-prone way.
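
   On Linux the zero-timeout wait is, in effect, the following
   (a user-level sketch using the POSIX API, not the code used here):

      struct timespec zero = { 0, 0 };
      siginfo_t info;
      int sig = sigtimedwait(&set, &info, &zero);
         -- returns a pending signal from 'set' if there is one,
         -- otherwise fails immediately with EAGAIN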

   Finally, if a gdb is debugging the process under valgrind,
   the signal can be ignored if gdb says to ignore it.  So, before
   resuming the scheduler/delivering the signal, a call to
   VG_(gdbserver_report_signal) is done.  If this returns True, the
   signal is delivered.
*/
#include "pub_core_basics.h"
#include "pub_core_vki.h"
#include "pub_core_vkiscnums.h"
#include "pub_core_debuglog.h"
#include "pub_core_threadstate.h"
#include "pub_core_xarray.h"
#include "pub_core_clientstate.h"
#include "pub_core_aspacemgr.h"
#include "pub_core_errormgr.h"
#include "pub_core_gdbserver.h"
#include "pub_core_hashtable.h"
#include "pub_core_libcbase.h"
#include "pub_core_libcassert.h"
#include "pub_core_libcprint.h"
#include "pub_core_libcproc.h"
#include "pub_core_libcsignal.h"
#include "pub_core_machine.h"
#include "pub_core_mallocfree.h"
#include "pub_core_options.h"
#include "pub_core_scheduler.h"
#include "pub_core_signals.h"
#include "pub_core_sigframe.h"      // For VG_(sigframe_create)()
#include "pub_core_stacks.h"        // For VG_(change_stack)()
#include "pub_core_stacktrace.h"    // For VG_(get_and_pp_StackTrace)()
#include "pub_core_syscall.h"
#include "pub_core_syswrap.h"
#include "pub_core_tooliface.h"
#include "pub_core_coredump.h"

/* ---------------------------------------------------------------------
   Forwards decls.
   ------------------------------------------------------------------ */

static void sync_signalhandler  ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void async_signalhandler ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void sigvgkill_handler   ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
/* Maximum usable signal. */
Int VG_(max_signal) = _VKI_NSIG;

#define N_QUEUED_SIGNALS   8

typedef struct SigQueue {
   Int next;
   vki_siginfo_t sigs[N_QUEUED_SIGNALS];
} SigQueue;

/* Hash table of PIDs from which SIGCHLD is ignored.  */
VgHashTable *ht_sigchld_ignore = NULL;
/* ------ Macros for pulling stuff out of ucontexts ------ */

/* Q: what does VG_UCONTEXT_SYSCALL_SYSRES do?  A: let's suppose the
   machine context (uc) reflects the situation that a syscall had just
   completed, quite literally -- that is, that the program counter was
   now at the instruction following the syscall.  (or we're slightly
   downstream, but we're sure no relevant register has yet changed
   value.)  Then VG_UCONTEXT_SYSCALL_SYSRES returns a SysRes reflecting
   the result of the syscall; it does this by fishing relevant bits of
   the machine state out of the uc.  Of course if the program counter
   was somewhere else entirely then the result is likely to be
   meaningless, so the caller of VG_UCONTEXT_SYSCALL_SYSRES has to be
   very careful to pay attention to the results only when it is sure
   that the said constraint on the program counter is indeed valid. */
#if defined(VGP_x86_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.eip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.esp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.eax into a SysRes. */ \
      VG_(mk_SysRes_x86_linux)( (uc)->uc_mcontext.eax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.eip);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.esp);    \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.ebp;   \
      }

#elif defined(VGP_amd64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.rip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.rsp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.rax into a SysRes. */ \
      VG_(mk_SysRes_amd64_linux)( (uc)->uc_mcontext.rax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (uc)->uc_mcontext.rip;             \
        (srP)->r_sp = (uc)->uc_mcontext.rsp;             \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.rbp; \
      }
#elif defined(VGP_ppc32_linux)
/* Comments from Paul Mackerras 25 Nov 05:

   > I'm tracking down a problem where V's signal handling doesn't
   > work properly on a ppc440gx running 2.4.20.  The problem is that
   > the ucontext being presented to V's sighandler seems completely
   > bogus.

   > V's kernel headers and hence ucontext layout are derived from
   > 2.6.9.  I compared include/asm-ppc/ucontext.h from 2.4.20 and
   > 2.6.13.

   > Can I just check my interpretation: the 2.4.20 one contains the
   > uc_mcontext field in line, whereas the 2.6.13 one has a pointer
   > to said struct?  And so if V is using the 2.6.13 struct then a
   > 2.4.20 one will make no sense to it.

   Not quite... what is inline in the 2.4.20 version is a
   sigcontext_struct, not an mcontext.  The sigcontext looks like
   this:

     struct sigcontext_struct {
        unsigned long   _unused[4];
        int             signal;
        unsigned long   handler;
        unsigned long   oldmask;
        struct pt_regs  *regs;
     };

   The regs pointer of that struct ends up at the same offset as the
   uc_regs of the 2.6 struct ucontext, and a struct pt_regs is the
   same as the mc_gregs field of the mcontext.  In fact the integer
   regs are followed in memory by the floating point regs on 2.4.20.

   Thus if you are using the 2.6 definitions, it should work on 2.4.20
   provided that you go via uc->uc_regs rather than looking in
   uc->uc_mcontext directly.

   There is another subtlety: 2.4.20 doesn't save the vector regs when
   delivering a signal, and 2.6.x only saves the vector regs if the
   process has ever used an altivec instruction.  If 2.6.x does save
   the vector regs, it sets the MSR_VEC bit in
   uc->uc_regs->mc_gregs[PT_MSR], otherwise it clears it.  That bit
   will always be clear under 2.4.20.  So you can use that bit to tell
   whether uc->uc_regs->mc_vregs is valid. */
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((uc)->uc_regs->mc_gregs[VKI_PT_NIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((uc)->uc_regs->mc_gregs[VKI_PT_R1])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                            \
      /* Convert the values in uc_mcontext r3,cr into a SysRes. */  \
      VG_(mk_SysRes_ppc32_linux)(                                   \
         (uc)->uc_regs->mc_gregs[VKI_PT_R3],                        \
         (((uc)->uc_regs->mc_gregs[VKI_PT_CCR] >> 28) & 1)          \
      )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                        \
      { (srP)->r_pc = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_NIP]);      \
        (srP)->r_sp = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_R1]);       \
        (srP)->misc.PPC32.r_lr = (uc)->uc_regs->mc_gregs[VKI_PT_LNK];    \
      }
#elif defined(VGP_ppc64be_linux) || defined(VGP_ppc64le_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_NIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_R1])
   /* Dubious hack: if there is an error, only consider the lowest 8
      bits of r3.  memcheck/tests/post-syscall shows a case where an
      interrupted syscall should have produced a ucontext with 0x4
      (VKI_EINTR) in r3 but is in fact producing 0x204. */
   /* Awaiting clarification from PaulM.  Evidently 0x204 is
      ERESTART_RESTARTBLOCK, which shouldn't have made it into user
      space. */
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( struct vki_ucontext* uc )
   {
      ULong err = (uc->uc_mcontext.gp_regs[VKI_PT_CCR] >> 28) & 1;
      ULong r3  = uc->uc_mcontext.gp_regs[VKI_PT_R3];
      ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
      ThreadState *tst = VG_(get_ThreadState)(tid);

      if (err) r3 &= 0xFF;
      return VG_(mk_SysRes_ppc64_linux)( r3, err,
                                         tst->arch.vex.guest_syscall_flag);
   }
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                       \
      { (srP)->r_pc = (uc)->uc_mcontext.gp_regs[VKI_PT_NIP];            \
        (srP)->r_sp = (uc)->uc_mcontext.gp_regs[VKI_PT_R1];             \
        (srP)->misc.PPC64.r_lr = (uc)->uc_mcontext.gp_regs[VKI_PT_LNK]; \
      }
#elif defined(VGP_arm_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.arm_pc)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.arm_sp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                           \
      /* Convert the value in uc_mcontext.arm_r0 into a SysRes. */ \
      VG_(mk_SysRes_arm_linux)( (uc)->uc_mcontext.arm_r0 )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)       \
      { (srP)->r_pc = (uc)->uc_mcontext.arm_pc;         \
        (srP)->r_sp = (uc)->uc_mcontext.arm_sp;         \
        (srP)->misc.ARM.r14 = (uc)->uc_mcontext.arm_lr; \
        (srP)->misc.ARM.r12 = (uc)->uc_mcontext.arm_ip; \
        (srP)->misc.ARM.r11 = (uc)->uc_mcontext.arm_fp; \
        (srP)->misc.ARM.r7  = (uc)->uc_mcontext.arm_r7; \
      }
#elif defined(VGP_arm64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((UWord)((uc)->uc_mcontext.pc))
#  define VG_UCONTEXT_STACK_PTR(uc)       ((UWord)((uc)->uc_mcontext.sp))
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                            \
      /* Convert the value in uc_mcontext.regs[0] into a SysRes. */ \
      VG_(mk_SysRes_arm64_linux)( (uc)->uc_mcontext.regs[0] )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)           \
      { (srP)->r_pc = (uc)->uc_mcontext.pc;                 \
        (srP)->r_sp = (uc)->uc_mcontext.sp;                 \
        (srP)->misc.ARM64.x29 = (uc)->uc_mcontext.regs[29]; \
        (srP)->misc.ARM64.x30 = (uc)->uc_mcontext.regs[30]; \
      }
#elif defined(VGP_x86_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__eip;
   }
   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__esp;
   }
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* this is complicated by the problem that there are 3 different
         kinds of syscalls, each with its own return convention.
         NB: scclass is a host word, hence UWord is good for both
         amd64-darwin and x86-darwin */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      UInt carry = 1 & ss->__eflags;
      UInt err = 0;
      UInt wLO = 0;
      UInt wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__eax;
            wHI = ss->__edx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__eax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__eax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_x86_darwin)( scclass, err ? True : False,
                                        wHI, wLO );
   }
   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)(ucV);
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__eip);
      srP->r_sp = (ULong)(ss->__esp);
      srP->misc.X86.r_ebp = (UInt)(ss->__ebp);
   }
#elif defined(VGP_amd64_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rip;
   }
   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rsp;
   }
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* This is copied from the x86-darwin case.  I'm not sure if it
         is correct. */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      ULong carry = 1 & ss->__rflags;
      ULong err = 0;
      ULong wLO = 0;
      ULong wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__rax;
            wHI = ss->__rdx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__rax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__rax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_amd64_darwin)( scclass, err ? True : False,
                                          wHI, wLO );
   }
   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__rip);
      srP->r_sp = (ULong)(ss->__rsp);
      srP->misc.AMD64.r_rbp = (ULong)(ss->__rbp);
   }
#elif defined(VGP_x86_freebsd)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((UWord)(uc)->uc_mcontext.eip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((UWord)(uc)->uc_mcontext.esp)
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((UWord)(uc)->uc_mcontext.ebp)
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((UWord)(uc)->uc_mcontext.eax)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.eax into a SysRes. */ \
      VG_(mk_SysRes_x86_freebsd)( (uc)->uc_mcontext.eax,        \
         (uc)->uc_mcontext.edx, ((uc)->uc_mcontext.eflags & 1) != 0 ? True : False)
#  define VG_UCONTEXT_LINK_REG(uc)        0 /* What is an LR for anyway? */
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.eip);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.esp);    \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.ebp;   \
      }

#elif defined(VGP_amd64_freebsd)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.rip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.rsp)
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.rbp)
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((uc)->uc_mcontext.rax)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.rax into a SysRes. */ \
      VG_(mk_SysRes_amd64_freebsd)( (uc)->uc_mcontext.rax,      \
         (uc)->uc_mcontext.rdx, ((uc)->uc_mcontext.rflags & 1) != 0 ? True : False )
#  define VG_UCONTEXT_LINK_REG(uc)        0 /* No LR on amd64 either */
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (uc)->uc_mcontext.rip;             \
        (srP)->r_sp = (uc)->uc_mcontext.rsp;             \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.rbp; \
      }
#elif defined(VGP_s390x_linux)

#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.regs.psw.addr)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[15])
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[11])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                     \
      VG_(mk_SysRes_s390x_linux)((uc)->uc_mcontext.regs.gprs[2])
#  define VG_UCONTEXT_LINK_REG(uc) ((uc)->uc_mcontext.regs.gprs[14])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                  \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.regs.psw.addr);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.regs.gprs[15]);    \
        (srP)->misc.S390X.r_fp = (uc)->uc_mcontext.regs.gprs[11];  \
        (srP)->misc.S390X.r_lr = (uc)->uc_mcontext.regs.gprs[14];  \
        (srP)->misc.S390X.r_f0 = (uc)->uc_mcontext.fpregs.fprs[0]; \
        (srP)->misc.S390X.r_f1 = (uc)->uc_mcontext.fpregs.fprs[1]; \
        (srP)->misc.S390X.r_f2 = (uc)->uc_mcontext.fpregs.fprs[2]; \
        (srP)->misc.S390X.r_f3 = (uc)->uc_mcontext.fpregs.fprs[3]; \
        (srP)->misc.S390X.r_f4 = (uc)->uc_mcontext.fpregs.fprs[4]; \
        (srP)->misc.S390X.r_f5 = (uc)->uc_mcontext.fpregs.fprs[5]; \
        (srP)->misc.S390X.r_f6 = (uc)->uc_mcontext.fpregs.fprs[6]; \
        (srP)->misc.S390X.r_f7 = (uc)->uc_mcontext.fpregs.fprs[7]; \
      }
#elif defined(VGP_mips32_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   ((UWord)(((uc)->uc_mcontext.sc_pc)))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((UWord)((uc)->uc_mcontext.sc_regs[29]))
#  define VG_UCONTEXT_FRAME_PTR(uc)   ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                               \
      /* Convert the value in uc_mcontext.sc_regs[2] into a SysRes. */ \
      VG_(mk_SysRes_mips32_linux)( (uc)->uc_mcontext.sc_regs[2],       \
                                   (uc)->uc_mcontext.sc_regs[3],       \
                                   (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS32.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS32.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS32.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_mips64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   (((uc)->uc_mcontext.sc_pc))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((uc)->uc_mcontext.sc_regs[29])
#  define VG_UCONTEXT_FRAME_PTR(uc)   ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                               \
      /* Convert the value in uc_mcontext.sc_regs[2] into a SysRes. */ \
      VG_(mk_SysRes_mips64_linux)((uc)->uc_mcontext.sc_regs[2],        \
                                  (uc)->uc_mcontext.sc_regs[3],        \
                                  (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS64.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS64.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS64.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }
#elif defined(VGP_nanomips_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   ((UWord)(((uc)->uc_mcontext.sc_pc)))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((UWord)((uc)->uc_mcontext.sc_regs[29]))
#  define VG_UCONTEXT_FRAME_PTR(uc)   ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                         \
      VG_(mk_SysRes_nanomips_linux)((uc)->uc_mcontext.sc_regs[4])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS32.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS32.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS32.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }
#elif defined(VGP_x86_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((Addr)(uc)->uc_mcontext.gregs[VKI_EIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((Addr)(uc)->uc_mcontext.gregs[VKI_UESP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                               \
      VG_(mk_SysRes_x86_solaris)((uc)->uc_mcontext.gregs[VKI_EFL] & 1, \
                                 (uc)->uc_mcontext.gregs[VKI_EAX],     \
                                 (uc)->uc_mcontext.gregs[VKI_EFL] & 1  \
                                 ? 0 : (uc)->uc_mcontext.gregs[VKI_EDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                  \
      { (srP)->r_pc = (ULong)(uc)->uc_mcontext.gregs[VKI_EIP];     \
        (srP)->r_sp = (ULong)(uc)->uc_mcontext.gregs[VKI_UESP];    \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.gregs[VKI_EBP];  \
      }

#elif defined(VGP_amd64_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RSP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                                     \
      VG_(mk_SysRes_amd64_solaris)((uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1, \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RAX],     \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1  \
                                   ? 0 : (uc)->uc_mcontext.gregs[VKI_REG_RDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                        \
      { (srP)->r_pc = (uc)->uc_mcontext.gregs[VKI_REG_RIP];              \
        (srP)->r_sp = (uc)->uc_mcontext.gregs[VKI_REG_RSP];              \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.gregs[VKI_REG_RBP];  \
      }
#else
#  error Unknown platform
#endif

/* ------ Macros for pulling stuff out of siginfos ------ */

/* These macros allow use of uniform names when working with
   both the Linux and Darwin vki definitions. */
#if defined(VGO_linux)
#  define VKI_SIGINFO_si_addr  _sifields._sigfault._addr
#  define VKI_SIGINFO_si_pid   _sifields._kill._pid
#elif defined(VGO_darwin) || defined(VGO_solaris) || defined(VGO_freebsd)
#  define VKI_SIGINFO_si_addr  si_addr
#  define VKI_SIGINFO_si_pid   si_pid
#else
#  error Unknown OS
#endif

/* ---------------------------------------------------------------------
   HIGH LEVEL STUFF TO DO WITH SIGNALS: POLICY (MOSTLY)
   ------------------------------------------------------------------ */

/* ---------------------------------------------------------------------
   Signal state for this process.
   ------------------------------------------------------------------ */


/* Base-ment of these arrays[_VKI_NSIG].

   Valid signal numbers are 1 .. _VKI_NSIG inclusive.
   Rather than subtracting 1 for indexing these arrays, which
   is tedious and error-prone, they are simply dimensioned 1 larger,
   and entry [0] is not used.
 */

/* -----------------------------------------------------
   Static client signal state (SCSS).  This is the state
   that the client thinks it has the kernel in.
   SCSS records verbatim the client's settings.  These
   are mashed around only when SKSS is calculated from it.
   -------------------------------------------------- */

typedef
   struct {
      void* scss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN or ptr to
                              client's handler */
      UInt  scss_flags;
      vki_sigset_t scss_mask;
      void* scss_restorer; /* where sigreturn goes */
      void* scss_sa_tramp; /* sa_tramp setting, Darwin only */
      /* re _restorer and _sa_tramp, we merely record the values
         supplied when the client does 'sigaction' and give them back
         when requested.  Otherwise they are simply ignored. */
   }
   SCSS_Per_Signal;

typedef
   struct {
      /* per-signal info */
      SCSS_Per_Signal scss_per_sig[1+_VKI_NSIG];

      /* Additional elements to SCSS not stored here:
         - for each thread, the thread's blocking mask
         - for each thread in WaitSIG, the set of waited-on sigs
      */
   }
   SCSS;

static SCSS scss;

/* -----------------------------------------------------
   Static kernel signal state (SKSS).  This is the state
   that we have the kernel in.  It is computed from SCSS.
   -------------------------------------------------- */

/* Let's do:
     sigprocmask assigns to all thread masks
     so that at least everything is always consistent
   Flags:
     SA_SIGINFO -- we always set it, and honour it for the client
     SA_NOCLDSTOP -- passed to kernel
     SA_ONESHOT or SA_RESETHAND -- pass through
     SA_RESTART -- we observe this but set our handlers to always restart
                   (this doesn't apply to the Solaris port)
     SA_NOMASK or SA_NODEFER -- we observe this, but our handlers block everything
     SA_ONSTACK -- pass through
     SA_NOCLDWAIT -- pass through
*/

typedef
   struct {
      void* skss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN
                              or ptr to our handler */
      UInt skss_flags;
      /* There is no skss_mask, since we know that we will always ask
         for all signals to be blocked in our sighandlers. */
      /* Also there is no skss_restorer. */
   }
   SKSS_Per_Signal;

typedef
   struct {
      SKSS_Per_Signal skss_per_sig[1+_VKI_NSIG];
   }
   SKSS;

static SKSS skss;

/* returns True if signal is to be ignored.
   To check this, possibly call gdbserver with tid. */
static Bool is_sig_ign(vki_siginfo_t *info, ThreadId tid)
{
   vg_assert(info->si_signo >= 1 && info->si_signo <= _VKI_NSIG);

   /* If VG_(gdbserver_report_signal) tells to report the signal,
      then verify if this signal is not to be ignored.  GDB might have
      modified si_signo, so we check after the call to gdbserver. */
   return !VG_(gdbserver_report_signal) (info, tid)
      || scss.scss_per_sig[info->si_signo].scss_handler == VKI_SIG_IGN;
}

/* ---------------------------------------------------------------------
   Compute the SKSS required by the current SCSS.
   ------------------------------------------------------------------ */

static
void pp_SKSS ( void )
{
   Int sig;
   VG_(printf)("\n\nSKSS:\n");
   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      VG_(printf)("sig %d:  handler %p,  flags 0x%x\n", sig,
                  skss.skss_per_sig[sig].skss_handler,
                  skss.skss_per_sig[sig].skss_flags );
   }
}

/* This is the core, clever bit.  Computation is as follows:

   For each signal
      handler = if client has a handler, then our handler
                else if client is DFL, then our handler as well
                else (client must be IGN)
                      then handler is IGN
*/
static
void calculate_SKSS_from_SCSS ( SKSS* dst )
{
   Int   sig;
   UInt  scss_flags;
   UInt  skss_flags;

   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      void *skss_handler;
      void *scss_handler;

      scss_handler = scss.scss_per_sig[sig].scss_handler;
      scss_flags   = scss.scss_per_sig[sig].scss_flags;

      switch(sig) {
      case VKI_SIGSEGV:
      case VKI_SIGBUS:
      case VKI_SIGFPE:
      case VKI_SIGILL:
      case VKI_SIGTRAP:
#if defined(VGO_freebsd)
      case VKI_SIGSYS:
#endif
         /* For these, we always want to catch them and report, even
            if the client code doesn't. */
         skss_handler = sync_signalhandler;
         break;

      case VKI_SIGCONT:
         /* Let the kernel handle SIGCONT unless the client is actually
            catching it. */
      case VKI_SIGCHLD:
      case VKI_SIGWINCH:
      case VKI_SIGURG:
         /* For signals which have a default action of Ignore,
            only set a handler if the client has set a signal handler.
            Otherwise the kernel will interrupt a syscall which
            wouldn't have otherwise been interrupted. */
         if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_DFL)
            skss_handler = VKI_SIG_DFL;
         else if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_IGN)
            skss_handler = VKI_SIG_IGN;
         else
            skss_handler = async_signalhandler;
         break;

      default:
         // VKI_SIGVG* are runtime variables, so we can't make them
         // cases in the switch, so we handle them in the 'default' case.
         if (sig == VG_SIGVGKILL)
            skss_handler = sigvgkill_handler;
         else {
            if (scss_handler == VKI_SIG_IGN)
               skss_handler = VKI_SIG_IGN;
            else
               skss_handler = async_signalhandler;
         }
         break;
      }

      /* Flags */

      skss_flags = 0;

      /* SA_NOCLDSTOP, SA_NOCLDWAIT: pass to kernel */
      skss_flags |= scss_flags & (VKI_SA_NOCLDSTOP | VKI_SA_NOCLDWAIT);

      /* SA_ONESHOT: ignore client setting */

#     if !defined(VGO_solaris)
      /* SA_RESTART: ignore client setting and always set it for us.
         Though we never rely on the kernel to restart a
         syscall, we observe whether it wanted to restart the syscall
         or not, which is needed by
         VG_(fixup_guest_state_after_syscall_interrupted) */
      skss_flags |= VKI_SA_RESTART;
#     else
      /* The above does not apply to the Solaris port, where the kernel does
         not directly restart syscalls, but instead it checks SA_RESTART flag
         and if it is set then it returns ERESTART to libc and the library
         actually restarts the syscall. */
      skss_flags |= scss_flags & VKI_SA_RESTART;
#     endif

      /* SA_NOMASK: ignore it */

      /* SA_ONSTACK: client setting is irrelevant here */
      /* We don't set a signal stack, so ignore */

      /* always ask for SA_SIGINFO */
      if (skss_handler != VKI_SIG_IGN && skss_handler != VKI_SIG_DFL)
         skss_flags |= VKI_SA_SIGINFO;

      /* use our own restorer */
      skss_flags |= VKI_SA_RESTORER;

      /* Create SKSS entry for this signal. */
      if (sig != VKI_SIGKILL && sig != VKI_SIGSTOP)
         dst->skss_per_sig[sig].skss_handler = skss_handler;
      else
         dst->skss_per_sig[sig].skss_handler = VKI_SIG_DFL;

      dst->skss_per_sig[sig].skss_flags = skss_flags;
   }

   /* Sanity checks. */
   vg_assert(dst->skss_per_sig[VKI_SIGKILL].skss_handler == VKI_SIG_DFL);
   vg_assert(dst->skss_per_sig[VKI_SIGSTOP].skss_handler == VKI_SIG_DFL);

   if (0)
      pp_SKSS();
}

/* ---------------------------------------------------------------------
   After a possible SCSS change, update SKSS and the kernel itself.
   ------------------------------------------------------------------ */

// We need two levels of macro-expansion here to convert __NR_rt_sigreturn
// to a number before converting it to a string... sigh.
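// For example (illustrative only; the actual number differs by platform
// -- on x86-linux __NR_rt_sigreturn happens to be 173):
//   MY_SIGRETURN(__NR_rt_sigreturn)
//     -> _MY_SIGRETURN(173)          // arg expanded by the outer macro
//     -> "... movl $173, %eax ..."   // then stringified via #name
// With only one level, #name would stringify to "__NR_rt_sigreturn"
// itself, which the assembler could not resolve.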
extern void my_sigreturn(void);

#if defined(VGP_x86_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movl $" #name ", %eax\n" \
   "   int $0x80\n" \
   ".previous\n"

#elif defined(VGP_amd64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movq $" #name ", %rax\n" \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_ppc32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   li 0, " #name "\n" \
   "   sc\n" \
   ".previous\n"

#elif defined(VGP_ppc64be_linux)
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".section \".opd\",\"aw\"\n" \
   ".align   3\n" \
   "my_sigreturn:\n" \
   ".quad    .my_sigreturn,.TOC.@tocbase,0\n" \
   ".previous\n" \
   ".type    .my_sigreturn,@function\n" \
   ".globl   .my_sigreturn\n" \
   ".my_sigreturn:\n" \
   "   li 0, " #name "\n" \
   "   sc\n"

#elif defined(VGP_ppc64le_linux)
/* Little Endian supports ELF version 2.  In the future, it may
 * support other versions.
 */
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".type    .my_sigreturn,@function\n" \
   "my_sigreturn:\n" \
   "#if _CALL_ELF == 2 \n" \
   "0: addis        2,12,.TOC.-0b@ha\n" \
   "   addi         2,2,.TOC.-0b@l\n" \
   "   .localentry  my_sigreturn,.-my_sigreturn\n" \
   "#endif \n" \
   "   sc\n" \
   "   .size my_sigreturn,.-my_sigreturn\n"
#elif defined(VGP_arm_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "   mov  r7, #" #name "\n\t" \
   "   svc  0x00000000\n" \
   ".previous\n"

#elif defined(VGP_arm64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "   mov  x8, #" #name "\n\t" \
   "   svc  0x0\n" \
   ".previous\n"

#elif defined(VGP_x86_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movl $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%eax\n" \
   "   int $0x80\n"

#elif defined(VGP_amd64_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movq $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%rax\n" \
   "   syscall\n"

#elif defined(VGP_s390x_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   svc " #name "\n" \
   ".previous\n"

#elif defined(VGP_mips32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $2, " #name "\n" /* apparently $2 is v0 */ \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_mips64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $2, " #name "\n" \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_nanomips_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $t4, " #name "\n" \
   "   syscall[32]\n" \
   ".previous\n"

#elif defined(VGP_x86_solaris) || defined(VGP_amd64_solaris)
/* Not used on Solaris. */
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "ud2\n" \
   ".previous\n"

#elif defined(VGP_x86_freebsd) || defined(VGP_amd64_freebsd)
/* Not used on FreeBSD */
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "ud2\n" \
   ".previous\n"

#else
#  error Unknown platform
#endif

#define MY_SIGRETURN(name)  _MY_SIGRETURN(name)
asm(
   MY_SIGRETURN(__NR_rt_sigreturn)
);

static void handle_SCSS_change ( Bool force_update )
{
   Int  res, sig;
   SKSS skss_old;
   vki_sigaction_toK_t   ksa;
   vki_sigaction_fromK_t ksa_old;

   /* Remember old SKSS and calculate new one. */
   skss_old = skss;
   calculate_SKSS_from_SCSS ( &skss );

   /* Compare the new SKSS entries vs the old ones, and update kernel
      where they differ. */
   for (sig = 1; sig <= VG_(max_signal); sig++) {

      /* Trying to do anything with SIGKILL is pointless; just ignore
         it. */
      if (sig == VKI_SIGKILL || sig == VKI_SIGSTOP)
         continue;

      if (!force_update) {
         if ((skss_old.skss_per_sig[sig].skss_handler
              == skss.skss_per_sig[sig].skss_handler)
             && (skss_old.skss_per_sig[sig].skss_flags
                 == skss.skss_per_sig[sig].skss_flags))
            /* no difference */
            continue;
      }

      ksa.ksa_handler = skss.skss_per_sig[sig].skss_handler;
      ksa.sa_flags    = skss.skss_per_sig[sig].skss_flags;
#     if !defined(VGP_ppc32_linux) && \
         !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGP_mips32_linux) && !defined(VGO_solaris) && !defined(VGO_freebsd)
      ksa.sa_restorer = my_sigreturn;
#     endif
      /* Re above ifdef (also the assertion below), PaulM says:
         The sa_restorer field is not used at all on ppc.  Glibc
         converts the sigaction you give it into a kernel sigaction,
         but it doesn't put anything in the sa_restorer field.
      */

      /* block all signals in handler */
      VG_(sigfillset)( &ksa.sa_mask );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGKILL );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGSTOP );

      if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
         VG_(dmsg)("setting ksig %d to: hdlr %p, flags 0x%lx, "
                   "mask(msb..lsb) 0x%llx 0x%llx\n",
                   sig, ksa.ksa_handler,
                   (UWord)ksa.sa_flags,
                   _VKI_NSIG_WORDS > 1 ? (ULong)ksa.sa_mask.sig[1] : 0,
                   (ULong)ksa.sa_mask.sig[0]);

      res = VG_(sigaction)( sig, &ksa, &ksa_old );
      vg_assert(res == 0);

      /* Since we got the old sigaction more or less for free, might
         as well extract the maximum sanity-check value from it. */
      if (!force_update) {
         vg_assert(ksa_old.ksa_handler
                   == skss_old.skss_per_sig[sig].skss_handler);
#        if defined(VGO_solaris)
         if (ksa_old.ksa_handler == VKI_SIG_DFL
             || ksa_old.ksa_handler == VKI_SIG_IGN) {
            /* The Solaris kernel ignores signal flags (except SA_NOCLDWAIT
               and SA_NOCLDSTOP) and a signal mask if a handler is set to
               SIG_DFL or SIG_IGN. */
            skss_old.skss_per_sig[sig].skss_flags
               &= (VKI_SA_NOCLDWAIT | VKI_SA_NOCLDSTOP);
            vg_assert(VG_(isemptysigset)( &ksa_old.sa_mask ));
            VG_(sigfillset)( &ksa_old.sa_mask );
         }
#        endif
         vg_assert(ksa_old.sa_flags
                   == skss_old.skss_per_sig[sig].skss_flags);
#        if !defined(VGP_ppc32_linux) && \
            !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
            !defined(VGP_mips32_linux) && !defined(VGP_mips64_linux) && \
            !defined(VGP_nanomips_linux) && !defined(VGO_solaris) && \
            !defined(VGO_freebsd)
         vg_assert(ksa_old.sa_restorer == my_sigreturn);
#        endif
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGKILL );
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGSTOP );
         vg_assert(VG_(isfullsigset)( &ksa_old.sa_mask ));
      }
   }
}

/* ---------------------------------------------------------------------
   Update/query SCSS in accordance with client requests.
   ------------------------------------------------------------------ */

/* Logic for this alt-stack stuff copied directly from do_sigaltstack
   in kernel/signal.[ch] */
/* True if we are on the alternate signal stack.  */
static Bool on_sig_stack ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);
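
   /* Note: because the arithmetic is unsigned, this single comparison
      also rejects the case m_SP < ss_sp: the subtraction then wraps
      around to a huge value, which fails the < ss_size test. */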
   return (m_SP - (Addr)tst->altstack.ss_sp < (Addr)tst->altstack.ss_size);
}
static Int sas_ss_flags ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   return (tst->altstack.ss_size == 0
           ? VKI_SS_DISABLE
           : on_sig_stack(tid, m_SP) ? VKI_SS_ONSTACK : 0);
}

SysRes VG_(do_sys_sigaltstack) ( ThreadId tid, vki_stack_t* ss, vki_stack_t* oss )
{
   Addr m_SP;

   vg_assert(VG_(is_valid_tid)(tid));
   m_SP = VG_(get_SP)(tid);

   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaltstack: tid %u, "
                "ss %p{%p,sz=%llu,flags=0x%llx}, oss %p (current SP %p)\n",
                tid, (void*)ss,
                ss ? ss->ss_sp : 0,
                (ULong)(ss ? ss->ss_size : 0),
                (ULong)(ss ? ss->ss_flags : 0),
                (void*)oss, (void*)m_SP);

   if (oss != NULL) {
      oss->ss_sp    = VG_(threads)[tid].altstack.ss_sp;
      oss->ss_size  = VG_(threads)[tid].altstack.ss_size;
      oss->ss_flags = VG_(threads)[tid].altstack.ss_flags
                      | sas_ss_flags(tid, m_SP);
   }

   if (ss != NULL) {
      if (on_sig_stack(tid, VG_(get_SP)(tid))) {
         return VG_(mk_SysRes_Error)( VKI_EPERM );
      }
      if (ss->ss_flags != VKI_SS_DISABLE
          && ss->ss_flags != VKI_SS_ONSTACK
          && ss->ss_flags != 0) {
         return VG_(mk_SysRes_Error)( VKI_EINVAL );
      }
      if (ss->ss_flags == VKI_SS_DISABLE) {
         VG_(threads)[tid].altstack.ss_flags = VKI_SS_DISABLE;
      } else {
         if (ss->ss_size < VKI_MINSIGSTKSZ) {
            return VG_(mk_SysRes_Error)( VKI_ENOMEM );
         }

         VG_(threads)[tid].altstack.ss_sp    = ss->ss_sp;
         VG_(threads)[tid].altstack.ss_size  = ss->ss_size;
         VG_(threads)[tid].altstack.ss_flags = 0;
      }
   }
   return VG_(mk_SysRes_Success)( 0 );
}

SysRes VG_(do_sys_sigaction) ( Int signo,
                               const vki_sigaction_toK_t* new_act,
                               vki_sigaction_fromK_t* old_act )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaction: sigNo %d, "
                "new %#lx, old %#lx, new flags 0x%llx\n",
                signo, (UWord)new_act, (UWord)old_act,
                (ULong)(new_act ? new_act->sa_flags : 0));

   /* Rule out various error conditions.  The aim is to ensure that
      when the call is passed to the kernel it will definitely
      succeed. */

   /* Reject out-of-range signal numbers. */
   if (signo < 1 || signo > VG_(max_signal)) goto bad_signo;

   /* don't let them use our signals */
   if ( (signo > VG_SIGVGRTUSERMAX)
        && new_act
        && !(new_act->ksa_handler == VKI_SIG_DFL
             || new_act->ksa_handler == VKI_SIG_IGN) )
      goto bad_signo_reserved;

   /* Reject attempts to set a handler (or set ignore) for SIGKILL. */
   if ( (signo == VKI_SIGKILL || signo == VKI_SIGSTOP)
        && new_act
        && new_act->ksa_handler != VKI_SIG_DFL)
      goto bad_sigkill_or_sigstop;

   /* If the client supplied non-NULL old_act, copy the relevant SCSS
      entry into it. */
   if (old_act) {
      old_act->ksa_handler = scss.scss_per_sig[signo].scss_handler;
      old_act->sa_flags    = scss.scss_per_sig[signo].scss_flags;
      old_act->sa_mask     = scss.scss_per_sig[signo].scss_mask;
#     if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
        !defined(VGO_solaris)
      old_act->sa_restorer = scss.scss_per_sig[signo].scss_restorer;
#     endif
   }

   /* And now copy new SCSS entry from new_act. */
   if (new_act) {
      scss.scss_per_sig[signo].scss_handler = new_act->ksa_handler;
      scss.scss_per_sig[signo].scss_flags   = new_act->sa_flags;
      scss.scss_per_sig[signo].scss_mask    = new_act->sa_mask;

      scss.scss_per_sig[signo].scss_restorer = NULL;
#     if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
        !defined(VGO_solaris)
      scss.scss_per_sig[signo].scss_restorer = new_act->sa_restorer;
#     endif

      scss.scss_per_sig[signo].scss_sa_tramp = NULL;
#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
      scss.scss_per_sig[signo].scss_sa_tramp = new_act->sa_tramp;
#     endif

      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGKILL);
      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGSTOP);
   }

   /* All happy bunnies ... */
   if (new_act) {
      handle_SCSS_change( False /* lazy update */ );
   }
   return VG_(mk_SysRes_Success)( 0 );

  bad_signo:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: bad signal number %d in sigaction()\n", signo);
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_signo_reserved:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is used internally by Valgrind\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_sigkill_or_sigstop:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is uncatchable\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );
}

static
void do_sigprocmask_bitops ( Int vki_how,
                             vki_sigset_t* orig_set,
                             vki_sigset_t* modifier )
{
   switch (vki_how) {
      case VKI_SIG_BLOCK:
         VG_(sigaddset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_UNBLOCK:
         VG_(sigdelset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_SETMASK:
         *orig_set = *modifier;
         break;
      default:
         VG_(core_panic)("do_sigprocmask_bitops");
         break;
   }
}

static
HChar* format_sigset ( const vki_sigset_t* set )
{
   static HChar buf[_VKI_NSIG_WORDS * 16 + 1];
   int w;

   VG_(strcpy)(buf, "");

   for (w = _VKI_NSIG_WORDS - 1; w >= 0; w--)
   {
#     if _VKI_NSIG_BPW == 32
      VG_(sprintf)(buf + VG_(strlen)(buf), "%08llx",
                   set ? (ULong)set->sig[w] : 0);
#     elif _VKI_NSIG_BPW == 64
      VG_(sprintf)(buf + VG_(strlen)(buf), "%16llx",
                   set ? (ULong)set->sig[w] : 0);
#     else
#       error "Unsupported value for _VKI_NSIG_BPW"
#     endif
   }

   return buf;
}

/*
   This updates the thread's signal mask.  There's no such thing as a
   process-wide signal mask.

   Note that the thread signal masks are an implicit part of SCSS,
   which is why this routine is allowed to mess with them.
*/
static
void do_setmask ( ThreadId tid,
                  Int how,
                  vki_sigset_t* newset,
                  vki_sigset_t* oldset )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("do_setmask: tid = %u how = %d (%s), newset = %p (%s)\n",
                tid, how,
                how==VKI_SIG_BLOCK ? "SIG_BLOCK" : (
                   how==VKI_SIG_UNBLOCK ? "SIG_UNBLOCK" : (
                      how==VKI_SIG_SETMASK ? "SIG_SETMASK" : "???")),
                newset, newset ? format_sigset(newset) : "NULL" );

   /* Just do this thread. */
   vg_assert(VG_(is_valid_tid)(tid));
   if (oldset) {
      *oldset = VG_(threads)[tid].sig_mask;
      if (VG_(clo_trace_signals))
         VG_(dmsg)("\toldset=%p %s\n", oldset, format_sigset(oldset));
   }
   if (newset) {
      do_sigprocmask_bitops (how, &VG_(threads)[tid].sig_mask, newset );
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGKILL);
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGSTOP);
      VG_(threads)[tid].tmp_sig_mask = VG_(threads)[tid].sig_mask;
   }
}

SysRes VG_(do_sys_sigprocmask) ( ThreadId tid,
                                 Int how,
                                 vki_sigset_t* set,
                                 vki_sigset_t* oldset )
{
   /* Fix for the case when 'set' is NULL.
      In that case the 'how' flag should be ignored,
      because we are only asking the kernel
      to put the current mask into 'oldset'.
      Taken from the Linux man pages (sigprocmask).
      The same is specified for POSIX.
   */
   if (set != NULL) {
      switch(how) {
         case VKI_SIG_BLOCK:
         case VKI_SIG_UNBLOCK:
         case VKI_SIG_SETMASK:
            break;

         default:
            VG_(dmsg)("sigprocmask: unknown 'how' field %d\n", how);
            return VG_(mk_SysRes_Error)( VKI_EINVAL );
      }
   }

   vg_assert(VG_(is_valid_tid)(tid));
   do_setmask(tid, how, set, oldset);
   return VG_(mk_SysRes_Success)( 0 );
}

/* ---------------------------------------------------------------------
   LOW LEVEL STUFF TO DO WITH SIGNALS: IMPLEMENTATION
   ------------------------------------------------------------------ */

/* ---------------------------------------------------------------------
   Handy utilities to block/restore all host signals.
   ------------------------------------------------------------------ */

/* Block all host signals, dumping the old mask in *saved_mask. */
static void block_all_host_signals ( /* OUT */ vki_sigset_t* saved_mask )
{
   Int           ret;
   vki_sigset_t block_procmask;
   VG_(sigfillset)(&block_procmask);
   ret = VG_(sigprocmask)
            (VKI_SIG_SETMASK, &block_procmask, saved_mask);
   vg_assert(ret == 0);
}

/* Restore the blocking mask using the supplied saved one. */
static void restore_all_host_signals ( /* IN */ vki_sigset_t* saved_mask )
{
   Int ret;
   ret = VG_(sigprocmask)(VKI_SIG_SETMASK, saved_mask, NULL);
   vg_assert(ret == 0);
}

void VG_(clear_out_queued_signals)( ThreadId tid, vki_sigset_t* saved_mask )
{
   block_all_host_signals(saved_mask);
   if (VG_(threads)[tid].sig_queue != NULL) {
      VG_(free)(VG_(threads)[tid].sig_queue);
      VG_(threads)[tid].sig_queue = NULL;
   }
   restore_all_host_signals(saved_mask);
}

/* ---------------------------------------------------------------------
   The signal simulation proper.  A simplified version of what the
   Linux kernel does.
   ------------------------------------------------------------------ */

/* Set up a stack frame (VgSigContext) for the client's signal
   handler. */
static
void push_signal_frame ( ThreadId tid, const vki_siginfo_t *siginfo,
                         const struct vki_ucontext *uc )
{
   Bool         on_altstack;
   Addr         esp_top_of_frame;
   ThreadState* tst;
   Int          sigNo = siginfo->si_signo;

   vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
   vg_assert(VG_(is_valid_tid)(tid));
   tst = & VG_(threads)[tid];

   if (VG_(clo_trace_signals)) {
      VG_(dmsg)("push_signal_frame (thread %u): signal %d\n", tid, sigNo);
      VG_(get_and_pp_StackTrace)(tid, 10);
   }

   if (/* this signal asked to run on an alt stack */
       (scss.scss_per_sig[sigNo].scss_flags & VKI_SA_ONSTACK )
       && /* there is a defined and enabled alt stack, which we're not
             already using.  Logic from get_sigframe in
             arch/i386/kernel/signal.c. */
          sas_ss_flags(tid, VG_(get_SP)(tid)) == 0
      ) {
      on_altstack = True;
      esp_top_of_frame
         = (Addr)(tst->altstack.ss_sp) + tst->altstack.ss_size;
      if (VG_(clo_trace_signals))
         VG_(dmsg)("delivering signal %d (%s) to thread %u: "
                   "on ALT STACK (%p-%p; %ld bytes)\n",
                   sigNo, VG_(signame)(sigNo), tid, tst->altstack.ss_sp,
                   (UChar *)tst->altstack.ss_sp + tst->altstack.ss_size,
                   (Word)tst->altstack.ss_size );
   } else {
      on_altstack = False;
      esp_top_of_frame = VG_(get_SP)(tid) - VG_STACK_REDZONE_SZB;
   }

   /* Signal delivery to tools */
   VG_TRACK( pre_deliver_signal, tid, sigNo, on_altstack );

   vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_IGN);
   vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_DFL);

   /* This may fail if the client stack is busted; if that happens,
      the whole process will exit rather than simply calling the
      signal handler. */
   VG_(sigframe_create) (tid, on_altstack, esp_top_of_frame, siginfo, uc,
                         scss.scss_per_sig[sigNo].scss_handler,
                         scss.scss_per_sig[sigNo].scss_flags,
                         &tst->sig_mask,
                         scss.scss_per_sig[sigNo].scss_restorer);
}

const HChar *VG_(signame)(Int sigNo)
{
   static HChar buf[20];   // large enough

   switch(sigNo) {
      case VKI_SIGHUP:    return "SIGHUP";
      case VKI_SIGINT:    return "SIGINT";
      case VKI_SIGQUIT:   return "SIGQUIT";
      case VKI_SIGILL:    return "SIGILL";
      case VKI_SIGTRAP:   return "SIGTRAP";
      case VKI_SIGABRT:   return "SIGABRT";
      case VKI_SIGBUS:    return "SIGBUS";
      case VKI_SIGFPE:    return "SIGFPE";
      case VKI_SIGKILL:   return "SIGKILL";
      case VKI_SIGUSR1:   return "SIGUSR1";
      case VKI_SIGUSR2:   return "SIGUSR2";
      case VKI_SIGSEGV:   return "SIGSEGV";
      case VKI_SIGSYS:    return "SIGSYS";
      case VKI_SIGPIPE:   return "SIGPIPE";
      case VKI_SIGALRM:   return "SIGALRM";
      case VKI_SIGTERM:   return "SIGTERM";
#     if defined(VKI_SIGSTKFLT)
      case VKI_SIGSTKFLT: return "SIGSTKFLT";
#     endif
      case VKI_SIGCHLD:   return "SIGCHLD";
      case VKI_SIGCONT:   return "SIGCONT";
      case VKI_SIGSTOP:   return "SIGSTOP";
      case VKI_SIGTSTP:   return "SIGTSTP";
      case VKI_SIGTTIN:   return "SIGTTIN";
      case VKI_SIGTTOU:   return "SIGTTOU";
      case VKI_SIGURG:    return "SIGURG";
      case VKI_SIGXCPU:   return "SIGXCPU";
      case VKI_SIGXFSZ:   return "SIGXFSZ";
      case VKI_SIGVTALRM: return "SIGVTALRM";
      case VKI_SIGPROF:   return "SIGPROF";
      case VKI_SIGWINCH:  return "SIGWINCH";
      case VKI_SIGIO:     return "SIGIO";
#     if defined(VKI_SIGPWR)
      case VKI_SIGPWR:    return "SIGPWR";
#     endif
#     if defined(VKI_SIGUNUSED) && (VKI_SIGUNUSED != VKI_SIGSYS)
      case VKI_SIGUNUSED: return "SIGUNUSED";
#     endif
#     if defined(VKI_SIGINFO)
      case VKI_SIGINFO:   return "SIGINFO";
#     endif

      /* Solaris-specific signals. */
#     if defined(VKI_SIGEMT)
      case VKI_SIGEMT:    return "SIGEMT";
#     endif
#     if defined(VKI_SIGWAITING)
      case VKI_SIGWAITING: return "SIGWAITING";
#     endif
#     if defined(VKI_SIGLWP)
      case VKI_SIGLWP:    return "SIGLWP";
#     endif
#     if defined(VKI_SIGFREEZE)
      case VKI_SIGFREEZE: return "SIGFREEZE";
#     endif
#     if defined(VKI_SIGTHAW)
      case VKI_SIGTHAW:   return "SIGTHAW";
#     endif
#     if defined(VKI_SIGCANCEL)
      case VKI_SIGCANCEL: return "SIGCANCEL";
#     endif
#     if defined(VKI_SIGLOST)
      case VKI_SIGLOST:   return "SIGLOST";
#     endif
#     if defined(VKI_SIGXRES)
      case VKI_SIGXRES:   return "SIGXRES";
#     endif
#     if defined(VKI_SIGJVM1)
      case VKI_SIGJVM1:   return "SIGJVM1";
#     endif
#     if defined(VKI_SIGJVM2)
      case VKI_SIGJVM2:   return "SIGJVM2";
#     endif

#     if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
      case VKI_SIGRTMIN ... VKI_SIGRTMAX:
         VG_(sprintf)(buf, "SIGRT%d", sigNo-VKI_SIGRTMIN);
         return buf;
#     endif

      default:
         VG_(sprintf)(buf, "SIG%d", sigNo);
         return buf;
   }
}
1645 /* Hit ourselves with a signal using the default handler */
1646 void VG_(kill_self)(Int sigNo)
1648 Int r;
1649 vki_sigset_t mask, origmask;
1650 vki_sigaction_toK_t sa, origsa2;
1651 vki_sigaction_fromK_t origsa;
1653 sa.ksa_handler = VKI_SIG_DFL;
1654 sa.sa_flags = 0;
1655 # if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
1656 !defined(VGO_solaris)
1657 sa.sa_restorer = 0;
1658 # endif
1659 VG_(sigemptyset)(&sa.sa_mask);
1661 VG_(sigaction)(sigNo, &sa, &origsa);
1663 VG_(sigemptyset)(&mask);
1664 VG_(sigaddset)(&mask, sigNo);
1665 VG_(sigprocmask)(VKI_SIG_UNBLOCK, &mask, &origmask);
1667 r = VG_(kill)(VG_(getpid)(), sigNo);
1668 # if !defined(VGO_darwin)
1669 /* This sometimes fails with EPERM on Darwin. I don't know why. */
1670 vg_assert(r == 0);
1671 # endif
1673 VG_(convert_sigaction_fromK_to_toK)( &origsa, &origsa2 );
1674 VG_(sigaction)(sigNo, &origsa2, NULL);
1675 VG_(sigprocmask)(VKI_SIG_SETMASK, &origmask, NULL);
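/* The function above follows the standard "raise with default action"
   recipe.  A minimal plain-POSIX sketch of the same pattern, for
   illustration only (Valgrind itself must use its VG_(...) wrappers,
   as above): */
#if 0
   struct sigaction dfl = { .sa_handler = SIG_DFL }, saved;
   sigset_t unblock, savedmask;
   sigaction(signo, &dfl, &saved);             /* force default action   */
   sigemptyset(&unblock);
   sigaddset(&unblock, signo);
   sigprocmask(SIG_UNBLOCK, &unblock, &savedmask);
   kill(getpid(), signo);                      /* signal delivered here  */
   sigaction(signo, &saved, NULL);             /* restore previous state */
   sigprocmask(SIG_SETMASK, &savedmask, NULL);
#endif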
1678 // The si_code describes where the signal came from. Some come from the
1679 // kernel, eg.: seg faults, illegal opcodes. Some come from the user, eg.:
1680 // from kill() (SI_USER), or timer_settime() (SI_TIMER), or an async I/O
1681 // request (SI_ASYNCIO). There's lots of implementation-defined leeway in
1682 // POSIX, but the user vs. kernel distinction is what we want here. We also
1683 // pass in some other details that can help when si_code is unreliable.
1684 static Bool is_signal_from_kernel(ThreadId tid, int signum, int si_code)
1686 # if defined(VGO_linux) || defined(VGO_solaris)
1687 // On Linux, SI_USER is zero, negative values are from the user, positive
1688 // values are from the kernel. There are SI_FROMUSER and SI_FROMKERNEL
1689 // macros but we don't use them here because other platforms don't have
1690 // them.
1691 return ( si_code > VKI_SI_USER ? True : False );
1692 #elif defined(VGO_freebsd)
1694 // The comment below seems a bit out of date. From the siginfo manpage:
1696 // Full support for POSIX signal information first appeared in FreeBSD 7.0.
1697 // The codes SI_USER and SI_KERNEL can be generated as of FreeBSD 8.1. The
1698 // code SI_LWP can be generated as of FreeBSD 9.0.
1699 if (si_code == VKI_SI_USER || si_code == VKI_SI_LWP)
1700 return False;
1702 // It looks like there's no reliable way to say where the signal came from
1703 if (VG_(threads)[tid].status == VgTs_WaitSys) {
1704 return False;
1705 } else
1706 return True;
1707 # elif defined(VGO_darwin)
1708 // On Darwin 9.6.0, the si_code is completely unreliable. It should be the
1709 // case that 0 means "user", and >0 means "kernel". But:
1710 // - For SIGSEGV, it seems quite reliable.
1711 // - For SIGBUS, it's always 2.
1712 // - For SIGFPE, it's often 0, even for kernel ones (eg.
1713 // div-by-integer-zero always gives zero).
1714 // - For SIGILL, it's unclear.
1715 // - For SIGTRAP, it's always 1.
1716 // You can see the "NOTIMP" (not implemented) status of a number of the
1717 // sub-cases in sys/signal.h. Hopefully future versions of Darwin will
1718 // get this right.
1720 // If we're blocked waiting on a syscall, it must be a user signal, because
1721 // the kernel won't generate sync signals within syscalls.
1722 if (VG_(threads)[tid].status == VgTs_WaitSys) {
1723 return False;
1725 // If it's a SIGSEGV, use the proper condition, since it's fairly reliable.
1726 } else if (SIGSEGV == signum) {
1727 return ( si_code > 0 ? True : False );
1729 // If it's anything else, assume it's kernel-generated. Reason being that
1730 // kernel-generated sync signals are more common, and it's probable that
1731 // misdiagnosing a user signal as a kernel signal is better than the
1732 // opposite.
1733 } else {
1734 return True;
1736 # else
1737 # error Unknown OS
1738 # endif
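/* Concrete Linux examples of the si_code convention used above (values
   as defined in the standard Linux headers, listed for illustration):
      kill()           -> SI_USER     ==  0   (user)
      tgkill()         -> SI_TKILL    == -6   (user)
      timer_settime()  -> SI_TIMER    == -2   (user)
      unmapped access  -> SEGV_MAPERR ==  1   (kernel)
      bad permissions  -> SEGV_ACCERR ==  2   (kernel) */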
1742 Perform the default action of a signal. If the signal is fatal, it
1743 terminates all other threads, but it doesn't actually kill
1744 the process or the calling thread.
1746 If we're not being quiet, then print out some more detail about
1747 fatal signals (esp. core dumping signals).
1749 static void default_action(const vki_siginfo_t *info, ThreadId tid)
1751 Int sigNo = info->si_signo;
1752 Bool terminate = False; /* kills process */
1753 Bool core = False; /* kills process w/ core */
1754 struct vki_rlimit corelim;
1755 Bool could_core;
1756 ThreadState* tst = VG_(get_ThreadState)(tid);
1758 vg_assert(VG_(is_running_thread)(tid));
1760 switch(sigNo) {
1761 case VKI_SIGQUIT: /* core */
1762 case VKI_SIGILL: /* core */
1763 case VKI_SIGABRT: /* core */
1764 case VKI_SIGFPE: /* core */
1765 case VKI_SIGSEGV: /* core */
1766 case VKI_SIGBUS: /* core */
1767 case VKI_SIGTRAP: /* core */
1768 case VKI_SIGSYS: /* core */
1769 case VKI_SIGXCPU: /* core */
1770 case VKI_SIGXFSZ: /* core */
1772 /* Solaris-specific signals. */
1773 # if defined(VKI_SIGEMT)
1774 case VKI_SIGEMT: /* core */
1775 # endif
1777 terminate = True;
1778 core = True;
1779 break;
1781 case VKI_SIGHUP: /* term */
1782 case VKI_SIGINT: /* term */
1783 case VKI_SIGKILL: /* term - we won't see this */
1784 case VKI_SIGPIPE: /* term */
1785 case VKI_SIGALRM: /* term */
1786 case VKI_SIGTERM: /* term */
1787 case VKI_SIGUSR1: /* term */
1788 case VKI_SIGUSR2: /* term */
1789 case VKI_SIGIO: /* term */
1790 # if defined(VKI_SIGPWR)
1791 case VKI_SIGPWR: /* term */
1792 # endif
1793 case VKI_SIGPROF: /* term */
1794 case VKI_SIGVTALRM: /* term */
1795 # if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
1796 case VKI_SIGRTMIN ... VKI_SIGRTMAX: /* term */
1797 # endif
1799 /* Solaris-specific signals. */
1800 # if defined(VKI_SIGLOST)
1801 case VKI_SIGLOST: /* term */
1802 # endif
1804 terminate = True;
1805 break;
1808 vg_assert(!core || (core && terminate));
1810 if (VG_(clo_trace_signals))
1811 VG_(dmsg)("delivering %d (code %d) to default handler; action: %s%s\n",
1812 sigNo, info->si_code, terminate ? "terminate" : "ignore",
1813 core ? "+core" : "");
1815 if (!terminate)
1816 return; /* nothing to do */
1818 #if defined(VGO_linux)
1819 if (terminate && (tst->ptrace & VKI_PT_PTRACED)
1820 && (sigNo != VKI_SIGKILL)) {
1821 VG_(kill)(VG_(getpid)(), VKI_SIGSTOP);
1822 return;
1824 #endif
1826 could_core = core;
1828 if (core) {
1829 /* If they set the core-size limit to zero, don't generate a
1830 core file */
1832 VG_(getrlimit)(VKI_RLIMIT_CORE, &corelim);
1834 if (corelim.rlim_cur == 0)
1835 core = False;
1838 if ( VG_(clo_verbosity) >= 1
1839 || (could_core && is_signal_from_kernel(tid, sigNo, info->si_code))
1840 || VG_(clo_xml) ) {
1841 if (VG_(clo_xml)) {
1842 VG_(printf_xml)("<fatal_signal>\n");
1843 VG_(printf_xml)(" <tid>%u</tid>\n", tid);
1844 if (tst->thread_name) {
1845 VG_(printf_xml)(" <threadname>%s</threadname>\n",
1846 tst->thread_name);
1848 VG_(printf_xml)(" <signo>%d</signo>\n", sigNo);
1849 VG_(printf_xml)(" <signame>%s</signame>\n", VG_(signame)(sigNo));
1850 VG_(printf_xml)(" <sicode>%d</sicode>\n", info->si_code);
1851 } else {
1852 VG_(umsg)(
1853 "\n"
1854 "Process terminating with default action of signal %d (%s)%s\n",
1855 sigNo, VG_(signame)(sigNo), core ? ": dumping core" : "");
1858 /* Be helpful - decode some more details about this fault */
1859 if (is_signal_from_kernel(tid, sigNo, info->si_code)) {
1860 const HChar *event = NULL;
1861 Bool haveaddr = True;
1863 switch(sigNo) {
1864 case VKI_SIGSEGV:
1865 switch(info->si_code) {
1866 case VKI_SEGV_MAPERR: event = "Access not within mapped region";
1867 break;
1868 case VKI_SEGV_ACCERR: event = "Bad permissions for mapped region";
1869 break;
1870 case VKI_SEGV_MADE_UP_GPF:
1871 /* General Protection Fault: The CPU/kernel
1872 isn't telling us anything useful, but this
1873 is commonly the result of exceeding a
1874 segment limit. */
1875 event = "General Protection Fault";
1876 haveaddr = False;
1877 break;
1879 #if 0
1881 HChar buf[50]; // large enough
1882 VG_(am_show_nsegments)(0,"post segfault");
1883 VG_(sprintf)(buf, "/bin/cat /proc/%d/maps", VG_(getpid)());
1884 VG_(system)(buf);
1886 #endif
1887 break;
1889 case VKI_SIGILL:
1890 switch(info->si_code) {
1891 case VKI_ILL_ILLOPC: event = "Illegal opcode"; break;
1892 case VKI_ILL_ILLOPN: event = "Illegal operand"; break;
1893 case VKI_ILL_ILLADR: event = "Illegal addressing mode"; break;
1894 case VKI_ILL_ILLTRP: event = "Illegal trap"; break;
1895 case VKI_ILL_PRVOPC: event = "Privileged opcode"; break;
1896 case VKI_ILL_PRVREG: event = "Privileged register"; break;
1897 case VKI_ILL_COPROC: event = "Coprocessor error"; break;
1898 case VKI_ILL_BADSTK: event = "Internal stack error"; break;
1900 break;
1902 case VKI_SIGFPE:
1903 switch (info->si_code) {
1904 case VKI_FPE_INTDIV: event = "Integer divide by zero"; break;
1905 case VKI_FPE_INTOVF: event = "Integer overflow"; break;
1906 case VKI_FPE_FLTDIV: event = "FP divide by zero"; break;
1907 case VKI_FPE_FLTOVF: event = "FP overflow"; break;
1908 case VKI_FPE_FLTUND: event = "FP underflow"; break;
1909 case VKI_FPE_FLTRES: event = "FP inexact"; break;
1910 case VKI_FPE_FLTINV: event = "FP invalid operation"; break;
1911 case VKI_FPE_FLTSUB: event = "FP subscript out of range"; break;
1913 /* Solaris-specific codes. */
1914 # if defined(VKI_FPE_FLTDEN)
1915 case VKI_FPE_FLTDEN: event = "FP denormalize"; break;
1916 # endif
1918 break;
1920 case VKI_SIGBUS:
1921 switch (info->si_code) {
1922 case VKI_BUS_ADRALN: event = "Invalid address alignment"; break;
1923 case VKI_BUS_ADRERR: event = "Non-existent physical address"; break;
1924 case VKI_BUS_OBJERR: event = "Hardware error"; break;
1925 #if defined(VGO_freebsd)
1926 // This si_code can be generated for both SIGBUS and SIGSEGV on FreeBSD
1927 // This is undocumented
1928 case VKI_SEGV_PAGE_FAULT:
1929 // It should get replaced with this non-standard value, which is documented.
1930 case VKI_BUS_OOMERR:
1931 event = "Access not within mapped region";
1932 #endif
1934 break;
1935 } /* switch (sigNo) */
1937 if (VG_(clo_xml)) {
1938 if (event != NULL)
1939 VG_(printf_xml)(" <event>%s</event>\n", event);
1940 if (haveaddr)
1941 VG_(printf_xml)(" <siaddr>%p</siaddr>\n",
1942 info->VKI_SIGINFO_si_addr);
1943 } else {
1944 if (event != NULL) {
1945 if (haveaddr)
1946 VG_(umsg)(" %s at address %p\n",
1947 event, info->VKI_SIGINFO_si_addr);
1948 else
1949 VG_(umsg)(" %s\n", event);
1953 /* Print a stack trace. Be cautious if the thread's SP is in an
1954 obviously stupid place (not mapped readable) that would
1955 likely cause a segfault. */
1956 if (VG_(is_valid_tid)(tid)) {
1957 Word first_ip_delta = 0;
1958 #if defined(VGO_linux) || defined(VGO_solaris)
1959 /* Make sure that the address stored in the stack pointer is
1960 located in a mapped page. That is not necessarily so. E.g.
1961 consider the scenario where the stack pointer was decreased
1962 and now has a value that is just below the end of a page that has
1963 not been mapped yet. In that case VG_(am_is_valid_for_client)
1964 will consider the address of the stack pointer invalid and that
1965 would cause a back-trace of depth 1 to be printed, instead of a
1966 full back-trace. */
1967 if (tid == 1) { // main thread
1968 Addr esp = VG_(get_SP)(tid);
1969 Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
1970 if (VG_(am_addr_is_in_extensible_client_stack)(base)
1971 && VG_(extend_stack)(tid, base)) {
1972 if (VG_(clo_trace_signals))
1973 VG_(dmsg)(" -> extended stack base to %#lx\n",
1974 VG_PGROUNDDN(esp));
1977 #endif
1978 #if defined(VGA_s390x)
1979 if (sigNo == VKI_SIGILL) {
1980 /* The guest instruction address has been adjusted earlier to
1981 point to the insn following the one that could not be decoded.
1982 When printing the back-trace here we need to undo that
1983 adjustment so the first line in the back-trace reports the
1984 correct address. */
1985 Addr addr = (Addr)info->VKI_SIGINFO_si_addr;
1986 UChar byte = ((UChar *)addr)[0];
1987 Int insn_length = ((((byte >> 6) + 1) >> 1) + 1) << 1;
1989 first_ip_delta = -insn_length;
1991 #endif
1992 ExeContext* ec = VG_(am_is_valid_for_client)
1993 (VG_(get_SP)(tid), sizeof(Addr), VKI_PROT_READ)
1994 ? VG_(record_ExeContext)( tid, first_ip_delta )
1995 : VG_(record_depth_1_ExeContext)( tid,
1996 first_ip_delta );
1997 vg_assert(ec);
1998 VG_(pp_ExeContext)( ec );
2000 if (sigNo == VKI_SIGSEGV
2001 && is_signal_from_kernel(tid, sigNo, info->si_code)
2002 && info->si_code == VKI_SEGV_MAPERR) {
2003 VG_(umsg)(" If you believe this happened as a result of a stack\n" );
2004 VG_(umsg)(" overflow in your program's main thread (unlikely but\n");
2005 VG_(umsg)(" possible), you can try to increase the size of the\n" );
2006 VG_(umsg)(" main thread stack using the --main-stacksize= flag.\n" );
2007 // FIXME: assumes main ThreadId == 1
2008 if (VG_(is_valid_tid)(1)) {
2009 VG_(umsg)(
2010 " The main thread stack size used in this run was %lu.\n",
2011 VG_(threads)[1].client_stack_szB);
2014 if (VG_(clo_xml)) {
2015 /* postamble */
2016 VG_(printf_xml)("</fatal_signal>\n");
2017 VG_(printf_xml)("\n");
2021 if (VG_(clo_vgdb) != Vg_VgdbNo
2022 && VG_(clo_vgdb_error) <= VG_(get_n_errs_shown)() + 1) {
2023 /* Note: we add + 1 to n_errs_shown as the fatal signal was not
2024 reported through error msg, and so was not counted. */
2025 VG_(gdbserver_report_fatal_signal) (info, tid);
2028 if (core) {
2029 static const struct vki_rlimit zero = { 0, 0 };
2031 VG_(make_coredump)(tid, info, corelim.rlim_cur);
2033 /* Make sure we don't get a confusing kernel-generated
2034 coredump when we finally exit */
2035 VG_(setrlimit)(VKI_RLIMIT_CORE, &zero);
2038 // what's this for?
2039 //VG_(threads)[VG_(master_tid)].os_state.fatalsig = sigNo;
2041 /* everyone but tid dies */
2042 VG_(nuke_all_threads_except)(tid, VgSrc_FatalSig);
2043 VG_(reap_threads)(tid);
2044 /* stash fatal signal in this thread */
2045 VG_(threads)[tid].exitreason = VgSrc_FatalSig;
2046 VG_(threads)[tid].os_state.fatalsig = sigNo;
2050 This does the business of delivering a signal to a thread. It may
2051 be called from either a real signal handler, or from normal code to
2052 cause the thread to enter the signal handler.
2054 This updates the thread state, but it does not set it to be
2055 Runnable.
2057 static void deliver_signal ( ThreadId tid, const vki_siginfo_t *info,
2058 const struct vki_ucontext *uc )
2060 Int sigNo = info->si_signo;
2061 SCSS_Per_Signal *handler = &scss.scss_per_sig[sigNo];
2062 void *handler_fn;
2063 ThreadState *tst = VG_(get_ThreadState)(tid);
2065 #if defined(VGO_linux)
2066 /* If this signal is SIGCHLD and it came from a process which valgrind
2067 created for some internal use, then it should not be delivered to
2068 the client. */
2069 if (sigNo == VKI_SIGCHLD && ht_sigchld_ignore != NULL) {
2070 Int pid = info->_sifields._sigchld._pid;
2071 ht_ignore_node *n = VG_(HT_lookup)(ht_sigchld_ignore, pid);
2073 if (n != NULL) {
2074 /* If the child has terminated, remove its PID from the
2075 ignore list. */
2076 if (info->si_code == VKI_CLD_EXITED
2077 || info->si_code == VKI_CLD_KILLED
2078 || info->si_code == VKI_CLD_DUMPED) {
2079 VG_(HT_remove)(ht_sigchld_ignore, pid);
2080 VG_(free)(n);
2082 return;
2085 #endif
2087 if (VG_(clo_trace_signals))
2088 VG_(dmsg)("delivering signal %d (%s):%d to thread %u\n",
2089 sigNo, VG_(signame)(sigNo), info->si_code, tid );
2091 if (sigNo == VG_SIGVGKILL) {
2092 /* If this is a SIGVGKILL, we're expecting it to interrupt any
2093 blocked syscall. It doesn't matter whether the VCPU state is
2094 set to restart or not, because we don't expect it will
2095 execute any more client instructions. */
2096 vg_assert(VG_(is_exiting)(tid));
2097 return;
2100 /* If the client specifies SIG_IGN, treat it as SIG_DFL.
2102 If deliver_signal() is being called on a thread, we want
2103 the signal to get through no matter what; if they're ignoring
2104 it, then we do this override (this is so we can send it SIGSEGV,
2105 etc). */
2106 handler_fn = handler->scss_handler;
2107 if (handler_fn == VKI_SIG_IGN)
2108 handler_fn = VKI_SIG_DFL;
2110 vg_assert(handler_fn != VKI_SIG_IGN);
2112 if (handler_fn == VKI_SIG_DFL) {
2113 default_action(info, tid);
2114 } else {
2115 /* Create a signal delivery frame, and set the client's %ESP and
2116 %EIP so that when execution continues, we will enter the
2117 signal handler with the frame on top of the client's stack,
2118 as it expects.
2120 Signal delivery can fail if the client stack is too small or
2121 missing, and we can't push the frame. If that happens,
2122 push_signal_frame will cause the whole process to exit when
2123 we next hit the scheduler.
2125 vg_assert(VG_(is_valid_tid)(tid));
2127 push_signal_frame ( tid, info, uc );
2129 if (handler->scss_flags & VKI_SA_ONESHOT) {
2130 /* Do the ONESHOT thing. */
2131 handler->scss_handler = VKI_SIG_DFL;
2133 handle_SCSS_change( False /* lazy update */ );
2136 /* At this point:
2137 tst->sig_mask is the current signal mask
2138 tst->tmp_sig_mask is the same as sig_mask, unless we're in sigsuspend
2139 handler->scss_mask is the mask set by the handler
2141 Handler gets a mask of tmp_sig_mask|handler_mask|signo
2143 tst->sig_mask = tst->tmp_sig_mask;
2144 if (!(handler->scss_flags & VKI_SA_NOMASK)) {
2145 VG_(sigaddset_from_set)(&tst->sig_mask, &handler->scss_mask);
2146 VG_(sigaddset)(&tst->sig_mask, sigNo);
2147 tst->tmp_sig_mask = tst->sig_mask;
2151 /* Thread state is ready to go - just add Runnable */
2154 static void resume_scheduler(ThreadId tid)
2156 ThreadState *tst = VG_(get_ThreadState)(tid);
2158 vg_assert(tst->os_state.lwpid == VG_(gettid)());
2160 if (tst->sched_jmpbuf_valid) {
2161 /* Can't continue; must longjmp back to the scheduler and thus
2162 enter the sighandler immediately. */
2163 VG_MINIMAL_LONGJMP(tst->sched_jmpbuf);
2167 static void synth_fault_common(ThreadId tid, Addr addr, Int si_code)
2169 vki_siginfo_t info;
2171 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2173 VG_(memset)(&info, 0, sizeof(info));
2174 info.si_signo = VKI_SIGSEGV;
2175 info.si_code = si_code;
2176 info.VKI_SIGINFO_si_addr = (void*)addr;
2178 /* Even if gdbserver indicates to ignore the signal, we must deliver it.
2179 So ignore the return value of VG_(gdbserver_report_signal). */
2180 (void) VG_(gdbserver_report_signal) (&info, tid);
2182 /* If they're trying to block the signal, force it to be delivered */
2183 if (VG_(sigismember)(&VG_(threads)[tid].sig_mask, VKI_SIGSEGV))
2184 VG_(set_default_handler)(VKI_SIGSEGV);
2186 deliver_signal(tid, &info, NULL);
2189 // Synthesize a fault where the address is OK, but the page
2190 // permissions are bad.
2191 void VG_(synth_fault_perms)(ThreadId tid, Addr addr)
2193 synth_fault_common(tid, addr, VKI_SEGV_ACCERR);
2196 // Synthesize a fault where there's nothing mapped at the address.
2197 void VG_(synth_fault_mapping)(ThreadId tid, Addr addr)
2199 synth_fault_common(tid, addr, VKI_SEGV_MAPERR);
2202 // Synthesize a misc memory fault.
2203 void VG_(synth_fault)(ThreadId tid)
2205 synth_fault_common(tid, 0, VKI_SEGV_MADE_UP_GPF);
2208 // Synthesise a SIGILL.
2209 void VG_(synth_sigill)(ThreadId tid, Addr addr)
2211 vki_siginfo_t info;
2213 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2215 VG_(memset)(&info, 0, sizeof(info));
2216 info.si_signo = VKI_SIGILL;
2217 info.si_code = VKI_ILL_ILLOPC; /* jrs: no idea what this should be */
2218 info.VKI_SIGINFO_si_addr = (void*)addr;
2220 if (VG_(gdbserver_report_signal) (&info, tid)) {
2221 resume_scheduler(tid);
2222 deliver_signal(tid, &info, NULL);
2224 else
2225 resume_scheduler(tid);
2228 // Synthesise a SIGBUS.
2229 void VG_(synth_sigbus)(ThreadId tid)
2231 vki_siginfo_t info;
2233 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2235 VG_(memset)(&info, 0, sizeof(info));
2236 info.si_signo = VKI_SIGBUS;
2237 /* There are several meanings to SIGBUS (as per POSIX, presumably),
2238 but the most widely understood is "invalid address alignment",
2239 so let's use that. */
2240 info.si_code = VKI_BUS_ADRALN;
2241 /* If we knew the invalid address in question, we could put it
2242 in .si_addr. Oh well. */
2243 /* info.VKI_SIGINFO_si_addr = (void*)addr; */
2245 if (VG_(gdbserver_report_signal) (&info, tid)) {
2246 resume_scheduler(tid);
2247 deliver_signal(tid, &info, NULL);
2249 else
2250 resume_scheduler(tid);
2253 // Synthesise a SIGTRAP.
2254 void VG_(synth_sigtrap)(ThreadId tid)
2256 vki_siginfo_t info;
2257 struct vki_ucontext uc;
2258 # if defined(VGP_x86_darwin)
2259 struct __darwin_mcontext32 mc;
2260 # elif defined(VGP_amd64_darwin)
2261 struct __darwin_mcontext64 mc;
2262 # endif
2264 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2266 VG_(memset)(&info, 0, sizeof(info));
2267 VG_(memset)(&uc, 0, sizeof(uc));
2268 info.si_signo = VKI_SIGTRAP;
2269 info.si_code = VKI_TRAP_BRKPT; /* tjh: only ever called for a brkpt ins */
2271 # if defined(VGP_x86_linux) || defined(VGP_amd64_linux)
2272 uc.uc_mcontext.trapno = 3; /* tjh: this is the x86 trap number
2273 for a breakpoint trap... */
2274 uc.uc_mcontext.err = 0; /* tjh: no error code for x86
2275 breakpoint trap... */
2276 # elif defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
2277 /* the same thing, but using Darwin field/struct names */
2278 VG_(memset)(&mc, 0, sizeof(mc));
2279 uc.uc_mcontext = &mc;
2280 uc.uc_mcontext->__es.__trapno = 3;
2281 uc.uc_mcontext->__es.__err = 0;
2282 # elif defined(VGP_x86_solaris)
2283 uc.uc_mcontext.gregs[VKI_ERR] = 0;
2284 uc.uc_mcontext.gregs[VKI_TRAPNO] = VKI_T_BPTFLT;
2285 # endif
2287 /* fixs390: do we need to do anything here for s390 ? */
2288 if (VG_(gdbserver_report_signal) (&info, tid)) {
2289 resume_scheduler(tid);
2290 deliver_signal(tid, &info, &uc);
2292 else
2293 resume_scheduler(tid);
2296 // Synthesise a SIGFPE.
2297 void VG_(synth_sigfpe)(ThreadId tid, UInt code)
2299 // Only tested on mips32, mips64, s390x and nanomips.
2300 #if !defined(VGA_mips32) && !defined(VGA_mips64) && !defined(VGA_s390x) && !defined(VGA_nanomips)
2301 vg_assert(0);
2302 #else
2303 vki_siginfo_t info;
2304 struct vki_ucontext uc;
2306 vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2308 VG_(memset)(&info, 0, sizeof(info));
2309 VG_(memset)(&uc, 0, sizeof(uc));
2310 info.si_signo = VKI_SIGFPE;
2311 info.si_code = code;
2313 if (VG_(gdbserver_report_signal) (&info, tid)) {
2314 resume_scheduler(tid);
2315 deliver_signal(tid, &info, &uc);
2317 else
2318 resume_scheduler(tid);
2319 #endif
2322 /* Make a signal pending for a thread, for later delivery.
2323 VG_(poll_signals) will arrange for it to be delivered at the right
2324 time.
2326 tid==0 means add it to the process-wide queue, and not send it to a
2327 specific thread.
2329 static
2330 void queue_signal(ThreadId tid, const vki_siginfo_t *si)
2332 ThreadState *tst;
2333 SigQueue *sq;
2334 vki_sigset_t savedmask;
2336 tst = VG_(get_ThreadState)(tid);
2338 /* Protect the signal queue against async deliveries */
2339 block_all_host_signals(&savedmask);
2341 if (tst->sig_queue == NULL) {
2342 tst->sig_queue = VG_(malloc)("signals.qs.1", sizeof(*tst->sig_queue));
2343 VG_(memset)(tst->sig_queue, 0, sizeof(*tst->sig_queue));
2345 sq = tst->sig_queue;
2347 if (VG_(clo_trace_signals))
2348 VG_(dmsg)("Queueing signal %d (idx %d) to thread %u\n",
2349 si->si_signo, sq->next, tid);
2351 /* Add signal to the queue. If the queue gets overrun, then old
2352 queued signals may get lost.
2354 XXX We should also keep a sigset of pending signals, so that at
2355 least a non-siginfo signal gets delivered.
2357 if (sq->sigs[sq->next].si_signo != 0)
2358 VG_(umsg)("Signal %d being dropped from thread %u's queue\n",
2359 sq->sigs[sq->next].si_signo, tid);
2361 sq->sigs[sq->next] = *si;
2362 sq->next = (sq->next+1) % N_QUEUED_SIGNALS;
2364 restore_all_host_signals(&savedmask);
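/* Illustration of the ring-buffer behaviour above (slot count is the
   N_QUEUED_SIGNALS defined earlier in this file; a queue of 4 is used
   here only to keep the example small): with sq->next == 2, four
   undelivered signals occupy slots 2,3,0,1; a fifth arrival overwrites
   (and so drops) the signal in slot 2 and advances sq->next to 3.
   Delivery scans forward from sq->next, so the oldest surviving signal
   is found first. */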
2368 Returns the next queued signal for thread tid which is in "set".
2369 tid==0 means process-wide signal. Set si_signo to 0 when the
2370 signal has been delivered.
2372 Must be called with all signals blocked, to protect against async
2373 deliveries.
2375 static vki_siginfo_t *next_queued(ThreadId tid, const vki_sigset_t *set)
2377 ThreadState *tst = VG_(get_ThreadState)(tid);
2378 SigQueue *sq;
2379 Int idx;
2380 vki_siginfo_t *ret = NULL;
2382 sq = tst->sig_queue;
2383 if (sq == NULL)
2384 goto out;
2386 idx = sq->next;
2387 do {
2388 if (0)
2389 VG_(printf)("idx=%d si_signo=%d inset=%d\n", idx,
2390 sq->sigs[idx].si_signo,
2391 VG_(sigismember)(set, sq->sigs[idx].si_signo));
2393 if (sq->sigs[idx].si_signo != 0
2394 && VG_(sigismember)(set, sq->sigs[idx].si_signo)) {
2395 if (VG_(clo_trace_signals))
2396 VG_(dmsg)("Returning queued signal %d (idx %d) for thread %u\n",
2397 sq->sigs[idx].si_signo, idx, tid);
2398 ret = &sq->sigs[idx];
2399 goto out;
2402 idx = (idx + 1) % N_QUEUED_SIGNALS;
2403 } while(idx != sq->next);
2404 out:
2405 return ret;
2408 static int sanitize_si_code(int si_code)
2410 #if defined(VGO_linux)
2411 /* The Linux kernel uses the top 16 bits of si_code for its own
2412 use and only exports the bottom 16 bits to user space - at least
2413 that is the theory, but it turns out that there are some kernels
2414 around that forget to mask out the top 16 bits so we do it here.
2416 The kernel treats the bottom 16 bits as signed and (when it does
2417 mask them off) sign extends them when exporting to user space so
2418 we do the same thing here. */
2419 return (Short)si_code;
2420 #elif defined(VGO_darwin) || defined(VGO_solaris) || defined(VGO_freebsd)
2421 return si_code;
2422 #else
2423 # error Unknown OS
2424 #endif
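/* Worked example of the cast above (the si_code values are
   hypothetical): a kernel that forgets to mask might report si_code
   0x00060001; (Short)0x00060001 keeps only the bottom 16 bits, giving
   1 (SEGV_MAPERR for a SIGSEGV).  Sign extension matters too:
   (Short)0x0000FFFF becomes -1. */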
2427 #if defined(VGO_solaris)
2428 /* The following function is used to switch Valgrind from a client stack back onto
2429 a Valgrind stack. It is used only when the door_return call was invoked by
2430 the client because this is the only syscall which is executed directly on
2431 the client stack (see syscall-{x86,amd64}-solaris.S). The switch onto the
2432 Valgrind stack has to be made as soon as possible because there is no
2433 guarantee that there is enough space on the client stack to run the
2434 complete signal machinery. Also, Valgrind has to be switched back onto its
2435 stack before a simulated signal frame is created because that will
2436 overwrite the real sigframe built by the kernel. */
2437 static void async_signalhandler_solaris_preprocess(ThreadId tid, Int *signo,
2438 vki_siginfo_t *info,
2439 struct vki_ucontext *uc)
2441 # define RECURSION_BIT 0x1000
2442 Addr sp;
2443 vki_sigframe_t *frame;
2444 ThreadState *tst = VG_(get_ThreadState)(tid);
2445 Int rec_signo;
2447 /* If not doing door_return then return instantly. */
2448 if (!tst->os_state.in_door_return)
2449 return;
2451 /* Check for the recursion:
2452 v ...
2453 | async_signalhandler - executed on the client stack
2454 v async_signalhandler_solaris_preprocess - first call switches the
2455 | stacks and sets the RECURSION_BIT flag
2456 v async_signalhandler - executed on the Valgrind stack
2457 | async_signalhandler_solaris_preprocess - the RECURSION_BIT flag is
2458 v set, clear it and return
2460 if (*signo & RECURSION_BIT) {
2461 *signo &= ~RECURSION_BIT;
2462 return;
2465 rec_signo = *signo | RECURSION_BIT;
2467 # if defined(VGP_x86_solaris)
2468 /* Register %ebx/%rbx points to the top of the original V stack. */
2469 sp = uc->uc_mcontext.gregs[VKI_EBX];
2470 # elif defined(VGP_amd64_solaris)
2471 sp = uc->uc_mcontext.gregs[VKI_REG_RBX];
2472 # else
2473 # error "Unknown platform"
2474 # endif
2476 /* Build a fake signal frame, similarly as in sigframe-solaris.c. */
2477 /* Calculate a new stack pointer. */
2478 sp -= sizeof(vki_sigframe_t);
2479 sp = VG_ROUNDDN(sp, 16) - sizeof(UWord);
2481 /* Fill in the frame. */
2482 frame = (vki_sigframe_t*)sp;
2483 /* Set a bogus return address. */
2484 frame->return_addr = (void*)~0UL;
2485 frame->a1_signo = rec_signo;
2486 /* The first parameter has to be 16-byte aligned, resembling a function
2487 call. */
2489 /* Using
2490 vg_assert(VG_IS_16_ALIGNED(&frame->a1_signo));
2491 seems to get miscompiled on amd64 with GCC 4.7.2. */
2492 Addr signo_addr = (Addr)&frame->a1_signo;
2493 vg_assert(VG_IS_16_ALIGNED(signo_addr));
2495 frame->a2_siginfo = &frame->siginfo;
2496 frame->siginfo = *info;
2497 frame->ucontext = *uc;
2499 # if defined(VGP_x86_solaris)
2500 frame->a3_ucontext = &frame->ucontext;
2502 /* Switch onto the V stack and restart the signal processing. */
2503 __asm__ __volatile__(
2504 "xorl %%ebp, %%ebp\n"
2505 "movl %[sp], %%esp\n"
2506 "jmp async_signalhandler\n"
2508 : [sp] "a" (sp)
2509 : /*"ebp"*/);
2511 # elif defined(VGP_amd64_solaris)
2512 __asm__ __volatile__(
2513 "xorq %%rbp, %%rbp\n"
2514 "movq %[sp], %%rsp\n"
2515 "jmp async_signalhandler\n"
2517 : [sp] "a" (sp), "D" (rec_signo), "S" (&frame->siginfo),
2518 "d" (&frame->ucontext)
2519 : /*"rbp"*/);
2520 # else
2521 # error "Unknown platform"
2522 # endif
2524 /* We should never get here. */
2525 vg_assert(0);
2527 # undef RECURSION_BIT
2529 #endif
2532 Receive an async signal from the kernel.
2534 This should only happen when the thread is blocked in a syscall,
2535 since that's the only time this set of signals is unblocked.
2537 static
2538 void async_signalhandler ( Int sigNo,
2539 vki_siginfo_t *info, struct vki_ucontext *uc )
2541 ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
2542 ThreadState* tst = VG_(get_ThreadState)(tid);
2543 SysRes sres;
2545 vg_assert(tst->status == VgTs_WaitSys);
2547 # if defined(VGO_solaris)
2548 async_signalhandler_solaris_preprocess(tid, &sigNo, info, uc);
2549 # endif
2551 /* The thread isn't currently running, make it so before going on */
2552 VG_(acquire_BigLock)(tid, "async_signalhandler");
2554 info->si_code = sanitize_si_code(info->si_code);
2556 if (VG_(clo_trace_signals))
2557 VG_(dmsg)("async signal handler: signal=%d, vgtid=%d, tid=%u, si_code=%d, "
2558 "exitreason %s\n",
2559 sigNo, VG_(gettid)(), tid, info->si_code,
2560 VG_(name_of_VgSchedReturnCode)(tst->exitreason));
2562 /* See similar logic in VG_(poll_signals). */
2563 if (tst->exitreason != VgSrc_None)
2564 resume_scheduler(tid);
2566 /* Update thread state properly. The signal can only have been
2567 delivered whilst we were in
2568 coregrind/m_syswrap/syscall-<PLAT>.S, and only then in the
2569 window between the two sigprocmask calls, since at all other
2570 times, we run with async signals on the host blocked. Hence
2571 make enquiries on the basis that we were in or very close to a
2572 syscall, and attempt to fix up the guest state accordingly.
2574 (normal async signals occurring during computation are blocked,
2575 but periodically polled for using VG_(sigtimedwait_zero), and
2576 delivered at a point convenient for us. Hence this routine only
2577 deals with signals that are delivered to a thread during a
2578 syscall.) */
2580 /* First, extract a SysRes from the ucontext_t* given to this
2581 handler. If it is subsequently established by
2582 VG_(fixup_guest_state_after_syscall_interrupted) that the
2583 syscall was complete but the results had not been committed yet
2584 to the guest state, then it'll have to commit the results itself
2585 "by hand", and so we need to extract the SysRes. Of course if
2586 the thread was not in that particular window then the
2587 SysRes will be meaningless, but that's OK too because
2588 VG_(fixup_guest_state_after_syscall_interrupted) will detect
2589 that the thread was not in said window and ignore the SysRes. */
2591 /* To make matters more complex still, on Darwin we need to know
2592 the "class" of the syscall under consideration in order to be
2593 able to extract a correct SysRes. The class will have been
2594 saved just before the syscall, by VG_(client_syscall), into this
2595 thread's tst->arch.vex.guest_SC_CLASS. Hence: */
2596 # if defined(VGO_darwin)
2597 sres = VG_UCONTEXT_SYSCALL_SYSRES(uc, tst->arch.vex.guest_SC_CLASS);
2598 # else
2599 sres = VG_UCONTEXT_SYSCALL_SYSRES(uc);
2600 # endif
2602 /* (1) */
2603 VG_(fixup_guest_state_after_syscall_interrupted)(
2604 tid,
2605 VG_UCONTEXT_INSTR_PTR(uc),
2606 sres,
2607 !!(scss.scss_per_sig[sigNo].scss_flags & VKI_SA_RESTART) || VG_(is_in_kernel_restart_syscall)(tid),
2611 /* (2) */
2612 /* Set up the thread's state to deliver a signal.
2613 However, if exitreason is VgSrc_FatalSig, then thread tid was
2614 taken out of a syscall by VG_(nuke_all_threads_except).
2615 But after the emission of VKI_SIGKILL, another (fatal) async
2616 signal might be sent. In such a case, we must not handle this
2617 signal, as the thread is supposed to die first.
2618 => resume the scheduler for such a thread, so that the scheduler
2619 can let the thread die. */
2620 if (tst->exitreason != VgSrc_FatalSig
2621 && !is_sig_ign(info, tid))
2622 deliver_signal(tid, info, uc);
2624 /* It's crucial that (1) and (2) happen in the order (1) then (2)
2625 and not the other way around. (1) fixes up the guest thread
2626 state to reflect the fact that the syscall was interrupted --
2627 either to restart the syscall or to return EINTR. (2) then sets
2628 up the thread state to deliver the signal. Then we resume
2629 execution. First, the signal handler is run, since that's the
2630 second adjustment we made to the thread state. If that returns,
2631 then we resume at the guest state created by (1), viz, either
2632 the syscall returns EINTR or is restarted.
2634 If (2) was done before (1) the outcome would be completely
2635 different, and wrong. */
2637 /* longjmp back to the thread's main loop to start executing the
2638 handler. */
2639 resume_scheduler(tid);
2641 VG_(core_panic)("async_signalhandler: got unexpected signal "
2642 "while outside of scheduler");
2645 /* Extend the stack of thread #tid to cover addr. It is expected that
2646 addr either points into an already mapped anonymous segment or into a
2647 reservation segment abutting the stack segment. Everything else is a bug.
2649 Returns True on success, False on failure.
2651 Succeeds without doing anything if addr is already within a segment.
2653 Failure could be caused by:
2654 - addr not below a growable segment
2655 - new stack size would exceed the stack limit for the given thread
2656 - mmap failed for some other reason
2658 Bool VG_(extend_stack)(ThreadId tid, Addr addr)
2660 SizeT udelta;
2661 Addr new_stack_base;
2663 /* Get the segment containing addr. */
2664 const NSegment* seg = VG_(am_find_nsegment)(addr);
2665 vg_assert(seg != NULL);
2667 /* TODO: the test "seg->kind == SkAnonC" is really inadequate,
2668 because although it tests whether the segment is mapped
2669 _somehow_, it doesn't check that it has the right permissions
2670 (r,w, maybe x) ? */
2671 if (seg->kind == SkAnonC)
2672 /* addr is already mapped. Nothing to do. */
2673 return True;
2675 const NSegment* seg_next = VG_(am_next_nsegment)( seg, True/*fwds*/ );
2676 vg_assert(seg_next != NULL);
2678 udelta = VG_PGROUNDUP(seg_next->start - addr);
2679 new_stack_base = seg_next->start - udelta;
2681 VG_(debugLog)(1, "signals",
2682 "extending a stack base 0x%lx down by %lu"
2683 " new base 0x%lx to cover 0x%lx\n",
2684 seg_next->start, udelta, new_stack_base, addr);
2685 Bool overflow;
2686 if (! VG_(am_extend_into_adjacent_reservation_client)
2687 ( seg_next->start, -(SSizeT)udelta, &overflow )) {
2688 if (overflow)
2689 VG_(umsg)("Stack overflow in thread #%u: can't grow stack to %#lx\n",
2690 tid, new_stack_base);
2691 else
2692 VG_(umsg)("Cannot map memory to grow the stack for thread #%u "
2693 "to %#lx\n", tid, new_stack_base);
2694 return False;
2697 /* When we change the main stack, we have to let the stack handling
2698 code know about it. */
2699 VG_(change_stack)(VG_(clstk_id), new_stack_base, VG_(clstk_end));
2701 if (VG_(clo_sanity_level) > 2)
2702 VG_(sanity_check_general)(False);
2704 return True;
2707 static fault_catcher_t fault_catcher = NULL;
2709 fault_catcher_t VG_(set_fault_catcher)(fault_catcher_t catcher)
2711 fault_catcher_t prev_catcher = fault_catcher;
2712 fault_catcher = catcher;
2713 return prev_catcher;
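/* A sketch of how a caller such as memcheck's leak checker might use
   the catcher (assumptions: the VG_MINIMAL_SETJMP machinery from
   m_libcsetjmp, and a hypothetical scanning loop).  Note the catcher
   must not return if it handled the fault, so it longjmps back: */
#if 0
   static VG_MINIMAL_JMP_BUF(scan_jmpbuf);
   static void scan_catcher(Int sigNo, Addr addr)
   {
      VG_MINIMAL_LONGJMP(scan_jmpbuf);     /* abandon the faulting read */
   }
   ...
   fault_catcher_t prev = VG_(set_fault_catcher)(scan_catcher);
   if (VG_MINIMAL_SETJMP(scan_jmpbuf) == 0)
      val = *(volatile UWord*)p;           /* may fault harmlessly now  */
   VG_(set_fault_catcher)(prev);           /* always restore            */
#endif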
2716 static
2717 void sync_signalhandler_from_user ( ThreadId tid,
2718 Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
2720 ThreadId qtid;
2722 /* If some user-process sent us a sync signal (ie. it's not the result
2723 of a faulting instruction), then how we treat it depends on when it
2724 arrives... */
2726 if (VG_(threads)[tid].status == VgTs_WaitSys
2727 # if defined(VGO_solaris)
2728 /* Check if the signal was really received while doing a blocking
2729 syscall. Only then the async_signalhandler() path can be used. */
2730 && VG_(is_ip_in_blocking_syscall)(tid, VG_UCONTEXT_INSTR_PTR(uc))
2731 # endif
2733 /* Signal arrived while we're blocked in a syscall. This means that
2734 the client's signal mask was applied. In other words, we can't
2735 get here unless the client wants this signal right now. This means
2736 we can simply use the async_signalhandler. */
2737 if (VG_(clo_trace_signals))
2738 VG_(dmsg)("Delivering user-sent sync signal %d as async signal\n",
2739 sigNo);
2741 async_signalhandler(sigNo, info, uc);
2742 VG_(core_panic)("async_signalhandler returned!?\n");
2744 } else {
2745 /* Signal arrived while in generated client code, or while running
2746 Valgrind core code. That means that every thread has these signals
2747 unblocked, so we can't rely on the kernel to route them properly, and
2748 we need to queue them manually. */
2749 if (VG_(clo_trace_signals))
2750 VG_(dmsg)("Routing user-sent sync signal %d via queue\n", sigNo);
2752 # if defined(VGO_linux)
2753 /* On Linux, first we have to do a sanity check of the siginfo. */
2754 if (info->VKI_SIGINFO_si_pid == 0) {
2755 /* There's a per-user limit of pending siginfo signals. If
2756 you exceed this, by having more than that number of
2757 pending signals with siginfo, then new signals are
2758 delivered without siginfo. This condition can be caused
2759 by any unrelated program you're running at the same time
2760 as Valgrind, if it has a large number of pending siginfo
2761 signals which it isn't taking delivery of.
2763 Since we depend on siginfo to work out why we were sent a
2764 signal and what we should do about it, we really can't
2765 continue unless we get it. */
2766 VG_(umsg)("Signal %d (%s) appears to have lost its siginfo; "
2767 "I can't go on.\n", sigNo, VG_(signame)(sigNo));
2768 VG_(printf)(
2769 " This may be because one of your programs has consumed your ration of\n"
2770 " siginfo structures. For more information, see:\n"
2771 " http://kerneltrap.org/mailarchive/1/message/25599/thread\n"
2772 " Basically, some program on your system is building up a large queue of\n"
2773 " pending signals, and this causes the siginfo data for other signals to\n"
2774 " be dropped because it's exceeding a system limit. However, Valgrind\n"
2775 " absolutely needs siginfo for SIGSEGV. A workaround is to track down the\n"
2776 " offending program and avoid running it while using Valgrind, but there\n"
2777 " is no easy way to do this. Apparently the problem was fixed in kernel\n"
2778 " 2.6.12.\n");
2780 /* It's a fatal signal, so we force the default handler. */
2781 VG_(set_default_handler)(sigNo);
2782 deliver_signal(tid, info, uc);
2783 resume_scheduler(tid);
2784 VG_(exit)(99); /* If we can't resume, then just exit */
2786 # endif
2788 qtid = 0; /* shared pending by default */
2789 # if defined(VGO_linux)
2790 if (info->si_code == VKI_SI_TKILL)
2791 qtid = tid; /* directed to us specifically */
2792 # endif
2793 queue_signal(qtid, info);
2797 /* Returns the reported fault address for an exact address */
2798 static Addr fault_mask(Addr in)
2800 /* We have to use VG_PGROUNDDN because faults on s390x deliver only
2801 the page address, not the address within the page.
2803 # if defined(VGA_s390x)
2804 return VG_PGROUNDDN(in);
2805 # else
2806 return in;
2807 #endif
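/* Example, assuming 4KB pages: on s390x a fault at 0x40001a38 is
   reported as fault_mask(0x40001a38) == 0x40001000 (the page base);
   on all other architectures the exact address 0x40001a38 comes back
   unchanged. */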
2810 /* Returns True if the sync signal was due to the stack requiring extension
2811 and the extension was successful.
2813 static Bool extend_stack_if_appropriate(ThreadId tid, vki_siginfo_t* info)
2815 Addr fault;
2816 Addr esp;
2817 NSegment const *seg, *seg_next;
2819 if (info->si_signo != VKI_SIGSEGV)
2820 return False;
2822 fault = (Addr)info->VKI_SIGINFO_si_addr;
2823 esp = VG_(get_SP)(tid);
2824 seg = VG_(am_find_nsegment)(fault);
2825 seg_next = seg ? VG_(am_next_nsegment)( seg, True/*fwds*/ )
2826 : NULL;
2828 if (VG_(clo_trace_signals)) {
2829 if (seg == NULL)
2830 VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
2831 "seg=NULL\n",
2832 info->si_code, fault, tid, esp);
2833 else
2834 VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
2835 "seg=%#lx-%#lx\n",
2836 info->si_code, fault, tid, esp, seg->start, seg->end);
2839 if (info->si_code == VKI_SEGV_MAPERR
2840 && seg
2841 && seg->kind == SkResvn
2842 && seg->smode == SmUpper
2843 && seg_next
2844 && seg_next->kind == SkAnonC
2845 && fault >= fault_mask(esp - VG_STACK_REDZONE_SZB)) {
2846 /* If the fault address is above esp but below the current known
2847 stack segment base, and it was a fault because there was
2848 nothing mapped there (as opposed to a permissions fault),
2849 then extend the stack segment.
2851 Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
2852 if (VG_(am_addr_is_in_extensible_client_stack)(base)
2853 && VG_(extend_stack)(tid, base)) {
2854 if (VG_(clo_trace_signals))
2855 VG_(dmsg)(" -> extended stack base to %#lx\n",
2856 VG_PGROUNDDN(fault));
2857 return True;
2858 } else {
2859 return False;
2861 } else {
2862 return False;
2866 static
2867 void sync_signalhandler_from_kernel ( ThreadId tid,
2868 Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
2870 /* Check to see if some part of Valgrind itself is interested in faults.
2871 The fault catcher should never be set whilst we're in generated code, so
2872 check for that. AFAIK the only use of the catcher right now is
2873 memcheck's leak detector. */
2874 if (fault_catcher) {
2875 vg_assert(VG_(in_generated_code) == False);
2877 (*fault_catcher)(sigNo, (Addr)info->VKI_SIGINFO_si_addr);
2878 /* If the catcher returns, then it didn't handle the fault,
2879 so carry on panicking. */
2882 if (extend_stack_if_appropriate(tid, info)) {
2883 /* Stack extension occurred, so we don't need to do anything else; upon
2884 returning from this function, we'll restart the host (hence guest)
2885 instruction. */
2886 } else {
2887 /* OK, this is a signal we really have to deal with. If it came
2888 from the client's code, then we can jump back into the scheduler
2889 and have it delivered. Otherwise it's a Valgrind bug. */
2890 ThreadState *tst = VG_(get_ThreadState)(tid);
2892 if (VG_(sigismember)(&tst->sig_mask, sigNo)) {
2893 /* signal is blocked, but they're not allowed to block faults */
2894 VG_(set_default_handler)(sigNo);
2897 if (VG_(in_generated_code)) {
2898 if (VG_(gdbserver_report_signal) (info, tid)
2899 || VG_(sigismember)(&tst->sig_mask, sigNo)) {
2900 /* Can't continue; must longjmp back to the scheduler and thus
2901 enter the sighandler immediately. */
2902 deliver_signal(tid, info, uc);
2903 resume_scheduler(tid);
2905 else
2906 resume_scheduler(tid);
2909 /* If resume_scheduler returns or it's our fault, it means we
2910 don't have longjmp set up, implying that we weren't running
2911 client code, and therefore it was actually generated by
2912 Valgrind internally.
2914 VG_(dmsg)("VALGRIND INTERNAL ERROR: Valgrind received "
2915 "a signal %d (%s) - exiting\n",
2916 sigNo, VG_(signame)(sigNo));
2918 VG_(dmsg)("si_code=%d; Faulting address: %p; sp: %#lx\n",
2919 info->si_code, info->VKI_SIGINFO_si_addr,
2920 (Addr)VG_UCONTEXT_STACK_PTR(uc));
2922 if (0)
2923 VG_(kill_self)(sigNo); /* generate a core dump */
2925 //if (tid == 0) /* could happen after everyone has exited */
2926 // tid = VG_(master_tid);
2927 vg_assert(tid != 0);
2929 UnwindStartRegs startRegs;
2930 VG_(memset)(&startRegs, 0, sizeof(startRegs));
2932 VG_UCONTEXT_TO_UnwindStartRegs(&startRegs, uc);
2933 VG_(core_panic_at)("Killed by fatal signal", &startRegs);
2938 Receive a sync signal from the host.
2940 static
2941 void sync_signalhandler ( Int sigNo,
2942 vki_siginfo_t *info, struct vki_ucontext *uc )
2944 ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
2945 Bool from_user;
2947 if (0)
2948 VG_(printf)("sync_sighandler(%d, %p, %p)\n", sigNo, info, uc);
2950 vg_assert(info != NULL);
2951 vg_assert(info->si_signo == sigNo);
2952 vg_assert(sigNo == VKI_SIGSEGV
2953 || sigNo == VKI_SIGBUS
2954 || sigNo == VKI_SIGFPE
2955 || sigNo == VKI_SIGILL
2956 || sigNo == VKI_SIGTRAP);
2958 info->si_code = sanitize_si_code(info->si_code);
2960 from_user = !is_signal_from_kernel(tid, sigNo, info->si_code);
2962 if (VG_(clo_trace_signals)) {
2963 VG_(dmsg)("sync signal handler: "
2964 "signal=%d, si_code=%d, EIP=%#lx, eip=%#lx, from %s\n",
2965 sigNo, info->si_code, VG_(get_IP)(tid),
2966 (Addr)VG_UCONTEXT_INSTR_PTR(uc),
2967 ( from_user ? "user" : "kernel" ));
2969 vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
2971 /* // debug code:
2972 if (0) {
2973 VG_(printf)("info->si_signo %d\n", info->si_signo);
2974 VG_(printf)("info->si_errno %d\n", info->si_errno);
2975 VG_(printf)("info->si_code %d\n", info->si_code);
2976 VG_(printf)("info->si_pid %d\n", info->si_pid);
2977 VG_(printf)("info->si_uid %d\n", info->si_uid);
2978 VG_(printf)("info->si_status %d\n", info->si_status);
2979 VG_(printf)("info->si_addr %p\n", info->si_addr);
2983 /* Figure out if the signal is being sent from outside the process.
2984 (Why do we care?) If the signal is from the user rather than the
2985 kernel, then treat it more like an async signal than a sync signal --
2986 that is, merely queue it for later delivery. */
2987 if (from_user) {
2988 sync_signalhandler_from_user( tid, sigNo, info, uc);
2989 } else {
2990 sync_signalhandler_from_kernel(tid, sigNo, info, uc);
2993 # if defined(VGO_solaris)
2994 /* On Solaris we have to return from signal handler manually. */
2995 VG_(do_syscall2)(__NR_context, VKI_SETCONTEXT, (UWord)uc);
2996 # endif
3001 Kill this thread. Makes it leave any syscall it might be currently
3002 blocked in, and return to the scheduler. This doesn't mark the thread
3003 as exiting; that's the caller's job.
3005 static void sigvgkill_handler(int signo, vki_siginfo_t *si,
3006 struct vki_ucontext *uc)
3008 ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
3009 ThreadStatus at_signal = VG_(threads)[tid].status;
3011 if (VG_(clo_trace_signals))
3012 VG_(dmsg)("sigvgkill for lwp %d tid %u\n", VG_(gettid)(), tid);
3014 VG_(acquire_BigLock)(tid, "sigvgkill_handler");
3016 vg_assert(signo == VG_SIGVGKILL);
3017 vg_assert(si->si_signo == signo);
3019 /* jrs 2006 August 3: the following assertion seems incorrect to
3020 me, and fails on AIX. sigvgkill could be sent to a thread which
3021 is runnable - see VG_(nuke_all_threads_except) in the scheduler.
3022 Hence comment these out ..
3024 vg_assert(VG_(threads)[tid].status == VgTs_WaitSys);
3025 VG_(post_syscall)(tid);
3027 and instead do:
3029 if (at_signal == VgTs_WaitSys)
3030 VG_(post_syscall)(tid);
3031 /* jrs 2006 August 3 ends */
3033 resume_scheduler(tid);
3035 VG_(core_panic)("sigvgkill_handler couldn't return to the scheduler\n");
3038 static __attribute((unused))
3039 void pp_ksigaction ( vki_sigaction_toK_t* sa )
3041 Int i;
3042 VG_(printf)("pp_ksigaction: handler %p, flags 0x%x, restorer %p\n",
3043 sa->ksa_handler,
3044 (UInt)sa->sa_flags,
3045 # if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
3046 !defined(VGO_solaris)
3047 sa->sa_restorer
3048 # else
3049 (void*)0
3050 # endif
3052 VG_(printf)("pp_ksigaction: { ");
3053 for (i = 1; i <= VG_(max_signal); i++)
3054 if (VG_(sigismember)(&(sa->sa_mask),i))
3055 VG_(printf)("%d ", i);
3056 VG_(printf)("}\n");
3060 Force signal handler to default
3062 void VG_(set_default_handler)(Int signo)
3064 vki_sigaction_toK_t sa;
3066 sa.ksa_handler = VKI_SIG_DFL;
3067 sa.sa_flags = 0;
3068 # if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
3069 !defined(VGO_solaris)
3070 sa.sa_restorer = 0;
3071 # endif
3072 VG_(sigemptyset)(&sa.sa_mask);
3074 VG_(do_sys_sigaction)(signo, &sa, NULL);
3078 Poll for pending signals, and set the next one up for delivery.
3080 void VG_(poll_signals)(ThreadId tid)
3082 vki_siginfo_t si, *sip;
3083 vki_sigset_t pollset;
3084 ThreadState *tst = VG_(get_ThreadState)(tid);
3085 vki_sigset_t saved_mask;
3087 if (tst->exitreason != VgSrc_None ) {
3088 /* This task has been requested to die (e.g. due to a fatal signal
3089 received by the process, or because of a call to the exit syscall).
3090 So, we cannot poll new signals, as we are supposed to die asap.
3091 If we polled and delivered
3092 a new (maybe fatal) signal, this could cause a deadlock, as
3093 this thread would believe it has to terminate the other threads
3094 and wait for them to die, while we already have a thread doing
3095 that. */
3096 if (VG_(clo_trace_signals))
3097 VG_(dmsg)("poll_signals: not polling as thread %u is exitreason %s\n",
3098 tid, VG_(name_of_VgSchedReturnCode)(tst->exitreason));
3099 return;
3102 /* look for all the signals this thread isn't blocking */
3103 /* pollset = ~tst->sig_mask */
3104 VG_(sigcomplementset)( &pollset, &tst->sig_mask );
3106 block_all_host_signals(&saved_mask); // protect signal queue
3108 /* First look for any queued pending signals */
3109 sip = next_queued(tid, &pollset); /* this thread */
3111 if (sip == NULL)
3112 sip = next_queued(0, &pollset); /* process-wide */
3114 /* If there was nothing queued, ask the kernel for a pending signal */
3115 if (sip == NULL && VG_(sigtimedwait_zero)(&pollset, &si) > 0) {
3116 if (VG_(clo_trace_signals))
3117 VG_(dmsg)("poll_signals: got signal %d for thread %u exitreason %s\n",
3118 si.si_signo, tid,
3119 VG_(name_of_VgSchedReturnCode)(tst->exitreason));
3120 sip = &si;
3123 if (sip != NULL) {
3124 /* OK, something to do; deliver it */
3125 if (VG_(clo_trace_signals))
3126 VG_(dmsg)("Polling found signal %d for tid %u exitreason %s\n",
3127 sip->si_signo, tid,
3128 VG_(name_of_VgSchedReturnCode)(tst->exitreason));
3129 if (!is_sig_ign(sip, tid))
3130 deliver_signal(tid, sip, NULL);
3131 else if (VG_(clo_trace_signals))
3132 VG_(dmsg)(" signal %d ignored\n", sip->si_signo);
3134 sip->si_signo = 0; /* remove from signal queue, if that's
3135 where it came from */
3138 restore_all_host_signals(&saved_mask);
3141 /* At startup, copy the process' real signal state to the SCSS.
3142 Whilst doing this, block all real signals. Then calculate SKSS and
3143 set the kernel to that. Also initialise DCSS.
3145 void VG_(sigstartup_actions) ( void )
3147 Int i, ret, vKI_SIGRTMIN;
3148 vki_sigset_t saved_procmask;
3149 vki_sigaction_fromK_t sa;
3151 VG_(memset)(&scss, 0, sizeof(scss));
3152 VG_(memset)(&skss, 0, sizeof(skss));
3154 # if defined(VKI_SIGRTMIN)
3155 vKI_SIGRTMIN = VKI_SIGRTMIN;
3156 # else
3157 vKI_SIGRTMIN = 0; /* eg Darwin */
3158 # endif
3160 /* VG_(printf)("SIGSTARTUP\n"); */
3161 /* Block all signals. saved_procmask remembers the previous mask,
3162 which the first thread inherits.
3164 block_all_host_signals( &saved_procmask );
3166 /* Copy per-signal settings to SCSS. */
3167 for (i = 1; i <= _VKI_NSIG; i++) {
3168 /* Get the old host action */
3169 ret = VG_(sigaction)(i, NULL, &sa);
3171 # if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin) \
3172 || defined(VGP_nanomips_linux)
3173 /* apparently we may not even ask about the disposition of these
3174 signals, let alone change them */
3175 if (ret != 0 && (i == VKI_SIGKILL || i == VKI_SIGSTOP))
3176 continue;
3177 # endif
3179 if (ret != 0)
3180 break;
3182 /* Try setting it back to see if this signal is really
3183 available */
3184 if (vKI_SIGRTMIN > 0 /* it actually exists on this platform */
3185 && i >= vKI_SIGRTMIN) {
3186 vki_sigaction_toK_t tsa, sa2;
3188 tsa.ksa_handler = (void *)sync_signalhandler;
3189 tsa.sa_flags = VKI_SA_SIGINFO;
3190 # if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
3191 !defined(VGO_solaris)
3192 tsa.sa_restorer = 0;
3193 # endif
3194 VG_(sigfillset)(&tsa.sa_mask);
3196 /* try setting it to some arbitrary handler */
3197 if (VG_(sigaction)(i, &tsa, NULL) != 0) {
3198 /* failed - not really usable */
3199 break;
3202 VG_(convert_sigaction_fromK_to_toK)( &sa, &sa2 );
3203 ret = VG_(sigaction)(i, &sa2, NULL);
3204 vg_assert(ret == 0);
3207 VG_(max_signal) = i;
3209 if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
3210 VG_(printf)("snaffling handler 0x%lx for signal %d\n",
3211 (Addr)(sa.ksa_handler), i );
3213 scss.scss_per_sig[i].scss_handler = sa.ksa_handler;
3214 scss.scss_per_sig[i].scss_flags = sa.sa_flags;
3215 scss.scss_per_sig[i].scss_mask = sa.sa_mask;
3217 scss.scss_per_sig[i].scss_restorer = NULL;
3218 # if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
3219 !defined(VGO_solaris)
3220 scss.scss_per_sig[i].scss_restorer = sa.sa_restorer;
3221 # endif
3223 scss.scss_per_sig[i].scss_sa_tramp = NULL;
3224 # if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
3225 scss.scss_per_sig[i].scss_sa_tramp = NULL;
3226 /*sa.sa_tramp;*/
3227 /* We can't know what it was, because Darwin's sys_sigaction
3228 doesn't tell us. */
3229 # endif
3232 if (VG_(clo_trace_signals))
3233 VG_(dmsg)("Max kernel-supported signal is %d, VG_SIGVGKILL is %d\n",
3234 VG_(max_signal), VG_SIGVGKILL);
3236 /* Our private internal signals are treated as ignored */
3237 scss.scss_per_sig[VG_SIGVGKILL].scss_handler = VKI_SIG_IGN;
3238 scss.scss_per_sig[VG_SIGVGKILL].scss_flags = VKI_SA_SIGINFO;
3239 VG_(sigfillset)(&scss.scss_per_sig[VG_SIGVGKILL].scss_mask);
3241 /* Copy the process' signal mask into the root thread. */
3242 vg_assert(VG_(threads)[1].status == VgTs_Init);
3243 for (i = 2; i < VG_N_THREADS; i++)
3244 vg_assert(VG_(threads)[i].status == VgTs_Empty);
3246 VG_(threads)[1].sig_mask = saved_procmask;
3247 VG_(threads)[1].tmp_sig_mask = saved_procmask;
3249 /* Calculate SKSS and apply it. This also sets the initial kernel
3250 mask we need to run with. */
3251 handle_SCSS_change( True /* forced update */ );
3253 /* Leave with all signals still blocked; the thread scheduler loop
3254 will set the appropriate mask at the appropriate time. */
3257 /*--------------------------------------------------------------------*/
3258 /*--- end ---*/
3259 /*--------------------------------------------------------------------*/