2 * This file and its contents are supplied under the terms of the
3 * Common Development and Distribution License ("CDDL"), version 1.0.
4 * You may only use this file in accordance with the terms of version
7 * A full copy of the text of the CDDL should have accompanied this
8 * source. A copy of the CDDL is also available via the Internet at
9 * http://www.illumos.org/license/CDDL.
12 * Copyright 2018 Joyent, Inc.
16 * This file contains the trampolines that are used by KPTI in order to be
17 * able to take interrupts/trap/etc while on the "user" page table.
19 * We don't map the full kernel text into the user page table: instead we
20 * map this one small section of trampolines (which compiles to ~13 pages).
21 * These trampolines are set in the IDT always (so they will run no matter
22 * whether we're on the kernel or user page table), and their primary job is to
23 * pivot us to the kernel %cr3 and %rsp without ruining everything.
25 * All of these interrupts use the amd64 IST feature when we have KPTI enabled,
26 * meaning that they will execute with their %rsp set to a known location, even
27 * if we take them in the kernel.
29 * Over in desctbls.c (for cpu0) and mp_pc.c (other cpus) we set up the IST
30 * stack to point at &cpu->cpu_m.mcpu_kpti.kf_tr_rsp. You can see the mcpu_kpti
31 * (a struct kpti_frame) defined in machcpuvar.h. This struct is set up to be
32 * page-aligned, and we map the page it's on into both page tables. Using a
33 * struct attached to the cpu_t also means that we can use %rsp-relative
34 * addressing to find anything on the cpu_t, so we don't have to touch %gs or
35 * GSBASE at all on incoming interrupt trampolines (which can get pretty hairy).
37 * This little struct is where the CPU will push the actual interrupt frame.
38 * Then, in the trampoline, we change %cr3, then figure out our destination
39 * stack pointer and "pivot" to it (set %rsp and re-push the CPU's interrupt
40 * frame). Then we jump to the regular ISR in the kernel text and carry on as
43 * We leave the original frame and any spilled regs behind in the kpti_frame
44 * lazily until we want to return to userland. Then, we clear any spilled
45 * regs from it, and overwrite the rest with our iret frame. When switching
46 * this cpu to a different process (in hat_switch), we bzero the whole region to
47 * make sure nothing can leak between processes.
49 * When we're returning back to the original place we took the interrupt later
50 * (especially if it was in userland), we have to jmp back to the "return
51 * trampolines" here, since when we set %cr3 back to the user value, we need to
52 * be executing from code here in these shared pages and not the main kernel
53 * text again. Even though it should be fine to iret directly from kernel text
54 * when returning to kernel code, we make things jmp to a trampoline here just
57 * Note that with IST, it's very important that we always must have pivoted
58 * away from the IST stack before we could possibly take any other interrupt
59 * on the same IST (unless it's an end-of-the-world fault and we don't care
60 * about coming back from it ever).
62 * This is particularly relevant to the dbgtrap/brktrap trampolines, as they
63 * regularly have to happen from within trampoline code (e.g. in the sysenter
64 * single-step case) and then return to the world normally. As a result, these
65 * two are IST'd to their own kpti_frame right above the normal one (in the same
66 * page), so they don't clobber their parent interrupt.
68 * To aid with debugging, we also IST the page fault (#PF/pftrap), general
69 * protection fault (#GP/gptrap) and stack fault (#SS/stktrap) interrupts to
70 * their own separate kpti_frame. This ensures that if we take one of these
71 * due to a bug in trampoline code, we preserve the original trampoline
72 * state that caused the trap.
74 * NMI, MCE and dblfault interrupts also are taken on their own dedicated IST
75 * stacks, since they can interrupt another ISR at any time. These stacks are
76 * full-sized, however, and not a little kpti_frame struct. We only set %cr3 in
77 * their trampolines (and do it unconditionally), and don't bother pivoting
78 * away. We're either going into the panic() path, or we're going to return
79 * straight away without rescheduling, so it's fine to not be on our real
80 * kthread stack (and some of the state we want to go find it with might be
83 * Finally, for these "special" interrupts (NMI/MCE/double fault) we use a
84 * special %cr3 value we stash here in the text (kpti_safe_cr3). We set this to
85 * point at the PML4 for kas early in boot and never touch it again. Hopefully
86 * it survives whatever corruption brings down the rest of the kernel!
88 * Syscalls are different to interrupts (at least in the SYSENTER/SYSCALL64
89 * cases) in that they do not push an interrupt frame (and also have some other
90 * effects). In the syscall trampolines, we assume that we can only be taking
91 * the call from userland and use SWAPGS and an unconditional overwrite of %cr3.
92 * We do not do any stack pivoting for syscalls (and we leave SYSENTER's
93 * existing %rsp pivot untouched) -- instead we spill registers into
94 * %gs:CPU_KPTI_* as we need to.
96 * Note that the normal %cr3 values do not cause invalidations with PCIDE - see
101 * The macros here mostly line up with what's in kdi_idthdl.s, too, so if you
102 * fix bugs here check to see if they should be fixed there as well.
105 #include <sys/asm_linkage.h>
106 #include <sys/asm_misc.h>
107 #include <sys/regset.h>
108 #include <sys/privregs.h>
110 #include <sys/machbrand.h>
111 #include <sys/param.h>
113 #include <sys/segments.h>
115 #include <sys/trap.h>
116 #include <sys/ftrace.h>
117 #include <sys/traptrace.h>
118 #include <sys/clock.h>
119 #include <sys/model.h>
120 #include <sys/panic.h>
125 DGDEF3
(kpti_enable
, 8, 8)
131 .global kpti_tramp_start
135 /* This will be set by mlsetup, and then double-checked later */
136 .global kpti_safe_cr3
139 SET_SIZE
(kpti_safe_cr3
)
141 /* startup_kmem() will overwrite this */
147 #define SET_KERNEL_CR3(spillreg) \
148 mov
%cr3
, spillreg; \
149 mov spillreg
, %gs
:CPU_KPTI_TR_CR3; \
150 mov
%gs
:CPU_KPTI_KCR3
, spillreg; \
153 mov spillreg
, %cr3; \
157 #define SET_USER_CR3(spillreg) \
158 mov
%cr3
, spillreg; \
159 mov spillreg
, %gs
:CPU_KPTI_TR_CR3; \
160 mov
%gs
:CPU_KPTI_UCR3
, spillreg; \
163 #define SET_USER_CR3(spillreg) \
164 mov
%gs
:CPU_KPTI_UCR3
, spillreg; \
168 #define PIVOT_KPTI_STK(spillreg) \
169 mov
%rsp
, spillreg; \
170 mov
%gs
:CPU_KPTI_RET_RSP
, %rsp; \
171 pushq T_FRAMERET_SS
(spillreg
); \
172 pushq T_FRAMERET_RSP
(spillreg
); \
173 pushq T_FRAMERET_RFLAGS
(spillreg
); \
174 pushq T_FRAMERET_CS
(spillreg
); \
175 pushq T_FRAMERET_RIP
(spillreg
)
178 #define INTERRUPT_TRAMPOLINE_P(errpush) \
181 subq $KPTI_R14
, %rsp; \
182 /* Save current %cr3. */ \
184 mov
%r14, KPTI_TR_CR3
(%rsp
); \
186 cmpw $KCS_SEL
, KPTI_CS
(%rsp
); \
189 /* Change to the "kernel" %cr3 */ \
190 mov KPTI_KCR3
(%rsp
), %r14; \
195 /* Get our cpu_t in %r13 */ \
197 and $
(~
(MMU_PAGESIZE
- 1)), %r13; \
198 subq $CPU_KPTI_START
, %r13; \
199 /* Use top of the kthread stk */ \
200 mov CPU_THREAD
(%r13), %r14; \
201 mov T_STACK
(%r14), %r14; \
202 addq $REGSIZE+MINFRAME
, %r14; \
205 /* Check the %rsp in the frame. */ \
206 /* Is it above kernel base? */ \
207 mov kpti_kbase
, %r14; \
208 cmp %r14, KPTI_RSP
(%rsp
); \
210 /* Use the %rsp from the trap frame */ \
211 mov KPTI_RSP
(%rsp
), %r14; \
215 /* %r14 contains our destination stk */ \
217 pushq KPTI_SS
(%r13); \
218 pushq KPTI_RSP
(%r13); \
219 pushq KPTI_RFLAGS
(%r13); \
220 pushq KPTI_CS
(%r13); \
221 pushq KPTI_RIP
(%r13); \
223 mov KPTI_R14
(%r13), %r14; \
224 mov KPTI_R13
(%r13), %r13
226 #define INTERRUPT_TRAMPOLINE_NOERR \
227 INTERRUPT_TRAMPOLINE_P
()
229 #define INTERRUPT_TRAMPOLINE \
230 INTERRUPT_TRAMPOLINE_P
(pushq KPTI_ERR
(%r13))
233 * This is used for all interrupts that can plausibly be taken inside another
234 * interrupt and are using a kpti_frame stack (so #BP, #DB, #GP, #PF, #SS).
236 * We check for whether we took the interrupt while in another trampoline, in
237 * which case we need to use the kthread stack.
239 #define DBG_INTERRUPT_TRAMPOLINE_P(errpush) \
242 subq $KPTI_R14
, %rsp; \
243 /* Check for clobbering */ \
244 cmp $
0, KPTI_FLAG
(%rsp
); \
246 /* Don't worry, this totally works */ \
249 movq $
1, KPTI_FLAG
(%rsp
); \
250 /* Save current %cr3. */ \
252 mov
%r14, KPTI_TR_CR3
(%rsp
); \
254 cmpw $KCS_SEL
, KPTI_CS
(%rsp
); \
257 /* Change to the "kernel" %cr3 */ \
258 mov KPTI_KCR3
(%rsp
), %r14; \
263 /* Get our cpu_t in %r13 */ \
265 and $
(~
(MMU_PAGESIZE
- 1)), %r13; \
266 subq $CPU_KPTI_START
, %r13; \
267 /* Use top of the kthread stk */ \
268 mov CPU_THREAD
(%r13), %r14; \
269 mov T_STACK
(%r14), %r14; \
270 addq $REGSIZE+MINFRAME
, %r14; \
273 /* Check the %rsp in the frame. */ \
274 /* Is it above kernel base? */ \
275 /* If not, treat as user. */ \
276 mov kpti_kbase
, %r14; \
277 cmp %r14, KPTI_RSP
(%rsp
); \
279 /* Is it within the kpti_frame page? */ \
280 /* If it is, treat as user interrupt */ \
282 and $
(~
(MMU_PAGESIZE
- 1)), %r13; \
283 mov KPTI_RSP
(%rsp
), %r14; \
284 and $
(~
(MMU_PAGESIZE
- 1)), %r14; \
287 /* Were we in trampoline code? */ \
288 leaq kpti_tramp_start
, %r14; \
289 cmp %r14, KPTI_RIP
(%rsp
); \
291 leaq kpti_tramp_end
, %r14; \
292 cmp %r14, KPTI_RIP
(%rsp
); \
294 /* If we were, change %cr3: we might */ \
295 /* have interrupted before it did. */ \
296 mov KPTI_KCR3
(%rsp
), %r14; \
299 /* Use the %rsp from the trap frame */ \
300 mov KPTI_RSP
(%rsp
), %r14; \
304 /* %r14 contains our destination stk */ \
306 pushq KPTI_SS
(%r13); \
307 pushq KPTI_RSP
(%r13); \
308 pushq KPTI_RFLAGS
(%r13); \
309 pushq KPTI_CS
(%r13); \
310 pushq KPTI_RIP
(%r13); \
312 mov KPTI_R14
(%r13), %r14; \
313 movq $
0, KPTI_FLAG
(%r13); \
314 mov KPTI_R13
(%r13), %r13
316 #define DBG_INTERRUPT_TRAMPOLINE_NOERR \
317 DBG_INTERRUPT_TRAMPOLINE_P
()
319 #define DBG_INTERRUPT_TRAMPOLINE \
320 DBG_INTERRUPT_TRAMPOLINE_P
(pushq KPTI_ERR
(%r13))
323 * These labels (_start and _end) are used by trap.c to determine if
324 * we took an interrupt like an NMI during the return process.
326 .global tr_sysc_ret_start
330 * Syscall return trampolines.
332 * These are expected to be called on the kernel %gs. tr_sysret[ql] are
333 * called after %rsp is changed back to the user value, so we have no
334 * stack to work with. tr_sysexit has a kernel stack (but has to
335 * preserve rflags, soooo).
341 mov
%r13, %gs
:CPU_KPTI_R13
343 mov
%gs
:CPU_KPTI_R13
, %r13
344 /* Zero these to make sure they didn't leak from a kernel trap */
345 movq $
0, %gs
:CPU_KPTI_R13
346 movq $
0, %gs
:CPU_KPTI_R14
356 mov
%r13, %gs
:CPU_KPTI_R13
358 mov
%gs
:CPU_KPTI_R13
, %r13
359 /* Zero these to make sure they didn't leak from a kernel trap */
360 movq $
0, %gs
:CPU_KPTI_R13
361 movq $
0, %gs
:CPU_KPTI_R14
369 * Note: we want to preserve RFLAGS across this branch, since sysexit
370 * (unlike sysret above) does not restore RFLAGS for us.
372 * We still have the real kernel stack (sysexit does restore that), so
373 * we can use pushfq/popfq.
380 /* Have to pop it back off now before we change %cr3! */
382 mov
%r13, %gs
:CPU_KPTI_R13
384 mov
%gs
:CPU_KPTI_R13
, %r13
385 /* Zero these to make sure they didn't leak from a kernel trap */
386 movq $
0, %gs
:CPU_KPTI_R13
387 movq $
0, %gs
:CPU_KPTI_R14
397 .global tr_sysc_ret_end
401 * Syscall entry trampolines.
405 #define MK_SYSCALL_TRAMPOLINE(isr) \
406 ENTRY_NP
(tr_
##isr); \
408 mov
%r13, %gs
:CPU_KPTI_R13; \
410 mov
%r13, %gs
:CPU_KPTI_TR_CR3; \
411 mov
%gs
:CPU_KPTI_KCR3
, %r13; \
413 mov
%gs
:CPU_KPTI_R13
, %r13; \
418 #define MK_SYSCALL_TRAMPOLINE(isr) \
419 ENTRY_NP
(tr_
##isr); \
421 mov
%r13, %gs
:CPU_KPTI_R13; \
422 mov
%gs
:CPU_KPTI_KCR3
, %r13; \
424 mov
%gs
:CPU_KPTI_R13
, %r13; \
430 MK_SYSCALL_TRAMPOLINE
(sys_syscall
)
431 MK_SYSCALL_TRAMPOLINE
(sys_syscall32
)
432 MK_SYSCALL_TRAMPOLINE
(brand_sys_syscall
)
433 MK_SYSCALL_TRAMPOLINE
(brand_sys_syscall32
)
436 * SYSENTER is special. The CPU is really not very helpful when it
437 * comes to preserving and restoring state with it, and as a result
438 * we have to do all of it by hand. So, since we want to preserve
439 * RFLAGS, we have to be very careful in these trampolines to not
440 * clobber any bits in it. That means no cmpqs or branches!
442 ENTRY_NP
(tr_sys_sysenter
)
444 mov
%r13, %gs
:CPU_KPTI_R13
447 mov
%r13, %gs
:CPU_KPTI_TR_CR3
449 mov
%gs
:CPU_KPTI_KCR3
, %r13
451 mov
%gs
:CPU_KPTI_R13
, %r13
452 jmp _sys_sysenter_post_swapgs
453 SET_SIZE
(tr_sys_sysenter
)
455 ENTRY_NP
(tr_brand_sys_sysenter
)
457 mov
%r13, %gs
:CPU_KPTI_R13
460 mov
%r13, %gs
:CPU_KPTI_TR_CR3
462 mov
%gs
:CPU_KPTI_KCR3
, %r13
464 mov
%gs
:CPU_KPTI_R13
, %r13
465 jmp _brand_sys_sysenter_post_swapgs
466 SET_SIZE
(tr_brand_sys_sysenter
)
468 #define MK_SYSCALL_INT_TRAMPOLINE(isr) \
469 ENTRY_NP
(tr_
##isr); \
471 mov
%r13, %gs
:CPU_KPTI_R13; \
472 SET_KERNEL_CR3
(%r13); \
473 mov
%gs
:CPU_THREAD
, %r13; \
474 mov T_STACK
(%r13), %r13; \
475 addq $REGSIZE+MINFRAME
, %r13; \
477 pushq
%gs
:CPU_KPTI_SS; \
478 pushq
%gs
:CPU_KPTI_RSP; \
479 pushq
%gs
:CPU_KPTI_RFLAGS; \
480 pushq
%gs
:CPU_KPTI_CS; \
481 pushq
%gs
:CPU_KPTI_RIP; \
482 mov
%gs
:CPU_KPTI_R13
, %r13; \
487 MK_SYSCALL_INT_TRAMPOLINE
(brand_sys_syscall_int
)
488 MK_SYSCALL_INT_TRAMPOLINE
(sys_syscall_int
)
491 * Interrupt/trap return trampolines
494 .global tr_intr_ret_start
497 ENTRY_NP
(tr_iret_auto
)
500 cmpw $KCS_SEL
, T_FRAMERET_CS
(%rsp
)
503 SET_SIZE
(tr_iret_auto
)
505 ENTRY_NP
(tr_iret_kernel
)
507 * Yes, this does nothing extra. But this way we know if we see iret
508 * elsewhere, then we've failed to properly consider trampolines there.
511 SET_SIZE
(tr_iret_kernel
)
513 ENTRY_NP
(tr_iret_user
)
518 mov
%r13, %gs
:CPU_KPTI_R13
521 mov
%gs
:CPU_KPTI_R13
, %r13
522 /* Zero these to make sure they didn't leak from a kernel trap */
523 movq $
0, %gs
:CPU_KPTI_R13
524 movq $
0, %gs
:CPU_KPTI_R14
528 SET_SIZE
(tr_iret_user
)
531 * This special return trampoline is for KDI's use only (with kmdb).
533 * KDI/kmdb do not use swapgs -- they directly write the GSBASE MSR
534 * instead. This trampoline runs after GSBASE has already been changed
535 * back to the userland value (so we can't use %gs).
537 * Instead, the caller gives us a pointer to the kpti_dbg frame in %r13.
538 * The KPTI_R13 member in the kpti_dbg has already been set to what the
539 * real %r13 should be before we IRET.
541 * Additionally, KDI keeps a copy of the incoming %cr3 value when it
542 * took an interrupt, and has put that back in the kpti_dbg area for us
543 * to use, so we don't do any sniffing of %cs here. This is important
544 * so that debugging code that changes %cr3 is possible.
546 ENTRY_NP
(tr_iret_kdi
)
547 movq
%r14, KPTI_R14
(%r13) /* %r14 has to be preserved by us */
549 movq
%rsp
, %r14 /* original %rsp is pointing at IRET frame */
550 leaq KPTI_TOP
(%r13), %rsp
551 pushq T_FRAMERET_SS
(%r14)
552 pushq T_FRAMERET_RSP
(%r14)
553 pushq T_FRAMERET_RFLAGS
(%r14)
554 pushq T_FRAMERET_CS
(%r14)
555 pushq T_FRAMERET_RIP
(%r14)
557 movq KPTI_TR_CR3
(%r13), %r14
560 movq KPTI_R14
(%r13), %r14
561 movq KPTI_R13
(%r13), %r13 /* preserved by our caller */
564 SET_SIZE
(tr_iret_kdi
)
566 .global tr_intr_ret_end
570 * Interrupt/trap entry trampolines
573 /* CPU pushed an error code, and ISR wants one */
574 #define MK_INTR_TRAMPOLINE(isr) \
575 ENTRY_NP
(tr_
##isr); \
576 INTERRUPT_TRAMPOLINE; \
580 /* CPU didn't push an error code, and ISR doesn't want one */
581 #define MK_INTR_TRAMPOLINE_NOERR(isr) \
582 ENTRY_NP
(tr_
##isr); \
584 INTERRUPT_TRAMPOLINE_NOERR; \
588 /* CPU pushed an error code, and ISR wants one */
589 #define MK_DBG_INTR_TRAMPOLINE(isr) \
590 ENTRY_NP
(tr_
##isr); \
591 DBG_INTERRUPT_TRAMPOLINE; \
595 /* CPU didn't push an error code, and ISR doesn't want one */
596 #define MK_DBG_INTR_TRAMPOLINE_NOERR(isr) \
597 ENTRY_NP
(tr_
##isr); \
599 DBG_INTERRUPT_TRAMPOLINE_NOERR; \
604 MK_INTR_TRAMPOLINE_NOERR
(div0trap
)
605 MK_DBG_INTR_TRAMPOLINE_NOERR
(dbgtrap
)
606 MK_DBG_INTR_TRAMPOLINE_NOERR
(brktrap
)
607 MK_INTR_TRAMPOLINE_NOERR
(ovflotrap
)
608 MK_INTR_TRAMPOLINE_NOERR
(boundstrap
)
609 MK_INTR_TRAMPOLINE_NOERR
(invoptrap
)
610 MK_INTR_TRAMPOLINE_NOERR
(ndptrap
)
611 MK_INTR_TRAMPOLINE
(invtsstrap
)
612 MK_INTR_TRAMPOLINE
(segnptrap
)
613 MK_DBG_INTR_TRAMPOLINE
(stktrap
)
614 MK_DBG_INTR_TRAMPOLINE
(gptrap
)
615 MK_DBG_INTR_TRAMPOLINE
(pftrap
)
616 MK_INTR_TRAMPOLINE_NOERR
(resvtrap
)
617 MK_INTR_TRAMPOLINE_NOERR
(ndperr
)
618 MK_INTR_TRAMPOLINE
(achktrap
)
619 MK_INTR_TRAMPOLINE_NOERR
(xmtrap
)
620 MK_INTR_TRAMPOLINE_NOERR
(invaltrap
)
621 MK_INTR_TRAMPOLINE_NOERR
(fasttrap
)
622 MK_INTR_TRAMPOLINE_NOERR
(dtrace_ret
)
625 * These are special because they can interrupt other traps, and
626 * each other. We don't need to pivot their stacks, because they have
627 * dedicated IST stack space, but we need to change %cr3.
631 mov kpti_safe_cr3
, %r13
638 ENTRY_NP
(tr_syserrtrap
)
640 * If we got here we should always have a zero error code pushed.
641 * The INT $0x8 instr doesn't seem to push one, though, which we use
642 * as an emergency panic in the other trampolines. So adjust things
650 mov kpti_safe_cr3
, %r13
654 SET_SIZE
(tr_syserrtrap
)
659 mov kpti_safe_cr3
, %r13
666 * Interrupts start at 32
669 ENTRY_NP
(tr_ivct
##n) \
671 INTERRUPT_TRAMPOLINE; \
676 MKIVCT
(32); MKIVCT
(33); MKIVCT
(34); MKIVCT
(35);
677 MKIVCT
(36); MKIVCT
(37); MKIVCT
(38); MKIVCT
(39);
678 MKIVCT
(40); MKIVCT
(41); MKIVCT
(42); MKIVCT
(43);
679 MKIVCT
(44); MKIVCT
(45); MKIVCT
(46); MKIVCT
(47);
680 MKIVCT
(48); MKIVCT
(49); MKIVCT
(50); MKIVCT
(51);
681 MKIVCT
(52); MKIVCT
(53); MKIVCT
(54); MKIVCT
(55);
682 MKIVCT
(56); MKIVCT
(57); MKIVCT
(58); MKIVCT
(59);
683 MKIVCT
(60); MKIVCT
(61); MKIVCT
(62); MKIVCT
(63);
684 MKIVCT
(64); MKIVCT
(65); MKIVCT
(66); MKIVCT
(67);
685 MKIVCT
(68); MKIVCT
(69); MKIVCT
(70); MKIVCT
(71);
686 MKIVCT
(72); MKIVCT
(73); MKIVCT
(74); MKIVCT
(75);
687 MKIVCT
(76); MKIVCT
(77); MKIVCT
(78); MKIVCT
(79);
688 MKIVCT
(80); MKIVCT
(81); MKIVCT
(82); MKIVCT
(83);
689 MKIVCT
(84); MKIVCT
(85); MKIVCT
(86); MKIVCT
(87);
690 MKIVCT
(88); MKIVCT
(89); MKIVCT
(90); MKIVCT
(91);
691 MKIVCT
(92); MKIVCT
(93); MKIVCT
(94); MKIVCT
(95);
692 MKIVCT
(96); MKIVCT
(97); MKIVCT
(98); MKIVCT
(99);
693 MKIVCT
(100); MKIVCT
(101); MKIVCT
(102); MKIVCT
(103);
694 MKIVCT
(104); MKIVCT
(105); MKIVCT
(106); MKIVCT
(107);
695 MKIVCT
(108); MKIVCT
(109); MKIVCT
(110); MKIVCT
(111);
696 MKIVCT
(112); MKIVCT
(113); MKIVCT
(114); MKIVCT
(115);
697 MKIVCT
(116); MKIVCT
(117); MKIVCT
(118); MKIVCT
(119);
698 MKIVCT
(120); MKIVCT
(121); MKIVCT
(122); MKIVCT
(123);
699 MKIVCT
(124); MKIVCT
(125); MKIVCT
(126); MKIVCT
(127);
700 MKIVCT
(128); MKIVCT
(129); MKIVCT
(130); MKIVCT
(131);
701 MKIVCT
(132); MKIVCT
(133); MKIVCT
(134); MKIVCT
(135);
702 MKIVCT
(136); MKIVCT
(137); MKIVCT
(138); MKIVCT
(139);
703 MKIVCT
(140); MKIVCT
(141); MKIVCT
(142); MKIVCT
(143);
704 MKIVCT
(144); MKIVCT
(145); MKIVCT
(146); MKIVCT
(147);
705 MKIVCT
(148); MKIVCT
(149); MKIVCT
(150); MKIVCT
(151);
706 MKIVCT
(152); MKIVCT
(153); MKIVCT
(154); MKIVCT
(155);
707 MKIVCT
(156); MKIVCT
(157); MKIVCT
(158); MKIVCT
(159);
708 MKIVCT
(160); MKIVCT
(161); MKIVCT
(162); MKIVCT
(163);
709 MKIVCT
(164); MKIVCT
(165); MKIVCT
(166); MKIVCT
(167);
710 MKIVCT
(168); MKIVCT
(169); MKIVCT
(170); MKIVCT
(171);
711 MKIVCT
(172); MKIVCT
(173); MKIVCT
(174); MKIVCT
(175);
712 MKIVCT
(176); MKIVCT
(177); MKIVCT
(178); MKIVCT
(179);
713 MKIVCT
(180); MKIVCT
(181); MKIVCT
(182); MKIVCT
(183);
714 MKIVCT
(184); MKIVCT
(185); MKIVCT
(186); MKIVCT
(187);
715 MKIVCT
(188); MKIVCT
(189); MKIVCT
(190); MKIVCT
(191);
716 MKIVCT
(192); MKIVCT
(193); MKIVCT
(194); MKIVCT
(195);
717 MKIVCT
(196); MKIVCT
(197); MKIVCT
(198); MKIVCT
(199);
718 MKIVCT
(200); MKIVCT
(201); MKIVCT
(202); MKIVCT
(203);
719 MKIVCT
(204); MKIVCT
(205); MKIVCT
(206); MKIVCT
(207);
720 MKIVCT
(208); MKIVCT
(209); MKIVCT
(210); MKIVCT
(211);
721 MKIVCT
(212); MKIVCT
(213); MKIVCT
(214); MKIVCT
(215);
722 MKIVCT
(216); MKIVCT
(217); MKIVCT
(218); MKIVCT
(219);
723 MKIVCT
(220); MKIVCT
(221); MKIVCT
(222); MKIVCT
(223);
724 MKIVCT
(224); MKIVCT
(225); MKIVCT
(226); MKIVCT
(227);
725 MKIVCT
(228); MKIVCT
(229); MKIVCT
(230); MKIVCT
(231);
726 MKIVCT
(232); MKIVCT
(233); MKIVCT
(234); MKIVCT
(235);
727 MKIVCT
(236); MKIVCT
(237); MKIVCT
(238); MKIVCT
(239);
728 MKIVCT
(240); MKIVCT
(241); MKIVCT
(242); MKIVCT
(243);
729 MKIVCT
(244); MKIVCT
(245); MKIVCT
(246); MKIVCT
(247);
730 MKIVCT
(248); MKIVCT
(249); MKIVCT
(250); MKIVCT
(251);
731 MKIVCT
(252); MKIVCT
(253); MKIVCT
(254); MKIVCT
(255);
734 * We're PCIDE, but we don't have INVPCID. The only way to invalidate a
735 * PCID other than the current one, then, is to load its cr3 then
736 * invlpg. But loading kf_user_cr3 means we can longer access our
737 * caller's text mapping (or indeed, its stack). So this little helper
738 * has to live within our trampoline text region.
740 * Called as tr_mmu_flush_user_range(addr, len, pgsz, cr3)
742 ENTRY_NP
(tr_mmu_flush_user_range
)
744 /* When we read cr3, it never has the NOINVL bit set. */
746 movq $CR3_NOINVL_BIT
, %rbx
751 .align ASM_ENTRY_ALIGN
760 SET_SIZE
(tr_mmu_flush_user_range
)
763 .global kpti_tramp_end