/*
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 */

/*
 * Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 * Copyright (c) 2016 by Delphix. All rights reserved.
 */
/*
 * Page Retire - Big Theory Statement.
 *
 * This file handles removing sections of faulty memory from use when the
 * user land FMA Diagnosis Engine requests that a page be removed or when
 * a CE or UE is detected by the hardware.
 *
 * In the bad old days, the kernel side of Page Retire did a lot of the work
 * on its own. Now, with the DE keeping track of errors, the kernel side is
 * rather simple minded on most platforms.
 *
 * Errors are all reflected to the DE, and after digesting the error and
 * looking at all previously reported errors, the DE decides what should
 * be done about the current error. If the DE wants a particular page to
 * be retired, then the kernel page retire code is invoked via an ioctl.
 * On non-FMA platforms, the ue_drain and ce_drain paths end up calling
 * page retire to handle the error. Since page retire is just a simple
 * mechanism it doesn't need to differentiate between the different callers.
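 *
 * As an illustration of that path (a sketch only; MEM_PAGE_RETIRE is
 * our recollection of the /dev/mem ioctl name and should be treated as
 * an assumption), the DE's request looks roughly like:
 *
 *	int fd = open("/dev/mem", O_RDWR);
 *	uint64_t pa = ...;	physical address from the ereport
 *	(void) ioctl(fd, MEM_PAGE_RETIRE, &pa);
 *
 * which lands in mmioctl_page_retire() and from there in page_retire().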
 *
 * The p_toxic field in the page_t is used to indicate which errors have
 * occurred and what action has been taken on a given page. Because errors are
 * reported without regard to the locked state of a page, no locks are used
 * to SET the error bits in p_toxic. However, in order to clear the error
 * bits, the page_t must be held exclusively locked.
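 *
 * (The primitives implementing this rule are page_settoxic(), which ORs
 * bits in with atomic_or_8() and so needs no lock, and page_clrtoxic(),
 * which asserts the exclusive lock before clearing; both are defined
 * below.)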
 *
 * When page_retire() is called, it must be able to acquire locks, sleep, etc.
 * It must not be called from high-level interrupt context.
 *
 * Depending on how the requested page is being used at the time of the retire
 * request (and on the availability of sufficient system resources), the page
 * may be retired immediately, or just marked for retirement later. For
 * example, locked pages are marked, while free pages are retired. Multiple
 * requests may be made to retire the same page, although there is no need
 * to: once the p_toxic flags are set, the page will be retired as soon as it
 * can be exclusively locked.
 *
 * The retire mechanism is driven centrally out of page_unlock(). To expedite
 * the retirement of pages, further requests for SE_SHARED locks are denied
 * as long as a page retirement is pending. In addition, as long as pages are
 * pending retirement a background thread runs periodically trying to retire
 * those pages. Pages which could not be retired while the system is running
 * are scrubbed prior to rebooting to avoid latent errors on the next boot.
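 *
 * In outline (a simplified sketch of the unlock-driven check, not the
 * literal page_unlock() code):
 *
 *	if (PP_TOXIC(pp) && page_tryupgrade(pp)) {
 *		retire the page now that it is held SE_EXCL
 *	} else {
 *		leave p_toxic set; the background thread or a later
 *		page_unlock() will finish the job
 *	}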
 *
 * UE pages without persistent errors are scrubbed and returned to service.
 * Recidivist pages, as well as FMA-directed requests for retirement, result
 * in the page being taken out of service. Once the decision is made to take
 * a page out of service, the page is cleared, hashed onto the retired_pages
 * vnode, marked as retired, and it is unlocked. No other requesters (except
 * for unretire) are allowed to lock retired pages.
 *
 * The public routines return (sadly) 0 if they worked and a non-zero error
 * value if something went wrong. This is done for the ioctl side of the
 * world to allow errors to be reflected all the way out to user land. The
 * non-zero values are explained in comments atop each function.
 *
 * Things to fix:
 *
 * 1. Trying to retire non-relocatable kvp pages may result in a
 * quagmire. This is because seg_kmem() no longer keeps its pages locked,
 * and calls page_lookup() in the free path; since kvp pages are modified
 * and don't have a usable backing store, page_retire() can't do anything
 * with them, and we'll keep denying the lock to seg_kmem_free() in a
 * vicious cycle. To prevent that, we don't deny locks to kvp pages, and
 * hence only try to retire a page from page_unlock() in the free path.
 * Since most kernel pages are indefinitely held anyway, and don't
 * participate in I/O, this is of little consequence.
 *
 * 2. Low memory situations will be interesting. If we don't have
 * enough memory for page_relocate() to succeed, we won't be able to
 * retire dirty pages; nobody will be able to push them out to disk
 * either, since we aggressively deny the page lock. We could change
 * fsflush so it can recognize this situation, grab the lock, and push
 * the page out, where we'll catch it in the free path and retire it.
 *
 * 3. Beware of places that have code like this in them:
 *
 *	if (! page_tryupgrade(pp)) {
 *		page_unlock(pp);
 *		while (! page_lock(pp, SE_EXCL, NULL, P_RECLAIM))
 *			continue;
 *	}
 *	page_free(pp);
 *
 * The problem is that pp can change identity right after the
 * page_unlock() call. In particular, page_retire() can step in
 * there, change pp's identity, and hash pp onto the retired_vnode.
 *
 * Of course, other functions besides page_retire() can have the
 * same effect. A kmem reader can waltz by, set up a mapping to the
 * page, and then unlock the page. Page_free() will then go castors
 * up. So if anybody is doing this, it's already a bug.
 *
 * 4. mdboot()'s call into page_retire_mdboot() should probably be
 * moved lower. Where the call is made now, we can get into trouble
 * by scrubbing a kernel page that is then accessed later.
 */
#include <sys/types.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mman.h>
#include <sys/vnode.h>
#include <sys/cmn_err.h>
#include <sys/ksynch.h>
#include <sys/thread.h>
#include <sys/disp.h>
#include <sys/ontrap.h>
#include <sys/vmsystm.h>
#include <sys/mem_config.h>
#include <sys/atomic.h>
#include <sys/callb.h>
#include <sys/kobj.h>
#include <vm/vm_dep.h>
#include <vm/seg_kmem.h>
/*
 * vnode for all pages which are retired from the VM system;
 */
vnode_t *retired_pages;

/*
 * vnode ops - all defaults
 */
static const struct vnodeops retired_vnodeops = {
	.vnop_name = "retired_pages",
};

static int page_retire_pp_finish(page_t *, void *, uint_t);
/*
 * Make a list of all of the pages that have been marked for retirement
 * but are not yet retired. At system shutdown, we will scrub all of the
 * pages in the list in case there are outstanding UEs. Then, we
 * cross-check this list against the number of pages that are yet to be
 * retired, and if we find inconsistencies, we scan every page_t in the
 * whole system looking for any pages that need to be scrubbed for UEs.
 * The background thread also uses this queue to determine which pages
 * it should keep trying to retire.
 */
#ifdef	DEBUG
#define	PR_PENDING_QMAX	32
#else	/* DEBUG */
#define	PR_PENDING_QMAX	256
#endif	/* DEBUG */
page_t	*pr_pending_q[PR_PENDING_QMAX];
kmutex_t pr_q_mutex;
/*
 * Page retire global kstats
 */
struct page_retire_kstat {
	kstat_named_t	pr_retired;
	kstat_named_t	pr_requested;
	kstat_named_t	pr_requested_free;
	kstat_named_t	pr_enqueue_fail;
	kstat_named_t	pr_dequeue_fail;
	kstat_named_t	pr_pending;
	kstat_named_t	pr_pending_kas;
	kstat_named_t	pr_failed;
	kstat_named_t	pr_failed_kernel;
	kstat_named_t	pr_limit;
	kstat_named_t	pr_limit_exceeded;
	kstat_named_t	pr_fma;
	kstat_named_t	pr_mce;
	kstat_named_t	pr_ue;
	kstat_named_t	pr_ue_cleared_retire;
	kstat_named_t	pr_ue_cleared_free;
	kstat_named_t	pr_ue_persistent;
	kstat_named_t	pr_unretired;
};
static struct page_retire_kstat page_retire_kstat = {
	{ "pages_retired",		KSTAT_DATA_UINT64},
	{ "pages_retire_request",	KSTAT_DATA_UINT64},
	{ "pages_retire_request_free",	KSTAT_DATA_UINT64},
	{ "pages_notenqueued",		KSTAT_DATA_UINT64},
	{ "pages_notdequeued",		KSTAT_DATA_UINT64},
	{ "pages_pending",		KSTAT_DATA_UINT64},
	{ "pages_pending_kas",		KSTAT_DATA_UINT64},
	{ "pages_deferred",		KSTAT_DATA_UINT64},
	{ "pages_deferred_kernel",	KSTAT_DATA_UINT64},
	{ "pages_limit",		KSTAT_DATA_UINT64},
	{ "pages_limit_exceeded",	KSTAT_DATA_UINT64},
	{ "pages_fma",			KSTAT_DATA_UINT64},
	{ "pages_multiple_ce",		KSTAT_DATA_UINT64},
	{ "pages_ue",			KSTAT_DATA_UINT64},
	{ "pages_ue_cleared_retired",	KSTAT_DATA_UINT64},
	{ "pages_ue_cleared_freed",	KSTAT_DATA_UINT64},
	{ "pages_ue_persistent",	KSTAT_DATA_UINT64},
	{ "pages_unretired",		KSTAT_DATA_UINT64},
};
static kstat_t *page_retire_ksp = NULL;

#define	PR_INCR_KSTAT(stat)	\
	atomic_inc_64(&(page_retire_kstat.stat.value.ui64))
#define	PR_DECR_KSTAT(stat)	\
	atomic_dec_64(&(page_retire_kstat.stat.value.ui64))

#define	PR_KSTAT_RETIRED_CE	(page_retire_kstat.pr_mce.value.ui64)
#define	PR_KSTAT_RETIRED_FMA	(page_retire_kstat.pr_fma.value.ui64)
#define	PR_KSTAT_RETIRED_NOTUE	(PR_KSTAT_RETIRED_CE + PR_KSTAT_RETIRED_FMA)
#define	PR_KSTAT_PENDING	(page_retire_kstat.pr_pending.value.ui64)
#define	PR_KSTAT_PENDING_KAS	(page_retire_kstat.pr_pending_kas.value.ui64)
#define	PR_KSTAT_EQFAIL		(page_retire_kstat.pr_enqueue_fail.value.ui64)
#define	PR_KSTAT_DQFAIL		(page_retire_kstat.pr_dequeue_fail.value.ui64)
/*
 * page retire kstats to list all retired pages
 */
static int pr_list_kstat_update(kstat_t *ksp, int rw);
static int pr_list_kstat_snapshot(kstat_t *ksp, void *buf, int rw);
kmutex_t pr_list_kstat_mutex;
/*
 * Limit the number of multiple CE page retires.
 * The default is 0.1% of physmem, or 1 in 1000 pages. This is set in
 * basis points, where 100 basis points equals one percent.
 */
uint64_t max_pages_retired_bps = MCE_BPT;
#define	PAGE_RETIRE_LIMIT	((physmem * max_pages_retired_bps) / 10000)
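
/*
 * Worked example (illustrative numbers, not from the source): with the
 * default of 10 basis points (0.1%) and a physmem of 1048576 pages,
 * PAGE_RETIRE_LIMIT evaluates to (1048576 * 10) / 10000 = 1048 pages,
 * i.e. roughly 1 page in 1000.
 */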
/*
 * Control over the verbosity of page retirement.
 *
 * When set to zero (the default), no messages will be printed.
 * When set to one, summary messages will be printed.
 * When set > one, all messages will be printed.
 *
 * A value of one will trigger detailed messages for retirement operations,
 * and is intended as a platform tunable for processors where FMA's DE does
 * not run (e.g., spitfire). Values > one are intended for debugging only.
 */
int page_retire_messages = 0;
/*
 * Control whether or not we return scrubbed UE pages to service.
 * By default we do not since FMA wants to run its diagnostics first
 * and then ask us to unretire the page if it passes. Non-FMA platforms
 * may set this to zero so we will only retire recidivist pages. It should
 * not be changed by the user.
 */
int page_retire_first_ue = 1;
/*
 * Master enable for page retire. This prevents a CE or UE early in boot
 * from trying to retire a page before page_retire_init() has finished
 * setting things up. This is internal only and is not a tunable!
 */
static int pr_enable = 0;

static void (*memscrub_notify_func)(uint64_t);
#ifdef	DEBUG
struct page_retire_debug {
	int prd_uescrubbed;
	int prd_uenotscrubbed;
	int prd_ulocked;
	int prd_unotretired;
	int prd_udestroy;
	int prd_uhashout;
	int prd_uunretired;
	int prd_unotlocked;
	int prd_prlocked;
	int prd_prnotlocked;
	int prd_prretired;
	int prd_checkhit;
	int prd_checkmiss_pend;
	int prd_checkmiss_noerr;
	int prd_tclocked;
	int prd_nofreedemote;
} pr_debug;

#define	PR_DEBUG(foo)	((pr_debug.foo)++)
/*
 * A type histogram. We record the incidence of the various toxic
 * flag combinations along with the interesting page attributes. The
 * goal is to get as many combinations as we can while driving all
 * pr_debug values nonzero (indicating we've exercised all possible
 * code paths across all possible page types). Not all combinations
 * will make sense -- e.g. PRT_MOD|PRT_KERNEL.
 *
 * pr_type offset bit encoding (when examining with a debugger):
 */

#define	PRT_NAMED	0x01
#define	PRT_KERNEL	0x02
#define	PRT_FREE	0x04
#define	PRT_MOD		0x08
#define	PRT_FMA		0x00	/* yes, this is not a mistake */
#define	PRT_MCE		0x10
#define	PRT_UE		0x20
#define	PRT_ALL		0x3f

int pr_types[PRT_ALL+1];

#define	PR_TYPES(pp)	{			\
	int whichtype = 0;			\
	if (pp->p_vnode)			\
		whichtype |= PRT_NAMED;		\
	if (PP_ISKAS(pp))			\
		whichtype |= PRT_KERNEL;	\
	if (PP_ISFREE(pp))			\
		whichtype |= PRT_FREE;		\
	if (hat_ismod(pp))			\
		whichtype |= PRT_MOD;		\
	if (pp->p_toxic & PR_UE)		\
		whichtype |= PRT_UE;		\
	if (pp->p_toxic & PR_MCE)		\
		whichtype |= PRT_MCE;		\
	pr_types[whichtype]++;			\
}
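
/*
 * Fault-injection helper: MTBF(v, f) increments the counter v and
 * evaluates false once every (f + 1) calls (whenever the low bits of v
 * are all set), so f should be one less than a power of two; e.g.
 * f == 0x1f makes one call in every 32 fail.
 */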
#define	MTBF(v, f)	(((++(v)) & (f)) != (f))
#else	/* DEBUG */

#define	PR_DEBUG(foo)	/* nothing */
#define	PR_TYPES(foo)	/* nothing */
#define	MTBF(v, f)	(1)

#endif	/* DEBUG */
/*
 * page_retire_done() - completion processing
 *
 * Used by the page_retire code for common completion processing.
 * It keeps track of how many times a given result has happened,
 * and writes out an occasional message.
 *
 * May be called with a NULL pp (PRD_INVALID_PA case).
 */
#define	PRD_INVALID_KEY		-1
#define	PRD_SUCCESS		0
#define	PRD_PENDING		1
#define	PRD_FAILED		2
#define	PRD_DUPLICATE		3
#define	PRD_INVALID_PA		4
#define	PRD_LIMIT		5
#define	PRD_UE_SCRUBBED		6
#define	PRD_UNR_SUCCESS		7
#define	PRD_UNR_CANTLOCK	8
#define	PRD_UNR_NOT		9
typedef struct page_retire_op {
	int	pr_key;		/* one of the PRD_* defines from above */
	int	pr_count;	/* How many times this has happened */
	int	pr_retval;	/* return value */
	int	pr_msglvl;	/* message level - when to print */
	char	*pr_message;	/* Cryptic message for field service */
} page_retire_op_t;
static page_retire_op_t page_retire_ops[] = {
	/* key			count	retval	msglvl	message */
	{PRD_SUCCESS,		0,	0,	1,
		"Page 0x%08x.%08x removed from service"},
	{PRD_PENDING,		0,	EAGAIN,	2,
		"Page 0x%08x.%08x will be retired on free"},
	{PRD_FAILED,		0,	EAGAIN,	0, NULL},
	{PRD_DUPLICATE,		0,	EIO,	2,
		"Page 0x%08x.%08x already retired or pending"},
	{PRD_INVALID_PA,	0,	EINVAL,	2,
		"PA 0x%08x.%08x is not a relocatable page"},
	{PRD_LIMIT,		0,	0,	1,
		"Page 0x%08x.%08x not retired due to limit exceeded"},
	{PRD_UE_SCRUBBED,	0,	0,	1,
		"Previously reported error on page 0x%08x.%08x cleared"},
	{PRD_UNR_SUCCESS,	0,	0,	1,
		"Page 0x%08x.%08x returned to service"},
	{PRD_UNR_CANTLOCK,	0,	EAGAIN,	2,
		"Page 0x%08x.%08x could not be unretired"},
	{PRD_UNR_NOT,		0,	EIO,	2,
		"Page 0x%08x.%08x is not retired"},
	{PRD_INVALID_KEY,	0,	0,	0, NULL} /* MUST BE LAST! */
};
/*
 * print a message if page_retire_messages is true.
 */
#define	PR_MESSAGE(debuglvl, msglvl, msg, pa)				\
{									\
	uint64_t p = (uint64_t)pa;					\
	if (page_retire_messages >= msglvl && msg != NULL) {		\
		cmn_err(debuglvl, msg,					\
		    (uint32_t)(p >> 32), (uint32_t)p);			\
	}								\
}
/*
 * Note that multiple bits may be set in a single settoxic operation.
 * May be called without the page locked.
 */
void
page_settoxic(page_t *pp, uchar_t bits)
{
	atomic_or_8(&pp->p_toxic, bits);
}
/*
 * Note that multiple bits may be cleared in a single clrtoxic operation.
 * Must be called with the page exclusively locked to prevent races which
 * may attempt to retire a page without any toxic bits set.
 * Note that the PR_CAPTURE bit can be cleared without the exclusive lock
 * being held as there is a separate mutex which protects that bit.
 */
void
page_clrtoxic(page_t *pp, uchar_t bits)
{
	ASSERT((bits & PR_CAPTURE) || PAGE_EXCL(pp));
	atomic_and_8(&pp->p_toxic, ~bits);
}
/*
 * Prints any page retire messages to the user, and decides what
 * error code is appropriate for the condition reported.
 */
static int
page_retire_done(page_t *pp, int code)
{
	page_retire_op_t *prop;
	uint64_t	pa = 0;
	int		i;

	if (pp != NULL) {
		pa = mmu_ptob((uint64_t)pp->p_pagenum);
	}

	prop = NULL;
	for (i = 0; page_retire_ops[i].pr_key != PRD_INVALID_KEY; i++) {
		if (page_retire_ops[i].pr_key == code) {
			prop = &page_retire_ops[i];
			break;
		}
	}

	if (page_retire_ops[i].pr_key == PRD_INVALID_KEY) {
		cmn_err(CE_PANIC, "page_retire_done: Invalid opcode %d", code);
	}

	ASSERT(prop->pr_key == code);

	prop->pr_count++;

	PR_MESSAGE(CE_NOTE, prop->pr_msglvl, prop->pr_message, pa);
	if (pp != NULL) {
		page_settoxic(pp, PR_MSG);
	}

	return (prop->pr_retval);
}
/*
 * Act like page_destroy(), but instead of freeing the page, hash it onto
 * the retired_pages vnode, and mark it retired.
 *
 * For fun, we try to scrub the page until it's squeaky clean.
 * availrmem is adjusted here.
 */
static void
page_retire_destroy(page_t *pp)
{
	uoff_t off = (uoff_t)((uintptr_t)pp);

	ASSERT(PAGE_EXCL(pp));
	ASSERT(!PP_ISFREE(pp));
	ASSERT(pp->p_szc == 0);
	ASSERT(!hat_page_is_mapped(pp));
	ASSERT(!pp->p_vnode);

	page_clr_all_props(pp);
	pagescrub(pp, 0, MMU_PAGESIZE);

	if (page_hashin(pp, &retired_pages->v_object, off, false) == 0) {
		cmn_err(CE_PANIC, "retired page %p hashin failed", (void *)pp);
	}

	page_settoxic(pp, PR_RETIRED);
	PR_INCR_KSTAT(pr_retired);

	if (pp->p_toxic & PR_FMA) {
		PR_INCR_KSTAT(pr_fma);
	} else if (pp->p_toxic & PR_UE) {
		PR_INCR_KSTAT(pr_ue);
	} else {
		PR_INCR_KSTAT(pr_mce);
	}

	mutex_enter(&freemem_lock);
	availrmem--;
	mutex_exit(&freemem_lock);

	page_unlock(pp);
}
/*
 * Check whether the number of pages which have been retired already exceeds
 * the maximum allowable percentage of memory which may be retired.
 *
 * Returns 1 if the limit has been exceeded.
 */
static int
page_retire_limit(void)
{
	if (PR_KSTAT_RETIRED_NOTUE >= (uint64_t)PAGE_RETIRE_LIMIT) {
		PR_INCR_KSTAT(pr_limit_exceeded);
		return (1);
	}

	return (0);
}
#define	MSG_DM	"Data Mismatch occurred at PA 0x%08x.%08x"		\
	"[ 0x%x != 0x%x ] while attempting to clear previously "	\
	"reported error; page removed from service"

#define	MSG_UE	"Uncorrectable Error occurred at PA 0x%08x.%08x while "	\
	"attempting to clear previously reported error; page removed "	\
	"from service"
/*
 * Attempt to clear a UE from a page.
 * Returns 1 if the error has been successfully cleared.
 */
static int
page_clear_transient_ue(page_t *pp)
{
	caddr_t		kaddr;
	uint8_t		rb, wb;
	uint64_t	pa;
	uint32_t	pa_hi, pa_lo;
	on_trap_data_t	otd;
	int		errors = 0;
	int		i;

	ASSERT(PAGE_EXCL(pp));
	ASSERT(PP_PR_REQ(pp));
	ASSERT(pp->p_szc == 0);
	ASSERT(!hat_page_is_mapped(pp));

	/*
	 * Clear the page and attempt to clear the UE. If we trap
	 * on the next access to the page, we know the UE has recurred.
	 */
	pagescrub(pp, 0, PAGESIZE);

	/*
	 * Map the page and write a bunch of bit patterns to compare
	 * what we wrote with what we read back. This isn't a perfect
	 * test but it should be good enough to catch most of the
	 * recurring UEs. If this fails to catch a recurrent UE, we'll
	 * retire the page the next time we see a UE on the page.
	 */
	kaddr = ppmapin(pp, PROT_READ|PROT_WRITE, (caddr_t)-1);

	pa = ptob((uint64_t)page_pptonum(pp));
	pa_hi = (uint32_t)(pa >> 32);
	pa_lo = (uint32_t)pa;

	/*
	 * Disable preemption to prevent the off chance that
	 * we migrate while in the middle of running through
	 * the bit pattern and run on a different processor
	 * than what we started on.
	 */
	kpreempt_disable();

	/*
	 * Fill the page with each (0x00 - 0xFF] bit pattern, flushing
	 * the cache in between reading and writing. We do this under
	 * on_trap() protection to avoid recursion.
	 */
	if (on_trap(&otd, OT_DATA_EC)) {
		PR_MESSAGE(CE_WARN, 1, MSG_UE, pa);
		errors = 1;
	} else {
		for (wb = 0xff; wb > 0; wb--) {
			for (i = 0; i < PAGESIZE; i++) {
				kaddr[i] = wb;
			}

			sync_data_memory(kaddr, PAGESIZE);

			for (i = 0; i < PAGESIZE; i++) {
				rb = kaddr[i];
				if (rb != wb) {
					/*
					 * We had a mismatch without a trap.
					 * Uh-oh. Something is really wrong
					 * with this system.
					 */
					if (page_retire_messages) {
						cmn_err(CE_WARN, MSG_DM,
						    pa_hi, pa_lo, rb, wb);
					}
					errors = 1;
					goto out;	/* double break */
				}
			}
		}
	}
out:
	no_trap();
	kpreempt_enable();
	ppmapout(kaddr);

	return (errors ? 0 : 1);
}
/*
 * Try to clear a page_t with a single UE. If the UE was transient, it is
 * returned to service, and we return 1. Otherwise we return 0 meaning
 * that further processing is required to retire the page.
 */
static int
page_retire_transient_ue(page_t *pp)
{
	ASSERT(PAGE_EXCL(pp));
	ASSERT(!hat_page_is_mapped(pp));

	/*
	 * If this page is a repeat offender, retire it under the
	 * "two strikes and you're out" rule. The caller is responsible
	 * for scrubbing the page to try to clear the error.
	 */
	if (pp->p_toxic & PR_UE_SCRUBBED) {
		PR_INCR_KSTAT(pr_ue_persistent);
		return (0);
	}

	if (page_clear_transient_ue(pp)) {
		/*
		 * We set the PR_UE_SCRUBBED bit; if we ever see this
		 * page again, we will retire it, no questions asked.
		 */
		page_settoxic(pp, PR_UE_SCRUBBED);

		if (page_retire_first_ue) {
			PR_INCR_KSTAT(pr_ue_cleared_retire);
			return (0);
		}

		PR_INCR_KSTAT(pr_ue_cleared_free);

		page_clrtoxic(pp, PR_UE | PR_MCE | PR_MSG);

		VN_DISPOSE(pp, B_FREE, 1, kcred);
		return (1);
	}

	PR_INCR_KSTAT(pr_ue_persistent);
	return (0);
}
/*
 * Update the statistics dynamically when our kstat is read.
 */
static int
page_retire_kstat_update(kstat_t *ksp, int rw)
{
	struct page_retire_kstat *pr;

	if (ksp == NULL)
		return (EINVAL);

	switch (rw) {
	case KSTAT_READ:
		pr = (struct page_retire_kstat *)ksp->ks_data;
		ASSERT(pr == &page_retire_kstat);
		pr->pr_limit.value.ui64 = PAGE_RETIRE_LIMIT;
		return (0);
	case KSTAT_WRITE:
		return (EACCES);
	default:
		return (EINVAL);
	}
}
static int
pr_list_kstat_update(kstat_t *ksp, int rw)
{
	struct vmobject *obj = &retired_pages->v_object;
	page_t *pp;
	uint_t count;

	if (rw == KSTAT_WRITE)
		return (EACCES);

	vmobject_lock(obj);
	/* Needs to be under a lock so that for loop will work right */
	if (!vn_has_cached_data(retired_pages)) {
		vmobject_unlock(obj);
		ksp->ks_ndata = 0;
		ksp->ks_data_size = 0;
		return (0);
	}

	count = 1;
	for (pp = vmobject_get_next(obj, vmobject_get_head(obj));
	    pp != NULL; pp = vmobject_get_next(obj, pp)) {
		count++;
	}
	vmobject_unlock(obj);

	ksp->ks_ndata = count;
	ksp->ks_data_size = count * 2 * sizeof (uint64_t);

	return (0);
}
/*
 * all spans will be pagesize and no coalescing will be done with the
 * list produced.
 */
static int
pr_list_kstat_snapshot(kstat_t *ksp, void *buf, int rw)
{
	struct vmobject *obj = &retired_pages->v_object;
	page_t *pp;
	struct memunit {
		uint64_t address;
		uint64_t size;
	} *kspmem;

	if (rw == KSTAT_WRITE)
		return (EACCES);

	ksp->ks_snaptime = gethrtime();

	kspmem = (struct memunit *)buf;

	vmobject_lock(obj);
	pp = vmobject_get_head(obj);
	if (((caddr_t)kspmem >= (caddr_t)buf + ksp->ks_data_size) ||
	    (pp == NULL)) {
		vmobject_unlock(obj);
		return (0);
	}
	kspmem->address = ptob(pp->p_pagenum);
	kspmem->size = PAGESIZE;
	kspmem++;
	for (pp = vmobject_get_next(obj, pp); pp != NULL;
	    pp = vmobject_get_next(obj, pp), kspmem++) {
		if ((caddr_t)kspmem >= (caddr_t)buf + ksp->ks_data_size)
			break;
		kspmem->address = ptob(pp->p_pagenum);
		kspmem->size = PAGESIZE;
	}
	vmobject_unlock(obj);

	return (0);
}
/*
 * page_retire_pend_count -- helper function for page_capture_thread,
 * returns the number of pages pending retirement.
 */
uint64_t
page_retire_pend_count(void)
{
	return (PR_KSTAT_PENDING);
}

uint64_t
page_retire_pend_kas_count(void)
{
	return (PR_KSTAT_PENDING_KAS);
}

void
page_retire_incr_pend_count(void *datap)
{
	PR_INCR_KSTAT(pr_pending);

	if ((datap == &kvp) || (datap == &zvp)) {
		PR_INCR_KSTAT(pr_pending_kas);
	}
}

void
page_retire_decr_pend_count(void *datap)
{
	PR_DECR_KSTAT(pr_pending);

	if ((datap == &kvp) || (datap == &zvp)) {
		PR_DECR_KSTAT(pr_pending_kas);
	}
}
/*
 * Initialize the page retire mechanism:
 *
 *   - Establish the correctable error retire limit.
 *   - Initialize locks.
 *   - Build the retired_pages vnode.
 *   - Set up the kstats.
 *   - Fire off the background thread.
 *   - Tell page_retire() it's OK to start retiring pages.
 */
void
page_retire_init(void)
{
	kstat_t *ksp;
	const uint_t page_retire_ndata =
	    sizeof (page_retire_kstat) / sizeof (kstat_named_t);

	ASSERT(page_retire_ksp == NULL);

	if (max_pages_retired_bps <= 0) {
		max_pages_retired_bps = MCE_BPT;
	}

	mutex_init(&pr_q_mutex, NULL, MUTEX_DEFAULT, NULL);

	retired_pages = vn_alloc(KM_SLEEP);
	vn_setops(retired_pages, &retired_vnodeops);

	if ((page_retire_ksp = kstat_create("unix", 0, "page_retire",
	    "misc", KSTAT_TYPE_NAMED, page_retire_ndata,
	    KSTAT_FLAG_VIRTUAL)) == NULL) {
		cmn_err(CE_WARN, "kstat_create for page_retire failed");
	} else {
		page_retire_ksp->ks_data = (void *)&page_retire_kstat;
		page_retire_ksp->ks_update = page_retire_kstat_update;
		kstat_install(page_retire_ksp);
	}

	mutex_init(&pr_list_kstat_mutex, NULL, MUTEX_DEFAULT, NULL);
	ksp = kstat_create("unix", 0, "page_retire_list", "misc",
	    KSTAT_TYPE_RAW, 0, KSTAT_FLAG_VAR_SIZE | KSTAT_FLAG_VIRTUAL);
	if (ksp != NULL) {
		ksp->ks_update = pr_list_kstat_update;
		ksp->ks_snapshot = pr_list_kstat_snapshot;
		ksp->ks_lock = &pr_list_kstat_mutex;
		kstat_install(ksp);
	}

	memscrub_notify_func =
	    (void(*)(uint64_t))kobj_getsymvalue("memscrub_notify", 0);

	page_capture_register_callback(PC_RETIRE, -1, page_retire_pp_finish);
	pr_enable = 1;
}
/*
 * page_retire_hunt() callback for the retire thread.
 */
static void
page_retire_thread_cb(page_t *pp)
{
	if (!PP_ISKAS(pp) && page_trylock(pp, SE_EXCL)) {
		PR_DEBUG(prd_tclocked);
		page_unlock(pp);
	}
}
/*
 * Callback used by page_trycapture() to finish off retiring a page.
 * The page has already been cleaned and we've been given sole access to
 * it.
 *
 * Always returns 0 to indicate that the callback succeeded, as the
 * callback never fails to finish retiring the given page.
 */
/*ARGSUSED*/
static int
page_retire_pp_finish(page_t *pp, void *notused, uint_t flags)
{
	uchar_t toxic;

	ASSERT(PAGE_EXCL(pp));
	ASSERT(pp->p_iolock_state == 0);
	ASSERT(pp->p_szc == 0);

	toxic = pp->p_toxic;

	/*
	 * The problem page is locked, demoted, unmapped, not free,
	 * hashed out, and not COW or mlocked (whew!).
	 *
	 * Now we select our ammunition, take it around back, and shoot it.
	 */
	if (toxic & PR_UE) {
ue_error:
		if (page_retire_transient_ue(pp)) {
			PR_DEBUG(prd_uescrubbed);
			(void) page_retire_done(pp, PRD_UE_SCRUBBED);
		} else {
			PR_DEBUG(prd_uenotscrubbed);
			page_retire_destroy(pp);
			(void) page_retire_done(pp, PRD_SUCCESS);
		}
		return (0);
	} else if (toxic & PR_FMA) {
		page_retire_destroy(pp);
		(void) page_retire_done(pp, PRD_SUCCESS);
		return (0);
	} else if (toxic & PR_MCE) {
		page_retire_destroy(pp);
		(void) page_retire_done(pp, PRD_SUCCESS);
		return (0);
	}

	/*
	 * When page_retire_first_ue is set to zero and a UE occurs which is
	 * transient, it's possible that we clear some flags set by a second
	 * UE error on the page which occurs while the first is currently being
	 * handled and thus we need to handle the case where none of the above
	 * are set. In this instance, PR_UE_SCRUBBED should be set and thus
	 * we should execute the UE code above.
	 */
	if (toxic & PR_UE_SCRUBBED) {
		goto ue_error;
	}

	/*
	 * It's impossible to get here.
	 */
	panic("bad toxic flags 0x%x in page_retire_pp_finish\n", toxic);
	return (0);
}
/*
 * page_retire() - the front door in to retire a page.
 *
 * Ideally, page_retire() would instantly retire the requested page.
 * Unfortunately, some pages are locked or otherwise tied up and cannot be
 * retired right away. We use the page capture logic to deal with this
 * situation as it will continuously try to retire the page in the background
 * if the first attempt fails. Success is determined by looking to see whether
 * the page has been retired after the page_trycapture() attempt.
 *
 * Returns:
 *
 *   - 0 on success,
 *   - EINVAL when the PA is whacko,
 *   - EIO if the page is already retired or already pending retirement, or
 *   - EAGAIN if the page could not be _immediately_ retired but is pending.
 */
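/*
 * Example (an illustrative sketch of a caller, not taken from any
 * actual caller in the source):
 *
 *	switch (page_retire(pa, PR_FMA)) {
 *	case 0:		the page was retired immediately
 *	case EAGAIN:	retire is pending; it completes at unlock
 *	case EIO:	already retired or retirement already pending
 *	case EINVAL:	pa does not name a relocatable page
 *	}
 */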
int
page_retire(uint64_t pa, uchar_t reason)
{
	page_t	*pp;

	ASSERT(reason & PR_REASONS);		/* there must be a reason */
	ASSERT(!(reason & ~PR_REASONS));	/* but no other bits */

	pp = page_numtopp_nolock(mmu_btop(pa));
	if (pp == NULL) {
		PR_MESSAGE(CE_WARN, 1, "Cannot schedule clearing of error on"
		    " page 0x%08x.%08x; page is not relocatable memory", pa);
		return (page_retire_done(pp, PRD_INVALID_PA));
	}
	if (PP_RETIRED(pp)) {
		return (page_retire_done(pp, PRD_DUPLICATE));
	}

	if (memscrub_notify_func != NULL) {
		(void) memscrub_notify_func(pa);
	}

	if ((reason & PR_UE) && !PP_TOXIC(pp)) {
		PR_MESSAGE(CE_NOTE, 1, "Scheduling clearing of error on"
		    " page 0x%08x.%08x", pa);
	} else if (PP_PR_REQ(pp)) {
		return (page_retire_done(pp, PRD_DUPLICATE));
	} else {
		PR_MESSAGE(CE_NOTE, 1, "Scheduling removal of"
		    " page 0x%08x.%08x", pa);
	}

	/* Avoid setting toxic bits in the first place */
	if ((reason & (PR_FMA | PR_MCE)) && !(reason & PR_UE) &&
	    page_retire_limit()) {
		return (page_retire_done(pp, PRD_LIMIT));
	}

	if (MTBF(pr_calls, pr_mtbf)) {
		page_settoxic(pp, reason);
		if (page_trycapture(pp, 0, CAPTURE_RETIRE, pp->p_vnode) == 0) {
			PR_DEBUG(prd_prlocked);
		} else {
			PR_DEBUG(prd_prnotlocked);
		}
	} else {
		PR_DEBUG(prd_prnotlocked);
	}

	if (PP_RETIRED(pp)) {
		PR_DEBUG(prd_prretired);
		return (0);
	}

	PR_INCR_KSTAT(pr_failed);

	if (pp->p_toxic & PR_MSG) {
		return (page_retire_done(pp, PRD_FAILED));
	}

	return (page_retire_done(pp, PRD_PENDING));
}
/*
 * Take a retired page off the retired-pages vnode and clear the toxic flags.
 * If "free" is nonzero, lock it and put it back on the freelist. If "free"
 * is zero, the caller already holds SE_EXCL lock so we simply unretire it
 * and don't do anything else with it.
 *
 * Any unretire messages are printed from this routine.
 *
 * Returns 0 if page pp was unretired; else an error code.
 *
 * Current flags:
 *	PR_UNR_FREE	- lock the page, clear the toxic flags and free it
 *			  to the freelist.
 *	PR_UNR_TEMP	- lock the page, unretire it, leave the toxic
 *			  bits set as is and return it to the caller.
 *	PR_UNR_CLEAN	- page is SE_EXCL locked, unretire it, clear the
 *			  toxic flags and return it to caller as is.
 */
int
page_unretire_pp(page_t *pp, int flags)
{
	/*
	 * To be retired, a page has to be hashed onto the retired_pages vnode
	 * and have PR_RETIRED set in p_toxic.
	 */
	if (flags == PR_UNR_CLEAN ||
	    page_try_reclaim_lock(pp, SE_EXCL, SE_RETIRED)) {
		ASSERT(PAGE_EXCL(pp));
		PR_DEBUG(prd_ulocked);
		if (!PP_RETIRED(pp)) {
			PR_DEBUG(prd_unotretired);
			page_unlock(pp);
			return (page_retire_done(pp, PRD_UNR_NOT));
		}

		PR_MESSAGE(CE_NOTE, 1, "unretiring retired"
		    " page 0x%08x.%08x", mmu_ptob((uint64_t)pp->p_pagenum));
		if (pp->p_toxic & PR_FMA) {
			PR_DECR_KSTAT(pr_fma);
		} else if (pp->p_toxic & PR_UE) {
			PR_DECR_KSTAT(pr_ue);
		} else {
			PR_DECR_KSTAT(pr_mce);
		}

		if (flags == PR_UNR_TEMP)
			page_clrtoxic(pp, PR_RETIRED);
		else
			page_clrtoxic(pp, PR_TOXICFLAGS);

		if (flags == PR_UNR_FREE) {
			PR_DEBUG(prd_udestroy);
			page_destroy(pp, 0);
		} else {
			PR_DEBUG(prd_uhashout);
			page_hashout(pp, false);
		}

		mutex_enter(&freemem_lock);
		availrmem++;
		mutex_exit(&freemem_lock);

		PR_DEBUG(prd_uunretired);
		PR_DECR_KSTAT(pr_retired);
		PR_INCR_KSTAT(pr_unretired);
		return (page_retire_done(pp, PRD_UNR_SUCCESS));
	}
	PR_DEBUG(prd_unotlocked);
	return (page_retire_done(pp, PRD_UNR_CANTLOCK));
}
/*
 * Return a page to service by moving it from the retired_pages vnode
 * onto the freelist.
 *
 * Called from mmioctl_page_retire() on behalf of the FMA DE.
 *
 * Returns:
 *
 *   - 0 if the page is unretired,
 *   - EAGAIN if the pp can not be locked,
 *   - EINVAL if the PA is whacko, and
 *   - EIO if the pp is not retired.
 */
int
page_unretire(uint64_t pa)
{
	page_t	*pp;

	pp = page_numtopp_nolock(mmu_btop(pa));
	if (pp == NULL) {
		return (page_retire_done(pp, PRD_INVALID_PA));
	}

	return (page_unretire_pp(pp, PR_UNR_FREE));
}
/*
 * Test a page to see if it is retired. If errors is non-NULL, the toxic
 * bits of the page are returned. Returns 0 on success, error code on failure.
 */
int
page_retire_check_pp(page_t *pp, uint64_t *errors)
{
	int rc;

	if (PP_RETIRED(pp)) {
		PR_DEBUG(prd_checkhit);
		rc = 0;
	} else if (PP_PR_REQ(pp)) {
		PR_DEBUG(prd_checkmiss_pend);
		rc = EAGAIN;
	} else {
		PR_DEBUG(prd_checkmiss_noerr);
		rc = EIO;
	}

	/*
	 * We have magically arranged the bit values returned to fmd(8)
	 * to line up with the FMA, MCE, and UE bits of the page_t.
	 */
	if (errors) {
		uint64_t toxic = (uint64_t)(pp->p_toxic & PR_ERRMASK);
		if (toxic & PR_UE_SCRUBBED) {
			toxic &= ~PR_UE_SCRUBBED;
			toxic |= PR_UE;
		}
		*errors = toxic;
	}

	return (rc);
}
/*
 * Test to see if the page_t for a given PA is retired, and return the
 * hardware errors we have seen on the page if requested.
 *
 * Called from mmioctl_page_retire on behalf of the FMA DE.
 *
 * Returns:
 *
 *   - 0 if the page is retired,
 *   - EIO if the page is not retired and has no errors,
 *   - EAGAIN if the page is not retired but is pending; and
 *   - EINVAL if the PA is whacko.
 */
int
page_retire_check(uint64_t pa, uint64_t *errors)
{
	page_t	*pp;

	if (errors) {
		*errors = 0;
	}

	pp = page_numtopp_nolock(mmu_btop(pa));
	if (pp == NULL) {
		return (page_retire_done(pp, PRD_INVALID_PA));
	}

	return (page_retire_check_pp(pp, errors));
}
/*
 * Page retire self-test. For now, it always returns 0.
 */
int
page_retire_test(void)
{
	page_t *first, *pp, *cpp, *cpp2, *lpp;

	/*
	 * Tests the corner case where a large page can't be retired
	 * because one of the constituent pages is locked. We mark
	 * one page to be retired and try to retire it, and mark the
	 * other page to be retired but don't try to retire it, so
	 * that page_unlock() in the failure path will recurse and try
	 * to retire THAT page. This is the worst possible situation
	 * we can get ourselves into.
	 */
	pp = first = page_first();
	do {
		if (pp->p_szc && PP_PAGEROOT(pp) == pp) {
			cpp = pp + 1;
			lpp = PP_ISFREE(pp)? pp : pp + 2;
			cpp2 = pp + 3;
			if (!page_trylock(lpp, pp == lpp? SE_EXCL : SE_SHARED))
				continue;
			if (!page_trylock(cpp, SE_EXCL)) {
				page_unlock(lpp);
				continue;
			}

			(void) page_retire(ptob(cpp->p_pagenum), PR_FMA);

			page_unlock(lpp);
			page_unlock(cpp);
			(void) page_retire(ptob(cpp->p_pagenum), PR_FMA);
			(void) page_retire(ptob(cpp2->p_pagenum), PR_FMA);
		}
	} while ((pp = page_next(pp)) != first);

	return (0);
}