4 * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 * or http://www.opensolaris.org/os/licensing.
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
13 * When distributing Covered Code, include this CDDL HEADER in each
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
16 * fields enclosed by brackets "[]" replaced with your own identifying
17 * information: Portions Copyright [yyyy] [name of copyright owner]
22 * Copyright (c) 1994, 2010, Oracle and/or its affiliates. All rights reserved.
23 * Copyright (c) 2012, 2017 by Delphix. All rights reserved.
24 * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
25 * Copyright 2018, Joyent, Inc.
29 * Kernel memory allocator, as described in the following two papers and a
30 * statement about the consolidator:
32 * Jeff Bonwick,
33 * The Slab Allocator: An Object-Caching Kernel Memory Allocator.
34 * Proceedings of the Summer 1994 Usenix Conference.
35 * Available as /shared/sac/PSARC/1994/028/materials/kmem.pdf.
37 * Jeff Bonwick and Jonathan Adams,
38 * Magazines and vmem: Extending the Slab Allocator to Many CPUs and
39 * Arbitrary Resources.
40 * Proceedings of the 2001 Usenix Conference.
41 * Available as /shared/sac/PSARC/2000/550/materials/vmem.pdf.
43 * kmem Slab Consolidator Big Theory Statement:
45 * 1. Overview
47 * As stated in Bonwick94, slabs provide the following advantages over other
48 * allocation structures in terms of memory fragmentation:
50 * - Internal fragmentation (per-buffer wasted space) is minimal.
51 * - Severe external fragmentation (unused buffers on the free list) is unlikely.
54 * Segregating objects by size eliminates one source of external fragmentation,
55 * and according to Bonwick:
57 * The other reason that slabs reduce external fragmentation is that all
58 * objects in a slab are of the same type, so they have the same lifetime
59 * distribution. The resulting segregation of short-lived and long-lived
60 * objects at slab granularity reduces the likelihood of an entire page being
61 * held hostage due to a single long-lived allocation [Barrett93, Hanson90].
63 * While unlikely, severe external fragmentation remains possible. Clients that
64 * allocate both short- and long-lived objects from the same cache cannot
65 * anticipate the distribution of long-lived objects within the allocator's slab
66 * implementation. Even a small percentage of long-lived objects distributed
67 * randomly across many slabs can lead to a worst case scenario where the client
68 * frees the majority of its objects and the system gets back almost none of the
69 * slabs. Despite the client doing what it reasonably can to help the system
70 * reclaim memory, the allocator cannot shake free enough slabs because of
71 * lonely allocations stubbornly hanging on. Although the allocator is in a
72 * position to diagnose the fragmentation, there is nothing that the allocator
73 * by itself can do about it. It only takes a single allocated object to prevent
74 * an entire slab from being reclaimed, and any object handed out by
75 * kmem_cache_alloc() is by definition in the client's control. Conversely,
76 * although the client is in a position to move a long-lived object, it has no
77 * way of knowing if the object is causing fragmentation, and if so, where to
78 * move it. A solution necessarily requires further cooperation between the
79 * allocator and the client.
81 * 2. Slab Movement Callbacks
83 * The kmem slab consolidator therefore adds a move callback to the
84 * allocator/client interface, improving worst-case external fragmentation in
85 * kmem caches that supply a function to move objects from one memory location
86 * to another. In a situation of low memory kmem attempts to consolidate all of
87 * a cache's slabs at once; otherwise it works slowly to bring external
88 * fragmentation within the 1/8 limit guaranteed for internal fragmentation,
89 * thereby helping to avoid a low memory situation in the future.
91 * The callback has the following signature:
93 * kmem_cbrc_t move(void *old, void *new, size_t size, void *user_arg)
95 * It supplies the kmem client with two addresses: the allocated object that
96 * kmem wants to move and a buffer selected by kmem for the client to use as the
97 * copy destination. The callback is kmem's way of saying "Please get off of
98 * this buffer and use this one instead." kmem knows where it wants to move the
99 * object in order to best reduce fragmentation. All the client needs to know
100 * about the second argument (void *new) is that it is an allocated, constructed
101 * object ready to take the contents of the old object. When the move function
102 * is called, the system is likely to be low on memory, and the new object
103 * spares the client from having to worry about allocating memory for the
104 * requested move. The third argument supplies the size of the object, in case a
105 * single move function handles multiple caches whose objects differ only in
106 * size (such as zio_buf_512, zio_buf_1024, etc). Finally, the same optional
107 * user argument passed to the constructor, destructor, and reclaim functions is
108 * also passed to the move callback.
110 * 2.1 Setting the Move Callback
112 * The client sets the move callback after creating the cache and before
113 * allocating from it:
115 * object_cache = kmem_cache_create(...);
116 * kmem_cache_set_move(object_cache, object_move);
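 *
 * For illustration, a hypothetical object_cache might be created and given its
 * move callback as follows (a sketch only: object_construct, object_destruct,
 * and object_move are client-supplied functions, and the remaining
 * kmem_cache_create() arguments simply take their default values):
 *
 *      object_cache = kmem_cache_create("object_cache", sizeof (object_t),
 *          0, object_construct, object_destruct, NULL, NULL, NULL, 0);
 *      kmem_cache_set_move(object_cache, object_move);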
118 * 2.2 Move Callback Return Values
120 * Only the client knows about its own data and when is a good time to move it.
121 * The client is cooperating with kmem to return unused memory to the system,
122 * and kmem respectfully accepts this help at the client's convenience. When
123 * asked to move an object, the client can respond with any of the following:
125 * typedef enum kmem_cbrc {
126 *         KMEM_CBRC_YES,
127 *         KMEM_CBRC_NO,
128 *         KMEM_CBRC_LATER,
129 *         KMEM_CBRC_DONT_NEED,
130 *         KMEM_CBRC_DONT_KNOW
131 * } kmem_cbrc_t;
133 * The client must not explicitly kmem_cache_free() either of the objects passed
134 * to the callback, since kmem wants to free them directly to the slab layer
135 * (bypassing the per-CPU magazine layer). The response tells kmem which of the
136 * two object addresses it may safely free:
138 * YES: (Did it) The client moved the object, so kmem frees the old one.
139 * NO: (Never) The client refused, so kmem frees the new object (the
140 * unused copy destination). kmem also marks the slab of the old
141 * object so as not to bother the client with further callbacks for
142 * that object as long as the slab remains on the partial slab list.
143 * (The system won't be getting the slab back as long as the
144 * immovable object holds it hostage, so there's no point in moving
145 * any of its objects.)
146 * LATER: The client is using the object and cannot move it now, so kmem
147 * frees the new object (the unused copy destination). kmem still
148 * attempts to move other objects off the slab, since it expects to
149 * succeed in clearing the slab in a later callback. The client
150 * should use LATER instead of NO if the object is likely to become movable in the near future.
152 * DONT_NEED: The client no longer needs the object, so kmem frees the old along
153 * with the new object (the unused copy destination). This response
154 * is the client's opportunity to be a model citizen and give back as much as possible.
156 * DONT_KNOW: The client does not know about the object because
157 * a) the client has just allocated the object and not yet put it
158 * wherever it expects to find known objects
159 * b) the client has removed the object from wherever it expects to
160 * find known objects and is about to free it, or
161 * c) the client has freed the object.
162 * In all these cases (a, b, and c) kmem frees the new object (the
163 * unused copy destination). In the first case, the object is in
164 * use and the correct action is that for LATER; in the latter two
165 * cases, we know that the object is either freed or about to be
166 * freed, in which case it is either already in a magazine or about
167 * to be in one. In these cases, we know that the object will either
168 * be reallocated and reused, or it will end up in a full magazine
169 * that will be reaped (thereby liberating the slab). Because it
170 * is prohibitively expensive to differentiate these cases, and
171 * because the defrag code is executed when we're low on memory
172 * (thereby biasing the system to reclaim full magazines) we treat
173 * all DONT_KNOW cases as LATER and rely on cache reaping to
174 * generally clean up full magazines. While we take the same action
175 * for these cases, we maintain their semantic distinction: if
176 * defragmentation is not occurring, it is useful to know if this
177 * is due to objects in use (LATER) or objects in an unknown state
178 * of transition (DONT_KNOW).
180 * 2.3 Object States
182 * Neither kmem nor the client can be assumed to know the object's whereabouts
183 * at the time of the callback. An object belonging to a kmem cache may be in
184 * any of the following states:
186 * 1. Uninitialized on the slab
187 * 2. Allocated from the slab but not constructed (still uninitialized)
188 * 3. Allocated from the slab, constructed, but not yet ready for business
189 * (not in a valid state for the move callback)
190 * 4. In use (valid and known to the client)
191 * 5. About to be freed (no longer in a valid state for the move callback)
192 * 6. Freed to a magazine (still constructed)
193 * 7. Allocated from a magazine, not yet ready for business (not in a valid
194 * state for the move callback), and about to return to state #4
195 * 8. Deconstructed on a magazine that is about to be freed
196 * 9. Freed to the slab
198 * Since the move callback may be called at any time while the object is in any
199 * of the above states (except state #1), the client needs a safe way to
200 * determine whether or not it knows about the object. Specifically, the client
201 * needs to know whether or not the object is in state #4, the only state in
202 * which a move is valid. If the object is in any other state, the client should
203 * immediately return KMEM_CBRC_DONT_KNOW, since it is unsafe to access any of
204 * the object's fields.
206 * Note that although an object may be in state #4 when kmem initiates the move
207 * request, the object may no longer be in that state by the time kmem actually
208 * calls the move function. Not only does the client free objects
209 * asynchronously, kmem itself puts move requests on a queue where they are
210 * pending until kmem processes them from another context. Also, objects freed
211 * to a magazine appear allocated from the point of view of the slab layer, so
212 * kmem may even initiate requests for objects in a state other than state #4.
214 * 2.3.1 Magazine Layer
216 * An important insight revealed by the states listed above is that the magazine
217 * layer is populated only by kmem_cache_free(). Magazines of constructed
218 * objects are never populated directly from the slab layer (which contains raw,
219 * unconstructed objects). Whenever an allocation request cannot be satisfied
220 * from the magazine layer, the magazines are bypassed and the request is
221 * satisfied from the slab layer (creating a new slab if necessary). kmem calls
222 * the object constructor only when allocating from the slab layer, and only in
223 * response to kmem_cache_alloc() or to prepare the destination buffer passed in
224 * the move callback. kmem does not preconstruct objects in anticipation of
225 * kmem_cache_alloc().
227 * 2.3.2 Object Constructor and Destructor
229 * If the client supplies a destructor, it must be valid to call the destructor
230 * on a newly created object (immediately after the constructor).
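 *
 * For example, a constructor/destructor pair along the following lines
 * satisfies that requirement, because the destructor only undoes what the
 * constructor itself established (a sketch: object_t and its o_lock and
 * o_container members are illustrative, not part of the kmem interface):
 *
 *      static int
 *      object_construct(void *buf, void *arg, int kmflags)
 *      {
 *              object_t *op = buf;
 *
 *              mutex_init(&op->o_lock, NULL, MUTEX_DEFAULT, NULL);
 *              op->o_container = NULL;
 *              return (0);
 *      }
 *
 *      static void
 *      object_destruct(void *buf, void *arg)
 *      {
 *              object_t *op = buf;
 *
 *              mutex_destroy(&op->o_lock);
 *      }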
232 * 2.4 Recognizing Known Objects
234 * There is a simple test to determine safely whether or not the client knows
235 * about a given object in the move callback. It relies on the fact that kmem
236 * guarantees that the object of the move callback has only been touched by the
237 * client itself or else by kmem. kmem does this by ensuring that none of the
238 * cache's slabs are freed to the virtual memory (VM) subsystem while a move
239 * callback is pending. When the last object on a slab is freed, if there is a
240 * pending move, kmem puts the slab on a per-cache dead list and defers freeing
241 * slabs on that list until all pending callbacks are completed. That way,
242 * clients can be certain that the object of a move callback is in one of the
243 * states listed above, making it possible to distinguish known objects (in
244 * state #4) using the two low order bits of any pointer member (with the
245 * exception of 'char *' or 'short *' which may not be 4-byte aligned on some platforms).
248 * The test works as long as the client always transitions objects from state #4
249 * (known, in use) to state #5 (about to be freed, invalid) by setting the low
250 * order bit of the client-designated pointer member. Since kmem only writes
251 * invalid memory patterns, such as 0xbaddcafe to uninitialized memory and
252 * 0xdeadbeef to freed memory, any scribbling on the object done by kmem is
253 * guaranteed to set at least one of the two low order bits. Therefore, given an
254 * object with a back pointer to a 'container_t *o_container', the client can test whether or not it knows the object as follows:
257 *      container_t *container = object->o_container;
258 *      if ((uintptr_t)container & 0x3) {
259 *              return (KMEM_CBRC_DONT_KNOW);
260 *      }
262 * Typically, an object will have a pointer to some structure with a list or
263 * hash where objects from the cache are kept while in use. Assuming that the
264 * client has some way of knowing that the container structure is valid and will
265 * not go away during the move, and assuming that the structure includes a lock
266 * to protect whatever collection is used, then the client would continue as follows:
269 *      // Ensure that the container structure does not go away.
270 *      if (container_hold(container) == 0) {
271 *              return (KMEM_CBRC_DONT_KNOW);
272 *      }
273 *      mutex_enter(&container->c_objects_lock);
274 *      if (container != object->o_container) {
275 *              mutex_exit(&container->c_objects_lock);
276 *              container_rele(container);
277 *              return (KMEM_CBRC_DONT_KNOW);
278 *      }
280 * At this point the client knows that the object cannot be freed as long as
281 * c_objects_lock is held. Note that after acquiring the lock, the client must
282 * recheck the o_container pointer in case the object was removed just before
283 * acquiring the lock.
285 * When the client is about to free an object, it must first remove that object
286 * from the list, hash, or other structure where it is kept. At that time, to
287 * mark the object so it can be distinguished from the remaining, known objects,
288 * the client sets the designated low order bit:
290 * mutex_enter(&container->c_objects_lock);
291 * object->o_container = (void *)((uintptr_t)object->o_container | 0x1);
292 * list_remove(&container->c_objects, object);
293 * mutex_exit(&container->c_objects_lock);
295 * In the common case, the object is freed to the magazine layer, where it may
296 * be reused on a subsequent allocation without the overhead of calling the
297 * constructor. While in the magazine it appears allocated from the point of
298 * view of the slab layer, making it a candidate for the move callback. Most
299 * objects unrecognized by the client in the move callback fall into this
300 * category and are cheaply distinguished from known objects by the test
301 * described earlier. Because searching magazines is prohibitively expensive
302 * for kmem, clients that do not mark freed objects (and therefore return
303 * KMEM_CBRC_DONT_KNOW for large numbers of objects) may find defragmentation less effective.
306 * Invalidating the designated pointer member before freeing the object marks
307 * the object to be avoided in the callback, and conversely, assigning a valid
308 * value to the designated pointer member after allocating the object makes the
309 * object fair game for the callback:
311 * ... allocate object ...
312 * ... set any initial state not set by the constructor ...
314 * mutex_enter(&container->c_objects_lock);
315 *      list_insert_tail(&container->c_objects, object);
316 *      membar_producer();
317 *      object->o_container = container;
318 * mutex_exit(&container->c_objects_lock);
320 * Note that everything else must be valid before setting o_container makes the
321 * object fair game for the move callback. The membar_producer() call ensures
322 * that all the object's state is written to memory before setting the pointer
323 * that transitions the object from state #3 or #7 (allocated, constructed, not
324 * yet in use) to state #4 (in use, valid). That's important because the move
325 * function has to check the validity of the pointer before it can safely
326 * acquire the lock protecting the collection where it expects to find known objects.
329 * This method of distinguishing known objects observes the usual symmetry:
330 * invalidating the designated pointer is the first thing the client does before
331 * freeing the object, and setting the designated pointer is the last thing the
332 * client does after allocating the object. Of course, the client is not
333 * required to use this method. Fundamentally, how the client recognizes known
334 * objects is completely up to the client, but this method is recommended as an
335 * efficient and safe way to take advantage of the guarantees made by kmem. If
336 * the entire object is arbitrary data without any markable bits from a suitable
337 * pointer member, then the client must find some other method, such as
338 * searching a hash table of known objects.
340 * 2.5 Preventing Objects From Moving
342 * Besides a way to distinguish known objects, the other thing that the client
343 * needs is a strategy to ensure that an object will not move while the client
344 * is actively using it. The details of satisfying this requirement tend to be
345 * highly cache-specific. It might seem that the same rules that let a client
346 * remove an object safely should also decide when an object can be moved
347 * safely. However, any object state that makes a removal attempt invalid is
348 * likely to be long-lasting for objects that the client does not expect to
349 * remove. kmem knows nothing about the object state and is equally likely (from
350 * the client's point of view) to request a move for any object in the cache,
351 * whether prepared for removal or not. Even a low percentage of objects stuck
352 * in place by unremovability will defeat the consolidator if the stuck objects
353 * are the same long-lived allocations likely to hold slabs hostage.
354 * Fundamentally, the consolidator is not aimed at common cases. Severe external
355 * fragmentation is a worst case scenario manifested as sparsely allocated
356 * slabs, by definition a low percentage of the cache's objects. When deciding
357 * what makes an object movable, keep in mind the goal of the consolidator: to
358 * bring worst-case external fragmentation within the limits guaranteed for
359 * internal fragmentation. Removability is a poor criterion if it is likely to
360 * exclude more than an insignificant percentage of objects for long periods of time.
363 * A tricky general solution exists, and it has the advantage of letting you
364 * move any object at almost any moment, practically eliminating the likelihood
365 * that an object can hold a slab hostage. However, if there is a cache-specific
366 * way to ensure that an object is not actively in use in the vast majority of
367 * cases, a simpler solution that leverages this cache-specific knowledge is preferable.
370 * 2.5.1 Cache-Specific Solution
372 * As an example of a cache-specific solution, the ZFS znode cache takes
373 * advantage of the fact that the vast majority of znodes are only being
374 * referenced from the DNLC. (A typical case might be a few hundred in active
375 * use and a hundred thousand in the DNLC.) In the move callback, after the ZFS
376 * client has established that it recognizes the znode and can access its fields
377 * safely (using the method described earlier), it then tests whether the znode
378 * is referenced by anything other than the DNLC. If so, it assumes that the
379 * znode may be in active use and is unsafe to move, so it drops its locks and
380 * returns KMEM_CBRC_LATER. The advantage of this strategy is that everywhere
381 * else znodes are used, no change is needed to protect against the possibility
382 * of the znode moving. The disadvantage is that it remains possible for an
383 * application to hold a znode slab hostage with an open file descriptor.
384 * However, this case ought to be rare and the consolidator has a way to deal
385 * with it: If the client responds KMEM_CBRC_LATER repeatedly for the same
386 * object, kmem eventually stops believing it and treats the slab as if the
387 * client had responded KMEM_CBRC_NO. Having marked the hostage slab, kmem can
388 * then focus on getting it off of the partial slab list by allocating rather
389 * than freeing all of its objects. (Either way of getting a slab off the
390 * free list reduces fragmentation.)
392 * 2.5.2 General Solution
394 * The general solution, on the other hand, requires an explicit hold everywhere
395 * the object is used to prevent it from moving. To keep the client locking
396 * strategy as uncomplicated as possible, kmem guarantees the simplifying
397 * assumption that move callbacks are sequential, even across multiple caches.
398 * Internally, a global queue processed by a single thread supports all caches
399 * implementing the callback function. No matter how many caches supply a move
400 * function, the consolidator never moves more than one object at a time, so the
401 * client does not have to worry about tricky lock ordering involving several
402 * related objects from different kmem caches.
404 * The general solution implements the explicit hold as a read-write lock, which
405 * allows multiple readers to access an object from the cache simultaneously
406 * while a single writer is excluded from moving it. A single rwlock for the
407 * entire cache would lock out all threads from using any of the cache's objects
408 * even though only a single object is being moved, so to reduce contention,
409 * the client can fan out the single rwlock into an array of rwlocks hashed by
410 * the object address, making it probable that moving one object will not
411 * prevent other threads from using a different object. The rwlock cannot be a
412 * member of the object itself, because the possibility of the object moving
413 * makes it unsafe to access any of the object's fields until the lock is acquired.
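 *
 * For illustration, the OBJECT_RWLOCK macro used in the examples below might be
 * implemented as a simple hash into a small fixed array of rwlocks (a sketch:
 * the array size and the shift that discards low-order address bits are
 * arbitrary choices, not part of the kmem interface):
 *
 *      #define OBJECT_LOCK_COUNT       64
 *      static krwlock_t object_rwlock[OBJECT_LOCK_COUNT];
 *      #define OBJECT_RWLOCK(op) \
 *              (&object_rwlock[((uintptr_t)(op) >> 8) & (OBJECT_LOCK_COUNT - 1)])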
416 * Assuming a small, fixed number of locks, it's possible that multiple objects
417 * will hash to the same lock. A thread that needs to use multiple objects in
418 * the same function may acquire the same lock multiple times. Since rwlocks are
419 * reentrant for readers, and since there is never more than a single writer at
420 * a time (assuming that the client acquires the lock as a writer only when
421 * moving an object inside the callback), there would seem to be no problem.
422 * However, a client locking multiple objects in the same function must handle
423 * one case of potential deadlock: Assume that thread A needs to prevent both
424 * object 1 and object 2 from moving, and thread B, the callback, meanwhile
425 * tries to move object 3. It's possible, if objects 1, 2, and 3 all hash to the
426 * same lock, that thread A will acquire the lock for object 1 as a reader
427 * before thread B sets the lock's write-wanted bit, preventing thread A from
428 * reacquiring the lock for object 2 as a reader. Unable to make forward
429 * progress, thread A will never release the lock for object 1, resulting in deadlock.
432 * There are two ways of avoiding the deadlock just described. The first is to
433 * use rw_tryenter() rather than rw_enter() in the callback function when
434 * attempting to acquire the lock as a writer. If tryenter discovers that the
435 * same object (or another object hashed to the same lock) is already in use, it
436 * aborts the callback and returns KMEM_CBRC_LATER. The second way is to use
437 * rprwlock_t (declared in kernel/fs/zfs/sys/rprwlock.h) instead of rwlock_t,
438 * since it allows a thread to acquire the lock as a reader in spite of a
439 * waiting writer. This second approach insists on moving the object now, no
440 * matter how many readers the move function must wait for in order to do so,
441 * and could delay the completion of the callback indefinitely (blocking
442 * callbacks to other clients). In practice, a less insistent callback using
443 * rw_tryenter() returns KMEM_CBRC_LATER infrequently enough that there seems
444 * little reason to use anything else.
446 * Avoiding deadlock is not the only problem that an implementation using an
447 * explicit hold needs to solve. Locking the object in the first place (to
448 * prevent it from moving) remains a problem, since the object could move
449 * between the time you obtain a pointer to the object and the time you acquire
450 * the rwlock hashed to that pointer value. Therefore the client needs to
451 * recheck the value of the pointer after acquiring the lock, drop the lock if
452 * the value has changed, and try again. This requires a level of indirection:
453 * something that points to the object rather than the object itself, that the
454 * client can access safely while attempting to acquire the lock. (The object
455 * itself cannot be referenced safely because it can move at any time.)
456 * The following lock-acquisition function takes whatever is safe to reference
457 * (arg), follows its pointer to the object (using function f), and tries as
458 * often as necessary to acquire the hashed lock and verify that the object
459 * still has not moved:
461 *      object_t *
462 *      object_hold(object_f f, void *arg)
463 *      {
464 *              object_t *op;
465 *
466 *              op = f(arg);
467 *              if (op == NULL) {
468 *                      return (NULL);
469 *              }
470 *
471 *              rw_enter(OBJECT_RWLOCK(op), RW_READER);
472 *              while (op != f(arg)) {
473 *                      rw_exit(OBJECT_RWLOCK(op));
474 *                      op = f(arg);
475 *                      if (op == NULL) {
476 *                              break;
477 *                      }
478 *                      rw_enter(OBJECT_RWLOCK(op), RW_READER);
479 *              }
480 *
481 *              return (op);
482 *      }
484 * The OBJECT_RWLOCK macro hashes the object address to obtain the rwlock. The
485 * lock reacquisition loop, while necessary, almost never executes. The function
486 * pointer f (used to obtain the object pointer from arg) has the following type definition:
489 * typedef object_t *(*object_f)(void *arg);
491 * An object_f implementation is likely to be as simple as accessing a structure
492 * member:
493 *
494 *      object_t *
495 *      s_object(void *arg)
496 *      {
497 *              something_t *sp = arg;
498 *              return (sp->s_object);
499 *      }
501 * The flexibility of a function pointer allows the path to the object to be
502 * arbitrarily complex and also supports the notion that depending on where you
503 * are using the object, you may need to get it from someplace different.
505 * The function that releases the explicit hold is simpler because it does not
506 * have to worry about the object moving:
508 *      void
509 *      object_rele(object_t *op)
510 *      {
511 *              rw_exit(OBJECT_RWLOCK(op));
512 *      }
514 * The caller is spared these details so that obtaining and releasing an
515 * explicit hold feels like a simple mutex_enter()/mutex_exit() pair. The caller
516 * of object_hold() only needs to know that the returned object pointer is valid
517 * if not NULL and that the object will not move until released.
519 * Although object_hold() prevents an object from moving, it does not prevent it
520 * from being freed. The caller must take measures before calling object_hold()
521 * (afterwards is too late) to ensure that the held object cannot be freed. The
522 * caller must do so without accessing the unsafe object reference, so any lock
523 * or reference count used to ensure the continued existence of the object must
524 * live outside the object itself.
526 * Obtaining a new object is a special case where an explicit hold is impossible
527 * for the caller. Any function that returns a newly allocated object (either as
528 * a return value, or as an in-out parameter) must return it already held; after
529 * the caller gets it is too late, since the object cannot be safely accessed
530 * without the level of indirection described earlier. The following
531 * object_alloc() example uses the same code shown earlier to transition a new
532 * object into the state of being recognized (by the client) as a known object.
533 * The function must acquire the hold (rw_enter) before that state transition
534 * makes the object movable:
536 *      object_t *
537 *      object_alloc(container_t *container)
538 *      {
539 *              object_t *object = kmem_cache_alloc(object_cache, 0);
540 *              ... set any initial state not set by the constructor ...
541 *              rw_enter(OBJECT_RWLOCK(object), RW_READER);
542 *              mutex_enter(&container->c_objects_lock);
543 *              list_insert_tail(&container->c_objects, object);
544 *              membar_producer();
545 *              object->o_container = container;
546 *              mutex_exit(&container->c_objects_lock);
547 *              return (object);
548 *      }
550 * Functions that implicitly acquire an object hold (any function that calls
551 * object_alloc() to supply an object for the caller) need to be carefully noted
552 * so that the matching object_rele() is not neglected. Otherwise, leaked holds
553 * prevent all objects hashed to the affected rwlocks from ever being moved.
555 * The pointer to a held object can be hashed to the holding rwlock even after
556 * the object has been freed. Although it is possible to release the hold
557 * after freeing the object, you may decide to release the hold implicitly in
558 * whatever function frees the object, so as to release the hold as soon as
559 * possible, and for the sake of symmetry with the function that implicitly
560 * acquires the hold when it allocates the object. Here, object_free() releases
561 * the hold acquired by object_alloc(). Its implicit object_rele() forms a
562 * matching pair with object_hold():
564 *      void
565 *      object_free(object_t *object)
566 *      {
567 *              container_t *container;
568 *
569 *              ASSERT(object_held(object));
570 *              container = object->o_container;
571 *              mutex_enter(&container->c_objects_lock);
572 *              object->o_container =
573 *                  (void *)((uintptr_t)object->o_container | 0x1);
574 *              list_remove(&container->c_objects, object);
575 *              mutex_exit(&container->c_objects_lock);
576 *              object_rele(object);
577 *              kmem_cache_free(object_cache, object);
578 *      }
580 * Note that object_free() cannot safely accept an object pointer as an argument
581 * unless the object is already held. Any function that calls object_free()
582 * needs to be carefully noted since it similarly forms a matching pair with object_hold().
585 * To complete the picture, the following callback function implements the
586 * general solution by moving objects only if they are currently unheld:
588 *      static kmem_cbrc_t
589 *      object_move(void *buf, void *newbuf, size_t size, void *arg)
590 *      {
591 *              object_t *op = buf, *np = newbuf;
592 *              container_t *container;
593 *
594 *              container = op->o_container;
595 *              if ((uintptr_t)container & 0x3) {
596 *                      return (KMEM_CBRC_DONT_KNOW);
597 *              }
598 *
599 *              // Ensure that the container structure does not go away.
600 *              if (container_hold(container) == 0) {
601 *                      return (KMEM_CBRC_DONT_KNOW);
602 *              }
603 *
604 *              mutex_enter(&container->c_objects_lock);
605 *              if (container != op->o_container) {
606 *                      mutex_exit(&container->c_objects_lock);
607 *                      container_rele(container);
608 *                      return (KMEM_CBRC_DONT_KNOW);
609 *              }
610 *
611 *              if (rw_tryenter(OBJECT_RWLOCK(op), RW_WRITER) == 0) {
612 *                      mutex_exit(&container->c_objects_lock);
613 *                      container_rele(container);
614 *                      return (KMEM_CBRC_LATER);
615 *              }
616 *
617 *              object_move_impl(op, np); // critical section
618 *              rw_exit(OBJECT_RWLOCK(op));
619 *
620 *              op->o_container = (void *)((uintptr_t)op->o_container | 0x1);
621 *              list_link_replace(&op->o_link_node, &np->o_link_node);
622 *              mutex_exit(&container->c_objects_lock);
623 *              container_rele(container);
624 *              return (KMEM_CBRC_YES);
625 *      }
627 * Note that object_move() must invalidate the designated o_container pointer of
628 * the old object in the same way that object_free() does, since kmem will free
629 * the object in response to the KMEM_CBRC_YES return value.
631 * The lock order in object_move() differs from object_alloc(), which locks
632 * OBJECT_RWLOCK first and &container->c_objects_lock second, but as long as the
633 * callback uses rw_tryenter() (preventing the deadlock described earlier), it's
634 * not a problem. Holding the lock on the object list in the example above
635 * through the entire callback not only prevents the object from going away, it
636 * also allows you to lock the list elsewhere and know that none of its elements
637 * will move during iteration.
639 * Adding an explicit hold everywhere an object from the cache is used is tricky
640 * and involves much more change to client code than a cache-specific solution
641 * that leverages existing state to decide whether or not an object is
642 * movable. However, this approach has the advantage that no object remains
643 * immovable for any significant length of time, making it extremely unlikely
644 * that long-lived allocations can continue holding slabs hostage; and it works for all objects.
647 * 3. Consolidator Implementation
649 * Once the client supplies a move function that a) recognizes known objects and
650 * b) avoids moving objects that are actively in use, the remaining work is up
651 * to the consolidator to decide which objects to move and when to issue callbacks.
654 * The consolidator relies on the fact that a cache's slabs are ordered by
655 * usage. Each slab has a fixed number of objects. Depending on the slab's
656 * "color" (the offset of the first object from the beginning of the slab;
657 * offsets are staggered to mitigate false sharing of cache lines) it is either
658 * the maximum number of objects per slab determined at cache creation time or
659 * else the number closest to the maximum that fits within the space remaining
660 * after the initial offset. A completely allocated slab may contribute some
661 * internal fragmentation (per-slab overhead) but no external fragmentation, so
662 * it is of no interest to the consolidator. At the other extreme, slabs whose
663 * objects have all been freed to the slab are released to the virtual memory
664 * (VM) subsystem (objects freed to magazines are still allocated as far as the
665 * slab is concerned). External fragmentation exists when there are slabs
666 * somewhere between these extremes. A partial slab has at least one but not all
667 * of its objects allocated. The more partial slabs, and the fewer allocated
668 * objects on each of them, the higher the fragmentation. Hence the
669 * consolidator's overall strategy is to reduce the number of partial slabs by
670 * moving allocated objects from the least allocated slabs to the most allocated slabs.
673 * Partial slabs are kept in an AVL tree ordered by usage. Completely allocated
674 * slabs are kept separately in an unordered list. Since the majority of slabs
675 * tend to be completely allocated (a typical unfragmented cache may have
676 * thousands of complete slabs and only a single partial slab), separating
677 * complete slabs improves the efficiency of partial slab ordering, since the
678 * complete slabs do not affect the depth or balance of the AVL tree. This
679 * ordered sequence of partial slabs acts as a "free list" supplying objects for
680 * allocation requests.
682 * Objects are always allocated from the first partial slab in the free list,
683 * where the allocation is most likely to eliminate a partial slab (by
684 * completely allocating it). Conversely, when a single object from a completely
685 * allocated slab is freed to the slab, that slab is added to the front of the
686 * free list. Since most free list activity involves highly allocated slabs
687 * coming and going at the front of the list, slabs tend naturally toward the
688 * ideal order: highly allocated at the front, sparsely allocated at the back.
689 * Slabs with few allocated objects are likely to become completely free if they
690 * keep a safe distance away from the front of the free list. Slab misorders
691 * interfere with the natural tendency of slabs to become completely free or
692 * completely allocated. For example, a slab with a single allocated object
693 * needs only a single free to escape the cache; its natural desire is
694 * frustrated when it finds itself at the front of the list where a second
695 * allocation happens just before the free could have released it. Another slab
696 * with all but one object allocated might have supplied the buffer instead, so
697 * that both (as opposed to neither) of the slabs would have been taken off the free list.
700 * Although slabs tend naturally toward the ideal order, misorders allowed by a
701 * simple list implementation defeat the consolidator's strategy of merging
702 * least- and most-allocated slabs. Without an AVL tree to guarantee order, kmem
703 * needs another way to fix misorders to optimize its callback strategy. One
704 * approach is to periodically scan a limited number of slabs, advancing a
705 * marker to hold the current scan position, and to move extreme misorders to
706 * the front or back of the free list and to the front or back of the current
707 * scan range. By making consecutive scan ranges overlap by one slab, the least
708 * allocated slab in the current range can be carried along from the end of one
709 * scan to the start of the next.
711 * Maintaining partial slabs in an AVL tree relieves kmem of this additional
712 * task, however. Since most of the cache's activity is in the magazine layer,
713 * and allocations from the slab layer represent only a startup cost, the
714 * overhead of maintaining a balanced tree is not a significant concern compared
715 * to the opportunity of reducing complexity by eliminating the partial slab
716 * scanner just described. The overhead of an AVL tree is minimized by
717 * maintaining only partial slabs in the tree and keeping completely allocated
718 * slabs separately in a list. To avoid increasing the size of the slab
719 * structure the AVL linkage pointers are reused for the slab's list linkage,
720 * since the slab will always be either partial or complete, never stored both
721 * ways at the same time. To further minimize the overhead of the AVL tree the
722 * compare function that orders partial slabs by usage divides the range of
723 * allocated object counts into bins such that counts within the same bin are
724 * considered equal. Binning partial slabs makes it less likely that allocating
725 * or freeing a single object will change the slab's order, requiring a tree
726 * reinsertion (an avl_remove() followed by an avl_add(), both potentially
727 * requiring some rebalancing of the tree). Allocation counts closest to
728 * completely free and completely allocated are left unbinned (finely sorted) to
729 * better support the consolidator's strategy of merging slabs at either extreme.
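 *
 * A binned comparison might look like the following sketch (illustrative only:
 * KMEM_BIN_WIDTH is a made-up constant, and the real compare function also
 * breaks ties, for example by slab address, to give the tree a total order):
 *
 *      static int
 *      slab_usage_cmp(const kmem_slab_t *s0, const kmem_slab_t *s1)
 *      {
 *              // slab_refcnt is the slab's count of allocated objects
 *              size_t bin0 = s0->slab_refcnt / KMEM_BIN_WIDTH;
 *              size_t bin1 = s1->slab_refcnt / KMEM_BIN_WIDTH;
 *
 *              if (bin0 != bin1)
 *                      return (bin0 < bin1 ? -1 : 1);
 *              return (0);     // same bin: equal for ordering purposes
 *      }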
732 * 3.1 Assessing Fragmentation and Selecting Candidate Slabs
734 * The consolidator piggybacks on the kmem maintenance thread and is called on
735 * the same interval as kmem_cache_update(), once per cache every fifteen
736 * seconds. kmem maintains a running count of unallocated objects in the slab
737 * layer (cache_bufslab). The consolidator checks whether that number exceeds
738 * 12.5% (1/8) of the total objects in the cache (cache_buftotal), and whether
739 * there is a significant number of slabs in the cache (arbitrarily a minimum
740 * 101 total slabs). Unused objects that have fallen out of the magazine layer's
741 * working set are included in the assessment, and magazines in the depot are
742 * reaped if those objects would lift cache_bufslab above the fragmentation
743 * threshold. Once the consolidator decides that a cache is fragmented, it looks
744 * for a candidate slab to reclaim, starting at the end of the partial slab free
745 * list and scanning backwards. At first the consolidator is choosy: only a slab
746 * with fewer than 12.5% (1/8) of its objects allocated qualifies (or else a
747 * single allocated object, regardless of percentage). If there is difficulty
748 * finding a candidate slab, kmem raises the allocation threshold incrementally,
749 * up to a maximum 87.5% (7/8), so that eventually the consolidator will reduce
750 * external fragmentation (unused objects on the free list) below 12.5% (1/8),
751 * even in the worst case of every slab in the cache being almost 7/8 allocated.
752 * The threshold can also be lowered incrementally when candidate slabs are easy
753 * to find, and the threshold is reset to the minimum 1/8 as soon as the cache
754 * is no longer fragmented.
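 *
 * In pseudocode, the basic fragmentation test amounts to the following sketch
 * (cache_bufslab and cache_buftotal are the per-cache counts described above;
 * total_slabs is a stand-in for the cache's slab count; kmem_frag_numer,
 * kmem_frag_denom, and kmem_frag_minslabs are the tunables declared later in
 * this file):
 *
 *      if (cache_bufslab > (cache_buftotal * kmem_frag_numer) /
 *          kmem_frag_denom && total_slabs >= kmem_frag_minslabs)
 *              ... the cache is considered fragmented ...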
756 * 3.2 Generating Callbacks
758 * Once an eligible slab is chosen, a callback is generated for every allocated
759 * object on the slab, in the hope that the client will move everything off the
760 * slab and make it reclaimable. Objects selected as move destinations are
761 * chosen from slabs at the front of the free list. Assuming slabs in the ideal
762 * order (most allocated at the front, least allocated at the back) and a
763 * cooperative client, the consolidator will succeed in removing slabs from both
764 * ends of the free list, completely allocating on the one hand and completely
765 * freeing on the other. Objects selected as move destinations are allocated in
766 * the kmem maintenance thread where move requests are enqueued. A separate
767 * callback thread removes pending callbacks from the queue and calls the
768 * client. The separate thread ensures that client code (the move function) does
769 * not interfere with internal kmem maintenance tasks. A map of pending
770 * callbacks keyed by object address (the object to be moved) is checked to
771 * ensure that duplicate callbacks are not generated for the same object.
772 * Allocating the move destination (the object to move to) prevents subsequent
773 * callbacks from selecting the same destination as an earlier pending callback.
775 * Move requests can also be generated by kmem_cache_reap() when the system is
776 * desperate for memory and by kmem_cache_move_notify(), called by the client to
777 * notify kmem that a move refused earlier with KMEM_CBRC_LATER is now possible.
778 * The map of pending callbacks is protected by the same lock that protects the slab layer.
781 * When the system is desperate for memory, kmem does not bother to determine
782 * whether or not the cache exceeds the fragmentation threshold, but tries to
783 * consolidate as many slabs as possible. Normally, the consolidator chews
784 * slowly, one sparsely allocated slab at a time during each maintenance
785 * interval that the cache is fragmented. When desperate, the consolidator
786 * starts at the last partial slab and enqueues callbacks for every allocated
787 * object on every partial slab, working backwards until it reaches the first
788 * partial slab. The first partial slab, meanwhile, advances in pace with the
789 * consolidator as allocations to supply move destinations for the enqueued
790 * callbacks use up the highly allocated slabs at the front of the free list.
791 * Ideally, the overgrown free list collapses like an accordion, starting at
792 * both ends and ending at the center with a single partial slab.
794 * 3.3 Client Responses
796 * When the client returns KMEM_CBRC_NO in response to the move callback, kmem
797 * marks the slab that supplied the stuck object non-reclaimable and moves it to the
798 * front of the free list. The slab remains marked as long as it remains on the
799 * free list, and it appears more allocated to the partial slab compare function
800 * than any unmarked slab, no matter how many of its objects are allocated.
801 * Since even one immovable object ties up the entire slab, the goal is to
802 * completely allocate any slab that cannot be completely freed. kmem does not
803 * bother generating callbacks to move objects from a marked slab unless the
804 * system is desperate.
806 * When the client responds KMEM_CBRC_LATER, kmem increments a count for the
807 * slab. If the client responds LATER too many times, kmem disbelieves and
808 * treats the response as a NO. The count is cleared when the slab is taken off
809 * the partial slab list or when the client moves one of the slab's objects.
811 * 4. Observability
813 * A kmem cache's external fragmentation is best observed with 'mdb -k' using
814 * the ::kmem_slabs dcmd. For a complete description of the command, enter
815 * '::help kmem_slabs' at the mdb prompt.
818 #include <sys/kmem_impl.h>
819 #include <sys/vmem_impl.h>
820 #include <sys/param.h>
821 #include <sys/sysmacros.h>
823 #include <sys/proc.h>
824 #include <sys/tuneable.h>
825 #include <sys/systm.h>
826 #include <sys/cmn_err.h>
827 #include <sys/debug.h>
829 #include <sys/mutex.h>
830 #include <sys/bitmap.h>
831 #include <sys/atomic.h>
832 #include <sys/kobj.h>
833 #include <sys/disp.h>
834 #include <vm/seg_kmem.h>
836 #include <sys/callb.h>
837 #include <sys/taskq.h>
838 #include <sys/modctl.h>
839 #include <sys/reboot.h>
840 #include <sys/id32.h>
841 #include <sys/zone.h>
842 #include <sys/netstack.h>
844 #include <sys/random.h>
847 extern void streams_msg_init(void);
848 extern int segkp_fromheap;
849 extern void segkp_cache_free(void);
850 extern int callout_init_done;
852 struct kmem_cache_kstat {
853 	kstat_named_t kmc_buf_size;
854 	kstat_named_t kmc_align;
855 	kstat_named_t kmc_chunk_size;
856 	kstat_named_t kmc_slab_size;
857 	kstat_named_t kmc_alloc;
858 	kstat_named_t kmc_alloc_fail;
859 	kstat_named_t kmc_free;
860 	kstat_named_t kmc_depot_alloc;
861 	kstat_named_t kmc_depot_free;
862 	kstat_named_t kmc_depot_contention;
863 	kstat_named_t kmc_slab_alloc;
864 	kstat_named_t kmc_slab_free;
865 	kstat_named_t kmc_buf_constructed;
866 	kstat_named_t kmc_buf_avail;
867 	kstat_named_t kmc_buf_inuse;
868 	kstat_named_t kmc_buf_total;
869 	kstat_named_t kmc_buf_max;
870 	kstat_named_t kmc_slab_create;
871 	kstat_named_t kmc_slab_destroy;
872 	kstat_named_t kmc_vmem_source;
873 	kstat_named_t kmc_hash_size;
874 	kstat_named_t kmc_hash_lookup_depth;
875 	kstat_named_t kmc_hash_rescale;
876 	kstat_named_t kmc_full_magazines;
877 	kstat_named_t kmc_empty_magazines;
878 	kstat_named_t kmc_magazine_size;
879 	kstat_named_t kmc_reap; /* number of kmem_cache_reap() calls */
880 	kstat_named_t kmc_defrag; /* attempts to defrag all partial slabs */
881 	kstat_named_t kmc_scan; /* attempts to defrag one partial slab */
882 	kstat_named_t kmc_move_callbacks; /* sum of yes, no, later, dn, dk */
883 	kstat_named_t kmc_move_yes;
884 	kstat_named_t kmc_move_no;
885 	kstat_named_t kmc_move_later;
886 	kstat_named_t kmc_move_dont_need;
887 	kstat_named_t kmc_move_dont_know; /* obj unrecognized by client ... */
888 	kstat_named_t kmc_move_hunt_found; /* ... but found in mag layer */
889 	kstat_named_t kmc_move_slabs_freed; /* slabs freed by consolidator */
890 	kstat_named_t kmc_move_reclaimable; /* buffers, if consolidator ran */
891 } kmem_cache_kstat = {
892 	{ "buf_size",		KSTAT_DATA_UINT64 },
893 	{ "align",		KSTAT_DATA_UINT64 },
894 	{ "chunk_size",		KSTAT_DATA_UINT64 },
895 	{ "slab_size",		KSTAT_DATA_UINT64 },
896 	{ "alloc",		KSTAT_DATA_UINT64 },
897 	{ "alloc_fail",		KSTAT_DATA_UINT64 },
898 	{ "free",		KSTAT_DATA_UINT64 },
899 	{ "depot_alloc",	KSTAT_DATA_UINT64 },
900 	{ "depot_free",		KSTAT_DATA_UINT64 },
901 	{ "depot_contention",	KSTAT_DATA_UINT64 },
902 	{ "slab_alloc",		KSTAT_DATA_UINT64 },
903 	{ "slab_free",		KSTAT_DATA_UINT64 },
904 	{ "buf_constructed",	KSTAT_DATA_UINT64 },
905 	{ "buf_avail",		KSTAT_DATA_UINT64 },
906 	{ "buf_inuse",		KSTAT_DATA_UINT64 },
907 	{ "buf_total",		KSTAT_DATA_UINT64 },
908 	{ "buf_max",		KSTAT_DATA_UINT64 },
909 	{ "slab_create",	KSTAT_DATA_UINT64 },
910 	{ "slab_destroy",	KSTAT_DATA_UINT64 },
911 	{ "vmem_source",	KSTAT_DATA_UINT64 },
912 	{ "hash_size",		KSTAT_DATA_UINT64 },
913 	{ "hash_lookup_depth",	KSTAT_DATA_UINT64 },
914 	{ "hash_rescale",	KSTAT_DATA_UINT64 },
915 	{ "full_magazines",	KSTAT_DATA_UINT64 },
916 	{ "empty_magazines",	KSTAT_DATA_UINT64 },
917 	{ "magazine_size",	KSTAT_DATA_UINT64 },
918 	{ "reap",		KSTAT_DATA_UINT64 },
919 	{ "defrag",		KSTAT_DATA_UINT64 },
920 	{ "scan",		KSTAT_DATA_UINT64 },
921 	{ "move_callbacks",	KSTAT_DATA_UINT64 },
922 	{ "move_yes",		KSTAT_DATA_UINT64 },
923 	{ "move_no",		KSTAT_DATA_UINT64 },
924 	{ "move_later",		KSTAT_DATA_UINT64 },
925 	{ "move_dont_need",	KSTAT_DATA_UINT64 },
926 	{ "move_dont_know",	KSTAT_DATA_UINT64 },
927 	{ "move_hunt_found",	KSTAT_DATA_UINT64 },
928 	{ "move_slabs_freed",	KSTAT_DATA_UINT64 },
929 	{ "move_reclaimable",	KSTAT_DATA_UINT64 },
930 };
932 static kmutex_t kmem_cache_kstat_lock;
935 * The default set of caches to back kmem_alloc().
936 * These sizes should be reevaluated periodically.
938 * We want allocations that are multiples of the coherency granularity
939 * (64 bytes) to be satisfied from a cache which is a multiple of 64
940 * bytes, so that it will be 64-byte aligned. For all multiples of 64,
941 * the next kmem_cache_size greater than or equal to it must be a multiple of 64.
944 * We split the table into two sections: size <= 4k and size > 4k. This
945 * saves a lot of space and cache footprint in our cache tables.
947 static const int kmem_alloc_sizes[] = {
948 	1 * 8,
949 	2 * 8,
950 	3 * 8,
951 	4 * 8, 5 * 8, 6 * 8, 7 * 8,
952 	4 * 16, 5 * 16, 6 * 16, 7 * 16,
953 	4 * 32, 5 * 32, 6 * 32, 7 * 32,
954 	4 * 64, 5 * 64, 6 * 64, 7 * 64,
955 	4 * 128, 5 * 128, 6 * 128, 7 * 128,
956 	P2ALIGN(8192 / 7, 64),
957 	P2ALIGN(8192 / 6, 64),
958 	P2ALIGN(8192 / 5, 64),
959 	P2ALIGN(8192 / 4, 64),
960 	P2ALIGN(8192 / 3, 64),
961 	P2ALIGN(8192 / 2, 64),
962 };
964 static const int kmem_big_alloc_sizes[] = {
967 4 * 8192, 5 * 8192, 6 * 8192, 7 * 8192,
968 8 * 8192, 9 * 8192, 10 * 8192, 11 * 8192,
969 12 * 8192, 13 * 8192, 14 * 8192, 15 * 8192,
973 #define KMEM_MAXBUF 4096
974 #define KMEM_BIG_MAXBUF_32BIT 32768
975 #define KMEM_BIG_MAXBUF 131072
977 #define KMEM_BIG_MULTIPLE 4096 /* big_alloc_sizes must be a multiple */
978 #define KMEM_BIG_SHIFT 12 /* lg(KMEM_BIG_MULTIPLE) */
980 static kmem_cache_t *kmem_alloc_table[KMEM_MAXBUF >> KMEM_ALIGN_SHIFT];
981 static kmem_cache_t *kmem_big_alloc_table[KMEM_BIG_MAXBUF >> KMEM_BIG_SHIFT];
983 #define KMEM_ALLOC_TABLE_MAX (KMEM_MAXBUF >> KMEM_ALIGN_SHIFT)
984 static size_t kmem_big_alloc_table_max = 0; /* # of filled elements */
986 static kmem_magtype_t kmem_magtype[] = {
987 { 1, 8, 3200, 65536 },
988 { 3, 16, 256, 32768 },
989 { 7, 32, 64, 16384 },
998 static uint32_t kmem_reaping;
999 static uint32_t kmem_reaping_idspace;
1004 clock_t kmem_reap_interval; /* cache reaping rate [15 * HZ ticks] */
1005 int kmem_depot_contention = 3; /* max failed tryenters per real interval */
1006 pgcnt_t kmem_reapahead = 0; /* start reaping N pages before pageout */
1007 int kmem_panic = 1; /* whether to panic on error */
1008 int kmem_logging = 1; /* kmem_log_enter() override */
1009 uint32_t kmem_mtbf = 0; /* mean time between failures [default: off] */
1010 size_t kmem_transaction_log_size; /* transaction log size [2% of memory] */
1011 size_t kmem_content_log_size; /* content log size [2% of memory] */
1012 size_t kmem_failure_log_size; /* failure log [4 pages per CPU] */
1013 size_t kmem_slab_log_size; /* slab create log [4 pages per CPU] */
1014 size_t kmem_content_maxsave = 256; /* KMF_CONTENTS max bytes to log */
1015 size_t kmem_lite_minsize = 0; /* minimum buffer size for KMF_LITE */
1016 size_t kmem_lite_maxalign = 1024; /* maximum buffer alignment for KMF_LITE */
1017 int kmem_lite_pcs = 4; /* number of PCs to store in KMF_LITE mode */
1018 size_t kmem_maxverify; /* maximum bytes to inspect in debug routines */
1019 size_t kmem_minfirewall; /* hardware-enforced redzone threshold */
1021 #ifdef _LP64
1022 size_t kmem_max_cached = KMEM_BIG_MAXBUF; /* maximum kmem_alloc cache */
1023 #else
1024 size_t kmem_max_cached = KMEM_BIG_MAXBUF_32BIT; /* maximum kmem_alloc cache */
1025 #endif
1028 int kmem_flags = KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | KMF_CONTENTS;
1034 static kmem_cache_t *kmem_slab_cache;
1035 static kmem_cache_t *kmem_bufctl_cache;
1036 static kmem_cache_t *kmem_bufctl_audit_cache;
1038 static kmutex_t kmem_cache_lock; /* inter-cache linkage only */
1039 static list_t kmem_caches;
1041 static taskq_t *kmem_taskq;
1042 static kmutex_t kmem_flags_lock;
1043 static vmem_t *kmem_metadata_arena;
1044 static vmem_t *kmem_msb_arena; /* arena for metadata caches */
1045 static vmem_t *kmem_cache_arena;
1046 static vmem_t *kmem_hash_arena;
1047 static vmem_t *kmem_log_arena;
1048 static vmem_t *kmem_oversize_arena;
1049 static vmem_t *kmem_va_arena;
1050 static vmem_t *kmem_default_arena;
1051 static vmem_t *kmem_firewall_va_arena;
1052 static vmem_t *kmem_firewall_arena;
1055 * kmem slab consolidator thresholds (tunables)
1057 size_t kmem_frag_minslabs = 101; /* minimum total slabs */
1058 size_t kmem_frag_numer = 1; /* free buffers (numerator) */
1059 size_t kmem_frag_denom = KMEM_VOID_FRACTION; /* buffers (denominator) */
1061 * Maximum number of slabs from which to move buffers during a single
1062 * maintenance interval while the system is not low on memory.
1064 size_t kmem_reclaim_max_slabs = 1;
1066 * Number of slabs to scan backwards from the end of the partial slab list
1067 * when searching for buffers to relocate.
1069 size_t kmem_reclaim_scan_range = 12;
1071 /* consolidator knobs */
1072 boolean_t kmem_move_noreap;
1073 boolean_t kmem_move_blocked;
1074 boolean_t kmem_move_fulltilt;
1075 boolean_t kmem_move_any_partial;
1079 * kmem consolidator debug tunables:
1080 * Ensure code coverage by occasionally running the consolidator even when the
1081 * caches are not fragmented (they may never be). These intervals are mean time
1082 * in cache maintenance intervals (kmem_cache_update).
1084 uint32_t kmem_mtb_move = 60; /* defrag 1 slab (~15min) */
1085 uint32_t kmem_mtb_reap = 1800; /* defrag all slabs (~7.5hrs) */
1088 static kmem_cache_t *kmem_defrag_cache;
1089 static kmem_cache_t *kmem_move_cache;
1090 static taskq_t *kmem_move_taskq;
1092 static void kmem_cache_scan(kmem_cache_t *);
1093 static void kmem_cache_defrag(kmem_cache_t *);
1094 static void kmem_slab_prefill(kmem_cache_t *, kmem_slab_t *);
1097 kmem_log_header_t *kmem_transaction_log;
1098 kmem_log_header_t *kmem_content_log;
1099 kmem_log_header_t *kmem_failure_log;
1100 kmem_log_header_t *kmem_slab_log;
1102 static int kmem_lite_count; /* # of PCs in kmem_buftag_lite_t */
1104 #define KMEM_BUFTAG_LITE_ENTER(bt, count, caller) \
1105 	if ((count) > 0) { \
1106 		pc_t *_s = ((kmem_buftag_lite_t *)(bt))->bt_history; \
1107 		pc_t *_e; \
1108 		/* memmove() the old entries down one notch */ \
1109 		for (_e = &_s[(count) - 1]; _e > _s; _e--) \
1110 			*_e = *(_e - 1); \
1111 		*_s = (uintptr_t)(caller); \
1112 	}
1114 #define KMERR_MODIFIED 0 /* buffer modified while on freelist */
1115 #define KMERR_REDZONE 1 /* redzone violation (write past end of buf) */
1116 #define KMERR_DUPFREE 2 /* freed a buffer twice */
1117 #define KMERR_BADADDR 3 /* freed a bad (unallocated) address */
1118 #define KMERR_BADBUFTAG 4 /* buftag corrupted */
1119 #define KMERR_BADBUFCTL 5 /* bufctl corrupted */
1120 #define KMERR_BADCACHE 6 /* freed a buffer to the wrong cache */
1121 #define KMERR_BADSIZE 7 /* alloc size != free size */
1122 #define KMERR_BADBASE 8 /* buffer base address wrong */
1124 static struct {
1125 	hrtime_t kmp_timestamp;		/* timestamp of panic */
1126 	int kmp_error;			/* type of kmem error */
1127 	void *kmp_buffer;		/* buffer that induced panic */
1128 	void *kmp_realbuf;		/* real start address for buffer */
1129 	kmem_cache_t *kmp_cache;	/* buffer's cache according to client */
1130 	kmem_cache_t *kmp_realcache;	/* actual cache containing buffer */
1131 	kmem_slab_t *kmp_slab;		/* slab according to kmem_findslab() */
1132 	kmem_bufctl_t *kmp_bufctl;	/* bufctl */
1133 } kmem_panic_info;
1136 static void
1137 copy_pattern(uint64_t pattern, void *buf_arg, size_t size)
1138 {
1139 	uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1140 	uint64_t *buf = buf_arg;
1142 	while (buf < bufend)
1143 		*buf++ = pattern;
1144 }
1146 static void *
1147 verify_pattern(uint64_t pattern, void *buf_arg, size_t size)
1148 {
1149 	uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1150 	uint64_t *buf;
1152 	for (buf = buf_arg; buf < bufend; buf++)
1153 		if (*buf != pattern)
1154 			return (buf);
1155 	return (NULL);
1156 }
1158 static void *
1159 verify_and_copy_pattern(uint64_t old, uint64_t new, void *buf_arg, size_t size)
1160 {
1161 	uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1162 	uint64_t *buf;
1164 	for (buf = buf_arg; buf < bufend; buf++) {
1165 		if (*buf != old) {
1166 			copy_pattern(old, buf_arg,
1167 			    (char *)buf - (char *)buf_arg);
1168 			return (buf);
1169 		}
1170 		*buf = new;
1171 	}
1173 	return (NULL);
1174 }
1176 static void
1177 kmem_cache_applyall(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1178 {
1179 	kmem_cache_t *cp;
1181 	mutex_enter(&kmem_cache_lock);
1182 	for (cp = list_head(&kmem_caches); cp != NULL;
1183 	    cp = list_next(&kmem_caches, cp))
1184 		if (tq != NULL)
1185 			(void) taskq_dispatch(tq, (task_func_t *)func, cp,
1186 			    tqflag);
1187 		else
1188 			func(cp);
1189 	mutex_exit(&kmem_cache_lock);
1190 }
1192 static void
1193 kmem_cache_applyall_id(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1194 {
1195 	kmem_cache_t *cp;
1197 	mutex_enter(&kmem_cache_lock);
1198 	for (cp = list_head(&kmem_caches); cp != NULL;
1199 	    cp = list_next(&kmem_caches, cp)) {
1200 		if (!(cp->cache_cflags & KMC_IDENTIFIER))
1201 			continue;
1202 		if (tq != NULL)
1203 			(void) taskq_dispatch(tq, (task_func_t *)func, cp,
1204 			    tqflag);
1205 		else
1206 			func(cp);
1207 	}
1208 	mutex_exit(&kmem_cache_lock);
1209 }
/*
 * Debugging support.  Given a buffer address, find its slab.
 */
static kmem_slab_t *
kmem_findslab(kmem_cache_t *cp, void *buf)
{
	kmem_slab_t *sp;

	mutex_enter(&cp->cache_lock);
	for (sp = list_head(&cp->cache_complete_slabs); sp != NULL;
	    sp = list_next(&cp->cache_complete_slabs, sp)) {
		if (KMEM_SLAB_MEMBER(sp, buf)) {
			mutex_exit(&cp->cache_lock);
			return (sp);
		}
	}
	for (sp = avl_first(&cp->cache_partial_slabs); sp != NULL;
	    sp = AVL_NEXT(&cp->cache_partial_slabs, sp)) {
		if (KMEM_SLAB_MEMBER(sp, buf)) {
			mutex_exit(&cp->cache_lock);
			return (sp);
		}
	}
	mutex_exit(&cp->cache_lock);

	return (NULL);
}
1240 kmem_error(int error
, kmem_cache_t
*cparg
, void *bufarg
)
1242 kmem_buftag_t
*btp
= NULL
;
1243 kmem_bufctl_t
*bcp
= NULL
;
1244 kmem_cache_t
*cp
= cparg
;
1249 kmem_logging
= 0; /* stop logging when a bad thing happens */
1251 kmem_panic_info
.kmp_timestamp
= gethrtime();
1253 sp
= kmem_findslab(cp
, buf
);
1255 for (cp
= list_tail(&kmem_caches
); cp
!= NULL
;
1256 cp
= list_prev(&kmem_caches
, cp
)) {
1257 if ((sp
= kmem_findslab(cp
, buf
)) != NULL
)
1264 error
= KMERR_BADADDR
;
1267 error
= KMERR_BADCACHE
;
1269 buf
= (char *)bufarg
- ((uintptr_t)bufarg
-
1270 (uintptr_t)sp
->slab_base
) % cp
->cache_chunksize
;
1272 error
= KMERR_BADBASE
;
1273 if (cp
->cache_flags
& KMF_BUFTAG
)
1274 btp
= KMEM_BUFTAG(cp
, buf
);
1275 if (cp
->cache_flags
& KMF_HASH
) {
1276 mutex_enter(&cp
->cache_lock
);
1277 for (bcp
= *KMEM_HASH(cp
, buf
); bcp
; bcp
= bcp
->bc_next
)
1278 if (bcp
->bc_addr
== buf
)
1280 mutex_exit(&cp
->cache_lock
);
1281 if (bcp
== NULL
&& btp
!= NULL
)
1282 bcp
= btp
->bt_bufctl
;
1283 if (kmem_findslab(cp
->cache_bufctl_cache
, bcp
) ==
1284 NULL
|| P2PHASE((uintptr_t)bcp
, KMEM_ALIGN
) ||
1285 bcp
->bc_addr
!= buf
) {
1286 error
= KMERR_BADBUFCTL
;
1292 kmem_panic_info
.kmp_error
= error
;
1293 kmem_panic_info
.kmp_buffer
= bufarg
;
1294 kmem_panic_info
.kmp_realbuf
= buf
;
1295 kmem_panic_info
.kmp_cache
= cparg
;
1296 kmem_panic_info
.kmp_realcache
= cp
;
1297 kmem_panic_info
.kmp_slab
= sp
;
1298 kmem_panic_info
.kmp_bufctl
= bcp
;
1300 printf("kernel memory allocator: ");
1304 case KMERR_MODIFIED
:
1305 printf("buffer modified after being freed\n");
1306 off
= verify_pattern(KMEM_FREE_PATTERN
, buf
, cp
->cache_verify
);
1307 if (off
== NULL
) /* shouldn't happen */
1309 printf("modification occurred at offset 0x%lx "
1310 "(0x%llx replaced by 0x%llx)\n",
1311 (uintptr_t)off
- (uintptr_t)buf
,
1312 (longlong_t
)KMEM_FREE_PATTERN
, (longlong_t
)*off
);
1316 printf("redzone violation: write past end of buffer\n");
1320 printf("invalid free: buffer not in cache\n");
1324 printf("duplicate free: buffer freed twice\n");
1327 case KMERR_BADBUFTAG
:
1328 printf("boundary tag corrupted\n");
1329 printf("bcp ^ bxstat = %lx, should be %lx\n",
1330 (intptr_t)btp
->bt_bufctl
^ btp
->bt_bxstat
,
1334 case KMERR_BADBUFCTL
:
1335 printf("bufctl corrupted\n");
1338 case KMERR_BADCACHE
:
1339 printf("buffer freed to wrong cache\n");
1340 printf("buffer was allocated from %s,\n", cp
->cache_name
);
1341 printf("caller attempting free to %s.\n", cparg
->cache_name
);
1345 printf("bad free: free size (%u) != alloc size (%u)\n",
1346 KMEM_SIZE_DECODE(((uint32_t *)btp
)[0]),
1347 KMEM_SIZE_DECODE(((uint32_t *)btp
)[1]));
1351 printf("bad free: free address (%p) != alloc address (%p)\n",
1356 printf("buffer=%p bufctl=%p cache: %s\n",
1357 bufarg
, (void *)bcp
, cparg
->cache_name
);
1359 if (bcp
!= NULL
&& (cp
->cache_flags
& KMF_AUDIT
) &&
1360 error
!= KMERR_BADBUFCTL
) {
1363 kmem_bufctl_audit_t
*bcap
= (kmem_bufctl_audit_t
*)bcp
;
1365 hrt2ts(kmem_panic_info
.kmp_timestamp
- bcap
->bc_timestamp
, &ts
);
1366 printf("previous transaction on buffer %p:\n", buf
);
1367 printf("thread=%p time=T-%ld.%09ld slab=%p cache: %s\n",
1368 (void *)bcap
->bc_thread
, ts
.tv_sec
, ts
.tv_nsec
,
1369 (void *)sp
, cp
->cache_name
);
1370 for (d
= 0; d
< MIN(bcap
->bc_depth
, KMEM_STACK_DEPTH
); d
++) {
1372 char *sym
= kobj_getsymname(bcap
->bc_stack
[d
], &off
);
1373 printf("%s+%lx\n", sym
? sym
: "?", off
);
1377 panic("kernel heap corruption detected");
1378 if (kmem_panic
== 0)
1380 kmem_logging
= 1; /* resume logging */
1383 static kmem_log_header_t
*
1384 kmem_log_init(size_t logsize
)
1386 kmem_log_header_t
*lhp
;
1387 int nchunks
= 4 * max_ncpus
;
1388 size_t lhsize
= (size_t)&((kmem_log_header_t
*)0)->lh_cpu
[max_ncpus
];
1392 * Make sure that lhp->lh_cpu[] is nicely aligned
1393 * to prevent false sharing of cache lines.
1395 lhsize
= P2ROUNDUP(lhsize
, KMEM_ALIGN
);
1396 lhp
= vmem_xalloc(kmem_log_arena
, lhsize
, 64, P2NPHASE(lhsize
, 64), 0,
1397 NULL
, NULL
, VM_SLEEP
);
1400 mutex_init(&lhp
->lh_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
1401 lhp
->lh_nchunks
= nchunks
;
1402 lhp
->lh_chunksize
= P2ROUNDUP(logsize
/ nchunks
+ 1, PAGESIZE
);
1403 lhp
->lh_base
= vmem_alloc(kmem_log_arena
,
1404 lhp
->lh_chunksize
* nchunks
, VM_SLEEP
);
1405 lhp
->lh_free
= vmem_alloc(kmem_log_arena
,
1406 nchunks
* sizeof (int), VM_SLEEP
);
1407 bzero(lhp
->lh_base
, lhp
->lh_chunksize
* nchunks
);
1409 for (i
= 0; i
< max_ncpus
; i
++) {
1410 kmem_cpu_log_header_t
*clhp
= &lhp
->lh_cpu
[i
];
1411 mutex_init(&clhp
->clh_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
1412 clhp
->clh_chunk
= i
;
1415 for (i
= max_ncpus
; i
< nchunks
; i
++)
1416 lhp
->lh_free
[i
] = i
;
1418 lhp
->lh_head
= max_ncpus
;
1425 kmem_log_enter(kmem_log_header_t
*lhp
, void *data
, size_t size
)
1428 kmem_cpu_log_header_t
*clhp
= &lhp
->lh_cpu
[CPU
->cpu_seqid
];
1430 if (lhp
== NULL
|| kmem_logging
== 0 || panicstr
)
1433 mutex_enter(&clhp
->clh_lock
);
1435 if (size
> clhp
->clh_avail
) {
1436 mutex_enter(&lhp
->lh_lock
);
1438 lhp
->lh_free
[lhp
->lh_tail
] = clhp
->clh_chunk
;
1439 lhp
->lh_tail
= (lhp
->lh_tail
+ 1) % lhp
->lh_nchunks
;
1440 clhp
->clh_chunk
= lhp
->lh_free
[lhp
->lh_head
];
1441 lhp
->lh_head
= (lhp
->lh_head
+ 1) % lhp
->lh_nchunks
;
1442 clhp
->clh_current
= lhp
->lh_base
+
1443 clhp
->clh_chunk
* lhp
->lh_chunksize
;
1444 clhp
->clh_avail
= lhp
->lh_chunksize
;
1445 if (size
> lhp
->lh_chunksize
)
1446 size
= lhp
->lh_chunksize
;
1447 mutex_exit(&lhp
->lh_lock
);
1449 logspace
= clhp
->clh_current
;
1450 clhp
->clh_current
+= size
;
1451 clhp
->clh_avail
-= size
;
1452 bcopy(data
, logspace
, size
);
1453 mutex_exit(&clhp
->clh_lock
);
1457 #define KMEM_AUDIT(lp, cp, bcp) \
1459 kmem_bufctl_audit_t *_bcp = (kmem_bufctl_audit_t *)(bcp); \
1460 _bcp->bc_timestamp = gethrtime(); \
1461 _bcp->bc_thread = curthread; \
1462 _bcp->bc_depth = getpcstack(_bcp->bc_stack, KMEM_STACK_DEPTH); \
1463 _bcp->bc_lastlog = kmem_log_enter((lp), _bcp, sizeof (*_bcp)); \
1467 kmem_log_event(kmem_log_header_t
*lp
, kmem_cache_t
*cp
,
1468 kmem_slab_t
*sp
, void *addr
)
1470 kmem_bufctl_audit_t bca
;
1472 bzero(&bca
, sizeof (kmem_bufctl_audit_t
));
1476 KMEM_AUDIT(lp
, cp
, &bca
);
1480 * Create a new slab for cache cp.
1482 static kmem_slab_t
*
1483 kmem_slab_create(kmem_cache_t
*cp
, int kmflag
)
1485 size_t slabsize
= cp
->cache_slabsize
;
1486 size_t chunksize
= cp
->cache_chunksize
;
1487 int cache_flags
= cp
->cache_flags
;
1488 size_t color
, chunks
;
1492 vmem_t
*vmp
= cp
->cache_arena
;
1494 ASSERT(MUTEX_NOT_HELD(&cp
->cache_lock
));
1496 color
= cp
->cache_color
+ cp
->cache_align
;
1497 if (color
> cp
->cache_maxcolor
)
1498 color
= cp
->cache_mincolor
;
1499 cp
->cache_color
= color
;
1501 slab
= vmem_alloc(vmp
, slabsize
, kmflag
& KM_VMFLAGS
);
1504 goto vmem_alloc_failure
;
1506 ASSERT(P2PHASE((uintptr_t)slab
, vmp
->vm_quantum
) == 0);
1509 * Reverify what was already checked in kmem_cache_set_move(), since the
1510 * consolidator depends (for correctness) on slabs being initialized
1511 * with the 0xbaddcafe memory pattern (setting a low order bit usable by
1512 * clients to distinguish uninitialized memory from known objects).
1514 ASSERT((cp
->cache_move
== NULL
) || !(cp
->cache_cflags
& KMC_NOTOUCH
));
1515 if (!(cp
->cache_cflags
& KMC_NOTOUCH
))
1516 copy_pattern(KMEM_UNINITIALIZED_PATTERN
, slab
, slabsize
);
1518 if (cache_flags
& KMF_HASH
) {
1519 if ((sp
= kmem_cache_alloc(kmem_slab_cache
, kmflag
)) == NULL
)
1520 goto slab_alloc_failure
;
1521 chunks
= (slabsize
- color
) / chunksize
;
1523 sp
= KMEM_SLAB(cp
, slab
);
1524 chunks
= (slabsize
- sizeof (kmem_slab_t
) - color
) / chunksize
;
1527 sp
->slab_cache
= cp
;
1528 sp
->slab_head
= NULL
;
1529 sp
->slab_refcnt
= 0;
1530 sp
->slab_base
= buf
= slab
+ color
;
1531 sp
->slab_chunks
= chunks
;
1532 sp
->slab_stuck_offset
= (uint32_t)-1;
1533 sp
->slab_later_count
= 0;
1537 while (chunks
-- != 0) {
1538 if (cache_flags
& KMF_HASH
) {
1539 bcp
= kmem_cache_alloc(cp
->cache_bufctl_cache
, kmflag
);
1541 goto bufctl_alloc_failure
;
1542 if (cache_flags
& KMF_AUDIT
) {
1543 kmem_bufctl_audit_t
*bcap
=
1544 (kmem_bufctl_audit_t
*)bcp
;
1545 bzero(bcap
, sizeof (kmem_bufctl_audit_t
));
1546 bcap
->bc_cache
= cp
;
1551 bcp
= KMEM_BUFCTL(cp
, buf
);
1553 if (cache_flags
& KMF_BUFTAG
) {
1554 kmem_buftag_t
*btp
= KMEM_BUFTAG(cp
, buf
);
1555 btp
->bt_redzone
= KMEM_REDZONE_PATTERN
;
1556 btp
->bt_bufctl
= bcp
;
1557 btp
->bt_bxstat
= (intptr_t)bcp
^ KMEM_BUFTAG_FREE
;
1558 if (cache_flags
& KMF_DEADBEEF
) {
1559 copy_pattern(KMEM_FREE_PATTERN
, buf
,
1563 bcp
->bc_next
= sp
->slab_head
;
1564 sp
->slab_head
= bcp
;
1568 kmem_log_event(kmem_slab_log
, cp
, sp
, slab
);
1572 bufctl_alloc_failure
:
1574 while ((bcp
= sp
->slab_head
) != NULL
) {
1575 sp
->slab_head
= bcp
->bc_next
;
1576 kmem_cache_free(cp
->cache_bufctl_cache
, bcp
);
1578 kmem_cache_free(kmem_slab_cache
, sp
);
1582 vmem_free(vmp
, slab
, slabsize
);
1586 kmem_log_event(kmem_failure_log
, cp
, NULL
, NULL
);
1587 atomic_inc_64(&cp
->cache_alloc_fail
);
1596 kmem_slab_destroy(kmem_cache_t
*cp
, kmem_slab_t
*sp
)
1598 vmem_t
*vmp
= cp
->cache_arena
;
1599 void *slab
= (void *)P2ALIGN((uintptr_t)sp
->slab_base
, vmp
->vm_quantum
);
1601 ASSERT(MUTEX_NOT_HELD(&cp
->cache_lock
));
1602 ASSERT(sp
->slab_refcnt
== 0);
1604 if (cp
->cache_flags
& KMF_HASH
) {
1606 while ((bcp
= sp
->slab_head
) != NULL
) {
1607 sp
->slab_head
= bcp
->bc_next
;
1608 kmem_cache_free(cp
->cache_bufctl_cache
, bcp
);
1610 kmem_cache_free(kmem_slab_cache
, sp
);
1612 vmem_free(vmp
, slab
, cp
->cache_slabsize
);
1616 kmem_slab_alloc_impl(kmem_cache_t
*cp
, kmem_slab_t
*sp
, boolean_t prefill
)
1618 kmem_bufctl_t
*bcp
, **hash_bucket
;
1620 boolean_t new_slab
= (sp
->slab_refcnt
== 0);
1622 ASSERT(MUTEX_HELD(&cp
->cache_lock
));
1624 * kmem_slab_alloc() drops cache_lock when it creates a new slab, so we
1625 * can't ASSERT(avl_is_empty(&cp->cache_partial_slabs)) here when the
1626 * slab is newly created.
1628 ASSERT(new_slab
|| (KMEM_SLAB_IS_PARTIAL(sp
) &&
1629 (sp
== avl_first(&cp
->cache_partial_slabs
))));
1630 ASSERT(sp
->slab_cache
== cp
);
1632 cp
->cache_slab_alloc
++;
1633 cp
->cache_bufslab
--;
1636 bcp
= sp
->slab_head
;
1637 sp
->slab_head
= bcp
->bc_next
;
1639 if (cp
->cache_flags
& KMF_HASH
) {
1641 * Add buffer to allocated-address hash table.
1644 hash_bucket
= KMEM_HASH(cp
, buf
);
1645 bcp
->bc_next
= *hash_bucket
;
1647 if ((cp
->cache_flags
& (KMF_AUDIT
| KMF_BUFTAG
)) == KMF_AUDIT
) {
1648 KMEM_AUDIT(kmem_transaction_log
, cp
, bcp
);
1651 buf
= KMEM_BUF(cp
, bcp
);
1654 ASSERT(KMEM_SLAB_MEMBER(sp
, buf
));
1656 if (sp
->slab_head
== NULL
) {
1657 ASSERT(KMEM_SLAB_IS_ALL_USED(sp
));
1659 ASSERT(sp
->slab_chunks
== 1);
1661 ASSERT(sp
->slab_chunks
> 1); /* the slab was partial */
1662 avl_remove(&cp
->cache_partial_slabs
, sp
);
1663 sp
->slab_later_count
= 0; /* clear history */
1664 sp
->slab_flags
&= ~KMEM_SLAB_NOMOVE
;
1665 sp
->slab_stuck_offset
= (uint32_t)-1;
1667 list_insert_head(&cp
->cache_complete_slabs
, sp
);
1668 cp
->cache_complete_slab_count
++;
1672 ASSERT(KMEM_SLAB_IS_PARTIAL(sp
));
1674 * Peek to see if the magazine layer is enabled before
1675 * we prefill. We're not holding the cpu cache lock,
1676 * so the peek could be wrong, but there's no harm in it.
1678 if (new_slab
&& prefill
&& (cp
->cache_flags
& KMF_PREFILL
) &&
1679 (KMEM_CPU_CACHE(cp
)->cc_magsize
!= 0)) {
1680 kmem_slab_prefill(cp
, sp
);
1685 avl_add(&cp
->cache_partial_slabs
, sp
);
1690 * The slab is now more allocated than it was, so the
1691 * order remains unchanged.
1693 ASSERT(!avl_update(&cp
->cache_partial_slabs
, sp
));
1698 * Allocate a raw (unconstructed) buffer from cp's slab layer.
1701 kmem_slab_alloc(kmem_cache_t
*cp
, int kmflag
)
1705 boolean_t test_destructor
;
1707 mutex_enter(&cp
->cache_lock
);
1708 test_destructor
= (cp
->cache_slab_alloc
== 0);
1709 sp
= avl_first(&cp
->cache_partial_slabs
);
1711 ASSERT(cp
->cache_bufslab
== 0);
1714 * The freelist is empty. Create a new slab.
1716 mutex_exit(&cp
->cache_lock
);
1717 if ((sp
= kmem_slab_create(cp
, kmflag
)) == NULL
) {
1720 mutex_enter(&cp
->cache_lock
);
1721 cp
->cache_slab_create
++;
1722 if ((cp
->cache_buftotal
+= sp
->slab_chunks
) > cp
->cache_bufmax
)
1723 cp
->cache_bufmax
= cp
->cache_buftotal
;
1724 cp
->cache_bufslab
+= sp
->slab_chunks
;
1727 buf
= kmem_slab_alloc_impl(cp
, sp
, B_TRUE
);
1728 ASSERT((cp
->cache_slab_create
- cp
->cache_slab_destroy
) ==
1729 (cp
->cache_complete_slab_count
+
1730 avl_numnodes(&cp
->cache_partial_slabs
) +
1731 (cp
->cache_defrag
== NULL
? 0 : cp
->cache_defrag
->kmd_deadcount
)));
1732 mutex_exit(&cp
->cache_lock
);
1734 if (test_destructor
&& cp
->cache_destructor
!= NULL
) {
1736 * On the first kmem_slab_alloc(), assert that it is valid to
1737 * call the destructor on a newly constructed object without any
1738 * client involvement.
1740 if ((cp
->cache_constructor
== NULL
) ||
1741 cp
->cache_constructor(buf
, cp
->cache_private
,
1743 cp
->cache_destructor(buf
, cp
->cache_private
);
1745 copy_pattern(KMEM_UNINITIALIZED_PATTERN
, buf
,
1747 if (cp
->cache_flags
& KMF_DEADBEEF
) {
1748 copy_pattern(KMEM_FREE_PATTERN
, buf
, cp
->cache_verify
);
1755 static void kmem_slab_move_yes(kmem_cache_t
*, kmem_slab_t
*, void *);
1758 * Free a raw (unconstructed) buffer to cp's slab layer.
1761 kmem_slab_free(kmem_cache_t
*cp
, void *buf
)
1764 kmem_bufctl_t
*bcp
, **prev_bcpp
;
1766 ASSERT(buf
!= NULL
);
1768 mutex_enter(&cp
->cache_lock
);
1769 cp
->cache_slab_free
++;
1771 if (cp
->cache_flags
& KMF_HASH
) {
1773 * Look up buffer in allocated-address hash table.
1775 prev_bcpp
= KMEM_HASH(cp
, buf
);
1776 while ((bcp
= *prev_bcpp
) != NULL
) {
1777 if (bcp
->bc_addr
== buf
) {
1778 *prev_bcpp
= bcp
->bc_next
;
1782 cp
->cache_lookup_depth
++;
1783 prev_bcpp
= &bcp
->bc_next
;
1786 bcp
= KMEM_BUFCTL(cp
, buf
);
1787 sp
= KMEM_SLAB(cp
, buf
);
1790 if (bcp
== NULL
|| sp
->slab_cache
!= cp
|| !KMEM_SLAB_MEMBER(sp
, buf
)) {
1791 mutex_exit(&cp
->cache_lock
);
1792 kmem_error(KMERR_BADADDR
, cp
, buf
);
1796 if (KMEM_SLAB_OFFSET(sp
, buf
) == sp
->slab_stuck_offset
) {
1798 * If this is the buffer that prevented the consolidator from
1799 * clearing the slab, we can reset the slab flags now that the
1800 * buffer is freed. (It makes sense to do this in
1801 * kmem_cache_free(), where the client gives up ownership of the
1802 * buffer, but on the hot path the test is too expensive.)
1804 kmem_slab_move_yes(cp
, sp
, buf
);
1807 if ((cp
->cache_flags
& (KMF_AUDIT
| KMF_BUFTAG
)) == KMF_AUDIT
) {
1808 if (cp
->cache_flags
& KMF_CONTENTS
)
1809 ((kmem_bufctl_audit_t
*)bcp
)->bc_contents
=
1810 kmem_log_enter(kmem_content_log
, buf
,
1811 cp
->cache_contents
);
1812 KMEM_AUDIT(kmem_transaction_log
, cp
, bcp
);
1815 bcp
->bc_next
= sp
->slab_head
;
1816 sp
->slab_head
= bcp
;
1818 cp
->cache_bufslab
++;
1819 ASSERT(sp
->slab_refcnt
>= 1);
1821 if (--sp
->slab_refcnt
== 0) {
1823 * There are no outstanding allocations from this slab,
1824 * so we can reclaim the memory.
1826 if (sp
->slab_chunks
== 1) {
1827 list_remove(&cp
->cache_complete_slabs
, sp
);
1828 cp
->cache_complete_slab_count
--;
1830 avl_remove(&cp
->cache_partial_slabs
, sp
);
1833 cp
->cache_buftotal
-= sp
->slab_chunks
;
1834 cp
->cache_bufslab
-= sp
->slab_chunks
;
1836 * Defer releasing the slab to the virtual memory subsystem
1837 * while there is a pending move callback, since we guarantee
1838 * that buffers passed to the move callback have only been
1839 * touched by kmem or by the client itself. Since the memory
1840 * patterns baddcafe (uninitialized) and deadbeef (freed) both
1841 * set at least one of the two lowest order bits, the client can
1842 * test those bits in the move callback to determine whether or
1843 * not it knows about the buffer (assuming that the client also
1844 * sets one of those low order bits whenever it frees a buffer).
1846 if (cp
->cache_defrag
== NULL
||
1847 (avl_is_empty(&cp
->cache_defrag
->kmd_moves_pending
) &&
1848 !(sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
))) {
1849 cp
->cache_slab_destroy
++;
1850 mutex_exit(&cp
->cache_lock
);
1851 kmem_slab_destroy(cp
, sp
);
1853 list_t
*deadlist
= &cp
->cache_defrag
->kmd_deadlist
;
1855 * Slabs are inserted at both ends of the deadlist to
1856 * distinguish between slabs freed while move callbacks
1857 * are pending (list head) and a slab freed while the
1858 * lock is dropped in kmem_move_buffers() (list tail) so
1859 * that in both cases slab_destroy() is called from the
1862 if (sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
) {
1863 list_insert_tail(deadlist
, sp
);
1865 list_insert_head(deadlist
, sp
);
1867 cp
->cache_defrag
->kmd_deadcount
++;
1868 mutex_exit(&cp
->cache_lock
);
1873 if (bcp
->bc_next
== NULL
) {
1874 /* Transition the slab from completely allocated to partial. */
1875 ASSERT(sp
->slab_refcnt
== (sp
->slab_chunks
- 1));
1876 ASSERT(sp
->slab_chunks
> 1);
1877 list_remove(&cp
->cache_complete_slabs
, sp
);
1878 cp
->cache_complete_slab_count
--;
1879 avl_add(&cp
->cache_partial_slabs
, sp
);
1881 (void) avl_update_gt(&cp
->cache_partial_slabs
, sp
);
1884 ASSERT((cp
->cache_slab_create
- cp
->cache_slab_destroy
) ==
1885 (cp
->cache_complete_slab_count
+
1886 avl_numnodes(&cp
->cache_partial_slabs
) +
1887 (cp
->cache_defrag
== NULL
? 0 : cp
->cache_defrag
->kmd_deadcount
)));
1888 mutex_exit(&cp
->cache_lock
);
1892 * Return -1 if kmem_error, 1 if constructor fails, 0 if successful.
1895 kmem_cache_alloc_debug(kmem_cache_t
*cp
, void *buf
, int kmflag
, int construct
,
1898 kmem_buftag_t
*btp
= KMEM_BUFTAG(cp
, buf
);
1899 kmem_bufctl_audit_t
*bcp
= (kmem_bufctl_audit_t
*)btp
->bt_bufctl
;
1902 if (btp
->bt_bxstat
!= ((intptr_t)bcp
^ KMEM_BUFTAG_FREE
)) {
1903 kmem_error(KMERR_BADBUFTAG
, cp
, buf
);
1907 btp
->bt_bxstat
= (intptr_t)bcp
^ KMEM_BUFTAG_ALLOC
;
1909 if ((cp
->cache_flags
& KMF_HASH
) && bcp
->bc_addr
!= buf
) {
1910 kmem_error(KMERR_BADBUFCTL
, cp
, buf
);
1914 if (cp
->cache_flags
& KMF_DEADBEEF
) {
1915 if (!construct
&& (cp
->cache_flags
& KMF_LITE
)) {
1916 if (*(uint64_t *)buf
!= KMEM_FREE_PATTERN
) {
1917 kmem_error(KMERR_MODIFIED
, cp
, buf
);
1920 if (cp
->cache_constructor
!= NULL
)
1921 *(uint64_t *)buf
= btp
->bt_redzone
;
1923 *(uint64_t *)buf
= KMEM_UNINITIALIZED_PATTERN
;
1926 if (verify_and_copy_pattern(KMEM_FREE_PATTERN
,
1927 KMEM_UNINITIALIZED_PATTERN
, buf
,
1928 cp
->cache_verify
)) {
1929 kmem_error(KMERR_MODIFIED
, cp
, buf
);
1934 btp
->bt_redzone
= KMEM_REDZONE_PATTERN
;
1936 if ((mtbf
= kmem_mtbf
| cp
->cache_mtbf
) != 0 &&
1937 gethrtime() % mtbf
== 0 &&
1938 (kmflag
& (KM_NOSLEEP
| KM_PANIC
)) == KM_NOSLEEP
) {
1939 kmem_log_event(kmem_failure_log
, cp
, NULL
, NULL
);
1940 if (!construct
&& cp
->cache_destructor
!= NULL
)
1941 cp
->cache_destructor(buf
, cp
->cache_private
);
1946 if (mtbf
|| (construct
&& cp
->cache_constructor
!= NULL
&&
1947 cp
->cache_constructor(buf
, cp
->cache_private
, kmflag
) != 0)) {
1948 atomic_inc_64(&cp
->cache_alloc_fail
);
1949 btp
->bt_bxstat
= (intptr_t)bcp
^ KMEM_BUFTAG_FREE
;
1950 if (cp
->cache_flags
& KMF_DEADBEEF
)
1951 copy_pattern(KMEM_FREE_PATTERN
, buf
, cp
->cache_verify
);
1952 kmem_slab_free(cp
, buf
);
1956 if (cp
->cache_flags
& KMF_AUDIT
) {
1957 KMEM_AUDIT(kmem_transaction_log
, cp
, bcp
);
1960 if ((cp
->cache_flags
& KMF_LITE
) &&
1961 !(cp
->cache_cflags
& KMC_KMEM_ALLOC
)) {
1962 KMEM_BUFTAG_LITE_ENTER(btp
, kmem_lite_count
, caller
);
1969 kmem_cache_free_debug(kmem_cache_t
*cp
, void *buf
, caddr_t caller
)
1971 kmem_buftag_t
*btp
= KMEM_BUFTAG(cp
, buf
);
1972 kmem_bufctl_audit_t
*bcp
= (kmem_bufctl_audit_t
*)btp
->bt_bufctl
;
1975 if (btp
->bt_bxstat
!= ((intptr_t)bcp
^ KMEM_BUFTAG_ALLOC
)) {
1976 if (btp
->bt_bxstat
== ((intptr_t)bcp
^ KMEM_BUFTAG_FREE
)) {
1977 kmem_error(KMERR_DUPFREE
, cp
, buf
);
1980 sp
= kmem_findslab(cp
, buf
);
1981 if (sp
== NULL
|| sp
->slab_cache
!= cp
)
1982 kmem_error(KMERR_BADADDR
, cp
, buf
);
1984 kmem_error(KMERR_REDZONE
, cp
, buf
);
1988 btp
->bt_bxstat
= (intptr_t)bcp
^ KMEM_BUFTAG_FREE
;
1990 if ((cp
->cache_flags
& KMF_HASH
) && bcp
->bc_addr
!= buf
) {
1991 kmem_error(KMERR_BADBUFCTL
, cp
, buf
);
1995 if (btp
->bt_redzone
!= KMEM_REDZONE_PATTERN
) {
1996 kmem_error(KMERR_REDZONE
, cp
, buf
);
2000 if (cp
->cache_flags
& KMF_AUDIT
) {
2001 if (cp
->cache_flags
& KMF_CONTENTS
)
2002 bcp
->bc_contents
= kmem_log_enter(kmem_content_log
,
2003 buf
, cp
->cache_contents
);
2004 KMEM_AUDIT(kmem_transaction_log
, cp
, bcp
);
2007 if ((cp
->cache_flags
& KMF_LITE
) &&
2008 !(cp
->cache_cflags
& KMC_KMEM_ALLOC
)) {
2009 KMEM_BUFTAG_LITE_ENTER(btp
, kmem_lite_count
, caller
);
2012 if (cp
->cache_flags
& KMF_DEADBEEF
) {
2013 if (cp
->cache_flags
& KMF_LITE
)
2014 btp
->bt_redzone
= *(uint64_t *)buf
;
2015 else if (cp
->cache_destructor
!= NULL
)
2016 cp
->cache_destructor(buf
, cp
->cache_private
);
2018 copy_pattern(KMEM_FREE_PATTERN
, buf
, cp
->cache_verify
);
2025 * Free each object in magazine mp to cp's slab layer, and free mp itself.
2028 kmem_magazine_destroy(kmem_cache_t
*cp
, kmem_magazine_t
*mp
, int nrounds
)
2032 ASSERT(!list_link_active(&cp
->cache_link
) ||
2033 taskq_member(kmem_taskq
, curthread
));
2035 for (round
= 0; round
< nrounds
; round
++) {
2036 void *buf
= mp
->mag_round
[round
];
2038 if (cp
->cache_flags
& KMF_DEADBEEF
) {
2039 if (verify_pattern(KMEM_FREE_PATTERN
, buf
,
2040 cp
->cache_verify
) != NULL
) {
2041 kmem_error(KMERR_MODIFIED
, cp
, buf
);
2044 if ((cp
->cache_flags
& KMF_LITE
) &&
2045 cp
->cache_destructor
!= NULL
) {
2046 kmem_buftag_t
*btp
= KMEM_BUFTAG(cp
, buf
);
2047 *(uint64_t *)buf
= btp
->bt_redzone
;
2048 cp
->cache_destructor(buf
, cp
->cache_private
);
2049 *(uint64_t *)buf
= KMEM_FREE_PATTERN
;
2051 } else if (cp
->cache_destructor
!= NULL
) {
2052 cp
->cache_destructor(buf
, cp
->cache_private
);
2055 kmem_slab_free(cp
, buf
);
2057 ASSERT(KMEM_MAGAZINE_VALID(cp
, mp
));
2058 kmem_cache_free(cp
->cache_magtype
->mt_cache
, mp
);
/*
 * Allocate a magazine from the depot.
 */
static kmem_magazine_t *
kmem_depot_alloc(kmem_cache_t *cp, kmem_maglist_t *mlp)
{
	kmem_magazine_t *mp;

	/*
	 * If we can't get the depot lock without contention,
	 * update our contention count.  We use the depot
	 * contention rate to determine whether we need to
	 * increase the magazine size for better scalability.
	 */
	if (!mutex_tryenter(&cp->cache_depot_lock)) {
		mutex_enter(&cp->cache_depot_lock);
		cp->cache_depot_contention++;
	}

	if ((mp = mlp->ml_list) != NULL) {
		ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
		mlp->ml_list = mp->mag_next;
		if (--mlp->ml_total < mlp->ml_min)
			mlp->ml_min = mlp->ml_total;
		mlp->ml_alloc++;
	}

	mutex_exit(&cp->cache_depot_lock);

	return (mp);
}
/*
 * Free a magazine to the depot.
 */
static void
kmem_depot_free(kmem_cache_t *cp, kmem_maglist_t *mlp, kmem_magazine_t *mp)
{
	mutex_enter(&cp->cache_depot_lock);
	ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
	mp->mag_next = mlp->ml_list;
	mlp->ml_list = mp;
	mlp->ml_total++;
	mutex_exit(&cp->cache_depot_lock);
}
/*
 * Update the working set statistics for cp's depot.
 */
static void
kmem_depot_ws_update(kmem_cache_t *cp)
{
	mutex_enter(&cp->cache_depot_lock);
	cp->cache_full.ml_reaplimit = cp->cache_full.ml_min;
	cp->cache_full.ml_min = cp->cache_full.ml_total;
	cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_min;
	cp->cache_empty.ml_min = cp->cache_empty.ml_total;
	mutex_exit(&cp->cache_depot_lock);
}
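/*
 * Worked example (numbers invented for illustration): if the full-magazine
 * list never dropped below 5 magazines during the last interval
 * (ml_min == 5) and holds 8 now (ml_total == 8), then after this update
 * ml_reaplimit == 5 and ml_min == 8. At the next kmem_depot_ws_reap(), up to
 * MIN(ml_reaplimit, ml_min) magazines are destroyed: the part of the depot
 * that sat idle for a whole interval.
 */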
/*
 * Set the working set statistics for cp's depot to zero.  (Everything is
 * eligible for reaping.)
 */
static void
kmem_depot_ws_zero(kmem_cache_t *cp)
{
	mutex_enter(&cp->cache_depot_lock);
	cp->cache_full.ml_reaplimit = cp->cache_full.ml_total;
	cp->cache_full.ml_min = cp->cache_full.ml_total;
	cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_total;
	cp->cache_empty.ml_min = cp->cache_empty.ml_total;
	mutex_exit(&cp->cache_depot_lock);
}
/*
 * The number of bytes to reap before we call kpreempt(). The default (1MB)
 * causes us to preempt reaping up to hundreds of times per second. Using a
 * larger value (1GB) causes this to have virtually no effect.
 */
size_t kmem_reap_preempt_bytes = 1024 * 1024;
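/*
 * Tuning sketch (an assumption about the usual mechanism, not something this
 * file does itself): like other kmem tunables, this could be overridden at
 * boot via /etc/system, e.g. to make preemption during reaping effectively
 * disappear:
 *
 *	set kmem_reap_preempt_bytes = 0x40000000
 */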
2144 * Reap all magazines that have fallen out of the depot's working set.
2147 kmem_depot_ws_reap(kmem_cache_t
*cp
)
2151 kmem_magazine_t
*mp
;
2153 ASSERT(!list_link_active(&cp
->cache_link
) ||
2154 taskq_member(kmem_taskq
, curthread
));
2156 reap
= MIN(cp
->cache_full
.ml_reaplimit
, cp
->cache_full
.ml_min
);
2158 (mp
= kmem_depot_alloc(cp
, &cp
->cache_full
)) != NULL
) {
2159 kmem_magazine_destroy(cp
, mp
, cp
->cache_magtype
->mt_magsize
);
2160 bytes
+= cp
->cache_magtype
->mt_magsize
* cp
->cache_bufsize
;
2161 if (bytes
> kmem_reap_preempt_bytes
) {
2162 kpreempt(KPREEMPT_SYNC
);
2167 reap
= MIN(cp
->cache_empty
.ml_reaplimit
, cp
->cache_empty
.ml_min
);
2169 (mp
= kmem_depot_alloc(cp
, &cp
->cache_empty
)) != NULL
) {
2170 kmem_magazine_destroy(cp
, mp
, 0);
2171 bytes
+= cp
->cache_magtype
->mt_magsize
* cp
->cache_bufsize
;
2172 if (bytes
> kmem_reap_preempt_bytes
) {
2173 kpreempt(KPREEMPT_SYNC
);
2180 kmem_cpu_reload(kmem_cpu_cache_t
*ccp
, kmem_magazine_t
*mp
, int rounds
)
2182 ASSERT((ccp
->cc_loaded
== NULL
&& ccp
->cc_rounds
== -1) ||
2183 (ccp
->cc_loaded
&& ccp
->cc_rounds
+ rounds
== ccp
->cc_magsize
));
2184 ASSERT(ccp
->cc_magsize
> 0);
2186 ccp
->cc_ploaded
= ccp
->cc_loaded
;
2187 ccp
->cc_prounds
= ccp
->cc_rounds
;
2188 ccp
->cc_loaded
= mp
;
2189 ccp
->cc_rounds
= rounds
;
/*
 * Intercept kmem alloc/free calls during crash dump in order to avoid
 * changing kmem state while memory is being saved to the dump device.
 * Otherwise, ::kmem_verify will report "corrupt buffers". Note that
 * there are no locks because only one CPU calls kmem during a crash
 * dump. To enable this feature, first create the associated vmem
 * arena with VMC_DUMPSAFE.
 */
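/*
 * Illustrative sketch only (the arena name and parameters are hypothetical,
 * and the vmem_create() signature is assumed from the vmem interfaces used
 * elsewhere in the kernel): a subsystem opting into dump-safe diversion
 * would create its source arena with VMC_DUMPSAFE and then build its kmem
 * caches on top of it, e.g.
 *
 *	vmem_t *arena = vmem_create("example_dumpsafe_arena", NULL, 0,
 *	    PAGESIZE, segkmem_alloc, segkmem_free, heap_arena, 0,
 *	    VM_SLEEP | VMC_DUMPSAFE);
 */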
static void *kmem_dump_start;	/* start of pre-reserved heap */
static void *kmem_dump_end;	/* end of heap area */
static void *kmem_dump_curr;	/* current free heap pointer */
static size_t kmem_dump_size;	/* size of heap area */
/* append to each buf created in the pre-reserved heap */
typedef struct kmem_dumpctl {
	void	*kdc_next;	/* cache dump free list linkage */
} kmem_dumpctl_t;

#define	KMEM_DUMPCTL(cp, buf)	\
	((kmem_dumpctl_t *)P2ROUNDUP((uintptr_t)(buf) + (cp)->cache_bufsize, \
	    sizeof (void *)))
/* Keep some simple stats. */
#define	KMEM_DUMP_LOGS	(100)

typedef struct kmem_dump_log {
	kmem_cache_t	*kdl_cache;
	uint_t		kdl_allocs;		/* # of dump allocations */
	uint_t		kdl_frees;		/* # of dump frees */
	uint_t		kdl_alloc_fails;	/* # of allocation failures */
	uint_t		kdl_free_nondump;	/* # of non-dump frees */
	uint_t		kdl_unsafe;		/* cache was used, but unsafe */
} kmem_dump_log_t;

static kmem_dump_log_t *kmem_dump_log;
static int kmem_dump_log_idx;
#define	KDI_LOG(cp, stat) {						\
	kmem_dump_log_t *kdl;						\
	if ((kdl = (kmem_dump_log_t *)((cp)->cache_dumplog)) != NULL) {	\
		kdl->stat++;						\
	} else if (kmem_dump_log_idx < KMEM_DUMP_LOGS) {		\
		kdl = &kmem_dump_log[kmem_dump_log_idx++];		\
		kdl->stat++;						\
		kdl->kdl_cache = (cp);					\
		(cp)->cache_dumplog = kdl;				\
	}								\
}
/* set nonzero for a full report */
uint_t kmem_dump_verbose = 0;

/* stats for oversize heap */
uint_t kmem_dump_oversize_allocs = 0;
uint_t kmem_dump_oversize_max = 0;
2249 kmem_dumppr(char **pp
, char *e
, const char *format
, ...)
2257 va_start(ap
, format
);
2258 n
= vsnprintf(p
, e
- p
, format
, ap
);
2265 * Called when dumpadm(8) configures dump parameters.
2268 kmem_dump_init(size_t size
)
2270 if (kmem_dump_start
!= NULL
)
2271 kmem_free(kmem_dump_start
, kmem_dump_size
);
2273 if (kmem_dump_log
== NULL
)
2274 kmem_dump_log
= (kmem_dump_log_t
*)kmem_zalloc(KMEM_DUMP_LOGS
*
2275 sizeof (kmem_dump_log_t
), KM_SLEEP
);
2277 kmem_dump_start
= kmem_alloc(size
, KM_SLEEP
);
2279 if (kmem_dump_start
!= NULL
) {
2280 kmem_dump_size
= size
;
2281 kmem_dump_curr
= kmem_dump_start
;
2282 kmem_dump_end
= (void *)((char *)kmem_dump_start
+ size
);
2283 copy_pattern(KMEM_UNINITIALIZED_PATTERN
, kmem_dump_start
, size
);
2286 kmem_dump_curr
= NULL
;
2287 kmem_dump_end
= NULL
;
/*
 * Set a flag for each kmem_cache_t if it is safe to use alternate dump
 * memory. Called just before panic crash dump starts. Set the flag for
 * the calling CPU.
 */
2297 kmem_dump_begin(void)
2299 ASSERT(panicstr
!= NULL
);
2300 if (kmem_dump_start
!= NULL
) {
2303 for (cp
= list_head(&kmem_caches
); cp
!= NULL
;
2304 cp
= list_next(&kmem_caches
, cp
)) {
2305 kmem_cpu_cache_t
*ccp
= KMEM_CPU_CACHE(cp
);
2307 if (cp
->cache_arena
->vm_cflags
& VMC_DUMPSAFE
) {
2308 cp
->cache_flags
|= KMF_DUMPDIVERT
;
2309 ccp
->cc_flags
|= KMF_DUMPDIVERT
;
2310 ccp
->cc_dump_rounds
= ccp
->cc_rounds
;
2311 ccp
->cc_dump_prounds
= ccp
->cc_prounds
;
2312 ccp
->cc_rounds
= ccp
->cc_prounds
= -1;
2314 cp
->cache_flags
|= KMF_DUMPUNSAFE
;
2315 ccp
->cc_flags
|= KMF_DUMPUNSAFE
;
2322 * finished dump intercept
2323 * print any warnings on the console
2324 * return verbose information to dumpsys() in the given buffer
2327 kmem_dump_finish(char *buf
, size_t size
)
2330 int kdi_end
= kmem_dump_log_idx
;
2336 kmem_dump_log_t
*kdl
;
2337 char *e
= buf
+ size
;
2340 if (kmem_dump_size
== 0 || kmem_dump_verbose
== 0)
2343 used
= (char *)kmem_dump_curr
- (char *)kmem_dump_start
;
2344 percent
= (used
* 100) / kmem_dump_size
;
2346 kmem_dumppr(&p
, e
, "%% heap used,%d\n", percent
);
2347 kmem_dumppr(&p
, e
, "used bytes,%ld\n", used
);
2348 kmem_dumppr(&p
, e
, "heap size,%ld\n", kmem_dump_size
);
2349 kmem_dumppr(&p
, e
, "Oversize allocs,%d\n",
2350 kmem_dump_oversize_allocs
);
2351 kmem_dumppr(&p
, e
, "Oversize max size,%ld\n",
2352 kmem_dump_oversize_max
);
2354 for (kdi_idx
= 0; kdi_idx
< kdi_end
; kdi_idx
++) {
2355 kdl
= &kmem_dump_log
[kdi_idx
];
2356 cp
= kdl
->kdl_cache
;
2359 if (kdl
->kdl_alloc_fails
)
2363 "Cache Name,Allocs,Frees,Alloc Fails,"
2364 "Nondump Frees,Unsafe Allocs/Frees\n");
2367 kmem_dumppr(&p
, e
, "%s,%d,%d,%d,%d,%d\n",
2368 cp
->cache_name
, kdl
->kdl_allocs
, kdl
->kdl_frees
,
2369 kdl
->kdl_alloc_fails
, kdl
->kdl_free_nondump
,
2373 /* return buffer size used */
2380 * Allocate a constructed object from alternate dump memory.
2383 kmem_cache_alloc_dump(kmem_cache_t
*cp
, int kmflag
)
2389 /* return a constructed object */
2390 if ((buf
= cp
->cache_dumpfreelist
) != NULL
) {
2391 cp
->cache_dumpfreelist
= KMEM_DUMPCTL(cp
, buf
)->kdc_next
;
2392 KDI_LOG(cp
, kdl_allocs
);
2396 /* create a new constructed object */
2397 curr
= kmem_dump_curr
;
2398 buf
= (void *)P2ROUNDUP((uintptr_t)curr
, cp
->cache_align
);
2399 bufend
= (char *)KMEM_DUMPCTL(cp
, buf
) + sizeof (kmem_dumpctl_t
);
2401 /* hat layer objects cannot cross a page boundary */
2402 if (cp
->cache_align
< PAGESIZE
) {
2403 char *page
= (char *)P2ROUNDUP((uintptr_t)buf
, PAGESIZE
);
2404 if (bufend
> page
) {
2405 bufend
+= page
- (char *)buf
;
2410 /* fall back to normal alloc if reserved area is used up */
2411 if (bufend
> (char *)kmem_dump_end
) {
2412 kmem_dump_curr
= kmem_dump_end
;
2413 KDI_LOG(cp
, kdl_alloc_fails
);
2418 * Must advance curr pointer before calling a constructor that
2419 * may also allocate memory.
2421 kmem_dump_curr
= bufend
;
2423 /* run constructor */
2424 if (cp
->cache_constructor
!= NULL
&&
2425 cp
->cache_constructor(buf
, cp
->cache_private
, kmflag
)
2428 printf("name='%s' cache=0x%p: kmem cache constructor failed\n",
2429 cp
->cache_name
, (void *)cp
);
2431 /* reset curr pointer iff no allocs were done */
2432 if (kmem_dump_curr
== bufend
)
2433 kmem_dump_curr
= curr
;
2435 /* fall back to normal alloc if the constructor fails */
2436 KDI_LOG(cp
, kdl_alloc_fails
);
2440 KDI_LOG(cp
, kdl_allocs
);
2445 * Free a constructed object in alternate dump memory.
2448 kmem_cache_free_dump(kmem_cache_t
*cp
, void *buf
)
2450 /* save constructed buffers for next time */
2451 if ((char *)buf
>= (char *)kmem_dump_start
&&
2452 (char *)buf
< (char *)kmem_dump_end
) {
2453 KMEM_DUMPCTL(cp
, buf
)->kdc_next
= cp
->cache_dumpfreelist
;
2454 cp
->cache_dumpfreelist
= buf
;
2455 KDI_LOG(cp
, kdl_frees
);
2459 /* count all non-dump buf frees */
2460 KDI_LOG(cp
, kdl_free_nondump
);
2462 /* just drop buffers that were allocated before dump started */
2463 if (kmem_dump_curr
< kmem_dump_end
)
2466 /* fall back to normal free if reserved area is used up */
2471 * Allocate a constructed object from cache cp.
2474 kmem_cache_alloc(kmem_cache_t
*cp
, int kmflag
)
2476 kmem_cpu_cache_t
*ccp
= KMEM_CPU_CACHE(cp
);
2477 kmem_magazine_t
*fmp
;
2480 mutex_enter(&ccp
->cc_lock
);
2483 * If there's an object available in the current CPU's
2484 * loaded magazine, just take it and return.
2486 if (ccp
->cc_rounds
> 0) {
2487 buf
= ccp
->cc_loaded
->mag_round
[--ccp
->cc_rounds
];
2489 mutex_exit(&ccp
->cc_lock
);
2490 if (ccp
->cc_flags
& (KMF_BUFTAG
| KMF_DUMPUNSAFE
)) {
2491 if (ccp
->cc_flags
& KMF_DUMPUNSAFE
) {
2492 ASSERT(!(ccp
->cc_flags
&
2494 KDI_LOG(cp
, kdl_unsafe
);
2496 if ((ccp
->cc_flags
& KMF_BUFTAG
) &&
2497 kmem_cache_alloc_debug(cp
, buf
, kmflag
, 0,
2499 if (kmflag
& KM_NOSLEEP
)
2501 mutex_enter(&ccp
->cc_lock
);
2509 * The loaded magazine is empty. If the previously loaded
2510 * magazine was full, exchange them and try again.
2512 if (ccp
->cc_prounds
> 0) {
2513 kmem_cpu_reload(ccp
, ccp
->cc_ploaded
, ccp
->cc_prounds
);
2518 * Return an alternate buffer at dump time to preserve
2521 if (ccp
->cc_flags
& (KMF_DUMPDIVERT
| KMF_DUMPUNSAFE
)) {
2522 if (ccp
->cc_flags
& KMF_DUMPUNSAFE
) {
2523 ASSERT(!(ccp
->cc_flags
& KMF_DUMPDIVERT
));
2524 /* log it so that we can warn about it */
2525 KDI_LOG(cp
, kdl_unsafe
);
2527 if ((buf
= kmem_cache_alloc_dump(cp
, kmflag
)) !=
2529 mutex_exit(&ccp
->cc_lock
);
2532 break; /* fall back to slab layer */
2537 * If the magazine layer is disabled, break out now.
2539 if (ccp
->cc_magsize
== 0)
2543 * Try to get a full magazine from the depot.
2545 fmp
= kmem_depot_alloc(cp
, &cp
->cache_full
);
2547 if (ccp
->cc_ploaded
!= NULL
)
2548 kmem_depot_free(cp
, &cp
->cache_empty
,
2550 kmem_cpu_reload(ccp
, fmp
, ccp
->cc_magsize
);
2555 * There are no full magazines in the depot,
2556 * so fall through to the slab layer.
2560 mutex_exit(&ccp
->cc_lock
);
2563 * We couldn't allocate a constructed object from the magazine layer,
2564 * so get a raw buffer from the slab layer and apply its constructor.
2566 buf
= kmem_slab_alloc(cp
, kmflag
);
2571 if (cp
->cache_flags
& KMF_BUFTAG
) {
2573 * Make kmem_cache_alloc_debug() apply the constructor for us.
2575 int rc
= kmem_cache_alloc_debug(cp
, buf
, kmflag
, 1, caller());
2577 if (kmflag
& KM_NOSLEEP
)
2580 * kmem_cache_alloc_debug() detected corruption
2581 * but didn't panic (kmem_panic <= 0). We should not be
2582 * here because the constructor failed (indicated by a
2583 * return code of 1). Try again.
2586 return (kmem_cache_alloc(cp
, kmflag
));
2591 if (cp
->cache_constructor
!= NULL
&&
2592 cp
->cache_constructor(buf
, cp
->cache_private
, kmflag
) != 0) {
2593 atomic_inc_64(&cp
->cache_alloc_fail
);
2594 kmem_slab_free(cp
, buf
);
2599 if (buf
!= NULL
&& (kmflag
& KM_ZERO
))
2600 bzero(buf
, cp
->cache_bufsize
);
2606 * The freed argument tells whether or not kmem_cache_free_debug() has already
2607 * been called so that we can avoid the duplicate free error. For example, a
2608 * buffer on a magazine has already been freed by the client but is still
2612 kmem_slab_free_constructed(kmem_cache_t
*cp
, void *buf
, boolean_t freed
)
2614 if (!freed
&& (cp
->cache_flags
& KMF_BUFTAG
))
2615 if (kmem_cache_free_debug(cp
, buf
, caller()) == -1)
2619 * Note that if KMF_DEADBEEF is in effect and KMF_LITE is not,
2620 * kmem_cache_free_debug() will have already applied the destructor.
2622 if ((cp
->cache_flags
& (KMF_DEADBEEF
| KMF_LITE
)) != KMF_DEADBEEF
&&
2623 cp
->cache_destructor
!= NULL
) {
2624 if (cp
->cache_flags
& KMF_DEADBEEF
) { /* KMF_LITE implied */
2625 kmem_buftag_t
*btp
= KMEM_BUFTAG(cp
, buf
);
2626 *(uint64_t *)buf
= btp
->bt_redzone
;
2627 cp
->cache_destructor(buf
, cp
->cache_private
);
2628 *(uint64_t *)buf
= KMEM_FREE_PATTERN
;
2630 cp
->cache_destructor(buf
, cp
->cache_private
);
2634 kmem_slab_free(cp
, buf
);
2638 * Used when there's no room to free a buffer to the per-CPU cache.
2639 * Drops and re-acquires &ccp->cc_lock, and returns non-zero if the
2640 * caller should try freeing to the per-CPU cache again.
2641 * Note that we don't directly install the magazine in the cpu cache,
2642 * since its state may have changed wildly while the lock was dropped.
2645 kmem_cpucache_magazine_alloc(kmem_cpu_cache_t
*ccp
, kmem_cache_t
*cp
)
2647 kmem_magazine_t
*emp
;
2648 kmem_magtype_t
*mtp
;
2650 ASSERT(MUTEX_HELD(&ccp
->cc_lock
));
2651 ASSERT(((uint_t
)ccp
->cc_rounds
== ccp
->cc_magsize
||
2652 ((uint_t
)ccp
->cc_rounds
== -1)) &&
2653 ((uint_t
)ccp
->cc_prounds
== ccp
->cc_magsize
||
2654 ((uint_t
)ccp
->cc_prounds
== -1)));
2656 emp
= kmem_depot_alloc(cp
, &cp
->cache_empty
);
2658 if (ccp
->cc_ploaded
!= NULL
)
2659 kmem_depot_free(cp
, &cp
->cache_full
,
2661 kmem_cpu_reload(ccp
, emp
, 0);
2665 * There are no empty magazines in the depot,
2666 * so try to allocate a new one. We must drop all locks
2667 * across kmem_cache_alloc() because lower layers may
2668 * attempt to allocate from this cache.
2670 mtp
= cp
->cache_magtype
;
2671 mutex_exit(&ccp
->cc_lock
);
2672 emp
= kmem_cache_alloc(mtp
->mt_cache
, KM_NOSLEEP
);
2673 mutex_enter(&ccp
->cc_lock
);
2677 * We successfully allocated an empty magazine.
2678 * However, we had to drop ccp->cc_lock to do it,
2679 * so the cache's magazine size may have changed.
2680 * If so, free the magazine and try again.
2682 if (ccp
->cc_magsize
!= mtp
->mt_magsize
) {
2683 mutex_exit(&ccp
->cc_lock
);
2684 kmem_cache_free(mtp
->mt_cache
, emp
);
2685 mutex_enter(&ccp
->cc_lock
);
2690 * We got a magazine of the right size. Add it to
2691 * the depot and try the whole dance again.
2693 kmem_depot_free(cp
, &cp
->cache_empty
, emp
);
2698 * We couldn't allocate an empty magazine,
2699 * so fall through to the slab layer.
2705 * Free a constructed object to cache cp.
2708 kmem_cache_free(kmem_cache_t
*cp
, void *buf
)
2710 kmem_cpu_cache_t
*ccp
= KMEM_CPU_CACHE(cp
);
2713 * The client must not free either of the buffers passed to the move
2714 * callback function.
2716 ASSERT(cp
->cache_defrag
== NULL
||
2717 cp
->cache_defrag
->kmd_thread
!= curthread
||
2718 (buf
!= cp
->cache_defrag
->kmd_from_buf
&&
2719 buf
!= cp
->cache_defrag
->kmd_to_buf
));
2721 if (ccp
->cc_flags
& (KMF_BUFTAG
| KMF_DUMPDIVERT
| KMF_DUMPUNSAFE
)) {
2722 if (ccp
->cc_flags
& KMF_DUMPUNSAFE
) {
2723 ASSERT(!(ccp
->cc_flags
& KMF_DUMPDIVERT
));
2724 /* log it so that we can warn about it */
2725 KDI_LOG(cp
, kdl_unsafe
);
2726 } else if (KMEM_DUMPCC(ccp
) && !kmem_cache_free_dump(cp
, buf
)) {
2729 if (ccp
->cc_flags
& KMF_BUFTAG
) {
2730 if (kmem_cache_free_debug(cp
, buf
, caller()) == -1)
2735 mutex_enter(&ccp
->cc_lock
);
2737 * Any changes to this logic should be reflected in kmem_slab_prefill()
2741 * If there's a slot available in the current CPU's
2742 * loaded magazine, just put the object there and return.
2744 if ((uint_t
)ccp
->cc_rounds
< ccp
->cc_magsize
) {
2745 ccp
->cc_loaded
->mag_round
[ccp
->cc_rounds
++] = buf
;
2747 mutex_exit(&ccp
->cc_lock
);
2752 * The loaded magazine is full. If the previously loaded
2753 * magazine was empty, exchange them and try again.
2755 if (ccp
->cc_prounds
== 0) {
2756 kmem_cpu_reload(ccp
, ccp
->cc_ploaded
, ccp
->cc_prounds
);
2761 * If the magazine layer is disabled, break out now.
2763 if (ccp
->cc_magsize
== 0)
2766 if (!kmem_cpucache_magazine_alloc(ccp
, cp
)) {
2768 * We couldn't free our constructed object to the
2769 * magazine layer, so apply its destructor and free it
2770 * to the slab layer.
2775 mutex_exit(&ccp
->cc_lock
);
2776 kmem_slab_free_constructed(cp
, buf
, B_TRUE
);
2780 kmem_slab_prefill(kmem_cache_t
*cp
, kmem_slab_t
*sp
)
2782 kmem_cpu_cache_t
*ccp
= KMEM_CPU_CACHE(cp
);
2783 int cache_flags
= cp
->cache_flags
;
2785 kmem_bufctl_t
*next
, *head
;
2789 * Completely allocate the newly created slab and put the pre-allocated
2790 * buffers in magazines. Any of the buffers that cannot be put in
2791 * magazines must be returned to the slab.
2793 ASSERT(MUTEX_HELD(&cp
->cache_lock
));
2794 ASSERT((cache_flags
& (KMF_PREFILL
|KMF_BUFTAG
)) == KMF_PREFILL
);
2795 ASSERT(cp
->cache_constructor
== NULL
);
2796 ASSERT(sp
->slab_cache
== cp
);
2797 ASSERT(sp
->slab_refcnt
== 1);
2798 ASSERT(sp
->slab_head
!= NULL
&& sp
->slab_chunks
> sp
->slab_refcnt
);
2799 ASSERT(avl_find(&cp
->cache_partial_slabs
, sp
, NULL
) == NULL
);
2801 head
= sp
->slab_head
;
2802 nbufs
= (sp
->slab_chunks
- sp
->slab_refcnt
);
2803 sp
->slab_head
= NULL
;
2804 sp
->slab_refcnt
+= nbufs
;
2805 cp
->cache_bufslab
-= nbufs
;
2806 cp
->cache_slab_alloc
+= nbufs
;
2807 list_insert_head(&cp
->cache_complete_slabs
, sp
);
2808 cp
->cache_complete_slab_count
++;
2809 mutex_exit(&cp
->cache_lock
);
2810 mutex_enter(&ccp
->cc_lock
);
2812 while (head
!= NULL
) {
2813 void *buf
= KMEM_BUF(cp
, head
);
2815 * If there's a slot available in the current CPU's
2816 * loaded magazine, just put the object there and
2819 if ((uint_t
)ccp
->cc_rounds
< ccp
->cc_magsize
) {
2820 ccp
->cc_loaded
->mag_round
[ccp
->cc_rounds
++] =
2824 head
= head
->bc_next
;
2829 * The loaded magazine is full. If the previously
2830 * loaded magazine was empty, exchange them and try
2833 if (ccp
->cc_prounds
== 0) {
2834 kmem_cpu_reload(ccp
, ccp
->cc_ploaded
,
2840 * If the magazine layer is disabled, break out now.
2843 if (ccp
->cc_magsize
== 0) {
2847 if (!kmem_cpucache_magazine_alloc(ccp
, cp
))
2850 mutex_exit(&ccp
->cc_lock
);
2852 ASSERT(head
!= NULL
);
2855 * If there was a failure, return remaining objects to
2858 while (head
!= NULL
) {
2860 next
= head
->bc_next
;
2861 head
->bc_next
= NULL
;
2862 kmem_slab_free(cp
, KMEM_BUF(cp
, head
));
2867 ASSERT(head
== NULL
);
2869 mutex_enter(&cp
->cache_lock
);
2873 do_kmem_alloc(size_t size
, int kmflag
, void *caller_pc
)
2879 if ((index
= ((size
- 1) >> KMEM_ALIGN_SHIFT
)) < KMEM_ALLOC_TABLE_MAX
) {
2880 cp
= kmem_alloc_table
[index
];
2881 /* fall through to kmem_cache_alloc() */
2883 } else if ((index
= ((size
- 1) >> KMEM_BIG_SHIFT
)) <
2884 kmem_big_alloc_table_max
) {
2885 cp
= kmem_big_alloc_table
[index
];
2886 /* fall through to kmem_cache_alloc() */
2892 buf
= vmem_alloc(kmem_oversize_arena
, size
,
2893 kmflag
& KM_VMFLAGS
);
2895 kmem_log_event(kmem_failure_log
, NULL
, NULL
,
2897 else if (KMEM_DUMP(kmem_slab_cache
)) {
2898 /* stats for dump intercept */
2899 kmem_dump_oversize_allocs
++;
2900 if (size
> kmem_dump_oversize_max
)
2901 kmem_dump_oversize_max
= size
;
2903 if (buf
!= NULL
&& (kmflag
& KM_ZERO
))
2908 buf
= kmem_cache_alloc(cp
, kmflag
);
2909 if ((cp
->cache_flags
& KMF_BUFTAG
) && !KMEM_DUMP(cp
) && buf
!= NULL
) {
2910 kmem_buftag_t
*btp
= KMEM_BUFTAG(cp
, buf
);
2911 ((uint8_t *)buf
)[size
] = KMEM_REDZONE_BYTE
;
2912 ((uint32_t *)btp
)[1] = KMEM_SIZE_ENCODE(size
);
2914 if (cp
->cache_flags
& KMF_LITE
) {
2915 KMEM_BUFTAG_LITE_ENTER(btp
, kmem_lite_count
, caller_pc
);
2922 kmem_zalloc(size_t size
, int kmflag
)
2924 return (do_kmem_alloc(size
, kmflag
| KM_ZERO
, caller()));
2928 kmem_alloc(size_t size
, int kmflag
)
2930 return (do_kmem_alloc(size
, kmflag
, caller()));
2934 kmem_free(void *buf
, size_t size
)
2939 if ((index
= (size
- 1) >> KMEM_ALIGN_SHIFT
) < KMEM_ALLOC_TABLE_MAX
) {
2940 cp
= kmem_alloc_table
[index
];
2941 /* fall through to kmem_cache_free() */
2943 } else if ((index
= ((size
- 1) >> KMEM_BIG_SHIFT
)) <
2944 kmem_big_alloc_table_max
) {
2945 cp
= kmem_big_alloc_table
[index
];
2946 /* fall through to kmem_cache_free() */
2949 EQUIV(buf
== NULL
, size
== 0);
2950 if (buf
== NULL
&& size
== 0)
2952 vmem_free(kmem_oversize_arena
, buf
, size
);
2956 if ((cp
->cache_flags
& KMF_BUFTAG
) && !KMEM_DUMP(cp
)) {
2957 kmem_buftag_t
*btp
= KMEM_BUFTAG(cp
, buf
);
2958 uint32_t *ip
= (uint32_t *)btp
;
2959 if (ip
[1] != KMEM_SIZE_ENCODE(size
)) {
2960 if (*(uint64_t *)buf
== KMEM_FREE_PATTERN
) {
2961 kmem_error(KMERR_DUPFREE
, cp
, buf
);
2964 if (KMEM_SIZE_VALID(ip
[1])) {
2965 ip
[0] = KMEM_SIZE_ENCODE(size
);
2966 kmem_error(KMERR_BADSIZE
, cp
, buf
);
2968 kmem_error(KMERR_REDZONE
, cp
, buf
);
2972 if (((uint8_t *)buf
)[size
] != KMEM_REDZONE_BYTE
) {
2973 kmem_error(KMERR_REDZONE
, cp
, buf
);
2976 btp
->bt_redzone
= KMEM_REDZONE_PATTERN
;
2977 if (cp
->cache_flags
& KMF_LITE
) {
2978 KMEM_BUFTAG_LITE_ENTER(btp
, kmem_lite_count
,
2982 kmem_cache_free(cp
, buf
);
2986 kmem_firewall_va_alloc(vmem_t
*vmp
, size_t size
, int vmflag
)
2988 size_t realsize
= size
+ vmp
->vm_quantum
;
	/*
	 * Annoying edge case: if 'size' is just shy of ULONG_MAX, adding
	 * vm_quantum will cause integer wraparound.  Check for this, and
	 * blow off the firewall page in this case.  Note that such a
	 * giant allocation (the entire kernel address space) can never
	 * be satisfied, so it will either fail immediately (VM_NOSLEEP)
	 * or sleep forever (VM_SLEEP).  Thus, there is no need for a
	 * corresponding check in kmem_firewall_va_free().
	 */
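	/*
	 * Worked example (numbers invented): with a 4K quantum, a request of
	 * size = ULONG_MAX - 0xfff gives realsize = size + 0x1000, which
	 * wraps to zero, so realsize < size and the firewall page is skipped.
	 */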
3000 if (realsize
< size
)
3004 * While boot still owns resource management, make sure that this
3005 * redzone virtual address allocation is properly accounted for in
3006 * OBPs "virtual-memory" "available" lists because we're
3007 * effectively claiming them for a red zone. If we don't do this,
3008 * the available lists become too fragmented and too large for the
3009 * current boot/kernel memory list interface.
3011 addr
= vmem_alloc(vmp
, realsize
, vmflag
| VM_NEXTFIT
);
3013 if (addr
!= NULL
&& kvseg
.s_base
== NULL
&& realsize
!= size
)
3014 (void) boot_virt_alloc((char *)addr
+ size
, vmp
->vm_quantum
);
3020 kmem_firewall_va_free(vmem_t
*vmp
, void *addr
, size_t size
)
3022 ASSERT((kvseg
.s_base
== NULL
?
3023 va_to_pfn((char *)addr
+ size
) :
3024 hat_getpfnum(kas
.a_hat
, (caddr_t
)addr
+ size
)) == PFN_INVALID
);
3026 vmem_free(vmp
, addr
, size
+ vmp
->vm_quantum
);
/*
 * Try to allocate at least `size' bytes of memory without sleeping or
 * panicking. Return actual allocated size in `asize'. If allocation failed,
 * try final allocation with sleep or panic allowed.
 */
void *
kmem_alloc_tryhard(size_t size, size_t *asize, int kmflag)
{
	void *p;

	*asize = P2ROUNDUP(size, KMEM_ALIGN);
	do {
		p = kmem_alloc(*asize, (kmflag | KM_NOSLEEP) & ~KM_PANIC);
		if (p != NULL)
			return (p);
		*asize += KMEM_ALIGN;
	} while (*asize <= PAGESIZE);

	*asize = P2ROUNDUP(size, KMEM_ALIGN);
	return (kmem_alloc(*asize, kmflag));
}
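/*
 * Caller sketch (hypothetical, not from this file): the actual size must be
 * remembered and passed back to kmem_free(), since it may exceed the
 * requested length:
 *
 *	size_t asize;
 *	void *p = kmem_alloc_tryhard(len, &asize, KM_SLEEP);
 *	...
 *	kmem_free(p, asize);
 */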
3052 * Reclaim all unused memory from a cache.
3055 kmem_cache_reap(kmem_cache_t
*cp
)
3057 ASSERT(taskq_member(kmem_taskq
, curthread
));
3061 * Ask the cache's owner to free some memory if possible.
3062 * The idea is to handle things like the inode cache, which
3063 * typically sits on a bunch of memory that it doesn't truly
3064 * *need*. Reclaim policy is entirely up to the owner; this
3065 * callback is just an advisory plea for help.
3067 if (cp
->cache_reclaim
!= NULL
) {
3071 * Reclaimed memory should be reapable (not included in the
3072 * depot's working set).
3074 delta
= cp
->cache_full
.ml_total
;
3075 cp
->cache_reclaim(cp
->cache_private
);
3076 delta
= cp
->cache_full
.ml_total
- delta
;
3078 mutex_enter(&cp
->cache_depot_lock
);
3079 cp
->cache_full
.ml_reaplimit
+= delta
;
3080 cp
->cache_full
.ml_min
+= delta
;
3081 mutex_exit(&cp
->cache_depot_lock
);
3085 kmem_depot_ws_reap(cp
);
3087 if (cp
->cache_defrag
!= NULL
&& !kmem_move_noreap
) {
3088 kmem_cache_defrag(cp
);
3093 kmem_reap_timeout(void *flag_arg
)
3095 uint32_t *flag
= (uint32_t *)flag_arg
;
3097 ASSERT(flag
== &kmem_reaping
|| flag
== &kmem_reaping_idspace
);
3102 kmem_reap_done(void *flag
)
3104 if (!callout_init_done
) {
3105 /* can't schedule a timeout at this point */
3106 kmem_reap_timeout(flag
);
3108 (void) timeout(kmem_reap_timeout
, flag
, kmem_reap_interval
);
3113 kmem_reap_start(void *flag
)
3115 ASSERT(flag
== &kmem_reaping
|| flag
== &kmem_reaping_idspace
);
3117 if (flag
== &kmem_reaping
) {
3118 kmem_cache_applyall(kmem_cache_reap
, kmem_taskq
, TQ_NOSLEEP
);
3120 * if we have segkp under heap, reap segkp cache.
3126 kmem_cache_applyall_id(kmem_cache_reap
, kmem_taskq
, TQ_NOSLEEP
);
3129 * We use taskq_dispatch() to schedule a timeout to clear
3130 * the flag so that kmem_reap() becomes self-throttling:
3131 * we won't reap again until the current reap completes *and*
3132 * at least kmem_reap_interval ticks have elapsed.
3134 if (!taskq_dispatch(kmem_taskq
, kmem_reap_done
, flag
, TQ_NOSLEEP
))
3135 kmem_reap_done(flag
);
3139 kmem_reap_common(void *flag_arg
)
3141 uint32_t *flag
= (uint32_t *)flag_arg
;
3143 if (MUTEX_HELD(&kmem_cache_lock
) || kmem_taskq
== NULL
||
3144 atomic_cas_32(flag
, 0, 1) != 0)
3148 * It may not be kosher to do memory allocation when a reap is called
3149 * (for example, if vmem_populate() is in the call chain). So we
3150 * start the reap going with a TQ_NOALLOC dispatch. If the dispatch
3151 * fails, we reset the flag, and the next reap will try again.
3153 if (!taskq_dispatch(kmem_taskq
, kmem_reap_start
, flag
, TQ_NOALLOC
))
3158 * Reclaim all unused memory from all caches. Called from the VM system
3159 * when memory gets tight.
3164 kmem_reap_common(&kmem_reaping
);
/*
 * Reclaim all unused memory from identifier arenas, called when a vmem
 * arena not backed by memory is exhausted. Since reaping memory-backed caches
 * cannot help with identifier exhaustion, we avoid both a large amount of
 * work and unwanted side-effects from reclaim callbacks.
 */
3174 kmem_reap_idspace(void)
3176 kmem_reap_common(&kmem_reaping_idspace
);
3180 * Purge all magazines from a cache and set its magazine limit to zero.
3181 * All calls are serialized by the kmem_taskq lock, except for the final
3182 * call from kmem_cache_destroy().
3185 kmem_cache_magazine_purge(kmem_cache_t
*cp
)
3187 kmem_cpu_cache_t
*ccp
;
3188 kmem_magazine_t
*mp
, *pmp
;
3189 int rounds
, prounds
, cpu_seqid
;
3191 ASSERT(!list_link_active(&cp
->cache_link
) ||
3192 taskq_member(kmem_taskq
, curthread
));
3193 ASSERT(MUTEX_NOT_HELD(&cp
->cache_lock
));
3195 for (cpu_seqid
= 0; cpu_seqid
< max_ncpus
; cpu_seqid
++) {
3196 ccp
= &cp
->cache_cpu
[cpu_seqid
];
3198 mutex_enter(&ccp
->cc_lock
);
3199 mp
= ccp
->cc_loaded
;
3200 pmp
= ccp
->cc_ploaded
;
3201 rounds
= ccp
->cc_rounds
;
3202 prounds
= ccp
->cc_prounds
;
3203 ccp
->cc_loaded
= NULL
;
3204 ccp
->cc_ploaded
= NULL
;
3205 ccp
->cc_rounds
= -1;
3206 ccp
->cc_prounds
= -1;
3207 ccp
->cc_magsize
= 0;
3208 mutex_exit(&ccp
->cc_lock
);
3211 kmem_magazine_destroy(cp
, mp
, rounds
);
3213 kmem_magazine_destroy(cp
, pmp
, prounds
);
3216 kmem_depot_ws_zero(cp
);
3217 kmem_depot_ws_reap(cp
);
3221 * Enable per-cpu magazines on a cache.
3224 kmem_cache_magazine_enable(kmem_cache_t
*cp
)
3228 if (cp
->cache_flags
& KMF_NOMAGAZINE
)
3231 for (cpu_seqid
= 0; cpu_seqid
< max_ncpus
; cpu_seqid
++) {
3232 kmem_cpu_cache_t
*ccp
= &cp
->cache_cpu
[cpu_seqid
];
3233 mutex_enter(&ccp
->cc_lock
);
3234 ccp
->cc_magsize
= cp
->cache_magtype
->mt_magsize
;
3235 mutex_exit(&ccp
->cc_lock
);
3241 * Allow our caller to determine if there are running reaps.
3243 * This call is very conservative and may return B_TRUE even when
3244 * reaping activity isn't active. If it returns B_FALSE, then reaping
3245 * activity is definitely inactive.
3248 kmem_cache_reap_active(void)
3250 return (!taskq_empty(kmem_taskq
));
3254 * Reap (almost) everything soon.
3256 * Note: this does not wait for the reap-tasks to complete. Caller
3257 * should use kmem_cache_reap_active() (above) and/or moderation to
3258 * avoid scheduling too many reap-tasks.
3261 kmem_cache_reap_soon(kmem_cache_t
*cp
)
3263 ASSERT(list_link_active(&cp
->cache_link
));
3265 kmem_depot_ws_zero(cp
);
3267 (void) taskq_dispatch(kmem_taskq
,
3268 (task_func_t
*)kmem_depot_ws_reap
, cp
, TQ_SLEEP
);
/*
 * Recompute a cache's magazine size.  The trade-off is that larger magazines
 * provide a higher transfer rate with the depot, while smaller magazines
 * reduce memory consumption.  Magazine resizing is an expensive operation;
 * it should not be done frequently.
 *
 * Changes to the magazine size are serialized by the kmem_taskq lock.
 *
 * Note: at present this only grows the magazine size.  It might be useful
 * to allow shrinkage too.
 */
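/*
 * Observability sketch (assuming the standard kmem_cache kstats published by
 * kmem_cache_kstat_update() below): a resize shows up as a jump in the
 * magazine_size statistic for the cache, and the pressure that drives it is
 * visible as a growing depot_contention, e.g. from userland:
 *
 *	kstat -c kmem_cache -n <cache_name>
 */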
3283 kmem_cache_magazine_resize(kmem_cache_t
*cp
)
3285 kmem_magtype_t
*mtp
= cp
->cache_magtype
;
3287 ASSERT(taskq_member(kmem_taskq
, curthread
));
3289 if (cp
->cache_chunksize
< mtp
->mt_maxbuf
) {
3290 kmem_cache_magazine_purge(cp
);
3291 mutex_enter(&cp
->cache_depot_lock
);
3292 cp
->cache_magtype
= ++mtp
;
3293 cp
->cache_depot_contention_prev
=
3294 cp
->cache_depot_contention
+ INT_MAX
;
3295 mutex_exit(&cp
->cache_depot_lock
);
3296 kmem_cache_magazine_enable(cp
);
3301 * Rescale a cache's hash table, so that the table size is roughly the
3302 * cache size. We want the average lookup time to be extremely small.
3305 kmem_hash_rescale(kmem_cache_t
*cp
)
3307 kmem_bufctl_t
**old_table
, **new_table
, *bcp
;
3308 size_t old_size
, new_size
, h
;
3310 ASSERT(taskq_member(kmem_taskq
, curthread
));
3312 new_size
= MAX(KMEM_HASH_INITIAL
,
3313 1 << (highbit(3 * cp
->cache_buftotal
+ 4) - 2));
3314 old_size
= cp
->cache_hash_mask
+ 1;
3316 if ((old_size
>> 1) <= new_size
&& new_size
<= (old_size
<< 1))
3319 new_table
= vmem_alloc(kmem_hash_arena
, new_size
* sizeof (void *),
3321 if (new_table
== NULL
)
3323 bzero(new_table
, new_size
* sizeof (void *));
3325 mutex_enter(&cp
->cache_lock
);
3327 old_size
= cp
->cache_hash_mask
+ 1;
3328 old_table
= cp
->cache_hash_table
;
3330 cp
->cache_hash_mask
= new_size
- 1;
3331 cp
->cache_hash_table
= new_table
;
3332 cp
->cache_rescale
++;
3334 for (h
= 0; h
< old_size
; h
++) {
3336 while (bcp
!= NULL
) {
3337 void *addr
= bcp
->bc_addr
;
3338 kmem_bufctl_t
*next_bcp
= bcp
->bc_next
;
3339 kmem_bufctl_t
**hash_bucket
= KMEM_HASH(cp
, addr
);
3340 bcp
->bc_next
= *hash_bucket
;
3346 mutex_exit(&cp
->cache_lock
);
3348 vmem_free(kmem_hash_arena
, old_table
, old_size
* sizeof (void *));
3352 * Perform periodic maintenance on a cache: hash rescaling, depot working-set
3353 * update, magazine resizing, and slab consolidation.
3356 kmem_cache_update(kmem_cache_t
*cp
)
3358 int need_hash_rescale
= 0;
3359 int need_magazine_resize
= 0;
3361 ASSERT(MUTEX_HELD(&kmem_cache_lock
));
3364 * If the cache has become much larger or smaller than its hash table,
3365 * fire off a request to rescale the hash table.
3367 mutex_enter(&cp
->cache_lock
);
3369 if ((cp
->cache_flags
& KMF_HASH
) &&
3370 (cp
->cache_buftotal
> (cp
->cache_hash_mask
<< 1) ||
3371 (cp
->cache_buftotal
< (cp
->cache_hash_mask
>> 1) &&
3372 cp
->cache_hash_mask
> KMEM_HASH_INITIAL
)))
3373 need_hash_rescale
= 1;
3375 mutex_exit(&cp
->cache_lock
);
3378 * Update the depot working set statistics.
3380 kmem_depot_ws_update(cp
);
3383 * If there's a lot of contention in the depot,
3384 * increase the magazine size.
3386 mutex_enter(&cp
->cache_depot_lock
);
3388 if (cp
->cache_chunksize
< cp
->cache_magtype
->mt_maxbuf
&&
3389 (int)(cp
->cache_depot_contention
-
3390 cp
->cache_depot_contention_prev
) > kmem_depot_contention
)
3391 need_magazine_resize
= 1;
3393 cp
->cache_depot_contention_prev
= cp
->cache_depot_contention
;
3395 mutex_exit(&cp
->cache_depot_lock
);
3397 if (need_hash_rescale
)
3398 (void) taskq_dispatch(kmem_taskq
,
3399 (task_func_t
*)kmem_hash_rescale
, cp
, TQ_NOSLEEP
);
3401 if (need_magazine_resize
)
3402 (void) taskq_dispatch(kmem_taskq
,
3403 (task_func_t
*)kmem_cache_magazine_resize
, cp
, TQ_NOSLEEP
);
3405 if (cp
->cache_defrag
!= NULL
)
3406 (void) taskq_dispatch(kmem_taskq
,
3407 (task_func_t
*)kmem_cache_scan
, cp
, TQ_NOSLEEP
);
3410 static void kmem_update(void *);
3413 kmem_update_timeout(void *dummy
)
3415 (void) timeout(kmem_update
, dummy
, kmem_reap_interval
);
3419 kmem_update(void *dummy
)
3421 kmem_cache_applyall(kmem_cache_update
, NULL
, TQ_NOSLEEP
);
3424 * We use taskq_dispatch() to reschedule the timeout so that
3425 * kmem_update() becomes self-throttling: it won't schedule
3426 * new tasks until all previous tasks have completed.
3428 if (!taskq_dispatch(kmem_taskq
, kmem_update_timeout
, dummy
, TQ_NOSLEEP
))
3429 kmem_update_timeout(NULL
);
3433 kmem_cache_kstat_update(kstat_t
*ksp
, int rw
)
3435 struct kmem_cache_kstat
*kmcp
= &kmem_cache_kstat
;
3436 kmem_cache_t
*cp
= ksp
->ks_private
;
3437 uint64_t cpu_buf_avail
;
3438 uint64_t buf_avail
= 0;
3442 ASSERT(MUTEX_HELD(&kmem_cache_kstat_lock
));
3444 if (rw
== KSTAT_WRITE
)
3447 mutex_enter(&cp
->cache_lock
);
3449 kmcp
->kmc_alloc_fail
.value
.ui64
= cp
->cache_alloc_fail
;
3450 kmcp
->kmc_alloc
.value
.ui64
= cp
->cache_slab_alloc
;
3451 kmcp
->kmc_free
.value
.ui64
= cp
->cache_slab_free
;
3452 kmcp
->kmc_slab_alloc
.value
.ui64
= cp
->cache_slab_alloc
;
3453 kmcp
->kmc_slab_free
.value
.ui64
= cp
->cache_slab_free
;
3455 for (cpu_seqid
= 0; cpu_seqid
< max_ncpus
; cpu_seqid
++) {
3456 kmem_cpu_cache_t
*ccp
= &cp
->cache_cpu
[cpu_seqid
];
3458 mutex_enter(&ccp
->cc_lock
);
3461 if (ccp
->cc_rounds
> 0)
3462 cpu_buf_avail
+= ccp
->cc_rounds
;
3463 if (ccp
->cc_prounds
> 0)
3464 cpu_buf_avail
+= ccp
->cc_prounds
;
3466 kmcp
->kmc_alloc
.value
.ui64
+= ccp
->cc_alloc
;
3467 kmcp
->kmc_free
.value
.ui64
+= ccp
->cc_free
;
3468 buf_avail
+= cpu_buf_avail
;
3470 mutex_exit(&ccp
->cc_lock
);
3473 mutex_enter(&cp
->cache_depot_lock
);
3475 kmcp
->kmc_depot_alloc
.value
.ui64
= cp
->cache_full
.ml_alloc
;
3476 kmcp
->kmc_depot_free
.value
.ui64
= cp
->cache_empty
.ml_alloc
;
3477 kmcp
->kmc_depot_contention
.value
.ui64
= cp
->cache_depot_contention
;
3478 kmcp
->kmc_full_magazines
.value
.ui64
= cp
->cache_full
.ml_total
;
3479 kmcp
->kmc_empty_magazines
.value
.ui64
= cp
->cache_empty
.ml_total
;
3480 kmcp
->kmc_magazine_size
.value
.ui64
=
3481 (cp
->cache_flags
& KMF_NOMAGAZINE
) ?
3482 0 : cp
->cache_magtype
->mt_magsize
;
3484 kmcp
->kmc_alloc
.value
.ui64
+= cp
->cache_full
.ml_alloc
;
3485 kmcp
->kmc_free
.value
.ui64
+= cp
->cache_empty
.ml_alloc
;
3486 buf_avail
+= cp
->cache_full
.ml_total
* cp
->cache_magtype
->mt_magsize
;
3488 reap
= MIN(cp
->cache_full
.ml_reaplimit
, cp
->cache_full
.ml_min
);
3489 reap
= MIN(reap
, cp
->cache_full
.ml_total
);
3491 mutex_exit(&cp
->cache_depot_lock
);
3493 kmcp
->kmc_buf_size
.value
.ui64
= cp
->cache_bufsize
;
3494 kmcp
->kmc_align
.value
.ui64
= cp
->cache_align
;
3495 kmcp
->kmc_chunk_size
.value
.ui64
= cp
->cache_chunksize
;
3496 kmcp
->kmc_slab_size
.value
.ui64
= cp
->cache_slabsize
;
3497 kmcp
->kmc_buf_constructed
.value
.ui64
= buf_avail
;
3498 buf_avail
+= cp
->cache_bufslab
;
3499 kmcp
->kmc_buf_avail
.value
.ui64
= buf_avail
;
3500 kmcp
->kmc_buf_inuse
.value
.ui64
= cp
->cache_buftotal
- buf_avail
;
3501 kmcp
->kmc_buf_total
.value
.ui64
= cp
->cache_buftotal
;
3502 kmcp
->kmc_buf_max
.value
.ui64
= cp
->cache_bufmax
;
3503 kmcp
->kmc_slab_create
.value
.ui64
= cp
->cache_slab_create
;
3504 kmcp
->kmc_slab_destroy
.value
.ui64
= cp
->cache_slab_destroy
;
3505 kmcp
->kmc_hash_size
.value
.ui64
= (cp
->cache_flags
& KMF_HASH
) ?
3506 cp
->cache_hash_mask
+ 1 : 0;
3507 kmcp
->kmc_hash_lookup_depth
.value
.ui64
= cp
->cache_lookup_depth
;
3508 kmcp
->kmc_hash_rescale
.value
.ui64
= cp
->cache_rescale
;
3509 kmcp
->kmc_vmem_source
.value
.ui64
= cp
->cache_arena
->vm_id
;
3510 kmcp
->kmc_reap
.value
.ui64
= cp
->cache_reap
;
3512 if (cp
->cache_defrag
== NULL
) {
3513 kmcp
->kmc_move_callbacks
.value
.ui64
= 0;
3514 kmcp
->kmc_move_yes
.value
.ui64
= 0;
3515 kmcp
->kmc_move_no
.value
.ui64
= 0;
3516 kmcp
->kmc_move_later
.value
.ui64
= 0;
3517 kmcp
->kmc_move_dont_need
.value
.ui64
= 0;
3518 kmcp
->kmc_move_dont_know
.value
.ui64
= 0;
3519 kmcp
->kmc_move_hunt_found
.value
.ui64
= 0;
3520 kmcp
->kmc_move_slabs_freed
.value
.ui64
= 0;
3521 kmcp
->kmc_defrag
.value
.ui64
= 0;
3522 kmcp
->kmc_scan
.value
.ui64
= 0;
3523 kmcp
->kmc_move_reclaimable
.value
.ui64
= 0;
3525 int64_t reclaimable
;
3527 kmem_defrag_t
*kd
= cp
->cache_defrag
;
3528 kmcp
->kmc_move_callbacks
.value
.ui64
= kd
->kmd_callbacks
;
3529 kmcp
->kmc_move_yes
.value
.ui64
= kd
->kmd_yes
;
3530 kmcp
->kmc_move_no
.value
.ui64
= kd
->kmd_no
;
3531 kmcp
->kmc_move_later
.value
.ui64
= kd
->kmd_later
;
3532 kmcp
->kmc_move_dont_need
.value
.ui64
= kd
->kmd_dont_need
;
3533 kmcp
->kmc_move_dont_know
.value
.ui64
= kd
->kmd_dont_know
;
3534 kmcp
->kmc_move_hunt_found
.value
.ui64
= 0;
3535 kmcp
->kmc_move_slabs_freed
.value
.ui64
= kd
->kmd_slabs_freed
;
3536 kmcp
->kmc_defrag
.value
.ui64
= kd
->kmd_defrags
;
3537 kmcp
->kmc_scan
.value
.ui64
= kd
->kmd_scans
;
3539 reclaimable
= cp
->cache_bufslab
- (cp
->cache_maxchunks
- 1);
3540 reclaimable
= MAX(reclaimable
, 0);
3541 reclaimable
+= ((uint64_t)reap
* cp
->cache_magtype
->mt_magsize
);
3542 kmcp
->kmc_move_reclaimable
.value
.ui64
= reclaimable
;
3545 mutex_exit(&cp
->cache_lock
);
3550 * Return a named statistic about a particular cache.
3551 * This shouldn't be called very often, so it's currently designed for
3552 * simplicity (leverages existing kstat support) rather than efficiency.
3555 kmem_cache_stat(kmem_cache_t
*cp
, char *name
)
3558 kstat_t
*ksp
= cp
->cache_kstat
;
3559 kstat_named_t
*knp
= (kstat_named_t
*)&kmem_cache_kstat
;
3563 mutex_enter(&kmem_cache_kstat_lock
);
3564 (void) kmem_cache_kstat_update(ksp
, KSTAT_READ
);
3565 for (i
= 0; i
< ksp
->ks_ndata
; i
++) {
3566 if (strcmp(knp
[i
].name
, name
) == 0) {
3567 value
= knp
[i
].value
.ui64
;
3571 mutex_exit(&kmem_cache_kstat_lock
);
3577 * Return an estimate of currently available kernel heap memory.
3578 * On 32-bit systems, physical memory may exceed virtual memory,
3579 * we just truncate the result at 1GB.
3584 spgcnt_t rmem
= availrmem
- tune
.t_minarmem
;
3585 spgcnt_t fmem
= freemem
- minfree
;
3587 return ((size_t)ptob(MIN(MAX(MIN(rmem
, fmem
), 0),
3588 1 << (30 - PAGESHIFT
))));
3592 * Return the maximum amount of memory that is (in theory) allocatable
3593 * from the heap. This may be used as an estimate only since there
3594 * is no guarentee this space will still be available when an allocation
3595 * request is made, nor that the space may be allocated in one big request
3596 * due to kernel heap fragmentation.
3601 spgcnt_t pmem
= availrmem
- tune
.t_minarmem
;
3602 spgcnt_t vmem
= btop(vmem_size(heap_arena
, VMEM_FREE
));
3604 return ((size_t)ptob(MAX(MIN(pmem
, vmem
), 0)));
3608 * Indicate whether memory-intensive kmem debugging is enabled.
3611 kmem_debugging(void)
3613 return (kmem_flags
& (KMF_AUDIT
| KMF_REDZONE
));
3616 /* binning function, sorts finely at the two extremes */
3617 #define KMEM_PARTIAL_SLAB_WEIGHT(sp, binshift) \
3618 ((((sp)->slab_refcnt <= (binshift)) || \
3619 (((sp)->slab_chunks - (sp)->slab_refcnt) <= (binshift))) \
3620 ? -(sp)->slab_refcnt \
3621 : -((binshift) + ((sp)->slab_refcnt >> (binshift))))
3624 * Minimizing the number of partial slabs on the freelist minimizes
3625 * fragmentation (the ratio of unused buffers held by the slab layer). There are
3626 * two ways to get a slab off of the freelist: 1) free all the buffers on the
3627 * slab, and 2) allocate all the buffers on the slab. It follows that we want
3628 * the most-used slabs at the front of the list where they have the best chance
3629 * of being completely allocated, and the least-used slabs at a safe distance
3630 * from the front to improve the odds that the few remaining buffers will all be
3631 * freed before another allocation can tie up the slab. For that reason a slab
3632 * with a higher slab_refcnt sorts less than than a slab with a lower
3635 * However, if a slab has at least one buffer that is deemed unfreeable, we
3636 * would rather have that slab at the front of the list regardless of
3637 * slab_refcnt, since even one unfreeable buffer makes the entire slab
3638 * unfreeable. If the client returns KMEM_CBRC_NO in response to a cache_move()
3639 * callback, the slab is marked unfreeable for as long as it remains on the
3643 kmem_partial_slab_cmp(const void *p0
, const void *p1
)
3645 const kmem_cache_t
*cp
;
3646 const kmem_slab_t
*s0
= p0
;
3647 const kmem_slab_t
*s1
= p1
;
3651 ASSERT(KMEM_SLAB_IS_PARTIAL(s0
));
3652 ASSERT(KMEM_SLAB_IS_PARTIAL(s1
));
3653 ASSERT(s0
->slab_cache
== s1
->slab_cache
);
3654 cp
= s1
->slab_cache
;
3655 ASSERT(MUTEX_HELD(&cp
->cache_lock
));
3656 binshift
= cp
->cache_partial_binshift
;
3658 /* weight of first slab */
3659 w0
= KMEM_PARTIAL_SLAB_WEIGHT(s0
, binshift
);
3660 if (s0
->slab_flags
& KMEM_SLAB_NOMOVE
) {
3661 w0
-= cp
->cache_maxchunks
;
3664 /* weight of second slab */
3665 w1
= KMEM_PARTIAL_SLAB_WEIGHT(s1
, binshift
);
3666 if (s1
->slab_flags
& KMEM_SLAB_NOMOVE
) {
3667 w1
-= cp
->cache_maxchunks
;
3675 /* compare pointer values */
3676 if ((uintptr_t)s0
< (uintptr_t)s1
)
3678 if ((uintptr_t)s0
> (uintptr_t)s1
)
3685 * It must be valid to call the destructor (if any) on a newly created object.
3686 * That is, the constructor (if any) must leave the object in a valid state for
3691 char *name
, /* descriptive name for this cache */
3692 size_t bufsize
, /* size of the objects it manages */
3693 size_t align
, /* required object alignment */
3694 int (*constructor
)(void *, void *, int), /* object constructor */
3695 void (*destructor
)(void *, void *), /* object destructor */
3696 void (*reclaim
)(void *), /* memory reclaim callback */
3697 void *private, /* pass-thru arg for constr/destr/reclaim */
3698 vmem_t
*vmp
, /* vmem source for slab allocation */
3699 int cflags
) /* cache creation flags */
3704 kmem_magtype_t
*mtp
;
3705 size_t csize
= KMEM_CACHE_SIZE(max_ncpus
);
3709 * Cache names should conform to the rules for valid C identifiers
3711 if (!strident_valid(name
)) {
3713 "kmem_cache_create: '%s' is an invalid cache name\n"
3714 "cache names must conform to the rules for "
3715 "C identifiers\n", name
);
3720 vmp
= kmem_default_arena
;
3723 * If this kmem cache has an identifier vmem arena as its source, mark
3724 * it such to allow kmem_reap_idspace().
3726 ASSERT(!(cflags
& KMC_IDENTIFIER
)); /* consumer should not set this */
3727 if (vmp
->vm_cflags
& VMC_IDENTIFIER
)
3728 cflags
|= KMC_IDENTIFIER
;
3731 * Get a kmem_cache structure. We arrange that cp->cache_cpu[]
3732 * is aligned on a KMEM_CPU_CACHE_SIZE boundary to prevent
3733 * false sharing of per-CPU data.
3735 cp
= vmem_xalloc(kmem_cache_arena
, csize
, KMEM_CPU_CACHE_SIZE
,
3736 P2NPHASE(csize
, KMEM_CPU_CACHE_SIZE
), 0, NULL
, NULL
, VM_SLEEP
);
3738 list_link_init(&cp
->cache_link
);
3744 * If we're not at least KMEM_ALIGN aligned, we can't use free
3745 * memory to hold bufctl information (because we can't safely
3746 * perform word loads and stores on it).
3748 if (align
< KMEM_ALIGN
)
3749 cflags
|= KMC_NOTOUCH
;
3751 if (!ISP2(align
) || align
> vmp
->vm_quantum
)
3752 panic("kmem_cache_create: bad alignment %lu", align
);
3754 mutex_enter(&kmem_flags_lock
);
3755 if (kmem_flags
& KMF_RANDOMIZE
)
3756 kmem_flags
= (((kmem_flags
| ~KMF_RANDOM
) + 1) & KMF_RANDOM
) |
3758 cp
->cache_flags
= (kmem_flags
| cflags
) & KMF_DEBUG
;
3759 mutex_exit(&kmem_flags_lock
);
3762 * Make sure all the various flags are reasonable.
3764 ASSERT(!(cflags
& KMC_NOHASH
) || !(cflags
& KMC_NOTOUCH
));
3766 if (cp
->cache_flags
& KMF_LITE
) {
3767 if (bufsize
>= kmem_lite_minsize
&&
3768 align
<= kmem_lite_maxalign
&&
3769 P2PHASE(bufsize
, kmem_lite_maxalign
) != 0) {
3770 cp
->cache_flags
|= KMF_BUFTAG
;
3771 cp
->cache_flags
&= ~(KMF_AUDIT
| KMF_FIREWALL
);
3773 cp
->cache_flags
&= ~KMF_DEBUG
;
3777 if (cp
->cache_flags
& KMF_DEADBEEF
)
3778 cp
->cache_flags
|= KMF_REDZONE
;
3780 if ((cflags
& KMC_QCACHE
) && (cp
->cache_flags
& KMF_AUDIT
))
3781 cp
->cache_flags
|= KMF_NOMAGAZINE
;
3783 if (cflags
& KMC_NODEBUG
)
3784 cp
->cache_flags
&= ~KMF_DEBUG
;
3786 if (cflags
& KMC_NOTOUCH
)
3787 cp
->cache_flags
&= ~KMF_TOUCH
;
3789 if (cflags
& KMC_PREFILL
)
3790 cp
->cache_flags
|= KMF_PREFILL
;
3792 if (cflags
& KMC_NOHASH
)
3793 cp
->cache_flags
&= ~(KMF_AUDIT
| KMF_FIREWALL
);
3795 if (cflags
& KMC_NOMAGAZINE
)
3796 cp
->cache_flags
|= KMF_NOMAGAZINE
;
3798 if ((cp
->cache_flags
& KMF_AUDIT
) && !(cflags
& KMC_NOTOUCH
))
3799 cp
->cache_flags
|= KMF_REDZONE
;
3801 if (!(cp
->cache_flags
& KMF_AUDIT
))
3802 cp
->cache_flags
&= ~KMF_CONTENTS
;
3804 if ((cp
->cache_flags
& KMF_BUFTAG
) && bufsize
>= kmem_minfirewall
&&
3805 !(cp
->cache_flags
& KMF_LITE
) && !(cflags
& KMC_NOHASH
))
3806 cp
->cache_flags
|= KMF_FIREWALL
;
3808 if (vmp
!= kmem_default_arena
|| kmem_firewall_arena
== NULL
)
3809 cp
->cache_flags
&= ~KMF_FIREWALL
;
3811 if (cp
->cache_flags
& KMF_FIREWALL
) {
3812 cp
->cache_flags
&= ~KMF_BUFTAG
;
3813 cp
->cache_flags
|= KMF_NOMAGAZINE
;
3814 ASSERT(vmp
== kmem_default_arena
);
3815 vmp
= kmem_firewall_arena
;
3819 * Set cache properties.
3821 (void) strncpy(cp
->cache_name
, name
, KMEM_CACHE_NAMELEN
);
3822 strident_canon(cp
->cache_name
, KMEM_CACHE_NAMELEN
+ 1);
3823 cp
->cache_bufsize
= bufsize
;
3824 cp
->cache_align
= align
;
3825 cp
->cache_constructor
= constructor
;
3826 cp
->cache_destructor
= destructor
;
3827 cp
->cache_reclaim
= reclaim
;
3828 cp
->cache_private
= private;
3829 cp
->cache_arena
= vmp
;
3830 cp
->cache_cflags
= cflags
;
3833 * Determine the chunk size.
3835 chunksize
= bufsize
;
3837 if (align
>= KMEM_ALIGN
) {
3838 chunksize
= P2ROUNDUP(chunksize
, KMEM_ALIGN
);
3839 cp
->cache_bufctl
= chunksize
- KMEM_ALIGN
;
3842 if (cp
->cache_flags
& KMF_BUFTAG
) {
3843 cp
->cache_bufctl
= chunksize
;
3844 cp
->cache_buftag
= chunksize
;
3845 if (cp
->cache_flags
& KMF_LITE
)
3846 chunksize
+= KMEM_BUFTAG_LITE_SIZE(kmem_lite_count
);
3848 chunksize
+= sizeof (kmem_buftag_t
);
3851 if (cp
->cache_flags
& KMF_DEADBEEF
) {
3852 cp
->cache_verify
= MIN(cp
->cache_buftag
, kmem_maxverify
);
3853 if (cp
->cache_flags
& KMF_LITE
)
3854 cp
->cache_verify
= sizeof (uint64_t);
3857 cp
->cache_contents
= MIN(cp
->cache_bufctl
, kmem_content_maxsave
);
3859 cp
->cache_chunksize
= chunksize
= P2ROUNDUP(chunksize
, align
);
3862 * Now that we know the chunk size, determine the optimal slab size.
3864 if (vmp
== kmem_firewall_arena
) {
3865 cp
->cache_slabsize
= P2ROUNDUP(chunksize
, vmp
->vm_quantum
);
3866 cp
->cache_mincolor
= cp
->cache_slabsize
- chunksize
;
3867 cp
->cache_maxcolor
= cp
->cache_mincolor
;
3868 cp
->cache_flags
|= KMF_HASH
;
3869 ASSERT(!(cp
->cache_flags
& KMF_BUFTAG
));
3870 } else if ((cflags
& KMC_NOHASH
) || (!(cflags
& KMC_NOTOUCH
) &&
3871 !(cp
->cache_flags
& KMF_AUDIT
) &&
3872 chunksize
< vmp
->vm_quantum
/ KMEM_VOID_FRACTION
)) {
3873 cp
->cache_slabsize
= vmp
->vm_quantum
;
3874 cp
->cache_mincolor
= 0;
3875 cp
->cache_maxcolor
=
3876 (cp
->cache_slabsize
- sizeof (kmem_slab_t
)) % chunksize
;
3877 ASSERT(chunksize
+ sizeof (kmem_slab_t
) <= cp
->cache_slabsize
);
3878 ASSERT(!(cp
->cache_flags
& KMF_AUDIT
));
3880 size_t chunks
, bestfit
, waste
, slabsize
;
3881 size_t minwaste
= LONG_MAX
;
3883 for (chunks
= 1; chunks
<= KMEM_VOID_FRACTION
; chunks
++) {
3884 slabsize
= P2ROUNDUP(chunksize
* chunks
,
3886 chunks
= slabsize
/ chunksize
;
3887 waste
= (slabsize
% chunksize
) / chunks
;
3888 if (waste
< minwaste
) {
3893 if (cflags
& KMC_QCACHE
)
3894 bestfit
= VMEM_QCACHE_SLABSIZE(vmp
->vm_qcache_max
);
3895 cp
->cache_slabsize
= bestfit
;
3896 cp
->cache_mincolor
= 0;
3897 cp
->cache_maxcolor
= bestfit
% chunksize
;
3898 cp
->cache_flags
|= KMF_HASH
;
3901 cp
->cache_maxchunks
= (cp
->cache_slabsize
/ cp
->cache_chunksize
);
3902 cp
->cache_partial_binshift
= highbit(cp
->cache_maxchunks
/ 16) + 1;
3905 * Disallowing prefill when either the DEBUG or HASH flag is set or when
3906 * there is a constructor avoids some tricky issues with debug setup
3907 * that may be revisited later. We cannot allow prefill in a
3908 * metadata cache because of potential recursion.
3910 if (vmp
== kmem_msb_arena
||
3911 cp
->cache_flags
& (KMF_HASH
| KMF_BUFTAG
) ||
3912 cp
->cache_constructor
!= NULL
)
3913 cp
->cache_flags
&= ~KMF_PREFILL
;
3915 if (cp
->cache_flags
& KMF_HASH
) {
3916 ASSERT(!(cflags
& KMC_NOHASH
));
3917 cp
->cache_bufctl_cache
= (cp
->cache_flags
& KMF_AUDIT
) ?
3918 kmem_bufctl_audit_cache
: kmem_bufctl_cache
;
3921 if (cp
->cache_maxcolor
>= vmp
->vm_quantum
)
3922 cp
->cache_maxcolor
= vmp
->vm_quantum
- 1;
3924 cp
->cache_color
= cp
->cache_mincolor
;
3927 * Initialize the rest of the slab layer.
3929 mutex_init(&cp
->cache_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
3931 avl_create(&cp
->cache_partial_slabs
, kmem_partial_slab_cmp
,
3932 sizeof (kmem_slab_t
), offsetof(kmem_slab_t
, slab_link
));
3933 ASSERT(sizeof (list_node_t
) <= sizeof (avl_node_t
));
3934 /* reuse partial slab AVL linkage for complete slab list linkage */
3935 list_create(&cp
->cache_complete_slabs
,
3936 sizeof (kmem_slab_t
), offsetof(kmem_slab_t
, slab_link
));
3938 if (cp
->cache_flags
& KMF_HASH
) {
3939 cp
->cache_hash_table
= vmem_alloc(kmem_hash_arena
,
3940 KMEM_HASH_INITIAL
* sizeof (void *), VM_SLEEP
);
3941 bzero(cp
->cache_hash_table
,
3942 KMEM_HASH_INITIAL
* sizeof (void *));
3943 cp
->cache_hash_mask
= KMEM_HASH_INITIAL
- 1;
3944 cp
->cache_hash_shift
= highbit((ulong_t
)chunksize
) - 1;
3948 * Initialize the depot.
3950 mutex_init(&cp
->cache_depot_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
3952 for (mtp
= kmem_magtype
; chunksize
<= mtp
->mt_minbuf
; mtp
++)
3955 cp
->cache_magtype
= mtp
;
3958 * Initialize the CPU layer.
3960 for (cpu_seqid
= 0; cpu_seqid
< max_ncpus
; cpu_seqid
++) {
3961 kmem_cpu_cache_t
*ccp
= &cp
->cache_cpu
[cpu_seqid
];
3962 mutex_init(&ccp
->cc_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
3963 ccp
->cc_flags
= cp
->cache_flags
;
3964 ccp
->cc_rounds
= -1;
3965 ccp
->cc_prounds
= -1;
3969 * Create the cache's kstats.
3971 if ((cp
->cache_kstat
= kstat_create("unix", 0, cp
->cache_name
,
3972 "kmem_cache", KSTAT_TYPE_NAMED
,
3973 sizeof (kmem_cache_kstat
) / sizeof (kstat_named_t
),
3974 KSTAT_FLAG_VIRTUAL
)) != NULL
) {
3975 cp
->cache_kstat
->ks_data
= &kmem_cache_kstat
;
3976 cp
->cache_kstat
->ks_update
= kmem_cache_kstat_update
;
3977 cp
->cache_kstat
->ks_private
= cp
;
3978 cp
->cache_kstat
->ks_lock
= &kmem_cache_kstat_lock
;
3979 kstat_install(cp
->cache_kstat
);
3983 * Add the cache to the global list. This makes it visible
3984 * to kmem_update(), so the cache must be ready for business.
3986 mutex_enter(&kmem_cache_lock
);
3987 list_insert_tail(&kmem_caches
, cp
);
3988 mutex_exit(&kmem_cache_lock
);
3991 kmem_cache_magazine_enable(cp
);
3997 kmem_move_cmp(const void *buf
, const void *p
)
3999 const kmem_move_t
*kmm
= p
;
4000 uintptr_t v1
= (uintptr_t)buf
;
4001 uintptr_t v2
= (uintptr_t)kmm
->kmm_from_buf
;
4002 return (v1
< v2
? -1 : (v1
> v2
? 1 : 0));
4006 kmem_reset_reclaim_threshold(kmem_defrag_t
*kmd
)
4008 kmd
->kmd_reclaim_numer
= 1;
4012 * Initially, when choosing candidate slabs for buffers to move, we want to be
4013 * very selective and take only slabs that are less than
4014 * (1 / KMEM_VOID_FRACTION) allocated. If we have difficulty finding candidate
4015 * slabs, then we raise the allocation ceiling incrementally. The reclaim
4016 * threshold is reset to (1 / KMEM_VOID_FRACTION) as soon as the cache is no
4017 * longer fragmented.
4020 kmem_adjust_reclaim_threshold(kmem_defrag_t
*kmd
, int direction
)
4022 if (direction
> 0) {
4023 /* make it easier to find a candidate slab */
4024 if (kmd
->kmd_reclaim_numer
< (KMEM_VOID_FRACTION
- 1)) {
4025 kmd
->kmd_reclaim_numer
++;
4028 /* be more selective */
4029 if (kmd
->kmd_reclaim_numer
> 1) {
4030 kmd
->kmd_reclaim_numer
--;
4036 kmem_cache_set_move(kmem_cache_t
*cp
,
4037 kmem_cbrc_t (*move
)(void *, void *, size_t, void *))
4039 kmem_defrag_t
*defrag
;
4041 ASSERT(move
!= NULL
);
4043 * The consolidator does not support NOTOUCH caches because kmem cannot
4044 * initialize their slabs with the 0xbaddcafe memory pattern, which sets
4045 * a low order bit usable by clients to distinguish uninitialized memory
4046 * from known objects (see kmem_slab_create).
4048 ASSERT(!(cp
->cache_cflags
& KMC_NOTOUCH
));
4049 ASSERT(!(cp
->cache_cflags
& KMC_IDENTIFIER
));
4052 * We should not be holding anyone's cache lock when calling
4053 * kmem_cache_alloc(), so allocate in all cases before acquiring the
4056 defrag
= kmem_cache_alloc(kmem_defrag_cache
, KM_SLEEP
);
4058 mutex_enter(&cp
->cache_lock
);
4060 if (KMEM_IS_MOVABLE(cp
)) {
4061 if (cp
->cache_move
== NULL
) {
4062 ASSERT(cp
->cache_slab_alloc
== 0);
4064 cp
->cache_defrag
= defrag
;
4065 defrag
= NULL
; /* nothing to free */
4066 bzero(cp
->cache_defrag
, sizeof (kmem_defrag_t
));
4067 avl_create(&cp
->cache_defrag
->kmd_moves_pending
,
4068 kmem_move_cmp
, sizeof (kmem_move_t
),
4069 offsetof(kmem_move_t
, kmm_entry
));
4070 ASSERT(sizeof (list_node_t
) <= sizeof (avl_node_t
));
4071 /* reuse the slab's AVL linkage for deadlist linkage */
4072 list_create(&cp
->cache_defrag
->kmd_deadlist
,
4073 sizeof (kmem_slab_t
),
4074 offsetof(kmem_slab_t
, slab_link
));
4075 kmem_reset_reclaim_threshold(cp
->cache_defrag
);
4077 cp
->cache_move
= move
;
4080 mutex_exit(&cp
->cache_lock
);
4082 if (defrag
!= NULL
) {
4083 kmem_cache_free(kmem_defrag_cache
, defrag
); /* unused */
4088 kmem_cache_destroy(kmem_cache_t
*cp
)
4093 * Remove the cache from the global cache list so that no one else
4094 * can schedule tasks on its behalf, wait for any pending tasks to
4095 * complete, purge the cache, and then destroy it.
4097 mutex_enter(&kmem_cache_lock
);
4098 list_remove(&kmem_caches
, cp
);
4099 mutex_exit(&kmem_cache_lock
);
4101 if (kmem_taskq
!= NULL
)
4102 taskq_wait(kmem_taskq
);
4104 if (kmem_move_taskq
!= NULL
&& cp
->cache_defrag
!= NULL
)
4105 taskq_wait(kmem_move_taskq
);
4107 kmem_cache_magazine_purge(cp
);
4109 mutex_enter(&cp
->cache_lock
);
4110 if (cp
->cache_buftotal
!= 0)
4111 cmn_err(CE_WARN
, "kmem_cache_destroy: '%s' (%p) not empty",
4112 cp
->cache_name
, (void *)cp
);
4113 if (cp
->cache_defrag
!= NULL
) {
4114 avl_destroy(&cp
->cache_defrag
->kmd_moves_pending
);
4115 list_destroy(&cp
->cache_defrag
->kmd_deadlist
);
4116 kmem_cache_free(kmem_defrag_cache
, cp
->cache_defrag
);
4117 cp
->cache_defrag
= NULL
;
4120 * The cache is now dead. There should be no further activity. We
4121 * enforce this by setting land mines in the constructor, destructor,
4122 * reclaim, and move routines that induce a kernel text fault if
4125 cp
->cache_constructor
= (int (*)(void *, void *, int))1;
4126 cp
->cache_destructor
= (void (*)(void *, void *))2;
4127 cp
->cache_reclaim
= (void (*)(void *))3;
4128 cp
->cache_move
= (kmem_cbrc_t (*)(void *, void *, size_t, void *))4;
4129 mutex_exit(&cp
->cache_lock
);
4131 kstat_delete(cp
->cache_kstat
);
4133 if (cp
->cache_hash_table
!= NULL
)
4134 vmem_free(kmem_hash_arena
, cp
->cache_hash_table
,
4135 (cp
->cache_hash_mask
+ 1) * sizeof (void *));
4137 for (cpu_seqid
= 0; cpu_seqid
< max_ncpus
; cpu_seqid
++)
4138 mutex_destroy(&cp
->cache_cpu
[cpu_seqid
].cc_lock
);
4140 mutex_destroy(&cp
->cache_depot_lock
);
4141 mutex_destroy(&cp
->cache_lock
);
4143 vmem_free(kmem_cache_arena
, cp
, KMEM_CACHE_SIZE(max_ncpus
));
4148 kmem_cpu_setup(cpu_setup_t what
, int id
, void *arg
)
4150 ASSERT(MUTEX_HELD(&cpu_lock
));
4151 if (what
== CPU_UNCONFIG
) {
4152 kmem_cache_applyall(kmem_cache_magazine_purge
,
4153 kmem_taskq
, TQ_SLEEP
);
4154 kmem_cache_applyall(kmem_cache_magazine_enable
,
4155 kmem_taskq
, TQ_SLEEP
);
4161 kmem_alloc_caches_create(const int *array
, size_t count
,
4162 kmem_cache_t
**alloc_table
, size_t maxbuf
, uint_t shift
)
4164 char name
[KMEM_CACHE_NAMELEN
+ 1];
4165 size_t table_unit
= (1 << shift
); /* range of one alloc_table entry */
4166 size_t size
= table_unit
;
4169 for (i
= 0; i
< count
; i
++) {
4170 size_t cache_size
= array
[i
];
4171 size_t align
= KMEM_ALIGN
;
4174 /* if the table has an entry for maxbuf, we're done */
4178 /* cache size must be a multiple of the table unit */
4179 ASSERT(P2PHASE(cache_size
, table_unit
) == 0);
4182 * If they allocate a multiple of the coherency granularity,
4183 * they get a coherency-granularity-aligned address.
4185 if (IS_P2ALIGNED(cache_size
, 64))
4187 if (IS_P2ALIGNED(cache_size
, PAGESIZE
))
4189 (void) snprintf(name
, sizeof (name
),
4190 "kmem_alloc_%lu", cache_size
);
4191 cp
= kmem_cache_create(name
, cache_size
, align
,
4192 NULL
, NULL
, NULL
, NULL
, NULL
, KMC_KMEM_ALLOC
);
4194 while (size
<= cache_size
) {
4195 alloc_table
[(size
- 1) >> shift
] = cp
;
4200 ASSERT(size
> maxbuf
); /* i.e. maxbuf <= max(cache_size) */
4204 kmem_cache_init(int pass
, int use_large_pages
)
4208 kmem_magtype_t
*mtp
;
4210 for (i
= 0; i
< sizeof (kmem_magtype
) / sizeof (*mtp
); i
++) {
4211 char name
[KMEM_CACHE_NAMELEN
+ 1];
4213 mtp
= &kmem_magtype
[i
];
4214 (void) sprintf(name
, "kmem_magazine_%d", mtp
->mt_magsize
);
4215 mtp
->mt_cache
= kmem_cache_create(name
,
4216 (mtp
->mt_magsize
+ 1) * sizeof (void *),
4217 mtp
->mt_align
, NULL
, NULL
, NULL
, NULL
,
4218 kmem_msb_arena
, KMC_NOHASH
);
4221 kmem_slab_cache
= kmem_cache_create("kmem_slab_cache",
4222 sizeof (kmem_slab_t
), 0, NULL
, NULL
, NULL
, NULL
,
4223 kmem_msb_arena
, KMC_NOHASH
);
4225 kmem_bufctl_cache
= kmem_cache_create("kmem_bufctl_cache",
4226 sizeof (kmem_bufctl_t
), 0, NULL
, NULL
, NULL
, NULL
,
4227 kmem_msb_arena
, KMC_NOHASH
);
4229 kmem_bufctl_audit_cache
= kmem_cache_create("kmem_bufctl_audit_cache",
4230 sizeof (kmem_bufctl_audit_t
), 0, NULL
, NULL
, NULL
, NULL
,
4231 kmem_msb_arena
, KMC_NOHASH
);
4234 kmem_va_arena
= vmem_create("kmem_va",
4236 vmem_alloc
, vmem_free
, heap_arena
,
4237 8 * PAGESIZE
, VM_SLEEP
);
4239 if (use_large_pages
) {
4240 kmem_default_arena
= vmem_xcreate("kmem_default",
4242 segkmem_alloc_lp
, segkmem_free_lp
, kmem_va_arena
,
4243 0, VMC_DUMPSAFE
| VM_SLEEP
);
4245 kmem_default_arena
= vmem_create("kmem_default",
4247 segkmem_alloc
, segkmem_free
, kmem_va_arena
,
4248 0, VMC_DUMPSAFE
| VM_SLEEP
);
4251 /* Figure out what our maximum cache size is */
4252 maxbuf
= kmem_max_cached
;
4253 if (maxbuf
<= KMEM_MAXBUF
) {
4255 kmem_max_cached
= KMEM_MAXBUF
;
4259 sizeof (kmem_big_alloc_sizes
) / sizeof (int);
4261 * Round maxbuf up to an existing cache size. If maxbuf
4262 * is larger than the largest cache, we truncate it to
4263 * the largest cache's size.
4265 for (i
= 0; i
< max
; i
++) {
4266 size
= kmem_big_alloc_sizes
[i
];
4270 kmem_max_cached
= maxbuf
= size
;
4274 * The big alloc table may not be completely overwritten, so
4275 * we clear out any stale cache pointers from the first pass.
4277 bzero(kmem_big_alloc_table
, sizeof (kmem_big_alloc_table
));
4280 * During the first pass, the kmem_alloc_* caches
4281 * are treated as metadata.
4283 kmem_default_arena
= kmem_msb_arena
;
4284 maxbuf
= KMEM_BIG_MAXBUF_32BIT
;
4288 * Set up the default caches to back kmem_alloc()
4290 kmem_alloc_caches_create(
4291 kmem_alloc_sizes
, sizeof (kmem_alloc_sizes
) / sizeof (int),
4292 kmem_alloc_table
, KMEM_MAXBUF
, KMEM_ALIGN_SHIFT
);
4294 kmem_alloc_caches_create(
4295 kmem_big_alloc_sizes
, sizeof (kmem_big_alloc_sizes
) / sizeof (int),
4296 kmem_big_alloc_table
, maxbuf
, KMEM_BIG_SHIFT
);
4298 kmem_big_alloc_table_max
= maxbuf
>> KMEM_BIG_SHIFT
;
4305 int old_kmem_flags
= kmem_flags
;
4306 int use_large_pages
= 0;
4307 size_t maxverify
, minfirewall
;
4312 * Don't do firewalled allocations if the heap is less than 1TB
4313 * (i.e. on a 32-bit kernel)
4314 * The resulting VM_NEXTFIT allocations would create too much
4315 * fragmentation in a small heap.
4318 maxverify
= minfirewall
= PAGESIZE
/ 2;
4320 maxverify
= minfirewall
= ULONG_MAX
;
4323 ASSERT(sizeof (kmem_cpu_cache_t
) == KMEM_CPU_CACHE_SIZE
);
4325 list_create(&kmem_caches
, sizeof (kmem_cache_t
),
4326 offsetof(kmem_cache_t
, cache_link
));
4328 kmem_metadata_arena
= vmem_create("kmem_metadata", NULL
, 0, PAGESIZE
,
4329 vmem_alloc
, vmem_free
, heap_arena
, 8 * PAGESIZE
,
4330 VM_SLEEP
| VMC_NO_QCACHE
);
4332 kmem_msb_arena
= vmem_create("kmem_msb", NULL
, 0,
4333 PAGESIZE
, segkmem_alloc
, segkmem_free
, kmem_metadata_arena
, 0,
4334 VMC_DUMPSAFE
| VM_SLEEP
);
4336 kmem_cache_arena
= vmem_create("kmem_cache", NULL
, 0, KMEM_ALIGN
,
4337 segkmem_alloc
, segkmem_free
, kmem_metadata_arena
, 0, VM_SLEEP
);
4339 kmem_hash_arena
= vmem_create("kmem_hash", NULL
, 0, KMEM_ALIGN
,
4340 segkmem_alloc
, segkmem_free
, kmem_metadata_arena
, 0, VM_SLEEP
);
4342 kmem_log_arena
= vmem_create("kmem_log", NULL
, 0, KMEM_ALIGN
,
4343 segkmem_alloc
, segkmem_free
, heap_arena
, 0, VM_SLEEP
);
4345 kmem_firewall_va_arena
= vmem_create("kmem_firewall_va",
4347 kmem_firewall_va_alloc
, kmem_firewall_va_free
, heap_arena
,
4350 kmem_firewall_arena
= vmem_create("kmem_firewall", NULL
, 0, PAGESIZE
,
4351 segkmem_alloc
, segkmem_free
, kmem_firewall_va_arena
, 0,
4352 VMC_DUMPSAFE
| VM_SLEEP
);
4354 /* temporary oversize arena for mod_read_system_file */
4355 kmem_oversize_arena
= vmem_create("kmem_oversize", NULL
, 0, PAGESIZE
,
4356 segkmem_alloc
, segkmem_free
, heap_arena
, 0, VM_SLEEP
);
4358 kmem_reap_interval
= 15 * hz
;
4361 * Read /etc/system. This is a chicken-and-egg problem because
4362 * kmem_flags may be set in /etc/system, but mod_read_system_file()
4363 * needs to use the allocator. The simplest solution is to create
4364 * all the standard kmem caches, read /etc/system, destroy all the
4365 * caches we just created, and then create them all again in light
4366 * of the (possibly) new kmem_flags and other kmem tunables.
4368 kmem_cache_init(1, 0);
4370 mod_read_system_file(boothowto
& RB_ASKNAME
);
4372 while ((cp
= list_tail(&kmem_caches
)) != NULL
)
4373 kmem_cache_destroy(cp
);
4375 vmem_destroy(kmem_oversize_arena
);
4377 if (old_kmem_flags
& KMF_STICKY
)
4378 kmem_flags
= old_kmem_flags
;
4380 if (!(kmem_flags
& KMF_AUDIT
))
4381 vmem_seg_size
= offsetof(vmem_seg_t
, vs_thread
);
4383 if (kmem_maxverify
== 0)
4384 kmem_maxverify
= maxverify
;
4386 if (kmem_minfirewall
== 0)
4387 kmem_minfirewall
= minfirewall
;
4390 * give segkmem a chance to figure out if we are using large pages
4391 * for the kernel heap
4393 use_large_pages
= segkmem_lpsetup();
4396 * To protect against corruption, we keep the actual number of callers
4397 * KMF_LITE records seperate from the tunable. We arbitrarily clamp
4398 * to 16, since the overhead for small buffers quickly gets out of
4401 * The real limit would depend on the needs of the largest KMC_NOHASH
4404 kmem_lite_count
= MIN(MAX(0, kmem_lite_pcs
), 16);
4405 kmem_lite_pcs
= kmem_lite_count
;
4408 * Normally, we firewall oversized allocations when possible, but
4409 * if we are using large pages for kernel memory, and we don't have
4410 * any non-LITE debugging flags set, we want to allocate oversized
4411 * buffers from large pages, and so skip the firewalling.
4413 if (use_large_pages
&&
4414 ((kmem_flags
& KMF_LITE
) || !(kmem_flags
& KMF_DEBUG
))) {
4415 kmem_oversize_arena
= vmem_xcreate("kmem_oversize", NULL
, 0,
4416 PAGESIZE
, segkmem_alloc_lp
, segkmem_free_lp
, heap_arena
,
4417 0, VMC_DUMPSAFE
| VM_SLEEP
);
4419 kmem_oversize_arena
= vmem_create("kmem_oversize",
4421 segkmem_alloc
, segkmem_free
, kmem_minfirewall
< ULONG_MAX
?
4422 kmem_firewall_va_arena
: heap_arena
, 0, VMC_DUMPSAFE
|
4426 kmem_cache_init(2, use_large_pages
);
4428 if (kmem_flags
& (KMF_AUDIT
| KMF_RANDOMIZE
)) {
4429 if (kmem_transaction_log_size
== 0)
4430 kmem_transaction_log_size
= kmem_maxavail() / 50;
4431 kmem_transaction_log
= kmem_log_init(kmem_transaction_log_size
);
4434 if (kmem_flags
& (KMF_CONTENTS
| KMF_RANDOMIZE
)) {
4435 if (kmem_content_log_size
== 0)
4436 kmem_content_log_size
= kmem_maxavail() / 50;
4437 kmem_content_log
= kmem_log_init(kmem_content_log_size
);
4440 kmem_failure_log
= kmem_log_init(kmem_failure_log_size
);
4442 kmem_slab_log
= kmem_log_init(kmem_slab_log_size
);
4445 * Initialize STREAMS message caches so allocb() is available.
4446 * This allows us to initialize the logging framework (cmn_err(9F),
4447 * strlog(9F), etc) so we can start recording messages.
4452 * Initialize the ZSD framework in Zones so modules loaded henceforth
4453 * can register their callbacks.
4461 * Warn about invalid or dangerous values of kmem_flags.
4462 * Always warn about unsupported values.
4464 if (((kmem_flags
& ~(KMF_AUDIT
| KMF_DEADBEEF
| KMF_REDZONE
|
4465 KMF_CONTENTS
| KMF_LITE
)) != 0) ||
4466 ((kmem_flags
& KMF_LITE
) && kmem_flags
!= KMF_LITE
))
4467 cmn_err(CE_WARN
, "kmem_flags set to unsupported value 0x%x. "
4468 "See the Solaris Tunable Parameters Reference Manual.",
4472 if ((kmem_flags
& KMF_DEBUG
) == 0)
4473 cmn_err(CE_NOTE
, "kmem debugging disabled.");
4476 * For non-debug kernels, the only "normal" flags are 0, KMF_LITE,
4477 * KMF_REDZONE, and KMF_CONTENTS (the last because it is only enabled
4478 * if KMF_AUDIT is set). We should warn the user about the performance
4479 * penalty of KMF_AUDIT or KMF_DEADBEEF if they are set and KMF_LITE
4480 * isn't set (since that disables AUDIT).
4482 if (!(kmem_flags
& KMF_LITE
) &&
4483 (kmem_flags
& (KMF_AUDIT
| KMF_DEADBEEF
)) != 0)
4484 cmn_err(CE_WARN
, "High-overhead kmem debugging features "
4485 "enabled (kmem_flags = 0x%x). Performance degradation "
4486 "and large memory overhead possible. See the Solaris "
4487 "Tunable Parameters Reference Manual.", kmem_flags
);
4488 #endif /* not DEBUG */
4490 kmem_cache_applyall(kmem_cache_magazine_enable
, NULL
, TQ_SLEEP
);
4495 * Initialize the platform-specific aligned/DMA memory allocator.
4500 * Initialize 32-bit ID cache.
4505 * Initialize the networking stack so modules loaded can
4506 * register their callbacks.
4512 kmem_move_init(void)
4514 kmem_defrag_cache
= kmem_cache_create("kmem_defrag_cache",
4515 sizeof (kmem_defrag_t
), 0, NULL
, NULL
, NULL
, NULL
,
4516 kmem_msb_arena
, KMC_NOHASH
);
4517 kmem_move_cache
= kmem_cache_create("kmem_move_cache",
4518 sizeof (kmem_move_t
), 0, NULL
, NULL
, NULL
, NULL
,
4519 kmem_msb_arena
, KMC_NOHASH
);
4522 * kmem guarantees that move callbacks are sequential and that even
4523 * across multiple caches no two moves ever execute simultaneously.
4524 * Move callbacks are processed on a separate taskq so that client code
4525 * does not interfere with internal maintenance tasks.
4527 kmem_move_taskq
= taskq_create_instance("kmem_move_taskq", 0, 1,
4528 minclsyspri
, 100, INT_MAX
, TASKQ_PREPOPULATE
);
4532 kmem_thread_init(void)
4535 kmem_taskq
= taskq_create_instance("kmem_taskq", 0, 1, minclsyspri
,
4536 300, INT_MAX
, TASKQ_PREPOPULATE
);
4542 mutex_enter(&cpu_lock
);
4543 register_cpu_setup_func(kmem_cpu_setup
, NULL
);
4544 mutex_exit(&cpu_lock
);
4546 kmem_update_timeout(NULL
);
4552 * Return the slab of the allocated buffer, or NULL if the buffer is not
4553 * allocated. This function may be called with a known slab address to determine
4554 * whether or not the buffer is allocated, or with a NULL slab address to obtain
4555 * an allocated buffer's slab.
4557 static kmem_slab_t
*
4558 kmem_slab_allocated(kmem_cache_t
*cp
, kmem_slab_t
*sp
, void *buf
)
4560 kmem_bufctl_t
*bcp
, *bufbcp
;
4562 ASSERT(MUTEX_HELD(&cp
->cache_lock
));
4563 ASSERT(sp
== NULL
|| KMEM_SLAB_MEMBER(sp
, buf
));
4565 if (cp
->cache_flags
& KMF_HASH
) {
4566 for (bcp
= *KMEM_HASH(cp
, buf
);
4567 (bcp
!= NULL
) && (bcp
->bc_addr
!= buf
);
4568 bcp
= bcp
->bc_next
) {
4571 ASSERT(sp
!= NULL
&& bcp
!= NULL
? sp
== bcp
->bc_slab
: 1);
4572 return (bcp
== NULL
? NULL
: bcp
->bc_slab
);
4576 sp
= KMEM_SLAB(cp
, buf
);
4578 bufbcp
= KMEM_BUFCTL(cp
, buf
);
4579 for (bcp
= sp
->slab_head
;
4580 (bcp
!= NULL
) && (bcp
!= bufbcp
);
4581 bcp
= bcp
->bc_next
) {
4584 return (bcp
== NULL
? sp
: NULL
);
4588 kmem_slab_is_reclaimable(kmem_cache_t
*cp
, kmem_slab_t
*sp
, int flags
)
4590 long refcnt
= sp
->slab_refcnt
;
4592 ASSERT(cp
->cache_defrag
!= NULL
);
4595 * For code coverage we want to be able to move an object within the
4596 * same slab (the only partial slab) even if allocating the destination
4597 * buffer resulted in a completely allocated slab.
4599 if (flags
& KMM_DEBUG
) {
4600 return ((flags
& KMM_DESPERATE
) ||
4601 ((sp
->slab_flags
& KMEM_SLAB_NOMOVE
) == 0));
4604 /* If we're desperate, we don't care if the client said NO. */
4605 if (flags
& KMM_DESPERATE
) {
4606 return (refcnt
< sp
->slab_chunks
); /* any partial */
4609 if (sp
->slab_flags
& KMEM_SLAB_NOMOVE
) {
4613 if ((refcnt
== 1) || kmem_move_any_partial
) {
4614 return (refcnt
< sp
->slab_chunks
);
4618 * The reclaim threshold is adjusted at each kmem_cache_scan() so that
4619 * slabs with a progressively higher percentage of used buffers can be
4620 * reclaimed until the cache as a whole is no longer fragmented.
4622 * sp->slab_refcnt kmd_reclaim_numer
4623 * --------------- < ------------------
4624 * sp->slab_chunks KMEM_VOID_FRACTION
4626 return ((refcnt
* KMEM_VOID_FRACTION
) <
4627 (sp
->slab_chunks
* cp
->cache_defrag
->kmd_reclaim_numer
));
4631 * May be called from the kmem_move_taskq, from kmem_cache_move_notify_task(),
4632 * or when the buffer is freed.
4635 kmem_slab_move_yes(kmem_cache_t
*cp
, kmem_slab_t
*sp
, void *from_buf
)
4637 ASSERT(MUTEX_HELD(&cp
->cache_lock
));
4638 ASSERT(KMEM_SLAB_MEMBER(sp
, from_buf
));
4640 if (!KMEM_SLAB_IS_PARTIAL(sp
)) {
4644 if (sp
->slab_flags
& KMEM_SLAB_NOMOVE
) {
4645 if (KMEM_SLAB_OFFSET(sp
, from_buf
) == sp
->slab_stuck_offset
) {
4646 avl_remove(&cp
->cache_partial_slabs
, sp
);
4647 sp
->slab_flags
&= ~KMEM_SLAB_NOMOVE
;
4648 sp
->slab_stuck_offset
= (uint32_t)-1;
4649 avl_add(&cp
->cache_partial_slabs
, sp
);
4652 sp
->slab_later_count
= 0;
4653 sp
->slab_stuck_offset
= (uint32_t)-1;
4658 kmem_slab_move_no(kmem_cache_t
*cp
, kmem_slab_t
*sp
, void *from_buf
)
4660 ASSERT(taskq_member(kmem_move_taskq
, curthread
));
4661 ASSERT(MUTEX_HELD(&cp
->cache_lock
));
4662 ASSERT(KMEM_SLAB_MEMBER(sp
, from_buf
));
4664 if (!KMEM_SLAB_IS_PARTIAL(sp
)) {
4668 avl_remove(&cp
->cache_partial_slabs
, sp
);
4669 sp
->slab_later_count
= 0;
4670 sp
->slab_flags
|= KMEM_SLAB_NOMOVE
;
4671 sp
->slab_stuck_offset
= KMEM_SLAB_OFFSET(sp
, from_buf
);
4672 avl_add(&cp
->cache_partial_slabs
, sp
);
4675 static void kmem_move_end(kmem_cache_t
*, kmem_move_t
*);
4678 * The move callback takes two buffer addresses, the buffer to be moved, and a
4679 * newly allocated and constructed buffer selected by kmem as the destination.
4680 * It also takes the size of the buffer and an optional user argument specified
4681 * at cache creation time. kmem guarantees that the buffer to be moved has not
4682 * been unmapped by the virtual memory subsystem. Beyond that, it cannot
4683 * guarantee the present whereabouts of the buffer to be moved, so it is up to
4684 * the client to safely determine whether or not it is still using the buffer.
4685 * The client must not free either of the buffers passed to the move callback,
4686 * since kmem wants to free them directly to the slab layer. The client response
4687 * tells kmem which of the two buffers to free:
4689 * YES kmem frees the old buffer (the move was successful)
4690 * NO kmem frees the new buffer, marks the slab of the old buffer
4691 * non-reclaimable to avoid bothering the client again
4692 * LATER kmem frees the new buffer, increments slab_later_count
4693 * DONT_KNOW kmem frees the new buffer
4694 * DONT_NEED kmem frees both the old buffer and the new buffer
4696 * The pending callback argument now being processed contains both of the
4697 * buffers (old and new) passed to the move callback function, the slab of the
4698 * old buffer, and flags related to the move request, such as whether or not the
4699 * system was desperate for memory.
4701 * Slabs are not freed while there is a pending callback, but instead are kept
4702 * on a deadlist, which is drained after the last callback completes. This means
4703 * that slabs are safe to access until kmem_move_end(), no matter how many of
4704 * their buffers have been freed. Once slab_refcnt reaches zero, it stays at
4705 * zero for as long as the slab remains on the deadlist and until the slab is
4709 kmem_move_buffer(kmem_move_t
*callback
)
4711 kmem_cbrc_t response
;
4712 kmem_slab_t
*sp
= callback
->kmm_from_slab
;
4713 kmem_cache_t
*cp
= sp
->slab_cache
;
4714 boolean_t free_on_slab
;
4716 ASSERT(taskq_member(kmem_move_taskq
, curthread
));
4717 ASSERT(MUTEX_NOT_HELD(&cp
->cache_lock
));
4718 ASSERT(KMEM_SLAB_MEMBER(sp
, callback
->kmm_from_buf
));
4721 * The number of allocated buffers on the slab may have changed since we
4722 * last checked the slab's reclaimability (when the pending move was
4723 * enqueued), or the client may have responded NO when asked to move
4724 * another buffer on the same slab.
4726 if (!kmem_slab_is_reclaimable(cp
, sp
, callback
->kmm_flags
)) {
4727 kmem_slab_free(cp
, callback
->kmm_to_buf
);
4728 kmem_move_end(cp
, callback
);
4733 * Checking the slab layer is easy, so we might as well do that here
4734 * in case we can avoid bothering the client.
4736 mutex_enter(&cp
->cache_lock
);
4737 free_on_slab
= (kmem_slab_allocated(cp
, sp
,
4738 callback
->kmm_from_buf
) == NULL
);
4739 mutex_exit(&cp
->cache_lock
);
4742 kmem_slab_free(cp
, callback
->kmm_to_buf
);
4743 kmem_move_end(cp
, callback
);
4747 if (cp
->cache_flags
& KMF_BUFTAG
) {
4749 * Make kmem_cache_alloc_debug() apply the constructor for us.
4751 if (kmem_cache_alloc_debug(cp
, callback
->kmm_to_buf
,
4752 KM_NOSLEEP
, 1, caller()) != 0) {
4753 kmem_move_end(cp
, callback
);
4756 } else if (cp
->cache_constructor
!= NULL
&&
4757 cp
->cache_constructor(callback
->kmm_to_buf
, cp
->cache_private
,
4759 atomic_inc_64(&cp
->cache_alloc_fail
);
4760 kmem_slab_free(cp
, callback
->kmm_to_buf
);
4761 kmem_move_end(cp
, callback
);
4765 cp
->cache_defrag
->kmd_callbacks
++;
4766 cp
->cache_defrag
->kmd_thread
= curthread
;
4767 cp
->cache_defrag
->kmd_from_buf
= callback
->kmm_from_buf
;
4768 cp
->cache_defrag
->kmd_to_buf
= callback
->kmm_to_buf
;
4769 DTRACE_PROBE2(kmem__move__start
, kmem_cache_t
*, cp
, kmem_move_t
*,
4772 response
= cp
->cache_move(callback
->kmm_from_buf
,
4773 callback
->kmm_to_buf
, cp
->cache_bufsize
, cp
->cache_private
);
4775 DTRACE_PROBE3(kmem__move__end
, kmem_cache_t
*, cp
, kmem_move_t
*,
4776 callback
, kmem_cbrc_t
, response
);
4777 cp
->cache_defrag
->kmd_thread
= NULL
;
4778 cp
->cache_defrag
->kmd_from_buf
= NULL
;
4779 cp
->cache_defrag
->kmd_to_buf
= NULL
;
4781 if (response
== KMEM_CBRC_YES
) {
4782 cp
->cache_defrag
->kmd_yes
++;
4783 kmem_slab_free_constructed(cp
, callback
->kmm_from_buf
, B_FALSE
);
4784 /* slab safe to access until kmem_move_end() */
4785 if (sp
->slab_refcnt
== 0)
4786 cp
->cache_defrag
->kmd_slabs_freed
++;
4787 mutex_enter(&cp
->cache_lock
);
4788 kmem_slab_move_yes(cp
, sp
, callback
->kmm_from_buf
);
4789 mutex_exit(&cp
->cache_lock
);
4790 kmem_move_end(cp
, callback
);
4796 cp
->cache_defrag
->kmd_no
++;
4797 mutex_enter(&cp
->cache_lock
);
4798 kmem_slab_move_no(cp
, sp
, callback
->kmm_from_buf
);
4799 mutex_exit(&cp
->cache_lock
);
4801 case KMEM_CBRC_LATER
:
4802 cp
->cache_defrag
->kmd_later
++;
4803 mutex_enter(&cp
->cache_lock
);
4804 if (!KMEM_SLAB_IS_PARTIAL(sp
)) {
4805 mutex_exit(&cp
->cache_lock
);
4809 if (++sp
->slab_later_count
>= KMEM_DISBELIEF
) {
4810 kmem_slab_move_no(cp
, sp
, callback
->kmm_from_buf
);
4811 } else if (!(sp
->slab_flags
& KMEM_SLAB_NOMOVE
)) {
4812 sp
->slab_stuck_offset
= KMEM_SLAB_OFFSET(sp
,
4813 callback
->kmm_from_buf
);
4815 mutex_exit(&cp
->cache_lock
);
4817 case KMEM_CBRC_DONT_NEED
:
4818 cp
->cache_defrag
->kmd_dont_need
++;
4819 kmem_slab_free_constructed(cp
, callback
->kmm_from_buf
, B_FALSE
);
4820 if (sp
->slab_refcnt
== 0)
4821 cp
->cache_defrag
->kmd_slabs_freed
++;
4822 mutex_enter(&cp
->cache_lock
);
4823 kmem_slab_move_yes(cp
, sp
, callback
->kmm_from_buf
);
4824 mutex_exit(&cp
->cache_lock
);
4826 case KMEM_CBRC_DONT_KNOW
:
4828 * If we don't know if we can move this buffer or not, we'll
4829 * just assume that we can't: if the buffer is in fact free,
4830 * then it is sitting in one of the per-CPU magazines or in
4831 * a full magazine in the depot layer. Either way, because
4832 * defrag is induced in the same logic that reaps a cache,
4833 * it's likely that full magazines will be returned to the
4834 * system soon (thereby accomplishing what we're trying to
4835 * accomplish here: return those magazines to their slabs).
4836 * Given this, any work that we might do now to locate a buffer
4837 * in a magazine is wasted (and expensive!) work; we bump
4838 * a counter in this case and otherwise assume that we can't
4841 cp
->cache_defrag
->kmd_dont_know
++;
4844 panic("'%s' (%p) unexpected move callback response %d\n",
4845 cp
->cache_name
, (void *)cp
, response
);
4848 kmem_slab_free_constructed(cp
, callback
->kmm_to_buf
, B_FALSE
);
4849 kmem_move_end(cp
, callback
);
4852 /* Return B_FALSE if there is insufficient memory for the move request. */
4854 kmem_move_begin(kmem_cache_t
*cp
, kmem_slab_t
*sp
, void *buf
, int flags
)
4858 kmem_move_t
*callback
, *pending
;
4861 ASSERT(taskq_member(kmem_taskq
, curthread
));
4862 ASSERT(MUTEX_NOT_HELD(&cp
->cache_lock
));
4863 ASSERT(sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
);
4865 callback
= kmem_cache_alloc(kmem_move_cache
, KM_NOSLEEP
);
4867 if (callback
== NULL
)
4870 callback
->kmm_from_slab
= sp
;
4871 callback
->kmm_from_buf
= buf
;
4872 callback
->kmm_flags
= flags
;
4874 mutex_enter(&cp
->cache_lock
);
4876 n
= avl_numnodes(&cp
->cache_partial_slabs
);
4877 if ((n
== 0) || ((n
== 1) && !(flags
& KMM_DEBUG
))) {
4878 mutex_exit(&cp
->cache_lock
);
4879 kmem_cache_free(kmem_move_cache
, callback
);
4880 return (B_TRUE
); /* there is no need for the move request */
4883 pending
= avl_find(&cp
->cache_defrag
->kmd_moves_pending
, buf
, &index
);
4884 if (pending
!= NULL
) {
4886 * If the move is already pending and we're desperate now,
4887 * update the move flags.
4889 if (flags
& KMM_DESPERATE
) {
4890 pending
->kmm_flags
|= KMM_DESPERATE
;
4892 mutex_exit(&cp
->cache_lock
);
4893 kmem_cache_free(kmem_move_cache
, callback
);
4897 to_buf
= kmem_slab_alloc_impl(cp
, avl_first(&cp
->cache_partial_slabs
),
4899 callback
->kmm_to_buf
= to_buf
;
4900 avl_insert(&cp
->cache_defrag
->kmd_moves_pending
, callback
, index
);
4902 mutex_exit(&cp
->cache_lock
);
4904 if (!taskq_dispatch(kmem_move_taskq
, (task_func_t
*)kmem_move_buffer
,
4905 callback
, TQ_NOSLEEP
)) {
4906 mutex_enter(&cp
->cache_lock
);
4907 avl_remove(&cp
->cache_defrag
->kmd_moves_pending
, callback
);
4908 mutex_exit(&cp
->cache_lock
);
4909 kmem_slab_free(cp
, to_buf
);
4910 kmem_cache_free(kmem_move_cache
, callback
);
4918 kmem_move_end(kmem_cache_t
*cp
, kmem_move_t
*callback
)
4922 ASSERT(cp
->cache_defrag
!= NULL
);
4923 ASSERT(taskq_member(kmem_move_taskq
, curthread
));
4924 ASSERT(MUTEX_NOT_HELD(&cp
->cache_lock
));
4926 mutex_enter(&cp
->cache_lock
);
4927 VERIFY(avl_find(&cp
->cache_defrag
->kmd_moves_pending
,
4928 callback
->kmm_from_buf
, &index
) != NULL
);
4929 avl_remove(&cp
->cache_defrag
->kmd_moves_pending
, callback
);
4930 if (avl_is_empty(&cp
->cache_defrag
->kmd_moves_pending
)) {
4931 list_t
*deadlist
= &cp
->cache_defrag
->kmd_deadlist
;
4935 * The last pending move completed. Release all slabs from the
4936 * front of the dead list except for any slab at the tail that
4937 * needs to be released from the context of kmem_move_buffers().
4938 * kmem deferred unmapping the buffers on these slabs in order
4939 * to guarantee that buffers passed to the move callback have
4940 * been touched only by kmem or by the client itself.
4942 while ((sp
= list_remove_head(deadlist
)) != NULL
) {
4943 if (sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
) {
4944 list_insert_tail(deadlist
, sp
);
4947 cp
->cache_defrag
->kmd_deadcount
--;
4948 cp
->cache_slab_destroy
++;
4949 mutex_exit(&cp
->cache_lock
);
4950 kmem_slab_destroy(cp
, sp
);
4951 mutex_enter(&cp
->cache_lock
);
4954 mutex_exit(&cp
->cache_lock
);
4955 kmem_cache_free(kmem_move_cache
, callback
);
4959 * Move buffers from least used slabs first by scanning backwards from the end
4960 * of the partial slab list. Scan at most max_scan candidate slabs and move
4961 * buffers from at most max_slabs slabs (0 for all partial slabs in both cases).
4962 * If desperate to reclaim memory, move buffers from any partial slab, otherwise
4963 * skip slabs with a ratio of allocated buffers at or above the current
4964 * threshold. Return the number of unskipped slabs (at most max_slabs, -1 if the
4965 * scan is aborted) so that the caller can adjust the reclaimability threshold
4966 * depending on how many reclaimable slabs it finds.
4968 * kmem_move_buffers() drops and reacquires cache_lock every time it issues a
4969 * move request, since it is not valid for kmem_move_begin() to call
4970 * kmem_cache_alloc() or taskq_dispatch() with cache_lock held.
4973 kmem_move_buffers(kmem_cache_t
*cp
, size_t max_scan
, size_t max_slabs
,
4978 int i
, j
; /* slab index, buffer index */
4979 int s
; /* reclaimable slabs */
4980 int b
; /* allocated (movable) buffers on reclaimable slab */
4985 ASSERT(taskq_member(kmem_taskq
, curthread
));
4986 ASSERT(MUTEX_HELD(&cp
->cache_lock
));
4987 ASSERT(kmem_move_cache
!= NULL
);
4988 ASSERT(cp
->cache_move
!= NULL
&& cp
->cache_defrag
!= NULL
);
4989 ASSERT((flags
& KMM_DEBUG
) ? !avl_is_empty(&cp
->cache_partial_slabs
) :
4990 avl_numnodes(&cp
->cache_partial_slabs
) > 1);
4992 if (kmem_move_blocked
) {
4996 if (kmem_move_fulltilt
) {
4997 flags
|= KMM_DESPERATE
;
5000 if (max_scan
== 0 || (flags
& KMM_DESPERATE
)) {
5002 * Scan as many slabs as needed to find the desired number of
5005 max_scan
= (size_t)-1;
5008 if (max_slabs
== 0 || (flags
& KMM_DESPERATE
)) {
5009 /* Find as many candidate slabs as possible. */
5010 max_slabs
= (size_t)-1;
5013 sp
= avl_last(&cp
->cache_partial_slabs
);
5014 ASSERT(KMEM_SLAB_IS_PARTIAL(sp
));
5015 for (i
= 0, s
= 0; (i
< max_scan
) && (s
< max_slabs
) && (sp
!= NULL
) &&
5016 ((sp
!= avl_first(&cp
->cache_partial_slabs
)) ||
5017 (flags
& KMM_DEBUG
));
5018 sp
= AVL_PREV(&cp
->cache_partial_slabs
, sp
), i
++) {
5020 if (!kmem_slab_is_reclaimable(cp
, sp
, flags
)) {
5025 /* Look for allocated buffers to move. */
5026 for (j
= 0, b
= 0, buf
= sp
->slab_base
;
5027 (j
< sp
->slab_chunks
) && (b
< sp
->slab_refcnt
);
5028 buf
= (((char *)buf
) + cp
->cache_chunksize
), j
++) {
5030 if (kmem_slab_allocated(cp
, sp
, buf
) == NULL
) {
5037 * Prevent the slab from being destroyed while we drop
5038 * cache_lock and while the pending move is not yet
5039 * registered. Flag the pending move while
5040 * kmd_moves_pending may still be empty, since we can't
5041 * yet rely on a non-zero pending move count to prevent
5042 * the slab from being destroyed.
5044 ASSERT(!(sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
));
5045 sp
->slab_flags
|= KMEM_SLAB_MOVE_PENDING
;
5047 * Recheck refcnt and nomove after reacquiring the lock,
5048 * since these control the order of partial slabs, and
5049 * we want to know if we can pick up the scan where we
5052 refcnt
= sp
->slab_refcnt
;
5053 nomove
= (sp
->slab_flags
& KMEM_SLAB_NOMOVE
);
5054 mutex_exit(&cp
->cache_lock
);
5056 success
= kmem_move_begin(cp
, sp
, buf
, flags
);
5059 * Now, before the lock is reacquired, kmem could
5060 * process all pending move requests and purge the
5061 * deadlist, so that upon reacquiring the lock, sp has
5062 * been remapped. Or, the client may free all the
5063 * objects on the slab while the pending moves are still
5064 * on the taskq. Therefore, the KMEM_SLAB_MOVE_PENDING
5065 * flag causes the slab to be put at the end of the
5066 * deadlist and prevents it from being destroyed, since
5067 * we plan to destroy it here after reacquiring the
5070 mutex_enter(&cp
->cache_lock
);
5071 ASSERT(sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
);
5072 sp
->slab_flags
&= ~KMEM_SLAB_MOVE_PENDING
;
5074 if (sp
->slab_refcnt
== 0) {
5076 &cp
->cache_defrag
->kmd_deadlist
;
5077 list_remove(deadlist
, sp
);
5080 &cp
->cache_defrag
->kmd_moves_pending
)) {
5082 * A pending move makes it unsafe to
5083 * destroy the slab, because even though
5084 * the move is no longer needed, the
5085 * context where that is determined
5086 * requires the slab to exist.
5087 * Fortunately, a pending move also
5088 * means we don't need to destroy the
5089 * slab here, since it will get
5090 * destroyed along with any other slabs
5091 * on the deadlist after the last
5092 * pending move completes.
5094 list_insert_head(deadlist
, sp
);
5099 * Destroy the slab now if it was completely
5100 * freed while we dropped cache_lock and there
5101 * are no pending moves. Since slab_refcnt
5102 * cannot change once it reaches zero, no new
5103 * pending moves from that slab are possible.
5105 cp
->cache_defrag
->kmd_deadcount
--;
5106 cp
->cache_slab_destroy
++;
5107 mutex_exit(&cp
->cache_lock
);
5108 kmem_slab_destroy(cp
, sp
);
5109 mutex_enter(&cp
->cache_lock
);
5111 * Since we can't pick up the scan where we left
5112 * off, abort the scan and say nothing about the
5113 * number of reclaimable slabs.
5120 * Abort the scan if there is not enough memory
5121 * for the request and say nothing about the
5122 * number of reclaimable slabs.
5128 * The slab's position changed while the lock was
5129 * dropped, so we don't know where we are in the
5130 * sequence any more.
5132 if (sp
->slab_refcnt
!= refcnt
) {
5134 * If this is a KMM_DEBUG move, the slab_refcnt
5135 * may have changed because we allocated a
5136 * destination buffer on the same slab. In that
5137 * case, we're not interested in counting it.
5141 if ((sp
->slab_flags
& KMEM_SLAB_NOMOVE
) != nomove
)
5145 * Generating a move request allocates a destination
5146 * buffer from the slab layer, bumping the first partial
5147 * slab if it is completely allocated. If the current
5148 * slab becomes the first partial slab as a result, we
5149 * can't continue to scan backwards.
5151 * If this is a KMM_DEBUG move and we allocated the
5152 * destination buffer from the last partial slab, then
5153 * the buffer we're moving is on the same slab and our
5154 * slab_refcnt has changed, causing us to return before
5155 * reaching here if there are no partial slabs left.
5157 ASSERT(!avl_is_empty(&cp
->cache_partial_slabs
));
5158 if (sp
== avl_first(&cp
->cache_partial_slabs
)) {
5160 * We're not interested in a second KMM_DEBUG
5172 typedef struct kmem_move_notify_args
{
5173 kmem_cache_t
*kmna_cache
;
5175 } kmem_move_notify_args_t
;
5178 kmem_cache_move_notify_task(void *arg
)
5180 kmem_move_notify_args_t
*args
= arg
;
5181 kmem_cache_t
*cp
= args
->kmna_cache
;
5182 void *buf
= args
->kmna_buf
;
5185 ASSERT(taskq_member(kmem_taskq
, curthread
));
5186 ASSERT(list_link_active(&cp
->cache_link
));
5188 kmem_free(args
, sizeof (kmem_move_notify_args_t
));
5189 mutex_enter(&cp
->cache_lock
);
5190 sp
= kmem_slab_allocated(cp
, NULL
, buf
);
5192 /* Ignore the notification if the buffer is no longer allocated. */
5194 mutex_exit(&cp
->cache_lock
);
5198 /* Ignore the notification if there's no reason to move the buffer. */
5199 if (avl_numnodes(&cp
->cache_partial_slabs
) > 1) {
5201 * So far the notification is not ignored. Ignore the
5202 * notification if the slab is not marked by an earlier refusal
5205 if (!(sp
->slab_flags
& KMEM_SLAB_NOMOVE
) &&
5206 (sp
->slab_later_count
== 0)) {
5207 mutex_exit(&cp
->cache_lock
);
5211 kmem_slab_move_yes(cp
, sp
, buf
);
5212 ASSERT(!(sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
));
5213 sp
->slab_flags
|= KMEM_SLAB_MOVE_PENDING
;
5214 mutex_exit(&cp
->cache_lock
);
5215 /* see kmem_move_buffers() about dropping the lock */
5216 (void) kmem_move_begin(cp
, sp
, buf
, KMM_NOTIFY
);
5217 mutex_enter(&cp
->cache_lock
);
5218 ASSERT(sp
->slab_flags
& KMEM_SLAB_MOVE_PENDING
);
5219 sp
->slab_flags
&= ~KMEM_SLAB_MOVE_PENDING
;
5220 if (sp
->slab_refcnt
== 0) {
5221 list_t
*deadlist
= &cp
->cache_defrag
->kmd_deadlist
;
5222 list_remove(deadlist
, sp
);
5225 &cp
->cache_defrag
->kmd_moves_pending
)) {
5226 list_insert_head(deadlist
, sp
);
5227 mutex_exit(&cp
->cache_lock
);
5231 cp
->cache_defrag
->kmd_deadcount
--;
5232 cp
->cache_slab_destroy
++;
5233 mutex_exit(&cp
->cache_lock
);
5234 kmem_slab_destroy(cp
, sp
);
5238 kmem_slab_move_yes(cp
, sp
, buf
);
5240 mutex_exit(&cp
->cache_lock
);
5244 kmem_cache_move_notify(kmem_cache_t
*cp
, void *buf
)
5246 kmem_move_notify_args_t
*args
;
5248 args
= kmem_alloc(sizeof (kmem_move_notify_args_t
), KM_NOSLEEP
);
5250 args
->kmna_cache
= cp
;
5251 args
->kmna_buf
= buf
;
5252 if (!taskq_dispatch(kmem_taskq
,
5253 (task_func_t
*)kmem_cache_move_notify_task
, args
,
5255 kmem_free(args
, sizeof (kmem_move_notify_args_t
));

static void
kmem_cache_defrag(kmem_cache_t *cp)
{
	size_t n;

	ASSERT(cp->cache_defrag != NULL);

	mutex_enter(&cp->cache_lock);
	n = avl_numnodes(&cp->cache_partial_slabs);
	if (n > 1) {
		/* kmem_move_buffers() drops and reacquires cache_lock */
		cp->cache_defrag->kmd_defrags++;
		(void) kmem_move_buffers(cp, n, 0, KMM_DESPERATE);
	}
	mutex_exit(&cp->cache_lock);
}

/* Is this cache above the fragmentation threshold? */
static boolean_t
kmem_cache_frag_threshold(kmem_cache_t *cp, uint64_t nfree)
{
	/*
	 *	nfree			kmem_frag_numer
	 *	------------------   >	---------------
	 *	cp->cache_buftotal	kmem_frag_denom
	 */
	return ((nfree * kmem_frag_denom) >
	    (cp->cache_buftotal * kmem_frag_numer));
}
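
/*
 * Worked example of the threshold test above (illustrative values, not
 * necessarily the tunables' defaults): with kmem_frag_numer = 1 and
 * kmem_frag_denom = 8, a cache with cache_buftotal = 1000 is over the
 * threshold once more than 1000 / 8 = 125 buffers are free in the slab layer,
 * e.g. nfree = 126 gives 126 * 8 = 1008 > 1000 * 1. The cross-multiplied form
 * keeps the comparison in integer arithmetic.
 */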

static boolean_t
kmem_cache_is_fragmented(kmem_cache_t *cp, boolean_t *doreap)
{
	boolean_t fragmented;
	uint64_t nfree;

	ASSERT(MUTEX_HELD(&cp->cache_lock));
	*doreap = B_FALSE;

	if (kmem_move_fulltilt) {
		if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
			return (B_TRUE);
		}
	} else {
		if ((cp->cache_complete_slab_count + avl_numnodes(
		    &cp->cache_partial_slabs)) < kmem_frag_minslabs) {
			return (B_FALSE);
		}
	}

	nfree = cp->cache_bufslab;
	fragmented = ((avl_numnodes(&cp->cache_partial_slabs) > 1) &&
	    kmem_cache_frag_threshold(cp, nfree));

	/*
	 * Free buffers in the magazine layer appear allocated from the point of
	 * view of the slab layer. We want to know if the slab layer would
	 * appear fragmented if we included free buffers from magazines that
	 * have fallen out of the working set.
	 */
	if (!fragmented) {
		long reap;

		mutex_enter(&cp->cache_depot_lock);
		reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
		reap = MIN(reap, cp->cache_full.ml_total);
		mutex_exit(&cp->cache_depot_lock);

		nfree += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
		if (kmem_cache_frag_threshold(cp, nfree)) {
			*doreap = B_TRUE;
		}
	}
	return (fragmented);
}
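
/*
 * Illustrative arithmetic for the *doreap hint above (hypothetical numbers):
 * if the depot's full magazine list has ml_reaplimit = 8, ml_min = 6, and
 * ml_total = 10, then reap = MIN(MIN(8, 6), 10) = 6 magazines. With a
 * magazine size of 15 objects, those magazines hold 6 * 15 = 90 buffers that
 * the slab layer currently counts as allocated. If adding those 90 to nfree
 * crosses kmem_cache_frag_threshold(), *doreap tells the caller that reaping
 * the depot working set is worthwhile before judging fragmentation again.
 */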

/* Called periodically from kmem_taskq */
static void
kmem_cache_scan(kmem_cache_t *cp)
{
	boolean_t reap = B_FALSE;
	kmem_defrag_t *kmd;

	ASSERT(taskq_member(kmem_taskq, curthread));

	mutex_enter(&cp->cache_lock);

	kmd = cp->cache_defrag;
	if (kmd->kmd_consolidate > 0) {
		kmd->kmd_consolidate--;
		mutex_exit(&cp->cache_lock);
		kmem_cache_reap(cp);
		return;
	}

	if (kmem_cache_is_fragmented(cp, &reap)) {
		int slabs_found;

		/*
		 * Consolidate reclaimable slabs from the end of the partial
		 * slab list (scan at most kmem_reclaim_scan_range slabs to find
		 * reclaimable slabs). Keep track of how many candidate slabs we
		 * looked for and how many we actually found so we can adjust
		 * the definition of a candidate slab if we're having trouble
		 * finding them.
		 *
		 * kmem_move_buffers() drops and reacquires cache_lock.
		 */
		slabs_found = kmem_move_buffers(cp, kmem_reclaim_scan_range,
		    kmem_reclaim_max_slabs, 0);
		if (slabs_found >= 0) {
			kmd->kmd_slabs_sought += kmem_reclaim_max_slabs;
			kmd->kmd_slabs_found += slabs_found;
		}

		if (++kmd->kmd_tries >= kmem_reclaim_scan_range) {
			kmd->kmd_tries = 0;

			/*
			 * If we had difficulty finding candidate slabs in
			 * previous scans, adjust the threshold so that
			 * candidates are easier to find.
			 */
			if (kmd->kmd_slabs_found == kmd->kmd_slabs_sought) {
				kmem_adjust_reclaim_threshold(kmd, -1);
			} else if ((kmd->kmd_slabs_found * 2) <
			    kmd->kmd_slabs_sought) {
				kmem_adjust_reclaim_threshold(kmd, 1);
			}
			kmd->kmd_slabs_sought = 0;
			kmd->kmd_slabs_found = 0;
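
			/*
			 * Worked example of the accounting above (hypothetical
			 * numbers): if over the last window kmd_slabs_sought
			 * accumulated to 64 while kmd_slabs_found reached only
			 * 20, then 20 * 2 = 40 < 64 and the reclaim threshold
			 * is adjusted in the +1 direction so that candidate
			 * slabs become easier to find; if every sought slab
			 * was found, it is adjusted in the -1 direction. The
			 * counters are then cleared so the next window is
			 * judged on its own.
			 */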
		}
	} else {
		kmem_reset_reclaim_threshold(cp->cache_defrag);
#ifdef	DEBUG
		if (!avl_is_empty(&cp->cache_partial_slabs)) {
			/*
			 * In a debug kernel we want the consolidator to
			 * run occasionally even when there is plenty of
			 * memory.
			 */
			uint16_t debug_rand;

			(void) random_get_bytes((uint8_t *)&debug_rand, 2);
			if (!kmem_move_noreap &&
			    ((debug_rand % kmem_mtb_reap) == 0)) {
				mutex_exit(&cp->cache_lock);
				kmem_cache_reap(cp);
				return;
			} else if ((debug_rand % kmem_mtb_move) == 0) {
				(void) kmem_move_buffers(cp,
				    kmem_reclaim_scan_range, 1, KMM_DEBUG);
			}
		}
#endif	/* DEBUG */
	}

	mutex_exit(&cp->cache_lock);

	if (reap)
		kmem_depot_ws_reap(cp);
}