5972 kmem: fix comment typo
1 /*
2 * CDDL HEADER START
4 * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 * or http://www.opensolaris.org/os/licensing.
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
13 * When distributing Covered Code, include this CDDL HEADER in each
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
16 * fields enclosed by brackets "[]" replaced with your own identifying
17 * information: Portions Copyright [yyyy] [name of copyright owner]
19 * CDDL HEADER END
22 * Copyright (c) 1994, 2010, Oracle and/or its affiliates. All rights reserved.
26 * Kernel memory allocator, as described in the following two papers and a
27 * statement about the consolidator:
29 * Jeff Bonwick,
30 * The Slab Allocator: An Object-Caching Kernel Memory Allocator.
31 * Proceedings of the Summer 1994 Usenix Conference.
32 * Available as /shared/sac/PSARC/1994/028/materials/kmem.pdf.
34 * Jeff Bonwick and Jonathan Adams,
35 * Magazines and vmem: Extending the Slab Allocator to Many CPUs and
36 * Arbitrary Resources.
37 * Proceedings of the 2001 Usenix Conference.
38 * Available as /shared/sac/PSARC/2000/550/materials/vmem.pdf.
40 * kmem Slab Consolidator Big Theory Statement:
42 * 1. Motivation
44 * As stated in Bonwick94, slabs provide the following advantages over other
45 * allocation structures in terms of memory fragmentation:
47 * - Internal fragmentation (per-buffer wasted space) is minimal.
48 * - Severe external fragmentation (unused buffers on the free list) is
49 * unlikely.
51 * Segregating objects by size eliminates one source of external fragmentation,
52 * and according to Bonwick:
54 * The other reason that slabs reduce external fragmentation is that all
55 * objects in a slab are of the same type, so they have the same lifetime
56 * distribution. The resulting segregation of short-lived and long-lived
57 * objects at slab granularity reduces the likelihood of an entire page being
58 * held hostage due to a single long-lived allocation [Barrett93, Hanson90].
60 * While unlikely, severe external fragmentation remains possible. Clients that
61 * allocate both short- and long-lived objects from the same cache cannot
62 * anticipate the distribution of long-lived objects within the allocator's slab
63 * implementation. Even a small percentage of long-lived objects distributed
64 * randomly across many slabs can lead to a worst case scenario where the client
65 * frees the majority of its objects and the system gets back almost none of the
66 * slabs. Despite the client doing what it reasonably can to help the system
67 * reclaim memory, the allocator cannot shake free enough slabs because of
68 * lonely allocations stubbornly hanging on. Although the allocator is in a
69 * position to diagnose the fragmentation, there is nothing that the allocator
70 * by itself can do about it. It only takes a single allocated object to prevent
71 * an entire slab from being reclaimed, and any object handed out by
72 * kmem_cache_alloc() is by definition in the client's control. Conversely,
73 * although the client is in a position to move a long-lived object, it has no
74 * way of knowing if the object is causing fragmentation, and if so, where to
75 * move it. A solution necessarily requires further cooperation between the
76 * allocator and the client.
78 * 2. Move Callback
80 * The kmem slab consolidator therefore adds a move callback to the
81 * allocator/client interface, improving worst-case external fragmentation in
82 * kmem caches that supply a function to move objects from one memory location
83 * to another. In a situation of low memory kmem attempts to consolidate all of
84 * a cache's slabs at once; otherwise it works slowly to bring external
85 * fragmentation within the 1/8 limit guaranteed for internal fragmentation,
86 * thereby helping to avoid a low memory situation in the future.
88 * The callback has the following signature:
90 * kmem_cbrc_t move(void *old, void *new, size_t size, void *user_arg)
92 * It supplies the kmem client with two addresses: the allocated object that
93 * kmem wants to move and a buffer selected by kmem for the client to use as the
94 * copy destination. The callback is kmem's way of saying "Please get off of
95 * this buffer and use this one instead." kmem knows where it wants to move the
96 * object in order to best reduce fragmentation. All the client needs to know
97 * about the second argument (void *new) is that it is an allocated, constructed
98 * object ready to take the contents of the old object. When the move function
99 * is called, the system is likely to be low on memory, and the new object
100 * spares the client from having to worry about allocating memory for the
101 * requested move. The third argument supplies the size of the object, in case a
102 * single move function handles multiple caches whose objects differ only in
103 * size (such as zio_buf_512, zio_buf_1024, etc). Finally, the same optional
104 * user argument passed to the constructor, destructor, and reclaim functions is
105 * also passed to the move callback.
107 * 2.1 Setting the Move Callback
109 * The client sets the move callback after creating the cache and before
110 * allocating from it:
112 * object_cache = kmem_cache_create(...);
113 * kmem_cache_set_move(object_cache, object_move);
115 * 2.2 Move Callback Return Values
117 * Only the client knows about its own data and when is a good time to move it.
118 * The client is cooperating with kmem to return unused memory to the system,
119 * and kmem respectfully accepts this help at the client's convenience. When
120 * asked to move an object, the client can respond with any of the following:
122 * typedef enum kmem_cbrc {
123 * KMEM_CBRC_YES,
124 * KMEM_CBRC_NO,
125 * KMEM_CBRC_LATER,
126 * KMEM_CBRC_DONT_NEED,
127 * KMEM_CBRC_DONT_KNOW
128 * } kmem_cbrc_t;
130 * The client must not explicitly kmem_cache_free() either of the objects passed
131 * to the callback, since kmem wants to free them directly to the slab layer
132 * (bypassing the per-CPU magazine layer). The response tells kmem which of the
133 * objects to free:
135 * YES: (Did it) The client moved the object, so kmem frees the old one.
136 * NO: (Never) The client refused, so kmem frees the new object (the
137 * unused copy destination). kmem also marks the slab of the old
138 * object so as not to bother the client with further callbacks for
139 * that object as long as the slab remains on the partial slab list.
140 * (The system won't be getting the slab back as long as the
141 * immovable object holds it hostage, so there's no point in moving
142 * any of its objects.)
143 * LATER: The client is using the object and cannot move it now, so kmem
144 * frees the new object (the unused copy destination). kmem still
145 * attempts to move other objects off the slab, since it expects to
146 * succeed in clearing the slab in a later callback. The client
147 * should use LATER instead of NO if the object is likely to become
148 * movable very soon.
149 * DONT_NEED: The client no longer needs the object, so kmem frees the old along
150 * with the new object (the unused copy destination). This response
151 * is the client's opportunity to be a model citizen and give back as
152 * much as it can.
153 * DONT_KNOW: The client does not know about the object because
154 * a) the client has just allocated the object and not yet put it
155 * wherever it expects to find known objects
156 * b) the client has removed the object from wherever it expects to
157 * find known objects and is about to free it, or
158 * c) the client has freed the object.
159 * In all these cases (a, b, and c) kmem frees the new object (the
160 * unused copy destination) and searches for the old object in the
161 * magazine layer. If found, the object is removed from the magazine
162 * layer and freed to the slab layer so it will no longer hold the
163 * slab hostage.
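 * Putting these responses together, a move callback typically has the
 * following shape. This is only a minimal sketch: object_is_known() and
 * object_is_idle() are hypothetical placeholders for whatever cache-specific
 * tests the client uses (sections 2.4 and 2.5 below); a complete worked
 * example appears in section 2.5.2.
 *
 * static kmem_cbrc_t
 * move(void *old, void *new, size_t size, void *user_arg)
 * {
 *         if (!object_is_known(old))      // not in state #4 (see 2.3)
 *                 return (KMEM_CBRC_DONT_KNOW);
 *         if (!object_is_idle(old))       // in use; ask again later
 *                 return (KMEM_CBRC_LATER);
 *         bcopy(old, new, size);
 *         ... update any references to the old location ...
 *         return (KMEM_CBRC_YES);
 * }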
165 * 2.3 Object States
167 * Neither kmem nor the client can be assumed to know the object's whereabouts
168 * at the time of the callback. An object belonging to a kmem cache may be in
169 * any of the following states:
171 * 1. Uninitialized on the slab
172 * 2. Allocated from the slab but not constructed (still uninitialized)
173 * 3. Allocated from the slab, constructed, but not yet ready for business
174 * (not in a valid state for the move callback)
175 * 4. In use (valid and known to the client)
176 * 5. About to be freed (no longer in a valid state for the move callback)
177 * 6. Freed to a magazine (still constructed)
178 * 7. Allocated from a magazine, not yet ready for business (not in a valid
179 * state for the move callback), and about to return to state #4
180 * 8. Deconstructed on a magazine that is about to be freed
181 * 9. Freed to the slab
183 * Since the move callback may be called at any time while the object is in any
184 * of the above states (except state #1), the client needs a safe way to
185 * determine whether or not it knows about the object. Specifically, the client
186 * needs to know whether or not the object is in state #4, the only state in
187 * which a move is valid. If the object is in any other state, the client should
188 * immediately return KMEM_CBRC_DONT_KNOW, since it is unsafe to access any of
189 * the object's fields.
191 * Note that although an object may be in state #4 when kmem initiates the move
192 * request, the object may no longer be in that state by the time kmem actually
193 * calls the move function. Not only does the client free objects
194 * asynchronously, kmem itself puts move requests on a queue where they are
195 * pending until kmem processes them from another context. Also, objects freed
196 * to a magazine appear allocated from the point of view of the slab layer, so
197 * kmem may even initiate requests for objects in a state other than state #4.
199 * 2.3.1 Magazine Layer
201 * An important insight revealed by the states listed above is that the magazine
202 * layer is populated only by kmem_cache_free(). Magazines of constructed
203 * objects are never populated directly from the slab layer (which contains raw,
204 * unconstructed objects). Whenever an allocation request cannot be satisfied
205 * from the magazine layer, the magazines are bypassed and the request is
206 * satisfied from the slab layer (creating a new slab if necessary). kmem calls
207 * the object constructor only when allocating from the slab layer, and only in
208 * response to kmem_cache_alloc() or to prepare the destination buffer passed in
209 * the move callback. kmem does not preconstruct objects in anticipation of
210 * kmem_cache_alloc().
212 * 2.3.2 Object Constructor and Destructor
214 * If the client supplies a destructor, it must be valid to call the destructor
215 * on a newly created object (immediately after the constructor).
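 * For example (a hedged sketch: object_t and its o_lock member are
 * hypothetical, though the constructor/destructor signatures are the ones
 * kmem_cache_create() expects), a pair that satisfies this requirement
 * initializes in the constructor everything the destructor tears down, so
 * the destructor is safe to call even if the object was never used:
 *
 * static int
 * object_construct(void *buf, void *arg, int kmflags)
 * {
 *         object_t *op = buf;
 *
 *         mutex_init(&op->o_lock, NULL, MUTEX_DEFAULT, NULL);
 *         return (0);
 * }
 *
 * static void
 * object_destruct(void *buf, void *arg)
 * {
 *         object_t *op = buf;
 *
 *         mutex_destroy(&op->o_lock);     // valid right after the constructor
 * }
 *
 * object_cache = kmem_cache_create("object_cache", sizeof (object_t),
 *     0, object_construct, object_destruct, NULL, NULL, NULL, 0);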
217 * 2.4 Recognizing Known Objects
219 * There is a simple test to determine safely whether or not the client knows
220 * about a given object in the move callback. It relies on the fact that kmem
221 * guarantees that the object of the move callback has only been touched by the
222 * client itself or else by kmem. kmem does this by ensuring that none of the
223 * cache's slabs are freed to the virtual memory (VM) subsystem while a move
224 * callback is pending. When the last object on a slab is freed, if there is a
225 * pending move, kmem puts the slab on a per-cache dead list and defers freeing
226 * slabs on that list until all pending callbacks are completed. That way,
227 * clients can be certain that the object of a move callback is in one of the
228 * states listed above, making it possible to distinguish known objects (in
229 * state #4) using the two low order bits of any pointer member (with the
230 * exception of 'char *' or 'short *' which may not be 4-byte aligned on some
231 * platforms).
233 * The test works as long as the client always transitions objects from state #4
234 * (known, in use) to state #5 (about to be freed, invalid) by setting the low
235 * order bit of the client-designated pointer member. Since kmem only writes
236 * invalid memory patterns, such as 0xbaddcafe to uninitialized memory and
237 * 0xdeadbeef to freed memory, any scribbling on the object done by kmem is
238 * guaranteed to set at least one of the two low order bits. Therefore, given an
239 * object with a back pointer to a 'container_t *o_container', the client can
240 * test
242 * container_t *container = object->o_container;
243 * if ((uintptr_t)container & 0x3) {
244 * return (KMEM_CBRC_DONT_KNOW);
245 * }
247 * Typically, an object will have a pointer to some structure with a list or
248 * hash where objects from the cache are kept while in use. Assuming that the
249 * client has some way of knowing that the container structure is valid and will
250 * not go away during the move, and assuming that the structure includes a lock
251 * to protect whatever collection is used, then the client would continue as
252 * follows:
254 * // Ensure that the container structure does not go away.
255 * if (container_hold(container) == 0) {
256 * return (KMEM_CBRC_DONT_KNOW);
257 * }
258 * mutex_enter(&container->c_objects_lock);
259 * if (container != object->o_container) {
260 * mutex_exit(&container->c_objects_lock);
261 * container_rele(container);
262 * return (KMEM_CBRC_DONT_KNOW);
263 * }
265 * At this point the client knows that the object cannot be freed as long as
266 * c_objects_lock is held. Note that after acquiring the lock, the client must
267 * recheck the o_container pointer in case the object was removed just before
268 * acquiring the lock.
270 * When the client is about to free an object, it must first remove that object
271 * from the list, hash, or other structure where it is kept. At that time, to
272 * mark the object so it can be distinguished from the remaining, known objects,
273 * the client sets the designated low order bit:
275 * mutex_enter(&container->c_objects_lock);
276 * object->o_container = (void *)((uintptr_t)object->o_container | 0x1);
277 * list_remove(&container->c_objects, object);
278 * mutex_exit(&container->c_objects_lock);
280 * In the common case, the object is freed to the magazine layer, where it may
281 * be reused on a subsequent allocation without the overhead of calling the
282 * constructor. While in the magazine it appears allocated from the point of
283 * view of the slab layer, making it a candidate for the move callback. Most
284 * objects unrecognized by the client in the move callback fall into this
285 * category and are cheaply distinguished from known objects by the test
286 * described earlier. Since recognition is cheap for the client, and searching
287 * magazines is expensive for kmem, kmem defers searching until the client first
288 * returns KMEM_CBRC_DONT_KNOW. As long as the needed effort is reasonable, kmem
289 * elsewhere does what it can to avoid bothering the client unnecessarily.
291 * Invalidating the designated pointer member before freeing the object marks
292 * the object to be avoided in the callback, and conversely, assigning a valid
293 * value to the designated pointer member after allocating the object makes the
294 * object fair game for the callback:
296 * ... allocate object ...
297 * ... set any initial state not set by the constructor ...
299 * mutex_enter(&container->c_objects_lock);
300 * list_insert_tail(&container->c_objects, object);
301 * membar_producer();
302 * object->o_container = container;
303 * mutex_exit(&container->c_objects_lock);
305 * Note that everything else must be valid before setting o_container makes the
306 * object fair game for the move callback. The membar_producer() call ensures
307 * that all the object's state is written to memory before setting the pointer
308 * that transitions the object from state #3 or #7 (allocated, constructed, not
309 * yet in use) to state #4 (in use, valid). That's important because the move
310 * function has to check the validity of the pointer before it can safely
311 * acquire the lock protecting the collection where it expects to find known
312 * objects.
314 * This method of distinguishing known objects observes the usual symmetry:
315 * invalidating the designated pointer is the first thing the client does before
316 * freeing the object, and setting the designated pointer is the last thing the
317 * client does after allocating the object. Of course, the client is not
318 * required to use this method. Fundamentally, how the client recognizes known
319 * objects is completely up to the client, but this method is recommended as an
320 * efficient and safe way to take advantage of the guarantees made by kmem. If
321 * the entire object is arbitrary data without any markable bits from a suitable
322 * pointer member, then the client must find some other method, such as
323 * searching a hash table of known objects.
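 * As a hedged sketch of that alternative (object_ht, object_ht_lock, and
 * object_ht_lookup() are hypothetical), recognition then means finding the
 * object in the client's own table of in-use objects:
 *
 *         mutex_enter(&object_ht_lock);
 *         known = (object_ht_lookup(object_ht, object) != NULL);
 *         mutex_exit(&object_ht_lock);
 *         if (!known)
 *                 return (KMEM_CBRC_DONT_KNOW);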
325 * 2.5 Preventing Objects From Moving
327 * Besides a way to distinguish known objects, the other thing that the client
328 * needs is a strategy to ensure that an object will not move while the client
329 * is actively using it. The details of satisfying this requirement tend to be
330 * highly cache-specific. It might seem that the same rules that let a client
331 * remove an object safely should also decide when an object can be moved
332 * safely. However, any object state that makes a removal attempt invalid is
333 * likely to be long-lasting for objects that the client does not expect to
334 * remove. kmem knows nothing about the object state and is equally likely (from
335 * the client's point of view) to request a move for any object in the cache,
336 * whether prepared for removal or not. Even a low percentage of objects stuck
337 * in place by unremovability will defeat the consolidator if the stuck objects
338 * are the same long-lived allocations likely to hold slabs hostage.
339 * Fundamentally, the consolidator is not aimed at common cases. Severe external
340 * fragmentation is a worst case scenario manifested as sparsely allocated
341 * slabs, by definition a low percentage of the cache's objects. When deciding
342 * what makes an object movable, keep in mind the goal of the consolidator: to
343 * bring worst-case external fragmentation within the limits guaranteed for
344 * internal fragmentation. Removability is a poor criterion if it is likely to
345 * exclude more than an insignificant percentage of objects for long periods of
346 * time.
348 * A tricky general solution exists, and it has the advantage of letting you
349 * move any object at almost any moment, practically eliminating the likelihood
350 * that an object can hold a slab hostage. However, if there is a cache-specific
351 * way to ensure that an object is not actively in use in the vast majority of
352 * cases, a simpler solution that leverages this cache-specific knowledge is
353 * preferred.
355 * 2.5.1 Cache-Specific Solution
357 * As an example of a cache-specific solution, the ZFS znode cache takes
358 * advantage of the fact that the vast majority of znodes are only being
359 * referenced from the DNLC. (A typical case might be a few hundred in active
360 * use and a hundred thousand in the DNLC.) In the move callback, after the ZFS
361 * client has established that it recognizes the znode and can access its fields
362 * safely (using the method described earlier), it then tests whether the znode
363 * is referenced by anything other than the DNLC. If so, it assumes that the
364 * znode may be in active use and is unsafe to move, so it drops its locks and
365 * returns KMEM_CBRC_LATER. The advantage of this strategy is that everywhere
366 * else znodes are used, no change is needed to protect against the possibility
367 * of the znode moving. The disadvantage is that it remains possible for an
368 * application to hold a znode slab hostage with an open file descriptor.
369 * However, this case ought to be rare and the consolidator has a way to deal
370 * with it: If the client responds KMEM_CBRC_LATER repeatedly for the same
371 * object, kmem eventually stops believing it and treats the slab as if the
372 * client had responded KMEM_CBRC_NO. Having marked the hostage slab, kmem can
373 * then focus on getting it off of the partial slab list by allocating rather
374 * than freeing all of its objects. (Either way of getting a slab off the
375 * free list reduces fragmentation.)
377 * 2.5.2 General Solution
379 * The general solution, on the other hand, requires an explicit hold everywhere
380 * the object is used to prevent it from moving. To keep the client locking
381 * strategy as uncomplicated as possible, kmem guarantees the simplifying
382 * assumption that move callbacks are sequential, even across multiple caches.
383 * Internally, a global queue processed by a single thread supports all caches
384 * implementing the callback function. No matter how many caches supply a move
385 * function, the consolidator never moves more than one object at a time, so the
386 * client does not have to worry about tricky lock ordering involving several
387 * related objects from different kmem caches.
389 * The general solution implements the explicit hold as a read-write lock, which
390 * allows multiple readers to access an object from the cache simultaneously
391 * while a single writer is excluded from moving it. A single rwlock for the
392 * entire cache would lock out all threads from using any of the cache's objects
393 * even though only a single object is being moved, so to reduce contention,
394 * the client can fan out the single rwlock into an array of rwlocks hashed by
395 * the object address, making it probable that moving one object will not
396 * prevent other threads from using a different object. The rwlock cannot be a
397 * member of the object itself, because the possibility of the object moving
398 * makes it unsafe to access any of the object's fields until the lock is
399 * acquired.
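 * A minimal sketch of such a fanned-out lock (the array size, the hash, and
 * the names are all up to the client; the examples that follow refer to the
 * OBJECT_RWLOCK macro):
 *
 * #define OBJECT_LOCK_COUNT       64
 * static krwlock_t object_rwlock[OBJECT_LOCK_COUNT];
 *
 * #define OBJECT_RWLOCK(op) \
 *         (&object_rwlock[((uintptr_t)(op) >> 3) & (OBJECT_LOCK_COUNT - 1)])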
401 * Assuming a small, fixed number of locks, it's possible that multiple objects
402 * will hash to the same lock. A thread that needs to use multiple objects in
403 * the same function may acquire the same lock multiple times. Since rwlocks are
404 * reentrant for readers, and since there is never more than a single writer at
405 * a time (assuming that the client acquires the lock as a writer only when
406 * moving an object inside the callback), there would seem to be no problem.
407 * However, a client locking multiple objects in the same function must handle
408 * one case of potential deadlock: Assume that thread A needs to prevent both
409 * object 1 and object 2 from moving, and thread B, the callback, meanwhile
410 * tries to move object 3. It's possible, if objects 1, 2, and 3 all hash to the
411 * same lock, that thread A will acquire the lock for object 1 as a reader
412 * before thread B sets the lock's write-wanted bit, preventing thread A from
413 * reacquiring the lock for object 2 as a reader. Unable to make forward
414 * progress, thread A will never release the lock for object 1, resulting in
415 * deadlock.
417 * There are two ways of avoiding the deadlock just described. The first is to
418 * use rw_tryenter() rather than rw_enter() in the callback function when
419 * attempting to acquire the lock as a writer. If tryenter discovers that the
420 * same object (or another object hashed to the same lock) is already in use, it
421 * aborts the callback and returns KMEM_CBRC_LATER. The second way is to use
422 * rprwlock_t (declared in common/fs/zfs/sys/rprwlock.h) instead of rwlock_t,
423 * since it allows a thread to acquire the lock as a reader in spite of a
424 * waiting writer. This second approach insists on moving the object now, no
425 * matter how many readers the move function must wait for in order to do so,
426 * and could delay the completion of the callback indefinitely (blocking
427 * callbacks to other clients). In practice, a less insistent callback using
428 * rw_tryenter() returns KMEM_CBRC_LATER infrequently enough that there seems
429 * little reason to use anything else.
431 * Avoiding deadlock is not the only problem that an implementation using an
432 * explicit hold needs to solve. Locking the object in the first place (to
433 * prevent it from moving) remains a problem, since the object could move
434 * between the time you obtain a pointer to the object and the time you acquire
435 * the rwlock hashed to that pointer value. Therefore the client needs to
436 * recheck the value of the pointer after acquiring the lock, drop the lock if
437 * the value has changed, and try again. This requires a level of indirection:
438 * something that points to the object rather than the object itself, that the
439 * client can access safely while attempting to acquire the lock. (The object
440 * itself cannot be referenced safely because it can move at any time.)
441 * The following lock-acquisition function takes whatever is safe to reference
442 * (arg), follows its pointer to the object (using function f), and tries as
443 * often as necessary to acquire the hashed lock and verify that the object
444 * still has not moved:
446 * object_t *
447 * object_hold(object_f f, void *arg)
448 * {
449 *         object_t *op;
451 *         op = f(arg);
452 *         if (op == NULL) {
453 *                 return (NULL);
454 *         }
456 *         rw_enter(OBJECT_RWLOCK(op), RW_READER);
457 *         while (op != f(arg)) {
458 *                 rw_exit(OBJECT_RWLOCK(op));
459 *                 op = f(arg);
460 *                 if (op == NULL) {
461 *                         break;
462 *                 }
463 *                 rw_enter(OBJECT_RWLOCK(op), RW_READER);
464 *         }
466 *         return (op);
467 * }
469 * The OBJECT_RWLOCK macro hashes the object address to obtain the rwlock. The
470 * lock reacquisition loop, while necessary, almost never executes. The function
471 * pointer f (used to obtain the object pointer from arg) has the following type
472 * definition:
474 * typedef object_t *(*object_f)(void *arg);
476 * An object_f implementation is likely to be as simple as accessing a structure
477 * member:
479 * object_t *
480 * s_object(void *arg)
481 * {
482 *         something_t *sp = arg;
483 *         return (sp->s_object);
484 * }
486 * The flexibility of a function pointer allows the path to the object to be
487 * arbitrarily complex and also supports the notion that depending on where you
488 * are using the object, you may need to get it from someplace different.
490 * The function that releases the explicit hold is simpler because it does not
491 * have to worry about the object moving:
493 * void
494 * object_rele(object_t *op)
495 * {
496 *         rw_exit(OBJECT_RWLOCK(op));
497 * }
499 * The caller is spared these details so that obtaining and releasing an
500 * explicit hold feels like a simple mutex_enter()/mutex_exit() pair. The caller
501 * of object_hold() only needs to know that the returned object pointer is valid
502 * if not NULL and that the object will not move until released.
504 * Although object_hold() prevents an object from moving, it does not prevent it
505 * from being freed. The caller must take measures before calling object_hold()
506 * (afterwards is too late) to ensure that the held object cannot be freed. The
507 * caller must do so without accessing the unsafe object reference, so any lock
508 * or reference count used to ensure the continued existence of the object must
509 * live outside the object itself.
511 * Obtaining a new object is a special case where an explicit hold is impossible
512 * for the caller. Any function that returns a newly allocated object (either as
513 * a return value, or as an in-out parameter) must return it already held; after
514 * the caller gets it is too late, since the object cannot be safely accessed
515 * without the level of indirection described earlier. The following
516 * object_alloc() example uses the same code shown earlier to transition a new
517 * object into the state of being recognized (by the client) as a known object.
518 * The function must acquire the hold (rw_enter) before that state transition
519 * makes the object movable:
521 * static object_t *
522 * object_alloc(container_t *container)
523 * {
524 *         object_t *object = kmem_cache_alloc(object_cache, 0);
525 *         ... set any initial state not set by the constructor ...
526 *         rw_enter(OBJECT_RWLOCK(object), RW_READER);
527 *         mutex_enter(&container->c_objects_lock);
528 *         list_insert_tail(&container->c_objects, object);
529 *         membar_producer();
530 *         object->o_container = container;
531 *         mutex_exit(&container->c_objects_lock);
532 *         return (object);
533 * }
535 * Functions that implicitly acquire an object hold (any function that calls
536 * object_alloc() to supply an object for the caller) need to be carefully noted
537 * so that the matching object_rele() is not neglected. Otherwise, leaked holds
538 * prevent all objects hashed to the affected rwlocks from ever being moved.
540 * The pointer to a held object can be hashed to the holding rwlock even after
541 * the object has been freed. Although it is possible to release the hold
542 * after freeing the object, you may decide to release the hold implicitly in
543 * whatever function frees the object, so as to release the hold as soon as
544 * possible, and for the sake of symmetry with the function that implicitly
545 * acquires the hold when it allocates the object. Here, object_free() releases
546 * the hold acquired by object_alloc(). Its implicit object_rele() forms a
547 * matching pair with object_hold():
549 * void
550 * object_free(object_t *object)
551 * {
552 *         container_t *container;
554 *         ASSERT(object_held(object));
555 *         container = object->o_container;
556 *         mutex_enter(&container->c_objects_lock);
557 *         object->o_container =
558 *             (void *)((uintptr_t)object->o_container | 0x1);
559 *         list_remove(&container->c_objects, object);
560 *         mutex_exit(&container->c_objects_lock);
561 *         object_rele(object);
562 *         kmem_cache_free(object_cache, object);
563 * }
565 * Note that object_free() cannot safely accept an object pointer as an argument
566 * unless the object is already held. Any function that calls object_free()
567 * needs to be carefully noted since it similarly forms a matching pair with
568 * object_hold().
570 * To complete the picture, the following callback function implements the
571 * general solution by moving objects only if they are currently unheld:
573 * static kmem_cbrc_t
574 * object_move(void *buf, void *newbuf, size_t size, void *arg)
575 * {
576 *         object_t *op = buf, *np = newbuf;
577 *         container_t *container;
579 *         container = op->o_container;
580 *         if ((uintptr_t)container & 0x3) {
581 *                 return (KMEM_CBRC_DONT_KNOW);
582 *         }
584 *         // Ensure that the container structure does not go away.
585 *         if (container_hold(container) == 0) {
586 *                 return (KMEM_CBRC_DONT_KNOW);
587 *         }
589 *         mutex_enter(&container->c_objects_lock);
590 *         if (container != op->o_container) {
591 *                 mutex_exit(&container->c_objects_lock);
592 *                 container_rele(container);
593 *                 return (KMEM_CBRC_DONT_KNOW);
594 *         }
596 *         if (rw_tryenter(OBJECT_RWLOCK(op), RW_WRITER) == 0) {
597 *                 mutex_exit(&container->c_objects_lock);
598 *                 container_rele(container);
599 *                 return (KMEM_CBRC_LATER);
600 *         }
602 *         object_move_impl(op, np); // critical section
603 *         rw_exit(OBJECT_RWLOCK(op));
605 *         op->o_container = (void *)((uintptr_t)op->o_container | 0x1);
606 *         list_link_replace(&op->o_link_node, &np->o_link_node);
607 *         mutex_exit(&container->c_objects_lock);
608 *         container_rele(container);
609 *         return (KMEM_CBRC_YES);
610 * }
612 * Note that object_move() must invalidate the designated o_container pointer of
613 * the old object in the same way that object_free() does, since kmem will free
614 * the object in response to the KMEM_CBRC_YES return value.
616 * The lock order in object_move() differs from object_alloc(), which locks
617 * OBJECT_RWLOCK first and &container->c_objects_lock second, but as long as the
618 * callback uses rw_tryenter() (preventing the deadlock described earlier), it's
619 * not a problem. Holding the lock on the object list in the example above
620 * through the entire callback not only prevents the object from going away, it
621 * also allows you to lock the list elsewhere and know that none of its elements
622 * will move during iteration.
624 * Adding an explicit hold everywhere an object from the cache is used is tricky
625 * and involves much more change to client code than a cache-specific solution
626 * that leverages existing state to decide whether or not an object is
627 * movable. However, this approach has the advantage that no object remains
628 * immovable for any significant length of time, making it extremely unlikely
629 * that long-lived allocations can continue holding slabs hostage; and it works
630 * for any cache.
632 * 3. Consolidator Implementation
634 * Once the client supplies a move function that a) recognizes known objects and
635 * b) avoids moving objects that are actively in use, the remaining work is up
636 * to the consolidator to decide which objects to move and when to issue
637 * callbacks.
639 * The consolidator relies on the fact that a cache's slabs are ordered by
640 * usage. Each slab has a fixed number of objects. Depending on the slab's
641 * "color" (the offset of the first object from the beginning of the slab;
642 * offsets are staggered to mitigate false sharing of cache lines) it is either
643 * the maximum number of objects per slab determined at cache creation time or
644 * else the number closest to the maximum that fits within the space remaining
645 * after the initial offset. A completely allocated slab may contribute some
646 * internal fragmentation (per-slab overhead) but no external fragmentation, so
647 * it is of no interest to the consolidator. At the other extreme, slabs whose
648 * objects have all been freed to the slab are released to the virtual memory
649 * (VM) subsystem (objects freed to magazines are still allocated as far as the
650 * slab is concerned). External fragmentation exists when there are slabs
651 * somewhere between these extremes. A partial slab has at least one but not all
652 * of its objects allocated. The more partial slabs, and the fewer allocated
653 * objects on each of them, the higher the fragmentation. Hence the
654 * consolidator's overall strategy is to reduce the number of partial slabs by
655 * moving allocated objects from the least allocated slabs to the most allocated
656 * slabs.
658 * Partial slabs are kept in an AVL tree ordered by usage. Completely allocated
659 * slabs are kept separately in an unordered list. Since the majority of slabs
660 * tend to be completely allocated (a typical unfragmented cache may have
661 * thousands of complete slabs and only a single partial slab), separating
662 * complete slabs improves the efficiency of partial slab ordering, since the
663 * complete slabs do not affect the depth or balance of the AVL tree. This
664 * ordered sequence of partial slabs acts as a "free list" supplying objects for
665 * allocation requests.
667 * Objects are always allocated from the first partial slab in the free list,
668 * where the allocation is most likely to eliminate a partial slab (by
669 * completely allocating it). Conversely, when a single object from a completely
670 * allocated slab is freed to the slab, that slab is added to the front of the
671 * free list. Since most free list activity involves highly allocated slabs
672 * coming and going at the front of the list, slabs tend naturally toward the
673 * ideal order: highly allocated at the front, sparsely allocated at the back.
674 * Slabs with few allocated objects are likely to become completely free if they
675 * keep a safe distance away from the front of the free list. Slab misorders
676 * interfere with the natural tendency of slabs to become completely free or
677 * completely allocated. For example, a slab with a single allocated object
678 * needs only a single free to escape the cache; its natural desire is
679 * frustrated when it finds itself at the front of the list where a second
680 * allocation happens just before the free could have released it. Another slab
681 * with all but one object allocated might have supplied the buffer instead, so
682 * that both (as opposed to neither) of the slabs would have been taken off the
683 * free list.
685 * Although slabs tend naturally toward the ideal order, misorders allowed by a
686 * simple list implementation defeat the consolidator's strategy of merging
687 * least- and most-allocated slabs. Without an AVL tree to guarantee order, kmem
688 * needs another way to fix misorders to optimize its callback strategy. One
689 * approach is to periodically scan a limited number of slabs, advancing a
690 * marker to hold the current scan position, and to move extreme misorders to
691 * the front or back of the free list and to the front or back of the current
692 * scan range. By making consecutive scan ranges overlap by one slab, the least
693 * allocated slab in the current range can be carried along from the end of one
694 * scan to the start of the next.
696 * Maintaining partial slabs in an AVL tree relieves kmem of this additional
697 * task, however. Since most of the cache's activity is in the magazine layer,
698 * and allocations from the slab layer represent only a startup cost, the
699 * overhead of maintaining a balanced tree is not a significant concern compared
700 * to the opportunity of reducing complexity by eliminating the partial slab
701 * scanner just described. The overhead of an AVL tree is minimized by
702 * maintaining only partial slabs in the tree and keeping completely allocated
703 * slabs separately in a list. To avoid increasing the size of the slab
704 * structure the AVL linkage pointers are reused for the slab's list linkage,
705 * since the slab will always be either partial or complete, never stored both
706 * ways at the same time. To further minimize the overhead of the AVL tree the
707 * compare function that orders partial slabs by usage divides the range of
708 * allocated object counts into bins such that counts within the same bin are
709 * considered equal. Binning partial slabs makes it less likely that allocating
710 * or freeing a single object will change the slab's order, requiring a tree
711 * reinsertion (an avl_remove() followed by an avl_add(), both potentially
712 * requiring some rebalancing of the tree). Allocation counts closest to
713 * completely free and completely allocated are left unbinned (finely sorted) to
714 * better support the consolidator's strategy of merging slabs at either
715 * extreme.
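 * A simplified, illustrative comparator (the comparator actually used by
 * kmem also accounts for slabs marked non-reclaimable, described in section
 * 3.3, and the member names here are assumptions):
 *
 * static int
 * slab_usage_compar(const void *l, const void *r)
 * {
 *         const kmem_slab_t *s0 = l;
 *         const kmem_slab_t *s1 = r;
 *         long w0 = s0->slab_refcnt;      // allocated objects on the slab
 *         long w1 = s1->slab_refcnt;
 *
 *         // Bin counts away from the extremes so that allocating or
 *         // freeing a single object rarely changes a slab's position.
 *         if (w0 > 1 && w0 < s0->slab_chunks - 1)
 *                 w0 &= ~7;
 *         if (w1 > 1 && w1 < s1->slab_chunks - 1)
 *                 w1 &= ~7;
 *         if (w0 != w1)
 *                 return (w0 < w1 ? -1 : 1);
 *         // Break ties by address so the tree has a total order.
 *         if (s0 != s1)
 *                 return (s0 < s1 ? -1 : 1);
 *         return (0);
 * }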
717 * 3.1 Assessing Fragmentation and Selecting Candidate Slabs
719 * The consolidator piggybacks on the kmem maintenance thread and is called on
720 * the same interval as kmem_cache_update(), once per cache every fifteen
721 * seconds. kmem maintains a running count of unallocated objects in the slab
722 * layer (cache_bufslab). The consolidator checks whether that number exceeds
723 * 12.5% (1/8) of the total objects in the cache (cache_buftotal), and whether
724 * there is a significant number of slabs in the cache (arbitrarily a minimum
725 * 101 total slabs). Unused objects that have fallen out of the magazine layer's
726 * working set are included in the assessment, and magazines in the depot are
727 * reaped if those objects would lift cache_bufslab above the fragmentation
728 * threshold. Once the consolidator decides that a cache is fragmented, it looks
729 * for a candidate slab to reclaim, starting at the end of the partial slab free
730 * list and scanning backwards. At first the consolidator is choosy: only a slab
731 * with fewer than 12.5% (1/8) of its objects allocated qualifies (or else a
732 * single allocated object, regardless of percentage). If there is difficulty
733 * finding a candidate slab, kmem raises the allocation threshold incrementally,
734 * up to a maximum 87.5% (7/8), so that eventually the consolidator will reduce
735 * external fragmentation (unused objects on the free list) below 12.5% (1/8),
736 * even in the worst case of every slab in the cache being almost 7/8 allocated.
737 * The threshold can also be lowered incrementally when candidate slabs are easy
738 * to find, and the threshold is reset to the minimum 1/8 as soon as the cache
739 * is no longer fragmented.
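 * In rough pseudocode, the decision looks like this (a hedged sketch built
 * from the tunables declared later in this file: kmem_frag_numer,
 * kmem_frag_denom, and kmem_frag_minslabs; cache_bufslab and cache_buftotal
 * are named above, while the slab counters are assumed field names):
 *
 *         nslabs = cp->cache_slab_create - cp->cache_slab_destroy;
 *         fragmented =
 *             (cp->cache_bufslab * kmem_frag_denom >
 *             cp->cache_buftotal * kmem_frag_numer) &&    // free > 1/8 total
 *             nslabs >= kmem_frag_minslabs;               // at least 101 slabs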
741 * 3.2 Generating Callbacks
743 * Once an eligible slab is chosen, a callback is generated for every allocated
744 * object on the slab, in the hope that the client will move everything off the
745 * slab and make it reclaimable. Objects selected as move destinations are
746 * chosen from slabs at the front of the free list. Assuming slabs in the ideal
747 * order (most allocated at the front, least allocated at the back) and a
748 * cooperative client, the consolidator will succeed in removing slabs from both
749 * ends of the free list, completely allocating on the one hand and completely
750 * freeing on the other. Objects selected as move destinations are allocated in
751 * the kmem maintenance thread where move requests are enqueued. A separate
752 * callback thread removes pending callbacks from the queue and calls the
753 * client. The separate thread ensures that client code (the move function) does
754 * not interfere with internal kmem maintenance tasks. A map of pending
755 * callbacks keyed by object address (the object to be moved) is checked to
756 * ensure that duplicate callbacks are not generated for the same object.
757 * Allocating the move destination (the object to move to) prevents subsequent
758 * callbacks from selecting the same destination as an earlier pending callback.
760 * Move requests can also be generated by kmem_cache_reap() when the system is
761 * desperate for memory and by kmem_cache_move_notify(), called by the client to
762 * notify kmem that a move refused earlier with KMEM_CBRC_LATER is now possible.
763 * The map of pending callbacks is protected by the same lock that protects the
764 * slab layer.
766 * When the system is desperate for memory, kmem does not bother to determine
767 * whether or not the cache exceeds the fragmentation threshold, but tries to
768 * consolidate as many slabs as possible. Normally, the consolidator chews
769 * slowly, one sparsely allocated slab at a time during each maintenance
770 * interval that the cache is fragmented. When desperate, the consolidator
771 * starts at the last partial slab and enqueues callbacks for every allocated
772 * object on every partial slab, working backwards until it reaches the first
773 * partial slab. The first partial slab, meanwhile, advances in pace with the
774 * consolidator as allocations to supply move destinations for the enqueued
775 * callbacks use up the highly allocated slabs at the front of the free list.
776 * Ideally, the overgrown free list collapses like an accordion, starting at
777 * both ends and ending at the center with a single partial slab.
779 * 3.3 Client Responses
781 * When the client returns KMEM_CBRC_NO in response to the move callback, kmem
782 * marks the slab that supplied the stuck object non-reclaimable and moves it to
783 * the front of the free list. The slab remains marked as long as it remains on the
784 * free list, and it appears more allocated to the partial slab compare function
785 * than any unmarked slab, no matter how many of its objects are allocated.
786 * Since even one immovable object ties up the entire slab, the goal is to
787 * completely allocate any slab that cannot be completely freed. kmem does not
788 * bother generating callbacks to move objects from a marked slab unless the
789 * system is desperate.
791 * When the client responds KMEM_CBRC_LATER, kmem increments a count for the
792 * slab. If the client responds LATER too many times, kmem disbelieves and
793 * treats the response as a NO. The count is cleared when the slab is taken off
794 * the partial slab list or when the client moves one of the slab's objects.
796 * 4. Observability
798 * A kmem cache's external fragmentation is best observed with 'mdb -k' using
799 * the ::kmem_slabs dcmd. For a complete description of the command, enter
800 * '::help kmem_slabs' at the mdb prompt.
803 #include <sys/kmem_impl.h>
804 #include <sys/vmem_impl.h>
805 #include <sys/param.h>
806 #include <sys/sysmacros.h>
807 #include <sys/vm.h>
808 #include <sys/proc.h>
809 #include <sys/tuneable.h>
810 #include <sys/systm.h>
811 #include <sys/cmn_err.h>
812 #include <sys/debug.h>
813 #include <sys/sdt.h>
814 #include <sys/mutex.h>
815 #include <sys/bitmap.h>
816 #include <sys/atomic.h>
817 #include <sys/kobj.h>
818 #include <sys/disp.h>
819 #include <vm/seg_kmem.h>
820 #include <sys/log.h>
821 #include <sys/callb.h>
822 #include <sys/taskq.h>
823 #include <sys/modctl.h>
824 #include <sys/reboot.h>
825 #include <sys/id32.h>
826 #include <sys/zone.h>
827 #include <sys/netstack.h>
828 #ifdef DEBUG
829 #include <sys/random.h>
830 #endif
832 extern void streams_msg_init(void);
833 extern int segkp_fromheap;
834 extern void segkp_cache_free(void);
835 extern int callout_init_done;
837 struct kmem_cache_kstat {
838 kstat_named_t kmc_buf_size;
839 kstat_named_t kmc_align;
840 kstat_named_t kmc_chunk_size;
841 kstat_named_t kmc_slab_size;
842 kstat_named_t kmc_alloc;
843 kstat_named_t kmc_alloc_fail;
844 kstat_named_t kmc_free;
845 kstat_named_t kmc_depot_alloc;
846 kstat_named_t kmc_depot_free;
847 kstat_named_t kmc_depot_contention;
848 kstat_named_t kmc_slab_alloc;
849 kstat_named_t kmc_slab_free;
850 kstat_named_t kmc_buf_constructed;
851 kstat_named_t kmc_buf_avail;
852 kstat_named_t kmc_buf_inuse;
853 kstat_named_t kmc_buf_total;
854 kstat_named_t kmc_buf_max;
855 kstat_named_t kmc_slab_create;
856 kstat_named_t kmc_slab_destroy;
857 kstat_named_t kmc_vmem_source;
858 kstat_named_t kmc_hash_size;
859 kstat_named_t kmc_hash_lookup_depth;
860 kstat_named_t kmc_hash_rescale;
861 kstat_named_t kmc_full_magazines;
862 kstat_named_t kmc_empty_magazines;
863 kstat_named_t kmc_magazine_size;
864 kstat_named_t kmc_reap; /* number of kmem_cache_reap() calls */
865 kstat_named_t kmc_defrag; /* attempts to defrag all partial slabs */
866 kstat_named_t kmc_scan; /* attempts to defrag one partial slab */
867 kstat_named_t kmc_move_callbacks; /* sum of yes, no, later, dn, dk */
868 kstat_named_t kmc_move_yes;
869 kstat_named_t kmc_move_no;
870 kstat_named_t kmc_move_later;
871 kstat_named_t kmc_move_dont_need;
872 kstat_named_t kmc_move_dont_know; /* obj unrecognized by client ... */
873 kstat_named_t kmc_move_hunt_found; /* ... but found in mag layer */
874 kstat_named_t kmc_move_slabs_freed; /* slabs freed by consolidator */
875 kstat_named_t kmc_move_reclaimable; /* buffers, if consolidator ran */
876 } kmem_cache_kstat = {
877 { "buf_size", KSTAT_DATA_UINT64 },
878 { "align", KSTAT_DATA_UINT64 },
879 { "chunk_size", KSTAT_DATA_UINT64 },
880 { "slab_size", KSTAT_DATA_UINT64 },
881 { "alloc", KSTAT_DATA_UINT64 },
882 { "alloc_fail", KSTAT_DATA_UINT64 },
883 { "free", KSTAT_DATA_UINT64 },
884 { "depot_alloc", KSTAT_DATA_UINT64 },
885 { "depot_free", KSTAT_DATA_UINT64 },
886 { "depot_contention", KSTAT_DATA_UINT64 },
887 { "slab_alloc", KSTAT_DATA_UINT64 },
888 { "slab_free", KSTAT_DATA_UINT64 },
889 { "buf_constructed", KSTAT_DATA_UINT64 },
890 { "buf_avail", KSTAT_DATA_UINT64 },
891 { "buf_inuse", KSTAT_DATA_UINT64 },
892 { "buf_total", KSTAT_DATA_UINT64 },
893 { "buf_max", KSTAT_DATA_UINT64 },
894 { "slab_create", KSTAT_DATA_UINT64 },
895 { "slab_destroy", KSTAT_DATA_UINT64 },
896 { "vmem_source", KSTAT_DATA_UINT64 },
897 { "hash_size", KSTAT_DATA_UINT64 },
898 { "hash_lookup_depth", KSTAT_DATA_UINT64 },
899 { "hash_rescale", KSTAT_DATA_UINT64 },
900 { "full_magazines", KSTAT_DATA_UINT64 },
901 { "empty_magazines", KSTAT_DATA_UINT64 },
902 { "magazine_size", KSTAT_DATA_UINT64 },
903 { "reap", KSTAT_DATA_UINT64 },
904 { "defrag", KSTAT_DATA_UINT64 },
905 { "scan", KSTAT_DATA_UINT64 },
906 { "move_callbacks", KSTAT_DATA_UINT64 },
907 { "move_yes", KSTAT_DATA_UINT64 },
908 { "move_no", KSTAT_DATA_UINT64 },
909 { "move_later", KSTAT_DATA_UINT64 },
910 { "move_dont_need", KSTAT_DATA_UINT64 },
911 { "move_dont_know", KSTAT_DATA_UINT64 },
912 { "move_hunt_found", KSTAT_DATA_UINT64 },
913 { "move_slabs_freed", KSTAT_DATA_UINT64 },
914 { "move_reclaimable", KSTAT_DATA_UINT64 },
915 };
917 static kmutex_t kmem_cache_kstat_lock;
920 * The default set of caches to back kmem_alloc().
921 * These sizes should be reevaluated periodically.
923 * We want allocations that are multiples of the coherency granularity
924 * (64 bytes) to be satisfied from a cache which is a multiple of 64
925 * bytes, so that it will be 64-byte aligned. For all multiples of 64,
926 * the next kmem_cache_size greater than or equal to it must be a
927 * multiple of 64.
929 * We split the table into two sections: size <= 4k and size > 4k. This
930 * saves a lot of space and cache footprint in our cache tables.
932 static const int kmem_alloc_sizes[] = {
933 1 * 8,
934 2 * 8,
935 3 * 8,
936 4 * 8, 5 * 8, 6 * 8, 7 * 8,
937 4 * 16, 5 * 16, 6 * 16, 7 * 16,
938 4 * 32, 5 * 32, 6 * 32, 7 * 32,
939 4 * 64, 5 * 64, 6 * 64, 7 * 64,
940 4 * 128, 5 * 128, 6 * 128, 7 * 128,
941 P2ALIGN(8192 / 7, 64),
942 P2ALIGN(8192 / 6, 64),
943 P2ALIGN(8192 / 5, 64),
944 P2ALIGN(8192 / 4, 64),
945 P2ALIGN(8192 / 3, 64),
946 P2ALIGN(8192 / 2, 64),
947 };
949 static const int kmem_big_alloc_sizes[] = {
950 2 * 4096, 3 * 4096,
951 2 * 8192, 3 * 8192,
952 4 * 8192, 5 * 8192, 6 * 8192, 7 * 8192,
953 8 * 8192, 9 * 8192, 10 * 8192, 11 * 8192,
954 12 * 8192, 13 * 8192, 14 * 8192, 15 * 8192,
955 16 * 8192
956 };
958 #define KMEM_MAXBUF 4096
959 #define KMEM_BIG_MAXBUF_32BIT 32768
960 #define KMEM_BIG_MAXBUF 131072
962 #define KMEM_BIG_MULTIPLE 4096 /* big_alloc_sizes must be a multiple */
963 #define KMEM_BIG_SHIFT 12 /* lg(KMEM_BIG_MULTIPLE) */
965 static kmem_cache_t *kmem_alloc_table[KMEM_MAXBUF >> KMEM_ALIGN_SHIFT];
966 static kmem_cache_t *kmem_big_alloc_table[KMEM_BIG_MAXBUF >> KMEM_BIG_SHIFT];
968 #define KMEM_ALLOC_TABLE_MAX (KMEM_MAXBUF >> KMEM_ALIGN_SHIFT)
969 static size_t kmem_big_alloc_table_max = 0; /* # of filled elements */
971 static kmem_magtype_t kmem_magtype[] = {
972 { 1, 8, 3200, 65536 },
973 { 3, 16, 256, 32768 },
974 { 7, 32, 64, 16384 },
975 { 15, 64, 0, 8192 },
976 { 31, 64, 0, 4096 },
977 { 47, 64, 0, 2048 },
978 { 63, 64, 0, 1024 },
979 { 95, 64, 0, 512 },
980 { 143, 64, 0, 0 },
981 };
983 static uint32_t kmem_reaping;
984 static uint32_t kmem_reaping_idspace;
987 * kmem tunables
989 clock_t kmem_reap_interval; /* cache reaping rate [15 * HZ ticks] */
990 int kmem_depot_contention = 3; /* max failed tryenters per real interval */
991 pgcnt_t kmem_reapahead = 0; /* start reaping N pages before pageout */
992 int kmem_panic = 1; /* whether to panic on error */
993 int kmem_logging = 1; /* kmem_log_enter() override */
994 uint32_t kmem_mtbf = 0; /* mean time between failures [default: off] */
995 size_t kmem_transaction_log_size; /* transaction log size [2% of memory] */
996 size_t kmem_content_log_size; /* content log size [2% of memory] */
997 size_t kmem_failure_log_size; /* failure log [4 pages per CPU] */
998 size_t kmem_slab_log_size; /* slab create log [4 pages per CPU] */
999 size_t kmem_content_maxsave = 256; /* KMF_CONTENTS max bytes to log */
1000 size_t kmem_lite_minsize = 0; /* minimum buffer size for KMF_LITE */
1001 size_t kmem_lite_maxalign = 1024; /* maximum buffer alignment for KMF_LITE */
1002 int kmem_lite_pcs = 4; /* number of PCs to store in KMF_LITE mode */
1003 size_t kmem_maxverify; /* maximum bytes to inspect in debug routines */
1004 size_t kmem_minfirewall; /* hardware-enforced redzone threshold */
1006 #ifdef _LP64
1007 size_t kmem_max_cached = KMEM_BIG_MAXBUF; /* maximum kmem_alloc cache */
1008 #else
1009 size_t kmem_max_cached = KMEM_BIG_MAXBUF_32BIT; /* maximum kmem_alloc cache */
1010 #endif
1012 #ifdef DEBUG
1013 int kmem_flags = KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | KMF_CONTENTS;
1014 #else
1015 int kmem_flags = 0;
1016 #endif
1017 int kmem_ready;
1019 static kmem_cache_t *kmem_slab_cache;
1020 static kmem_cache_t *kmem_bufctl_cache;
1021 static kmem_cache_t *kmem_bufctl_audit_cache;
1023 static kmutex_t kmem_cache_lock; /* inter-cache linkage only */
1024 static list_t kmem_caches;
1026 static taskq_t *kmem_taskq;
1027 static kmutex_t kmem_flags_lock;
1028 static vmem_t *kmem_metadata_arena;
1029 static vmem_t *kmem_msb_arena; /* arena for metadata caches */
1030 static vmem_t *kmem_cache_arena;
1031 static vmem_t *kmem_hash_arena;
1032 static vmem_t *kmem_log_arena;
1033 static vmem_t *kmem_oversize_arena;
1034 static vmem_t *kmem_va_arena;
1035 static vmem_t *kmem_default_arena;
1036 static vmem_t *kmem_firewall_va_arena;
1037 static vmem_t *kmem_firewall_arena;
1040 * Define KMEM_STATS to turn on statistic gathering. By default, it is only
1041 * turned on when DEBUG is also defined.
1043 #ifdef DEBUG
1044 #define KMEM_STATS
1045 #endif /* DEBUG */
1047 #ifdef KMEM_STATS
1048 #define KMEM_STAT_ADD(stat) ((stat)++)
1049 #define KMEM_STAT_COND_ADD(cond, stat) ((void) (!(cond) || (stat)++))
1050 #else
1051 #define KMEM_STAT_ADD(stat) /* nothing */
1052 #define KMEM_STAT_COND_ADD(cond, stat) /* nothing */
1053 #endif /* KMEM_STATS */
1056 * kmem slab consolidator thresholds (tunables)
1058 size_t kmem_frag_minslabs = 101; /* minimum total slabs */
1059 size_t kmem_frag_numer = 1; /* free buffers (numerator) */
1060 size_t kmem_frag_denom = KMEM_VOID_FRACTION; /* buffers (denominator) */
1062 * Maximum number of slabs from which to move buffers during a single
1063 * maintenance interval while the system is not low on memory.
1065 size_t kmem_reclaim_max_slabs = 1;
1067 * Number of slabs to scan backwards from the end of the partial slab list
1068 * when searching for buffers to relocate.
1070 size_t kmem_reclaim_scan_range = 12;
1072 #ifdef KMEM_STATS
1073 static struct {
1074 uint64_t kms_callbacks;
1075 uint64_t kms_yes;
1076 uint64_t kms_no;
1077 uint64_t kms_later;
1078 uint64_t kms_dont_need;
1079 uint64_t kms_dont_know;
1080 uint64_t kms_hunt_found_mag;
1081 uint64_t kms_hunt_found_slab;
1082 uint64_t kms_hunt_alloc_fail;
1083 uint64_t kms_hunt_lucky;
1084 uint64_t kms_notify;
1085 uint64_t kms_notify_callbacks;
1086 uint64_t kms_disbelief;
1087 uint64_t kms_already_pending;
1088 uint64_t kms_callback_alloc_fail;
1089 uint64_t kms_callback_taskq_fail;
1090 uint64_t kms_endscan_slab_dead;
1091 uint64_t kms_endscan_slab_destroyed;
1092 uint64_t kms_endscan_nomem;
1093 uint64_t kms_endscan_refcnt_changed;
1094 uint64_t kms_endscan_nomove_changed;
1095 uint64_t kms_endscan_freelist;
1096 uint64_t kms_avl_update;
1097 uint64_t kms_avl_noupdate;
1098 uint64_t kms_no_longer_reclaimable;
1099 uint64_t kms_notify_no_longer_reclaimable;
1100 uint64_t kms_notify_slab_dead;
1101 uint64_t kms_notify_slab_destroyed;
1102 uint64_t kms_alloc_fail;
1103 uint64_t kms_constructor_fail;
1104 uint64_t kms_dead_slabs_freed;
1105 uint64_t kms_defrags;
1106 uint64_t kms_scans;
1107 uint64_t kms_scan_depot_ws_reaps;
1108 uint64_t kms_debug_reaps;
1109 uint64_t kms_debug_scans;
1110 } kmem_move_stats;
1111 #endif /* KMEM_STATS */
1113 /* consolidator knobs */
1114 static boolean_t kmem_move_noreap;
1115 static boolean_t kmem_move_blocked;
1116 static boolean_t kmem_move_fulltilt;
1117 static boolean_t kmem_move_any_partial;
1119 #ifdef DEBUG
1121 * kmem consolidator debug tunables:
1122 * Ensure code coverage by occasionally running the consolidator even when the
1123 * caches are not fragmented (they may never be). These intervals are mean time
1124 * in cache maintenance intervals (kmem_cache_update).
1126 uint32_t kmem_mtb_move = 60; /* defrag 1 slab (~15min) */
1127 uint32_t kmem_mtb_reap = 1800; /* defrag all slabs (~7.5hrs) */
1128 #endif /* DEBUG */
1130 static kmem_cache_t *kmem_defrag_cache;
1131 static kmem_cache_t *kmem_move_cache;
1132 static taskq_t *kmem_move_taskq;
1134 static void kmem_cache_scan(kmem_cache_t *);
1135 static void kmem_cache_defrag(kmem_cache_t *);
1136 static void kmem_slab_prefill(kmem_cache_t *, kmem_slab_t *);
1139 kmem_log_header_t *kmem_transaction_log;
1140 kmem_log_header_t *kmem_content_log;
1141 kmem_log_header_t *kmem_failure_log;
1142 kmem_log_header_t *kmem_slab_log;
1144 static int kmem_lite_count; /* # of PCs in kmem_buftag_lite_t */
1146 #define KMEM_BUFTAG_LITE_ENTER(bt, count, caller) \
1147 if ((count) > 0) { \
1148 pc_t *_s = ((kmem_buftag_lite_t *)(bt))->bt_history; \
1149 pc_t *_e; \
1150 /* memmove() the old entries down one notch */ \
1151 for (_e = &_s[(count) - 1]; _e > _s; _e--) \
1152 *_e = *(_e - 1); \
1153 *_s = (uintptr_t)(caller); \
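/*
 * Editorial note (not part of the original source): with kmem_lite_count
 * == 3 and an existing history of { A, B, C }, recording a new caller D
 * shifts the old entries down one slot and yields { D, A, B }, so
 * bt_history[0] always holds the most recent caller.  The values are
 * assumed purely for illustration.
 */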
1156 #define KMERR_MODIFIED 0 /* buffer modified while on freelist */
1157 #define KMERR_REDZONE 1 /* redzone violation (write past end of buf) */
1158 #define KMERR_DUPFREE 2 /* freed a buffer twice */
1159 #define KMERR_BADADDR 3 /* freed a bad (unallocated) address */
1160 #define KMERR_BADBUFTAG 4 /* buftag corrupted */
1161 #define KMERR_BADBUFCTL 5 /* bufctl corrupted */
1162 #define KMERR_BADCACHE 6 /* freed a buffer to the wrong cache */
1163 #define KMERR_BADSIZE 7 /* alloc size != free size */
1164 #define KMERR_BADBASE 8 /* buffer base address wrong */
1166 struct {
1167 hrtime_t kmp_timestamp; /* timestamp of panic */
1168 int kmp_error; /* type of kmem error */
1169 void *kmp_buffer; /* buffer that induced panic */
1170 void *kmp_realbuf; /* real start address for buffer */
1171 kmem_cache_t *kmp_cache; /* buffer's cache according to client */
1172 kmem_cache_t *kmp_realcache; /* actual cache containing buffer */
1173 kmem_slab_t *kmp_slab; /* slab according to kmem_findslab() */
1174 kmem_bufctl_t *kmp_bufctl; /* bufctl */
1175 } kmem_panic_info;
1178 static void
1179 copy_pattern(uint64_t pattern, void *buf_arg, size_t size)
1181 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1182 uint64_t *buf = buf_arg;
1184 while (buf < bufend)
1185 *buf++ = pattern;
1188 static void *
1189 verify_pattern(uint64_t pattern, void *buf_arg, size_t size)
1191 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1192 uint64_t *buf;
1194 for (buf = buf_arg; buf < bufend; buf++)
1195 if (*buf != pattern)
1196 return (buf);
1197 return (NULL);
1200 static void *
1201 verify_and_copy_pattern(uint64_t old, uint64_t new, void *buf_arg, size_t size)
1203 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1204 uint64_t *buf;
1206 for (buf = buf_arg; buf < bufend; buf++) {
1207 if (*buf != old) {
1208 copy_pattern(old, buf_arg,
1209 (char *)buf - (char *)buf_arg);
1210 return (buf);
1212 *buf = new;
1215 return (NULL);
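/*
 * Editorial sketch (not part of the original source) of how the pattern
 * helpers above are used by the KMF_DEADBEEF debugging code later in this
 * file: a buffer is filled with the free pattern when it is freed, and is
 * re-verified (and rewritten with the uninitialized pattern) when it is
 * next allocated:
 *
 *	copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);    (on free)
 *	...
 *	if (verify_and_copy_pattern(KMEM_FREE_PATTERN,             (on alloc)
 *	    KMEM_UNINITIALIZED_PATTERN, buf, cp->cache_verify) != NULL)
 *		kmem_error(KMERR_MODIFIED, cp, buf);
 */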
1218 static void
1219 kmem_cache_applyall(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1221 kmem_cache_t *cp;
1223 mutex_enter(&kmem_cache_lock);
1224 for (cp = list_head(&kmem_caches); cp != NULL;
1225 cp = list_next(&kmem_caches, cp))
1226 if (tq != NULL)
1227 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1228 tqflag);
1229 else
1230 func(cp);
1231 mutex_exit(&kmem_cache_lock);
1234 static void
1235 kmem_cache_applyall_id(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1237 kmem_cache_t *cp;
1239 mutex_enter(&kmem_cache_lock);
1240 for (cp = list_head(&kmem_caches); cp != NULL;
1241 cp = list_next(&kmem_caches, cp)) {
1242 if (!(cp->cache_cflags & KMC_IDENTIFIER))
1243 continue;
1244 if (tq != NULL)
1245 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1246 tqflag);
1247 else
1248 func(cp);
1250 mutex_exit(&kmem_cache_lock);
1254 * Debugging support. Given a buffer address, find its slab.
1256 static kmem_slab_t *
1257 kmem_findslab(kmem_cache_t *cp, void *buf)
1259 kmem_slab_t *sp;
1261 mutex_enter(&cp->cache_lock);
1262 for (sp = list_head(&cp->cache_complete_slabs); sp != NULL;
1263 sp = list_next(&cp->cache_complete_slabs, sp)) {
1264 if (KMEM_SLAB_MEMBER(sp, buf)) {
1265 mutex_exit(&cp->cache_lock);
1266 return (sp);
1269 for (sp = avl_first(&cp->cache_partial_slabs); sp != NULL;
1270 sp = AVL_NEXT(&cp->cache_partial_slabs, sp)) {
1271 if (KMEM_SLAB_MEMBER(sp, buf)) {
1272 mutex_exit(&cp->cache_lock);
1273 return (sp);
1276 mutex_exit(&cp->cache_lock);
1278 return (NULL);
1281 static void
1282 kmem_error(int error, kmem_cache_t *cparg, void *bufarg)
1284 kmem_buftag_t *btp = NULL;
1285 kmem_bufctl_t *bcp = NULL;
1286 kmem_cache_t *cp = cparg;
1287 kmem_slab_t *sp;
1288 uint64_t *off;
1289 void *buf = bufarg;
1291 kmem_logging = 0; /* stop logging when a bad thing happens */
1293 kmem_panic_info.kmp_timestamp = gethrtime();
1295 sp = kmem_findslab(cp, buf);
1296 if (sp == NULL) {
1297 for (cp = list_tail(&kmem_caches); cp != NULL;
1298 cp = list_prev(&kmem_caches, cp)) {
1299 if ((sp = kmem_findslab(cp, buf)) != NULL)
1300 break;
1304 if (sp == NULL) {
1305 cp = NULL;
1306 error = KMERR_BADADDR;
1307 } else {
1308 if (cp != cparg)
1309 error = KMERR_BADCACHE;
1310 else
1311 buf = (char *)bufarg - ((uintptr_t)bufarg -
1312 (uintptr_t)sp->slab_base) % cp->cache_chunksize;
1313 if (buf != bufarg)
1314 error = KMERR_BADBASE;
1315 if (cp->cache_flags & KMF_BUFTAG)
1316 btp = KMEM_BUFTAG(cp, buf);
1317 if (cp->cache_flags & KMF_HASH) {
1318 mutex_enter(&cp->cache_lock);
1319 for (bcp = *KMEM_HASH(cp, buf); bcp; bcp = bcp->bc_next)
1320 if (bcp->bc_addr == buf)
1321 break;
1322 mutex_exit(&cp->cache_lock);
1323 if (bcp == NULL && btp != NULL)
1324 bcp = btp->bt_bufctl;
1325 if (kmem_findslab(cp->cache_bufctl_cache, bcp) ==
1326 NULL || P2PHASE((uintptr_t)bcp, KMEM_ALIGN) ||
1327 bcp->bc_addr != buf) {
1328 error = KMERR_BADBUFCTL;
1329 bcp = NULL;
1334 kmem_panic_info.kmp_error = error;
1335 kmem_panic_info.kmp_buffer = bufarg;
1336 kmem_panic_info.kmp_realbuf = buf;
1337 kmem_panic_info.kmp_cache = cparg;
1338 kmem_panic_info.kmp_realcache = cp;
1339 kmem_panic_info.kmp_slab = sp;
1340 kmem_panic_info.kmp_bufctl = bcp;
1342 printf("kernel memory allocator: ");
1344 switch (error) {
1346 case KMERR_MODIFIED:
1347 printf("buffer modified after being freed\n");
1348 off = verify_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
1349 if (off == NULL) /* shouldn't happen */
1350 off = buf;
1351 printf("modification occurred at offset 0x%lx "
1352 "(0x%llx replaced by 0x%llx)\n",
1353 (uintptr_t)off - (uintptr_t)buf,
1354 (longlong_t)KMEM_FREE_PATTERN, (longlong_t)*off);
1355 break;
1357 case KMERR_REDZONE:
1358 printf("redzone violation: write past end of buffer\n");
1359 break;
1361 case KMERR_BADADDR:
1362 printf("invalid free: buffer not in cache\n");
1363 break;
1365 case KMERR_DUPFREE:
1366 printf("duplicate free: buffer freed twice\n");
1367 break;
1369 case KMERR_BADBUFTAG:
1370 printf("boundary tag corrupted\n");
1371 printf("bcp ^ bxstat = %lx, should be %lx\n",
1372 (intptr_t)btp->bt_bufctl ^ btp->bt_bxstat,
1373 KMEM_BUFTAG_FREE);
1374 break;
1376 case KMERR_BADBUFCTL:
1377 printf("bufctl corrupted\n");
1378 break;
1380 case KMERR_BADCACHE:
1381 printf("buffer freed to wrong cache\n");
1382 printf("buffer was allocated from %s,\n", cp->cache_name);
1383 printf("caller attempting free to %s.\n", cparg->cache_name);
1384 break;
1386 case KMERR_BADSIZE:
1387 printf("bad free: free size (%u) != alloc size (%u)\n",
1388 KMEM_SIZE_DECODE(((uint32_t *)btp)[0]),
1389 KMEM_SIZE_DECODE(((uint32_t *)btp)[1]));
1390 break;
1392 case KMERR_BADBASE:
1393 printf("bad free: free address (%p) != alloc address (%p)\n",
1394 bufarg, buf);
1395 break;
1398 printf("buffer=%p bufctl=%p cache: %s\n",
1399 bufarg, (void *)bcp, cparg->cache_name);
1401 if (bcp != NULL && (cp->cache_flags & KMF_AUDIT) &&
1402 error != KMERR_BADBUFCTL) {
1403 int d;
1404 timestruc_t ts;
1405 kmem_bufctl_audit_t *bcap = (kmem_bufctl_audit_t *)bcp;
1407 hrt2ts(kmem_panic_info.kmp_timestamp - bcap->bc_timestamp, &ts);
1408 printf("previous transaction on buffer %p:\n", buf);
1409 printf("thread=%p time=T-%ld.%09ld slab=%p cache: %s\n",
1410 (void *)bcap->bc_thread, ts.tv_sec, ts.tv_nsec,
1411 (void *)sp, cp->cache_name);
1412 for (d = 0; d < MIN(bcap->bc_depth, KMEM_STACK_DEPTH); d++) {
1413 ulong_t off;
1414 char *sym = kobj_getsymname(bcap->bc_stack[d], &off);
1415 printf("%s+%lx\n", sym ? sym : "?", off);
1418 if (kmem_panic > 0)
1419 panic("kernel heap corruption detected");
1420 if (kmem_panic == 0)
1421 debug_enter(NULL);
1422 kmem_logging = 1; /* resume logging */
1425 static kmem_log_header_t *
1426 kmem_log_init(size_t logsize)
1428 kmem_log_header_t *lhp;
1429 int nchunks = 4 * max_ncpus;
1430 size_t lhsize = (size_t)&((kmem_log_header_t *)0)->lh_cpu[max_ncpus];
1431 int i;
1434 * Make sure that lhp->lh_cpu[] is nicely aligned
1435 * to prevent false sharing of cache lines.
1437 lhsize = P2ROUNDUP(lhsize, KMEM_ALIGN);
1438 lhp = vmem_xalloc(kmem_log_arena, lhsize, 64, P2NPHASE(lhsize, 64), 0,
1439 NULL, NULL, VM_SLEEP);
1440 bzero(lhp, lhsize);
1442 mutex_init(&lhp->lh_lock, NULL, MUTEX_DEFAULT, NULL);
1443 lhp->lh_nchunks = nchunks;
1444 lhp->lh_chunksize = P2ROUNDUP(logsize / nchunks + 1, PAGESIZE);
1445 lhp->lh_base = vmem_alloc(kmem_log_arena,
1446 lhp->lh_chunksize * nchunks, VM_SLEEP);
1447 lhp->lh_free = vmem_alloc(kmem_log_arena,
1448 nchunks * sizeof (int), VM_SLEEP);
1449 bzero(lhp->lh_base, lhp->lh_chunksize * nchunks);
1451 for (i = 0; i < max_ncpus; i++) {
1452 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[i];
1453 mutex_init(&clhp->clh_lock, NULL, MUTEX_DEFAULT, NULL);
1454 clhp->clh_chunk = i;
1457 for (i = max_ncpus; i < nchunks; i++)
1458 lhp->lh_free[i] = i;
1460 lhp->lh_head = max_ncpus;
1461 lhp->lh_tail = 0;
1463 return (lhp);
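/*
 * Worked example (editorial; the figures are assumed purely for
 * illustration): with max_ncpus == 8 and logsize == 2MB, nchunks is
 * 4 * 8 == 32 and, assuming PAGESIZE == 4096, lh_chunksize is
 * P2ROUNDUP(2097152 / 32 + 1, 4096) == 68K, so the log's backing store is
 * 32 * 68K == 2176K.  Each CPU starts out owning its own chunk
 * (clh_chunk == cpu_seqid); the remaining 24 chunks sit on the free list
 * and are rotated through the CPUs by kmem_log_enter() as chunks fill up.
 */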
1466 static void *
1467 kmem_log_enter(kmem_log_header_t *lhp, void *data, size_t size)
1469 void *logspace;
1470 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[CPU->cpu_seqid];
1472 if (lhp == NULL || kmem_logging == 0 || panicstr)
1473 return (NULL);
1475 mutex_enter(&clhp->clh_lock);
1476 clhp->clh_hits++;
1477 if (size > clhp->clh_avail) {
1478 mutex_enter(&lhp->lh_lock);
1479 lhp->lh_hits++;
1480 lhp->lh_free[lhp->lh_tail] = clhp->clh_chunk;
1481 lhp->lh_tail = (lhp->lh_tail + 1) % lhp->lh_nchunks;
1482 clhp->clh_chunk = lhp->lh_free[lhp->lh_head];
1483 lhp->lh_head = (lhp->lh_head + 1) % lhp->lh_nchunks;
1484 clhp->clh_current = lhp->lh_base +
1485 clhp->clh_chunk * lhp->lh_chunksize;
1486 clhp->clh_avail = lhp->lh_chunksize;
1487 if (size > lhp->lh_chunksize)
1488 size = lhp->lh_chunksize;
1489 mutex_exit(&lhp->lh_lock);
1491 logspace = clhp->clh_current;
1492 clhp->clh_current += size;
1493 clhp->clh_avail -= size;
1494 bcopy(data, logspace, size);
1495 mutex_exit(&clhp->clh_lock);
1496 return (logspace);
1499 #define KMEM_AUDIT(lp, cp, bcp) \
1501 kmem_bufctl_audit_t *_bcp = (kmem_bufctl_audit_t *)(bcp); \
1502 _bcp->bc_timestamp = gethrtime(); \
1503 _bcp->bc_thread = curthread; \
1504 _bcp->bc_depth = getpcstack(_bcp->bc_stack, KMEM_STACK_DEPTH); \
1505 _bcp->bc_lastlog = kmem_log_enter((lp), _bcp, sizeof (*_bcp)); \
1508 static void
1509 kmem_log_event(kmem_log_header_t *lp, kmem_cache_t *cp,
1510 kmem_slab_t *sp, void *addr)
1512 kmem_bufctl_audit_t bca;
1514 bzero(&bca, sizeof (kmem_bufctl_audit_t));
1515 bca.bc_addr = addr;
1516 bca.bc_slab = sp;
1517 bca.bc_cache = cp;
1518 KMEM_AUDIT(lp, cp, &bca);
1522 * Create a new slab for cache cp.
1524 static kmem_slab_t *
1525 kmem_slab_create(kmem_cache_t *cp, int kmflag)
1527 size_t slabsize = cp->cache_slabsize;
1528 size_t chunksize = cp->cache_chunksize;
1529 int cache_flags = cp->cache_flags;
1530 size_t color, chunks;
1531 char *buf, *slab;
1532 kmem_slab_t *sp;
1533 kmem_bufctl_t *bcp;
1534 vmem_t *vmp = cp->cache_arena;
1536 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
1538 color = cp->cache_color + cp->cache_align;
1539 if (color > cp->cache_maxcolor)
1540 color = cp->cache_mincolor;
1541 cp->cache_color = color;
1543 slab = vmem_alloc(vmp, slabsize, kmflag & KM_VMFLAGS);
1545 if (slab == NULL)
1546 goto vmem_alloc_failure;
1548 ASSERT(P2PHASE((uintptr_t)slab, vmp->vm_quantum) == 0);
1551 * Reverify what was already checked in kmem_cache_set_move(), since the
1552 * consolidator depends (for correctness) on slabs being initialized
1553 * with the 0xbaddcafe memory pattern (setting a low order bit usable by
1554 * clients to distinguish uninitialized memory from known objects).
1556 ASSERT((cp->cache_move == NULL) || !(cp->cache_cflags & KMC_NOTOUCH));
1557 if (!(cp->cache_cflags & KMC_NOTOUCH))
1558 copy_pattern(KMEM_UNINITIALIZED_PATTERN, slab, slabsize);
1560 if (cache_flags & KMF_HASH) {
1561 if ((sp = kmem_cache_alloc(kmem_slab_cache, kmflag)) == NULL)
1562 goto slab_alloc_failure;
1563 chunks = (slabsize - color) / chunksize;
1564 } else {
1565 sp = KMEM_SLAB(cp, slab);
1566 chunks = (slabsize - sizeof (kmem_slab_t) - color) / chunksize;
1569 sp->slab_cache = cp;
1570 sp->slab_head = NULL;
1571 sp->slab_refcnt = 0;
1572 sp->slab_base = buf = slab + color;
1573 sp->slab_chunks = chunks;
1574 sp->slab_stuck_offset = (uint32_t)-1;
1575 sp->slab_later_count = 0;
1576 sp->slab_flags = 0;
1578 ASSERT(chunks > 0);
1579 while (chunks-- != 0) {
1580 if (cache_flags & KMF_HASH) {
1581 bcp = kmem_cache_alloc(cp->cache_bufctl_cache, kmflag);
1582 if (bcp == NULL)
1583 goto bufctl_alloc_failure;
1584 if (cache_flags & KMF_AUDIT) {
1585 kmem_bufctl_audit_t *bcap =
1586 (kmem_bufctl_audit_t *)bcp;
1587 bzero(bcap, sizeof (kmem_bufctl_audit_t));
1588 bcap->bc_cache = cp;
1590 bcp->bc_addr = buf;
1591 bcp->bc_slab = sp;
1592 } else {
1593 bcp = KMEM_BUFCTL(cp, buf);
1595 if (cache_flags & KMF_BUFTAG) {
1596 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
1597 btp->bt_redzone = KMEM_REDZONE_PATTERN;
1598 btp->bt_bufctl = bcp;
1599 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE;
1600 if (cache_flags & KMF_DEADBEEF) {
1601 copy_pattern(KMEM_FREE_PATTERN, buf,
1602 cp->cache_verify);
1605 bcp->bc_next = sp->slab_head;
1606 sp->slab_head = bcp;
1607 buf += chunksize;
1610 kmem_log_event(kmem_slab_log, cp, sp, slab);
1612 return (sp);
1614 bufctl_alloc_failure:
1616 while ((bcp = sp->slab_head) != NULL) {
1617 sp->slab_head = bcp->bc_next;
1618 kmem_cache_free(cp->cache_bufctl_cache, bcp);
1620 kmem_cache_free(kmem_slab_cache, sp);
1622 slab_alloc_failure:
1624 vmem_free(vmp, slab, slabsize);
1626 vmem_alloc_failure:
1628 kmem_log_event(kmem_failure_log, cp, NULL, NULL);
1629 atomic_inc_64(&cp->cache_alloc_fail);
1631 return (NULL);
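/*
 * Editorial note on the slab coloring above (values assumed purely for
 * illustration): successive slabs start their buffers at offsets that
 * advance by cache_align and wrap back to cache_mincolor once
 * cache_maxcolor is exceeded.  With cache_align == 8, cache_mincolor == 0
 * and cache_maxcolor == 24, consecutive slabs are colored 8, 16, 24, 0,
 * 8, ... so that corresponding buffers in different slabs do not all land
 * on the same cache lines.
 */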
1635 * Destroy a slab.
1637 static void
1638 kmem_slab_destroy(kmem_cache_t *cp, kmem_slab_t *sp)
1640 vmem_t *vmp = cp->cache_arena;
1641 void *slab = (void *)P2ALIGN((uintptr_t)sp->slab_base, vmp->vm_quantum);
1643 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
1644 ASSERT(sp->slab_refcnt == 0);
1646 if (cp->cache_flags & KMF_HASH) {
1647 kmem_bufctl_t *bcp;
1648 while ((bcp = sp->slab_head) != NULL) {
1649 sp->slab_head = bcp->bc_next;
1650 kmem_cache_free(cp->cache_bufctl_cache, bcp);
1652 kmem_cache_free(kmem_slab_cache, sp);
1654 vmem_free(vmp, slab, cp->cache_slabsize);
1657 static void *
1658 kmem_slab_alloc_impl(kmem_cache_t *cp, kmem_slab_t *sp, boolean_t prefill)
1660 kmem_bufctl_t *bcp, **hash_bucket;
1661 void *buf;
1662 boolean_t new_slab = (sp->slab_refcnt == 0);
1664 ASSERT(MUTEX_HELD(&cp->cache_lock));
1666 * kmem_slab_alloc() drops cache_lock when it creates a new slab, so we
1667 * can't ASSERT(avl_is_empty(&cp->cache_partial_slabs)) here when the
1668 * slab is newly created.
1670 ASSERT(new_slab || (KMEM_SLAB_IS_PARTIAL(sp) &&
1671 (sp == avl_first(&cp->cache_partial_slabs))));
1672 ASSERT(sp->slab_cache == cp);
1674 cp->cache_slab_alloc++;
1675 cp->cache_bufslab--;
1676 sp->slab_refcnt++;
1678 bcp = sp->slab_head;
1679 sp->slab_head = bcp->bc_next;
1681 if (cp->cache_flags & KMF_HASH) {
1683 * Add buffer to allocated-address hash table.
1685 buf = bcp->bc_addr;
1686 hash_bucket = KMEM_HASH(cp, buf);
1687 bcp->bc_next = *hash_bucket;
1688 *hash_bucket = bcp;
1689 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) {
1690 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
1692 } else {
1693 buf = KMEM_BUF(cp, bcp);
1696 ASSERT(KMEM_SLAB_MEMBER(sp, buf));
1698 if (sp->slab_head == NULL) {
1699 ASSERT(KMEM_SLAB_IS_ALL_USED(sp));
1700 if (new_slab) {
1701 ASSERT(sp->slab_chunks == 1);
1702 } else {
1703 ASSERT(sp->slab_chunks > 1); /* the slab was partial */
1704 avl_remove(&cp->cache_partial_slabs, sp);
1705 sp->slab_later_count = 0; /* clear history */
1706 sp->slab_flags &= ~KMEM_SLAB_NOMOVE;
1707 sp->slab_stuck_offset = (uint32_t)-1;
1709 list_insert_head(&cp->cache_complete_slabs, sp);
1710 cp->cache_complete_slab_count++;
1711 return (buf);
1714 ASSERT(KMEM_SLAB_IS_PARTIAL(sp));
1716 * Peek to see if the magazine layer is enabled before
1717 * we prefill. We're not holding the cpu cache lock,
1718 * so the peek could be wrong, but there's no harm in it.
1720 if (new_slab && prefill && (cp->cache_flags & KMF_PREFILL) &&
1721 (KMEM_CPU_CACHE(cp)->cc_magsize != 0)) {
1722 kmem_slab_prefill(cp, sp);
1723 return (buf);
1726 if (new_slab) {
1727 avl_add(&cp->cache_partial_slabs, sp);
1728 return (buf);
1732 * The slab is now more allocated than it was, so the
1733 * order remains unchanged.
1735 ASSERT(!avl_update(&cp->cache_partial_slabs, sp));
1736 return (buf);
1740 * Allocate a raw (unconstructed) buffer from cp's slab layer.
1742 static void *
1743 kmem_slab_alloc(kmem_cache_t *cp, int kmflag)
1745 kmem_slab_t *sp;
1746 void *buf;
1747 boolean_t test_destructor;
1749 mutex_enter(&cp->cache_lock);
1750 test_destructor = (cp->cache_slab_alloc == 0);
1751 sp = avl_first(&cp->cache_partial_slabs);
1752 if (sp == NULL) {
1753 ASSERT(cp->cache_bufslab == 0);
1756 * The freelist is empty. Create a new slab.
1758 mutex_exit(&cp->cache_lock);
1759 if ((sp = kmem_slab_create(cp, kmflag)) == NULL) {
1760 return (NULL);
1762 mutex_enter(&cp->cache_lock);
1763 cp->cache_slab_create++;
1764 if ((cp->cache_buftotal += sp->slab_chunks) > cp->cache_bufmax)
1765 cp->cache_bufmax = cp->cache_buftotal;
1766 cp->cache_bufslab += sp->slab_chunks;
1769 buf = kmem_slab_alloc_impl(cp, sp, B_TRUE);
1770 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) ==
1771 (cp->cache_complete_slab_count +
1772 avl_numnodes(&cp->cache_partial_slabs) +
1773 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount)));
1774 mutex_exit(&cp->cache_lock);
1776 if (test_destructor && cp->cache_destructor != NULL) {
1778 * On the first kmem_slab_alloc(), assert that it is valid to
1779 * call the destructor on a newly constructed object without any
1780 * client involvement.
1782 if ((cp->cache_constructor == NULL) ||
1783 cp->cache_constructor(buf, cp->cache_private,
1784 kmflag) == 0) {
1785 cp->cache_destructor(buf, cp->cache_private);
1787 copy_pattern(KMEM_UNINITIALIZED_PATTERN, buf,
1788 cp->cache_bufsize);
1789 if (cp->cache_flags & KMF_DEADBEEF) {
1790 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
1794 return (buf);
1797 static void kmem_slab_move_yes(kmem_cache_t *, kmem_slab_t *, void *);
1800 * Free a raw (unconstructed) buffer to cp's slab layer.
1802 static void
1803 kmem_slab_free(kmem_cache_t *cp, void *buf)
1805 kmem_slab_t *sp;
1806 kmem_bufctl_t *bcp, **prev_bcpp;
1808 ASSERT(buf != NULL);
1810 mutex_enter(&cp->cache_lock);
1811 cp->cache_slab_free++;
1813 if (cp->cache_flags & KMF_HASH) {
1815 * Look up buffer in allocated-address hash table.
1817 prev_bcpp = KMEM_HASH(cp, buf);
1818 while ((bcp = *prev_bcpp) != NULL) {
1819 if (bcp->bc_addr == buf) {
1820 *prev_bcpp = bcp->bc_next;
1821 sp = bcp->bc_slab;
1822 break;
1824 cp->cache_lookup_depth++;
1825 prev_bcpp = &bcp->bc_next;
1827 } else {
1828 bcp = KMEM_BUFCTL(cp, buf);
1829 sp = KMEM_SLAB(cp, buf);
1832 if (bcp == NULL || sp->slab_cache != cp || !KMEM_SLAB_MEMBER(sp, buf)) {
1833 mutex_exit(&cp->cache_lock);
1834 kmem_error(KMERR_BADADDR, cp, buf);
1835 return;
1838 if (KMEM_SLAB_OFFSET(sp, buf) == sp->slab_stuck_offset) {
1840 * If this is the buffer that prevented the consolidator from
1841 * clearing the slab, we can reset the slab flags now that the
1842 * buffer is freed. (It makes sense to do this in
1843 * kmem_cache_free(), where the client gives up ownership of the
1844 * buffer, but on the hot path the test is too expensive.)
1846 kmem_slab_move_yes(cp, sp, buf);
1849 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) {
1850 if (cp->cache_flags & KMF_CONTENTS)
1851 ((kmem_bufctl_audit_t *)bcp)->bc_contents =
1852 kmem_log_enter(kmem_content_log, buf,
1853 cp->cache_contents);
1854 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
1857 bcp->bc_next = sp->slab_head;
1858 sp->slab_head = bcp;
1860 cp->cache_bufslab++;
1861 ASSERT(sp->slab_refcnt >= 1);
1863 if (--sp->slab_refcnt == 0) {
1865 * There are no outstanding allocations from this slab,
1866 * so we can reclaim the memory.
1868 if (sp->slab_chunks == 1) {
1869 list_remove(&cp->cache_complete_slabs, sp);
1870 cp->cache_complete_slab_count--;
1871 } else {
1872 avl_remove(&cp->cache_partial_slabs, sp);
1875 cp->cache_buftotal -= sp->slab_chunks;
1876 cp->cache_bufslab -= sp->slab_chunks;
1878 * Defer releasing the slab to the virtual memory subsystem
1879 * while there is a pending move callback, since we guarantee
1880 * that buffers passed to the move callback have only been
1881 * touched by kmem or by the client itself. Since the memory
1882 * patterns baddcafe (uninitialized) and deadbeef (freed) both
1883 * set at least one of the two lowest order bits, the client can
1884 * test those bits in the move callback to determine whether or
1885 * not it knows about the buffer (assuming that the client also
1886 * sets one of those low order bits whenever it frees a buffer).
1888 if (cp->cache_defrag == NULL ||
1889 (avl_is_empty(&cp->cache_defrag->kmd_moves_pending) &&
1890 !(sp->slab_flags & KMEM_SLAB_MOVE_PENDING))) {
1891 cp->cache_slab_destroy++;
1892 mutex_exit(&cp->cache_lock);
1893 kmem_slab_destroy(cp, sp);
1894 } else {
1895 list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
1897 * Slabs are inserted at both ends of the deadlist to
1898 * distinguish between slabs freed while move callbacks
1899 * are pending (list head) and a slab freed while the
1900 * lock is dropped in kmem_move_buffers() (list tail) so
1901 * that in both cases slab_destroy() is called from the
1902 * right context.
1904 if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) {
1905 list_insert_tail(deadlist, sp);
1906 } else {
1907 list_insert_head(deadlist, sp);
1909 cp->cache_defrag->kmd_deadcount++;
1910 mutex_exit(&cp->cache_lock);
1912 return;
1915 if (bcp->bc_next == NULL) {
1916 /* Transition the slab from completely allocated to partial. */
1917 ASSERT(sp->slab_refcnt == (sp->slab_chunks - 1));
1918 ASSERT(sp->slab_chunks > 1);
1919 list_remove(&cp->cache_complete_slabs, sp);
1920 cp->cache_complete_slab_count--;
1921 avl_add(&cp->cache_partial_slabs, sp);
1922 } else {
1923 #ifdef DEBUG
1924 if (avl_update_gt(&cp->cache_partial_slabs, sp)) {
1925 KMEM_STAT_ADD(kmem_move_stats.kms_avl_update);
1926 } else {
1927 KMEM_STAT_ADD(kmem_move_stats.kms_avl_noupdate);
1929 #else
1930 (void) avl_update_gt(&cp->cache_partial_slabs, sp);
1931 #endif
1934 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) ==
1935 (cp->cache_complete_slab_count +
1936 avl_numnodes(&cp->cache_partial_slabs) +
1937 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount)));
1938 mutex_exit(&cp->cache_lock);
1942 * Return -1 if kmem_error, 1 if constructor fails, 0 if successful.
1944 static int
1945 kmem_cache_alloc_debug(kmem_cache_t *cp, void *buf, int kmflag, int construct,
1946 caddr_t caller)
1948 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
1949 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl;
1950 uint32_t mtbf;
1952 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) {
1953 kmem_error(KMERR_BADBUFTAG, cp, buf);
1954 return (-1);
1957 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_ALLOC;
1959 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) {
1960 kmem_error(KMERR_BADBUFCTL, cp, buf);
1961 return (-1);
1964 if (cp->cache_flags & KMF_DEADBEEF) {
1965 if (!construct && (cp->cache_flags & KMF_LITE)) {
1966 if (*(uint64_t *)buf != KMEM_FREE_PATTERN) {
1967 kmem_error(KMERR_MODIFIED, cp, buf);
1968 return (-1);
1970 if (cp->cache_constructor != NULL)
1971 *(uint64_t *)buf = btp->bt_redzone;
1972 else
1973 *(uint64_t *)buf = KMEM_UNINITIALIZED_PATTERN;
1974 } else {
1975 construct = 1;
1976 if (verify_and_copy_pattern(KMEM_FREE_PATTERN,
1977 KMEM_UNINITIALIZED_PATTERN, buf,
1978 cp->cache_verify)) {
1979 kmem_error(KMERR_MODIFIED, cp, buf);
1980 return (-1);
1984 btp->bt_redzone = KMEM_REDZONE_PATTERN;
1986 if ((mtbf = kmem_mtbf | cp->cache_mtbf) != 0 &&
1987 gethrtime() % mtbf == 0 &&
1988 (kmflag & (KM_NOSLEEP | KM_PANIC)) == KM_NOSLEEP) {
1989 kmem_log_event(kmem_failure_log, cp, NULL, NULL);
1990 if (!construct && cp->cache_destructor != NULL)
1991 cp->cache_destructor(buf, cp->cache_private);
1992 } else {
1993 mtbf = 0;
1996 if (mtbf || (construct && cp->cache_constructor != NULL &&
1997 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0)) {
1998 atomic_inc_64(&cp->cache_alloc_fail);
1999 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE;
2000 if (cp->cache_flags & KMF_DEADBEEF)
2001 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
2002 kmem_slab_free(cp, buf);
2003 return (1);
2006 if (cp->cache_flags & KMF_AUDIT) {
2007 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
2010 if ((cp->cache_flags & KMF_LITE) &&
2011 !(cp->cache_cflags & KMC_KMEM_ALLOC)) {
2012 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller);
2015 return (0);
2018 static int
2019 kmem_cache_free_debug(kmem_cache_t *cp, void *buf, caddr_t caller)
2021 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2022 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl;
2023 kmem_slab_t *sp;
2025 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_ALLOC)) {
2026 if (btp->bt_bxstat == ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) {
2027 kmem_error(KMERR_DUPFREE, cp, buf);
2028 return (-1);
2030 sp = kmem_findslab(cp, buf);
2031 if (sp == NULL || sp->slab_cache != cp)
2032 kmem_error(KMERR_BADADDR, cp, buf);
2033 else
2034 kmem_error(KMERR_REDZONE, cp, buf);
2035 return (-1);
2038 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE;
2040 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) {
2041 kmem_error(KMERR_BADBUFCTL, cp, buf);
2042 return (-1);
2045 if (btp->bt_redzone != KMEM_REDZONE_PATTERN) {
2046 kmem_error(KMERR_REDZONE, cp, buf);
2047 return (-1);
2050 if (cp->cache_flags & KMF_AUDIT) {
2051 if (cp->cache_flags & KMF_CONTENTS)
2052 bcp->bc_contents = kmem_log_enter(kmem_content_log,
2053 buf, cp->cache_contents);
2054 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
2057 if ((cp->cache_flags & KMF_LITE) &&
2058 !(cp->cache_cflags & KMC_KMEM_ALLOC)) {
2059 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller);
2062 if (cp->cache_flags & KMF_DEADBEEF) {
2063 if (cp->cache_flags & KMF_LITE)
2064 btp->bt_redzone = *(uint64_t *)buf;
2065 else if (cp->cache_destructor != NULL)
2066 cp->cache_destructor(buf, cp->cache_private);
2068 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
2071 return (0);
2075 * Free each object in magazine mp to cp's slab layer, and free mp itself.
2077 static void
2078 kmem_magazine_destroy(kmem_cache_t *cp, kmem_magazine_t *mp, int nrounds)
2080 int round;
2082 ASSERT(!list_link_active(&cp->cache_link) ||
2083 taskq_member(kmem_taskq, curthread));
2085 for (round = 0; round < nrounds; round++) {
2086 void *buf = mp->mag_round[round];
2088 if (cp->cache_flags & KMF_DEADBEEF) {
2089 if (verify_pattern(KMEM_FREE_PATTERN, buf,
2090 cp->cache_verify) != NULL) {
2091 kmem_error(KMERR_MODIFIED, cp, buf);
2092 continue;
2094 if ((cp->cache_flags & KMF_LITE) &&
2095 cp->cache_destructor != NULL) {
2096 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2097 *(uint64_t *)buf = btp->bt_redzone;
2098 cp->cache_destructor(buf, cp->cache_private);
2099 *(uint64_t *)buf = KMEM_FREE_PATTERN;
2101 } else if (cp->cache_destructor != NULL) {
2102 cp->cache_destructor(buf, cp->cache_private);
2105 kmem_slab_free(cp, buf);
2107 ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
2108 kmem_cache_free(cp->cache_magtype->mt_cache, mp);
2112 * Allocate a magazine from the depot.
2114 static kmem_magazine_t *
2115 kmem_depot_alloc(kmem_cache_t *cp, kmem_maglist_t *mlp)
2117 kmem_magazine_t *mp;
2120 * If we can't get the depot lock without contention,
2121 * update our contention count. We use the depot
2122 * contention rate to determine whether we need to
2123 * increase the magazine size for better scalability.
2125 if (!mutex_tryenter(&cp->cache_depot_lock)) {
2126 mutex_enter(&cp->cache_depot_lock);
2127 cp->cache_depot_contention++;
2130 if ((mp = mlp->ml_list) != NULL) {
2131 ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
2132 mlp->ml_list = mp->mag_next;
2133 if (--mlp->ml_total < mlp->ml_min)
2134 mlp->ml_min = mlp->ml_total;
2135 mlp->ml_alloc++;
2138 mutex_exit(&cp->cache_depot_lock);
2140 return (mp);
2144 * Free a magazine to the depot.
2146 static void
2147 kmem_depot_free(kmem_cache_t *cp, kmem_maglist_t *mlp, kmem_magazine_t *mp)
2149 mutex_enter(&cp->cache_depot_lock);
2150 ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
2151 mp->mag_next = mlp->ml_list;
2152 mlp->ml_list = mp;
2153 mlp->ml_total++;
2154 mutex_exit(&cp->cache_depot_lock);
2158 * Update the working set statistics for cp's depot.
2160 static void
2161 kmem_depot_ws_update(kmem_cache_t *cp)
2163 mutex_enter(&cp->cache_depot_lock);
2164 cp->cache_full.ml_reaplimit = cp->cache_full.ml_min;
2165 cp->cache_full.ml_min = cp->cache_full.ml_total;
2166 cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_min;
2167 cp->cache_empty.ml_min = cp->cache_empty.ml_total;
2168 mutex_exit(&cp->cache_depot_lock);
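/*
 * Worked example (editorial; the numbers are assumed): if the full-
 * magazine list finished the previous interval with ml_total == 10 after
 * dipping to ml_min == 4, then 4 magazines were present the whole time
 * and were never needed.  The update above sets ml_reaplimit = 4 and
 * restarts ml_min at 10, so the next kmem_depot_ws_reap() may release up
 * to MIN(ml_reaplimit, ml_min) magazines.  Calling the update twice in a
 * row drives both values to ml_total, which is how the purge and
 * reap-now paths below make the entire depot eligible for reaping.
 */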
2172 * Reap all magazines that have fallen out of the depot's working set.
2174 static void
2175 kmem_depot_ws_reap(kmem_cache_t *cp)
2177 long reap;
2178 kmem_magazine_t *mp;
2180 ASSERT(!list_link_active(&cp->cache_link) ||
2181 taskq_member(kmem_taskq, curthread));
2183 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
2184 while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_full)) != NULL)
2185 kmem_magazine_destroy(cp, mp, cp->cache_magtype->mt_magsize);
2187 reap = MIN(cp->cache_empty.ml_reaplimit, cp->cache_empty.ml_min);
2188 while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_empty)) != NULL)
2189 kmem_magazine_destroy(cp, mp, 0);
2192 static void
2193 kmem_cpu_reload(kmem_cpu_cache_t *ccp, kmem_magazine_t *mp, int rounds)
2195 ASSERT((ccp->cc_loaded == NULL && ccp->cc_rounds == -1) ||
2196 (ccp->cc_loaded && ccp->cc_rounds + rounds == ccp->cc_magsize));
2197 ASSERT(ccp->cc_magsize > 0);
2199 ccp->cc_ploaded = ccp->cc_loaded;
2200 ccp->cc_prounds = ccp->cc_rounds;
2201 ccp->cc_loaded = mp;
2202 ccp->cc_rounds = rounds;
2206 * Intercept kmem alloc/free calls during crash dump in order to avoid
2207 * changing kmem state while memory is being saved to the dump device.
2208 * Otherwise, ::kmem_verify will report "corrupt buffers". Note that
2209 * there are no locks because only one CPU calls kmem during a crash
2210 * dump. To enable this feature, first create the associated vmem
2211 * arena with VMC_DUMPSAFE.
2213 static void *kmem_dump_start; /* start of pre-reserved heap */
2214 static void *kmem_dump_end; /* end of heap area */
2215 static void *kmem_dump_curr; /* current free heap pointer */
2216 static size_t kmem_dump_size; /* size of heap area */
2218 /* append to each buf created in the pre-reserved heap */
2219 typedef struct kmem_dumpctl {
2220 void *kdc_next; /* cache dump free list linkage */
2221 } kmem_dumpctl_t;
2223 #define KMEM_DUMPCTL(cp, buf) \
2224 ((kmem_dumpctl_t *)P2ROUNDUP((uintptr_t)(buf) + (cp)->cache_bufsize, \
2225 sizeof (void *)))
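/*
 * Editorial sketch of the layout implied by KMEM_DUMPCTL() (sizes assumed
 * for illustration): for a cache with cache_bufsize == 40 on a 64-bit
 * kernel, a dump-time buffer at 8-byte-aligned address A is followed by
 * its kmem_dumpctl_t at P2ROUNDUP(A + 40, 8) == A + 40, i.e. the
 * free-list link occupies the pointer-aligned slot just past the
 * client-visible buffer, which is why kmem_cache_alloc_dump() reserves
 * space through KMEM_DUMPCTL(cp, buf) + sizeof (kmem_dumpctl_t).
 */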
2227 /* Keep some simple stats. */
2228 #define KMEM_DUMP_LOGS (100)
2230 typedef struct kmem_dump_log {
2231 kmem_cache_t *kdl_cache;
2232 uint_t kdl_allocs; /* # of dump allocations */
2233 uint_t kdl_frees; /* # of dump frees */
2234 uint_t kdl_alloc_fails; /* # of allocation failures */
2235 uint_t kdl_free_nondump; /* # of non-dump frees */
2236 uint_t kdl_unsafe; /* cache was used, but unsafe */
2237 } kmem_dump_log_t;
2239 static kmem_dump_log_t *kmem_dump_log;
2240 static int kmem_dump_log_idx;
2242 #define KDI_LOG(cp, stat) { \
2243 kmem_dump_log_t *kdl; \
2244 if ((kdl = (kmem_dump_log_t *)((cp)->cache_dumplog)) != NULL) { \
2245 kdl->stat++; \
2246 } else if (kmem_dump_log_idx < KMEM_DUMP_LOGS) { \
2247 kdl = &kmem_dump_log[kmem_dump_log_idx++]; \
2248 kdl->stat++; \
2249 kdl->kdl_cache = (cp); \
2250 (cp)->cache_dumplog = kdl; \
2254 /* set non-zero for full report */
2255 uint_t kmem_dump_verbose = 0;
2257 /* stats for oversize heap */
2258 uint_t kmem_dump_oversize_allocs = 0;
2259 uint_t kmem_dump_oversize_max = 0;
2261 static void
2262 kmem_dumppr(char **pp, char *e, const char *format, ...)
2264 char *p = *pp;
2266 if (p < e) {
2267 int n;
2268 va_list ap;
2270 va_start(ap, format);
2271 n = vsnprintf(p, e - p, format, ap);
2272 va_end(ap);
2273 *pp = p + n;
2278 * Called when dumpadm(1M) configures dump parameters.
2280 void
2281 kmem_dump_init(size_t size)
2283 if (kmem_dump_start != NULL)
2284 kmem_free(kmem_dump_start, kmem_dump_size);
2286 if (kmem_dump_log == NULL)
2287 kmem_dump_log = (kmem_dump_log_t *)kmem_zalloc(KMEM_DUMP_LOGS *
2288 sizeof (kmem_dump_log_t), KM_SLEEP);
2290 kmem_dump_start = kmem_alloc(size, KM_SLEEP);
2292 if (kmem_dump_start != NULL) {
2293 kmem_dump_size = size;
2294 kmem_dump_curr = kmem_dump_start;
2295 kmem_dump_end = (void *)((char *)kmem_dump_start + size);
2296 copy_pattern(KMEM_UNINITIALIZED_PATTERN, kmem_dump_start, size);
2297 } else {
2298 kmem_dump_size = 0;
2299 kmem_dump_curr = NULL;
2300 kmem_dump_end = NULL;
2305 * Set flag for each kmem_cache_t if it is safe to use alternate dump
2306 * memory. Called just before panic crash dump starts. Set the flag
2307 * for the calling CPU.
2309 void
2310 kmem_dump_begin(void)
2312 ASSERT(panicstr != NULL);
2313 if (kmem_dump_start != NULL) {
2314 kmem_cache_t *cp;
2316 for (cp = list_head(&kmem_caches); cp != NULL;
2317 cp = list_next(&kmem_caches, cp)) {
2318 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2320 if (cp->cache_arena->vm_cflags & VMC_DUMPSAFE) {
2321 cp->cache_flags |= KMF_DUMPDIVERT;
2322 ccp->cc_flags |= KMF_DUMPDIVERT;
2323 ccp->cc_dump_rounds = ccp->cc_rounds;
2324 ccp->cc_dump_prounds = ccp->cc_prounds;
2325 ccp->cc_rounds = ccp->cc_prounds = -1;
2326 } else {
2327 cp->cache_flags |= KMF_DUMPUNSAFE;
2328 ccp->cc_flags |= KMF_DUMPUNSAFE;
2335 * finished dump intercept
2336 * print any warnings on the console
2337 * return verbose information to dumpsys() in the given buffer
2339 size_t
2340 kmem_dump_finish(char *buf, size_t size)
2342 int kdi_idx;
2343 int kdi_end = kmem_dump_log_idx;
2344 int percent = 0;
2345 int header = 0;
2346 int warn = 0;
2347 size_t used;
2348 kmem_cache_t *cp;
2349 kmem_dump_log_t *kdl;
2350 char *e = buf + size;
2351 char *p = buf;
2353 if (kmem_dump_size == 0 || kmem_dump_verbose == 0)
2354 return (0);
2356 used = (char *)kmem_dump_curr - (char *)kmem_dump_start;
2357 percent = (used * 100) / kmem_dump_size;
2359 kmem_dumppr(&p, e, "%% heap used,%d\n", percent);
2360 kmem_dumppr(&p, e, "used bytes,%ld\n", used);
2361 kmem_dumppr(&p, e, "heap size,%ld\n", kmem_dump_size);
2362 kmem_dumppr(&p, e, "Oversize allocs,%d\n",
2363 kmem_dump_oversize_allocs);
2364 kmem_dumppr(&p, e, "Oversize max size,%ld\n",
2365 kmem_dump_oversize_max);
2367 for (kdi_idx = 0; kdi_idx < kdi_end; kdi_idx++) {
2368 kdl = &kmem_dump_log[kdi_idx];
2369 cp = kdl->kdl_cache;
2370 if (cp == NULL)
2371 break;
2372 if (kdl->kdl_alloc_fails)
2373 ++warn;
2374 if (header == 0) {
2375 kmem_dumppr(&p, e,
2376 "Cache Name,Allocs,Frees,Alloc Fails,"
2377 "Nondump Frees,Unsafe Allocs/Frees\n");
2378 header = 1;
2380 kmem_dumppr(&p, e, "%s,%d,%d,%d,%d,%d\n",
2381 cp->cache_name, kdl->kdl_allocs, kdl->kdl_frees,
2382 kdl->kdl_alloc_fails, kdl->kdl_free_nondump,
2383 kdl->kdl_unsafe);
2386 /* return buffer size used */
2387 if (p < e)
2388 bzero(p, e - p);
2389 return (p - buf);
2393 * Allocate a constructed object from alternate dump memory.
2395 void *
2396 kmem_cache_alloc_dump(kmem_cache_t *cp, int kmflag)
2398 void *buf;
2399 void *curr;
2400 char *bufend;
2402 /* return a constructed object */
2403 if ((buf = cp->cache_dumpfreelist) != NULL) {
2404 cp->cache_dumpfreelist = KMEM_DUMPCTL(cp, buf)->kdc_next;
2405 KDI_LOG(cp, kdl_allocs);
2406 return (buf);
2409 /* create a new constructed object */
2410 curr = kmem_dump_curr;
2411 buf = (void *)P2ROUNDUP((uintptr_t)curr, cp->cache_align);
2412 bufend = (char *)KMEM_DUMPCTL(cp, buf) + sizeof (kmem_dumpctl_t);
2414 /* hat layer objects cannot cross a page boundary */
2415 if (cp->cache_align < PAGESIZE) {
2416 char *page = (char *)P2ROUNDUP((uintptr_t)buf, PAGESIZE);
2417 if (bufend > page) {
2418 bufend += page - (char *)buf;
2419 buf = (void *)page;
2423 /* fall back to normal alloc if reserved area is used up */
2424 if (bufend > (char *)kmem_dump_end) {
2425 kmem_dump_curr = kmem_dump_end;
2426 KDI_LOG(cp, kdl_alloc_fails);
2427 return (NULL);
2431 * Must advance curr pointer before calling a constructor that
2432 * may also allocate memory.
2434 kmem_dump_curr = bufend;
2436 /* run constructor */
2437 if (cp->cache_constructor != NULL &&
2438 cp->cache_constructor(buf, cp->cache_private, kmflag)
2439 != 0) {
2440 #ifdef DEBUG
2441 printf("name='%s' cache=0x%p: kmem cache constructor failed\n",
2442 cp->cache_name, (void *)cp);
2443 #endif
2444 /* reset curr pointer iff no allocs were done */
2445 if (kmem_dump_curr == bufend)
2446 kmem_dump_curr = curr;
2448 /* fall back to normal alloc if the constructor fails */
2449 KDI_LOG(cp, kdl_alloc_fails);
2450 return (NULL);
2453 KDI_LOG(cp, kdl_allocs);
2454 return (buf);
2458 * Free a constructed object in alternate dump memory.
2460 int
2461 kmem_cache_free_dump(kmem_cache_t *cp, void *buf)
2463 /* save constructed buffers for next time */
2464 if ((char *)buf >= (char *)kmem_dump_start &&
2465 (char *)buf < (char *)kmem_dump_end) {
2466 KMEM_DUMPCTL(cp, buf)->kdc_next = cp->cache_dumpfreelist;
2467 cp->cache_dumpfreelist = buf;
2468 KDI_LOG(cp, kdl_frees);
2469 return (0);
2472 /* count all non-dump buf frees */
2473 KDI_LOG(cp, kdl_free_nondump);
2475 /* just drop buffers that were allocated before dump started */
2476 if (kmem_dump_curr < kmem_dump_end)
2477 return (0);
2479 /* fall back to normal free if reserved area is used up */
2480 return (1);
2484 * Allocate a constructed object from cache cp.
2486 void *
2487 kmem_cache_alloc(kmem_cache_t *cp, int kmflag)
2489 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2490 kmem_magazine_t *fmp;
2491 void *buf;
2493 mutex_enter(&ccp->cc_lock);
2494 for (;;) {
2496 * If there's an object available in the current CPU's
2497 * loaded magazine, just take it and return.
2499 if (ccp->cc_rounds > 0) {
2500 buf = ccp->cc_loaded->mag_round[--ccp->cc_rounds];
2501 ccp->cc_alloc++;
2502 mutex_exit(&ccp->cc_lock);
2503 if (ccp->cc_flags & (KMF_BUFTAG | KMF_DUMPUNSAFE)) {
2504 if (ccp->cc_flags & KMF_DUMPUNSAFE) {
2505 ASSERT(!(ccp->cc_flags &
2506 KMF_DUMPDIVERT));
2507 KDI_LOG(cp, kdl_unsafe);
2509 if ((ccp->cc_flags & KMF_BUFTAG) &&
2510 kmem_cache_alloc_debug(cp, buf, kmflag, 0,
2511 caller()) != 0) {
2512 if (kmflag & KM_NOSLEEP)
2513 return (NULL);
2514 mutex_enter(&ccp->cc_lock);
2515 continue;
2518 return (buf);
2522 * The loaded magazine is empty. If the previously loaded
2523 * magazine was full, exchange them and try again.
2525 if (ccp->cc_prounds > 0) {
2526 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds);
2527 continue;
2531 * Return an alternate buffer at dump time to preserve
2532 * the heap.
2534 if (ccp->cc_flags & (KMF_DUMPDIVERT | KMF_DUMPUNSAFE)) {
2535 if (ccp->cc_flags & KMF_DUMPUNSAFE) {
2536 ASSERT(!(ccp->cc_flags & KMF_DUMPDIVERT));
2537 /* log it so that we can warn about it */
2538 KDI_LOG(cp, kdl_unsafe);
2539 } else {
2540 if ((buf = kmem_cache_alloc_dump(cp, kmflag)) !=
2541 NULL) {
2542 mutex_exit(&ccp->cc_lock);
2543 return (buf);
2545 break; /* fall back to slab layer */
2550 * If the magazine layer is disabled, break out now.
2552 if (ccp->cc_magsize == 0)
2553 break;
2556 * Try to get a full magazine from the depot.
2558 fmp = kmem_depot_alloc(cp, &cp->cache_full);
2559 if (fmp != NULL) {
2560 if (ccp->cc_ploaded != NULL)
2561 kmem_depot_free(cp, &cp->cache_empty,
2562 ccp->cc_ploaded);
2563 kmem_cpu_reload(ccp, fmp, ccp->cc_magsize);
2564 continue;
2568 * There are no full magazines in the depot,
2569 * so fall through to the slab layer.
2571 break;
2573 mutex_exit(&ccp->cc_lock);
2576 * We couldn't allocate a constructed object from the magazine layer,
2577 * so get a raw buffer from the slab layer and apply its constructor.
2579 buf = kmem_slab_alloc(cp, kmflag);
2581 if (buf == NULL)
2582 return (NULL);
2584 if (cp->cache_flags & KMF_BUFTAG) {
2586 * Make kmem_cache_alloc_debug() apply the constructor for us.
2588 int rc = kmem_cache_alloc_debug(cp, buf, kmflag, 1, caller());
2589 if (rc != 0) {
2590 if (kmflag & KM_NOSLEEP)
2591 return (NULL);
2593 * kmem_cache_alloc_debug() detected corruption
2594 * but didn't panic (kmem_panic <= 0). We should not be
2595 * here because the constructor failed (indicated by a
2596 * return code of 1). Try again.
2598 ASSERT(rc == -1);
2599 return (kmem_cache_alloc(cp, kmflag));
2601 return (buf);
2604 if (cp->cache_constructor != NULL &&
2605 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0) {
2606 atomic_inc_64(&cp->cache_alloc_fail);
2607 kmem_slab_free(cp, buf);
2608 return (NULL);
2611 return (buf);
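/*
 * Editorial usage sketch (the cache, type, and flag choice are
 * hypothetical, not taken from this file): a typical client pairs
 * kmem_cache_alloc() with kmem_cache_free() on the same cache:
 *
 *	foo_t *fp = kmem_cache_alloc(foo_cache, KM_SLEEP);
 *	... use fp ...
 *	kmem_cache_free(foo_cache, fp);
 *
 * A KM_SLEEP allocation waits until it can succeed; KM_NOSLEEP callers
 * must be prepared for a NULL return.
 */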
2615 * The freed argument tells whether or not kmem_cache_free_debug() has already
2616 * been called so that we can avoid the duplicate free error. For example, a
2617 * buffer on a magazine has already been freed by the client but is still
2618 * constructed.
2620 static void
2621 kmem_slab_free_constructed(kmem_cache_t *cp, void *buf, boolean_t freed)
2623 if (!freed && (cp->cache_flags & KMF_BUFTAG))
2624 if (kmem_cache_free_debug(cp, buf, caller()) == -1)
2625 return;
2628 * Note that if KMF_DEADBEEF is in effect and KMF_LITE is not,
2629 * kmem_cache_free_debug() will have already applied the destructor.
2631 if ((cp->cache_flags & (KMF_DEADBEEF | KMF_LITE)) != KMF_DEADBEEF &&
2632 cp->cache_destructor != NULL) {
2633 if (cp->cache_flags & KMF_DEADBEEF) { /* KMF_LITE implied */
2634 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2635 *(uint64_t *)buf = btp->bt_redzone;
2636 cp->cache_destructor(buf, cp->cache_private);
2637 *(uint64_t *)buf = KMEM_FREE_PATTERN;
2638 } else {
2639 cp->cache_destructor(buf, cp->cache_private);
2643 kmem_slab_free(cp, buf);
2647 * Used when there's no room to free a buffer to the per-CPU cache.
2648 * Drops and re-acquires &ccp->cc_lock, and returns non-zero if the
2649 * caller should try freeing to the per-CPU cache again.
2650 * Note that we don't directly install the magazine in the cpu cache,
2651 * since its state may have changed wildly while the lock was dropped.
2653 static int
2654 kmem_cpucache_magazine_alloc(kmem_cpu_cache_t *ccp, kmem_cache_t *cp)
2656 kmem_magazine_t *emp;
2657 kmem_magtype_t *mtp;
2659 ASSERT(MUTEX_HELD(&ccp->cc_lock));
2660 ASSERT(((uint_t)ccp->cc_rounds == ccp->cc_magsize ||
2661 ((uint_t)ccp->cc_rounds == -1)) &&
2662 ((uint_t)ccp->cc_prounds == ccp->cc_magsize ||
2663 ((uint_t)ccp->cc_prounds == -1)));
2665 emp = kmem_depot_alloc(cp, &cp->cache_empty);
2666 if (emp != NULL) {
2667 if (ccp->cc_ploaded != NULL)
2668 kmem_depot_free(cp, &cp->cache_full,
2669 ccp->cc_ploaded);
2670 kmem_cpu_reload(ccp, emp, 0);
2671 return (1);
2674 * There are no empty magazines in the depot,
2675 * so try to allocate a new one. We must drop all locks
2676 * across kmem_cache_alloc() because lower layers may
2677 * attempt to allocate from this cache.
2679 mtp = cp->cache_magtype;
2680 mutex_exit(&ccp->cc_lock);
2681 emp = kmem_cache_alloc(mtp->mt_cache, KM_NOSLEEP);
2682 mutex_enter(&ccp->cc_lock);
2684 if (emp != NULL) {
2686 * We successfully allocated an empty magazine.
2687 * However, we had to drop ccp->cc_lock to do it,
2688 * so the cache's magazine size may have changed.
2689 * If so, free the magazine and try again.
2691 if (ccp->cc_magsize != mtp->mt_magsize) {
2692 mutex_exit(&ccp->cc_lock);
2693 kmem_cache_free(mtp->mt_cache, emp);
2694 mutex_enter(&ccp->cc_lock);
2695 return (1);
2699 * We got a magazine of the right size. Add it to
2700 * the depot and try the whole dance again.
2702 kmem_depot_free(cp, &cp->cache_empty, emp);
2703 return (1);
2707 * We couldn't allocate an empty magazine,
2708 * so fall through to the slab layer.
2710 return (0);
2714 * Free a constructed object to cache cp.
2716 void
2717 kmem_cache_free(kmem_cache_t *cp, void *buf)
2719 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2722 * The client must not free either of the buffers passed to the move
2723 * callback function.
2725 ASSERT(cp->cache_defrag == NULL ||
2726 cp->cache_defrag->kmd_thread != curthread ||
2727 (buf != cp->cache_defrag->kmd_from_buf &&
2728 buf != cp->cache_defrag->kmd_to_buf));
2730 if (ccp->cc_flags & (KMF_BUFTAG | KMF_DUMPDIVERT | KMF_DUMPUNSAFE)) {
2731 if (ccp->cc_flags & KMF_DUMPUNSAFE) {
2732 ASSERT(!(ccp->cc_flags & KMF_DUMPDIVERT));
2733 /* log it so that we can warn about it */
2734 KDI_LOG(cp, kdl_unsafe);
2735 } else if (KMEM_DUMPCC(ccp) && !kmem_cache_free_dump(cp, buf)) {
2736 return;
2738 if (ccp->cc_flags & KMF_BUFTAG) {
2739 if (kmem_cache_free_debug(cp, buf, caller()) == -1)
2740 return;
2744 mutex_enter(&ccp->cc_lock);
2746 * Any changes to this logic should be reflected in kmem_slab_prefill()
2748 for (;;) {
2750 * If there's a slot available in the current CPU's
2751 * loaded magazine, just put the object there and return.
2753 if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) {
2754 ccp->cc_loaded->mag_round[ccp->cc_rounds++] = buf;
2755 ccp->cc_free++;
2756 mutex_exit(&ccp->cc_lock);
2757 return;
2761 * The loaded magazine is full. If the previously loaded
2762 * magazine was empty, exchange them and try again.
2764 if (ccp->cc_prounds == 0) {
2765 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds);
2766 continue;
2770 * If the magazine layer is disabled, break out now.
2772 if (ccp->cc_magsize == 0)
2773 break;
2775 if (!kmem_cpucache_magazine_alloc(ccp, cp)) {
2777 * We couldn't free our constructed object to the
2778 * magazine layer, so apply its destructor and free it
2779 * to the slab layer.
2781 break;
2784 mutex_exit(&ccp->cc_lock);
2785 kmem_slab_free_constructed(cp, buf, B_TRUE);
2788 static void
2789 kmem_slab_prefill(kmem_cache_t *cp, kmem_slab_t *sp)
2791 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2792 int cache_flags = cp->cache_flags;
2794 kmem_bufctl_t *next, *head;
2795 size_t nbufs;
2798 * Completely allocate the newly created slab and put the pre-allocated
2799 * buffers in magazines. Any of the buffers that cannot be put in
2800 * magazines must be returned to the slab.
2802 ASSERT(MUTEX_HELD(&cp->cache_lock));
2803 ASSERT((cache_flags & (KMF_PREFILL|KMF_BUFTAG)) == KMF_PREFILL);
2804 ASSERT(cp->cache_constructor == NULL);
2805 ASSERT(sp->slab_cache == cp);
2806 ASSERT(sp->slab_refcnt == 1);
2807 ASSERT(sp->slab_head != NULL && sp->slab_chunks > sp->slab_refcnt);
2808 ASSERT(avl_find(&cp->cache_partial_slabs, sp, NULL) == NULL);
2810 head = sp->slab_head;
2811 nbufs = (sp->slab_chunks - sp->slab_refcnt);
2812 sp->slab_head = NULL;
2813 sp->slab_refcnt += nbufs;
2814 cp->cache_bufslab -= nbufs;
2815 cp->cache_slab_alloc += nbufs;
2816 list_insert_head(&cp->cache_complete_slabs, sp);
2817 cp->cache_complete_slab_count++;
2818 mutex_exit(&cp->cache_lock);
2819 mutex_enter(&ccp->cc_lock);
2821 while (head != NULL) {
2822 void *buf = KMEM_BUF(cp, head);
2824 * If there's a slot available in the current CPU's
2825 * loaded magazine, just put the object there and
2826 * continue.
2828 if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) {
2829 ccp->cc_loaded->mag_round[ccp->cc_rounds++] =
2830 buf;
2831 ccp->cc_free++;
2832 nbufs--;
2833 head = head->bc_next;
2834 continue;
2838 * The loaded magazine is full. If the previously
2839 * loaded magazine was empty, exchange them and try
2840 * again.
2842 if (ccp->cc_prounds == 0) {
2843 kmem_cpu_reload(ccp, ccp->cc_ploaded,
2844 ccp->cc_prounds);
2845 continue;
2849 * If the magazine layer is disabled, break out now.
2852 if (ccp->cc_magsize == 0) {
2853 break;
2856 if (!kmem_cpucache_magazine_alloc(ccp, cp))
2857 break;
2859 mutex_exit(&ccp->cc_lock);
2860 if (nbufs != 0) {
2861 ASSERT(head != NULL);
2864 * If there was a failure, return remaining objects to
2865 * the slab
2867 while (head != NULL) {
2868 ASSERT(nbufs != 0);
2869 next = head->bc_next;
2870 head->bc_next = NULL;
2871 kmem_slab_free(cp, KMEM_BUF(cp, head));
2872 head = next;
2873 nbufs--;
2876 ASSERT(head == NULL);
2877 ASSERT(nbufs == 0);
2878 mutex_enter(&cp->cache_lock);
2881 void *
2882 kmem_zalloc(size_t size, int kmflag)
2884 size_t index;
2885 void *buf;
2887 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) {
2888 kmem_cache_t *cp = kmem_alloc_table[index];
2889 buf = kmem_cache_alloc(cp, kmflag);
2890 if (buf != NULL) {
2891 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp)) {
2892 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2893 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE;
2894 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size);
2896 if (cp->cache_flags & KMF_LITE) {
2897 KMEM_BUFTAG_LITE_ENTER(btp,
2898 kmem_lite_count, caller());
2901 bzero(buf, size);
2903 } else {
2904 buf = kmem_alloc(size, kmflag);
2905 if (buf != NULL)
2906 bzero(buf, size);
2908 return (buf);
2911 void *
2912 kmem_alloc(size_t size, int kmflag)
2914 size_t index;
2915 kmem_cache_t *cp;
2916 void *buf;
2918 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) {
2919 cp = kmem_alloc_table[index];
2920 /* fall through to kmem_cache_alloc() */
2922 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) <
2923 kmem_big_alloc_table_max) {
2924 cp = kmem_big_alloc_table[index];
2925 /* fall through to kmem_cache_alloc() */
2927 } else {
2928 if (size == 0)
2929 return (NULL);
2931 buf = vmem_alloc(kmem_oversize_arena, size,
2932 kmflag & KM_VMFLAGS);
2933 if (buf == NULL)
2934 kmem_log_event(kmem_failure_log, NULL, NULL,
2935 (void *)size);
2936 else if (KMEM_DUMP(kmem_slab_cache)) {
2937 /* stats for dump intercept */
2938 kmem_dump_oversize_allocs++;
2939 if (size > kmem_dump_oversize_max)
2940 kmem_dump_oversize_max = size;
2942 return (buf);
2945 buf = kmem_cache_alloc(cp, kmflag);
2946 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp) && buf != NULL) {
2947 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2948 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE;
2949 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size);
2951 if (cp->cache_flags & KMF_LITE) {
2952 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller());
2955 return (buf);
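/*
 * Worked example of the size-to-cache mapping above (constants assumed
 * for illustration: KMEM_ALIGN == 8, hence KMEM_ALIGN_SHIFT == 3): a
 * 40-byte request computes index (40 - 1) >> 3 == 4, and
 * kmem_alloc_table[4] serves every size from 33 to 40 bytes.  Requests
 * too large for that table are retried against kmem_big_alloc_table with
 * the coarser KMEM_BIG_SHIFT, and anything larger still falls through to
 * the kmem_oversize_arena.
 */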
2958 void
2959 kmem_free(void *buf, size_t size)
2961 size_t index;
2962 kmem_cache_t *cp;
2964 if ((index = (size - 1) >> KMEM_ALIGN_SHIFT) < KMEM_ALLOC_TABLE_MAX) {
2965 cp = kmem_alloc_table[index];
2966 /* fall through to kmem_cache_free() */
2968 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) <
2969 kmem_big_alloc_table_max) {
2970 cp = kmem_big_alloc_table[index];
2971 /* fall through to kmem_cache_free() */
2973 } else {
2974 EQUIV(buf == NULL, size == 0);
2975 if (buf == NULL && size == 0)
2976 return;
2977 vmem_free(kmem_oversize_arena, buf, size);
2978 return;
2981 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp)) {
2982 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2983 uint32_t *ip = (uint32_t *)btp;
2984 if (ip[1] != KMEM_SIZE_ENCODE(size)) {
2985 if (*(uint64_t *)buf == KMEM_FREE_PATTERN) {
2986 kmem_error(KMERR_DUPFREE, cp, buf);
2987 return;
2989 if (KMEM_SIZE_VALID(ip[1])) {
2990 ip[0] = KMEM_SIZE_ENCODE(size);
2991 kmem_error(KMERR_BADSIZE, cp, buf);
2992 } else {
2993 kmem_error(KMERR_REDZONE, cp, buf);
2995 return;
2997 if (((uint8_t *)buf)[size] != KMEM_REDZONE_BYTE) {
2998 kmem_error(KMERR_REDZONE, cp, buf);
2999 return;
3001 btp->bt_redzone = KMEM_REDZONE_PATTERN;
3002 if (cp->cache_flags & KMF_LITE) {
3003 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count,
3004 caller());
3007 kmem_cache_free(cp, buf);
3010 void *
3011 kmem_firewall_va_alloc(vmem_t *vmp, size_t size, int vmflag)
3013 size_t realsize = size + vmp->vm_quantum;
3014 void *addr;
3017 * Annoying edge case: if 'size' is just shy of ULONG_MAX, adding
3018 * vm_quantum will cause integer wraparound. Check for this, and
3019 * blow off the firewall page in this case. Note that such a
3020 * giant allocation (the entire kernel address space) can never
3021 * be satisfied, so it will either fail immediately (VM_NOSLEEP)
3022 * or sleep forever (VM_SLEEP). Thus, there is no need for a
3023 * corresponding check in kmem_firewall_va_free().
3025 if (realsize < size)
3026 realsize = size;
3029 * While boot still owns resource management, make sure that this
3030 * redzone virtual address allocation is properly accounted for in
3031 * OBPs "virtual-memory" "available" lists because we're
3032 * effectively claiming them for a red zone. If we don't do this,
3033 * the available lists become too fragmented and too large for the
3034 * current boot/kernel memory list interface.
3036 addr = vmem_alloc(vmp, realsize, vmflag | VM_NEXTFIT);
3038 if (addr != NULL && kvseg.s_base == NULL && realsize != size)
3039 (void) boot_virt_alloc((char *)addr + size, vmp->vm_quantum);
3041 return (addr);
3044 void
3045 kmem_firewall_va_free(vmem_t *vmp, void *addr, size_t size)
3047 ASSERT((kvseg.s_base == NULL ?
3048 va_to_pfn((char *)addr + size) :
3049 hat_getpfnum(kas.a_hat, (caddr_t)addr + size)) == PFN_INVALID);
3051 vmem_free(vmp, addr, size + vmp->vm_quantum);
3055 * Try to allocate at least `size' bytes of memory without sleeping or
3056 * panicking. Return actual allocated size in `asize'. If allocation failed,
3057 * try final allocation with sleep or panic allowed.
3059 void *
3060 kmem_alloc_tryhard(size_t size, size_t *asize, int kmflag)
3062 void *p;
3064 *asize = P2ROUNDUP(size, KMEM_ALIGN);
3065 do {
3066 p = kmem_alloc(*asize, (kmflag | KM_NOSLEEP) & ~KM_PANIC);
3067 if (p != NULL)
3068 return (p);
3069 *asize += KMEM_ALIGN;
3070 } while (*asize <= PAGESIZE);
3072 *asize = P2ROUNDUP(size, KMEM_ALIGN);
3073 return (kmem_alloc(*asize, kmflag));
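/*
 * Editorial usage sketch (the variable names are hypothetical): a caller
 * that can use more memory than it asked for might call
 * kmem_alloc_tryhard() like this:
 *
 *	size_t asize;
 *	void *p = kmem_alloc_tryhard(len, &asize, KM_SLEEP);
 *	... use up to asize bytes of p ...
 *	kmem_free(p, asize);
 *
 * The buffer must be freed with the actual size returned in asize, not
 * with the originally requested len.
 */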
3077 * Reclaim all unused memory from a cache.
3079 static void
3080 kmem_cache_reap(kmem_cache_t *cp)
3082 ASSERT(taskq_member(kmem_taskq, curthread));
3083 cp->cache_reap++;
3086 * Ask the cache's owner to free some memory if possible.
3087 * The idea is to handle things like the inode cache, which
3088 * typically sits on a bunch of memory that it doesn't truly
3089 * *need*. Reclaim policy is entirely up to the owner; this
3090 * callback is just an advisory plea for help.
3092 if (cp->cache_reclaim != NULL) {
3093 long delta;
3096 * Reclaimed memory should be reapable (not included in the
3097 * depot's working set).
3099 delta = cp->cache_full.ml_total;
3100 cp->cache_reclaim(cp->cache_private);
3101 delta = cp->cache_full.ml_total - delta;
3102 if (delta > 0) {
3103 mutex_enter(&cp->cache_depot_lock);
3104 cp->cache_full.ml_reaplimit += delta;
3105 cp->cache_full.ml_min += delta;
3106 mutex_exit(&cp->cache_depot_lock);
3110 kmem_depot_ws_reap(cp);
3112 if (cp->cache_defrag != NULL && !kmem_move_noreap) {
3113 kmem_cache_defrag(cp);
3117 static void
3118 kmem_reap_timeout(void *flag_arg)
3120 uint32_t *flag = (uint32_t *)flag_arg;
3122 ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
3123 *flag = 0;
3126 static void
3127 kmem_reap_done(void *flag)
3129 if (!callout_init_done) {
3130 /* can't schedule a timeout at this point */
3131 kmem_reap_timeout(flag);
3132 } else {
3133 (void) timeout(kmem_reap_timeout, flag, kmem_reap_interval);
3137 static void
3138 kmem_reap_start(void *flag)
3140 ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
3142 if (flag == &kmem_reaping) {
3143 kmem_cache_applyall(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
3145 * if we have segkp under heap, reap segkp cache.
3147 if (segkp_fromheap)
3148 segkp_cache_free();
3150 else
3151 kmem_cache_applyall_id(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
3154 * We use taskq_dispatch() to schedule a timeout to clear
3155 * the flag so that kmem_reap() becomes self-throttling:
3156 * we won't reap again until the current reap completes *and*
3157 * at least kmem_reap_interval ticks have elapsed.
3159 if (!taskq_dispatch(kmem_taskq, kmem_reap_done, flag, TQ_NOSLEEP))
3160 kmem_reap_done(flag);
3163 static void
3164 kmem_reap_common(void *flag_arg)
3166 uint32_t *flag = (uint32_t *)flag_arg;
3168 if (MUTEX_HELD(&kmem_cache_lock) || kmem_taskq == NULL ||
3169 atomic_cas_32(flag, 0, 1) != 0)
3170 return;
3173 * It may not be kosher to do memory allocation when a reap is called
3174 * (for example, if vmem_populate() is in the call chain). So we
3175 * start the reap going with a TQ_NOALLOC dispatch. If the dispatch
3176 * fails, we reset the flag, and the next reap will try again.
3178 if (!taskq_dispatch(kmem_taskq, kmem_reap_start, flag, TQ_NOALLOC))
3179 *flag = 0;
3183 * Reclaim all unused memory from all caches. Called from the VM system
3184 * when memory gets tight.
3186 void
3187 kmem_reap(void)
3189 kmem_reap_common(&kmem_reaping);
3193 * Reclaim all unused memory from identifier arenas, called when a vmem
3194 * arena not backed by memory is exhausted. Since reaping memory-backed caches
3195 * cannot help with identifier exhaustion, we avoid both a large amount of
3196 * work and unwanted side-effects from reclaim callbacks.
3198 void
3199 kmem_reap_idspace(void)
3201 kmem_reap_common(&kmem_reaping_idspace);
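/*
 * A minimal sketch of the kind of arena this path serves (hypothetical
 * names): an arena created with VMC_IDENTIFIER hands out abstract IDs
 * rather than memory, and kmem_cache_create() below automatically marks
 * caches whose vmem source carries VMC_IDENTIFIER as KMC_IDENTIFIER, so
 * only those caches are reaped here.
 *
 *	foo_minor_arena = vmem_create("foo_minor", (void *)1, 1 << 20, 1,
 *	    NULL, NULL, NULL, 0, VM_SLEEP | VMC_IDENTIFIER);
 */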
3205 * Purge all magazines from a cache and set its magazine limit to zero.
3206 * All calls are serialized by the kmem_taskq lock, except for the final
3207 * call from kmem_cache_destroy().
3209 static void
3210 kmem_cache_magazine_purge(kmem_cache_t *cp)
3212 kmem_cpu_cache_t *ccp;
3213 kmem_magazine_t *mp, *pmp;
3214 int rounds, prounds, cpu_seqid;
3216 ASSERT(!list_link_active(&cp->cache_link) ||
3217 taskq_member(kmem_taskq, curthread));
3218 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
3220 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3221 ccp = &cp->cache_cpu[cpu_seqid];
3223 mutex_enter(&ccp->cc_lock);
3224 mp = ccp->cc_loaded;
3225 pmp = ccp->cc_ploaded;
3226 rounds = ccp->cc_rounds;
3227 prounds = ccp->cc_prounds;
3228 ccp->cc_loaded = NULL;
3229 ccp->cc_ploaded = NULL;
3230 ccp->cc_rounds = -1;
3231 ccp->cc_prounds = -1;
3232 ccp->cc_magsize = 0;
3233 mutex_exit(&ccp->cc_lock);
3235 if (mp)
3236 kmem_magazine_destroy(cp, mp, rounds);
3237 if (pmp)
3238 kmem_magazine_destroy(cp, pmp, prounds);
3242 * Updating the working set statistics twice in a row has the
3243 * effect of setting the working set size to zero, so everything
3244 * is eligible for reaping.
3246 kmem_depot_ws_update(cp);
3247 kmem_depot_ws_update(cp);
3249 kmem_depot_ws_reap(cp);
3253 * Enable per-cpu magazines on a cache.
3255 static void
3256 kmem_cache_magazine_enable(kmem_cache_t *cp)
3258 int cpu_seqid;
3260 if (cp->cache_flags & KMF_NOMAGAZINE)
3261 return;
3263 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3264 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];
3265 mutex_enter(&ccp->cc_lock);
3266 ccp->cc_magsize = cp->cache_magtype->mt_magsize;
3267 mutex_exit(&ccp->cc_lock);
3273 * Reap (almost) everything right now. See kmem_cache_magazine_purge()
3274 * for explanation of the back-to-back kmem_depot_ws_update() calls.
3276 void
3277 kmem_cache_reap_now(kmem_cache_t *cp)
3279 ASSERT(list_link_active(&cp->cache_link));
3281 kmem_depot_ws_update(cp);
3282 kmem_depot_ws_update(cp);
3284 (void) taskq_dispatch(kmem_taskq,
3285 (task_func_t *)kmem_depot_ws_reap, cp, TQ_SLEEP);
3286 taskq_wait(kmem_taskq);
3290 * Recompute a cache's magazine size. The trade-off is that larger magazines
3291 * provide a higher transfer rate with the depot, while smaller magazines
3292 * reduce memory consumption. Magazine resizing is an expensive operation;
3293 * it should not be done frequently.
3295 * Changes to the magazine size are serialized by the kmem_taskq lock.
3297 * Note: at present this only grows the magazine size. It might be useful
3298 * to allow shrinkage too.
3300 static void
3301 kmem_cache_magazine_resize(kmem_cache_t *cp)
3303 kmem_magtype_t *mtp = cp->cache_magtype;
3305 ASSERT(taskq_member(kmem_taskq, curthread));
3307 if (cp->cache_chunksize < mtp->mt_maxbuf) {
3308 kmem_cache_magazine_purge(cp);
3309 mutex_enter(&cp->cache_depot_lock);
3310 cp->cache_magtype = ++mtp;
3311 cp->cache_depot_contention_prev =
3312 cp->cache_depot_contention + INT_MAX;
3313 mutex_exit(&cp->cache_depot_lock);
3314 kmem_cache_magazine_enable(cp);
3319 * Rescale a cache's hash table, so that the table size is roughly the
3320 * cache size. We want the average lookup time to be extremely small.
3322 static void
3323 kmem_hash_rescale(kmem_cache_t *cp)
3325 kmem_bufctl_t **old_table, **new_table, *bcp;
3326 size_t old_size, new_size, h;
3328 ASSERT(taskq_member(kmem_taskq, curthread));
3330 new_size = MAX(KMEM_HASH_INITIAL,
3331 1 << (highbit(3 * cp->cache_buftotal + 4) - 2));
3332 old_size = cp->cache_hash_mask + 1;
3334 if ((old_size >> 1) <= new_size && new_size <= (old_size << 1))
3335 return;
3337 new_table = vmem_alloc(kmem_hash_arena, new_size * sizeof (void *),
3338 VM_NOSLEEP);
3339 if (new_table == NULL)
3340 return;
3341 bzero(new_table, new_size * sizeof (void *));
3343 mutex_enter(&cp->cache_lock);
3345 old_size = cp->cache_hash_mask + 1;
3346 old_table = cp->cache_hash_table;
3348 cp->cache_hash_mask = new_size - 1;
3349 cp->cache_hash_table = new_table;
3350 cp->cache_rescale++;
3352 for (h = 0; h < old_size; h++) {
3353 bcp = old_table[h];
3354 while (bcp != NULL) {
3355 void *addr = bcp->bc_addr;
3356 kmem_bufctl_t *next_bcp = bcp->bc_next;
3357 kmem_bufctl_t **hash_bucket = KMEM_HASH(cp, addr);
3358 bcp->bc_next = *hash_bucket;
3359 *hash_bucket = bcp;
3360 bcp = next_bcp;
3364 mutex_exit(&cp->cache_lock);
3366 vmem_free(kmem_hash_arena, old_table, old_size * sizeof (void *));
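/*
 * A worked example of the sizing rule above (illustrative numbers): with
 * cache_buftotal == 10000, highbit(3 * 10000 + 4) is 15, so new_size is
 * 1 << 13 == 8192 buckets, i.e. roughly one bucket per buffer. The early
 * return skips the rebuild while the target stays within a factor of two
 * of the current table size, so the expensive operation happens only after
 * the cache has grown or shrunk substantially.
 */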
3370 * Perform periodic maintenance on a cache: hash rescaling, depot working-set
3371 * update, magazine resizing, and slab consolidation.
3373 static void
3374 kmem_cache_update(kmem_cache_t *cp)
3376 int need_hash_rescale = 0;
3377 int need_magazine_resize = 0;
3379 ASSERT(MUTEX_HELD(&kmem_cache_lock));
3382 * If the cache has become much larger or smaller than its hash table,
3383 * fire off a request to rescale the hash table.
3385 mutex_enter(&cp->cache_lock);
3387 if ((cp->cache_flags & KMF_HASH) &&
3388 (cp->cache_buftotal > (cp->cache_hash_mask << 1) ||
3389 (cp->cache_buftotal < (cp->cache_hash_mask >> 1) &&
3390 cp->cache_hash_mask > KMEM_HASH_INITIAL)))
3391 need_hash_rescale = 1;
3393 mutex_exit(&cp->cache_lock);
3396 * Update the depot working set statistics.
3398 kmem_depot_ws_update(cp);
3401 * If there's a lot of contention in the depot,
3402 * increase the magazine size.
3404 mutex_enter(&cp->cache_depot_lock);
3406 if (cp->cache_chunksize < cp->cache_magtype->mt_maxbuf &&
3407 (int)(cp->cache_depot_contention -
3408 cp->cache_depot_contention_prev) > kmem_depot_contention)
3409 need_magazine_resize = 1;
3411 cp->cache_depot_contention_prev = cp->cache_depot_contention;
3413 mutex_exit(&cp->cache_depot_lock);
3415 if (need_hash_rescale)
3416 (void) taskq_dispatch(kmem_taskq,
3417 (task_func_t *)kmem_hash_rescale, cp, TQ_NOSLEEP);
3419 if (need_magazine_resize)
3420 (void) taskq_dispatch(kmem_taskq,
3421 (task_func_t *)kmem_cache_magazine_resize, cp, TQ_NOSLEEP);
3423 if (cp->cache_defrag != NULL)
3424 (void) taskq_dispatch(kmem_taskq,
3425 (task_func_t *)kmem_cache_scan, cp, TQ_NOSLEEP);
3428 static void kmem_update(void *);
3430 static void
3431 kmem_update_timeout(void *dummy)
3433 (void) timeout(kmem_update, dummy, kmem_reap_interval);
3436 static void
3437 kmem_update(void *dummy)
3439 kmem_cache_applyall(kmem_cache_update, NULL, TQ_NOSLEEP);
3442 * We use taskq_dispatch() to reschedule the timeout so that
3443 * kmem_update() becomes self-throttling: it won't schedule
3444 * new tasks until all previous tasks have completed.
3446 if (!taskq_dispatch(kmem_taskq, kmem_update_timeout, dummy, TQ_NOSLEEP))
3447 kmem_update_timeout(NULL);
3450 static int
3451 kmem_cache_kstat_update(kstat_t *ksp, int rw)
3453 struct kmem_cache_kstat *kmcp = &kmem_cache_kstat;
3454 kmem_cache_t *cp = ksp->ks_private;
3455 uint64_t cpu_buf_avail;
3456 uint64_t buf_avail = 0;
3457 int cpu_seqid;
3458 long reap;
3460 ASSERT(MUTEX_HELD(&kmem_cache_kstat_lock));
3462 if (rw == KSTAT_WRITE)
3463 return (EACCES);
3465 mutex_enter(&cp->cache_lock);
3467 kmcp->kmc_alloc_fail.value.ui64 = cp->cache_alloc_fail;
3468 kmcp->kmc_alloc.value.ui64 = cp->cache_slab_alloc;
3469 kmcp->kmc_free.value.ui64 = cp->cache_slab_free;
3470 kmcp->kmc_slab_alloc.value.ui64 = cp->cache_slab_alloc;
3471 kmcp->kmc_slab_free.value.ui64 = cp->cache_slab_free;
3473 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3474 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];
3476 mutex_enter(&ccp->cc_lock);
3478 cpu_buf_avail = 0;
3479 if (ccp->cc_rounds > 0)
3480 cpu_buf_avail += ccp->cc_rounds;
3481 if (ccp->cc_prounds > 0)
3482 cpu_buf_avail += ccp->cc_prounds;
3484 kmcp->kmc_alloc.value.ui64 += ccp->cc_alloc;
3485 kmcp->kmc_free.value.ui64 += ccp->cc_free;
3486 buf_avail += cpu_buf_avail;
3488 mutex_exit(&ccp->cc_lock);
3491 mutex_enter(&cp->cache_depot_lock);
3493 kmcp->kmc_depot_alloc.value.ui64 = cp->cache_full.ml_alloc;
3494 kmcp->kmc_depot_free.value.ui64 = cp->cache_empty.ml_alloc;
3495 kmcp->kmc_depot_contention.value.ui64 = cp->cache_depot_contention;
3496 kmcp->kmc_full_magazines.value.ui64 = cp->cache_full.ml_total;
3497 kmcp->kmc_empty_magazines.value.ui64 = cp->cache_empty.ml_total;
3498 kmcp->kmc_magazine_size.value.ui64 =
3499 (cp->cache_flags & KMF_NOMAGAZINE) ?
3500 0 : cp->cache_magtype->mt_magsize;
3502 kmcp->kmc_alloc.value.ui64 += cp->cache_full.ml_alloc;
3503 kmcp->kmc_free.value.ui64 += cp->cache_empty.ml_alloc;
3504 buf_avail += cp->cache_full.ml_total * cp->cache_magtype->mt_magsize;
3506 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
3507 reap = MIN(reap, cp->cache_full.ml_total);
3509 mutex_exit(&cp->cache_depot_lock);
3511 kmcp->kmc_buf_size.value.ui64 = cp->cache_bufsize;
3512 kmcp->kmc_align.value.ui64 = cp->cache_align;
3513 kmcp->kmc_chunk_size.value.ui64 = cp->cache_chunksize;
3514 kmcp->kmc_slab_size.value.ui64 = cp->cache_slabsize;
3515 kmcp->kmc_buf_constructed.value.ui64 = buf_avail;
3516 buf_avail += cp->cache_bufslab;
3517 kmcp->kmc_buf_avail.value.ui64 = buf_avail;
3518 kmcp->kmc_buf_inuse.value.ui64 = cp->cache_buftotal - buf_avail;
3519 kmcp->kmc_buf_total.value.ui64 = cp->cache_buftotal;
3520 kmcp->kmc_buf_max.value.ui64 = cp->cache_bufmax;
3521 kmcp->kmc_slab_create.value.ui64 = cp->cache_slab_create;
3522 kmcp->kmc_slab_destroy.value.ui64 = cp->cache_slab_destroy;
3523 kmcp->kmc_hash_size.value.ui64 = (cp->cache_flags & KMF_HASH) ?
3524 cp->cache_hash_mask + 1 : 0;
3525 kmcp->kmc_hash_lookup_depth.value.ui64 = cp->cache_lookup_depth;
3526 kmcp->kmc_hash_rescale.value.ui64 = cp->cache_rescale;
3527 kmcp->kmc_vmem_source.value.ui64 = cp->cache_arena->vm_id;
3528 kmcp->kmc_reap.value.ui64 = cp->cache_reap;
3530 if (cp->cache_defrag == NULL) {
3531 kmcp->kmc_move_callbacks.value.ui64 = 0;
3532 kmcp->kmc_move_yes.value.ui64 = 0;
3533 kmcp->kmc_move_no.value.ui64 = 0;
3534 kmcp->kmc_move_later.value.ui64 = 0;
3535 kmcp->kmc_move_dont_need.value.ui64 = 0;
3536 kmcp->kmc_move_dont_know.value.ui64 = 0;
3537 kmcp->kmc_move_hunt_found.value.ui64 = 0;
3538 kmcp->kmc_move_slabs_freed.value.ui64 = 0;
3539 kmcp->kmc_defrag.value.ui64 = 0;
3540 kmcp->kmc_scan.value.ui64 = 0;
3541 kmcp->kmc_move_reclaimable.value.ui64 = 0;
3542 } else {
3543 int64_t reclaimable;
3545 kmem_defrag_t *kd = cp->cache_defrag;
3546 kmcp->kmc_move_callbacks.value.ui64 = kd->kmd_callbacks;
3547 kmcp->kmc_move_yes.value.ui64 = kd->kmd_yes;
3548 kmcp->kmc_move_no.value.ui64 = kd->kmd_no;
3549 kmcp->kmc_move_later.value.ui64 = kd->kmd_later;
3550 kmcp->kmc_move_dont_need.value.ui64 = kd->kmd_dont_need;
3551 kmcp->kmc_move_dont_know.value.ui64 = kd->kmd_dont_know;
3552 kmcp->kmc_move_hunt_found.value.ui64 = kd->kmd_hunt_found;
3553 kmcp->kmc_move_slabs_freed.value.ui64 = kd->kmd_slabs_freed;
3554 kmcp->kmc_defrag.value.ui64 = kd->kmd_defrags;
3555 kmcp->kmc_scan.value.ui64 = kd->kmd_scans;
3557 reclaimable = cp->cache_bufslab - (cp->cache_maxchunks - 1);
3558 reclaimable = MAX(reclaimable, 0);
3559 reclaimable += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
3560 kmcp->kmc_move_reclaimable.value.ui64 = reclaimable;
3563 mutex_exit(&cp->cache_lock);
3564 return (0);
3568 * Return a named statistic about a particular cache.
3569 * This shouldn't be called very often, so it's currently designed for
3570 * simplicity (leverages existing kstat support) rather than efficiency.
3572 uint64_t
3573 kmem_cache_stat(kmem_cache_t *cp, char *name)
3575 int i;
3576 kstat_t *ksp = cp->cache_kstat;
3577 kstat_named_t *knp = (kstat_named_t *)&kmem_cache_kstat;
3578 uint64_t value = 0;
3580 if (ksp != NULL) {
3581 mutex_enter(&kmem_cache_kstat_lock);
3582 (void) kmem_cache_kstat_update(ksp, KSTAT_READ);
3583 for (i = 0; i < ksp->ks_ndata; i++) {
3584 if (strcmp(knp[i].name, name) == 0) {
3585 value = knp[i].value.ui64;
3586 break;
3589 mutex_exit(&kmem_cache_kstat_lock);
3591 return (value);
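/*
 * A minimal usage sketch (hypothetical caller; the statistic name is
 * assumed to match the kmem_cache kstat template defined earlier in this
 * file):
 *
 *	uint64_t inuse = kmem_cache_stat(foo_cache, "buf_inuse");
 *
 * Because each call performs a full kstat update under the kstat lock,
 * this is a diagnostic convenience, not a hot-path interface.
 */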
3595 * Return an estimate of currently available kernel heap memory.
3596 * On 32-bit systems, physical memory may exceed virtual memory,
3597 * so we just truncate the result at 1GB.
3599 size_t
3600 kmem_avail(void)
3602 spgcnt_t rmem = availrmem - tune.t_minarmem;
3603 spgcnt_t fmem = freemem - minfree;
3605 return ((size_t)ptob(MIN(MAX(MIN(rmem, fmem), 0),
3606 1 << (30 - PAGESHIFT))));
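/*
 * A worked example of the clamp above (illustrative numbers): with 4K
 * pages (PAGESHIFT == 12) the cap is 1 << 18 == 262144 pages, i.e. 1GB.
 * If MIN(rmem, fmem) were 500000 pages (roughly 1.9GB), the estimate is
 * still reported as 1GB; if either term went negative, the MAX() clamps
 * the result to zero rather than returning a huge unsigned value.
 */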
3610 * Return the maximum amount of memory that is (in theory) allocatable
3611 * from the heap. This may be used as an estimate only since there
3612 is no guarantee this space will still be available when an allocation
3613 request is made, nor that the space can be allocated in one big request
3614 * due to kernel heap fragmentation.
3616 size_t
3617 kmem_maxavail(void)
3619 spgcnt_t pmem = availrmem - tune.t_minarmem;
3620 spgcnt_t vmem = btop(vmem_size(heap_arena, VMEM_FREE));
3622 return ((size_t)ptob(MAX(MIN(pmem, vmem), 0)));
3626 * Indicate whether memory-intensive kmem debugging is enabled.
3629 kmem_debugging(void)
3631 return (kmem_flags & (KMF_AUDIT | KMF_REDZONE));
3634 /* binning function, sorts finely at the two extremes */
3635 #define KMEM_PARTIAL_SLAB_WEIGHT(sp, binshift) \
3636 ((((sp)->slab_refcnt <= (binshift)) || \
3637 (((sp)->slab_chunks - (sp)->slab_refcnt) <= (binshift))) \
3638 ? -(sp)->slab_refcnt \
3639 : -((binshift) + ((sp)->slab_refcnt >> (binshift))))
3642 * Minimizing the number of partial slabs on the freelist minimizes
3643 * fragmentation (the ratio of unused buffers held by the slab layer). There are
3644 * two ways to get a slab off of the freelist: 1) free all the buffers on the
3645 * slab, and 2) allocate all the buffers on the slab. It follows that we want
3646 * the most-used slabs at the front of the list where they have the best chance
3647 * of being completely allocated, and the least-used slabs at a safe distance
3648 * from the front to improve the odds that the few remaining buffers will all be
3649 * freed before another allocation can tie up the slab. For that reason a slab
3650 * with a higher slab_refcnt sorts less than a slab with a lower
3651 * slab_refcnt.
3653 * However, if a slab has at least one buffer that is deemed unfreeable, we
3654 * would rather have that slab at the front of the list regardless of
3655 * slab_refcnt, since even one unfreeable buffer makes the entire slab
3656 * unfreeable. If the client returns KMEM_CBRC_NO in response to a cache_move()
3657 * callback, the slab is marked unfreeable for as long as it remains on the
3658 * freelist.
3660 static int
3661 kmem_partial_slab_cmp(const void *p0, const void *p1)
3663 const kmem_cache_t *cp;
3664 const kmem_slab_t *s0 = p0;
3665 const kmem_slab_t *s1 = p1;
3666 int w0, w1;
3667 size_t binshift;
3669 ASSERT(KMEM_SLAB_IS_PARTIAL(s0));
3670 ASSERT(KMEM_SLAB_IS_PARTIAL(s1));
3671 ASSERT(s0->slab_cache == s1->slab_cache);
3672 cp = s1->slab_cache;
3673 ASSERT(MUTEX_HELD(&cp->cache_lock));
3674 binshift = cp->cache_partial_binshift;
3676 /* weight of first slab */
3677 w0 = KMEM_PARTIAL_SLAB_WEIGHT(s0, binshift);
3678 if (s0->slab_flags & KMEM_SLAB_NOMOVE) {
3679 w0 -= cp->cache_maxchunks;
3682 /* weight of second slab */
3683 w1 = KMEM_PARTIAL_SLAB_WEIGHT(s1, binshift);
3684 if (s1->slab_flags & KMEM_SLAB_NOMOVE) {
3685 w1 -= cp->cache_maxchunks;
3688 if (w0 < w1)
3689 return (-1);
3690 if (w0 > w1)
3691 return (1);
3693 /* compare pointer values */
3694 if ((uintptr_t)s0 < (uintptr_t)s1)
3695 return (-1);
3696 if ((uintptr_t)s0 > (uintptr_t)s1)
3697 return (1);
3699 return (0);
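/*
 * A worked example of the weighting above (illustrative numbers): for a
 * cache with cache_maxchunks == 64, cache_partial_binshift is
 * highbit(64 / 16) + 1 == 4. A slab with 60 of 64 buffers allocated is
 * within binshift of full, so it gets the fine-grained weight -60; a slab
 * with only 3 allocated gets -3. Slabs in the middle fall into coarse
 * bins: refcnt 30 yields -(4 + (30 >> 4)) == -5 and refcnt 40 yields -6,
 * so similarly-used middle slabs compare equal and fall back to the
 * pointer comparison. Lower weights sort first, so nearly-full slabs (and
 * KMEM_SLAB_NOMOVE slabs, further penalized by cache_maxchunks) head the
 * list, as described above.
 */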
3703 * It must be valid to call the destructor (if any) on a newly created object.
3704 * That is, the constructor (if any) must leave the object in a valid state for
3705 * the destructor.
3707 kmem_cache_t *
3708 kmem_cache_create(
3709 char *name, /* descriptive name for this cache */
3710 size_t bufsize, /* size of the objects it manages */
3711 size_t align, /* required object alignment */
3712 int (*constructor)(void *, void *, int), /* object constructor */
3713 void (*destructor)(void *, void *), /* object destructor */
3714 void (*reclaim)(void *), /* memory reclaim callback */
3715 void *private, /* pass-thru arg for constr/destr/reclaim */
3716 vmem_t *vmp, /* vmem source for slab allocation */
3717 int cflags) /* cache creation flags */
3719 int cpu_seqid;
3720 size_t chunksize;
3721 kmem_cache_t *cp;
3722 kmem_magtype_t *mtp;
3723 size_t csize = KMEM_CACHE_SIZE(max_ncpus);
3725 #ifdef DEBUG
3727 * Cache names should conform to the rules for valid C identifiers
3729 if (!strident_valid(name)) {
3730 cmn_err(CE_CONT,
3731 "kmem_cache_create: '%s' is an invalid cache name\n"
3732 "cache names must conform to the rules for "
3733 "C identifiers\n", name);
3735 #endif /* DEBUG */
3737 if (vmp == NULL)
3738 vmp = kmem_default_arena;
3741 * If this kmem cache has an identifier vmem arena as its source, mark
3742 * it such to allow kmem_reap_idspace().
3744 ASSERT(!(cflags & KMC_IDENTIFIER)); /* consumer should not set this */
3745 if (vmp->vm_cflags & VMC_IDENTIFIER)
3746 cflags |= KMC_IDENTIFIER;
3749 * Get a kmem_cache structure. We arrange that cp->cache_cpu[]
3750 * is aligned on a KMEM_CPU_CACHE_SIZE boundary to prevent
3751 * false sharing of per-CPU data.
3753 cp = vmem_xalloc(kmem_cache_arena, csize, KMEM_CPU_CACHE_SIZE,
3754 P2NPHASE(csize, KMEM_CPU_CACHE_SIZE), 0, NULL, NULL, VM_SLEEP);
3755 bzero(cp, csize);
3756 list_link_init(&cp->cache_link);
3758 if (align == 0)
3759 align = KMEM_ALIGN;
3762 * If we're not at least KMEM_ALIGN aligned, we can't use free
3763 * memory to hold bufctl information (because we can't safely
3764 * perform word loads and stores on it).
3766 if (align < KMEM_ALIGN)
3767 cflags |= KMC_NOTOUCH;
3769 if (!ISP2(align) || align > vmp->vm_quantum)
3770 panic("kmem_cache_create: bad alignment %lu", align);
3772 mutex_enter(&kmem_flags_lock);
3773 if (kmem_flags & KMF_RANDOMIZE)
3774 kmem_flags = (((kmem_flags | ~KMF_RANDOM) + 1) & KMF_RANDOM) |
3775 KMF_RANDOMIZE;
3776 cp->cache_flags = (kmem_flags | cflags) & KMF_DEBUG;
3777 mutex_exit(&kmem_flags_lock);
3780 * Make sure all the various flags are reasonable.
3782 ASSERT(!(cflags & KMC_NOHASH) || !(cflags & KMC_NOTOUCH));
3784 if (cp->cache_flags & KMF_LITE) {
3785 if (bufsize >= kmem_lite_minsize &&
3786 align <= kmem_lite_maxalign &&
3787 P2PHASE(bufsize, kmem_lite_maxalign) != 0) {
3788 cp->cache_flags |= KMF_BUFTAG;
3789 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL);
3790 } else {
3791 cp->cache_flags &= ~KMF_DEBUG;
3795 if (cp->cache_flags & KMF_DEADBEEF)
3796 cp->cache_flags |= KMF_REDZONE;
3798 if ((cflags & KMC_QCACHE) && (cp->cache_flags & KMF_AUDIT))
3799 cp->cache_flags |= KMF_NOMAGAZINE;
3801 if (cflags & KMC_NODEBUG)
3802 cp->cache_flags &= ~KMF_DEBUG;
3804 if (cflags & KMC_NOTOUCH)
3805 cp->cache_flags &= ~KMF_TOUCH;
3807 if (cflags & KMC_PREFILL)
3808 cp->cache_flags |= KMF_PREFILL;
3810 if (cflags & KMC_NOHASH)
3811 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL);
3813 if (cflags & KMC_NOMAGAZINE)
3814 cp->cache_flags |= KMF_NOMAGAZINE;
3816 if ((cp->cache_flags & KMF_AUDIT) && !(cflags & KMC_NOTOUCH))
3817 cp->cache_flags |= KMF_REDZONE;
3819 if (!(cp->cache_flags & KMF_AUDIT))
3820 cp->cache_flags &= ~KMF_CONTENTS;
3822 if ((cp->cache_flags & KMF_BUFTAG) && bufsize >= kmem_minfirewall &&
3823 !(cp->cache_flags & KMF_LITE) && !(cflags & KMC_NOHASH))
3824 cp->cache_flags |= KMF_FIREWALL;
3826 if (vmp != kmem_default_arena || kmem_firewall_arena == NULL)
3827 cp->cache_flags &= ~KMF_FIREWALL;
3829 if (cp->cache_flags & KMF_FIREWALL) {
3830 cp->cache_flags &= ~KMF_BUFTAG;
3831 cp->cache_flags |= KMF_NOMAGAZINE;
3832 ASSERT(vmp == kmem_default_arena);
3833 vmp = kmem_firewall_arena;
3837 * Set cache properties.
3839 (void) strncpy(cp->cache_name, name, KMEM_CACHE_NAMELEN);
3840 strident_canon(cp->cache_name, KMEM_CACHE_NAMELEN + 1);
3841 cp->cache_bufsize = bufsize;
3842 cp->cache_align = align;
3843 cp->cache_constructor = constructor;
3844 cp->cache_destructor = destructor;
3845 cp->cache_reclaim = reclaim;
3846 cp->cache_private = private;
3847 cp->cache_arena = vmp;
3848 cp->cache_cflags = cflags;
3851 * Determine the chunk size.
3853 chunksize = bufsize;
3855 if (align >= KMEM_ALIGN) {
3856 chunksize = P2ROUNDUP(chunksize, KMEM_ALIGN);
3857 cp->cache_bufctl = chunksize - KMEM_ALIGN;
3860 if (cp->cache_flags & KMF_BUFTAG) {
3861 cp->cache_bufctl = chunksize;
3862 cp->cache_buftag = chunksize;
3863 if (cp->cache_flags & KMF_LITE)
3864 chunksize += KMEM_BUFTAG_LITE_SIZE(kmem_lite_count);
3865 else
3866 chunksize += sizeof (kmem_buftag_t);
3869 if (cp->cache_flags & KMF_DEADBEEF) {
3870 cp->cache_verify = MIN(cp->cache_buftag, kmem_maxverify);
3871 if (cp->cache_flags & KMF_LITE)
3872 cp->cache_verify = sizeof (uint64_t);
3875 cp->cache_contents = MIN(cp->cache_bufctl, kmem_content_maxsave);
3877 cp->cache_chunksize = chunksize = P2ROUNDUP(chunksize, align);
3880 * Now that we know the chunk size, determine the optimal slab size.
3882 if (vmp == kmem_firewall_arena) {
3883 cp->cache_slabsize = P2ROUNDUP(chunksize, vmp->vm_quantum);
3884 cp->cache_mincolor = cp->cache_slabsize - chunksize;
3885 cp->cache_maxcolor = cp->cache_mincolor;
3886 cp->cache_flags |= KMF_HASH;
3887 ASSERT(!(cp->cache_flags & KMF_BUFTAG));
3888 } else if ((cflags & KMC_NOHASH) || (!(cflags & KMC_NOTOUCH) &&
3889 !(cp->cache_flags & KMF_AUDIT) &&
3890 chunksize < vmp->vm_quantum / KMEM_VOID_FRACTION)) {
3891 cp->cache_slabsize = vmp->vm_quantum;
3892 cp->cache_mincolor = 0;
3893 cp->cache_maxcolor =
3894 (cp->cache_slabsize - sizeof (kmem_slab_t)) % chunksize;
3895 ASSERT(chunksize + sizeof (kmem_slab_t) <= cp->cache_slabsize);
3896 ASSERT(!(cp->cache_flags & KMF_AUDIT));
3897 } else {
3898 size_t chunks, bestfit, waste, slabsize;
3899 size_t minwaste = LONG_MAX;
3901 for (chunks = 1; chunks <= KMEM_VOID_FRACTION; chunks++) {
3902 slabsize = P2ROUNDUP(chunksize * chunks,
3903 vmp->vm_quantum);
3904 chunks = slabsize / chunksize;
3905 waste = (slabsize % chunksize) / chunks;
3906 if (waste < minwaste) {
3907 minwaste = waste;
3908 bestfit = slabsize;
3911 if (cflags & KMC_QCACHE)
3912 bestfit = VMEM_QCACHE_SLABSIZE(vmp->vm_qcache_max);
3913 cp->cache_slabsize = bestfit;
3914 cp->cache_mincolor = 0;
3915 cp->cache_maxcolor = bestfit % chunksize;
3916 cp->cache_flags |= KMF_HASH;
3919 cp->cache_maxchunks = (cp->cache_slabsize / cp->cache_chunksize);
3920 cp->cache_partial_binshift = highbit(cp->cache_maxchunks / 16) + 1;
3923 * Disallowing prefill when either the DEBUG or HASH flag is set or when
3924 * there is a constructor avoids some tricky issues with debug setup
3925 * that may be revisited later. We cannot allow prefill in a
3926 * metadata cache because of potential recursion.
3928 if (vmp == kmem_msb_arena ||
3929 cp->cache_flags & (KMF_HASH | KMF_BUFTAG) ||
3930 cp->cache_constructor != NULL)
3931 cp->cache_flags &= ~KMF_PREFILL;
3933 if (cp->cache_flags & KMF_HASH) {
3934 ASSERT(!(cflags & KMC_NOHASH));
3935 cp->cache_bufctl_cache = (cp->cache_flags & KMF_AUDIT) ?
3936 kmem_bufctl_audit_cache : kmem_bufctl_cache;
3939 if (cp->cache_maxcolor >= vmp->vm_quantum)
3940 cp->cache_maxcolor = vmp->vm_quantum - 1;
3942 cp->cache_color = cp->cache_mincolor;
3945 * Initialize the rest of the slab layer.
3947 mutex_init(&cp->cache_lock, NULL, MUTEX_DEFAULT, NULL);
3949 avl_create(&cp->cache_partial_slabs, kmem_partial_slab_cmp,
3950 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link));
3951 /* LINTED: E_TRUE_LOGICAL_EXPR */
3952 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t));
3953 /* reuse partial slab AVL linkage for complete slab list linkage */
3954 list_create(&cp->cache_complete_slabs,
3955 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link));
3957 if (cp->cache_flags & KMF_HASH) {
3958 cp->cache_hash_table = vmem_alloc(kmem_hash_arena,
3959 KMEM_HASH_INITIAL * sizeof (void *), VM_SLEEP);
3960 bzero(cp->cache_hash_table,
3961 KMEM_HASH_INITIAL * sizeof (void *));
3962 cp->cache_hash_mask = KMEM_HASH_INITIAL - 1;
3963 cp->cache_hash_shift = highbit((ulong_t)chunksize) - 1;
3967 * Initialize the depot.
3969 mutex_init(&cp->cache_depot_lock, NULL, MUTEX_DEFAULT, NULL);
3971 for (mtp = kmem_magtype; chunksize <= mtp->mt_minbuf; mtp++)
3972 continue;
3974 cp->cache_magtype = mtp;
3977 * Initialize the CPU layer.
3979 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3980 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];
3981 mutex_init(&ccp->cc_lock, NULL, MUTEX_DEFAULT, NULL);
3982 ccp->cc_flags = cp->cache_flags;
3983 ccp->cc_rounds = -1;
3984 ccp->cc_prounds = -1;
3988 * Create the cache's kstats.
3990 if ((cp->cache_kstat = kstat_create("unix", 0, cp->cache_name,
3991 "kmem_cache", KSTAT_TYPE_NAMED,
3992 sizeof (kmem_cache_kstat) / sizeof (kstat_named_t),
3993 KSTAT_FLAG_VIRTUAL)) != NULL) {
3994 cp->cache_kstat->ks_data = &kmem_cache_kstat;
3995 cp->cache_kstat->ks_update = kmem_cache_kstat_update;
3996 cp->cache_kstat->ks_private = cp;
3997 cp->cache_kstat->ks_lock = &kmem_cache_kstat_lock;
3998 kstat_install(cp->cache_kstat);
4002 * Add the cache to the global list. This makes it visible
4003 * to kmem_update(), so the cache must be ready for business.
4005 mutex_enter(&kmem_cache_lock);
4006 list_insert_tail(&kmem_caches, cp);
4007 mutex_exit(&kmem_cache_lock);
4009 if (kmem_ready)
4010 kmem_cache_magazine_enable(cp);
4012 return (cp);
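/*
 * A minimal usage sketch (hypothetical client, not from this file),
 * illustrating the contract noted above: the constructor must leave the
 * object in a state the destructor can handle, even if the object is
 * destroyed without ever having been handed to a client.
 *
 *	static int
 *	foo_construct(void *buf, void *arg, int kmflags)
 *	{
 *		foo_t *fp = buf;
 *
 *		mutex_init(&fp->f_lock, NULL, MUTEX_DEFAULT, NULL);
 *		fp->f_data = NULL;
 *		return (0);
 *	}
 *
 *	static void
 *	foo_destruct(void *buf, void *arg)
 *	{
 *		foo_t *fp = buf;
 *
 *		ASSERT(fp->f_data == NULL);
 *		mutex_destroy(&fp->f_lock);
 *	}
 *
 *	foo_cache = kmem_cache_create("foo_cache", sizeof (foo_t), 0,
 *	    foo_construct, foo_destruct, NULL, NULL, NULL, 0);
 */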
4015 static int
4016 kmem_move_cmp(const void *buf, const void *p)
4018 const kmem_move_t *kmm = p;
4019 uintptr_t v1 = (uintptr_t)buf;
4020 uintptr_t v2 = (uintptr_t)kmm->kmm_from_buf;
4021 return (v1 < v2 ? -1 : (v1 > v2 ? 1 : 0));
4024 static void
4025 kmem_reset_reclaim_threshold(kmem_defrag_t *kmd)
4027 kmd->kmd_reclaim_numer = 1;
4031 * Initially, when choosing candidate slabs for buffers to move, we want to be
4032 * very selective and take only slabs that are less than
4033 * (1 / KMEM_VOID_FRACTION) allocated. If we have difficulty finding candidate
4034 * slabs, then we raise the allocation ceiling incrementally. The reclaim
4035 * threshold is reset to (1 / KMEM_VOID_FRACTION) as soon as the cache is no
4036 * longer fragmented.
4038 static void
4039 kmem_adjust_reclaim_threshold(kmem_defrag_t *kmd, int direction)
4041 if (direction > 0) {
4042 /* make it easier to find a candidate slab */
4043 if (kmd->kmd_reclaim_numer < (KMEM_VOID_FRACTION - 1)) {
4044 kmd->kmd_reclaim_numer++;
4046 } else {
4047 /* be more selective */
4048 if (kmd->kmd_reclaim_numer > 1) {
4049 kmd->kmd_reclaim_numer--;
4054 void
4055 kmem_cache_set_move(kmem_cache_t *cp,
4056 kmem_cbrc_t (*move)(void *, void *, size_t, void *))
4058 kmem_defrag_t *defrag;
4060 ASSERT(move != NULL);
4062 * The consolidator does not support NOTOUCH caches because kmem cannot
4063 * initialize their slabs with the 0xbaddcafe memory pattern, which sets
4064 * a low order bit usable by clients to distinguish uninitialized memory
4065 * from known objects (see kmem_slab_create).
4067 ASSERT(!(cp->cache_cflags & KMC_NOTOUCH));
4068 ASSERT(!(cp->cache_cflags & KMC_IDENTIFIER));
4071 * We should not be holding anyone's cache lock when calling
4072 * kmem_cache_alloc(), so allocate in all cases before acquiring the
4073 * lock.
4075 defrag = kmem_cache_alloc(kmem_defrag_cache, KM_SLEEP);
4077 mutex_enter(&cp->cache_lock);
4079 if (KMEM_IS_MOVABLE(cp)) {
4080 if (cp->cache_move == NULL) {
4081 ASSERT(cp->cache_slab_alloc == 0);
4083 cp->cache_defrag = defrag;
4084 defrag = NULL; /* nothing to free */
4085 bzero(cp->cache_defrag, sizeof (kmem_defrag_t));
4086 avl_create(&cp->cache_defrag->kmd_moves_pending,
4087 kmem_move_cmp, sizeof (kmem_move_t),
4088 offsetof(kmem_move_t, kmm_entry));
4089 /* LINTED: E_TRUE_LOGICAL_EXPR */
4090 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t));
4091 /* reuse the slab's AVL linkage for deadlist linkage */
4092 list_create(&cp->cache_defrag->kmd_deadlist,
4093 sizeof (kmem_slab_t),
4094 offsetof(kmem_slab_t, slab_link));
4095 kmem_reset_reclaim_threshold(cp->cache_defrag);
4097 cp->cache_move = move;
4100 mutex_exit(&cp->cache_lock);
4102 if (defrag != NULL) {
4103 kmem_cache_free(kmem_defrag_cache, defrag); /* unused */
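/*
 * A sketch of how a client opts into the consolidator (hypothetical names;
 * foo_is_known_object() and foo_repoint_references() stand in for whatever
 * bookkeeping the client really uses). The response semantics are
 * described with kmem_move_buffer() below; the key obligation is that the
 * callback must decide whether the old buffer is still a live, known
 * object before touching it.
 *
 *	static kmem_cbrc_t
 *	foo_move(void *old, void *new, size_t size, void *arg)
 *	{
 *		foo_t *fp = old;
 *
 *		if (!foo_is_known_object(fp))
 *			return (KMEM_CBRC_DONT_KNOW);
 *		if (!mutex_tryenter(&fp->f_lock))
 *			return (KMEM_CBRC_LATER);
 *		bcopy(old, new, size);
 *		foo_repoint_references(fp, new);
 *		mutex_exit(&fp->f_lock);
 *		return (KMEM_CBRC_YES);
 *	}
 *
 *	kmem_cache_set_move(foo_cache, foo_move);
 */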
4107 void
4108 kmem_cache_destroy(kmem_cache_t *cp)
4110 int cpu_seqid;
4113 * Remove the cache from the global cache list so that no one else
4114 * can schedule tasks on its behalf, wait for any pending tasks to
4115 * complete, purge the cache, and then destroy it.
4117 mutex_enter(&kmem_cache_lock);
4118 list_remove(&kmem_caches, cp);
4119 mutex_exit(&kmem_cache_lock);
4121 if (kmem_taskq != NULL)
4122 taskq_wait(kmem_taskq);
4123 if (kmem_move_taskq != NULL)
4124 taskq_wait(kmem_move_taskq);
4126 kmem_cache_magazine_purge(cp);
4128 mutex_enter(&cp->cache_lock);
4129 if (cp->cache_buftotal != 0)
4130 cmn_err(CE_WARN, "kmem_cache_destroy: '%s' (%p) not empty",
4131 cp->cache_name, (void *)cp);
4132 if (cp->cache_defrag != NULL) {
4133 avl_destroy(&cp->cache_defrag->kmd_moves_pending);
4134 list_destroy(&cp->cache_defrag->kmd_deadlist);
4135 kmem_cache_free(kmem_defrag_cache, cp->cache_defrag);
4136 cp->cache_defrag = NULL;
4139 * The cache is now dead. There should be no further activity. We
4140 * enforce this by setting land mines in the constructor, destructor,
4141 * reclaim, and move routines that induce a kernel text fault if
4142 * invoked.
4144 cp->cache_constructor = (int (*)(void *, void *, int))1;
4145 cp->cache_destructor = (void (*)(void *, void *))2;
4146 cp->cache_reclaim = (void (*)(void *))3;
4147 cp->cache_move = (kmem_cbrc_t (*)(void *, void *, size_t, void *))4;
4148 mutex_exit(&cp->cache_lock);
4150 kstat_delete(cp->cache_kstat);
4152 if (cp->cache_hash_table != NULL)
4153 vmem_free(kmem_hash_arena, cp->cache_hash_table,
4154 (cp->cache_hash_mask + 1) * sizeof (void *));
4156 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++)
4157 mutex_destroy(&cp->cache_cpu[cpu_seqid].cc_lock);
4159 mutex_destroy(&cp->cache_depot_lock);
4160 mutex_destroy(&cp->cache_lock);
4162 vmem_free(kmem_cache_arena, cp, KMEM_CACHE_SIZE(max_ncpus));
4165 /*ARGSUSED*/
4166 static int
4167 kmem_cpu_setup(cpu_setup_t what, int id, void *arg)
4169 ASSERT(MUTEX_HELD(&cpu_lock));
4170 if (what == CPU_UNCONFIG) {
4171 kmem_cache_applyall(kmem_cache_magazine_purge,
4172 kmem_taskq, TQ_SLEEP);
4173 kmem_cache_applyall(kmem_cache_magazine_enable,
4174 kmem_taskq, TQ_SLEEP);
4176 return (0);
4179 static void
4180 kmem_alloc_caches_create(const int *array, size_t count,
4181 kmem_cache_t **alloc_table, size_t maxbuf, uint_t shift)
4183 char name[KMEM_CACHE_NAMELEN + 1];
4184 size_t table_unit = (1 << shift); /* range of one alloc_table entry */
4185 size_t size = table_unit;
4186 int i;
4188 for (i = 0; i < count; i++) {
4189 size_t cache_size = array[i];
4190 size_t align = KMEM_ALIGN;
4191 kmem_cache_t *cp;
4193 /* if the table has an entry for maxbuf, we're done */
4194 if (size > maxbuf)
4195 break;
4197 /* cache size must be a multiple of the table unit */
4198 ASSERT(P2PHASE(cache_size, table_unit) == 0);
4201 * If they allocate a multiple of the coherency granularity,
4202 * they get a coherency-granularity-aligned address.
4204 if (IS_P2ALIGNED(cache_size, 64))
4205 align = 64;
4206 if (IS_P2ALIGNED(cache_size, PAGESIZE))
4207 align = PAGESIZE;
4208 (void) snprintf(name, sizeof (name),
4209 "kmem_alloc_%lu", cache_size);
4210 cp = kmem_cache_create(name, cache_size, align,
4211 NULL, NULL, NULL, NULL, NULL, KMC_KMEM_ALLOC);
4213 while (size <= cache_size) {
4214 alloc_table[(size - 1) >> shift] = cp;
4215 size += table_unit;
4219 ASSERT(size > maxbuf); /* i.e. maxbuf <= max(cache_size) */
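/*
 * A worked example of the table mapping above (illustrative sizes; assumes
 * the KMEM_ALIGN_SHIFT of 3 used for kmem_alloc_table): with caches for 8,
 * 16, and 24 bytes, the kmem_alloc_24 cache occupies entry 2, so a later
 * kmem_alloc(20) request indexes alloc_table[(20 - 1) >> 3] == entry 2 and
 * is satisfied from kmem_alloc_24, wasting only 4 of its 24 bytes.
 */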
4222 static void
4223 kmem_cache_init(int pass, int use_large_pages)
4225 int i;
4226 size_t maxbuf;
4227 kmem_magtype_t *mtp;
4229 for (i = 0; i < sizeof (kmem_magtype) / sizeof (*mtp); i++) {
4230 char name[KMEM_CACHE_NAMELEN + 1];
4232 mtp = &kmem_magtype[i];
4233 (void) sprintf(name, "kmem_magazine_%d", mtp->mt_magsize);
4234 mtp->mt_cache = kmem_cache_create(name,
4235 (mtp->mt_magsize + 1) * sizeof (void *),
4236 mtp->mt_align, NULL, NULL, NULL, NULL,
4237 kmem_msb_arena, KMC_NOHASH);
4240 kmem_slab_cache = kmem_cache_create("kmem_slab_cache",
4241 sizeof (kmem_slab_t), 0, NULL, NULL, NULL, NULL,
4242 kmem_msb_arena, KMC_NOHASH);
4244 kmem_bufctl_cache = kmem_cache_create("kmem_bufctl_cache",
4245 sizeof (kmem_bufctl_t), 0, NULL, NULL, NULL, NULL,
4246 kmem_msb_arena, KMC_NOHASH);
4248 kmem_bufctl_audit_cache = kmem_cache_create("kmem_bufctl_audit_cache",
4249 sizeof (kmem_bufctl_audit_t), 0, NULL, NULL, NULL, NULL,
4250 kmem_msb_arena, KMC_NOHASH);
4252 if (pass == 2) {
4253 kmem_va_arena = vmem_create("kmem_va",
4254 NULL, 0, PAGESIZE,
4255 vmem_alloc, vmem_free, heap_arena,
4256 8 * PAGESIZE, VM_SLEEP);
4258 if (use_large_pages) {
4259 kmem_default_arena = vmem_xcreate("kmem_default",
4260 NULL, 0, PAGESIZE,
4261 segkmem_alloc_lp, segkmem_free_lp, kmem_va_arena,
4262 0, VMC_DUMPSAFE | VM_SLEEP);
4263 } else {
4264 kmem_default_arena = vmem_create("kmem_default",
4265 NULL, 0, PAGESIZE,
4266 segkmem_alloc, segkmem_free, kmem_va_arena,
4267 0, VMC_DUMPSAFE | VM_SLEEP);
4270 /* Figure out what our maximum cache size is */
4271 maxbuf = kmem_max_cached;
4272 if (maxbuf <= KMEM_MAXBUF) {
4273 maxbuf = 0;
4274 kmem_max_cached = KMEM_MAXBUF;
4275 } else {
4276 size_t size = 0;
4277 size_t max =
4278 sizeof (kmem_big_alloc_sizes) / sizeof (int);
4280 * Round maxbuf up to an existing cache size. If maxbuf
4281 * is larger than the largest cache, we truncate it to
4282 * the largest cache's size.
4284 for (i = 0; i < max; i++) {
4285 size = kmem_big_alloc_sizes[i];
4286 if (maxbuf <= size)
4287 break;
4289 kmem_max_cached = maxbuf = size;
4293 * The big alloc table may not be completely overwritten, so
4294 * we clear out any stale cache pointers from the first pass.
4296 bzero(kmem_big_alloc_table, sizeof (kmem_big_alloc_table));
4297 } else {
4299 * During the first pass, the kmem_alloc_* caches
4300 * are treated as metadata.
4302 kmem_default_arena = kmem_msb_arena;
4303 maxbuf = KMEM_BIG_MAXBUF_32BIT;
4307 * Set up the default caches to back kmem_alloc()
4309 kmem_alloc_caches_create(
4310 kmem_alloc_sizes, sizeof (kmem_alloc_sizes) / sizeof (int),
4311 kmem_alloc_table, KMEM_MAXBUF, KMEM_ALIGN_SHIFT);
4313 kmem_alloc_caches_create(
4314 kmem_big_alloc_sizes, sizeof (kmem_big_alloc_sizes) / sizeof (int),
4315 kmem_big_alloc_table, maxbuf, KMEM_BIG_SHIFT);
4317 kmem_big_alloc_table_max = maxbuf >> KMEM_BIG_SHIFT;
4320 void
4321 kmem_init(void)
4323 kmem_cache_t *cp;
4324 int old_kmem_flags = kmem_flags;
4325 int use_large_pages = 0;
4326 size_t maxverify, minfirewall;
4328 kstat_init();
4331 * Small-memory systems (< 24 MB) can't handle kmem_flags overhead.
4333 if (physmem < btop(24 << 20) && !(old_kmem_flags & KMF_STICKY))
4334 kmem_flags = 0;
4337 * Don't do firewalled allocations if the heap is less than 1TB
4338 * (i.e. on a 32-bit kernel).
4339 * The resulting VM_NEXTFIT allocations would create too much
4340 * fragmentation in a small heap.
4342 #if defined(_LP64)
4343 maxverify = minfirewall = PAGESIZE / 2;
4344 #else
4345 maxverify = minfirewall = ULONG_MAX;
4346 #endif
4348 /* LINTED */
4349 ASSERT(sizeof (kmem_cpu_cache_t) == KMEM_CPU_CACHE_SIZE);
4351 list_create(&kmem_caches, sizeof (kmem_cache_t),
4352 offsetof(kmem_cache_t, cache_link));
4354 kmem_metadata_arena = vmem_create("kmem_metadata", NULL, 0, PAGESIZE,
4355 vmem_alloc, vmem_free, heap_arena, 8 * PAGESIZE,
4356 VM_SLEEP | VMC_NO_QCACHE);
4358 kmem_msb_arena = vmem_create("kmem_msb", NULL, 0,
4359 PAGESIZE, segkmem_alloc, segkmem_free, kmem_metadata_arena, 0,
4360 VMC_DUMPSAFE | VM_SLEEP);
4362 kmem_cache_arena = vmem_create("kmem_cache", NULL, 0, KMEM_ALIGN,
4363 segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);
4365 kmem_hash_arena = vmem_create("kmem_hash", NULL, 0, KMEM_ALIGN,
4366 segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);
4368 kmem_log_arena = vmem_create("kmem_log", NULL, 0, KMEM_ALIGN,
4369 segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);
4371 kmem_firewall_va_arena = vmem_create("kmem_firewall_va",
4372 NULL, 0, PAGESIZE,
4373 kmem_firewall_va_alloc, kmem_firewall_va_free, heap_arena,
4374 0, VM_SLEEP);
4376 kmem_firewall_arena = vmem_create("kmem_firewall", NULL, 0, PAGESIZE,
4377 segkmem_alloc, segkmem_free, kmem_firewall_va_arena, 0,
4378 VMC_DUMPSAFE | VM_SLEEP);
4380 /* temporary oversize arena for mod_read_system_file */
4381 kmem_oversize_arena = vmem_create("kmem_oversize", NULL, 0, PAGESIZE,
4382 segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);
4384 kmem_reap_interval = 15 * hz;
4387 * Read /etc/system. This is a chicken-and-egg problem because
4388 * kmem_flags may be set in /etc/system, but mod_read_system_file()
4389 * needs to use the allocator. The simplest solution is to create
4390 * all the standard kmem caches, read /etc/system, destroy all the
4391 * caches we just created, and then create them all again in light
4392 * of the (possibly) new kmem_flags and other kmem tunables.
4394 kmem_cache_init(1, 0);
4396 mod_read_system_file(boothowto & RB_ASKNAME);
4398 while ((cp = list_tail(&kmem_caches)) != NULL)
4399 kmem_cache_destroy(cp);
4401 vmem_destroy(kmem_oversize_arena);
4403 if (old_kmem_flags & KMF_STICKY)
4404 kmem_flags = old_kmem_flags;
4406 if (!(kmem_flags & KMF_AUDIT))
4407 vmem_seg_size = offsetof(vmem_seg_t, vs_thread);
4409 if (kmem_maxverify == 0)
4410 kmem_maxverify = maxverify;
4412 if (kmem_minfirewall == 0)
4413 kmem_minfirewall = minfirewall;
4416 * give segkmem a chance to figure out if we are using large pages
4417 * for the kernel heap
4419 use_large_pages = segkmem_lpsetup();
4422 * To protect against corruption, we keep the actual number of callers
4423 * KMF_LITE records separate from the tunable. We arbitrarily clamp
4424 * to 16, since the overhead for small buffers quickly gets out of
4425 * hand.
4427 * The real limit would depend on the needs of the largest KMC_NOHASH
4428 * cache.
4430 kmem_lite_count = MIN(MAX(0, kmem_lite_pcs), 16);
4431 kmem_lite_pcs = kmem_lite_count;
4434 * Normally, we firewall oversized allocations when possible, but
4435 * if we are using large pages for kernel memory, and we don't have
4436 * any non-LITE debugging flags set, we want to allocate oversized
4437 * buffers from large pages, and so skip the firewalling.
4439 if (use_large_pages &&
4440 ((kmem_flags & KMF_LITE) || !(kmem_flags & KMF_DEBUG))) {
4441 kmem_oversize_arena = vmem_xcreate("kmem_oversize", NULL, 0,
4442 PAGESIZE, segkmem_alloc_lp, segkmem_free_lp, heap_arena,
4443 0, VMC_DUMPSAFE | VM_SLEEP);
4444 } else {
4445 kmem_oversize_arena = vmem_create("kmem_oversize",
4446 NULL, 0, PAGESIZE,
4447 segkmem_alloc, segkmem_free, kmem_minfirewall < ULONG_MAX?
4448 kmem_firewall_va_arena : heap_arena, 0, VMC_DUMPSAFE |
4449 VM_SLEEP);
4452 kmem_cache_init(2, use_large_pages);
4454 if (kmem_flags & (KMF_AUDIT | KMF_RANDOMIZE)) {
4455 if (kmem_transaction_log_size == 0)
4456 kmem_transaction_log_size = kmem_maxavail() / 50;
4457 kmem_transaction_log = kmem_log_init(kmem_transaction_log_size);
4460 if (kmem_flags & (KMF_CONTENTS | KMF_RANDOMIZE)) {
4461 if (kmem_content_log_size == 0)
4462 kmem_content_log_size = kmem_maxavail() / 50;
4463 kmem_content_log = kmem_log_init(kmem_content_log_size);
4466 kmem_failure_log = kmem_log_init(kmem_failure_log_size);
4468 kmem_slab_log = kmem_log_init(kmem_slab_log_size);
4471 * Initialize STREAMS message caches so allocb() is available.
4472 * This allows us to initialize the logging framework (cmn_err(9F),
4473 * strlog(9F), etc) so we can start recording messages.
4475 streams_msg_init();
4478 * Initialize the ZSD framework in Zones so modules loaded henceforth
4479 * can register their callbacks.
4481 zone_zsd_init();
4483 log_init();
4484 taskq_init();
4487 * Warn about invalid or dangerous values of kmem_flags.
4488 * Always warn about unsupported values.
4490 if (((kmem_flags & ~(KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE |
4491 KMF_CONTENTS | KMF_LITE)) != 0) ||
4492 ((kmem_flags & KMF_LITE) && kmem_flags != KMF_LITE))
4493 cmn_err(CE_WARN, "kmem_flags set to unsupported value 0x%x. "
4494 "See the Solaris Tunable Parameters Reference Manual.",
4495 kmem_flags);
4497 #ifdef DEBUG
4498 if ((kmem_flags & KMF_DEBUG) == 0)
4499 cmn_err(CE_NOTE, "kmem debugging disabled.");
4500 #else
4502 * For non-debug kernels, the only "normal" flags are 0, KMF_LITE,
4503 * KMF_REDZONE, and KMF_CONTENTS (the last because it is only enabled
4504 * if KMF_AUDIT is set). We should warn the user about the performance
4505 * penalty of KMF_AUDIT or KMF_DEADBEEF if they are set and KMF_LITE
4506 * isn't set (since that disables AUDIT).
4508 if (!(kmem_flags & KMF_LITE) &&
4509 (kmem_flags & (KMF_AUDIT | KMF_DEADBEEF)) != 0)
4510 cmn_err(CE_WARN, "High-overhead kmem debugging features "
4511 "enabled (kmem_flags = 0x%x). Performance degradation "
4512 "and large memory overhead possible. See the Solaris "
4513 "Tunable Parameters Reference Manual.", kmem_flags);
4514 #endif /* not DEBUG */
4516 kmem_cache_applyall(kmem_cache_magazine_enable, NULL, TQ_SLEEP);
4518 kmem_ready = 1;
4521 * Initialize the platform-specific aligned/DMA memory allocator.
4523 ka_init();
4526 * Initialize 32-bit ID cache.
4528 id32_init();
4531 * Initialize the networking stack so modules loaded can
4532 * register their callbacks.
4534 netstack_init();
4537 static void
4538 kmem_move_init(void)
4540 kmem_defrag_cache = kmem_cache_create("kmem_defrag_cache",
4541 sizeof (kmem_defrag_t), 0, NULL, NULL, NULL, NULL,
4542 kmem_msb_arena, KMC_NOHASH);
4543 kmem_move_cache = kmem_cache_create("kmem_move_cache",
4544 sizeof (kmem_move_t), 0, NULL, NULL, NULL, NULL,
4545 kmem_msb_arena, KMC_NOHASH);
4548 * kmem guarantees that move callbacks are sequential and that even
4549 * across multiple caches no two moves ever execute simultaneously.
4550 * Move callbacks are processed on a separate taskq so that client code
4551 * does not interfere with internal maintenance tasks.
4553 kmem_move_taskq = taskq_create_instance("kmem_move_taskq", 0, 1,
4554 minclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE);
4557 void
4558 kmem_thread_init(void)
4560 kmem_move_init();
4561 kmem_taskq = taskq_create_instance("kmem_taskq", 0, 1, minclsyspri,
4562 300, INT_MAX, TASKQ_PREPOPULATE);
4565 void
4566 kmem_mp_init(void)
4568 mutex_enter(&cpu_lock);
4569 register_cpu_setup_func(kmem_cpu_setup, NULL);
4570 mutex_exit(&cpu_lock);
4572 kmem_update_timeout(NULL);
4574 taskq_mp_init();
4578 * Return the slab of the allocated buffer, or NULL if the buffer is not
4579 * allocated. This function may be called with a known slab address to determine
4580 * whether or not the buffer is allocated, or with a NULL slab address to obtain
4581 * an allocated buffer's slab.
4583 static kmem_slab_t *
4584 kmem_slab_allocated(kmem_cache_t *cp, kmem_slab_t *sp, void *buf)
4586 kmem_bufctl_t *bcp, *bufbcp;
4588 ASSERT(MUTEX_HELD(&cp->cache_lock));
4589 ASSERT(sp == NULL || KMEM_SLAB_MEMBER(sp, buf));
4591 if (cp->cache_flags & KMF_HASH) {
4592 for (bcp = *KMEM_HASH(cp, buf);
4593 (bcp != NULL) && (bcp->bc_addr != buf);
4594 bcp = bcp->bc_next) {
4595 continue;
4597 ASSERT(sp != NULL && bcp != NULL ? sp == bcp->bc_slab : 1);
4598 return (bcp == NULL ? NULL : bcp->bc_slab);
4601 if (sp == NULL) {
4602 sp = KMEM_SLAB(cp, buf);
4604 bufbcp = KMEM_BUFCTL(cp, buf);
4605 for (bcp = sp->slab_head;
4606 (bcp != NULL) && (bcp != bufbcp);
4607 bcp = bcp->bc_next) {
4608 continue;
4610 return (bcp == NULL ? sp : NULL);
4613 static boolean_t
4614 kmem_slab_is_reclaimable(kmem_cache_t *cp, kmem_slab_t *sp, int flags)
4616 long refcnt = sp->slab_refcnt;
4618 ASSERT(cp->cache_defrag != NULL);
4621 * For code coverage we want to be able to move an object within the
4622 * same slab (the only partial slab) even if allocating the destination
4623 * buffer resulted in a completely allocated slab.
4625 if (flags & KMM_DEBUG) {
4626 return ((flags & KMM_DESPERATE) ||
4627 ((sp->slab_flags & KMEM_SLAB_NOMOVE) == 0));
4630 /* If we're desperate, we don't care if the client said NO. */
4631 if (flags & KMM_DESPERATE) {
4632 return (refcnt < sp->slab_chunks); /* any partial */
4635 if (sp->slab_flags & KMEM_SLAB_NOMOVE) {
4636 return (B_FALSE);
4639 if ((refcnt == 1) || kmem_move_any_partial) {
4640 return (refcnt < sp->slab_chunks);
4644 * The reclaim threshold is adjusted at each kmem_cache_scan() so that
4645 * slabs with a progressively higher percentage of used buffers can be
4646 * reclaimed until the cache as a whole is no longer fragmented.
4648 * sp->slab_refcnt kmd_reclaim_numer
4649 * --------------- < ------------------
4650 * sp->slab_chunks KMEM_VOID_FRACTION
4652 return ((refcnt * KMEM_VOID_FRACTION) <
4653 (sp->slab_chunks * cp->cache_defrag->kmd_reclaim_numer));
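/*
 * A worked example of the threshold test above (illustrative numbers;
 * KMEM_VOID_FRACTION is assumed to be 8 here): for a slab with 64 chunks
 * and kmd_reclaim_numer == 1, the slab is a candidate only while
 * refcnt * 8 < 64, i.e. fewer than 8 buffers are allocated. If candidates
 * prove scarce and the numerator is raised to 3, slabs with up to 23
 * allocated buffers (just under 3/8 of the slab) become eligible as well.
 */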
4656 static void *
4657 kmem_hunt_mag(kmem_cache_t *cp, kmem_magazine_t *m, int n, void *buf,
4658 void *tbuf)
4660 int i; /* magazine round index */
4662 for (i = 0; i < n; i++) {
4663 if (buf == m->mag_round[i]) {
4664 if (cp->cache_flags & KMF_BUFTAG) {
4665 (void) kmem_cache_free_debug(cp, tbuf,
4666 caller());
4668 m->mag_round[i] = tbuf;
4669 return (buf);
4673 return (NULL);
4677 * Hunt the magazine layer for the given buffer. If found, the buffer is
4678 * removed from the magazine layer and returned, otherwise NULL is returned.
4679 * The state of the returned buffer is freed and constructed.
4681 static void *
4682 kmem_hunt_mags(kmem_cache_t *cp, void *buf)
4684 kmem_cpu_cache_t *ccp;
4685 kmem_magazine_t *m;
4686 int cpu_seqid;
4687 int n; /* magazine rounds */
4688 void *tbuf; /* temporary swap buffer */
4690 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
4693 * Allocate a buffer to swap with the one we hope to pull out of a
4694 * magazine when found.
4696 tbuf = kmem_cache_alloc(cp, KM_NOSLEEP);
4697 if (tbuf == NULL) {
4698 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_alloc_fail);
4699 return (NULL);
4701 if (tbuf == buf) {
4702 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_lucky);
4703 if (cp->cache_flags & KMF_BUFTAG) {
4704 (void) kmem_cache_free_debug(cp, buf, caller());
4706 return (buf);
4709 /* Hunt the depot. */
4710 mutex_enter(&cp->cache_depot_lock);
4711 n = cp->cache_magtype->mt_magsize;
4712 for (m = cp->cache_full.ml_list; m != NULL; m = m->mag_next) {
4713 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
4714 mutex_exit(&cp->cache_depot_lock);
4715 return (buf);
4718 mutex_exit(&cp->cache_depot_lock);
4720 /* Hunt the per-CPU magazines. */
4721 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
4722 ccp = &cp->cache_cpu[cpu_seqid];
4724 mutex_enter(&ccp->cc_lock);
4725 m = ccp->cc_loaded;
4726 n = ccp->cc_rounds;
4727 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
4728 mutex_exit(&ccp->cc_lock);
4729 return (buf);
4731 m = ccp->cc_ploaded;
4732 n = ccp->cc_prounds;
4733 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
4734 mutex_exit(&ccp->cc_lock);
4735 return (buf);
4737 mutex_exit(&ccp->cc_lock);
4740 kmem_cache_free(cp, tbuf);
4741 return (NULL);
4745 * May be called from the kmem_move_taskq, from kmem_cache_move_notify_task(),
4746 * or when the buffer is freed.
4748 static void
4749 kmem_slab_move_yes(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf)
4751 ASSERT(MUTEX_HELD(&cp->cache_lock));
4752 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf));
4754 if (!KMEM_SLAB_IS_PARTIAL(sp)) {
4755 return;
4758 if (sp->slab_flags & KMEM_SLAB_NOMOVE) {
4759 if (KMEM_SLAB_OFFSET(sp, from_buf) == sp->slab_stuck_offset) {
4760 avl_remove(&cp->cache_partial_slabs, sp);
4761 sp->slab_flags &= ~KMEM_SLAB_NOMOVE;
4762 sp->slab_stuck_offset = (uint32_t)-1;
4763 avl_add(&cp->cache_partial_slabs, sp);
4765 } else {
4766 sp->slab_later_count = 0;
4767 sp->slab_stuck_offset = (uint32_t)-1;
4771 static void
4772 kmem_slab_move_no(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf)
4774 ASSERT(taskq_member(kmem_move_taskq, curthread));
4775 ASSERT(MUTEX_HELD(&cp->cache_lock));
4776 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf));
4778 if (!KMEM_SLAB_IS_PARTIAL(sp)) {
4779 return;
4782 avl_remove(&cp->cache_partial_slabs, sp);
4783 sp->slab_later_count = 0;
4784 sp->slab_flags |= KMEM_SLAB_NOMOVE;
4785 sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp, from_buf);
4786 avl_add(&cp->cache_partial_slabs, sp);
4789 static void kmem_move_end(kmem_cache_t *, kmem_move_t *);
4792 * The move callback takes two buffer addresses, the buffer to be moved, and a
4793 * newly allocated and constructed buffer selected by kmem as the destination.
4794 * It also takes the size of the buffer and an optional user argument specified
4795 * at cache creation time. kmem guarantees that the buffer to be moved has not
4796 * been unmapped by the virtual memory subsystem. Beyond that, it cannot
4797 * guarantee the present whereabouts of the buffer to be moved, so it is up to
4798 * the client to safely determine whether or not it is still using the buffer.
4799 * The client must not free either of the buffers passed to the move callback,
4800 * since kmem wants to free them directly to the slab layer. The client response
4801 * tells kmem which of the two buffers to free:
4803 * YES kmem frees the old buffer (the move was successful)
4804 * NO kmem frees the new buffer, marks the slab of the old buffer
4805 * non-reclaimable to avoid bothering the client again
4806 * LATER kmem frees the new buffer, increments slab_later_count
4807 * DONT_KNOW kmem frees the new buffer, searches mags for the old buffer
4808 * DONT_NEED kmem frees both the old buffer and the new buffer
4810 * The pending callback argument now being processed contains both of the
4811 * buffers (old and new) passed to the move callback function, the slab of the
4812 * old buffer, and flags related to the move request, such as whether or not the
4813 * system was desperate for memory.
4815 * Slabs are not freed while there is a pending callback, but instead are kept
4816 * on a deadlist, which is drained after the last callback completes. This means
4817 * that slabs are safe to access until kmem_move_end(), no matter how many of
4818 * their buffers have been freed. Once slab_refcnt reaches zero, it stays at
4819 * zero for as long as the slab remains on the deadlist and until the slab is
4820 * freed.
4822 static void
4823 kmem_move_buffer(kmem_move_t *callback)
4825 kmem_cbrc_t response;
4826 kmem_slab_t *sp = callback->kmm_from_slab;
4827 kmem_cache_t *cp = sp->slab_cache;
4828 boolean_t free_on_slab;
4830 ASSERT(taskq_member(kmem_move_taskq, curthread));
4831 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
4832 ASSERT(KMEM_SLAB_MEMBER(sp, callback->kmm_from_buf));
4835 * The number of allocated buffers on the slab may have changed since we
4836 * last checked the slab's reclaimability (when the pending move was
4837 * enqueued), or the client may have responded NO when asked to move
4838 * another buffer on the same slab.
4840 if (!kmem_slab_is_reclaimable(cp, sp, callback->kmm_flags)) {
4841 KMEM_STAT_ADD(kmem_move_stats.kms_no_longer_reclaimable);
4842 KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY),
4843 kmem_move_stats.kms_notify_no_longer_reclaimable);
4844 kmem_slab_free(cp, callback->kmm_to_buf);
4845 kmem_move_end(cp, callback);
4846 return;
4850 * Hunting magazines is expensive, so we'll wait to do that until the
4851 * client responds KMEM_CBRC_DONT_KNOW. However, checking the slab layer
4852 * is cheap, so we might as well do that here in case we can avoid
4853 * bothering the client.
4855 mutex_enter(&cp->cache_lock);
4856 free_on_slab = (kmem_slab_allocated(cp, sp,
4857 callback->kmm_from_buf) == NULL);
4858 mutex_exit(&cp->cache_lock);
4860 if (free_on_slab) {
4861 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_slab);
4862 kmem_slab_free(cp, callback->kmm_to_buf);
4863 kmem_move_end(cp, callback);
4864 return;
4867 if (cp->cache_flags & KMF_BUFTAG) {
4869 * Make kmem_cache_alloc_debug() apply the constructor for us.
4871 if (kmem_cache_alloc_debug(cp, callback->kmm_to_buf,
4872 KM_NOSLEEP, 1, caller()) != 0) {
4873 KMEM_STAT_ADD(kmem_move_stats.kms_alloc_fail);
4874 kmem_move_end(cp, callback);
4875 return;
4877 } else if (cp->cache_constructor != NULL &&
4878 cp->cache_constructor(callback->kmm_to_buf, cp->cache_private,
4879 KM_NOSLEEP) != 0) {
4880 atomic_inc_64(&cp->cache_alloc_fail);
4881 KMEM_STAT_ADD(kmem_move_stats.kms_constructor_fail);
4882 kmem_slab_free(cp, callback->kmm_to_buf);
4883 kmem_move_end(cp, callback);
4884 return;
4887 KMEM_STAT_ADD(kmem_move_stats.kms_callbacks);
4888 KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY),
4889 kmem_move_stats.kms_notify_callbacks);
4890 cp->cache_defrag->kmd_callbacks++;
4891 cp->cache_defrag->kmd_thread = curthread;
4892 cp->cache_defrag->kmd_from_buf = callback->kmm_from_buf;
4893 cp->cache_defrag->kmd_to_buf = callback->kmm_to_buf;
4894 DTRACE_PROBE2(kmem__move__start, kmem_cache_t *, cp, kmem_move_t *,
4895 callback);
4897 response = cp->cache_move(callback->kmm_from_buf,
4898 callback->kmm_to_buf, cp->cache_bufsize, cp->cache_private);
4900 DTRACE_PROBE3(kmem__move__end, kmem_cache_t *, cp, kmem_move_t *,
4901 callback, kmem_cbrc_t, response);
4902 cp->cache_defrag->kmd_thread = NULL;
4903 cp->cache_defrag->kmd_from_buf = NULL;
4904 cp->cache_defrag->kmd_to_buf = NULL;
4906 if (response == KMEM_CBRC_YES) {
4907 KMEM_STAT_ADD(kmem_move_stats.kms_yes);
4908 cp->cache_defrag->kmd_yes++;
4909 kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE);
4910 /* slab safe to access until kmem_move_end() */
4911 if (sp->slab_refcnt == 0)
4912 cp->cache_defrag->kmd_slabs_freed++;
4913 mutex_enter(&cp->cache_lock);
4914 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
4915 mutex_exit(&cp->cache_lock);
4916 kmem_move_end(cp, callback);
4917 return;
4920 switch (response) {
4921 case KMEM_CBRC_NO:
4922 KMEM_STAT_ADD(kmem_move_stats.kms_no);
4923 cp->cache_defrag->kmd_no++;
4924 mutex_enter(&cp->cache_lock);
4925 kmem_slab_move_no(cp, sp, callback->kmm_from_buf);
4926 mutex_exit(&cp->cache_lock);
4927 break;
4928 case KMEM_CBRC_LATER:
4929 KMEM_STAT_ADD(kmem_move_stats.kms_later);
4930 cp->cache_defrag->kmd_later++;
4931 mutex_enter(&cp->cache_lock);
4932 if (!KMEM_SLAB_IS_PARTIAL(sp)) {
4933 mutex_exit(&cp->cache_lock);
4934 break;
4937 if (++sp->slab_later_count >= KMEM_DISBELIEF) {
4938 KMEM_STAT_ADD(kmem_move_stats.kms_disbelief);
4939 kmem_slab_move_no(cp, sp, callback->kmm_from_buf);
4940 } else if (!(sp->slab_flags & KMEM_SLAB_NOMOVE)) {
4941 sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp,
4942 callback->kmm_from_buf);
4944 mutex_exit(&cp->cache_lock);
4945 break;
4946 case KMEM_CBRC_DONT_NEED:
4947 KMEM_STAT_ADD(kmem_move_stats.kms_dont_need);
4948 cp->cache_defrag->kmd_dont_need++;
4949 kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE);
4950 if (sp->slab_refcnt == 0)
4951 cp->cache_defrag->kmd_slabs_freed++;
4952 mutex_enter(&cp->cache_lock);
4953 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
4954 mutex_exit(&cp->cache_lock);
4955 break;
4956 case KMEM_CBRC_DONT_KNOW:
4957 KMEM_STAT_ADD(kmem_move_stats.kms_dont_know);
4958 cp->cache_defrag->kmd_dont_know++;
4959 if (kmem_hunt_mags(cp, callback->kmm_from_buf) != NULL) {
4960 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_mag);
4961 cp->cache_defrag->kmd_hunt_found++;
4962 kmem_slab_free_constructed(cp, callback->kmm_from_buf,
4963 B_TRUE);
4964 if (sp->slab_refcnt == 0)
4965 cp->cache_defrag->kmd_slabs_freed++;
4966 mutex_enter(&cp->cache_lock);
4967 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
4968 mutex_exit(&cp->cache_lock);
4970 break;
4971 default:
4972 panic("'%s' (%p) unexpected move callback response %d\n",
4973 cp->cache_name, (void *)cp, response);
4976 kmem_slab_free_constructed(cp, callback->kmm_to_buf, B_FALSE);
4977 kmem_move_end(cp, callback);
4980 /* Return B_FALSE if there is insufficient memory for the move request. */
4981 static boolean_t
4982 kmem_move_begin(kmem_cache_t *cp, kmem_slab_t *sp, void *buf, int flags)
4984 void *to_buf;
4985 avl_index_t index;
4986 kmem_move_t *callback, *pending;
4987 ulong_t n;
4989 ASSERT(taskq_member(kmem_taskq, curthread));
4990 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
4991 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
4993 callback = kmem_cache_alloc(kmem_move_cache, KM_NOSLEEP);
4994 if (callback == NULL) {
4995 KMEM_STAT_ADD(kmem_move_stats.kms_callback_alloc_fail);
4996 return (B_FALSE);
4999 callback->kmm_from_slab = sp;
5000 callback->kmm_from_buf = buf;
5001 callback->kmm_flags = flags;
5003 mutex_enter(&cp->cache_lock);
5005 n = avl_numnodes(&cp->cache_partial_slabs);
5006 if ((n == 0) || ((n == 1) && !(flags & KMM_DEBUG))) {
5007 mutex_exit(&cp->cache_lock);
5008 kmem_cache_free(kmem_move_cache, callback);
5009 return (B_TRUE); /* there is no need for the move request */
5012 pending = avl_find(&cp->cache_defrag->kmd_moves_pending, buf, &index);
5013 if (pending != NULL) {
5015 * If the move is already pending and we're desperate now,
5016 * update the move flags.
5018 if (flags & KMM_DESPERATE) {
5019 pending->kmm_flags |= KMM_DESPERATE;
5021 mutex_exit(&cp->cache_lock);
5022 KMEM_STAT_ADD(kmem_move_stats.kms_already_pending);
5023 kmem_cache_free(kmem_move_cache, callback);
5024 return (B_TRUE);
5027 to_buf = kmem_slab_alloc_impl(cp, avl_first(&cp->cache_partial_slabs),
5028 B_FALSE);
5029 callback->kmm_to_buf = to_buf;
5030 avl_insert(&cp->cache_defrag->kmd_moves_pending, callback, index);
5032 mutex_exit(&cp->cache_lock);
5034 if (!taskq_dispatch(kmem_move_taskq, (task_func_t *)kmem_move_buffer,
5035 callback, TQ_NOSLEEP)) {
5036 KMEM_STAT_ADD(kmem_move_stats.kms_callback_taskq_fail);
5037 mutex_enter(&cp->cache_lock);
5038 avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
5039 mutex_exit(&cp->cache_lock);
5040 kmem_slab_free(cp, to_buf);
5041 kmem_cache_free(kmem_move_cache, callback);
5042 return (B_FALSE);
5045 return (B_TRUE);
5048 static void
5049 kmem_move_end(kmem_cache_t *cp, kmem_move_t *callback)
5051 avl_index_t index;
5053 ASSERT(cp->cache_defrag != NULL);
5054 ASSERT(taskq_member(kmem_move_taskq, curthread));
5055 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
5057 mutex_enter(&cp->cache_lock);
5058 VERIFY(avl_find(&cp->cache_defrag->kmd_moves_pending,
5059 callback->kmm_from_buf, &index) != NULL);
5060 avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
5061 if (avl_is_empty(&cp->cache_defrag->kmd_moves_pending)) {
5062 list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
5063 kmem_slab_t *sp;
5066 * The last pending move completed. Release all slabs from the
5067 * front of the dead list except for any slab at the tail that
5068 * needs to be released from the context of kmem_move_buffers().
5069 * kmem deferred unmapping the buffers on these slabs in order
5070 * to guarantee that buffers passed to the move callback have
5071 * been touched only by kmem or by the client itself.
5073 while ((sp = list_remove_head(deadlist)) != NULL) {
5074 if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) {
5075 list_insert_tail(deadlist, sp);
5076 break;
5078 cp->cache_defrag->kmd_deadcount--;
5079 cp->cache_slab_destroy++;
5080 mutex_exit(&cp->cache_lock);
5081 kmem_slab_destroy(cp, sp);
5082 KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed);
5083 mutex_enter(&cp->cache_lock);
5086 mutex_exit(&cp->cache_lock);
5087 kmem_cache_free(kmem_move_cache, callback);
5091 * Move buffers from least used slabs first by scanning backwards from the end
5092 * of the partial slab list. Scan at most max_scan candidate slabs and move
5093 * buffers from at most max_slabs slabs (0 for all partial slabs in both cases).
5094 * If desperate to reclaim memory, move buffers from any partial slab, otherwise
5095 * skip slabs with a ratio of allocated buffers at or above the current
5096 * threshold. Return the number of unskipped slabs (at most max_slabs, -1 if the
5097 * scan is aborted) so that the caller can adjust the reclaimability threshold
5098 * depending on how many reclaimable slabs it finds.
5100 * kmem_move_buffers() drops and reacquires cache_lock every time it issues a
5101 * move request, since it is not valid for kmem_move_begin() to call
5102 * kmem_cache_alloc() or taskq_dispatch() with cache_lock held.
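 *
 * For reference, the two call sites later in this file illustrate the
 * parameter conventions described above: kmem_cache_defrag() passes 0 for
 * max_slabs to consider every partial slab (n is the partial-slab count),
 * while kmem_cache_scan() bounds both the scan and the number of slabs moved:
 *
 *	(void) kmem_move_buffers(cp, n, 0, KMM_DESPERATE);
 *	slabs_found = kmem_move_buffers(cp, kmem_reclaim_scan_range,
 *	    kmem_reclaim_max_slabs, 0);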
5104 static int
5105 kmem_move_buffers(kmem_cache_t *cp, size_t max_scan, size_t max_slabs,
5106 int flags)
5108 kmem_slab_t *sp;
5109 void *buf;
5110 int i, j; /* slab index, buffer index */
5111 int s; /* reclaimable slabs */
5112 int b; /* allocated (movable) buffers on reclaimable slab */
5113 boolean_t success;
5114 int refcnt;
5115 int nomove;
5117 ASSERT(taskq_member(kmem_taskq, curthread));
5118 ASSERT(MUTEX_HELD(&cp->cache_lock));
5119 ASSERT(kmem_move_cache != NULL);
5120 ASSERT(cp->cache_move != NULL && cp->cache_defrag != NULL);
5121 ASSERT((flags & KMM_DEBUG) ? !avl_is_empty(&cp->cache_partial_slabs) :
5122 avl_numnodes(&cp->cache_partial_slabs) > 1);
5124 if (kmem_move_blocked) {
5125 return (0);
5128 if (kmem_move_fulltilt) {
5129 flags |= KMM_DESPERATE;
5132 if (max_scan == 0 || (flags & KMM_DESPERATE)) {
5134 * Scan as many slabs as needed to find the desired number of
5135 * candidate slabs.
5137 max_scan = (size_t)-1;
5140 if (max_slabs == 0 || (flags & KMM_DESPERATE)) {
5141 /* Find as many candidate slabs as possible. */
5142 max_slabs = (size_t)-1;
5145 sp = avl_last(&cp->cache_partial_slabs);
5146 ASSERT(KMEM_SLAB_IS_PARTIAL(sp));
5147 for (i = 0, s = 0; (i < max_scan) && (s < max_slabs) && (sp != NULL) &&
5148 ((sp != avl_first(&cp->cache_partial_slabs)) ||
5149 (flags & KMM_DEBUG));
5150 sp = AVL_PREV(&cp->cache_partial_slabs, sp), i++) {
5152 if (!kmem_slab_is_reclaimable(cp, sp, flags)) {
5153 continue;
5155 s++;
5157 /* Look for allocated buffers to move. */
5158 for (j = 0, b = 0, buf = sp->slab_base;
5159 (j < sp->slab_chunks) && (b < sp->slab_refcnt);
5160 buf = (((char *)buf) + cp->cache_chunksize), j++) {
5162 if (kmem_slab_allocated(cp, sp, buf) == NULL) {
5163 continue;
5166 b++;
5169 * Prevent the slab from being destroyed while we drop
5170 * cache_lock and while the pending move is not yet
5171 * registered. Flag the pending move while
5172 * kmd_moves_pending may still be empty, since we can't
5173 * yet rely on a non-zero pending move count to prevent
5174 * the slab from being destroyed.
5176 ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
5177 sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
5179 * Recheck refcnt and nomove after reacquiring the lock,
5180 * since these control the order of partial slabs, and
5181 * we want to know if we can pick up the scan where we
5182 * left off.
5184 refcnt = sp->slab_refcnt;
5185 nomove = (sp->slab_flags & KMEM_SLAB_NOMOVE);
5186 mutex_exit(&cp->cache_lock);
5188 success = kmem_move_begin(cp, sp, buf, flags);
5191 * Now, before the lock is reacquired, kmem could
5192 * process all pending move requests and purge the
5193 * deadlist, so that upon reacquiring the lock, sp has
5194 * been remapped. Or, the client may free all the
5195 * objects on the slab while the pending moves are still
5196 * on the taskq. Therefore, the KMEM_SLAB_MOVE_PENDING
5197 * flag causes the slab to be put at the end of the
5198 * deadlist and prevents it from being destroyed, since
5199 * we plan to destroy it here after reacquiring the
5200 * lock.
5202 mutex_enter(&cp->cache_lock);
5203 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
5204 sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;
5206 if (sp->slab_refcnt == 0) {
5207 list_t *deadlist =
5208 &cp->cache_defrag->kmd_deadlist;
5209 list_remove(deadlist, sp);
5211 if (!avl_is_empty(
5212 &cp->cache_defrag->kmd_moves_pending)) {
5214 * A pending move makes it unsafe to
5215 * destroy the slab, because even though
5216 * the move is no longer needed, the
5217 * context where that is determined
5218 * requires the slab to exist.
5219 * Fortunately, a pending move also
5220 * means we don't need to destroy the
5221 * slab here, since it will get
5222 * destroyed along with any other slabs
5223 * on the deadlist after the last
5224 * pending move completes.
5226 list_insert_head(deadlist, sp);
5227 KMEM_STAT_ADD(kmem_move_stats.
5228 kms_endscan_slab_dead);
5229 return (-1);
5233 * Destroy the slab now if it was completely
5234 * freed while we dropped cache_lock and there
5235 * are no pending moves. Since slab_refcnt
5236 * cannot change once it reaches zero, no new
5237 * pending moves from that slab are possible.
5239 cp->cache_defrag->kmd_deadcount--;
5240 cp->cache_slab_destroy++;
5241 mutex_exit(&cp->cache_lock);
5242 kmem_slab_destroy(cp, sp);
5243 KMEM_STAT_ADD(kmem_move_stats.
5244 kms_dead_slabs_freed);
5245 KMEM_STAT_ADD(kmem_move_stats.
5246 kms_endscan_slab_destroyed);
5247 mutex_enter(&cp->cache_lock);
5249 * Since we can't pick up the scan where we left
5250 * off, abort the scan and say nothing about the
5251 * number of reclaimable slabs.
5253 return (-1);
5256 if (!success) {
5258 * Abort the scan if there is not enough memory
5259 * for the request and say nothing about the
5260 * number of reclaimable slabs.
5262 KMEM_STAT_COND_ADD(s < max_slabs,
5263 kmem_move_stats.kms_endscan_nomem);
5264 return (-1);
5268 * The slab's position changed while the lock was
5269 * dropped, so we don't know where we are in the
5270 * sequence any more.
5272 if (sp->slab_refcnt != refcnt) {
5274 * If this is a KMM_DEBUG move, the slab_refcnt
5275 * may have changed because we allocated a
5276 * destination buffer on the same slab. In that
5277 * case, we're not interested in counting it.
5279 KMEM_STAT_COND_ADD(!(flags & KMM_DEBUG) &&
5280 (s < max_slabs),
5281 kmem_move_stats.kms_endscan_refcnt_changed);
5282 return (-1);
5284 if ((sp->slab_flags & KMEM_SLAB_NOMOVE) != nomove) {
5285 KMEM_STAT_COND_ADD(s < max_slabs,
5286 kmem_move_stats.kms_endscan_nomove_changed);
5287 return (-1);
5291 * Generating a move request allocates a destination
5292 * buffer from the slab layer, bumping the first partial
5293 * slab if it is completely allocated. If the current
5294 * slab becomes the first partial slab as a result, we
5295 * can't continue to scan backwards.
5297 * If this is a KMM_DEBUG move and we allocated the
5298 * destination buffer from the last partial slab, then
5299 * the buffer we're moving is on the same slab and our
5300 * slab_refcnt has changed, causing us to return before
5301 * reaching here if there are no partial slabs left.
5303 ASSERT(!avl_is_empty(&cp->cache_partial_slabs));
5304 if (sp == avl_first(&cp->cache_partial_slabs)) {
5306 * We're not interested in a second KMM_DEBUG
5307 * move.
5309 goto end_scan;
5313 end_scan:
5315 KMEM_STAT_COND_ADD(!(flags & KMM_DEBUG) &&
5316 (s < max_slabs) &&
5317 (sp == avl_first(&cp->cache_partial_slabs)),
5318 kmem_move_stats.kms_endscan_freelist);
5320 return (s);
5323 typedef struct kmem_move_notify_args {
5324 kmem_cache_t *kmna_cache;
5325 void *kmna_buf;
5326 } kmem_move_notify_args_t;
5328 static void
5329 kmem_cache_move_notify_task(void *arg)
5331 kmem_move_notify_args_t *args = arg;
5332 kmem_cache_t *cp = args->kmna_cache;
5333 void *buf = args->kmna_buf;
5334 kmem_slab_t *sp;
5336 ASSERT(taskq_member(kmem_taskq, curthread));
5337 ASSERT(list_link_active(&cp->cache_link));
5339 kmem_free(args, sizeof (kmem_move_notify_args_t));
5340 mutex_enter(&cp->cache_lock);
5341 sp = kmem_slab_allocated(cp, NULL, buf);
5343 /* Ignore the notification if the buffer is no longer allocated. */
5344 if (sp == NULL) {
5345 mutex_exit(&cp->cache_lock);
5346 return;
5349 /* Ignore the notification if there's no reason to move the buffer. */
5350 if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
5352 * The notification has not been ignored so far. Ignore it now
5353 * unless the slab was marked by an earlier refusal to move a
5354 * buffer.
5356 if (!(sp->slab_flags & KMEM_SLAB_NOMOVE) &&
5357 (sp->slab_later_count == 0)) {
5358 mutex_exit(&cp->cache_lock);
5359 return;
5362 kmem_slab_move_yes(cp, sp, buf);
5363 ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
5364 sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
5365 mutex_exit(&cp->cache_lock);
5366 /* see kmem_move_buffers() about dropping the lock */
5367 (void) kmem_move_begin(cp, sp, buf, KMM_NOTIFY);
5368 mutex_enter(&cp->cache_lock);
5369 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
5370 sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;
5371 if (sp->slab_refcnt == 0) {
5372 list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
5373 list_remove(deadlist, sp);
5375 if (!avl_is_empty(
5376 &cp->cache_defrag->kmd_moves_pending)) {
5377 list_insert_head(deadlist, sp);
5378 mutex_exit(&cp->cache_lock);
5379 KMEM_STAT_ADD(kmem_move_stats.
5380 kms_notify_slab_dead);
5381 return;
5384 cp->cache_defrag->kmd_deadcount--;
5385 cp->cache_slab_destroy++;
5386 mutex_exit(&cp->cache_lock);
5387 kmem_slab_destroy(cp, sp);
5388 KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed);
5389 KMEM_STAT_ADD(kmem_move_stats.
5390 kms_notify_slab_destroyed);
5391 return;
5393 } else {
5394 kmem_slab_move_yes(cp, sp, buf);
5396 mutex_exit(&cp->cache_lock);
5399 void
5400 kmem_cache_move_notify(kmem_cache_t *cp, void *buf)
5402 kmem_move_notify_args_t *args;
5404 KMEM_STAT_ADD(kmem_move_stats.kms_notify);
5405 args = kmem_alloc(sizeof (kmem_move_notify_args_t), KM_NOSLEEP);
5406 if (args != NULL) {
5407 args->kmna_cache = cp;
5408 args->kmna_buf = buf;
5409 if (!taskq_dispatch(kmem_taskq,
5410 (task_func_t *)kmem_cache_move_notify_task, args,
5411 TQ_NOSLEEP))
5412 kmem_free(args, sizeof (kmem_move_notify_args_t));
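 *
 * A short usage sketch (hypothetical client names as in the callback example
 * earlier): a client whose move callback returned KMEM_CBRC_LATER can notify
 * kmem once the object becomes movable, which retries the move through the
 * taskq dispatch above:
 *
 *	mutex_exit(&op->o_lock);
 *	kmem_cache_move_notify(object_cache, op);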
5416 static void
5417 kmem_cache_defrag(kmem_cache_t *cp)
5419 size_t n;
5421 ASSERT(cp->cache_defrag != NULL);
5423 mutex_enter(&cp->cache_lock);
5424 n = avl_numnodes(&cp->cache_partial_slabs);
5425 if (n > 1) {
5426 /* kmem_move_buffers() drops and reacquires cache_lock */
5427 KMEM_STAT_ADD(kmem_move_stats.kms_defrags);
5428 cp->cache_defrag->kmd_defrags++;
5429 (void) kmem_move_buffers(cp, n, 0, KMM_DESPERATE);
5431 mutex_exit(&cp->cache_lock);
5434 /* Is this cache above the fragmentation threshold? */
5435 static boolean_t
5436 kmem_cache_frag_threshold(kmem_cache_t *cp, uint64_t nfree)
5439 * nfree / cp->cache_buftotal  >  kmem_frag_numer / kmem_frag_denom
5443 return ((nfree * kmem_frag_denom) >
5444 (cp->cache_buftotal * kmem_frag_numer));
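 *
 * A worked example with hypothetical tunable values kmem_frag_numer = 1 and
 * kmem_frag_denom = 8: a cache with cache_buftotal = 800 buffers crosses the
 * threshold once more than 100 of them are free, since nfree * 8 > 800 * 1
 * first holds at nfree = 101. Cross-multiplying, as the return expression
 * above does, keeps the comparison in integer arithmetic.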
5447 static boolean_t
5448 kmem_cache_is_fragmented(kmem_cache_t *cp, boolean_t *doreap)
5450 boolean_t fragmented;
5451 uint64_t nfree;
5453 ASSERT(MUTEX_HELD(&cp->cache_lock));
5454 *doreap = B_FALSE;
5456 if (kmem_move_fulltilt) {
5457 if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
5458 return (B_TRUE);
5460 } else {
5461 if ((cp->cache_complete_slab_count + avl_numnodes(
5462 &cp->cache_partial_slabs)) < kmem_frag_minslabs) {
5463 return (B_FALSE);
5467 nfree = cp->cache_bufslab;
5468 fragmented = ((avl_numnodes(&cp->cache_partial_slabs) > 1) &&
5469 kmem_cache_frag_threshold(cp, nfree));
5472 * Free buffers in the magazine layer appear allocated from the point of
5473 * view of the slab layer. We want to know if the slab layer would
5474 * appear fragmented if we included free buffers from magazines that
5475 * have fallen out of the working set.
5477 if (!fragmented) {
5478 long reap;
5480 mutex_enter(&cp->cache_depot_lock);
5481 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
5482 reap = MIN(reap, cp->cache_full.ml_total);
5483 mutex_exit(&cp->cache_depot_lock);
5485 nfree += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
5486 if (kmem_cache_frag_threshold(cp, nfree)) {
5487 *doreap = B_TRUE;
5491 return (fragmented);
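 *
 * A worked sketch of the depot adjustment above, using hypothetical numbers:
 * if reap = MIN(ml_reaplimit, ml_min) works out to 3 full magazines (and
 * ml_total >= 3), and each magazine holds mt_magsize = 15 buffers, then
 * nfree is credited with 3 * 15 = 45 additional buffers before the threshold
 * is rechecked; crossing it sets *doreap so the caller reaps the depot
 * working set.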
5494 /* Called periodically from kmem_taskq */
5495 static void
5496 kmem_cache_scan(kmem_cache_t *cp)
5498 boolean_t reap = B_FALSE;
5499 kmem_defrag_t *kmd;
5501 ASSERT(taskq_member(kmem_taskq, curthread));
5503 mutex_enter(&cp->cache_lock);
5505 kmd = cp->cache_defrag;
5506 if (kmd->kmd_consolidate > 0) {
5507 kmd->kmd_consolidate--;
5508 mutex_exit(&cp->cache_lock);
5509 kmem_cache_reap(cp);
5510 return;
5513 if (kmem_cache_is_fragmented(cp, &reap)) {
5514 int slabs_found;	/* signed: kmem_move_buffers() may return -1 */
5517 * Consolidate reclaimable slabs from the end of the partial
5518 * slab list (scan at most kmem_reclaim_scan_range slabs to find
5519 * reclaimable slabs). Keep track of how many candidate slabs we
5520 * looked for and how many we actually found so we can adjust
5521 * the definition of a candidate slab if we're having trouble
5522 * finding them.
5524 * kmem_move_buffers() drops and reacquires cache_lock.
5526 KMEM_STAT_ADD(kmem_move_stats.kms_scans);
5527 kmd->kmd_scans++;
5528 slabs_found = kmem_move_buffers(cp, kmem_reclaim_scan_range,
5529 kmem_reclaim_max_slabs, 0);
5530 if (slabs_found >= 0) {
5531 kmd->kmd_slabs_sought += kmem_reclaim_max_slabs;
5532 kmd->kmd_slabs_found += slabs_found;
5535 if (++kmd->kmd_tries >= kmem_reclaim_scan_range) {
5536 kmd->kmd_tries = 0;
5539 * If we had difficulty finding candidate slabs in
5540 * previous scans, adjust the threshold so that
5541 * candidates are easier to find.
5543 if (kmd->kmd_slabs_found == kmd->kmd_slabs_sought) {
5544 kmem_adjust_reclaim_threshold(kmd, -1);
5545 } else if ((kmd->kmd_slabs_found * 2) <
5546 kmd->kmd_slabs_sought) {
5547 kmem_adjust_reclaim_threshold(kmd, 1);
5549 kmd->kmd_slabs_sought = 0;
5550 kmd->kmd_slabs_found = 0;
5552 } else {
5553 kmem_reset_reclaim_threshold(cp->cache_defrag);
5554 #ifdef DEBUG
5555 if (!avl_is_empty(&cp->cache_partial_slabs)) {
5557 * In a debug kernel we want the consolidator to
5558 * run occasionally even when there is plenty of
5559 * memory.
5561 uint16_t debug_rand;
5563 (void) random_get_bytes((uint8_t *)&debug_rand, 2);
5564 if (!kmem_move_noreap &&
5565 ((debug_rand % kmem_mtb_reap) == 0)) {
5566 mutex_exit(&cp->cache_lock);
5567 KMEM_STAT_ADD(kmem_move_stats.kms_debug_reaps);
5568 kmem_cache_reap(cp);
5569 return;
5570 } else if ((debug_rand % kmem_mtb_move) == 0) {
5571 KMEM_STAT_ADD(kmem_move_stats.kms_scans);
5572 KMEM_STAT_ADD(kmem_move_stats.kms_debug_scans);
5573 kmd->kmd_scans++;
5574 (void) kmem_move_buffers(cp,
5575 kmem_reclaim_scan_range, 1, KMM_DEBUG);
5578 #endif /* DEBUG */
5581 mutex_exit(&cp->cache_lock);
5583 if (reap) {
5584 KMEM_STAT_ADD(kmem_move_stats.kms_scan_depot_ws_reaps);
5585 kmem_depot_ws_reap(cp);