4 * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 * or http://www.opensolaris.org/os/licensing.
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
13 * When distributing Covered Code, include this CDDL HEADER in each
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
16 * fields enclosed by brackets "[]" replaced with your own identifying
17 * information: Portions Copyright [yyyy] [name of copyright owner]
23 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
24 * Copyright (c) 2014, Joyent, Inc. All rights reserved.
30 * The GLDv3 framework locking - The MAC layer
31 * --------------------------------------------
33 * The MAC layer is central to the GLD framework and can provide the locking
34 * framework needed for itself and for the use of MAC clients. MAC end points
35 * are fairly disjoint and don't share a lot of state. So a coarse grained
36 * multi-threading scheme is to single thread all create/modify/delete or set
37 * type of control operations on a per mac end point while allowing data threads
40 * Control operations (set) that modify a mac end point are always serialized on
41 * a per mac end point basis. We have at most one such thread per mac end point
44 * All other operations that are not serialized are essentially multi-threaded.
45 * For example, a control operation (get) like getting statistics, which may not
46 * care about reading values atomically, or data threads sending or receiving
47 * data. Mostly these types of operations don't modify the control state. Any
48 * state these operations care about is protected using traditional locks.
50 * The perimeter only serializes serial operations. It does not imply there
51 * aren't any other concurrent operations. However a serialized operation may
52 * sometimes need to make sure it is the only thread. In this case it needs
53 * to use reference counting mechanisms to cv_wait until any current data
56 * The mac layer itself does not hold any locks across a call to another layer.
57 * The perimeter is however held across a down call to the driver to make the
58 * whole control operation atomic with respect to other control operations.
59 * Also the data path and get type control operations may proceed concurrently.
60 * These operations synchronize with the single serial operation on a given mac
61 * end point using regular locks. The perimeter ensures that conflicting
62 * operations like say a mac_multicast_add and a mac_multicast_remove on the
63 * same mac end point don't interfere with each other and also ensures that the
64 * changes in the mac layer and the call to the underlying driver to say add a
65 * multicast address are done atomically without interference from a thread
66 * trying to delete the same address.
68 * For example, consider
71 * mac_perimeter_enter(); serialize all control operations
73 * grab list lock protect against access by data threads
77 * call driver's mi_multicst
79 * mac_perimeter_exit();
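 *
 * A rough sketch of the same sequence as code (simplified; the real control
 * operations in this file, e.g. mac_multicast_add(), do more checking and
 * error handling than shown here):
 *
 *	i_mac_perim_enter(mip);				serialize control ops
 *	rw_enter(&mip->mi_rw_lock, RW_WRITER);		lock out data threads
 *	    ... update the mac layer multicast list ...
 *	rw_exit(&mip->mi_rw_lock);
 *	err = mip->mi_multicst(mip->mi_driver, B_TRUE, addr);
 *	i_mac_perim_exit(mip);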
82 * To lessen the number of serialization locks and simplify the lock hierarchy,
83 * we serialize all the control operations on a per mac end point basis by using a
84 * single serialization lock called the perimeter. We allow recursive entry into
85 * the perimeter to facilitate use of this mechanism by both the mac client and
86 * the MAC layer itself.
88 * MAC client means an entity that does an operation on a mac handle
89 * obtained from a mac_open/mac_client_open. Similarly MAC driver means
90 * an entity that does an operation on a mac handle obtained from a
91 * mac_register. An entity could be both client and driver but on different
92 * handles, e.g. aggr, and should only make the corresponding mac interface calls
93 * i.e. mac driver interface or mac client interface as appropriate for that
99 * R1. The lock order of upcall threads is naturally opposite to downcall
100 * threads. Hence upcalls must not hold any locks across layers for fear of
101 * recursive lock enter and lock order violation. This applies to all layers.
103 * R2. The perimeter is just another lock. Since it is held in the down
104 * direction, acquiring the perimeter in an upcall is prohibited as it would
105 * cause a deadlock. This applies to all layers.
107 * Note that upcalls that need to grab the mac perimeter (for example
108 * mac_notify upcalls) can still achieve that by posting the request to a
109 * thread, which can then grab all the required perimeters and locks in the
110 * right global order. Note that in the above example the mac layer itself
111 * won't grab the mac perimeter in the mac_notify upcall; instead the upcall
112 * to the client must do that. Please see the aggr code for an example.
117 * R3. A MAC client may use the MAC provided perimeter facility to serialize
118 * control operations on a per mac end point. It does this by acquiring
119 * and holding the perimeter across a sequence of calls to the mac layer.
120 * This ensures atomicity across the entire block of mac calls. In this
121 * model the MAC client must not hold any client locks across the calls to
122 * the mac layer. This model is the preferred solution.
124 * R4. However if a MAC client has a lot of global state across all mac end
125 * points the per mac end point serialization may not be sufficient. In this
126 * case the client may choose to use global locks or use its own serialization.
127 * To avoid deadlocks, these client layer locks held across the mac calls
128 * in the control path must never be acquired by the data path for the reason
131 * (Assume that a control operation that holds a client lock blocks in the
132 * mac layer waiting for upcall reference counts to drop to zero. If an upcall
133 * data thread that holds this reference count tries to acquire the same
134 * client lock subsequently, it will deadlock).
136 * A MAC client may follow either the R3 model or the R4 model, but can't
137 * mix both. In the former, the hierarchy is Perim -> client locks, but in
138 * the latter it is client locks -> Perim.
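 *
 * As a sketch of the two models from the client's point of view (client_lock
 * below is a hypothetical lock owned by the client, not something defined in
 * this file):
 *
 *	R3 model (Perim -> client locks):
 *		mac_perim_enter_by_mh(mh, &mph);	no client locks held
 *		... sequence of mac_*() calls ...	across this block
 *		mac_perim_exit(mph);
 *
 *	R4 model (client locks -> Perim):
 *		mutex_enter(&client_lock);		client's own serialization
 *		... mac_*() calls; the mac layer takes the perimeter
 *		    internally for each call ...
 *		mutex_exit(&client_lock);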
140 * R5. MAC clients must make MAC calls (excluding data calls) in a cv_wait'able
141 * context since they may block while trying to acquire the perimeter.
142 * In addition some calls may block waiting for upcall refcnts to come down to
145 * R6. MAC clients must make sure that they are single threaded and all threads
146 * from the top (in particular data threads) have finished before calling
147 * mac_client_close. The MAC framework does not track the number of client
148 * threads using the mac client handle. Also mac clients must make sure
149 * they have undone all the control operations before calling mac_client_close.
150 * For example mac_unicast_remove/mac_multicast_remove to undo the corresponding
151 * mac_unicast_add/mac_multicast_add.
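 *
 * A hedged sketch of the teardown order R6 asks for (client side; mch, muh
 * and mcast_addr are handles/values the client obtained earlier, error
 * handling elided):
 *
 *	... quiesce the client's own data and control threads ...
 *	mac_multicast_remove(mch, mcast_addr);		undo mac_multicast_add
 *	(void) mac_unicast_remove(mch, muh);		undo mac_unicast_add
 *	mac_client_close(mch, 0);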
153 * MAC framework rules
154 * -------------------
156 * R7. The mac layer itself must not hold any mac layer locks (except the mac
157 * perimeter) across a call to any other layer from the mac layer. The call to
158 * any other layer could be via mi_* entry points, classifier entry points into
159 * the driver or via upcall pointers into layers above. The mac perimeter may
160 * be acquired or held only in the down direction, e.g. when calling into
161 * a mi_* driver entry point to provide atomicity of the operation.
163 * R8. Since it is not guaranteed (see R14) that drivers won't hold locks across
164 * mac driver interfaces, the MAC layer must provide a cut out for control
165 * interfaces like upcall notifications and start them in a separate thread.
167 * R9. Note that locking order also implies a plumbing order. For example
168 * VNICs are allowed to be created over aggrs, but not vice-versa. An attempt
169 * to plumb in any other order must be failed at mac_open time, otherwise it
170 * could lead to deadlocks due to inverse locking order.
172 * R10. MAC driver interfaces must not block since the driver could call them
173 * in interrupt context.
175 * R11. Walkers must preferably not hold any locks while calling walker
176 * callbacks. Instead these can operate on reference counts. In simple
177 * callbacks it may be ok to hold a lock and call the callbacks, but this is
178 * harder to maintain in the general case of arbitrary callbacks.
180 * R12. The MAC layer must protect upcall notification callbacks using reference
181 * counts rather than holding locks across the callbacks.
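 *
 * A sketch of the walker-count pattern that R11/R12 describe (simplified;
 * the actual list handling is in mac_callback_add()/mac_callback_remove()
 * and their callers below):
 *
 *	mutex_enter(mcbi->mcbi_lockp);
 *	mcbi->mcbi_walker_cnt++;			mark a walker present
 *	mutex_exit(mcbi->mcbi_lockp);
 *	for (mcb = *mcb_head; mcb != NULL; mcb = mcb->mcb_nextp)
 *		... invoke the callback without holding the list lock ...
 *	mutex_enter(mcbi->mcbi_lockp);
 *	if (--mcbi->mcbi_walker_cnt == 0 && mcbi->mcbi_del_cnt != 0)
 *		... last walker frees the logically deleted entries ...
 *	mutex_exit(mcbi->mcbi_lockp);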
183 * R13. Given the variety of drivers, it is preferable if the MAC layer can make
184 * sure that any pointers (such as mac ring pointers) it passes to the driver
185 * remain valid until mac unregister time. Currently the mac layer achieves
186 * this by using generation numbers for rings and freeing the mac rings only
187 * at unregister time. The MAC layer must provide a layer of indirection and
188 * must not expose underlying driver rings or driver data structures/pointers
189 * directly to MAC clients.
194 * R14. It would be preferable if MAC drivers don't hold any locks across any
195 * mac call. However at a minimum they must not hold any locks across data
196 * upcalls. They must also make sure that all references to mac data structures
197 * are cleaned up and that it is single threaded at mac_unregister time.
199 * R15. MAC driver interfaces don't block and so the action may be done
200 * asynchronously in a separate thread as for example handling notifications.
201 * The driver must not assume that the action is complete when the call
204 * R16. Drivers must maintain a generation number per Rx ring, and pass it
205 * back to mac_rx_ring(). They are expected to increment the generation
206 * number whenever the ring's stop routine is invoked.
207 * See comments in mac_rx_ring().
209 * R17. Similarly, mi_stop is another synchronization point and the driver must
210 * ensure that all upcalls are done and there won't be any future upcall
211 * before returning from mi_stop.
213 * R18. The driver may assume that all set/modify control operations via
214 * the mi_* entry points are single threaded on a per mac end point.
216 * Lock and Perimeter hierarchy scenarios
217 * ---------------------------------------
219 * i_mac_impl_lock -> mi_rw_lock -> srs_lock -> s_ring_lock[i_mac_tx_srs_notify]
221 * ft_lock -> fe_lock [mac_flow_lookup]
223 * mi_rw_lock -> fe_lock [mac_bcast_send]
225 * srs_lock -> mac_bw_lock [mac_rx_srs_drain_bw]
227 * cpu_lock -> mac_srs_g_lock -> srs_lock -> s_ring_lock [mac_walk_srs_and_bind]
229 * i_dls_devnet_lock -> mac layer locks [dls_devnet_rename]
231 * Perimeters are ordered P1 -> P2 -> P3 from top to bottom in order of mac
232 * client to driver. In the case of clients that explicitly use the mac provided
233 * perimeter mechanism for its serialization, the hierarchy is
234 * Perimeter -> mac layer locks, since the client never holds any locks across
235 * the mac calls. In the case of clients that use their own locks the hierarchy
236 * is Client locks -> Mac Perim -> Mac layer locks. The client never explicitly
237 * calls mac_perim_enter/exit in this case.
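 *
 * For example, with a vnic (P1) created over an aggr (P2) that is in turn
 * built over a NIC driver's mac (P3), a serialized control operation takes
 * the perimeters strictly top-down, never in the reverse order:
 *
 *	P1 (vnic) -> P2 (aggr) -> P3 (NIC mac) -> mi_* driver entry point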
239 * Subflow creation rules
240 * ---------------------------
241 * o In case of a user specified cpulist present on the underlying link and flows,
242 * the flow's cpulist must be a subset of the underlying link's.
243 * o In case of a user specified fanout mode present on link and flow, the
244 * subflow fanout count has to be less than or equal to that of the
245 * underlying link. The cpu-bindings for the subflows will be a subset of
246 * the underlying link.
247 * o If no cpulist is specified on either the underlying link or the flow, the
248 * underlying link relies on a MAC tunable to provide out of box fanout.
249 * The subflow will have no cpulist (the subflow will be unbound)
250 * o If no cpulist is specified on the underlying link, a subflow can
251 * carry either a user-specified cpulist or fanout count. The cpu-bindings
252 * for the subflow will not adhere to the restriction that they need to be a subset
253 * of the underlying link.
254 * o In the case where the underlying link carries either a user specified
255 * cpulist or fanout mode and the subflow is unspecified, the subflow will be
257 * o While creating unbound subflows, bandwidth mode changes attempt to
258 * figure out the right fanout count. In such cases the fanout count will override
259 * the unbound cpu-binding behavior.
260 * o In addition to this, while cycling between flow and link properties, we
261 * impose a restriction that if a link property has a subflow with
262 * user-specified attributes, we will not allow changing the link property.
263 * The administrator needs to reset all the user specified properties for the
264 * subflows before attempting a link property change.
265 * Some of the above rules can be overridden by specifying additional command
266 * line options while creating or modifying link or subflow properties.
271 * For information on the datapath, the world of soft rings, hardware rings, how
272 * it is structured, and the path of an mblk_t between a driver and a mac
273 * client, see mac_sched.c.
276 #include <sys/types.h>
277 #include <sys/conf.h>
278 #include <sys/id_space.h>
279 #include <sys/esunddi.h>
280 #include <sys/stat.h>
281 #include <sys/mkdev.h>
282 #include <sys/stream.h>
283 #include <sys/strsun.h>
284 #include <sys/strsubr.h>
285 #include <sys/dlpi.h>
286 #include <sys/list.h>
287 #include <sys/modhash.h>
288 #include <sys/mac_provider.h>
289 #include <sys/mac_client_impl.h>
290 #include <sys/mac_soft_ring.h>
291 #include <sys/mac_stat.h>
292 #include <sys/mac_impl.h>
296 #include <sys/modctl.h>
297 #include <sys/fs/dv_node.h>
298 #include <sys/thread.h>
299 #include <sys/proc.h>
300 #include <sys/callb.h>
301 #include <sys/cpuvar.h>
302 #include <sys/atomic.h>
303 #include <sys/bitmap.h>
305 #include <sys/mac_flow.h>
306 #include <sys/ddi_intr_impl.h>
307 #include <sys/disp.h>
309 #include <sys/vnic.h>
310 #include <sys/vnic_impl.h>
311 #include <sys/vlan.h>
313 #include <inet/ip6.h>
314 #include <sys/exacct.h>
315 #include <sys/exacct_impl.h>
317 #include <sys/ethernet.h>
318 #include <sys/pool.h>
319 #include <sys/pool_pset.h>
320 #include <sys/cpupart.h>
321 #include <inet/wifi_ioctl.h>
324 #define IMPL_HASHSZ 67 /* prime */
326 kmem_cache_t *i_mac_impl_cachep;
327 mod_hash_t *i_mac_impl_hash;
328 krwlock_t i_mac_impl_lock;
329 uint_t i_mac_impl_count;
330 static kmem_cache_t *mac_ring_cache;
331 static id_space_t *minor_ids;
332 static uint32_t minor_count;
333 static pool_event_cb_t mac_pool_event_reg;
336 * Logging stuff. Perhaps mac_logging_interval could be broken into
337 * mac_flow_log_interval and mac_link_log_interval if we want to be
338 * able to schedule them differently.
340 uint_t mac_logging_interval;
341 boolean_t mac_flow_log_enable;
342 boolean_t mac_link_log_enable;
343 timeout_id_t mac_logging_timer;
345 /* for debugging, see MAC_DBG_PRT() in mac_impl.h */
348 #define MACTYPE_KMODDIR "mac"
349 #define MACTYPE_HASHSZ 67
350 static mod_hash_t *i_mactype_hash;
352 * i_mactype_lock synchronizes threads that obtain references to mactype_t
353 * structures through i_mactype_getplugin().
355 static kmutex_t i_mactype_lock;
360 * Number of per cpu locks per mac_client_impl_t. Used by the transmit side
361 * in mac_tx to reduce lock contention. This is sized at boot time in mac_init.
362 * mac_tx_percpu_cnt_max is settable in /etc/system and must be a power of 2.
363 * Per cpu locks may be disabled by setting mac_tx_percpu_cnt_max to 1.
365 int mac_tx_percpu_cnt;
366 int mac_tx_percpu_cnt_max = 128;
369 * Call back functions for the bridge module. These are guaranteed to be valid
370 * when holding a reference on a link or when holding mip->mi_bridge_lock and
371 * mi_bridge_link is non-NULL.
373 mac_bridge_tx_t mac_bridge_tx_cb;
374 mac_bridge_rx_t mac_bridge_rx_cb;
375 mac_bridge_ref_t mac_bridge_ref_cb;
376 mac_bridge_ls_t mac_bridge_ls_cb;
378 static int i_mac_constructor(void *, void *, int);
379 static void i_mac_destructor(void *, void *);
380 static int i_mac_ring_ctor(void *, void *, int);
381 static void i_mac_ring_dtor(void *, void *);
382 static mblk_t *mac_rx_classify(mac_impl_t *, mac_resource_handle_t, mblk_t *);
383 void mac_tx_client_flush(mac_client_impl_t *);
384 void mac_tx_client_block(mac_client_impl_t *);
385 static void mac_rx_ring_quiesce(mac_ring_t *, uint_t);
386 static int mac_start_group_and_rings(mac_group_t *);
387 static void mac_stop_group_and_rings(mac_group_t *);
388 static void mac_pool_event_cb(pool_event_t, int, void *);
390 typedef struct netinfo_s {
398 * Module initialization functions.
404 mac_tx_percpu_cnt = ((boot_max_ncpus == -1) ? max_ncpus :
407 /* Upper bound is mac_tx_percpu_cnt_max */
408 if (mac_tx_percpu_cnt > mac_tx_percpu_cnt_max)
409 mac_tx_percpu_cnt = mac_tx_percpu_cnt_max;
411 if (mac_tx_percpu_cnt < 1) {
412 /* Someone set mac_tx_percpu_cnt_max to 0 or less */
413 mac_tx_percpu_cnt = 1;
416 ASSERT(mac_tx_percpu_cnt >= 1);
417 mac_tx_percpu_cnt = (1 << highbit(mac_tx_percpu_cnt - 1));
419 * Make it of the form 2**N - 1 in the range
420 * [0 .. mac_tx_percpu_cnt_max - 1]
424 i_mac_impl_cachep = kmem_cache_create("mac_impl_cache",
425 sizeof (mac_impl_t), 0, i_mac_constructor, i_mac_destructor,
426 NULL, NULL, NULL, 0);
427 ASSERT(i_mac_impl_cachep != NULL);
429 mac_ring_cache = kmem_cache_create("mac_ring_cache",
430 sizeof (mac_ring_t), 0, i_mac_ring_ctor, i_mac_ring_dtor, NULL,
432 ASSERT(mac_ring_cache != NULL);
434 i_mac_impl_hash = mod_hash_create_extended("mac_impl_hash",
435 IMPL_HASHSZ, mod_hash_null_keydtor, mod_hash_null_valdtor,
436 mod_hash_bystr, NULL, mod_hash_strkey_cmp, KM_SLEEP);
437 rw_init(&i_mac_impl_lock, NULL, RW_DEFAULT, NULL);
440 mac_soft_ring_init();
444 i_mac_impl_count = 0;
446 i_mactype_hash = mod_hash_create_extended("mactype_hash",
448 mod_hash_null_keydtor, mod_hash_null_valdtor,
449 mod_hash_bystr, NULL, mod_hash_strkey_cmp, KM_SLEEP);
452 * Allocate an id space to manage minor numbers. The range of the
453 * space will be from MAC_MAX_MINOR+1 to MAC_PRIVATE_MINOR-1. This
454 * leaves half of the 32-bit minors available for driver private use.
456 minor_ids = id_space_create("mac_minor_ids", MAC_MAX_MINOR+1,
457 MAC_PRIVATE_MINOR-1);
458 ASSERT(minor_ids != NULL);
461 /* Let's default to 20 seconds */
462 mac_logging_interval = 20;
463 mac_flow_log_enable = B_FALSE;
464 mac_link_log_enable = B_FALSE;
465 mac_logging_timer = 0;
467 /* Register to be notified of noteworthy pools events */
468 mac_pool_event_reg.pec_func = mac_pool_event_cb;
469 mac_pool_event_reg.pec_arg = NULL;
470 pool_event_cb_register(&mac_pool_event_reg);
477 if (i_mac_impl_count > 0 || minor_count > 0)
480 pool_event_cb_unregister(&mac_pool_event_reg);
482 id_space_destroy(minor_ids);
485 mod_hash_destroy_hash(i_mac_impl_hash);
486 rw_destroy(&i_mac_impl_lock);
489 kmem_cache_destroy(mac_ring_cache);
491 mod_hash_destroy_hash(i_mactype_hash);
492 mac_soft_ring_finish();
499 * Initialize a GLDv3 driver's device ops. A driver that manages its own ops
500 * (e.g. softmac) may pass in a NULL ops argument.
503 mac_init_ops(struct dev_ops *ops, const char *name)
505 major_t major = ddi_name_to_major((char *)name);
508 * By returning on error below, we are not letting the driver continue
509 * in an undefined context. The mac_register() function will fail if
510 * DN_GLDV3_DRIVER isn't set.
512 if (major == DDI_MAJOR_T_NONE)
514 LOCK_DEV_OPS(&devnamesp[major].dn_lock);
515 devnamesp[major].dn_flags |= (DN_GLDV3_DRIVER | DN_NETWORK_DRIVER);
516 UNLOCK_DEV_OPS(&devnamesp[major].dn_lock);
518 dld_init_ops(ops, name);
522 mac_fini_ops(struct dev_ops *ops)
529 i_mac_constructor(void *buf
, void *arg
, int kmflag
)
531 mac_impl_t
*mip
= buf
;
533 bzero(buf
, sizeof (mac_impl_t
));
535 mip
->mi_linkstate
= LINK_STATE_UNKNOWN
;
537 rw_init(&mip
->mi_rw_lock
, NULL
, RW_DRIVER
, NULL
);
538 mutex_init(&mip
->mi_notify_lock
, NULL
, MUTEX_DRIVER
, NULL
);
539 mutex_init(&mip
->mi_promisc_lock
, NULL
, MUTEX_DRIVER
, NULL
);
540 mutex_init(&mip
->mi_ring_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
542 mip
->mi_notify_cb_info
.mcbi_lockp
= &mip
->mi_notify_lock
;
543 cv_init(&mip
->mi_notify_cb_info
.mcbi_cv
, NULL
, CV_DRIVER
, NULL
);
544 mip
->mi_promisc_cb_info
.mcbi_lockp
= &mip
->mi_promisc_lock
;
545 cv_init(&mip
->mi_promisc_cb_info
.mcbi_cv
, NULL
, CV_DRIVER
, NULL
);
547 mutex_init(&mip
->mi_bridge_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
554 i_mac_destructor(void *buf
, void *arg
)
556 mac_impl_t
*mip
= buf
;
559 ASSERT(mip
->mi_ref
== 0);
560 ASSERT(mip
->mi_active
== 0);
561 ASSERT(mip
->mi_linkstate
== LINK_STATE_UNKNOWN
);
562 ASSERT(mip
->mi_devpromisc
== 0);
563 ASSERT(mip
->mi_ksp
== NULL
);
564 ASSERT(mip
->mi_kstat_count
== 0);
565 ASSERT(mip
->mi_nclients
== 0);
566 ASSERT(mip
->mi_nactiveclients
== 0);
567 ASSERT(mip
->mi_single_active_client
== NULL
);
568 ASSERT(mip
->mi_state_flags
== 0);
569 ASSERT(mip
->mi_factory_addr
== NULL
);
570 ASSERT(mip
->mi_factory_addr_num
== 0);
571 ASSERT(mip
->mi_default_tx_ring
== NULL
);
573 mcbi
= &mip
->mi_notify_cb_info
;
574 ASSERT(mcbi
->mcbi_del_cnt
== 0 && mcbi
->mcbi_walker_cnt
== 0);
575 ASSERT(mip
->mi_notify_bits
== 0);
576 ASSERT(mip
->mi_notify_thread
== NULL
);
577 ASSERT(mcbi
->mcbi_lockp
== &mip
->mi_notify_lock
);
578 mcbi
->mcbi_lockp
= NULL
;
580 mcbi
= &mip
->mi_promisc_cb_info
;
581 ASSERT(mcbi
->mcbi_del_cnt
== 0 && mip
->mi_promisc_list
== NULL
);
582 ASSERT(mip
->mi_promisc_list
== NULL
);
583 ASSERT(mcbi
->mcbi_lockp
== &mip
->mi_promisc_lock
);
584 mcbi
->mcbi_lockp
= NULL
;
586 ASSERT(mip
->mi_bcast_ngrps
== 0 && mip
->mi_bcast_grp
== NULL
);
587 ASSERT(mip
->mi_perim_owner
== NULL
&& mip
->mi_perim_ocnt
== 0);
589 rw_destroy(&mip
->mi_rw_lock
);
591 mutex_destroy(&mip
->mi_promisc_lock
);
592 cv_destroy(&mip
->mi_promisc_cb_info
.mcbi_cv
);
593 mutex_destroy(&mip
->mi_notify_lock
);
594 cv_destroy(&mip
->mi_notify_cb_info
.mcbi_cv
);
595 mutex_destroy(&mip
->mi_ring_lock
);
597 ASSERT(mip
->mi_bridge_link
== NULL
);
602 i_mac_ring_ctor(void *buf
, void *arg
, int kmflag
)
604 mac_ring_t
*ring
= (mac_ring_t
*)buf
;
606 bzero(ring
, sizeof (mac_ring_t
));
607 cv_init(&ring
->mr_cv
, NULL
, CV_DEFAULT
, NULL
);
608 mutex_init(&ring
->mr_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
609 ring
->mr_state
= MR_FREE
;
615 i_mac_ring_dtor(void *buf
, void *arg
)
617 mac_ring_t
*ring
= (mac_ring_t
*)buf
;
619 cv_destroy(&ring
->mr_cv
);
620 mutex_destroy(&ring
->mr_lock
);
624 * Common functions to do mac callback addition and deletion. Currently this is
625 * used by promisc callbacks and notify callbacks. List addition and deletion
626 * need to take care of list walkers. List walkers in general, can't hold list
627 * locks and make upcall callbacks due to potential lock order and recursive
628 * reentry issues. Instead list walkers increment the list walker count to mark
629 * the presence of a walker thread. Addition can be carefully done to ensure
630 * that the list walker always sees either the old list or the new list.
631 * However the deletion can't be done while the walker is active, instead the
632 * deleting thread simply marks the entry as logically deleted. The last walker
633 * physically deletes and frees up the logically deleted entries when the walk
637 mac_callback_add(mac_cb_info_t
*mcbi
, mac_cb_t
**mcb_head
,
643 /* Verify it is not already in the list */
644 for (pp
= mcb_head
; (p
= *pp
) != NULL
; pp
= &p
->mcb_nextp
) {
651 * Add it to the head of the callback list. The membar ensures that
652 * the following list pointer manipulations reach global visibility
653 * in exactly the program order below.
655 ASSERT(MUTEX_HELD(mcbi
->mcbi_lockp
));
657 mcb_elem
->mcb_nextp
= *mcb_head
;
659 *mcb_head
= mcb_elem
;
663 * Mark the entry as logically deleted. If there aren't any walkers unlink
664 * from the list. In either case return the corresponding status.
667 mac_callback_remove(mac_cb_info_t
*mcbi
, mac_cb_t
**mcb_head
,
673 ASSERT(MUTEX_HELD(mcbi
->mcbi_lockp
));
675 * Search the callback list for the entry to be removed
677 for (pp
= mcb_head
; (p
= *pp
) != NULL
; pp
= &p
->mcb_nextp
) {
684 * If there are walkers just mark it as deleted and the last walker
685 * will remove from the list and free it.
687 if (mcbi
->mcbi_walker_cnt
!= 0) {
688 p
->mcb_flags
|= MCB_CONDEMNED
;
689 mcbi
->mcbi_del_cnt
++;
693 ASSERT(mcbi
->mcbi_del_cnt
== 0);
700 * Wait for all pending callback removals to be completed
703 mac_callback_remove_wait(mac_cb_info_t
*mcbi
)
705 ASSERT(MUTEX_HELD(mcbi
->mcbi_lockp
));
706 while (mcbi
->mcbi_del_cnt
!= 0) {
707 DTRACE_PROBE1(need_wait
, mac_cb_info_t
*, mcbi
);
708 cv_wait(&mcbi
->mcbi_cv
, mcbi
->mcbi_lockp
);
713 * The last mac callback walker does the cleanup. Walk the list and unlink
714 * all the logically deleted entries and construct a temporary list of
715 * removed entries. Return the list of removed entries to the caller.
718 mac_callback_walker_cleanup(mac_cb_info_t
*mcbi
, mac_cb_t
**mcb_head
)
722 mac_cb_t
*rmlist
= NULL
; /* List of removed elements */
725 ASSERT(MUTEX_HELD(mcbi
->mcbi_lockp
));
726 ASSERT(mcbi
->mcbi_del_cnt
!= 0 && mcbi
->mcbi_walker_cnt
== 0);
729 while (*pp
!= NULL
) {
730 if ((*pp
)->mcb_flags
& MCB_CONDEMNED
) {
733 p
->mcb_nextp
= rmlist
;
738 pp
= &(*pp
)->mcb_nextp
;
741 ASSERT(mcbi
->mcbi_del_cnt
== cnt
);
742 mcbi
->mcbi_del_cnt
= 0;
747 mac_callback_lookup(mac_cb_t
**mcb_headp
, mac_cb_t
*mcb_elem
)
751 /* Verify it is not already in the list */
752 for (mcb
= *mcb_headp
; mcb
!= NULL
; mcb
= mcb
->mcb_nextp
) {
761 mac_callback_find(mac_cb_info_t
*mcbi
, mac_cb_t
**mcb_headp
, mac_cb_t
*mcb_elem
)
765 mutex_enter(mcbi
->mcbi_lockp
);
766 found
= mac_callback_lookup(mcb_headp
, mcb_elem
);
767 mutex_exit(mcbi
->mcbi_lockp
);
772 /* Free the list of removed callbacks */
774 mac_callback_free(mac_cb_t
*rmlist
)
779 for (mcb
= rmlist
; mcb
!= NULL
; mcb
= mcb_next
) {
780 mcb_next
= mcb
->mcb_nextp
;
781 kmem_free(mcb
->mcb_objp
, mcb
->mcb_objsize
);
786 * The promisc callbacks are in 2 lists, one off the 'mip' and another off the
787 * 'mcip' threaded by mpi_mi_link and mpi_mci_link respectively. However there
788 * is only a single shared total walker count, and an entry can't be physically
789 * unlinked if a walker is active on either list. The last walker does this
790 * cleanup of logically deleted entries.
793 i_mac_promisc_walker_cleanup(mac_impl_t
*mip
)
798 mac_promisc_impl_t
*mpip
;
801 * Construct a temporary list of deleted callbacks by walking the
802 * mi_promisc_list. Then for each entry in the temporary list,
803 * remove it from the mci_promisc_list and free the entry.
805 rmlist
= mac_callback_walker_cleanup(&mip
->mi_promisc_cb_info
,
806 &mip
->mi_promisc_list
);
808 for (mcb
= rmlist
; mcb
!= NULL
; mcb
= mcb_next
) {
809 mcb_next
= mcb
->mcb_nextp
;
810 mpip
= (mac_promisc_impl_t
*)mcb
->mcb_objp
;
811 VERIFY(mac_callback_remove(&mip
->mi_promisc_cb_info
,
812 &mpip
->mpi_mcip
->mci_promisc_list
, &mpip
->mpi_mci_link
));
814 mcb
->mcb_nextp
= NULL
;
815 kmem_cache_free(mac_promisc_impl_cache
, mpip
);
820 i_mac_notify(mac_impl_t
*mip
, mac_notify_type_t type
)
825 * Signal the notify thread even after mi_ref has become zero and
826 * mi_disabled is set. The synchronization with the notify thread
827 * happens in mac_unregister and that implies the driver must make
828 * sure it is single-threaded (with respect to mac calls) and that
829 * all pending mac calls have returned before it calls mac_unregister
831 rw_enter(&i_mac_impl_lock
, RW_READER
);
832 if (mip
->mi_state_flags
& MIS_DISABLED
)
836 * Guard against incorrect notifications. (Running a newer
837 * mac client against an older implementation?)
839 if (type
>= MAC_NNOTE
)
842 mcbi
= &mip
->mi_notify_cb_info
;
843 mutex_enter(mcbi
->mcbi_lockp
);
844 mip
->mi_notify_bits
|= (1 << type
);
845 cv_broadcast(&mcbi
->mcbi_cv
);
846 mutex_exit(mcbi
->mcbi_lockp
);
849 rw_exit(&i_mac_impl_lock
);
853 * Mac serialization primitives. Please see the block comment at the
857 i_mac_perim_enter(mac_impl_t *mip)
859 mac_client_impl_t *mcip;
861 if (mip->mi_state_flags & MIS_IS_VNIC) {
863 * This is a VNIC. Return the lower mac since that is what
864 * we want to serialize on.
866 mcip = mac_vnic_lower(mip);
870 mutex_enter(&mip->mi_perim_lock);
871 if (mip->mi_perim_owner == curthread) {
872 mip->mi_perim_ocnt++;
873 mutex_exit(&mip->mi_perim_lock);
877 while (mip->mi_perim_owner != NULL)
878 cv_wait(&mip->mi_perim_cv, &mip->mi_perim_lock);
880 mip->mi_perim_owner = curthread;
881 ASSERT(mip->mi_perim_ocnt == 0);
882 mip->mi_perim_ocnt++;
884 mip->mi_perim_stack_depth = getpcstack(mip->mi_perim_stack,
885 MAC_PERIM_STACK_DEPTH);
887 mutex_exit(&mip->mi_perim_lock);
891 i_mac_perim_enter_nowait(mac_impl_t *mip)
894 * The vnic is a special case, since the serialization is done based
895 * on the lower mac. If the lower mac is busy, it does not imply the
896 * vnic can't be unregistered. But in the case of other drivers,
897 * a busy perimeter or open mac handles implies that the mac is busy
898 * and can't be unregistered.
900 if (mip->mi_state_flags & MIS_IS_VNIC) {
901 i_mac_perim_enter(mip);
905 mutex_enter(&mip->mi_perim_lock);
906 if (mip->mi_perim_owner != NULL) {
907 mutex_exit(&mip->mi_perim_lock);
910 ASSERT(mip->mi_perim_ocnt == 0);
911 mip->mi_perim_owner = curthread;
912 mip->mi_perim_ocnt++;
913 mutex_exit(&mip->mi_perim_lock);
919 i_mac_perim_exit(mac_impl_t *mip)
921 mac_client_impl_t *mcip;
923 if (mip->mi_state_flags & MIS_IS_VNIC) {
925 * This is a VNIC. Return the lower mac since that is what
926 * we want to serialize on.
928 mcip = mac_vnic_lower(mip);
932 ASSERT(mip->mi_perim_owner == curthread && mip->mi_perim_ocnt != 0);
934 mutex_enter(&mip->mi_perim_lock);
935 if (--mip->mi_perim_ocnt == 0) {
936 mip->mi_perim_owner = NULL;
937 cv_signal(&mip->mi_perim_cv);
939 mutex_exit(&mip->mi_perim_lock);
943 * Returns whether the current thread holds the mac perimeter. Used in making
947 mac_perim_held(mac_handle_t mh)
949 mac_impl_t *mip = (mac_impl_t *)mh;
950 mac_client_impl_t *mcip;
952 if (mip->mi_state_flags & MIS_IS_VNIC) {
954 * This is a VNIC. Return the lower mac since that is what
955 * we want to serialize on.
957 mcip = mac_vnic_lower(mip);
960 return (mip->mi_perim_owner == curthread);
964 * mac client interfaces to enter the mac perimeter of a mac end point, given
965 * its mac handle, or macname or linkid.
968 mac_perim_enter_by_mh(mac_handle_t mh
, mac_perim_handle_t
*mphp
)
970 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
972 i_mac_perim_enter(mip
);
974 * The mac_perim_handle_t returned encodes the 'mip' and whether a
975 * mac_open has been done internally while entering the perimeter.
976 * This information is used in mac_perim_exit
978 MAC_ENCODE_MPH(*mphp
, mip
, 0);
982 mac_perim_enter_by_macname(const char *name
, mac_perim_handle_t
*mphp
)
987 if ((err
= mac_open(name
, &mh
)) != 0)
990 mac_perim_enter_by_mh(mh
, mphp
);
991 MAC_ENCODE_MPH(*mphp
, mh
, 1);
996 mac_perim_enter_by_linkid(datalink_id_t linkid
, mac_perim_handle_t
*mphp
)
1001 if ((err
= mac_open_by_linkid(linkid
, &mh
)) != 0)
1004 mac_perim_enter_by_mh(mh
, mphp
);
1005 MAC_ENCODE_MPH(*mphp
, mh
, 1);
1010 mac_perim_exit(mac_perim_handle_t mph
)
1013 boolean_t need_close
;
1015 MAC_DECODE_MPH(mph
, mip
, need_close
);
1016 i_mac_perim_exit(mip
);
1018 mac_close((mac_handle_t
)mip
);
1022 mac_hold(const char *macname
, mac_impl_t
**pmip
)
1028 * Check the device name length to make sure it won't overflow our
1031 if (strlen(macname
) >= MAXNAMELEN
)
1035 * Look up its entry in the global hash table.
1037 rw_enter(&i_mac_impl_lock
, RW_WRITER
);
1038 err
= mod_hash_find(i_mac_impl_hash
, (mod_hash_key_t
)macname
,
1039 (mod_hash_val_t
*)&mip
);
1042 rw_exit(&i_mac_impl_lock
);
1046 if (mip
->mi_state_flags
& MIS_DISABLED
) {
1047 rw_exit(&i_mac_impl_lock
);
1051 if (mip
->mi_state_flags
& MIS_EXCLUSIVE_HELD
) {
1052 rw_exit(&i_mac_impl_lock
);
1057 rw_exit(&i_mac_impl_lock
);
1064 mac_rele(mac_impl_t
*mip
)
1066 rw_enter(&i_mac_impl_lock
, RW_WRITER
);
1067 ASSERT(mip
->mi_ref
!= 0);
1068 if (--mip
->mi_ref
== 0) {
1069 ASSERT(mip
->mi_nactiveclients
== 0 &&
1070 !(mip
->mi_state_flags
& MIS_EXCLUSIVE
));
1072 rw_exit(&i_mac_impl_lock
);
1076 * Private GLDv3 function to start a MAC instance.
1079 mac_start(mac_handle_t mh
)
1081 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
1083 mac_group_t
*defgrp
;
1085 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
1086 ASSERT(mip
->mi_start
!= NULL
);
1089 * Check whether the device is already started.
1091 if (mip
->mi_active
++ == 0) {
1092 mac_ring_t
*ring
= NULL
;
1097 err
= mip
->mi_start(mip
->mi_driver
);
1104 * Start the default tx ring.
1106 if (mip
->mi_default_tx_ring
!= NULL
) {
1108 ring
= (mac_ring_t
*)mip
->mi_default_tx_ring
;
1109 if (ring
->mr_state
!= MR_INUSE
) {
1110 err
= mac_start_ring(ring
);
1118 if ((defgrp
= MAC_DEFAULT_RX_GROUP(mip
)) != NULL
) {
1120 * Start the default ring, since it will be needed
1121 * to receive broadcast and multicast traffic for
1122 * both primary and non-primary MAC clients.
1124 ASSERT(defgrp
->mrg_state
== MAC_GROUP_STATE_REGISTERED
);
1125 err
= mac_start_group_and_rings(defgrp
);
1128 if ((ring
!= NULL
) &&
1129 (ring
->mr_state
== MR_INUSE
))
1130 mac_stop_ring(ring
);
1133 mac_set_group_state(defgrp
, MAC_GROUP_STATE_SHARED
);
1141 * Private GLDv3 function to stop a MAC instance.
1144 mac_stop(mac_handle_t mh
)
1146 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
1149 ASSERT(mip
->mi_stop
!= NULL
);
1150 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
1153 * Check whether the device is still needed.
1155 ASSERT(mip
->mi_active
!= 0);
1156 if (--mip
->mi_active
== 0) {
1157 if ((grp
= MAC_DEFAULT_RX_GROUP(mip
)) != NULL
) {
1159 * There should be no more active clients since the
1160 * MAC is being stopped. Stop the default RX group
1161 * and transition it back to registered state.
1163 * When clients are torn down, the groups
1164 * are released via mac_release_rx_group which
1165 * knows that the default group is always in
1166 * started mode since broadcast uses it. So
1167 * we can assert that there are no clients
1168 * (since mac_bcast_add doesn't register itself
1169 * as a client) and group is in SHARED state.
1171 ASSERT(grp
->mrg_state
== MAC_GROUP_STATE_SHARED
);
1172 ASSERT(MAC_GROUP_NO_CLIENT(grp
) &&
1173 mip
->mi_nactiveclients
== 0);
1174 mac_stop_group_and_rings(grp
);
1175 mac_set_group_state(grp
, MAC_GROUP_STATE_REGISTERED
);
1178 if (mip
->mi_default_tx_ring
!= NULL
) {
1181 ring
= (mac_ring_t
*)mip
->mi_default_tx_ring
;
1182 if (ring
->mr_state
== MR_INUSE
) {
1183 mac_stop_ring(ring
);
1191 mip
->mi_stop(mip
->mi_driver
);
1196 i_mac_promisc_set(mac_impl_t
*mip
, boolean_t on
)
1200 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
1201 ASSERT(mip
->mi_setpromisc
!= NULL
);
1205 * Enable promiscuous mode on the device if not yet enabled.
1207 if (mip
->mi_devpromisc
++ == 0) {
1208 err
= mip
->mi_setpromisc(mip
->mi_driver
, B_TRUE
);
1210 mip
->mi_devpromisc
--;
1213 i_mac_notify(mip
, MAC_NOTE_DEVPROMISC
);
1216 if (mip
->mi_devpromisc
== 0)
1220 * Disable promiscuous mode on the device if this is the last
1223 if (--mip
->mi_devpromisc
== 0) {
1224 err
= mip
->mi_setpromisc(mip
->mi_driver
, B_FALSE
);
1226 mip
->mi_devpromisc
++;
1229 i_mac_notify(mip
, MAC_NOTE_DEVPROMISC
);
1237 * The promiscuity state can change any time. If the caller needs to take
1238 * actions that are atomic with the promiscuity state, then the caller needs
1239 * to bracket the entire sequence with mac_perim_enter/exit
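 *
 * E.g. a caller that must act on a stable promiscuity value could do
 * (hedged sketch):
 *
 *	mac_perim_enter_by_mh(mh, &mph);
 *	if (mac_promisc_get(mh))
 *		... promiscuity cannot change until mac_perim_exit() ...
 *	mac_perim_exit(mph);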
1242 mac_promisc_get(mac_handle_t mh
)
1244 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
1247 * Return the current promiscuity.
1249 return (mip
->mi_devpromisc
!= 0);
1253 * Invoked at MAC instance attach time to initialize the list
1254 * of factory MAC addresses supported by a MAC instance. This function
1255 * builds a local cache in the mac_impl_t for the MAC addresses
1256 * supported by the underlying hardware. The MAC clients themselves
1257 * use the mac_addr_factory*() functions to query and reserve
1258 * factory MAC addresses.
1261 mac_addr_factory_init(mac_impl_t
*mip
)
1263 mac_capab_multifactaddr_t capab
;
1268 * First round to see how many factory MAC addresses are available.
1270 bzero(&capab
, sizeof (capab
));
1271 if (!i_mac_capab_get((mac_handle_t
)mip
, MAC_CAPAB_MULTIFACTADDR
,
1272 &capab
) || (capab
.mcm_naddr
== 0)) {
1274 * The MAC instance doesn't support multiple factory
1275 * MAC addresses; we're done here.
1281 * Allocate the space and get all the factory addresses.
1283 addr
= kmem_alloc(capab
.mcm_naddr
* MAXMACADDRLEN
, KM_SLEEP
);
1284 capab
.mcm_getaddr(mip
->mi_driver
, capab
.mcm_naddr
, addr
);
1286 mip
->mi_factory_addr_num
= capab
.mcm_naddr
;
1287 mip
->mi_factory_addr
= kmem_zalloc(mip
->mi_factory_addr_num
*
1288 sizeof (mac_factory_addr_t
), KM_SLEEP
);
1290 for (i
= 0; i
< capab
.mcm_naddr
; i
++) {
1291 bcopy(addr
+ i
* MAXMACADDRLEN
,
1292 mip
->mi_factory_addr
[i
].mfa_addr
,
1293 mip
->mi_type
->mt_addr_length
);
1294 mip
->mi_factory_addr
[i
].mfa_in_use
= B_FALSE
;
1297 kmem_free(addr
, capab
.mcm_naddr
* MAXMACADDRLEN
);
1301 mac_addr_factory_fini(mac_impl_t
*mip
)
1303 if (mip
->mi_factory_addr
== NULL
) {
1304 ASSERT(mip
->mi_factory_addr_num
== 0);
1308 kmem_free(mip
->mi_factory_addr
, mip
->mi_factory_addr_num
*
1309 sizeof (mac_factory_addr_t
));
1311 mip
->mi_factory_addr
= NULL
;
1312 mip
->mi_factory_addr_num
= 0;
1316 * Reserve a factory MAC address. If *slot is set to -1, the function
1317 * attempts to reserve any of the available factory MAC addresses and
1318 * returns the reserved slot id. If no slots are available, the function
1319 * returns ENOSPC. If *slot is not set to -1, the function reserves
1320 * the specified slot if it is available, or returns EBUSY if the slot
1321 * is already used. Returns ENOTSUP if the underlying MAC does not
1322 * support multiple factory addresses. If the slot number is not -1 but
1323 * is invalid, returns EINVAL.
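 *
 * Illustrative usage (hypothetical client code, error handling elided):
 *
 *	int slot = -1;
 *	err = mac_addr_factory_reserve(mch, &slot);	any free slot; its id
 *							is returned in *slot
 *	slot = 3;
 *	err = mac_addr_factory_reserve(mch, &slot);	slot 3 only; may fail
 *							with EBUSY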
1326 mac_addr_factory_reserve(mac_client_handle_t mch
, int *slot
)
1328 mac_client_impl_t
*mcip
= (mac_client_impl_t
*)mch
;
1329 mac_impl_t
*mip
= mcip
->mci_mip
;
1332 i_mac_perim_enter(mip
);
1334 * Protect against concurrent readers that may need a self-consistent
1335 * view of the factory addresses
1337 rw_enter(&mip
->mi_rw_lock
, RW_WRITER
);
1339 if (mip
->mi_factory_addr_num
== 0) {
1345 /* check the specified slot */
1346 if (*slot
< 1 || *slot
> mip
->mi_factory_addr_num
) {
1350 if (mip
->mi_factory_addr
[*slot
-1].mfa_in_use
) {
1355 /* pick the next available slot */
1356 for (i
= 0; i
< mip
->mi_factory_addr_num
; i
++) {
1357 if (!mip
->mi_factory_addr
[i
].mfa_in_use
)
1361 if (i
== mip
->mi_factory_addr_num
) {
1368 mip
->mi_factory_addr
[*slot
-1].mfa_in_use
= B_TRUE
;
1369 mip
->mi_factory_addr
[*slot
-1].mfa_client
= mcip
;
1372 rw_exit(&mip
->mi_rw_lock
);
1373 i_mac_perim_exit(mip
);
1378 * Release the specified factory MAC address slot.
1381 mac_addr_factory_release(mac_client_handle_t mch
, uint_t slot
)
1383 mac_client_impl_t
*mcip
= (mac_client_impl_t
*)mch
;
1384 mac_impl_t
*mip
= mcip
->mci_mip
;
1386 i_mac_perim_enter(mip
);
1388 * Protect against concurrent readers that may need a self-consistent
1389 * view of the factory addresses
1391 rw_enter(&mip
->mi_rw_lock
, RW_WRITER
);
1393 ASSERT(slot
> 0 && slot
<= mip
->mi_factory_addr_num
);
1394 ASSERT(mip
->mi_factory_addr
[slot
-1].mfa_in_use
);
1396 mip
->mi_factory_addr
[slot
-1].mfa_in_use
= B_FALSE
;
1398 rw_exit(&mip
->mi_rw_lock
);
1399 i_mac_perim_exit(mip
);
1403 * Stores in mac_addr the value of the specified MAC address. Returns
1404 * 0 on success, or EINVAL if the slot number is not valid for the MAC.
1405 * The caller must provide a string of at least MAXNAMELEN bytes.
1408 mac_addr_factory_value(mac_handle_t mh
, int slot
, uchar_t
*mac_addr
,
1409 uint_t
*addr_len
, char *client_name
, boolean_t
*in_use_arg
)
1411 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
1414 ASSERT(slot
> 0 && slot
<= mip
->mi_factory_addr_num
);
1417 * Readers need to hold mi_rw_lock. Writers need to hold mac perimeter
1420 rw_enter(&mip
->mi_rw_lock
, RW_READER
);
1421 bcopy(mip
->mi_factory_addr
[slot
-1].mfa_addr
, mac_addr
, MAXMACADDRLEN
);
1422 *addr_len
= mip
->mi_type
->mt_addr_length
;
1423 in_use
= mip
->mi_factory_addr
[slot
-1].mfa_in_use
;
1424 if (in_use
&& client_name
!= NULL
) {
1425 bcopy(mip
->mi_factory_addr
[slot
-1].mfa_client
->mci_name
,
1426 client_name
, MAXNAMELEN
);
1428 if (in_use_arg
!= NULL
)
1429 *in_use_arg
= in_use
;
1430 rw_exit(&mip
->mi_rw_lock
);
1434 * Returns the number of factory MAC addresses (in addition to the
1435 * primary MAC address), 0 if the underlying MAC doesn't support
1439 mac_addr_factory_num(mac_handle_t mh
)
1441 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
1443 return (mip
->mi_factory_addr_num
);
1448 mac_rx_group_unmark(mac_group_t
*grp
, uint_t flag
)
1452 for (ring
= grp
->mrg_rings
; ring
!= NULL
; ring
= ring
->mr_next
)
1453 ring
->mr_flag
&= ~flag
;
1457 * The following mac_hwrings_xxx() functions are private mac client functions
1458 * used by the aggr driver to access and control the underlying HW Rx group
1459 * and rings. In this case, the aggr driver has exclusive control of the
1460 * underlying HW Rx group/rings; it calls the following functions to
1461 * start/stop the HW Rx rings, disable/enable polling, add/remove mac
1462 * addresses, or set up the Rx callback.
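 *
 * Illustrative usage by an exclusive client such as aggr (hypothetical
 * sketch, not the actual aggr code):
 *
 *	mac_group_handle_t hwgh;
 *	mac_ring_handle_t hwrh[MAX_RINGS_PER_GROUP];
 *	mblk_t *mp;
 *	int i, cnt;
 *
 *	cnt = mac_hwrings_get(mch, &hwgh, hwrh, MAC_RING_TYPE_RX);
 *	for (i = 0; i < cnt; i++)
 *		(void) mac_hwring_start(hwrh[i]);
 *	(void) mac_hwring_disable_intr(hwrh[0]);	switch ring 0 to polling
 *	mp = mac_hwring_poll(hwrh[0], bytes_to_pickup);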
1466 mac_hwrings_rx_process(void *arg
, mac_resource_handle_t srs
,
1467 mblk_t
*mp_chain
, boolean_t loopback
)
1469 mac_soft_ring_set_t
*mac_srs
= (mac_soft_ring_set_t
*)srs
;
1470 mac_srs_rx_t
*srs_rx
= &mac_srs
->srs_rx
;
1471 mac_direct_rx_t proc
;
1473 mac_resource_handle_t arg2
;
1475 proc
= srs_rx
->sr_func
;
1476 arg1
= srs_rx
->sr_arg1
;
1477 arg2
= mac_srs
->srs_mrh
;
1479 proc(arg1
, arg2
, mp_chain
, NULL
);
1483 * This function is called to get the list of HW rings that are reserved by
1484 * an exclusive mac client.
1486 * Return value: the number of HW rings.
1489 mac_hwrings_get(mac_client_handle_t mch
, mac_group_handle_t
*hwgh
,
1490 mac_ring_handle_t
*hwrh
, mac_ring_type_t rtype
)
1492 mac_client_impl_t
*mcip
= (mac_client_impl_t
*)mch
;
1493 flow_entry_t
*flent
= mcip
->mci_flent
;
1498 if (rtype
== MAC_RING_TYPE_RX
) {
1499 grp
= flent
->fe_rx_ring_group
;
1500 } else if (rtype
== MAC_RING_TYPE_TX
) {
1501 grp
= flent
->fe_tx_ring_group
;
1507 * The mac client did not reserve any RX group, return directly.
1508 * This is probably because the underlying MAC does not support
1516 * This group must be reserved by this mac client.
1518 ASSERT((grp
->mrg_state
== MAC_GROUP_STATE_RESERVED
) &&
1519 (mcip
== MAC_GROUP_ONLY_CLIENT(grp
)));
1521 for (ring
= grp
->mrg_rings
; ring
!= NULL
; ring
= ring
->mr_next
, cnt
++) {
1522 ASSERT(cnt
< MAX_RINGS_PER_GROUP
);
1523 hwrh
[cnt
] = (mac_ring_handle_t
)ring
;
1526 *hwgh
= (mac_group_handle_t
)grp
;
1532 * This function is called to get info about Tx/Rx rings.
1534 * Return value: returns uint_t which will have various bits set
1535 * that indicate different properties of the ring.
1538 mac_hwring_getinfo(mac_ring_handle_t rh
)
1540 mac_ring_t
*ring
= (mac_ring_t
*)rh
;
1541 mac_ring_info_t
*info
= &ring
->mr_info
;
1543 return (info
->mri_flags
);
1547 * Export ddi interrupt handles from the HW ring to the pseudo ring and
1548 * setup the RX callback of the mac client which exclusively controls
1552 mac_hwring_setup(mac_ring_handle_t hwrh
, mac_resource_handle_t prh
,
1553 mac_ring_handle_t pseudo_rh
)
1555 mac_ring_t
*hw_ring
= (mac_ring_t
*)hwrh
;
1556 mac_ring_t
*pseudo_ring
;
1557 mac_soft_ring_set_t
*mac_srs
= hw_ring
->mr_srs
;
1559 if (pseudo_rh
!= NULL
) {
1560 pseudo_ring
= (mac_ring_t
*)pseudo_rh
;
1561 /* Export the ddi handles to pseudo ring */
1562 pseudo_ring
->mr_info
.mri_intr
.mi_ddi_handle
=
1563 hw_ring
->mr_info
.mri_intr
.mi_ddi_handle
;
1564 pseudo_ring
->mr_info
.mri_intr
.mi_ddi_shared
=
1565 hw_ring
->mr_info
.mri_intr
.mi_ddi_shared
;
1567 * Save a pointer to pseudo ring in the hw ring. If
1568 * interrupt handle changes, the hw ring will be
1569 * notified of the change (see mac_ring_intr_set())
1570 * and the appropriate change has to be made to
1571 * the pseudo ring that has exported the ddi handle.
1573 hw_ring
->mr_prh
= pseudo_rh
;
1576 if (hw_ring
->mr_type
== MAC_RING_TYPE_RX
) {
1577 ASSERT(!(mac_srs
->srs_type
& SRST_TX
));
1578 mac_srs
->srs_mrh
= prh
;
1579 mac_srs
->srs_rx
.sr_lower_proc
= mac_hwrings_rx_process
;
1584 mac_hwring_teardown(mac_ring_handle_t hwrh
)
1586 mac_ring_t
*hw_ring
= (mac_ring_t
*)hwrh
;
1587 mac_soft_ring_set_t
*mac_srs
;
1589 if (hw_ring
== NULL
)
1591 hw_ring
->mr_prh
= NULL
;
1592 if (hw_ring
->mr_type
== MAC_RING_TYPE_RX
) {
1593 mac_srs
= hw_ring
->mr_srs
;
1594 ASSERT(!(mac_srs
->srs_type
& SRST_TX
));
1595 mac_srs
->srs_rx
.sr_lower_proc
= mac_rx_srs_process
;
1596 mac_srs
->srs_mrh
= NULL
;
1601 mac_hwring_disable_intr(mac_ring_handle_t rh
)
1603 mac_ring_t
*rr_ring
= (mac_ring_t
*)rh
;
1604 mac_intr_t
*intr
= &rr_ring
->mr_info
.mri_intr
;
1606 return (intr
->mi_disable(intr
->mi_handle
));
1610 mac_hwring_enable_intr(mac_ring_handle_t rh
)
1612 mac_ring_t
*rr_ring
= (mac_ring_t
*)rh
;
1613 mac_intr_t
*intr
= &rr_ring
->mr_info
.mri_intr
;
1615 return (intr
->mi_enable(intr
->mi_handle
));
1619 mac_hwring_start(mac_ring_handle_t rh
)
1621 mac_ring_t
*rr_ring
= (mac_ring_t
*)rh
;
1623 MAC_RING_UNMARK(rr_ring
, MR_QUIESCE
);
1628 mac_hwring_stop(mac_ring_handle_t rh
)
1630 mac_ring_t
*rr_ring
= (mac_ring_t
*)rh
;
1632 mac_rx_ring_quiesce(rr_ring
, MR_QUIESCE
);
1636 mac_hwring_poll(mac_ring_handle_t rh
, int bytes_to_pickup
)
1638 mac_ring_t
*rr_ring
= (mac_ring_t
*)rh
;
1639 mac_ring_info_t
*info
= &rr_ring
->mr_info
;
1641 return (info
->mri_poll(info
->mri_driver
, bytes_to_pickup
));
1645 * Send packets through a selected tx ring.
1648 mac_hwring_tx(mac_ring_handle_t rh
, mblk_t
*mp
)
1650 mac_ring_t
*ring
= (mac_ring_t
*)rh
;
1651 mac_ring_info_t
*info
= &ring
->mr_info
;
1653 ASSERT(ring
->mr_type
== MAC_RING_TYPE_TX
&&
1654 ring
->mr_state
>= MR_INUSE
);
1655 return (info
->mri_tx(info
->mri_driver
, mp
));
1659 * Query stats for a particular rx/tx ring
1662 mac_hwring_getstat(mac_ring_handle_t rh
, uint_t stat
, uint64_t *val
)
1664 mac_ring_t
*ring
= (mac_ring_t
*)rh
;
1665 mac_ring_info_t
*info
= &ring
->mr_info
;
1667 return (info
->mri_stat(info
->mri_driver
, stat
, val
));
1671 * Private function that is only used by aggr to send packets through
1672 * a port/Tx ring. Since aggr exposes a pseudo Tx ring even for ports
1673 * that do not expose Tx rings, the aggr_ring_tx() entry point needs
1674 * access to mac_impl_t to send packets through m_tx() entry point.
1675 * It accomplishes this by calling mac_hwring_send_priv() function.
1678 mac_hwring_send_priv(mac_client_handle_t mch
, mac_ring_handle_t rh
, mblk_t
*mp
)
1680 mac_client_impl_t
*mcip
= (mac_client_impl_t
*)mch
;
1681 mac_impl_t
*mip
= mcip
->mci_mip
;
1683 MAC_TX(mip
, rh
, mp
, mcip
);
1688 mac_hwgroup_addmac(mac_group_handle_t gh
, const uint8_t *addr
)
1690 mac_group_t
*group
= (mac_group_t
*)gh
;
1692 return (mac_group_addmac(group
, addr
));
1696 mac_hwgroup_remmac(mac_group_handle_t gh
, const uint8_t *addr
)
1698 mac_group_t
*group
= (mac_group_t
*)gh
;
1700 return (mac_group_remmac(group
, addr
));
1704 * Set the RX group to be shared/reserved. Note that the group must be
1705 * started/stopped outside of this function.
1708 mac_set_group_state(mac_group_t
*grp
, mac_group_state_t state
)
1711 * If there is no change in the group state, just return.
1713 if (grp
->mrg_state
== state
)
1717 case MAC_GROUP_STATE_RESERVED
:
1719 * Successfully reserved the group.
1721 * Given that there is an exclusive client controlling this
1722 * group, we enable the group level polling when available,
1723 * so that SRSs get to turn on/off individual rings they're
1726 ASSERT(MAC_PERIM_HELD(grp
->mrg_mh
));
1728 if (grp
->mrg_type
== MAC_RING_TYPE_RX
&&
1729 GROUP_INTR_DISABLE_FUNC(grp
) != NULL
) {
1730 GROUP_INTR_DISABLE_FUNC(grp
)(GROUP_INTR_HANDLE(grp
));
1734 case MAC_GROUP_STATE_SHARED
:
1736 * Set all rings of this group to software classified.
1737 * If the group has an overriding interrupt, then re-enable it.
1739 ASSERT(MAC_PERIM_HELD(grp
->mrg_mh
));
1741 if (grp
->mrg_type
== MAC_RING_TYPE_RX
&&
1742 GROUP_INTR_ENABLE_FUNC(grp
) != NULL
) {
1743 GROUP_INTR_ENABLE_FUNC(grp
)(GROUP_INTR_HANDLE(grp
));
1745 /* The ring is not available for reservations any more */
1748 case MAC_GROUP_STATE_REGISTERED
:
1749 /* Also callable from mac_register, perim is not held */
1757 grp
->mrg_state
= state
;
1761 * Quiesce future hardware classified packets for the specified Rx ring
1764 mac_rx_ring_quiesce(mac_ring_t
*rx_ring
, uint_t ring_flag
)
1766 ASSERT(rx_ring
->mr_classify_type
== MAC_HW_CLASSIFIER
);
1767 ASSERT(ring_flag
== MR_CONDEMNED
|| ring_flag
== MR_QUIESCE
);
1769 mutex_enter(&rx_ring
->mr_lock
);
1770 rx_ring
->mr_flag
|= ring_flag
;
1771 while (rx_ring
->mr_refcnt
!= 0)
1772 cv_wait(&rx_ring
->mr_cv
, &rx_ring
->mr_lock
);
1773 mutex_exit(&rx_ring
->mr_lock
);
1777 * Please see mac_tx for details about the per cpu locking scheme
1780 mac_tx_lock_all(mac_client_impl_t
*mcip
)
1784 for (i
= 0; i
<= mac_tx_percpu_cnt
; i
++)
1785 mutex_enter(&mcip
->mci_tx_pcpu
[i
].pcpu_tx_lock
);
1789 mac_tx_unlock_all(mac_client_impl_t
*mcip
)
1793 for (i
= mac_tx_percpu_cnt
; i
>= 0; i
--)
1794 mutex_exit(&mcip
->mci_tx_pcpu
[i
].pcpu_tx_lock
);
1798 mac_tx_unlock_allbutzero(mac_client_impl_t
*mcip
)
1802 for (i
= mac_tx_percpu_cnt
; i
> 0; i
--)
1803 mutex_exit(&mcip
->mci_tx_pcpu
[i
].pcpu_tx_lock
);
1807 mac_tx_sum_refcnt(mac_client_impl_t
*mcip
)
1812 for (i
= 0; i
<= mac_tx_percpu_cnt
; i
++)
1813 refcnt
+= mcip
->mci_tx_pcpu
[i
].pcpu_tx_refcnt
;
1819 * Stop future Tx packets coming down from the client in preparation for
1820 * quiescing the Tx side. This is needed for dynamic reclaim and reassignment
1821 * of rings between clients
1824 mac_tx_client_block(mac_client_impl_t
*mcip
)
1826 mac_tx_lock_all(mcip
);
1827 mcip
->mci_tx_flag
|= MCI_TX_QUIESCE
;
1828 while (mac_tx_sum_refcnt(mcip
) != 0) {
1829 mac_tx_unlock_allbutzero(mcip
);
1830 cv_wait(&mcip
->mci_tx_cv
, &mcip
->mci_tx_pcpu
[0].pcpu_tx_lock
);
1831 mutex_exit(&mcip
->mci_tx_pcpu
[0].pcpu_tx_lock
);
1832 mac_tx_lock_all(mcip
);
1834 mac_tx_unlock_all(mcip
);
1838 mac_tx_client_unblock(mac_client_impl_t
*mcip
)
1840 mac_tx_lock_all(mcip
);
1841 mcip
->mci_tx_flag
&= ~MCI_TX_QUIESCE
;
1842 mac_tx_unlock_all(mcip
);
1844 * We may fail to disable flow control for the last MAC_NOTE_TX
1845 * notification because the MAC client is quiesced. Send the
1846 * notification again.
1848 i_mac_notify(mcip
->mci_mip
, MAC_NOTE_TX
);
1852 * Wait for an SRS to quiesce. The SRS worker will signal us when the
1856 mac_srs_quiesce_wait(mac_soft_ring_set_t
*srs
, uint_t srs_flag
)
1858 mutex_enter(&srs
->srs_lock
);
1859 while (!(srs
->srs_state
& srs_flag
))
1860 cv_wait(&srs
->srs_quiesce_done_cv
, &srs
->srs_lock
);
1861 mutex_exit(&srs
->srs_lock
);
1865 * Quiescing an Rx SRS is achieved by the following sequence. The protocol
1866 * works bottom up by cutting off packet flow from the bottommost point in the
1867 * mac, then the SRS, and then the soft rings. There are 2 use cases of this
1868 * mechanism. One is a temporary quiesce of the SRS, such as say while changing
1869 * the Rx callbacks. Another use case is Rx SRS teardown. In the former case
1870 * the QUIESCE prefix/suffix is used and in the latter the CONDEMNED is used
1871 * for the SRS and MR flags. In the former case the threads pause waiting for
1872 * a restart, while in the latter case the threads exit. The Tx SRS teardown
1873 * is also mostly similar to the above.
1875 * 1. Stop future hardware classified packets at the lowest level in the mac.
1876 * Remove any hardware classification rule (CONDEMNED case) and mark the
1877 * rings as CONDEMNED or QUIESCE as appropriate. This prevents the mr_refcnt
1878 * from increasing. Upcalls from the driver that come through hardware
1879 * classification will be dropped in mac_rx from now on. Then we wait for
1880 * the mr_refcnt to drop to zero. When the mr_refcnt reaches zero we are
1881 * sure there aren't any upcall threads from the driver through hardware
1882 * classification. In the case of SRS teardown we also remove the
1883 * classification rule in the driver.
1885 * 2. Stop future software classified packets by marking the flow entry with
1886 * FE_QUIESCE or FE_CONDEMNED as appropriate which prevents the refcnt from
1887 * increasing. We also remove the flow entry from the table in the latter
1888 * case. Then wait for the fe_refcnt to reach an appropriate quiescent value
1889 * that indicates there aren't any active threads using that flow entry.
1891 * 3. Quiesce the SRS and softrings by signaling the SRS. The SRS poll thread,
1892 * SRS worker thread, and the soft ring threads are quiesced in sequence
1893 * with the SRS worker thread serving as a master controller. This
1894 * mechanism is explained in mac_srs_worker_quiesce().
1896 * The restart mechanism to reactivate the SRS and softrings is explained
1897 * in mac_srs_worker_restart(). Here we just signal the SRS worker to start the
1901 mac_rx_srs_quiesce(mac_soft_ring_set_t
*srs
, uint_t srs_quiesce_flag
)
1903 flow_entry_t
*flent
= srs
->srs_flent
;
1904 uint_t mr_flag
, srs_done_flag
;
1906 ASSERT(MAC_PERIM_HELD((mac_handle_t
)FLENT_TO_MIP(flent
)));
1907 ASSERT(!(srs
->srs_type
& SRST_TX
));
1909 if (srs_quiesce_flag
== SRS_CONDEMNED
) {
1910 mr_flag
= MR_CONDEMNED
;
1911 srs_done_flag
= SRS_CONDEMNED_DONE
;
1912 if (srs
->srs_type
& SRST_CLIENT_POLL_ENABLED
)
1913 mac_srs_client_poll_disable(srs
->srs_mcip
, srs
);
1915 ASSERT(srs_quiesce_flag
== SRS_QUIESCE
);
1916 mr_flag
= MR_QUIESCE
;
1917 srs_done_flag
= SRS_QUIESCE_DONE
;
1918 if (srs
->srs_type
& SRST_CLIENT_POLL_ENABLED
)
1919 mac_srs_client_poll_quiesce(srs
->srs_mcip
, srs
);
1922 if (srs
->srs_ring
!= NULL
) {
1923 mac_rx_ring_quiesce(srs
->srs_ring
, mr_flag
);
1926 * SRS is driven by software classification. In case
1927 * of CONDEMNED, the top level teardown functions will
1928 * deal with flow removal.
1930 if (srs_quiesce_flag
!= SRS_CONDEMNED
) {
1931 FLOW_MARK(flent
, FE_QUIESCE
);
1932 mac_flow_wait(flent
, FLOW_DRIVER_UPCALL
);
1937 * Signal the SRS to quiesce itself, and then cv_wait for the
1938 * SRS quiesce to complete. The SRS worker thread will wake us
1939 * up when the quiesce is complete
1941 mac_srs_signal(srs
, srs_quiesce_flag
);
1942 mac_srs_quiesce_wait(srs
, srs_done_flag
);
1949 mac_rx_srs_remove(mac_soft_ring_set_t
*srs
)
1951 flow_entry_t
*flent
= srs
->srs_flent
;
1954 mac_rx_srs_quiesce(srs
, SRS_CONDEMNED
);
1956 * Locate and remove our entry in the fe_rx_srs[] array, and
1957 * adjust the fe_rx_srs array entries and array count by
1958 * moving the last entry into the vacated spot.
1960 mutex_enter(&flent
->fe_lock
);
1961 for (i
= 0; i
< flent
->fe_rx_srs_cnt
; i
++) {
1962 if (flent
->fe_rx_srs
[i
] == srs
)
1966 ASSERT(i
!= 0 && i
< flent
->fe_rx_srs_cnt
);
1967 if (i
!= flent
->fe_rx_srs_cnt
- 1) {
1968 flent
->fe_rx_srs
[i
] =
1969 flent
->fe_rx_srs
[flent
->fe_rx_srs_cnt
- 1];
1970 i
= flent
->fe_rx_srs_cnt
- 1;
1973 flent
->fe_rx_srs
[i
] = NULL
;
1974 flent
->fe_rx_srs_cnt
--;
1975 mutex_exit(&flent
->fe_lock
);
1981 mac_srs_clear_flag(mac_soft_ring_set_t
*srs
, uint_t flag
)
1983 mutex_enter(&srs
->srs_lock
);
1984 srs
->srs_state
&= ~flag
;
1985 mutex_exit(&srs
->srs_lock
);
1989 mac_rx_srs_restart(mac_soft_ring_set_t
*srs
)
1991 flow_entry_t
*flent
= srs
->srs_flent
;
1994 ASSERT(MAC_PERIM_HELD((mac_handle_t
)FLENT_TO_MIP(flent
)));
1995 ASSERT((srs
->srs_type
& SRST_TX
) == 0);
1998 * This handles a change in the number of SRSs between the quiesce and
1999 * restart operation of a flow.
2001 if (!SRS_QUIESCED(srs
))
2005 * Signal the SRS to restart itself. Wait for the restart to complete
2006 * Note that we only restart the SRS if it is not marked as
2007 * permanently quiesced.
2009 if (!SRS_QUIESCED_PERMANENT(srs
)) {
2010 mac_srs_signal(srs
, SRS_RESTART
);
2011 mac_srs_quiesce_wait(srs
, SRS_RESTART_DONE
);
2012 mac_srs_clear_flag(srs
, SRS_RESTART_DONE
);
2014 mac_srs_client_poll_restart(srs
->srs_mcip
, srs
);
2017 /* Finally clear the flags to let the packets in */
2020 MAC_RING_UNMARK(mr
, MR_QUIESCE
);
2021 /* In case the ring was stopped, safely restart it */
2022 if (mr
->mr_state
!= MR_INUSE
)
2023 (void) mac_start_ring(mr
);
2025 FLOW_UNMARK(flent
, FE_QUIESCE
);
/*
 * Temporary quiesce of a flow and associated Rx SRS.
 * Please see block comment above mac_rx_classify_flow_rem.
 */
/* ARGSUSED */
int
mac_rx_classify_flow_quiesce(flow_entry_t *flent, void *arg)
{
	int		i;

	for (i = 0; i < flent->fe_rx_srs_cnt; i++) {
		mac_rx_srs_quiesce((mac_soft_ring_set_t *)flent->fe_rx_srs[i],
		    SRS_QUIESCE);
	}
	return (0);
}
/*
 * Restart a flow and associated Rx SRS that has been quiesced temporarily.
 * Please see block comment above mac_rx_classify_flow_rem.
 */
/* ARGSUSED */
int
mac_rx_classify_flow_restart(flow_entry_t *flent, void *arg)
{
	int		i;

	for (i = 0; i < flent->fe_rx_srs_cnt; i++)
		mac_rx_srs_restart((mac_soft_ring_set_t *)flent->fe_rx_srs[i]);

	return (0);
}
void
mac_srs_perm_quiesce(mac_client_handle_t mch, boolean_t on)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;
	flow_entry_t		*flent = mcip->mci_flent;
	mac_impl_t		*mip = mcip->mci_mip;
	mac_soft_ring_set_t	*mac_srs;
	int			i;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));

	for (i = 0; i < flent->fe_rx_srs_cnt; i++) {
		mac_srs = flent->fe_rx_srs[i];
		mutex_enter(&mac_srs->srs_lock);
		if (on)
			mac_srs->srs_state |= SRS_QUIESCE_PERM;
		else
			mac_srs->srs_state &= ~SRS_QUIESCE_PERM;
		mutex_exit(&mac_srs->srs_lock);
	}
}
void
mac_rx_client_quiesce(mac_client_handle_t mch)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;
	mac_impl_t		*mip = mcip->mci_mip;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));

	if (MCIP_DATAPATH_SETUP(mcip)) {
		(void) mac_rx_classify_flow_quiesce(mcip->mci_flent,
		    NULL);
		(void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
		    mac_rx_classify_flow_quiesce, NULL);
	}
}
void
mac_rx_client_restart(mac_client_handle_t mch)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;
	mac_impl_t		*mip = mcip->mci_mip;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));

	if (MCIP_DATAPATH_SETUP(mcip)) {
		(void) mac_rx_classify_flow_restart(mcip->mci_flent, NULL);
		(void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
		    mac_rx_classify_flow_restart, NULL);
	}
}
/*
 * This function only quiesces the Tx SRS and softring worker threads. Callers
 * need to make sure that there aren't any mac client threads doing current or
 * future transmits in the mac before calling this function.
 */
void
mac_tx_srs_quiesce(mac_soft_ring_set_t *srs, uint_t srs_quiesce_flag)
{
	mac_client_impl_t	*mcip = srs->srs_mcip;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	ASSERT(srs->srs_type & SRST_TX);
	ASSERT(srs_quiesce_flag == SRS_CONDEMNED ||
	    srs_quiesce_flag == SRS_QUIESCE);

	/*
	 * Signal the SRS to quiesce itself, and then cv_wait for the
	 * SRS quiesce to complete. The SRS worker thread will wake us
	 * up when the quiesce is complete.
	 */
	mac_srs_signal(srs, srs_quiesce_flag);
	mac_srs_quiesce_wait(srs, srs_quiesce_flag == SRS_QUIESCE ?
	    SRS_QUIESCE_DONE : SRS_CONDEMNED_DONE);
}
void
mac_tx_srs_restart(mac_soft_ring_set_t *srs)
{
	/*
	 * Resizing the fanout could result in creation of new SRSs.
	 * They may not necessarily be in the quiesced state in which
	 * case it need not be restarted.
	 */
	if (!SRS_QUIESCED(srs))
		return;

	mac_srs_signal(srs, SRS_RESTART);
	mac_srs_quiesce_wait(srs, SRS_RESTART_DONE);
	mac_srs_clear_flag(srs, SRS_RESTART_DONE);
}
/*
 * Temporary quiesce of a flow and associated Rx SRS.
 * Please see block comment above mac_rx_srs_quiesce.
 */
/* ARGSUSED */
static int
mac_tx_flow_quiesce(flow_entry_t *flent, void *arg)
{
	/*
	 * The fe_tx_srs is null for a subflow on an interface that is
	 * not plumbed.
	 */
	if (flent->fe_tx_srs != NULL)
		mac_tx_srs_quiesce(flent->fe_tx_srs, SRS_QUIESCE);
	return (0);
}
/* ARGSUSED */
static int
mac_tx_flow_restart(flow_entry_t *flent, void *arg)
{
	/*
	 * The fe_tx_srs is null for a subflow on an interface that is
	 * not plumbed.
	 */
	if (flent->fe_tx_srs != NULL)
		mac_tx_srs_restart(flent->fe_tx_srs);
	return (0);
}
void
i_mac_tx_client_quiesce(mac_client_handle_t mch, uint_t srs_quiesce_flag)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	mac_tx_client_block(mcip);
	if (MCIP_TX_SRS(mcip) != NULL) {
		mac_tx_srs_quiesce(MCIP_TX_SRS(mcip), srs_quiesce_flag);
		(void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
		    mac_tx_flow_quiesce, NULL);
	}
}
void
mac_tx_client_quiesce(mac_client_handle_t mch)
{
	i_mac_tx_client_quiesce(mch, SRS_QUIESCE);
}

void
mac_tx_client_condemn(mac_client_handle_t mch)
{
	i_mac_tx_client_quiesce(mch, SRS_CONDEMNED);
}
void
mac_tx_client_restart(mac_client_handle_t mch)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	mac_tx_client_unblock(mcip);
	if (MCIP_TX_SRS(mcip) != NULL) {
		mac_tx_srs_restart(MCIP_TX_SRS(mcip));
		(void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
		    mac_tx_flow_restart, NULL);
	}
}
void
mac_tx_client_flush(mac_client_impl_t *mcip)
{
	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	mac_tx_client_quiesce((mac_client_handle_t)mcip);
	mac_tx_client_restart((mac_client_handle_t)mcip);
}

void
mac_client_quiesce(mac_client_impl_t *mcip)
{
	mac_rx_client_quiesce((mac_client_handle_t)mcip);
	mac_tx_client_quiesce((mac_client_handle_t)mcip);
}

void
mac_client_restart(mac_client_impl_t *mcip)
{
	mac_rx_client_restart((mac_client_handle_t)mcip);
	mac_tx_client_restart((mac_client_handle_t)mcip);
}
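/*
 * Taken together, a caller that must idle an entire client's datapath (both
 * Rx and Tx) around a control operation can use the wrappers above, for
 * example (sketch only, with the mac perimeter assumed held):
 *
 *	mac_client_quiesce(mcip);
 *	... perform the control operation ...
 *	mac_client_restart(mcip);
 */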
/*
 * Allocate a minor number.
 */
minor_t
mac_minor_hold(boolean_t sleep)
{
	minor_t	minor;

	/*
	 * Grab a value from the arena.
	 */
	atomic_inc_32(&minor_count);

	if (sleep)
		minor = (uint_t)id_alloc(minor_ids);
	else
		minor = (uint_t)id_alloc_nosleep(minor_ids);

	if (minor == 0) {
		atomic_dec_32(&minor_count);
		return (0);
	}

	return (minor);
}

/*
 * Release a previously allocated minor number.
 */
void
mac_minor_rele(minor_t minor)
{
	/*
	 * Return the value to the arena.
	 */
	id_free(minor_ids, minor);
	atomic_dec_32(&minor_count);
}

uint32_t
mac_no_notification(mac_handle_t mh)
{
	mac_impl_t *mip = (mac_impl_t *)mh;

	return (((mip->mi_state_flags & MIS_LEGACY) != 0) ?
	    mip->mi_capab_legacy.ml_unsup_note : 0);
}
/*
 * Prevent any new opens of this mac in preparation for unregister.
 */
static int
i_mac_disable(mac_impl_t *mip)
{
	mac_client_impl_t	*mcip;

	rw_enter(&i_mac_impl_lock, RW_WRITER);
	if (mip->mi_state_flags & MIS_DISABLED) {
		/* Already disabled, return success */
		rw_exit(&i_mac_impl_lock);
		return (0);
	}
	/*
	 * See if there are any other references to this mac_t (e.g., VLAN's).
	 * If so return failure. If all the other checks below pass, then
	 * set mi_disabled atomically under the i_mac_impl_lock to prevent
	 * any new VLAN's from being created or new mac client opens of this
	 * mac end point.
	 */
	if (mip->mi_ref > 0) {
		rw_exit(&i_mac_impl_lock);
		return (EBUSY);
	}

	/*
	 * mac clients must delete all multicast groups they join before
	 * closing. bcast groups are reference counted, the last client
	 * to delete the group will wait till the group is physically
	 * deleted. Since all clients have closed this mac end point
	 * mi_bcast_ngrps must be zero at this point.
	 */
	ASSERT(mip->mi_bcast_ngrps == 0);

	/*
	 * Don't let go of this if it has some flows.
	 * All other code guarantees no flows are added to a disabled
	 * mac, therefore it is sufficient to check for the flow table
	 * only here.
	 */
	mcip = mac_primary_client_handle(mip);
	if ((mcip != NULL) && mac_link_has_flows((mac_client_handle_t)mcip)) {
		rw_exit(&i_mac_impl_lock);
		return (ENOTEMPTY);
	}

	mip->mi_state_flags |= MIS_DISABLED;
	rw_exit(&i_mac_impl_lock);
	return (0);
}
int
mac_disable_nowait(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	int		err;

	if ((err = i_mac_perim_enter_nowait(mip)) != 0)
		return (err);
	err = i_mac_disable(mip);
	i_mac_perim_exit(mip);
	return (err);
}

int
mac_disable(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	int		err;

	i_mac_perim_enter(mip);
	err = i_mac_disable(mip);
	i_mac_perim_exit(mip);

	/*
	 * Clean up notification thread and wait for it to exit.
	 */
	if (err == 0)
		i_mac_notify_exit(mip);

	return (err);
}
/*
 * Called when the MAC instance has a non-empty flow table, to de-multiplex
 * incoming packets to the right flow.
 * The MAC's rw lock is assumed held as a READER.
 */
/* ARGSUSED */
static mblk_t *
mac_rx_classify(mac_impl_t *mip, mac_resource_handle_t mrh, mblk_t *mp)
{
	flow_entry_t	*flent = NULL;
	uint_t		flags = FLOW_INBOUND;
	int		err;

	/*
	 * If the mac is a port of an aggregation, pass FLOW_IGNORE_VLAN
	 * to mac_flow_lookup() so that the VLAN packets can be successfully
	 * passed to the non-VLAN aggregation flows.
	 *
	 * Note that there is possibly a race between this and
	 * mac_unicast_remove/add() and VLAN packets could be incorrectly
	 * classified to non-VLAN flows of non-aggregation mac clients. These
	 * VLAN packets will be then filtered out by the mac module.
	 */
	if ((mip->mi_state_flags & MIS_EXCLUSIVE) != 0)
		flags |= FLOW_IGNORE_VLAN;

	err = mac_flow_lookup(mip->mi_flow_tab, mp, flags, &flent);
	if (err != 0) {
		/* no registered receive function */
		return (mp);
	} else {
		mac_client_impl_t	*mcip;

		/*
		 * This flent might just be an additional one on the MAC client,
		 * i.e. for classification purposes (different fdesc), however
		 * the resources, SRS et. al., are in the mci_flent, so if
		 * this isn't the mci_flent, we need to get it.
		 */
		if ((mcip = flent->fe_mcip) != NULL &&
		    mcip->mci_flent != flent) {
			FLOW_REFRELE(flent);
			flent = mcip->mci_flent;
			FLOW_TRY_REFHOLD(flent, err);
			if (err != 0)
				return (mp);
		}
		(flent->fe_cb_fn)(flent->fe_cb_arg1, flent->fe_cb_arg2, mp,
		    B_FALSE);
		FLOW_REFRELE(flent);
	}
	return (NULL);
}
mblk_t *
mac_rx_flow(mac_handle_t mh, mac_resource_handle_t mrh, mblk_t *mp_chain)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	mblk_t		*bp, *bp1, **bpp, *list = NULL;

	/*
	 * We walk the chain and attempt to classify each packet.
	 * The packets that couldn't be classified will be returned
	 * back to the caller.
	 */
	bp = mp_chain;
	bpp = &list;
	while (bp != NULL) {
		bp1 = bp;
		bp = bp->b_next;
		bp1->b_next = NULL;

		if (mac_rx_classify(mip, mrh, bp1) != NULL) {
static int
mac_tx_flow_srs_wakeup(flow_entry_t *flent, void *arg)
{
	mac_ring_handle_t ring = arg;

	if (flent->fe_tx_srs)
		mac_tx_srs_wakeup(flent->fe_tx_srs, ring);
	return (0);
}
void
i_mac_tx_srs_notify(mac_impl_t *mip, mac_ring_handle_t ring)
{
	mac_client_impl_t	*cclient;
	mac_soft_ring_set_t	*mac_srs;

	/*
	 * After grabbing the mi_rw_lock, the list of clients can't change.
	 * If there are any clients mi_disabled must be B_FALSE and can't
	 * get set since there are clients. If there aren't any clients we
	 * don't do anything. In any case the mip has to be valid. The driver
	 * must make sure that it goes single threaded (with respect to mac
	 * calls) and wait for all pending mac calls to finish before calling
	 * mac_unregister.
	 */
	rw_enter(&i_mac_impl_lock, RW_READER);
	if (mip->mi_state_flags & MIS_DISABLED) {
		rw_exit(&i_mac_impl_lock);
		return;
	}

	/*
	 * Get MAC tx srs from walking mac_client_handle list.
	 */
	rw_enter(&mip->mi_rw_lock, RW_READER);
	for (cclient = mip->mi_clients_list; cclient != NULL;
	    cclient = cclient->mci_client_next) {
		if ((mac_srs = MCIP_TX_SRS(cclient)) != NULL) {
			mac_tx_srs_wakeup(mac_srs, ring);
		} else {
			/*
			 * Aggr opens underlying ports in exclusive mode
			 * and registers flow control callbacks using
			 * mac_tx_client_notify(). When opened in
			 * exclusive mode, Tx SRS won't be created
			 * during mac_unicast_add().
			 */
			if (cclient->mci_state_flags & MCIS_EXCLUSIVE) {
				mac_tx_invoke_callbacks(cclient,
				    (mac_tx_cookie_t)ring);
			}
		}
		(void) mac_flow_walk(cclient->mci_subflow_tab,
		    mac_tx_flow_srs_wakeup, ring);
	}
	rw_exit(&mip->mi_rw_lock);
	rw_exit(&i_mac_impl_lock);
}
void
mac_multicast_refresh(mac_handle_t mh, mac_multicst_t refresh, void *arg,
    boolean_t add)
{
	mac_impl_t *mip = (mac_impl_t *)mh;

	i_mac_perim_enter((mac_impl_t *)mh);
	/*
	 * If no specific refresh function was given then default to the
	 * driver's m_multicst entry point.
	 */
	if (refresh == NULL) {
		refresh = mip->mi_multicst;
		arg = mip->mi_driver;
	}

	mac_bcast_refresh(mip, refresh, arg, add);
	i_mac_perim_exit((mac_impl_t *)mh);
}
void
mac_promisc_refresh(mac_handle_t mh, mac_setpromisc_t refresh, void *arg)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	/*
	 * If no specific refresh function was given then default to the
	 * driver's m_promisc entry point.
	 */
	if (refresh == NULL) {
		refresh = mip->mi_setpromisc;
		arg = mip->mi_driver;
	}
	ASSERT(refresh != NULL);

	/*
	 * Call the refresh function with the current promiscuity.
	 */
	refresh(arg, (mip->mi_devpromisc != 0));
}
/*
 * The mac client requests that the mac not change its margin size to
 * be less than the specified value. If "current" is B_TRUE, then the client
 * requests the mac not to change its margin size to be smaller than the
 * current size. Further, return the current margin size value in this case.
 *
 * We keep every requested size in an ordered list from largest to smallest.
 */
int
mac_margin_add(mac_handle_t mh, uint32_t *marginp, boolean_t current)
{
	mac_impl_t		*mip = (mac_impl_t *)mh;
	mac_margin_req_t	**pp, *p;
	int			err = 0;

	rw_enter(&(mip->mi_rw_lock), RW_WRITER);
	if (current)
		*marginp = mip->mi_margin;

	/*
	 * If the current margin value cannot satisfy the margin requested,
	 * return ENOTSUP directly.
	 */
	if (*marginp > mip->mi_margin) {
		err = ENOTSUP;
		goto done;
	}

	/*
	 * Check whether the given margin is already in the list. If so,
	 * bump the reference count.
	 */
	for (pp = &mip->mi_mmrp; (p = *pp) != NULL; pp = &p->mmr_nextp) {
		if (p->mmr_margin == *marginp) {
			/*
			 * The margin requested is already in the list,
			 * so just bump the reference count.
			 */
			p->mmr_ref++;
			goto done;
		}
		if (p->mmr_margin < *marginp)
			break;
	}

	p = kmem_zalloc(sizeof (mac_margin_req_t), KM_SLEEP);
	p->mmr_margin = *marginp;
	p->mmr_ref++;
	p->mmr_nextp = *pp;
	*pp = p;

done:
	rw_exit(&(mip->mi_rw_lock));
	return (err);
}
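/*
 * Illustrative example (not taken from the code): if clients have requested
 * margins of 22, 18 and 4, mi_mmrp holds them ordered 22 -> 18 -> 4, and
 * mac_margin_update(mh, m) below only succeeds when m is >= 22, the largest
 * outstanding request.
 */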
/*
 * The mac client requests to cancel its previous mac_margin_add() request.
 * We remove the requested margin size from the list.
 */
int
mac_margin_remove(mac_handle_t mh, uint32_t margin)
{
	mac_impl_t		*mip = (mac_impl_t *)mh;
	mac_margin_req_t	**pp, *p;
	int			err = 0;

	rw_enter(&(mip->mi_rw_lock), RW_WRITER);
	/*
	 * Find the entry in the list for the given margin.
	 */
	for (pp = &(mip->mi_mmrp); (p = *pp) != NULL; pp = &(p->mmr_nextp)) {
		if (p->mmr_margin == margin) {
			if (--p->mmr_ref == 0)
				break;

			/*
			 * There is still a reference to this address so
			 * there's nothing more to do.
			 */
			goto done;
		}
	}

	/*
	 * We did not find an entry for the given margin.
	 */
	if (p == NULL) {
		err = ENOENT;
		goto done;
	}

	ASSERT(p->mmr_ref == 0);

	/*
	 * Remove it from the list.
	 */
	*pp = p->mmr_nextp;
	kmem_free(p, sizeof (mac_margin_req_t));
done:
	rw_exit(&(mip->mi_rw_lock));
	return (err);
}
boolean_t
mac_margin_update(mac_handle_t mh, uint32_t margin)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	uint32_t	margin_needed = 0;

	rw_enter(&(mip->mi_rw_lock), RW_WRITER);

	if (mip->mi_mmrp != NULL)
		margin_needed = mip->mi_mmrp->mmr_margin;

	if (margin_needed <= margin)
		mip->mi_margin = margin;

	rw_exit(&(mip->mi_rw_lock));

	if (margin_needed <= margin)
		i_mac_notify(mip, MAC_NOTE_MARGIN);

	return (margin_needed <= margin);
}
/*
 * MAC clients use this interface to request that a MAC device not change its
 * MTU below the specified amount. At this time, that amount must be within the
 * range of the device's current minimum and the device's current maximum,
 * e.g. a client cannot request a 3000 byte MTU when the device's MTU is
 * currently 2000.
 *
 * If "current" is set to B_TRUE, then the request is simply to reserve the
 * current underlying mac's maximum for this mac client and return it in mtup.
 */
int
mac_mtu_add(mac_handle_t mh, uint32_t *mtup, boolean_t current)
{
	mac_impl_t		*mip = (mac_impl_t *)mh;
	mac_mtu_req_t		*prev, *cur;
	mac_propval_range_t	mpr;
	int			err;

	i_mac_perim_enter(mip);
	rw_enter(&mip->mi_rw_lock, RW_WRITER);

	if (current == B_TRUE)
		*mtup = mip->mi_sdu_max;

	err = mac_prop_info(mh, MAC_PROP_MTU, "mtu", NULL, 0, &mpr, NULL);
	if (err != 0) {
		rw_exit(&mip->mi_rw_lock);
		i_mac_perim_exit(mip);
		return (err);
	}

	if (*mtup > mip->mi_sdu_max ||
	    *mtup < mpr.mpr_range_uint32[0].mpur_min) {
		rw_exit(&mip->mi_rw_lock);
		i_mac_perim_exit(mip);
		return (ENOTSUP);
	}

	prev = NULL;
	for (cur = mip->mi_mtrp; cur != NULL; cur = cur->mtr_nextp) {
		if (*mtup == cur->mtr_mtu) {
			cur->mtr_ref++;
			rw_exit(&mip->mi_rw_lock);
			i_mac_perim_exit(mip);
			return (0);
		}

		if (*mtup > cur->mtr_mtu)
			break;

		prev = cur;
	}

	cur = kmem_alloc(sizeof (mac_mtu_req_t), KM_SLEEP);
	cur->mtr_mtu = *mtup;
	cur->mtr_ref = 1;
	if (prev != NULL) {
		cur->mtr_nextp = prev->mtr_nextp;
		prev->mtr_nextp = cur;
	} else {
		cur->mtr_nextp = mip->mi_mtrp;
		mip->mi_mtrp = cur;
	}

	rw_exit(&mip->mi_rw_lock);
	i_mac_perim_exit(mip);
	return (0);
}
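/*
 * A typical client usage pattern (sketch only; error handling is elided and
 * the reserved value is illustrative):
 *
 *	uint32_t mtu;
 *
 *	if (mac_mtu_add(mh, &mtu, B_TRUE) == 0) {
 *		... rely on the device MTU not dropping below 'mtu' ...
 *		mac_mtu_remove(mh, mtu);
 *	}
 */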
void
mac_mtu_remove(mac_handle_t mh, uint32_t mtu)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	mac_mtu_req_t	*cur, *prev;

	i_mac_perim_enter(mip);
	rw_enter(&mip->mi_rw_lock, RW_WRITER);

	prev = NULL;
	for (cur = mip->mi_mtrp; cur != NULL; cur = cur->mtr_nextp) {
		if (cur->mtr_mtu == mtu) {
			ASSERT(cur->mtr_ref > 0);
			cur->mtr_ref--;
			if (cur->mtr_ref == 0) {
				if (prev == NULL)
					mip->mi_mtrp = cur->mtr_nextp;
				else
					prev->mtr_nextp = cur->mtr_nextp;
				kmem_free(cur, sizeof (mac_mtu_req_t));
			}
			rw_exit(&mip->mi_rw_lock);
			i_mac_perim_exit(mip);
			return;
		}

		prev = cur;
	}

	rw_exit(&mip->mi_rw_lock);
	i_mac_perim_exit(mip);
}
/*
 * MAC Type Plugin functions.
 */

mactype_t *
mactype_getplugin(const char *pname)
{
	mactype_t	*mtype = NULL;
	boolean_t	tried_modload = B_FALSE;

	mutex_enter(&i_mactype_lock);

find_registered_mactype:
	if (mod_hash_find(i_mactype_hash, (mod_hash_key_t)pname,
	    (mod_hash_val_t *)&mtype) != 0) {
		if (!tried_modload) {
			/*
			 * If the plugin has not yet been loaded, then
			 * attempt to load it now. If modload() succeeds,
			 * the plugin should have registered using
			 * mactype_register(), in which case we can go back
			 * and attempt to find it again.
			 */
			if (modload(MACTYPE_KMODDIR, (char *)pname) != -1) {
				tried_modload = B_TRUE;
				goto find_registered_mactype;
			}
		}
	} else {
		/*
		 * Note that there's no danger that the plugin we've loaded
		 * could be unloaded between the modload() step and the
		 * reference count bump here, as we're holding
		 * i_mactype_lock, which mactype_unregister() also holds.
		 */
		atomic_inc_32(&mtype->mt_ref);
	}

	mutex_exit(&i_mactype_lock);
	return (mtype);
}
mactype_register_t *
mactype_alloc(uint_t mactype_version)
{
	mactype_register_t *mtrp;

	/*
	 * Make sure there isn't a version mismatch between the plugin and
	 * the framework. In the future, if multiple versions are
	 * supported, this check could become more sophisticated.
	 */
	if (mactype_version != MACTYPE_VERSION)
		return (NULL);

	mtrp = kmem_zalloc(sizeof (mactype_register_t), KM_SLEEP);
	mtrp->mtr_version = mactype_version;
	return (mtrp);
}

void
mactype_free(mactype_register_t *mtrp)
{
	kmem_free(mtrp, sizeof (mactype_register_t));
}
int
mactype_register(mactype_register_t *mtrp)
{
	mactype_t	*mtp;
	mactype_ops_t	*ops = mtrp->mtr_ops;

	/* Do some sanity checking before we register this MAC type. */
	if (mtrp->mtr_ident == NULL || ops == NULL)
		return (EINVAL);

	/*
	 * Verify that all mandatory callbacks are set in the ops
	 * vector.
	 */
	if (ops->mtops_unicst_verify == NULL ||
	    ops->mtops_multicst_verify == NULL ||
	    ops->mtops_sap_verify == NULL ||
	    ops->mtops_header == NULL ||
	    ops->mtops_header_info == NULL) {
		return (EINVAL);
	}

	mtp = kmem_zalloc(sizeof (*mtp), KM_SLEEP);
	mtp->mt_ident = mtrp->mtr_ident;
	mtp->mt_ops = *ops;
	mtp->mt_type = mtrp->mtr_mactype;
	mtp->mt_nativetype = mtrp->mtr_nativetype;
	mtp->mt_addr_length = mtrp->mtr_addrlen;
	if (mtrp->mtr_brdcst_addr != NULL) {
		mtp->mt_brdcst_addr = kmem_alloc(mtrp->mtr_addrlen, KM_SLEEP);
		bcopy(mtrp->mtr_brdcst_addr, mtp->mt_brdcst_addr,
		    mtrp->mtr_addrlen);
	}

	mtp->mt_stats = mtrp->mtr_stats;
	mtp->mt_statcount = mtrp->mtr_statcount;

	mtp->mt_mapping = mtrp->mtr_mapping;
	mtp->mt_mappingcount = mtrp->mtr_mappingcount;

	if (mod_hash_insert(i_mactype_hash,
	    (mod_hash_key_t)mtp->mt_ident, (mod_hash_val_t)mtp) != 0) {
		kmem_free(mtp->mt_brdcst_addr, mtp->mt_addr_length);
		kmem_free(mtp, sizeof (*mtp));
		return (EEXIST);
	}
	return (0);
}
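/*
 * Sketch of how a MAC-type plugin module is expected to register itself
 * from its _init() routine (illustrative only; the ops vector and the
 * identifier shown are placeholders for the plugin's own definitions):
 *
 *	mactype_register_t *mtrp;
 *
 *	if ((mtrp = mactype_alloc(MACTYPE_VERSION)) == NULL)
 *		return (ENOTSUP);
 *	mtrp->mtr_ident = "myplugin";
 *	mtrp->mtr_ops = &myplugin_type_ops;
 *	... fill in the remaining mtr_* fields ...
 *	err = mactype_register(mtrp);
 *	mactype_free(mtrp);
 */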
int
mactype_unregister(const char *ident)
{
	mactype_t	*mtp;
	mod_hash_val_t	val;
	int		err;

	/*
	 * Let's not allow MAC drivers to use this plugin while we're
	 * trying to unregister it. Holding i_mactype_lock also prevents a
	 * plugin from unregistering while a MAC driver is attempting to
	 * hold a reference to it in i_mactype_getplugin().
	 */
	mutex_enter(&i_mactype_lock);

	if ((err = mod_hash_find(i_mactype_hash, (mod_hash_key_t)ident,
	    (mod_hash_val_t *)&mtp)) != 0) {
		/* A plugin is trying to unregister, but it never registered. */
		err = ENXIO;
		goto done;
	}

	if (mtp->mt_ref != 0) {
		err = EBUSY;
		goto done;
	}

	err = mod_hash_remove(i_mactype_hash, (mod_hash_key_t)ident, &val);
	ASSERT(err == 0);
	if (err != 0) {
		/* This should never happen, thus the ASSERT() above. */
		err = EINVAL;
		goto done;
	}
	ASSERT(mtp == (mactype_t *)val);

	if (mtp->mt_brdcst_addr != NULL)
		kmem_free(mtp->mt_brdcst_addr, mtp->mt_addr_length);
	kmem_free(mtp, sizeof (mactype_t));
done:
	mutex_exit(&i_mactype_lock);
	return (err);
}
 * Checks the size of the value specified for a property as
 * part of a property operation. Returns B_TRUE if the size is
 * correct, B_FALSE otherwise.
2964 mac_prop_check_size(mac_prop_id_t id
, uint_t valsize
, boolean_t is_range
)
2969 return (valsize
>= sizeof (mac_propval_range_t
));
2973 minsize
= sizeof (dld_ioc_zid_t
);
2975 case MAC_PROP_AUTOPUSH
:
2977 minsize
= sizeof (struct dlautopush
);
2979 case MAC_PROP_TAGMODE
:
2980 minsize
= sizeof (link_tagmode_t
);
2982 case MAC_PROP_RESOURCE
:
2983 case MAC_PROP_RESOURCE_EFF
:
2984 minsize
= sizeof (mac_resource_props_t
);
2986 case MAC_PROP_DUPLEX
:
2987 minsize
= sizeof (link_duplex_t
);
2989 case MAC_PROP_SPEED
:
2990 minsize
= sizeof (uint64_t);
2992 case MAC_PROP_STATUS
:
2993 minsize
= sizeof (link_state_t
);
2995 case MAC_PROP_AUTONEG
:
2996 case MAC_PROP_EN_AUTONEG
:
2997 minsize
= sizeof (uint8_t);
3000 case MAC_PROP_LLIMIT
:
3001 case MAC_PROP_LDECAY
:
3002 minsize
= sizeof (uint32_t);
3004 case MAC_PROP_FLOWCTRL
:
3005 minsize
= sizeof (link_flowctrl_t
);
3007 case MAC_PROP_ADV_10GFDX_CAP
:
3008 case MAC_PROP_EN_10GFDX_CAP
:
3009 case MAC_PROP_ADV_1000HDX_CAP
:
3010 case MAC_PROP_EN_1000HDX_CAP
:
3011 case MAC_PROP_ADV_100FDX_CAP
:
3012 case MAC_PROP_EN_100FDX_CAP
:
3013 case MAC_PROP_ADV_100HDX_CAP
:
3014 case MAC_PROP_EN_100HDX_CAP
:
3015 case MAC_PROP_ADV_10FDX_CAP
:
3016 case MAC_PROP_EN_10FDX_CAP
:
3017 case MAC_PROP_ADV_10HDX_CAP
:
3018 case MAC_PROP_EN_10HDX_CAP
:
3019 case MAC_PROP_ADV_100T4_CAP
:
3020 case MAC_PROP_EN_100T4_CAP
:
3021 minsize
= sizeof (uint8_t);
3024 minsize
= sizeof (uint16_t);
3026 case MAC_PROP_IPTUN_HOPLIMIT
:
3027 minsize
= sizeof (uint32_t);
3029 case MAC_PROP_IPTUN_ENCAPLIMIT
:
3030 minsize
= sizeof (uint32_t);
3032 case MAC_PROP_MAX_TX_RINGS_AVAIL
:
3033 case MAC_PROP_MAX_RX_RINGS_AVAIL
:
3034 case MAC_PROP_MAX_RXHWCLNT_AVAIL
:
3035 case MAC_PROP_MAX_TXHWCLNT_AVAIL
:
3036 minsize
= sizeof (uint_t
);
3038 case MAC_PROP_WL_ESSID
:
3039 minsize
= sizeof (wl_linkstatus_t
);
3041 case MAC_PROP_WL_BSSID
:
3042 minsize
= sizeof (wl_bssid_t
);
3044 case MAC_PROP_WL_BSSTYPE
:
3045 minsize
= sizeof (wl_bss_type_t
);
3047 case MAC_PROP_WL_LINKSTATUS
:
3048 minsize
= sizeof (wl_linkstatus_t
);
3050 case MAC_PROP_WL_DESIRED_RATES
:
3051 minsize
= sizeof (wl_rates_t
);
3053 case MAC_PROP_WL_SUPPORTED_RATES
:
3054 minsize
= sizeof (wl_rates_t
);
3056 case MAC_PROP_WL_AUTH_MODE
:
3057 minsize
= sizeof (wl_authmode_t
);
3059 case MAC_PROP_WL_ENCRYPTION
:
3060 minsize
= sizeof (wl_encryption_t
);
3062 case MAC_PROP_WL_RSSI
:
3063 minsize
= sizeof (wl_rssi_t
);
3065 case MAC_PROP_WL_PHY_CONFIG
:
3066 minsize
= sizeof (wl_phy_conf_t
);
3068 case MAC_PROP_WL_CAPABILITY
:
3069 minsize
= sizeof (wl_capability_t
);
3071 case MAC_PROP_WL_WPA
:
3072 minsize
= sizeof (wl_wpa_t
);
3074 case MAC_PROP_WL_SCANRESULTS
:
3075 minsize
= sizeof (wl_wpa_ess_t
);
3077 case MAC_PROP_WL_POWER_MODE
:
3078 minsize
= sizeof (wl_ps_mode_t
);
3080 case MAC_PROP_WL_RADIO
:
3081 minsize
= sizeof (wl_radio_t
);
3083 case MAC_PROP_WL_ESS_LIST
:
3084 minsize
= sizeof (wl_ess_list_t
);
3086 case MAC_PROP_WL_KEY_TAB
:
3087 minsize
= sizeof (wl_wep_key_tab_t
);
3089 case MAC_PROP_WL_CREATE_IBSS
:
3090 minsize
= sizeof (wl_create_ibss_t
);
3092 case MAC_PROP_WL_SETOPTIE
:
3093 minsize
= sizeof (wl_wpa_ie_t
);
3095 case MAC_PROP_WL_DELKEY
:
3096 minsize
= sizeof (wl_del_key_t
);
3098 case MAC_PROP_WL_KEY
:
3099 minsize
= sizeof (wl_key_t
);
3101 case MAC_PROP_WL_MLME
:
3102 minsize
= sizeof (wl_mlme_t
);
3106 return (valsize
>= minsize
);
3110 * mac_set_prop() sets MAC or hardware driver properties:
3112 * - MAC-managed properties such as resource properties include maxbw,
3113 * priority, and cpu binding list, as well as the default port VID
3114 * used by bridging. These properties are consumed by the MAC layer
3115 * itself and not passed down to the driver. For resource control
3116 * properties, this function invokes mac_set_resources() which will
3117 * cache the property value in mac_impl_t and may call
3118 * mac_client_set_resource() to update property value of the primary
3119 * mac client, if it exists.
3121 * - Properties which act on the hardware and must be passed to the
3122 * driver, such as MTU, through the driver's mc_setprop() entry point.
3125 mac_set_prop(mac_handle_t mh
, mac_prop_id_t id
, char *name
, void *val
,
3129 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
3131 ASSERT(MAC_PERIM_HELD(mh
));
3134 case MAC_PROP_RESOURCE
: {
3135 mac_resource_props_t
*mrp
;
3137 /* call mac_set_resources() for MAC properties */
3138 ASSERT(valsize
>= sizeof (mac_resource_props_t
));
3139 mrp
= kmem_zalloc(sizeof (*mrp
), KM_SLEEP
);
3140 bcopy(val
, mrp
, sizeof (*mrp
));
3141 err
= mac_set_resources(mh
, mrp
);
3142 kmem_free(mrp
, sizeof (*mrp
));
3147 ASSERT(valsize
>= sizeof (uint16_t));
3148 if (mip
->mi_state_flags
& MIS_IS_VNIC
)
3150 err
= mac_set_pvid(mh
, *(uint16_t *)val
);
3153 case MAC_PROP_MTU
: {
3156 ASSERT(valsize
>= sizeof (uint32_t));
3157 bcopy(val
, &mtu
, sizeof (mtu
));
3158 err
= mac_set_mtu(mh
, mtu
, NULL
);
3162 case MAC_PROP_LLIMIT
:
3163 case MAC_PROP_LDECAY
: {
3166 if (valsize
< sizeof (learnval
) ||
3167 (mip
->mi_state_flags
& MIS_IS_VNIC
))
3169 bcopy(val
, &learnval
, sizeof (learnval
));
3170 if (learnval
== 0 && id
== MAC_PROP_LDECAY
)
3172 if (id
== MAC_PROP_LLIMIT
)
3173 mip
->mi_llimit
= learnval
;
3175 mip
->mi_ldecay
= learnval
;
3181 /* For other driver properties, call driver's callback */
3182 if (mip
->mi_callbacks
->mc_callbacks
& MC_SETPROP
) {
3183 err
= mip
->mi_callbacks
->mc_setprop(mip
->mi_driver
,
3184 name
, id
, valsize
, val
);
3191 * mac_get_prop() gets MAC or device driver properties.
3193 * If the property is a driver property, mac_get_prop() calls driver's callback
3194 * entry point to get it.
3195 * If the property is a MAC property, mac_get_prop() invokes mac_get_resources()
3196 * which returns the cached value in mac_impl_t.
3199 mac_get_prop(mac_handle_t mh
, mac_prop_id_t id
, char *name
, void *val
,
3203 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
3207 bzero(val
, valsize
);
3210 case MAC_PROP_RESOURCE
: {
3211 mac_resource_props_t
*mrp
;
3213 /* If mac property, read from cache */
3214 ASSERT(valsize
>= sizeof (mac_resource_props_t
));
3215 mrp
= kmem_zalloc(sizeof (*mrp
), KM_SLEEP
);
3216 mac_get_resources(mh
, mrp
);
3217 bcopy(mrp
, val
, sizeof (*mrp
));
3218 kmem_free(mrp
, sizeof (*mrp
));
3221 case MAC_PROP_RESOURCE_EFF
: {
3222 mac_resource_props_t
*mrp
;
3224 /* If mac effective property, read from client */
3225 ASSERT(valsize
>= sizeof (mac_resource_props_t
));
3226 mrp
= kmem_zalloc(sizeof (*mrp
), KM_SLEEP
);
3227 mac_get_effective_resources(mh
, mrp
);
3228 bcopy(mrp
, val
, sizeof (*mrp
));
3229 kmem_free(mrp
, sizeof (*mrp
));
3234 ASSERT(valsize
>= sizeof (uint16_t));
3235 if (mip
->mi_state_flags
& MIS_IS_VNIC
)
3237 *(uint16_t *)val
= mac_get_pvid(mh
);
3240 case MAC_PROP_LLIMIT
:
3241 case MAC_PROP_LDECAY
:
3242 ASSERT(valsize
>= sizeof (uint32_t));
3243 if (mip
->mi_state_flags
& MIS_IS_VNIC
)
3245 if (id
== MAC_PROP_LLIMIT
)
3246 bcopy(&mip
->mi_llimit
, val
, sizeof (mip
->mi_llimit
));
3248 bcopy(&mip
->mi_ldecay
, val
, sizeof (mip
->mi_ldecay
));
3251 case MAC_PROP_MTU
: {
3254 ASSERT(valsize
>= sizeof (uint32_t));
3255 mac_sdu_get2(mh
, NULL
, &sdu
, NULL
);
3256 bcopy(&sdu
, val
, sizeof (sdu
));
3260 case MAC_PROP_STATUS
: {
3261 link_state_t link_state
;
3263 if (valsize
< sizeof (link_state
))
3265 link_state
= mac_link_get(mh
);
3266 bcopy(&link_state
, val
, sizeof (link_state
));
3271 case MAC_PROP_MAX_RX_RINGS_AVAIL
:
3272 case MAC_PROP_MAX_TX_RINGS_AVAIL
:
3273 ASSERT(valsize
>= sizeof (uint_t
));
3274 rings
= id
== MAC_PROP_MAX_RX_RINGS_AVAIL
?
3275 mac_rxavail_get(mh
) : mac_txavail_get(mh
);
3276 bcopy(&rings
, val
, sizeof (uint_t
));
3279 case MAC_PROP_MAX_RXHWCLNT_AVAIL
:
3280 case MAC_PROP_MAX_TXHWCLNT_AVAIL
:
3281 ASSERT(valsize
>= sizeof (uint_t
));
3282 vlinks
= id
== MAC_PROP_MAX_RXHWCLNT_AVAIL
?
3283 mac_rxhwlnksavail_get(mh
) : mac_txhwlnksavail_get(mh
);
3284 bcopy(&vlinks
, val
, sizeof (uint_t
));
3287 case MAC_PROP_RXRINGSRANGE
:
3288 case MAC_PROP_TXRINGSRANGE
:
 * The values for these properties are returned through
 * the MAC_PROP_RESOURCE property.
3300 /* If driver property, request from driver */
3301 if (mip
->mi_callbacks
->mc_callbacks
& MC_GETPROP
) {
3302 err
= mip
->mi_callbacks
->mc_getprop(mip
->mi_driver
, name
, id
,
3310 * Helper function to initialize the range structure for use in
3311 * mac_get_prop. If the type can be other than uint32, we can
3312 * pass that as an arg.
3315 _mac_set_range(mac_propval_range_t
*range
, uint32_t min
, uint32_t max
)
3317 range
->mpr_count
= 1;
3318 range
->mpr_type
= MAC_PROPVAL_UINT32
;
3319 range
->mpr_range_uint32
[0].mpur_min
= min
;
3320 range
->mpr_range_uint32
[0].mpur_max
= max
;
3324 * Returns information about the specified property, such as default
3325 * values or permissions.
3328 mac_prop_info(mac_handle_t mh
, mac_prop_id_t id
, char *name
,
3329 void *default_val
, uint_t default_size
, mac_propval_range_t
*range
,
3332 mac_prop_info_state_t state
;
3333 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
 * A property is read/write by default unless the driver says otherwise.
3341 *perm
= MAC_PROP_PERM_RW
;
3343 if (default_val
!= NULL
)
3344 bzero(default_val
, default_size
);
3347 * First, handle framework properties for which we don't need to
3348 * involve the driver.
3351 case MAC_PROP_RESOURCE
:
3353 case MAC_PROP_LLIMIT
:
3354 case MAC_PROP_LDECAY
:
3357 case MAC_PROP_MAX_RX_RINGS_AVAIL
:
3358 case MAC_PROP_MAX_TX_RINGS_AVAIL
:
3359 case MAC_PROP_MAX_RXHWCLNT_AVAIL
:
3360 case MAC_PROP_MAX_TXHWCLNT_AVAIL
:
3362 *perm
= MAC_PROP_PERM_READ
;
3365 case MAC_PROP_RXRINGSRANGE
:
3366 case MAC_PROP_TXRINGSRANGE
:
3368 * Currently, we support range for RX and TX rings properties.
3369 * When we extend this support to maxbw, cpus and priority,
3370 * we should move this to mac_get_resources.
3371 * There is no default value for RX or TX rings.
3373 if ((mip
->mi_state_flags
& MIS_IS_VNIC
) &&
3374 mac_is_vnic_primary(mh
)) {
3376 * We don't support setting rings for a VLAN
3377 * data link because it shares its ring with the
3378 * primary MAC client.
3381 *perm
= MAC_PROP_PERM_READ
;
3383 range
->mpr_count
= 0;
3384 } else if (range
!= NULL
) {
3385 if (mip
->mi_state_flags
& MIS_IS_VNIC
)
3386 mh
= mac_get_lower_mac_handle(mh
);
3387 mip
= (mac_impl_t
*)mh
;
3388 if ((id
== MAC_PROP_RXRINGSRANGE
&&
3389 mip
->mi_rx_group_type
== MAC_GROUP_TYPE_STATIC
) ||
3390 (id
== MAC_PROP_TXRINGSRANGE
&&
3391 mip
->mi_tx_group_type
== MAC_GROUP_TYPE_STATIC
)) {
3392 if (id
== MAC_PROP_RXRINGSRANGE
) {
3393 if ((mac_rxhwlnksavail_get(mh
) +
3394 mac_rxhwlnksrsvd_get(mh
)) <= 1) {
3396 * doesn't support groups or
3399 range
->mpr_count
= 0;
3402 * supports specifying groups,
3405 _mac_set_range(range
, 0, 0);
3408 if ((mac_txhwlnksavail_get(mh
) +
3409 mac_txhwlnksrsvd_get(mh
)) <= 1) {
3411 * doesn't support groups or
3414 range
->mpr_count
= 0;
3417 * supports specifying groups,
3420 _mac_set_range(range
, 0, 0);
3424 max
= id
== MAC_PROP_RXRINGSRANGE
?
3425 mac_rxavail_get(mh
) + mac_rxrsvd_get(mh
) :
3426 mac_txavail_get(mh
) + mac_txrsvd_get(mh
);
3429 * doesn't support groups or
3432 range
->mpr_count
= 0;
3435 * -1 because we have to leave out the
3438 _mac_set_range(range
, 1, max
- 1);
3444 case MAC_PROP_STATUS
:
3446 *perm
= MAC_PROP_PERM_READ
;
3451 * Get the property info from the driver if it implements the
3452 * property info entry point.
3454 bzero(&state
, sizeof (state
));
3456 if (mip
->mi_callbacks
->mc_callbacks
& MC_PROPINFO
) {
3457 state
.pr_default
= default_val
;
3458 state
.pr_default_size
= default_size
;
 * The caller specifies the maximum number of ranges
 * it can accommodate using mpr_count. We don't touch
 * this value until the driver returns from its
 * mc_propinfo() callback, and ensure we don't exceed
 * this number of ranges as the driver defines
 * supported ranges from its mc_propinfo().
 *
 * pr_range_cur_count keeps track of how many ranges
 * were defined by the driver from its mc_propinfo()
 * entry point.
 *
 * On exit, the user-specified range mpr_count returns
 * the number of ranges specified by the driver on
 * success, or the number of ranges it wanted to
 * define if that number of ranges could not be
 * accommodated by the specified range structure. In
 * the latter case, the caller will be able to
 * allocate a larger range structure, and query the
 * property again.
3481 state
.pr_range_cur_count
= 0;
3482 state
.pr_range
= range
;
3484 mip
->mi_callbacks
->mc_propinfo(mip
->mi_driver
, name
, id
,
3485 (mac_prop_info_handle_t
)&state
);
3487 if (state
.pr_flags
& MAC_PROP_INFO_RANGE
)
3488 range
->mpr_count
= state
.pr_range_cur_count
;
3491 * The operation could fail if the buffer supplied by
3492 * the user was too small for the range or default
3493 * value of the property.
3495 if (state
.pr_errno
!= 0)
3496 return (state
.pr_errno
);
3498 if (perm
!= NULL
&& state
.pr_flags
& MAC_PROP_INFO_PERM
)
3499 *perm
= state
.pr_perm
;
 * The MAC layer may want to provide default values or allowed
 * ranges for properties if the driver does not provide a
 * property info entry point, or that entry point exists, but
 * it did not provide a default value or allowed ranges for
 * this property.
3510 case MAC_PROP_MTU
: {
3513 mac_sdu_get2(mh
, NULL
, &sdu
, NULL
);
3515 if (range
!= NULL
&& !(state
.pr_flags
&
3516 MAC_PROP_INFO_RANGE
)) {
3518 _mac_set_range(range
, sdu
, sdu
);
3521 if (default_val
!= NULL
&& !(state
.pr_flags
&
3522 MAC_PROP_INFO_DEFAULT
)) {
3523 if (mip
->mi_info
.mi_media
== DL_ETHER
)
3525 /* default MTU value */
3526 bcopy(&sdu
, default_val
, sizeof (sdu
));
3535 mac_fastpath_disable(mac_handle_t mh
)
3537 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
3539 if ((mip
->mi_state_flags
& MIS_LEGACY
) == 0)
3542 return (mip
->mi_capab_legacy
.ml_fastpath_disable(mip
->mi_driver
));
3546 mac_fastpath_enable(mac_handle_t mh
)
3548 mac_impl_t
*mip
= (mac_impl_t
*)mh
;
3550 if ((mip
->mi_state_flags
& MIS_LEGACY
) == 0)
3553 mip
->mi_capab_legacy
.ml_fastpath_enable(mip
->mi_driver
);
3557 mac_register_priv_prop(mac_impl_t
*mip
, char **priv_props
)
3561 if (priv_props
== NULL
)
3565 while (priv_props
[nprops
] != NULL
)
3571 mip
->mi_priv_prop
= kmem_zalloc(nprops
* sizeof (char *), KM_SLEEP
);
3573 for (i
= 0; i
< nprops
; i
++) {
3574 mip
->mi_priv_prop
[i
] = kmem_zalloc(MAXLINKPROPNAME
, KM_SLEEP
);
3575 (void) strlcpy(mip
->mi_priv_prop
[i
], priv_props
[i
],
3579 mip
->mi_priv_prop_count
= nprops
;
3583 mac_unregister_priv_prop(mac_impl_t
*mip
)
3587 if (mip
->mi_priv_prop_count
== 0) {
3588 ASSERT(mip
->mi_priv_prop
== NULL
);
3592 for (i
= 0; i
< mip
->mi_priv_prop_count
; i
++)
3593 kmem_free(mip
->mi_priv_prop
[i
], MAXLINKPROPNAME
);
3594 kmem_free(mip
->mi_priv_prop
, mip
->mi_priv_prop_count
*
3597 mip
->mi_priv_prop
= NULL
;
3598 mip
->mi_priv_prop_count
= 0;
 * mac_ring_t 'mr' macros. Some rogue drivers may access the ring structure
 * (by invoking mac_rx()) even after processing mac_stop_ring(). In such
 * cases, if MAC freed the ring structure after mac_stop_ring(), any
 * illegal access to the ring structure coming from the driver would panic
 * the system. In order to protect the system from such inadvertent access,
 * we maintain a cache of rings in the mac_impl_t after they get freed up.
 * When packets are received on freed-up rings, MAC (through the generation
 * count mechanism) will drop such packets.
3612 mac_ring_alloc(mac_impl_t
*mip
)
3616 mutex_enter(&mip
->mi_ring_lock
);
3617 if (mip
->mi_ring_freelist
!= NULL
) {
3618 ring
= mip
->mi_ring_freelist
;
3619 mip
->mi_ring_freelist
= ring
->mr_next
;
3620 bzero(ring
, sizeof (mac_ring_t
));
3621 mutex_exit(&mip
->mi_ring_lock
);
3623 mutex_exit(&mip
->mi_ring_lock
);
3624 ring
= kmem_cache_alloc(mac_ring_cache
, KM_SLEEP
);
3626 ASSERT((ring
!= NULL
) && (ring
->mr_state
== MR_FREE
));
3631 mac_ring_free(mac_impl_t
*mip
, mac_ring_t
*ring
)
3633 ASSERT(ring
->mr_state
== MR_FREE
);
3635 mutex_enter(&mip
->mi_ring_lock
);
3636 ring
->mr_state
= MR_FREE
;
3638 ring
->mr_next
= mip
->mi_ring_freelist
;
3639 ring
->mr_mip
= NULL
;
3640 mip
->mi_ring_freelist
= ring
;
3641 mac_ring_stat_delete(ring
);
3642 mutex_exit(&mip
->mi_ring_lock
);
3646 mac_ring_freeall(mac_impl_t
*mip
)
3648 mac_ring_t
*ring_next
;
3649 mutex_enter(&mip
->mi_ring_lock
);
3650 mac_ring_t
*ring
= mip
->mi_ring_freelist
;
3651 while (ring
!= NULL
) {
3652 ring_next
= ring
->mr_next
;
3653 kmem_cache_free(mac_ring_cache
, ring
);
3656 mip
->mi_ring_freelist
= NULL
;
3657 mutex_exit(&mip
->mi_ring_lock
);
3661 mac_start_ring(mac_ring_t
*ring
)
3665 ASSERT(ring
->mr_state
== MR_FREE
);
3667 if (ring
->mr_start
!= NULL
) {
3668 rv
= ring
->mr_start(ring
->mr_driver
, ring
->mr_gen_num
);
3673 ring
->mr_state
= MR_INUSE
;
3678 mac_stop_ring(mac_ring_t
*ring
)
3680 ASSERT(ring
->mr_state
== MR_INUSE
);
3682 if (ring
->mr_stop
!= NULL
)
3683 ring
->mr_stop(ring
->mr_driver
);
3685 ring
->mr_state
= MR_FREE
;
3688 * Increment the ring generation number for this ring.
3694 mac_start_group(mac_group_t
*group
)
3698 if (group
->mrg_start
!= NULL
)
3699 rv
= group
->mrg_start(group
->mrg_driver
);
3705 mac_stop_group(mac_group_t
*group
)
3707 if (group
->mrg_stop
!= NULL
)
3708 group
->mrg_stop(group
->mrg_driver
);
3712 * Called from mac_start() on the default Rx group. Broadcast and multicast
3713 * packets are received only on the default group. Hence the default group
3714 * needs to be up even if the primary client is not up, for the other groups
3715 * to be functional. We do this by calling this function at mac_start time
3716 * itself. However the broadcast packets that are received can't make their
3717 * way beyond mac_rx until a mac client creates a broadcast flow.
3720 mac_start_group_and_rings(mac_group_t
*group
)
3725 ASSERT(group
->mrg_state
== MAC_GROUP_STATE_REGISTERED
);
3726 if ((rv
= mac_start_group(group
)) != 0)
3729 for (ring
= group
->mrg_rings
; ring
!= NULL
; ring
= ring
->mr_next
) {
3730 ASSERT(ring
->mr_state
== MR_FREE
);
3731 if ((rv
= mac_start_ring(ring
)) != 0)
3733 ring
->mr_classify_type
= MAC_SW_CLASSIFIER
;
3738 mac_stop_group_and_rings(group
);
3742 /* Called from mac_stop on the default Rx group */
3744 mac_stop_group_and_rings(mac_group_t
*group
)
3748 for (ring
= group
->mrg_rings
; ring
!= NULL
; ring
= ring
->mr_next
) {
3749 if (ring
->mr_state
!= MR_FREE
) {
3750 mac_stop_ring(ring
);
3752 ring
->mr_classify_type
= MAC_NO_CLASSIFIER
;
3755 mac_stop_group(group
);
3760 mac_init_ring(mac_impl_t
*mip
, mac_group_t
*group
, int index
,
3761 mac_capab_rings_t
*cap_rings
)
3763 mac_ring_t
*ring
, *rnext
;
3764 mac_ring_info_t ring_info
;
3765 ddi_intr_handle_t ddi_handle
;
3767 ring
= mac_ring_alloc(mip
);
3769 /* Prepare basic information of ring */
3772 * Ring index is numbered to be unique across a particular device.
 * Ring index computation makes the following assumptions:
3774 * - For drivers with static grouping (e.g. ixgbe, bge),
3775 * ring index exchanged with the driver (e.g. during mr_rget)
3776 * is unique only across the group the ring belongs to.
3777 * - Drivers with dynamic grouping (e.g. nxge), start
3778 * with single group (mrg_index = 0).
3780 ring
->mr_index
= group
->mrg_index
* group
->mrg_info
.mgi_count
+ index
;
3781 ring
->mr_type
= group
->mrg_type
;
3782 ring
->mr_gh
= (mac_group_handle_t
)group
;
3784 /* Insert the new ring to the list. */
3785 ring
->mr_next
= group
->mrg_rings
;
3786 group
->mrg_rings
= ring
;
3788 /* Zero to reuse the info data structure */
3789 bzero(&ring_info
, sizeof (ring_info
));
3791 /* Query ring information from driver */
3792 cap_rings
->mr_rget(mip
->mi_driver
, group
->mrg_type
, group
->mrg_index
,
3793 index
, &ring_info
, (mac_ring_handle_t
)ring
);
3795 ring
->mr_info
= ring_info
;
 * The interrupt handle could be shared among multiple rings.
 * Thus if there is a bunch of rings that are sharing an
 * interrupt, then only one ring among the bunch will be made
 * available for interrupt re-targeting; the rest will have the
 * ddi_shared flag set to TRUE and will not be available for
 * interrupt re-targeting.
3805 if ((ddi_handle
= ring_info
.mri_intr
.mi_ddi_handle
) != NULL
) {
3806 rnext
= ring
->mr_next
;
3807 while (rnext
!= NULL
) {
3808 if (rnext
->mr_info
.mri_intr
.mi_ddi_handle
==
3811 * If default ring (mr_index == 0) is part
3812 * of a group of rings sharing an
3813 * interrupt, then set ddi_shared flag for
3814 * the default ring and give another ring
3815 * the chance to be re-targeted.
3817 if (rnext
->mr_index
== 0 &&
3818 !rnext
->mr_info
.mri_intr
.mi_ddi_shared
) {
3819 rnext
->mr_info
.mri_intr
.mi_ddi_shared
=
3822 ring
->mr_info
.mri_intr
.mi_ddi_shared
=
3827 rnext
= rnext
->mr_next
;
3830 * If rnext is NULL, then no matching ddi_handle was found.
3831 * Rx rings get registered first. So if this is a Tx ring,
3832 * then go through all the Rx rings and see if there is a
3833 * matching ddi handle.
3835 if (rnext
== NULL
&& ring
->mr_type
== MAC_RING_TYPE_TX
) {
3836 mac_compare_ddi_handle(mip
->mi_rx_groups
,
3837 mip
->mi_rx_group_count
, ring
);
3841 /* Update ring's status */
3842 ring
->mr_state
= MR_FREE
;
3845 /* Update the ring count of the group */
3846 group
->mrg_cur_count
++;
3848 /* Create per ring kstats */
3849 if (ring
->mr_stat
!= NULL
) {
3851 mac_ring_stat_create(ring
);
3858 * Rings are chained together for easy regrouping.
3861 mac_init_group(mac_impl_t
*mip
, mac_group_t
*group
, int size
,
3862 mac_capab_rings_t
*cap_rings
)
3867 * Initialize all ring members of this group. Size of zero will not
3868 * enter the loop, so it's safe for initializing an empty group.
3870 for (index
= size
- 1; index
>= 0; index
--)
3871 (void) mac_init_ring(mip
, group
, index
, cap_rings
);
3875 mac_init_rings(mac_impl_t
*mip
, mac_ring_type_t rtype
)
3877 mac_capab_rings_t
*cap_rings
;
3879 mac_group_t
*groups
;
3880 mac_group_info_t group_info
;
3881 uint_t group_free
= 0;
3887 boolean_t pseudo_txgrp
= B_FALSE
;
3890 case MAC_RING_TYPE_RX
:
3891 ASSERT(mip
->mi_rx_groups
== NULL
);
3893 cap_rings
= &mip
->mi_rx_rings_cap
;
3894 cap_rings
->mr_type
= MAC_RING_TYPE_RX
;
3896 case MAC_RING_TYPE_TX
:
3897 ASSERT(mip
->mi_tx_groups
== NULL
);
3899 cap_rings
= &mip
->mi_tx_rings_cap
;
3900 cap_rings
->mr_type
= MAC_RING_TYPE_TX
;
3906 if (!i_mac_capab_get((mac_handle_t
)mip
, MAC_CAPAB_RINGS
, cap_rings
))
3908 grpcnt
= cap_rings
->mr_gnum
;
 * If we have multiple TX rings, but only one TX group, we can
 * create pseudo TX groups (one per TX ring) in the MAC layer,
 * except for an aggr. For an aggr we currently maintain only
 * one group with all the rings (for all its ports); going
 * forward we might change this.
3917 if (rtype
== MAC_RING_TYPE_TX
&&
3918 cap_rings
->mr_gnum
== 0 && cap_rings
->mr_rnum
> 0 &&
3919 (mip
->mi_state_flags
& MIS_IS_AGGR
) == 0) {
3921 * The -1 here is because we create a default TX group
3922 * with all the rings in it.
3924 grpcnt
= cap_rings
->mr_rnum
- 1;
3925 pseudo_txgrp
= B_TRUE
;
3929 * Allocate a contiguous buffer for all groups.
3931 groups
= kmem_zalloc(sizeof (mac_group_t
) * (grpcnt
+ 1), KM_SLEEP
);
3933 ring_left
= cap_rings
->mr_rnum
;
3936 * Get all ring groups if any, and get their ring members
3939 for (g
= 0; g
< grpcnt
; g
++) {
3942 /* Prepare basic information of the group */
3943 group
->mrg_index
= g
;
3944 group
->mrg_type
= rtype
;
3945 group
->mrg_state
= MAC_GROUP_STATE_UNINIT
;
3946 group
->mrg_mh
= (mac_handle_t
)mip
;
3947 group
->mrg_next
= group
+ 1;
3949 /* Zero to reuse the info data structure */
3950 bzero(&group_info
, sizeof (group_info
));
3954 * This is a pseudo group that we created, apart
3955 * from setting the state there is nothing to be
3958 group
->mrg_state
= MAC_GROUP_STATE_REGISTERED
;
3962 /* Query group information from driver */
3963 cap_rings
->mr_gget(mip
->mi_driver
, rtype
, g
, &group_info
,
3964 (mac_group_handle_t
)group
);
3966 switch (cap_rings
->mr_group_type
) {
3967 case MAC_GROUP_TYPE_DYNAMIC
:
3968 if (cap_rings
->mr_gaddring
== NULL
||
3969 cap_rings
->mr_gremring
== NULL
) {
3971 mac__init__rings_no_addremring
,
3972 char *, mip
->mi_name
,
3973 mac_group_add_ring_t
,
3974 cap_rings
->mr_gaddring
,
3975 mac_group_add_ring_t
,
3976 cap_rings
->mr_gremring
);
3982 case MAC_RING_TYPE_RX
:
 * The first RX group must have non-zero
 * rings, and the following groups must
 * have zero rings.
3988 if (g
== 0 && group_info
.mgi_count
== 0) {
3990 mac__init__rings__rx__def__zero
,
3991 char *, mip
->mi_name
);
3995 if (g
> 0 && group_info
.mgi_count
!= 0) {
3997 mac__init__rings__rx__nonzero
,
3998 char *, mip
->mi_name
,
3999 int, g
, int, group_info
.mgi_count
);
4004 case MAC_RING_TYPE_TX
:
4006 * All TX ring groups must have zero rings.
4008 if (group_info
.mgi_count
!= 0) {
4010 mac__init__rings__tx__nonzero
,
4011 char *, mip
->mi_name
,
4012 int, g
, int, group_info
.mgi_count
);
4019 case MAC_GROUP_TYPE_STATIC
:
4021 * Note that an empty group is allowed, e.g., an aggr
4022 * would start with an empty group.
4026 /* unknown group type */
4027 DTRACE_PROBE2(mac__init__rings__unknown__type
,
4028 char *, mip
->mi_name
,
4029 int, cap_rings
->mr_group_type
);
4036 * Driver must register group->mgi_addmac/remmac() for rx groups
4037 * to support multiple MAC addresses.
4039 if (rtype
== MAC_RING_TYPE_RX
&&
4040 ((group_info
.mgi_addmac
== NULL
) ||
4041 (group_info
.mgi_remmac
== NULL
))) {
4046 /* Cache driver-supplied information */
4047 group
->mrg_info
= group_info
;
4049 /* Update the group's status and group count. */
4050 mac_set_group_state(group
, MAC_GROUP_STATE_REGISTERED
);
4053 group
->mrg_rings
= NULL
;
4054 group
->mrg_cur_count
= 0;
4055 mac_init_group(mip
, group
, group_info
.mgi_count
, cap_rings
);
4056 ring_left
-= group_info
.mgi_count
;
4058 /* The current group size should be equal to default value */
4059 ASSERT(group
->mrg_cur_count
== group_info
.mgi_count
);
4062 /* Build up a dummy group for free resources as a pool */
4063 group
= groups
+ grpcnt
;
4065 /* Prepare basic information of the group */
4066 group
->mrg_index
= -1;
4067 group
->mrg_type
= rtype
;
4068 group
->mrg_state
= MAC_GROUP_STATE_UNINIT
;
4069 group
->mrg_mh
= (mac_handle_t
)mip
;
4070 group
->mrg_next
= NULL
;
4073 * If there are ungrouped rings, allocate a continuous buffer for
4074 * remaining resources.
4076 if (ring_left
!= 0) {
4077 group
->mrg_rings
= NULL
;
4078 group
->mrg_cur_count
= 0;
4079 mac_init_group(mip
, group
, ring_left
, cap_rings
);
4081 /* The current group size should be equal to ring_left */
4082 ASSERT(group
->mrg_cur_count
== ring_left
);
4086 /* Update this group's status */
4087 mac_set_group_state(group
, MAC_GROUP_STATE_REGISTERED
);
4089 group
->mrg_rings
= NULL
;
4091 ASSERT(ring_left
== 0);
4095 /* Cache other important information to finalize the initialization */
4097 case MAC_RING_TYPE_RX
:
4098 mip
->mi_rx_group_type
= cap_rings
->mr_group_type
;
4099 mip
->mi_rx_group_count
= cap_rings
->mr_gnum
;
4100 mip
->mi_rx_groups
= groups
;
4101 mip
->mi_rx_donor_grp
= groups
;
4102 if (mip
->mi_rx_group_type
== MAC_GROUP_TYPE_DYNAMIC
) {
4104 * The default ring is reserved since it is
4105 * used for sending the broadcast etc. packets.
4107 mip
->mi_rxrings_avail
=
4108 mip
->mi_rx_groups
->mrg_cur_count
- 1;
4109 mip
->mi_rxrings_rsvd
= 1;
4112 * The default group cannot be reserved. It is used by
4113 * all the clients that do not have an exclusive group.
4115 mip
->mi_rxhwclnt_avail
= mip
->mi_rx_group_count
- 1;
4116 mip
->mi_rxhwclnt_used
= 1;
4118 case MAC_RING_TYPE_TX
:
4119 mip
->mi_tx_group_type
= pseudo_txgrp
? MAC_GROUP_TYPE_DYNAMIC
:
4120 cap_rings
->mr_group_type
;
4121 mip
->mi_tx_group_count
= grpcnt
;
4122 mip
->mi_tx_group_free
= group_free
;
4123 mip
->mi_tx_groups
= groups
;
4125 group
= groups
+ grpcnt
;
4126 ring
= group
->mrg_rings
;
4128 * The ring can be NULL in the case of aggr. Aggr will
4129 * have an empty Tx group which will get populated
4130 * later when pseudo Tx rings are added after
4131 * mac_register() is done.
4134 ASSERT(mip
->mi_state_flags
& MIS_IS_AGGR
);
4136 * pass the group to aggr so it can add Tx
4137 * rings to the group later.
4139 cap_rings
->mr_gget(mip
->mi_driver
, rtype
, 0, NULL
,
4140 (mac_group_handle_t
)group
);
4142 * Even though there are no rings at this time
4143 * (rings will come later), set the group
4144 * state to registered.
4146 group
->mrg_state
= MAC_GROUP_STATE_REGISTERED
;
4149 * Ring 0 is used as the default one and it could be
4150 * assigned to a client as well.
4152 while ((ring
->mr_index
!= 0) && (ring
->mr_next
!= NULL
))
4153 ring
= ring
->mr_next
;
4154 ASSERT(ring
->mr_index
== 0);
4155 mip
->mi_default_tx_ring
= (mac_ring_handle_t
)ring
;
4157 if (mip
->mi_tx_group_type
== MAC_GROUP_TYPE_DYNAMIC
)
4158 mip
->mi_txrings_avail
= group
->mrg_cur_count
- 1;
4160 * The default ring cannot be reserved.
4162 mip
->mi_txrings_rsvd
= 1;
4164 * The default group cannot be reserved. It will be shared
4165 * by clients that do not have an exclusive group.
4167 mip
->mi_txhwclnt_avail
= mip
->mi_tx_group_count
;
4168 mip
->mi_txhwclnt_used
= 1;
4175 mac_free_rings(mip
, rtype
);
 * The ddi interrupt handle could be shared among rings. If so, compare
 * the new ring's ddi handle with the existing ones and set the ddi_shared
 * flag if a match is found.
4186 mac_compare_ddi_handle(mac_group_t
*groups
, uint_t grpcnt
, mac_ring_t
*cring
)
4190 ddi_intr_handle_t ddi_handle
;
4193 ddi_handle
= cring
->mr_info
.mri_intr
.mi_ddi_handle
;
4194 for (g
= 0; g
< grpcnt
; g
++) {
4196 for (ring
= group
->mrg_rings
; ring
!= NULL
;
4197 ring
= ring
->mr_next
) {
4200 if (ring
->mr_info
.mri_intr
.mi_ddi_handle
==
4202 if (cring
->mr_type
== MAC_RING_TYPE_RX
&&
4203 ring
->mr_index
== 0 &&
4204 !ring
->mr_info
.mri_intr
.mi_ddi_shared
) {
4205 ring
->mr_info
.mri_intr
.mi_ddi_shared
=
4208 cring
->mr_info
.mri_intr
.mi_ddi_shared
=
4218 * Called to free all groups of particular type (RX or TX). It's assumed that
4219 * no clients are using these groups.
4222 mac_free_rings(mac_impl_t
*mip
, mac_ring_type_t rtype
)
4224 mac_group_t
*group
, *groups
;
4228 case MAC_RING_TYPE_RX
:
4229 if (mip
->mi_rx_groups
== NULL
)
4232 groups
= mip
->mi_rx_groups
;
4233 group_count
= mip
->mi_rx_group_count
;
4235 mip
->mi_rx_groups
= NULL
;
4236 mip
->mi_rx_donor_grp
= NULL
;
4237 mip
->mi_rx_group_count
= 0;
4239 case MAC_RING_TYPE_TX
:
4240 ASSERT(mip
->mi_tx_group_count
== mip
->mi_tx_group_free
);
4242 if (mip
->mi_tx_groups
== NULL
)
4245 groups
= mip
->mi_tx_groups
;
4246 group_count
= mip
->mi_tx_group_count
;
4248 mip
->mi_tx_groups
= NULL
;
4249 mip
->mi_tx_group_count
= 0;
4250 mip
->mi_tx_group_free
= 0;
4251 mip
->mi_default_tx_ring
= NULL
;
4257 for (group
= groups
; group
!= NULL
; group
= group
->mrg_next
) {
4260 if (group
->mrg_cur_count
== 0)
4263 ASSERT(group
->mrg_rings
!= NULL
);
4265 while ((ring
= group
->mrg_rings
) != NULL
) {
4266 group
->mrg_rings
= ring
->mr_next
;
4267 mac_ring_free(mip
, ring
);
4271 /* Free all the cached rings */
4272 mac_ring_freeall(mip
);
	/* Free the block of group data structures */
4274 kmem_free(groups
, sizeof (mac_group_t
) * (group_count
+ 1));
 * Associate a MAC address with a receive group.
 *
 * The return value of this function should always be checked properly, because
 * any type of failure could cause unexpected results. A MAC address can be
 * added to or removed from a group only after the group has been reserved.
 * Ideally, a successful reservation always leads to calling mac_group_addmac()
 * to steer desired traffic. Failure to add a unicast MAC address doesn't
 * always imply that the group is functioning abnormally.
 *
 * Currently this function is called everywhere, and it reflects assumptions
 * about MAC addresses in the implementation. CR 6735196.
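/*
 * For example (sketch only), a caller reserving a group for a client is
 * expected to propagate a failure rather than ignore it:
 *
 *	if ((err = mac_group_addmac(group, mac_addr)) != 0)
 *		return (err);
 */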
4291 mac_group_addmac(mac_group_t
*group
, const uint8_t *addr
)
4293 ASSERT(group
->mrg_type
== MAC_RING_TYPE_RX
);
4294 ASSERT(group
->mrg_info
.mgi_addmac
!= NULL
);
4296 return (group
->mrg_info
.mgi_addmac(group
->mrg_info
.mgi_driver
, addr
));
4300 * Remove the association between MAC address and receive group.
4303 mac_group_remmac(mac_group_t
*group
, const uint8_t *addr
)
4305 ASSERT(group
->mrg_type
== MAC_RING_TYPE_RX
);
4306 ASSERT(group
->mrg_info
.mgi_remmac
!= NULL
);
4308 return (group
->mrg_info
.mgi_remmac(group
->mrg_info
.mgi_driver
, addr
));
4312 * This is the entry point for packets transmitted through the bridging code.
4313 * If no bridge is in place, MAC_RING_TX transmits using tx ring. The 'rh'
4314 * pointer may be NULL to select the default ring.
4317 mac_bridge_tx(mac_impl_t
*mip
, mac_ring_handle_t rh
, mblk_t
*mp
)
4322 * Once we take a reference on the bridge link, the bridge
4323 * module itself can't unload, so the callback pointers are
4326 mutex_enter(&mip
->mi_bridge_lock
);
4327 if ((mh
= mip
->mi_bridge_link
) != NULL
)
4328 mac_bridge_ref_cb(mh
, B_TRUE
);
4329 mutex_exit(&mip
->mi_bridge_lock
);
4331 MAC_RING_TX(mip
, rh
, mp
, mp
);
4333 mp
= mac_bridge_tx_cb(mh
, rh
, mp
);
4334 mac_bridge_ref_cb(mh
, B_FALSE
);
4341 * Find a ring from its index.
4344 mac_find_ring(mac_group_handle_t gh
, int index
)
4346 mac_group_t
*group
= (mac_group_t
*)gh
;
4347 mac_ring_t
*ring
= group
->mrg_rings
;
4349 for (ring
= group
->mrg_rings
; ring
!= NULL
; ring
= ring
->mr_next
)
4350 if (ring
->mr_index
== index
)
4353 return ((mac_ring_handle_t
)ring
);
 * Add a ring to an existing group.
 *
 * The ring must be either passed directly (for example if the ring
 * movement is initiated by the framework), or specified through a driver
 * index (for example when the ring is added by the driver).
 *
 * The caller needs to call mac_perim_enter() before calling this function.
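/*
 * Sketch of a framework-initiated ring move (illustrative only; 'dst_group'
 * and 'ring' are placeholders for an existing group and ring):
 *
 *	i_mac_perim_enter(mip);
 *	err = i_mac_group_add_ring(dst_group, ring, 0);
 *	i_mac_perim_exit(mip);
 */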
4365 i_mac_group_add_ring(mac_group_t
*group
, mac_ring_t
*ring
, int index
)
4367 mac_impl_t
*mip
= (mac_impl_t
*)group
->mrg_mh
;
4368 mac_capab_rings_t
*cap_rings
;
4369 boolean_t driver_call
= (ring
== NULL
);
4370 mac_group_type_t group_type
;
4372 flow_entry_t
*flent
;
4374 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
4376 switch (group
->mrg_type
) {
4377 case MAC_RING_TYPE_RX
:
4378 cap_rings
= &mip
->mi_rx_rings_cap
;
4379 group_type
= mip
->mi_rx_group_type
;
4381 case MAC_RING_TYPE_TX
:
4382 cap_rings
= &mip
->mi_tx_rings_cap
;
4383 group_type
= mip
->mi_tx_group_type
;
4390 * There should be no ring with the same ring index in the target
4393 ASSERT(mac_find_ring((mac_group_handle_t
)group
,
4394 driver_call
? index
: ring
->mr_index
) == NULL
);
4398 * The function is called as a result of a request from
4399 * a driver to add a ring to an existing group, for example
4400 * from the aggregation driver. Allocate a new mac_ring_t
4403 ring
= mac_init_ring(mip
, group
, index
, cap_rings
);
4404 ASSERT(group
->mrg_state
> MAC_GROUP_STATE_UNINIT
);
4407 * The function is called as a result of a MAC layer request
4408 * to add a ring to an existing group. In this case the
4409 * ring is being moved between groups, which requires
4410 * the underlying driver to support dynamic grouping,
4411 * and the mac_ring_t already exists.
4413 ASSERT(group_type
== MAC_GROUP_TYPE_DYNAMIC
);
4414 ASSERT(group
->mrg_driver
== NULL
||
4415 cap_rings
->mr_gaddring
!= NULL
);
4416 ASSERT(ring
->mr_gh
== NULL
);
4420 * At this point the ring should not be in use, and it should be
4421 * of the right for the target group.
4423 ASSERT(ring
->mr_state
< MR_INUSE
);
4424 ASSERT(ring
->mr_srs
== NULL
);
4425 ASSERT(ring
->mr_type
== group
->mrg_type
);
4429 * Add the driver level hardware ring if the process was not
4430 * initiated by the driver, and the target group is not the
4433 if (group
->mrg_driver
!= NULL
) {
4434 cap_rings
->mr_gaddring(group
->mrg_driver
,
4435 ring
->mr_driver
, ring
->mr_type
);
4439 * Insert the ring ahead existing rings.
4441 ring
->mr_next
= group
->mrg_rings
;
4442 group
->mrg_rings
= ring
;
4443 ring
->mr_gh
= (mac_group_handle_t
)group
;
4444 group
->mrg_cur_count
++;
4448 * If the group has not been actively used, we're done.
4450 if (group
->mrg_index
!= -1 &&
4451 group
->mrg_state
< MAC_GROUP_STATE_RESERVED
)
4455 * Start the ring if needed. Failure causes to undo the grouping action.
4457 if (ring
->mr_state
!= MR_INUSE
) {
4458 if ((ret
= mac_start_ring(ring
)) != 0) {
4460 cap_rings
->mr_gremring(group
->mrg_driver
,
4461 ring
->mr_driver
, ring
->mr_type
);
4463 group
->mrg_cur_count
--;
4464 group
->mrg_rings
= ring
->mr_next
;
4469 mac_ring_free(mip
, ring
);
4476 * Set up SRS/SR according to the ring type.
4478 switch (ring
->mr_type
) {
4479 case MAC_RING_TYPE_RX
:
4481 * Setup SRS on top of the new ring if the group is
4482 * reserved for someones exclusive use.
4484 if (group
->mrg_state
== MAC_GROUP_STATE_RESERVED
) {
4485 mac_client_impl_t
*mcip
;
4487 mcip
= MAC_GROUP_ONLY_CLIENT(group
);
4489 * Even though this group is reserved we migth still
4490 * have multiple clients, i.e a VLAN shares the
4491 * group with the primary mac client.
4494 flent
= mcip
->mci_flent
;
4495 ASSERT(flent
->fe_rx_srs_cnt
> 0);
4496 mac_rx_srs_group_setup(mcip
, flent
, SRST_LINK
);
4497 mac_fanout_setup(mcip
, flent
,
4498 MCIP_RESOURCE_PROPS(mcip
), mac_rx_deliver
,
4501 ring
->mr_classify_type
= MAC_SW_CLASSIFIER
;
4505 case MAC_RING_TYPE_TX
:
4507 mac_grp_client_t
*mgcp
= group
->mrg_clients
;
4508 mac_client_impl_t
*mcip
;
4509 mac_soft_ring_set_t
*mac_srs
;
4512 if (MAC_GROUP_NO_CLIENT(group
)) {
4513 if (ring
->mr_state
== MR_INUSE
)
4514 mac_stop_ring(ring
);
4519 * If the rings are being moved to a group that has
4520 * clients using it, then add the new rings to the
4523 while (mgcp
!= NULL
) {
4526 mcip
= mgcp
->mgc_client
;
4527 flent
= mcip
->mci_flent
;
4528 is_aggr
= (mcip
->mci_state_flags
& MCIS_IS_AGGR
);
4529 mac_srs
= MCIP_TX_SRS(mcip
);
4530 tx
= &mac_srs
->srs_tx
;
4531 mac_tx_client_quiesce((mac_client_handle_t
)mcip
);
4533 * If we are growing from 1 to multiple rings.
4535 if (tx
->st_mode
== SRS_TX_BW
||
4536 tx
->st_mode
== SRS_TX_SERIALIZE
||
4537 tx
->st_mode
== SRS_TX_DEFAULT
) {
4538 mac_ring_t
*tx_ring
= tx
->st_arg2
;
4541 mac_tx_srs_stat_recreate(mac_srs
, B_TRUE
);
4542 mac_tx_srs_add_ring(mac_srs
, tx_ring
);
4543 if (mac_srs
->srs_type
& SRST_BW_CONTROL
) {
4544 tx
->st_mode
= is_aggr
? SRS_TX_BW_AGGR
:
4547 tx
->st_mode
= is_aggr
? SRS_TX_AGGR
:
4550 tx
->st_func
= mac_tx_get_func(tx
->st_mode
);
4552 mac_tx_srs_add_ring(mac_srs
, ring
);
4553 mac_fanout_setup(mcip
, flent
, MCIP_RESOURCE_PROPS(mcip
),
4554 mac_rx_deliver
, mcip
, NULL
, NULL
);
4555 mac_tx_client_restart((mac_client_handle_t
)mcip
);
4556 mgcp
= mgcp
->mgc_next
;
4564 * For aggr, the default ring will be NULL to begin with. If it
4565 * is NULL, then pick the first ring that gets added as the
4566 * default ring. Any ring in an aggregation can be removed at
4567 * any time (by the user action of removing a link) and if the
4568 * current default ring gets removed, then a new one gets
4569 * picked (see i_mac_group_rem_ring()).
4571 if (mip
->mi_state_flags
& MIS_IS_AGGR
&&
4572 mip
->mi_default_tx_ring
== NULL
&&
4573 ring
->mr_type
== MAC_RING_TYPE_TX
) {
4574 mip
->mi_default_tx_ring
= (mac_ring_handle_t
)ring
;
4577 MAC_RING_UNMARK(ring
, MR_INCIPIENT
);
/*
 * Remove a ring from its current group. MAC internal function for dynamic
 * grouping.
 *
 * The caller needs to call mac_perim_enter() before calling this function.
 */
void
i_mac_group_rem_ring(mac_group_t *group, mac_ring_t *ring,
    boolean_t driver_call)
{
4591 mac_impl_t
*mip
= (mac_impl_t
*)group
->mrg_mh
;
4592 mac_capab_rings_t
*cap_rings
= NULL
;
4593 mac_group_type_t group_type
;
4595 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
4597 ASSERT(mac_find_ring((mac_group_handle_t
)group
,
4598 ring
->mr_index
) == (mac_ring_handle_t
)ring
);
4599 ASSERT((mac_group_t
*)ring
->mr_gh
== group
);
4600 ASSERT(ring
->mr_type
== group
->mrg_type
);
4602 if (ring
->mr_state
== MR_INUSE
)
4603 mac_stop_ring(ring
);
4604 switch (ring
->mr_type
) {
4605 case MAC_RING_TYPE_RX
:
4606 group_type
= mip
->mi_rx_group_type
;
4607 cap_rings
= &mip
->mi_rx_rings_cap
;
4610 * Only hardware classified packets hold a reference to the
4611 * ring all the way up the Rx path. mac_rx_srs_remove()
4612 * will take care of quiescing the Rx path and removing the
4613 * SRS. The software classified path neither holds a reference
4614 * nor any association with the ring in mac_rx.
4616 if (ring
->mr_srs
!= NULL
) {
4617 mac_rx_srs_remove(ring
->mr_srs
);
4618 ring
->mr_srs
= NULL
;
4622 case MAC_RING_TYPE_TX
:
4624 mac_grp_client_t
*mgcp
;
4625 mac_client_impl_t
*mcip
;
4626 mac_soft_ring_set_t
*mac_srs
;
4628 mac_ring_t
*rem_ring
;
4629 mac_group_t
*defgrp
;
4630 uint_t ring_info
= 0;
4633 * For TX this function is invoked in three
4636 * 1) In the case of a failure during the
4637 * initial creation of a group when a share is
4638 * associated with a MAC client. So the SRS is not
4639 * yet setup, and will be setup later after the
4640 * group has been reserved and populated.
4642 * 2) From mac_release_tx_group() when freeing
4645 * 3) In the case of aggr, when a port gets removed,
4646 * the pseudo Tx rings that it exposed gets removed.
4648 * In the first two cases the SRS and its soft
4649 * rings are already quiesced.
4652 mac_client_impl_t
*mcip
;
4653 mac_soft_ring_set_t
*mac_srs
;
4654 mac_soft_ring_t
*sringp
;
4655 mac_srs_tx_t
*srs_tx
;
4657 if (mip
->mi_state_flags
& MIS_IS_AGGR
&&
4658 mip
->mi_default_tx_ring
==
4659 (mac_ring_handle_t
)ring
) {
4660 /* pick a new default Tx ring */
4661 mip
->mi_default_tx_ring
=
4662 (group
->mrg_rings
!= ring
) ?
4663 (mac_ring_handle_t
)group
->mrg_rings
:
4664 (mac_ring_handle_t
)(ring
->mr_next
);
4666 /* Presently only aggr case comes here */
4667 if (group
->mrg_state
!= MAC_GROUP_STATE_RESERVED
)
4670 mcip
= MAC_GROUP_ONLY_CLIENT(group
);
4671 ASSERT(mcip
!= NULL
);
4672 ASSERT(mcip
->mci_state_flags
& MCIS_IS_AGGR
);
4673 mac_srs
= MCIP_TX_SRS(mcip
);
4674 ASSERT(mac_srs
->srs_tx
.st_mode
== SRS_TX_AGGR
||
4675 mac_srs
->srs_tx
.st_mode
== SRS_TX_BW_AGGR
);
4676 srs_tx
= &mac_srs
->srs_tx
;
4678 * Wakeup any callers blocked on this
4679 * Tx ring due to flow control.
4681 sringp
= srs_tx
->st_soft_rings
[ring
->mr_index
];
4682 ASSERT(sringp
!= NULL
);
4683 mac_tx_invoke_callbacks(mcip
, (mac_tx_cookie_t
)sringp
);
4684 mac_tx_client_quiesce((mac_client_handle_t
)mcip
);
4685 mac_tx_srs_del_ring(mac_srs
, ring
);
4686 mac_tx_client_restart((mac_client_handle_t
)mcip
);
4689 ASSERT(ring
!= (mac_ring_t
*)mip
->mi_default_tx_ring
);
4690 group_type
= mip
->mi_tx_group_type
;
4691 cap_rings
= &mip
->mi_tx_rings_cap
;
4693 * See if we need to take it out of the MAC clients using
4696 if (MAC_GROUP_NO_CLIENT(group
))
4698 mgcp
= group
->mrg_clients
;
4699 defgrp
= MAC_DEFAULT_TX_GROUP(mip
);
4700 while (mgcp
!= NULL
) {
4701 mcip
= mgcp
->mgc_client
;
4702 mac_srs
= MCIP_TX_SRS(mcip
);
4703 tx
= &mac_srs
->srs_tx
;
4704 mac_tx_client_quiesce((mac_client_handle_t
)mcip
);
4706 * If we are here when removing rings from the
4707 * defgroup, mac_reserve_tx_ring would have
4708 * already deleted the ring from the MAC
4709 * clients in the group.
4711 if (group
!= defgrp
) {
4712 mac_tx_invoke_callbacks(mcip
,
4714 mac_tx_srs_get_soft_ring(mac_srs
, ring
));
4715 mac_tx_srs_del_ring(mac_srs
, ring
);
4718 * Additionally, if we are left with only
4719 * one ring in the group after this, we need
4720 * to modify the mode etc. to. (We haven't
4721 * yet taken the ring out, so we check with 2).
4723 if (group
->mrg_cur_count
== 2) {
4724 if (ring
->mr_next
== NULL
)
4725 rem_ring
= group
->mrg_rings
;
4727 rem_ring
= ring
->mr_next
;
4728 mac_tx_invoke_callbacks(mcip
,
4730 mac_tx_srs_get_soft_ring(mac_srs
,
4732 mac_tx_srs_del_ring(mac_srs
, rem_ring
);
4733 if (rem_ring
->mr_state
!= MR_INUSE
) {
4734 (void) mac_start_ring(rem_ring
);
4736 tx
->st_arg2
= (void *)rem_ring
;
4737 mac_tx_srs_stat_recreate(mac_srs
, B_FALSE
);
4738 ring_info
= mac_hwring_getinfo(
4739 (mac_ring_handle_t
)rem_ring
);
4741 * We are shrinking from multiple
4744 if (mac_srs
->srs_type
& SRST_BW_CONTROL
) {
4745 tx
->st_mode
= SRS_TX_BW
;
4746 } else if (mac_tx_serialize
||
4747 (ring_info
& MAC_RING_TX_SERIALIZE
)) {
4748 tx
->st_mode
= SRS_TX_SERIALIZE
;
4750 tx
->st_mode
= SRS_TX_DEFAULT
;
4752 tx
->st_func
= mac_tx_get_func(tx
->st_mode
);
4754 mac_tx_client_restart((mac_client_handle_t
)mcip
);
4755 mgcp
= mgcp
->mgc_next
;
4764 * Remove the ring from the group.
4766 if (ring
== group
->mrg_rings
)
4767 group
->mrg_rings
= ring
->mr_next
;
4771 pre
= group
->mrg_rings
;
4772 while (pre
->mr_next
!= ring
)
4774 pre
->mr_next
= ring
->mr_next
;
4776 group
->mrg_cur_count
--;
4779 ASSERT(group_type
== MAC_GROUP_TYPE_DYNAMIC
);
4780 ASSERT(group
->mrg_driver
== NULL
||
4781 cap_rings
->mr_gremring
!= NULL
);
4784 * Remove the driver level hardware ring.
4786 if (group
->mrg_driver
!= NULL
) {
4787 cap_rings
->mr_gremring(group
->mrg_driver
,
4788 ring
->mr_driver
, ring
->mr_type
);
4794 mac_ring_free(mip
, ring
);
/*
 * Move a ring to the target group. If needed, remove the ring from the group
 * that it currently belongs to.
 *
 * The caller needs to enter MAC's perimeter by calling mac_perim_enter().
 */
int
mac_group_mov_ring(mac_impl_t *mip, mac_group_t *d_group, mac_ring_t *ring)
{
	mac_group_t	*s_group = (mac_group_t *)ring->mr_gh;
	int		rv = 0;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
	ASSERT(d_group != NULL);
	ASSERT(s_group->mrg_mh == d_group->mrg_mh);

	if (s_group == d_group)
		return (0);

	/*
	 * Remove it from current group first.
	 */
	if (s_group != NULL)
		i_mac_group_rem_ring(s_group, ring, B_FALSE);

	/*
	 * Add it to the new group.
	 */
	rv = i_mac_group_add_ring(d_group, ring, 0);
	if (rv != 0) {
		/*
		 * Failed to add the ring to the destination group; try to
		 * add it back to the source group. If that fails too, the
		 * ring is stuck in limbo, so log a message.
		 */
		if (i_mac_group_add_ring(s_group, ring, 0)) {
			cmn_err(CE_WARN, "%s: failed to move ring %p\n",
			    mip->mi_name, (void *)ring);
		}
	}

	return (rv);
}
/*
 * Find a MAC address according to its value.
 */
static mac_address_t *
mac_find_macaddr(mac_impl_t *mip, uint8_t *mac_addr)
{
	mac_address_t	*map;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));

	for (map = mip->mi_addresses; map != NULL; map = map->ma_next) {
		if (bcmp(mac_addr, map->ma_addr, map->ma_len) == 0)
			break;
	}

	return (map);
}

/*
 * Check whether the MAC address is shared by multiple clients.
 */
boolean_t
mac_check_macaddr_shared(mac_address_t *map)
{
	ASSERT(MAC_PERIM_HELD((mac_handle_t)map->ma_mip));

	return (map->ma_nusers > 1);
}
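
/*
 * Illustrative, hypothetical caller (the helper name and the teardown
 * placeholder are assumptions): a sketch of how code holding the perimeter
 * might look up an address and only tear down hardware state when no other
 * client still references it.
 */
static void
mac_macaddr_teardown_example(mac_impl_t *mip, uint8_t *mac_addr)
{
	mac_address_t	*map;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));

	if ((map = mac_find_macaddr(mip, mac_addr)) == NULL)
		return;

	/* Another client still uses this address; leave it programmed. */
	if (mac_check_macaddr_shared(map))
		return;

	/* Last reference: the real code would remove/free the address here. */
}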
/*
 * Remove the specified MAC address from the MAC address list and free it.
 */
static void
mac_free_macaddr(mac_address_t *map)
{
	mac_impl_t	*mip = map->ma_mip;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
	ASSERT(mip->mi_addresses != NULL);

	map = mac_find_macaddr(mip, map->ma_addr);

	ASSERT(map != NULL);
	ASSERT(map->ma_nusers == 0);

	if (map == mip->mi_addresses) {
		mip->mi_addresses = map->ma_next;
	} else {
		mac_address_t	*pre;

		pre = mip->mi_addresses;
		while (pre->ma_next != map)
			pre = pre->ma_next;
		pre->ma_next = map->ma_next;
	}

	kmem_free(map, sizeof (mac_address_t));
}
/*
 * Add a MAC address reference for a client. If the desired MAC address
 * exists, add a reference to it. Otherwise, add the new address by adding
 * it to a reserved group or setting promiscuous mode. Won't try a different
 * group if the group is non-NULL, so the caller must explicitly share the
 * default group when needed.
 *
 * Note, the primary MAC address is initialized at registration time, so
 * to add it to the default group we only need to activate it if its reference
 * count is still zero. Also, some drivers may not have advertised the RINGS
 * capability.
 */
int
mac_add_macaddr(mac_impl_t *mip, mac_group_t *group, uint8_t *mac_addr,
4919 boolean_t allocated_map
= B_FALSE
;
4921 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
4923 map
= mac_find_macaddr(mip
, mac_addr
);
4926 * If the new MAC address has not been added. Allocate a new one
4930 map
= kmem_zalloc(sizeof (mac_address_t
), KM_SLEEP
);
4931 map
->ma_len
= mip
->mi_type
->mt_addr_length
;
4932 bcopy(mac_addr
, map
->ma_addr
, map
->ma_len
);
4934 map
->ma_group
= group
;
4937 /* add the new MAC address to the head of the address list */
4938 map
->ma_next
= mip
->mi_addresses
;
4939 mip
->mi_addresses
= map
;
4941 allocated_map
= B_TRUE
;
4944 ASSERT(map
->ma_group
== NULL
|| map
->ma_group
== group
);
4945 if (map
->ma_group
== NULL
)
4946 map
->ma_group
= group
;
4949 * If the MAC address is already in use, simply account for the
4952 if (map
->ma_nusers
++ > 0)
4956 * Activate this MAC address by adding it to the reserved group.
4958 if (group
!= NULL
) {
4959 err
= mac_group_addmac(group
, (const uint8_t *)mac_addr
);
4961 map
->ma_type
= MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED
;
4967 * The MAC address addition failed. If the client requires a
4968 * hardware classified MAC address, fail the operation.
4976 * Try promiscuous mode.
4978 * For drivers that don't advertise RINGS capability, do
4979 * nothing for the primary address.
4981 if ((group
== NULL
) &&
4982 (bcmp(map
->ma_addr
, mip
->mi_addr
, map
->ma_len
) == 0)) {
4983 map
->ma_type
= MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED
;
4988 * Enable promiscuous mode in order to receive traffic
4989 * to the new MAC address.
4991 if ((err
= i_mac_promisc_set(mip
, B_TRUE
)) == 0) {
4992 map
->ma_type
= MAC_ADDRESS_TYPE_UNICAST_PROMISC
;
4997 * Free the MAC address that could not be added. Don't free
4998 * a pre-existing address, it could have been the entry
4999 * for the primary MAC address which was pre-allocated by
5000 * mac_init_macaddr(), and which must remain on the list.
5005 mac_free_macaddr(map
);
/*
 * Remove a reference to a MAC address. This may cause the MAC address to be
 * removed from an associated group, or promiscuous mode to be turned off.
 * The caller needs to handle the failure properly.
 */
int
mac_remove_macaddr(mac_address_t *map)
{
	mac_impl_t	*mip = map->ma_mip;
	int		err = 0;
5020 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
5022 ASSERT(map
== mac_find_macaddr(mip
, map
->ma_addr
));
5025 * If it's not the last client using this MAC address, only update
5026 * the MAC clients count.
5028 if (--map
->ma_nusers
> 0)
5032 * The MAC address is no longer used by any MAC client, so remove
5033 * it from its associated group, or turn off promiscuous mode
5034 * if it was enabled for the MAC address.
5036 switch (map
->ma_type
) {
5037 case MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED
:
5039 * Don't free the preset primary address for drivers that
5040 * don't advertise RINGS capability.
5042 if (map
->ma_group
== NULL
)
5045 err
= mac_group_remmac(map
->ma_group
, map
->ma_addr
);
5047 map
->ma_group
= NULL
;
5049 case MAC_ADDRESS_TYPE_UNICAST_PROMISC
:
5050 err
= i_mac_promisc_set(mip
, B_FALSE
);
5060 * We created MAC address for the primary one at registration, so we
5061 * won't free it here. mac_fini_macaddr() will take care of it.
5063 if (bcmp(map
->ma_addr
, mip
->mi_addr
, map
->ma_len
) != 0)
5064 mac_free_macaddr(map
);
/*
 * Update an existing MAC address. The caller needs to make sure that the new
 * value has not been used.
 */
int
mac_update_macaddr(mac_address_t *map, uint8_t *mac_addr)
{
	mac_impl_t	*mip = map->ma_mip;
	int		err = 0;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
	ASSERT(mac_find_macaddr(mip, mac_addr) == NULL);
5082 switch (map
->ma_type
) {
5083 case MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED
:
5085 * Update the primary address for drivers that are not
5088 if (mip
->mi_rx_groups
== NULL
) {
5089 err
= mip
->mi_unicst(mip
->mi_driver
, (const uint8_t *)
5097 * If this MAC address is not currently in use,
5098 * simply break out and update the value.
5100 if (map
->ma_nusers
== 0)
5104 * Need to replace the MAC address associated with a group.
5106 err
= mac_group_remmac(map
->ma_group
, map
->ma_addr
);
5110 err
= mac_group_addmac(map
->ma_group
, mac_addr
);
5113 * Failure hints hardware error. The MAC layer needs to
5114 * have error notification facility to handle this.
5115 * Now, simply try to restore the value.
5118 (void) mac_group_addmac(map
->ma_group
, map
->ma_addr
);
5121 case MAC_ADDRESS_TYPE_UNICAST_PROMISC
:
5123 * Need to do nothing more if in promiscuous mode.
5131 * Successfully replaced the MAC address.
5134 bcopy(mac_addr
, map
->ma_addr
, map
->ma_len
);
/*
 * Freshen the MAC address with a new value. The caller must have updated the
 * hardware MAC address before calling this function.
 * This function is supposed to be used to handle the MAC address change
 * notification from underlying drivers.
 */
void
mac_freshen_macaddr(mac_address_t *map, uint8_t *mac_addr)
{
	mac_impl_t	*mip = map->ma_mip;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
	ASSERT(mac_find_macaddr(mip, mac_addr) == NULL);

	/*
	 * Freshen the MAC address with the new value.
	 */
	bcopy(mac_addr, map->ma_addr, map->ma_len);
	bcopy(mac_addr, mip->mi_addr, map->ma_len);

	/*
	 * Update all MAC clients that share this MAC address.
	 */
	mac_unicast_update_clients(mip, map);
}
/*
 * Set up the primary MAC address.
 */
void
mac_init_macaddr(mac_impl_t *mip)
{
	mac_address_t	*map;

	/*
	 * The reference count is initialized to zero, until it's really
	 * activated.
	 */
	map = kmem_zalloc(sizeof (mac_address_t), KM_SLEEP);
	map->ma_len = mip->mi_type->mt_addr_length;
	bcopy(mip->mi_addr, map->ma_addr, map->ma_len);

	/*
	 * If driver advertises RINGS capability, it shouldn't have initialized
	 * its primary MAC address. For other drivers, including VNIC, the
	 * primary address must work after registration.
	 */
	if (mip->mi_rx_groups == NULL)
		map->ma_type = MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED;

	map->ma_mip = mip;

	mip->mi_addresses = map;
}

/*
 * Clean up the primary MAC address. Note, only one primary MAC address
 * is allowed. All other MAC addresses must have been freed appropriately.
 */
void
mac_fini_macaddr(mac_impl_t *mip)
{
	mac_address_t	*map = mip->mi_addresses;

	if (map == NULL)
		return;

	/*
	 * If mi_addresses is initialized, there should be exactly one
	 * entry left on the list with no users.
	 */
	ASSERT(map->ma_nusers == 0);
	ASSERT(map->ma_next == NULL);

	kmem_free(map, sizeof (mac_address_t));
	mip->mi_addresses = NULL;
}
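
/*
 * Illustrative, hypothetical helper (the function name is an assumption;
 * only mac_init_macaddr(), mac_find_macaddr() and mac_fini_macaddr() come
 * from the surrounding code): a sketch of the intended lifecycle of the
 * primary address entry across registration and unregistration.
 */
static void
mac_macaddr_lifecycle_example(mac_impl_t *mip)
{
	mac_address_t	*map;

	/* At registration: create the (inactive) primary address entry. */
	mac_init_macaddr(mip);

	/* While registered: the entry can be looked up by value. */
	map = mac_find_macaddr(mip, mip->mi_addr);
	ASSERT(map == mip->mi_addresses);

	/* At unregistration: the last, unused entry is freed. */
	mac_fini_macaddr(mip);
}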
5218 * Logging related functions.
5220 * Note that Kernel statistics have been extended to maintain fine
5221 * granularity of statistics viz. hardware lane, software lane, fanout
5222 * stats etc. However, extended accounting continues to support only
5223 * aggregate statistics like before.
5226 /* Write the flow description to a netinfo_t record */
5228 mac_write_flow_desc(flow_entry_t
*flent
, mac_client_impl_t
*mcip
)
5233 mac_resource_props_t
*mrp
;
5235 ninfo
= kmem_zalloc(sizeof (netinfo_t
), KM_NOSLEEP
);
5238 ndesc
= kmem_zalloc(sizeof (net_desc_t
), KM_NOSLEEP
);
5239 if (ndesc
== NULL
) {
5240 kmem_free(ninfo
, sizeof (netinfo_t
));
5245 * Grab the fe_lock to see a self-consistent fe_flow_desc.
5246 * Updates to the fe_flow_desc are done under the fe_lock
5248 mutex_enter(&flent
->fe_lock
);
5249 fdesc
= &flent
->fe_flow_desc
;
5250 mrp
= &flent
->fe_resource_props
;
5252 ndesc
->nd_name
= flent
->fe_flow_name
;
5253 ndesc
->nd_devname
= mcip
->mci_name
;
5254 bcopy(fdesc
->fd_src_mac
, ndesc
->nd_ehost
, ETHERADDRL
);
5255 bcopy(fdesc
->fd_dst_mac
, ndesc
->nd_edest
, ETHERADDRL
);
5256 ndesc
->nd_sap
= htonl(fdesc
->fd_sap
);
5257 ndesc
->nd_isv4
= (uint8_t)fdesc
->fd_ipversion
== IPV4_VERSION
;
5258 ndesc
->nd_bw_limit
= mrp
->mrp_maxbw
;
5259 if (ndesc
->nd_isv4
) {
5260 ndesc
->nd_saddr
[3] = htonl(fdesc
->fd_local_addr
.s6_addr32
[3]);
5261 ndesc
->nd_daddr
[3] = htonl(fdesc
->fd_remote_addr
.s6_addr32
[3]);
5263 bcopy(&fdesc
->fd_local_addr
, ndesc
->nd_saddr
, IPV6_ADDR_LEN
);
5264 bcopy(&fdesc
->fd_remote_addr
, ndesc
->nd_daddr
, IPV6_ADDR_LEN
);
5266 ndesc
->nd_sport
= htons(fdesc
->fd_local_port
);
5267 ndesc
->nd_dport
= htons(fdesc
->fd_remote_port
);
5268 ndesc
->nd_protocol
= (uint8_t)fdesc
->fd_protocol
;
5269 mutex_exit(&flent
->fe_lock
);
5271 ninfo
->ni_record
= ndesc
;
5272 ninfo
->ni_size
= sizeof (net_desc_t
);
5273 ninfo
->ni_type
= EX_NET_FLDESC_REC
;
5278 /* Write the flow statistics to a netinfo_t record */
5280 mac_write_flow_stats(flow_entry_t
*flent
)
5284 mac_soft_ring_set_t
*mac_srs
;
5285 mac_rx_stats_t
*mac_rx_stat
;
5286 mac_tx_stats_t
*mac_tx_stat
;
5289 ninfo
= kmem_zalloc(sizeof (netinfo_t
), KM_NOSLEEP
);
5292 nstat
= kmem_zalloc(sizeof (net_stat_t
), KM_NOSLEEP
);
5293 if (nstat
== NULL
) {
5294 kmem_free(ninfo
, sizeof (netinfo_t
));
5298 nstat
->ns_name
= flent
->fe_flow_name
;
5299 for (i
= 0; i
< flent
->fe_rx_srs_cnt
; i
++) {
5300 mac_srs
= (mac_soft_ring_set_t
*)flent
->fe_rx_srs
[i
];
5301 mac_rx_stat
= &mac_srs
->srs_rx
.sr_stat
;
5303 nstat
->ns_ibytes
+= mac_rx_stat
->mrs_intrbytes
+
5304 mac_rx_stat
->mrs_pollbytes
+ mac_rx_stat
->mrs_lclbytes
;
5305 nstat
->ns_ipackets
+= mac_rx_stat
->mrs_intrcnt
+
5306 mac_rx_stat
->mrs_pollcnt
+ mac_rx_stat
->mrs_lclcnt
;
5307 nstat
->ns_oerrors
+= mac_rx_stat
->mrs_ierrors
;
5310 mac_srs
= (mac_soft_ring_set_t
*)(flent
->fe_tx_srs
);
5311 if (mac_srs
!= NULL
) {
5312 mac_tx_stat
= &mac_srs
->srs_tx
.st_stat
;
5314 nstat
->ns_obytes
= mac_tx_stat
->mts_obytes
;
5315 nstat
->ns_opackets
= mac_tx_stat
->mts_opackets
;
5316 nstat
->ns_oerrors
= mac_tx_stat
->mts_oerrors
;
5319 ninfo
->ni_record
= nstat
;
5320 ninfo
->ni_size
= sizeof (net_stat_t
);
5321 ninfo
->ni_type
= EX_NET_FLSTAT_REC
;
5326 /* Write the link description to a netinfo_t record */
5328 mac_write_link_desc(mac_client_impl_t
*mcip
)
5332 flow_entry_t
*flent
= mcip
->mci_flent
;
5334 ninfo
= kmem_zalloc(sizeof (netinfo_t
), KM_NOSLEEP
);
5337 ndesc
= kmem_zalloc(sizeof (net_desc_t
), KM_NOSLEEP
);
5338 if (ndesc
== NULL
) {
5339 kmem_free(ninfo
, sizeof (netinfo_t
));
5343 ndesc
->nd_name
= mcip
->mci_name
;
5344 ndesc
->nd_devname
= mcip
->mci_name
;
5345 ndesc
->nd_isv4
= B_TRUE
;
5347 * Grab the fe_lock to see a self-consistent fe_flow_desc.
5348 * Updates to the fe_flow_desc are done under the fe_lock
5349 * after removing the flent from the flow table.
5351 mutex_enter(&flent
->fe_lock
);
5352 bcopy(flent
->fe_flow_desc
.fd_src_mac
, ndesc
->nd_ehost
, ETHERADDRL
);
5353 mutex_exit(&flent
->fe_lock
);
5355 ninfo
->ni_record
= ndesc
;
5356 ninfo
->ni_size
= sizeof (net_desc_t
);
5357 ninfo
->ni_type
= EX_NET_LNDESC_REC
;
5362 /* Write the link statistics to a netinfo_t record */
5364 mac_write_link_stats(mac_client_impl_t
*mcip
)
5368 flow_entry_t
*flent
;
5369 mac_soft_ring_set_t
*mac_srs
;
5370 mac_rx_stats_t
*mac_rx_stat
;
5371 mac_tx_stats_t
*mac_tx_stat
;
5374 ninfo
= kmem_zalloc(sizeof (netinfo_t
), KM_NOSLEEP
);
5377 nstat
= kmem_zalloc(sizeof (net_stat_t
), KM_NOSLEEP
);
5378 if (nstat
== NULL
) {
5379 kmem_free(ninfo
, sizeof (netinfo_t
));
5383 nstat
->ns_name
= mcip
->mci_name
;
5384 flent
= mcip
->mci_flent
;
5385 if (flent
!= NULL
) {
5386 for (i
= 0; i
< flent
->fe_rx_srs_cnt
; i
++) {
5387 mac_srs
= (mac_soft_ring_set_t
*)flent
->fe_rx_srs
[i
];
5388 mac_rx_stat
= &mac_srs
->srs_rx
.sr_stat
;
5390 nstat
->ns_ibytes
+= mac_rx_stat
->mrs_intrbytes
+
5391 mac_rx_stat
->mrs_pollbytes
+
5392 mac_rx_stat
->mrs_lclbytes
;
5393 nstat
->ns_ipackets
+= mac_rx_stat
->mrs_intrcnt
+
5394 mac_rx_stat
->mrs_pollcnt
+ mac_rx_stat
->mrs_lclcnt
;
5395 nstat
->ns_oerrors
+= mac_rx_stat
->mrs_ierrors
;
5399 mac_srs
= (mac_soft_ring_set_t
*)(mcip
->mci_flent
->fe_tx_srs
);
5400 if (mac_srs
!= NULL
) {
5401 mac_tx_stat
= &mac_srs
->srs_tx
.st_stat
;
5403 nstat
->ns_obytes
= mac_tx_stat
->mts_obytes
;
5404 nstat
->ns_opackets
= mac_tx_stat
->mts_opackets
;
5405 nstat
->ns_oerrors
= mac_tx_stat
->mts_oerrors
;
5408 ninfo
->ni_record
= nstat
;
5409 ninfo
->ni_size
= sizeof (net_stat_t
);
5410 ninfo
->ni_type
= EX_NET_LNSTAT_REC
;
5415 typedef struct i_mac_log_state_s
{
5420 } i_mac_log_state_t
;
5423 * For a given flow, if the description has not been logged before, do it now.
5424 * If it is a VNIC, then we have collected information about it from the MAC
5425 * table, so skip it.
5427 * Called through mac_flow_walk_nolock()
5429 * Return 0 if successful.
5432 mac_log_flowinfo(flow_entry_t
*flent
, void *arg
)
5434 mac_client_impl_t
*mcip
= flent
->fe_mcip
;
5435 i_mac_log_state_t
*lstate
= arg
;
5442 * If the name starts with "vnic", and fe_user_generated is true (to
5443 * exclude the mcast and active flow entries created implicitly for
5444 * a vnic, it is a VNIC flow. i.e. vnic1 is a vnic flow,
5445 * vnic/bge1/mcast1 is not and neither is vnic/bge1/active.
5447 if (strncasecmp(flent
->fe_flow_name
, "vnic", 4) == 0 &&
5448 (flent
->fe_type
& FLOW_USER
) != 0) {
5452 if (!flent
->fe_desc_logged
) {
5454 * We don't return error because we want to continue the
5455 * walk in case this is the last walk which means we
5456 * need to reset fe_desc_logged in all the flows.
5458 if ((ninfo
= mac_write_flow_desc(flent
, mcip
)) == NULL
)
5460 list_insert_tail(lstate
->mi_list
, ninfo
);
5461 flent
->fe_desc_logged
= B_TRUE
;
5465 * Regardless of the error, we want to proceed in case we have to
5466 * reset fe_desc_logged.
5468 ninfo
= mac_write_flow_stats(flent
);
5472 list_insert_tail(lstate
->mi_list
, ninfo
);
5474 if (mcip
!= NULL
&& !(mcip
->mci_state_flags
& MCIS_DESC_LOGGED
))
5475 flent
->fe_desc_logged
= B_FALSE
;
5481 * Log the description for each mac client of this mac_impl_t, if it
5482 * hasn't already been done. Additionally, log statistics for the link as
5483 * well. Walk the flow table and log information for each flow as well.
5484 * If it is the last walk (mci_last), then we turn off mci_desc_logged (and
5485 * also fe_desc_logged, if flow logging is on) since we want to log the
5486 * description if and when logging is restarted.
5488 * Return 0 upon success or -1 upon failure
5491 i_mac_impl_log(mac_impl_t
*mip
, i_mac_log_state_t
*lstate
)
5493 mac_client_impl_t
*mcip
;
5496 i_mac_perim_enter(mip
);
5498 * Only walk the client list for NIC and etherstub
5500 if ((mip
->mi_state_flags
& MIS_DISABLED
) ||
5501 ((mip
->mi_state_flags
& MIS_IS_VNIC
) &&
5502 (mac_get_lower_mac_handle((mac_handle_t
)mip
) != NULL
))) {
5503 i_mac_perim_exit(mip
);
5507 for (mcip
= mip
->mi_clients_list
; mcip
!= NULL
;
5508 mcip
= mcip
->mci_client_next
) {
5509 if (!MCIP_DATAPATH_SETUP(mcip
))
5511 if (lstate
->mi_lenable
) {
5512 if (!(mcip
->mci_state_flags
& MCIS_DESC_LOGGED
)) {
5513 ninfo
= mac_write_link_desc(mcip
);
5514 if (ninfo
== NULL
) {
5516 * We can't terminate it if this is the last
5517 * walk, else there might be some links with
5518 * mi_desc_logged set to true, which means
5519 * their description won't be logged the next
5520 * time logging is started (similarly for the
5521 * flows within such links). We can continue
5522 * without walking the flow table (i.e. to
5523 * set fe_desc_logged to false) because we
5524 * won't have written any flow stuff for this
5525 * link as we haven't logged the link itself.
5527 i_mac_perim_exit(mip
);
5528 if (lstate
->mi_last
)
5533 mcip
->mci_state_flags
|= MCIS_DESC_LOGGED
;
5534 list_insert_tail(lstate
->mi_list
, ninfo
);
5538 ninfo
= mac_write_link_stats(mcip
);
5539 if (ninfo
== NULL
&& !lstate
->mi_last
) {
5540 i_mac_perim_exit(mip
);
5543 list_insert_tail(lstate
->mi_list
, ninfo
);
5545 if (lstate
->mi_last
)
5546 mcip
->mci_state_flags
&= ~MCIS_DESC_LOGGED
;
5548 if (lstate
->mi_fenable
) {
5549 if (mcip
->mci_subflow_tab
!= NULL
) {
5550 (void) mac_flow_walk_nolock(
5551 mcip
->mci_subflow_tab
, mac_log_flowinfo
,
5556 i_mac_perim_exit(mip
);
/*
 * modhash walker function to add a mac_impl_t to a list
 */
static uint_t
i_mac_impl_list_walker(mod_hash_key_t key, mod_hash_val_t *val, void *arg)
{
	list_t		*list = (list_t *)arg;
	mac_impl_t	*mip = (mac_impl_t *)val;

	if ((mip->mi_state_flags & MIS_DISABLED) == 0) {
		list_insert_tail(list, mip);
	}

	return (MH_WALK_CONTINUE);
}
5579 i_mac_log_info(list_t
*net_log_list
, i_mac_log_state_t
*lstate
)
5581 list_t mac_impl_list
;
5585 /* Create list of mac_impls */
5586 ASSERT(RW_LOCK_HELD(&i_mac_impl_lock
));
5587 list_create(&mac_impl_list
, sizeof (mac_impl_t
), offsetof(mac_impl_t
,
5589 mod_hash_walk(i_mac_impl_hash
, i_mac_impl_list_walker
, &mac_impl_list
);
5590 rw_exit(&i_mac_impl_lock
);
5592 /* Create log entries for each mac_impl */
5593 for (mip
= list_head(&mac_impl_list
); mip
!= NULL
;
5594 mip
= list_next(&mac_impl_list
, mip
)) {
5595 if (i_mac_impl_log(mip
, lstate
) != 0)
5599 /* Remove elements and destroy list of mac_impls */
5600 rw_enter(&i_mac_impl_lock
, RW_WRITER
);
5601 while ((mip
= list_remove_tail(&mac_impl_list
)) != NULL
) {
5604 rw_exit(&i_mac_impl_lock
);
5605 list_destroy(&mac_impl_list
);
5608 * Write log entries to files outside of locks, free associated
5609 * structures, and remove entries from the list.
5611 while ((ninfo
= list_head(net_log_list
)) != NULL
) {
5612 (void) exacct_commit_netinfo(ninfo
->ni_record
, ninfo
->ni_type
);
5613 list_remove(net_log_list
, ninfo
);
5614 kmem_free(ninfo
->ni_record
, ninfo
->ni_size
);
5615 kmem_free(ninfo
, sizeof (*ninfo
));
5617 list_destroy(net_log_list
);
5621 * The timer thread that runs every mac_logging_interval seconds and logs
5622 * link and/or flow information.
5626 mac_log_linkinfo(void *arg
)
5628 i_mac_log_state_t lstate
;
5629 list_t net_log_list
;
5631 list_create(&net_log_list
, sizeof (netinfo_t
),
5632 offsetof(netinfo_t
, ni_link
));
5634 rw_enter(&i_mac_impl_lock
, RW_READER
);
5635 if (!mac_flow_log_enable
&& !mac_link_log_enable
) {
5636 rw_exit(&i_mac_impl_lock
);
5639 lstate
.mi_fenable
= mac_flow_log_enable
;
5640 lstate
.mi_lenable
= mac_link_log_enable
;
5641 lstate
.mi_last
= B_FALSE
;
5642 lstate
.mi_list
= &net_log_list
;
5644 /* Write log entries for each mac_impl in the list */
5645 i_mac_log_info(&net_log_list
, &lstate
);
5647 if (mac_flow_log_enable
|| mac_link_log_enable
) {
5648 mac_logging_timer
= timeout(mac_log_linkinfo
, NULL
,
5649 SEC_TO_TICK(mac_logging_interval
));
typedef struct i_mac_fastpath_state_s {
	boolean_t	mf_disable;
	int		mf_err;
} i_mac_fastpath_state_t;

/* modhash walker function to enable or disable fastpath */
static uint_t
i_mac_fastpath_walker(mod_hash_key_t key, mod_hash_val_t *val,
    void *arg)
{
	i_mac_fastpath_state_t	*state = arg;
	mac_handle_t		mh = (mac_handle_t)val;

	if (state->mf_disable)
		state->mf_err = mac_fastpath_disable(mh);
	else
		mac_fastpath_enable(mh);

	return (state->mf_err == 0 ? MH_WALK_CONTINUE : MH_WALK_TERMINATE);
}
5676 * Start the logging timer.
5679 mac_start_logusage(mac_logtype_t type
, uint_t interval
)
5681 i_mac_fastpath_state_t dstate
= {B_TRUE
, 0};
5682 i_mac_fastpath_state_t estate
= {B_FALSE
, 0};
5685 rw_enter(&i_mac_impl_lock
, RW_WRITER
);
5687 case MAC_LOGTYPE_FLOW
:
5688 if (mac_flow_log_enable
) {
5689 rw_exit(&i_mac_impl_lock
);
5693 case MAC_LOGTYPE_LINK
:
5694 if (mac_link_log_enable
) {
5695 rw_exit(&i_mac_impl_lock
);
5703 /* Disable fastpath */
5704 mod_hash_walk(i_mac_impl_hash
, i_mac_fastpath_walker
, &dstate
);
5705 if ((err
= dstate
.mf_err
) != 0) {
5706 /* Reenable fastpath */
5707 mod_hash_walk(i_mac_impl_hash
, i_mac_fastpath_walker
, &estate
);
5708 rw_exit(&i_mac_impl_lock
);
5713 case MAC_LOGTYPE_FLOW
:
5714 mac_flow_log_enable
= B_TRUE
;
5716 case MAC_LOGTYPE_LINK
:
5717 mac_link_log_enable
= B_TRUE
;
5721 mac_logging_interval
= interval
;
5722 rw_exit(&i_mac_impl_lock
);
5723 mac_log_linkinfo(NULL
);
5728 * Stop the logging timer if both link and flow logging are turned off.
5731 mac_stop_logusage(mac_logtype_t type
)
5733 i_mac_log_state_t lstate
;
5734 i_mac_fastpath_state_t estate
= {B_FALSE
, 0};
5735 list_t net_log_list
;
5737 list_create(&net_log_list
, sizeof (netinfo_t
),
5738 offsetof(netinfo_t
, ni_link
));
5740 rw_enter(&i_mac_impl_lock
, RW_WRITER
);
5742 lstate
.mi_fenable
= mac_flow_log_enable
;
5743 lstate
.mi_lenable
= mac_link_log_enable
;
5744 lstate
.mi_list
= &net_log_list
;
5747 lstate
.mi_last
= B_TRUE
;
5750 case MAC_LOGTYPE_FLOW
:
5751 if (lstate
.mi_fenable
) {
5752 ASSERT(mac_link_log_enable
);
5753 mac_flow_log_enable
= B_FALSE
;
5754 mac_link_log_enable
= B_FALSE
;
5758 case MAC_LOGTYPE_LINK
:
5759 if (!lstate
.mi_lenable
|| mac_flow_log_enable
) {
5760 rw_exit(&i_mac_impl_lock
);
5763 mac_link_log_enable
= B_FALSE
;
5769 /* Reenable fastpath */
5770 mod_hash_walk(i_mac_impl_hash
, i_mac_fastpath_walker
, &estate
);
5772 (void) untimeout(mac_logging_timer
);
5773 mac_logging_timer
= 0;
5775 /* Write log entries for each mac_impl in the list */
5776 i_mac_log_info(&net_log_list
, &lstate
);
5780 * Walk the rx and tx SRS/SRs for a flow and update the priority value.
5783 mac_flow_update_priority(mac_client_impl_t
*mcip
, flow_entry_t
*flent
)
5787 mac_soft_ring_set_t
*mac_srs
;
5789 if (flent
->fe_rx_srs_cnt
<= 0)
5792 if (((mac_soft_ring_set_t
*)flent
->fe_rx_srs
[0])->srs_type
==
5794 pri
= FLOW_PRIORITY(mcip
->mci_min_pri
,
5796 flent
->fe_resource_props
.mrp_priority
);
5798 pri
= mcip
->mci_max_pri
;
5801 for (count
= 0; count
< flent
->fe_rx_srs_cnt
; count
++) {
5802 mac_srs
= flent
->fe_rx_srs
[count
];
5803 mac_update_srs_priority(mac_srs
, pri
);
5806 * If we have a Tx SRS, we need to modify all the threads associated
5809 if (flent
->fe_tx_srs
!= NULL
)
5810 mac_update_srs_priority(flent
->fe_tx_srs
, pri
);
/*
 * RX and TX rings are reserved according to different semantics depending
 * on the requests from the MAC clients and type of rings:
 *
 * On the Tx side, by default we reserve individual rings, independently from
 * the groups.
 *
 * On the Rx side, the reservation is at the granularity of the group
 * of rings, and used for v12n level 1 only. It has a special case for the
 * primary client.
 *
 * If a share is allocated to a MAC client, we allocate a TX group and an
 * RX group to the client, and assign TX rings and RX rings to these
 * groups according to information gathered from the driver through
 * the share capability.
 *
 * The foreseeable evolution of Rx rings will handle v12n level 2 and higher
 * to allocate individual rings out of a group and program the hw classifier
 * based on IP address or higher level criteria.
 */

/*
 * mac_reserve_tx_ring()
 * Reserve an unused ring by marking it with the MR_INUSE state.
 * As reserved, the ring is ready to function.
 *
 * Notes for Hybrid I/O:
 *
 * If a specific ring is needed, it is specified through the desired_ring
 * argument. Otherwise that argument is set to NULL.
 * If the desired ring was previously allocated to another client, this
 * function swaps it with a new ring from the group of unassigned rings.
 */
static mac_ring_t *
mac_reserve_tx_ring(mac_impl_t *mip, mac_ring_t *desired_ring)
{
5850 mac_grp_client_t
*mgcp
;
5851 mac_client_impl_t
*mcip
;
5852 mac_soft_ring_set_t
*srs
;
5854 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
5857 * Find an available ring and start it before changing its status.
5858 * The unassigned rings are at the end of the mi_tx_groups
5861 group
= MAC_DEFAULT_TX_GROUP(mip
);
5863 /* Can't take the default ring out of the default group */
5864 ASSERT(desired_ring
!= (mac_ring_t
*)mip
->mi_default_tx_ring
);
5866 if (desired_ring
->mr_state
== MR_FREE
) {
5867 ASSERT(MAC_GROUP_NO_CLIENT(group
));
5868 if (mac_start_ring(desired_ring
) != 0)
5870 return (desired_ring
);
5873 * There are clients using this ring, so let's move the clients
5874 * away from using this ring.
5876 for (mgcp
= group
->mrg_clients
; mgcp
!= NULL
; mgcp
= mgcp
->mgc_next
) {
5877 mcip
= mgcp
->mgc_client
;
5878 mac_tx_client_quiesce((mac_client_handle_t
)mcip
);
5879 srs
= MCIP_TX_SRS(mcip
);
5880 ASSERT(mac_tx_srs_ring_present(srs
, desired_ring
));
5881 mac_tx_invoke_callbacks(mcip
,
5882 (mac_tx_cookie_t
)mac_tx_srs_get_soft_ring(srs
,
5884 mac_tx_srs_del_ring(srs
, desired_ring
);
5885 mac_tx_client_restart((mac_client_handle_t
)mcip
);
5887 return (desired_ring
);
/*
 * For a reserved group with multiple clients, return the primary client.
 */
static mac_client_impl_t *
mac_get_grp_primary(mac_group_t *grp)
{
	mac_grp_client_t	*mgcp = grp->mrg_clients;
	mac_client_impl_t	*mcip;

	while (mgcp != NULL) {
		mcip = mgcp->mgc_client;
		if (mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC)
			return (mcip);
		mgcp = mgcp->mgc_next;
	}

	return (NULL);
}
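
/*
 * Illustrative, hypothetical wrapper (the name is an assumption): a sketch
 * of the idiom used by callers below — prefer the single client of a
 * reserved group and fall back to the primary client when the group is
 * shared with other clients (e.g. the primary and its VLANs).
 */
static mac_client_impl_t *
mac_group_owner_example(mac_group_t *grp)
{
	mac_client_impl_t	*mcip;

	/* Exactly one client: that client owns the group. */
	mcip = MAC_GROUP_ONLY_CLIENT(grp);
	if (mcip == NULL) {
		/* Shared group: fall back to the primary client. */
		mcip = mac_get_grp_primary(grp);
	}
	return (mcip);
}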
5909 * Hybrid I/O specifies the ring that should be given to a share.
5910 * If the ring is already used by clients, then we need to release
5911 * the ring back to the default group so that we can give it to
5912 * the share. This means the clients using this ring now get a
5913 * replacement ring. If there aren't any replacement rings, this
5914 * function returns a failure.
5917 mac_reclaim_ring_from_grp(mac_impl_t
*mip
, mac_ring_type_t ring_type
,
5918 mac_ring_t
*ring
, mac_ring_t
**rings
, int nrings
)
5920 mac_group_t
*group
= (mac_group_t
*)ring
->mr_gh
;
5921 mac_resource_props_t
*mrp
;
5922 mac_client_impl_t
*mcip
;
5923 mac_group_t
*defgrp
;
5929 mcip
= MAC_GROUP_ONLY_CLIENT(group
);
5931 mcip
= mac_get_grp_primary(group
);
5932 ASSERT(mcip
!= NULL
);
5933 ASSERT(mcip
->mci_share
== NULL
);
5935 mrp
= MCIP_RESOURCE_PROPS(mcip
);
5936 if (ring_type
== MAC_RING_TYPE_RX
) {
5937 defgrp
= mip
->mi_rx_donor_grp
;
5938 if ((mrp
->mrp_mask
& MRP_RX_RINGS
) == 0) {
5939 /* Need to put this mac client in the default group */
5940 if (mac_rx_switch_group(mcip
, group
, defgrp
) != 0)
5944 * Switch this ring with some other ring from
5945 * the default group.
5947 for (tring
= defgrp
->mrg_rings
; tring
!= NULL
;
5948 tring
= tring
->mr_next
) {
5949 if (tring
->mr_index
== 0)
5951 for (j
= 0; j
< nrings
; j
++) {
5952 if (rings
[j
] == tring
)
5960 if (mac_group_mov_ring(mip
, group
, tring
) != 0)
5962 if (mac_group_mov_ring(mip
, defgrp
, ring
) != 0) {
5963 (void) mac_group_mov_ring(mip
, defgrp
, tring
);
5967 ASSERT(ring
->mr_gh
== (mac_group_handle_t
)defgrp
);
5971 defgrp
= MAC_DEFAULT_TX_GROUP(mip
);
5972 if (ring
== (mac_ring_t
*)mip
->mi_default_tx_ring
) {
5974 * See if we can get a spare ring to replace the default
5977 if (defgrp
->mrg_cur_count
== 1) {
5979 * Need to get a ring from another client, see if
5980 * there are any clients that can be moved to
5981 * the default group, thereby freeing some rings.
5983 for (i
= 0; i
< mip
->mi_tx_group_count
; i
++) {
5984 tgrp
= &mip
->mi_tx_groups
[i
];
5985 if (tgrp
->mrg_state
==
5986 MAC_GROUP_STATE_REGISTERED
) {
5989 mcip
= MAC_GROUP_ONLY_CLIENT(tgrp
);
5991 mcip
= mac_get_grp_primary(tgrp
);
5992 ASSERT(mcip
!= NULL
);
5993 mrp
= MCIP_RESOURCE_PROPS(mcip
);
5994 if ((mrp
->mrp_mask
& MRP_TX_RINGS
) == 0) {
5995 ASSERT(tgrp
->mrg_cur_count
== 1);
5997 * If this ring is part of the
5998 * rings asked by the share we cannot
5999 * use it as the default ring.
6001 for (j
= 0; j
< nrings
; j
++) {
6002 if (rings
[j
] == tgrp
->mrg_rings
)
6007 mac_tx_client_quiesce(
6008 (mac_client_handle_t
)mcip
);
6009 mac_tx_switch_group(mcip
, tgrp
,
6011 mac_tx_client_restart(
6012 (mac_client_handle_t
)mcip
);
6017 * All the rings are reserved, can't give up the
6020 if (defgrp
->mrg_cur_count
<= 1)
6024 * Swap the default ring with another.
6026 for (tring
= defgrp
->mrg_rings
; tring
!= NULL
;
6027 tring
= tring
->mr_next
) {
6029 * If this ring is part of the rings asked by the
6030 * share we cannot use it as the default ring.
6032 for (j
= 0; j
< nrings
; j
++) {
6033 if (rings
[j
] == tring
)
6039 ASSERT(tring
!= NULL
);
6040 mip
->mi_default_tx_ring
= (mac_ring_handle_t
)tring
;
6044 * The Tx ring is with a group reserved by a MAC client. See if
6047 ASSERT(group
->mrg_state
== MAC_GROUP_STATE_RESERVED
);
6048 mcip
= MAC_GROUP_ONLY_CLIENT(group
);
6050 mcip
= mac_get_grp_primary(group
);
6051 ASSERT(mcip
!= NULL
);
6052 mrp
= MCIP_RESOURCE_PROPS(mcip
);
6053 mac_tx_client_quiesce((mac_client_handle_t
)mcip
);
6054 if ((mrp
->mrp_mask
& MRP_TX_RINGS
) == 0) {
6055 ASSERT(group
->mrg_cur_count
== 1);
6056 /* Put this mac client in the default group */
6057 mac_tx_switch_group(mcip
, group
, defgrp
);
6060 * Switch this ring with some other ring from
6061 * the default group.
6063 for (tring
= defgrp
->mrg_rings
; tring
!= NULL
;
6064 tring
= tring
->mr_next
) {
6065 if (tring
== (mac_ring_t
*)mip
->mi_default_tx_ring
)
6068 * If this ring is part of the rings asked by the
6069 * share we cannot use it for swapping.
6071 for (j
= 0; j
< nrings
; j
++) {
6072 if (rings
[j
] == tring
)
6078 if (tring
== NULL
) {
6079 mac_tx_client_restart((mac_client_handle_t
)mcip
);
6082 if (mac_group_mov_ring(mip
, group
, tring
) != 0) {
6083 mac_tx_client_restart((mac_client_handle_t
)mcip
);
6086 if (mac_group_mov_ring(mip
, defgrp
, ring
) != 0) {
6087 (void) mac_group_mov_ring(mip
, defgrp
, tring
);
6088 mac_tx_client_restart((mac_client_handle_t
)mcip
);
6092 mac_tx_client_restart((mac_client_handle_t
)mcip
);
6093 ASSERT(ring
->mr_gh
== (mac_group_handle_t
)defgrp
);
6098 * Populate a zero-ring group with rings. If the share is non-NULL,
6099 * the rings are chosen according to that share.
6100 * Invoked after allocating a new RX or TX group through
6101 * mac_reserve_rx_group() or mac_reserve_tx_group(), respectively.
6102 * Returns zero on success, an errno otherwise.
6105 i_mac_group_allocate_rings(mac_impl_t
*mip
, mac_ring_type_t ring_type
,
6106 mac_group_t
*src_group
, mac_group_t
*new_group
, mac_share_handle_t share
,
6109 mac_ring_t
**rings
, *ring
;
6111 int rv
= 0, i
= 0, j
;
6113 ASSERT((ring_type
== MAC_RING_TYPE_RX
&&
6114 mip
->mi_rx_group_type
== MAC_GROUP_TYPE_DYNAMIC
) ||
6115 (ring_type
== MAC_RING_TYPE_TX
&&
6116 mip
->mi_tx_group_type
== MAC_GROUP_TYPE_DYNAMIC
));
6119 * First find the rings to allocate to the group.
6121 if (share
!= NULL
) {
6122 /* get rings through ms_squery() */
6123 mip
->mi_share_capab
.ms_squery(share
, ring_type
, NULL
, &nrings
);
6124 ASSERT(nrings
!= 0);
6125 rings
= kmem_alloc(nrings
* sizeof (mac_ring_handle_t
),
6127 mip
->mi_share_capab
.ms_squery(share
, ring_type
,
6128 (mac_ring_handle_t
*)rings
, &nrings
);
6129 for (i
= 0; i
< nrings
; i
++) {
6131 * If we have given this ring to a non-default
6132 * group, we need to check if we can get this
6136 if (ring
->mr_gh
!= (mac_group_handle_t
)src_group
||
6137 ring
== (mac_ring_t
*)mip
->mi_default_tx_ring
) {
6138 if (mac_reclaim_ring_from_grp(mip
, ring_type
,
6139 ring
, rings
, nrings
) != 0) {
6147 * Pick one ring from default group.
6149 * for now pick the second ring which requires the first ring
6150 * at index 0 to stay in the default group, since it is the
6151 * ring which carries the multicast traffic.
6152 * We need a better way for a driver to indicate this,
6153 * for example a per-ring flag.
6155 rings
= kmem_alloc(ringcnt
* sizeof (mac_ring_handle_t
),
6157 for (ring
= src_group
->mrg_rings
; ring
!= NULL
;
6158 ring
= ring
->mr_next
) {
6159 if (ring_type
== MAC_RING_TYPE_RX
&&
6160 ring
->mr_index
== 0) {
6163 if (ring_type
== MAC_RING_TYPE_TX
&&
6164 ring
== (mac_ring_t
*)mip
->mi_default_tx_ring
) {
6171 ASSERT(ring
!= NULL
);
6173 /* Not enough rings as required */
6174 if (nrings
!= ringcnt
) {
6180 switch (ring_type
) {
6181 case MAC_RING_TYPE_RX
:
6182 if (src_group
->mrg_cur_count
- nrings
< 1) {
6183 /* we ran out of rings */
6188 /* move receive rings to new group */
6189 for (i
= 0; i
< nrings
; i
++) {
6190 rv
= mac_group_mov_ring(mip
, new_group
, rings
[i
]);
6192 /* move rings back on failure */
6193 for (j
= 0; j
< i
; j
++) {
6194 (void) mac_group_mov_ring(mip
,
6195 src_group
, rings
[j
]);
6202 case MAC_RING_TYPE_TX
: {
6203 mac_ring_t
*tmp_ring
;
6205 /* move the TX rings to the new group */
6206 for (i
= 0; i
< nrings
; i
++) {
6207 /* get the desired ring */
6208 tmp_ring
= mac_reserve_tx_ring(mip
, rings
[i
]);
6209 if (tmp_ring
== NULL
) {
6213 ASSERT(tmp_ring
== rings
[i
]);
6214 rv
= mac_group_mov_ring(mip
, new_group
, rings
[i
]);
6216 /* cleanup on failure */
6217 for (j
= 0; j
< i
; j
++) {
6218 (void) mac_group_mov_ring(mip
,
6219 MAC_DEFAULT_TX_GROUP(mip
),
6229 /* add group to share */
6231 mip
->mi_share_capab
.ms_sadd(share
, new_group
->mrg_driver
);
6234 /* free temporary array of rings */
6235 kmem_free(rings
, nrings
* sizeof (mac_ring_handle_t
));
void
mac_group_add_client(mac_group_t *grp, mac_client_impl_t *mcip)
{
	mac_grp_client_t	*mgcp;

	for (mgcp = grp->mrg_clients; mgcp != NULL; mgcp = mgcp->mgc_next) {
		if (mgcp->mgc_client == mcip)
			break;
	}

	VERIFY(mgcp == NULL);

	mgcp = kmem_zalloc(sizeof (mac_grp_client_t), KM_SLEEP);
	mgcp->mgc_client = mcip;
	mgcp->mgc_next = grp->mrg_clients;
	grp->mrg_clients = mgcp;
}

void
mac_group_remove_client(mac_group_t *grp, mac_client_impl_t *mcip)
{
	mac_grp_client_t	*mgcp, **pprev;

	for (pprev = &grp->mrg_clients, mgcp = *pprev; mgcp != NULL;
	    pprev = &mgcp->mgc_next, mgcp = *pprev) {
		if (mgcp->mgc_client == mcip)
			break;
	}

	ASSERT(mgcp != NULL);

	*pprev = mgcp->mgc_next;
	kmem_free(mgcp, sizeof (mac_grp_client_t));
}
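
/*
 * Illustrative, hypothetical helper (the name is an assumption): a sketch
 * of how the two list primitives above are meant to be paired when a client
 * is moved from one group to another; the real switch paths (e.g.
 * mac_rx_switch_group()) do this as part of a larger sequence.
 */
static void
mac_group_move_client_example(mac_group_t *fgrp, mac_group_t *tgrp,
    mac_client_impl_t *mcip)
{
	/* Drop the client from the source group's client list. */
	mac_group_remove_client(fgrp, mcip);

	/* Add it to the destination group and repoint its flow entry. */
	mac_group_add_client(tgrp, mcip);
	mcip->mci_flent->fe_rx_ring_group = tgrp;
}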
/*
 * mac_reserve_rx_group()
 *
 * Finds an available group and exclusively reserves it for a client.
 * The group is chosen to suit the flow's resource controls (bandwidth and
 * fanout requirements) and the address type.
 * If the requestor is the primary MAC then return the group with the
 * largest number of rings, otherwise the default ring when available.
 */
mac_group_t *
mac_reserve_rx_group(mac_client_impl_t *mcip, uint8_t *mac_addr, boolean_t move)
{
6288 mac_share_handle_t share
= mcip
->mci_share
;
6289 mac_impl_t
*mip
= mcip
->mci_mip
;
6290 mac_group_t
*grp
= NULL
;
6294 mac_resource_props_t
*mrp
= MCIP_RESOURCE_PROPS(mcip
);
6297 boolean_t need_exclgrp
= B_FALSE
;
6299 mac_group_t
*candidate_grp
= NULL
;
6300 mac_client_impl_t
*gclient
;
6301 mac_resource_props_t
*gmrp
;
6302 mac_group_t
*donorgrp
= NULL
;
6303 boolean_t rxhw
= mrp
->mrp_mask
& MRP_RX_RINGS
;
6304 boolean_t unspec
= mrp
->mrp_mask
& MRP_RXRINGS_UNSPEC
;
6305 boolean_t isprimary
;
6307 ASSERT(MAC_PERIM_HELD((mac_handle_t
)mip
));
6309 isprimary
= mcip
->mci_flent
->fe_type
& FLOW_PRIMARY_MAC
;
6312 * Check if a group already has this mac address (case of VLANs)
6313 * unless we are moving this MAC client from one group to another.
6315 if (!move
&& (map
= mac_find_macaddr(mip
, mac_addr
)) != NULL
) {
6316 if (map
->ma_group
!= NULL
)
6317 return (map
->ma_group
);
6319 if (mip
->mi_rx_groups
== NULL
|| mip
->mi_rx_group_count
== 0)
6322 * If exclusive open, return NULL which will enable the
6323 * caller to use the default group.
6325 if (mcip
->mci_state_flags
& MCIS_EXCLUSIVE
)
6328 /* For dynamic groups default unspecified to 1 */
6329 if (rxhw
&& unspec
&&
6330 mip
->mi_rx_group_type
== MAC_GROUP_TYPE_DYNAMIC
) {
6331 mrp
->mrp_nrxrings
= 1;
6334 * For static grouping we allow only specifying rings=0 and
6337 if (rxhw
&& mrp
->mrp_nrxrings
> 0 &&
6338 mip
->mi_rx_group_type
== MAC_GROUP_TYPE_STATIC
) {
6343 * We have explicitly asked for a group (with nrxrings,
6346 if (unspec
|| mrp
->mrp_nrxrings
> 0) {
6347 need_exclgrp
= B_TRUE
;
6348 need_rings
= mrp
->mrp_nrxrings
;
6349 } else if (mrp
->mrp_nrxrings
== 0) {
6351 * We have asked for a software group.
6355 } else if (isprimary
&& mip
->mi_nactiveclients
== 1 &&
6356 mip
->mi_rx_group_type
== MAC_GROUP_TYPE_DYNAMIC
) {
6358 * If the primary is the only active client on this
6359 * mip and we have not asked for any rings, we give
6360 * it the default group so that the primary gets to
6361 * use all the rings.
6366 /* The group that can donate rings */
6367 donorgrp
= mip
->mi_rx_donor_grp
;
6370 * The number of rings that the default group can donate.
6371 * We need to leave at least one ring.
6373 donor_grp_rcnt
= donorgrp
->mrg_cur_count
- 1;
	/*
	 * Try to exclusively reserve a RX group.
	 *
	 * For flows requiring HW_DEFAULT_RING (unicast flow of the primary
	 * client), try to reserve a non-default RX group and give
	 * it all the rings from the donor group, except the default ring.
	 *
	 * For flows requiring HW_RING (unicast flow of other clients), try
	 * to reserve a non-default RX group with the specified number of
	 * rings, if available.
	 *
	 * For flows that have not asked for software or hardware ring,
	 * try to reserve a non-default group with 1 ring, if available.
	 */
6389 for (i
= 1; i
< mip
->mi_rx_group_count
; i
++) {
6390 grp
= &mip
->mi_rx_groups
[i
];
6392 DTRACE_PROBE3(rx__group__trying
, char *, mip
->mi_name
,
6393 int, grp
->mrg_index
, mac_group_state_t
, grp
->mrg_state
);
6396 * Check if this group could be a candidate group for
6397 * eviction if we need a group for this MAC client,
6398 * but there aren't any. A candidate group is one
6399 * that didn't ask for an exclusive group, but got
6400 * one and it has enough rings (combined with what
6401 * the donor group can donate) for the new MAC
6404 if (grp
->mrg_state
>= MAC_GROUP_STATE_RESERVED
) {
6406 * If the primary/donor group is not the default
6407 * group, don't bother looking for a candidate group.
6408 * If we don't have enough rings we will check
6409 * if the primary group can be vacated.
6411 if (candidate_grp
== NULL
&&
6412 donorgrp
== MAC_DEFAULT_RX_GROUP(mip
)) {
6413 ASSERT(!MAC_GROUP_NO_CLIENT(grp
));
6414 gclient
= MAC_GROUP_ONLY_CLIENT(grp
);
6415 if (gclient
== NULL
)
6416 gclient
= mac_get_grp_primary(grp
);
6417 ASSERT(gclient
!= NULL
);
6418 gmrp
= MCIP_RESOURCE_PROPS(gclient
);
6419 if (gclient
->mci_share
== NULL
&&
6420 (gmrp
->mrp_mask
& MRP_RX_RINGS
) == 0 &&
6422 (grp
->mrg_cur_count
+ donor_grp_rcnt
>=
6424 candidate_grp
= grp
;
6430 * This group could already be SHARED by other multicast
6431 * flows on this client. In that case, the group would
6432 * be shared and has already been started.
6434 ASSERT(grp
->mrg_state
!= MAC_GROUP_STATE_UNINIT
);
6436 if ((grp
->mrg_state
== MAC_GROUP_STATE_REGISTERED
) &&
6437 (mac_start_group(grp
) != 0)) {
6441 if (mip
->mi_rx_group_type
!= MAC_GROUP_TYPE_DYNAMIC
)
6443 ASSERT(grp
->mrg_cur_count
== 0);
6446 * Populate the group. Rings should be taken
6447 * from the donor group.
6449 nrings
= rxhw
? need_rings
: isprimary
? donor_grp_rcnt
: 1;
		/*
		 * If the donor group can't donate, let's just walk and
		 * see if someone can vacate a group, so that we have
		 * enough rings for this, unless we already have
		 * identified a candidate group.
		 */
6457 if (nrings
<= donor_grp_rcnt
) {
6458 err
= i_mac_group_allocate_rings(mip
, MAC_RING_TYPE_RX
,
6459 donorgrp
, grp
, share
, nrings
);
6462 * For a share i_mac_group_allocate_rings gets
6463 * the rings from the driver, let's populate
6464 * the property for the client now.
6466 if (share
!= NULL
) {
6467 mac_client_set_rings(
6468 (mac_client_handle_t
)mcip
,
6469 grp
->mrg_cur_count
, -1);
6471 if (mac_is_primary_client(mcip
) && !rxhw
)
6472 mip
->mi_rx_donor_grp
= grp
;
6477 DTRACE_PROBE3(rx__group__reserve__alloc__rings
, char *,
6478 mip
->mi_name
, int, grp
->mrg_index
, int, err
);
6481 * It's a dynamic group but the grouping operation
6484 mac_stop_group(grp
);
6486 /* We didn't find an exclusive group for this MAC client */
6487 if (i
>= mip
->mi_rx_group_count
) {
6493 * If we found a candidate group then we switch the
6494 * MAC client from the candidate_group to the default
6495 * group and give the group to this MAC client. If
6496 * we didn't find a candidate_group, check if the
6497 * primary is in its own group and if it can make way
6498 * for this MAC client.
6500 if (candidate_grp
== NULL
&&
6501 donorgrp
!= MAC_DEFAULT_RX_GROUP(mip
) &&
6502 donorgrp
->mrg_cur_count
>= need_rings
) {
6503 candidate_grp
= donorgrp
;
6505 if (candidate_grp
!= NULL
) {
6506 boolean_t prim_grp
= B_FALSE
;
6509 * Switch the MAC client from the candidate group
6510 * to the default group.. If this group was the
6511 * donor group, then after the switch we need
6512 * to update the donor group too.
6514 grp
= candidate_grp
;
6515 gclient
= MAC_GROUP_ONLY_CLIENT(grp
);
6516 if (gclient
== NULL
)
6517 gclient
= mac_get_grp_primary(grp
);
6518 if (grp
== mip
->mi_rx_donor_grp
)
6520 if (mac_rx_switch_group(gclient
, grp
,
6521 MAC_DEFAULT_RX_GROUP(mip
)) != 0) {
6525 mip
->mi_rx_donor_grp
=
6526 MAC_DEFAULT_RX_GROUP(mip
);
6527 donorgrp
= MAC_DEFAULT_RX_GROUP(mip
);
6532 * Now give this group with the required rings
6533 * to this MAC client.
6535 ASSERT(grp
->mrg_state
== MAC_GROUP_STATE_REGISTERED
);
6536 if (mac_start_group(grp
) != 0)
6539 if (mip
->mi_rx_group_type
!= MAC_GROUP_TYPE_DYNAMIC
)
6542 donor_grp_rcnt
= donorgrp
->mrg_cur_count
- 1;
6543 ASSERT(grp
->mrg_cur_count
== 0);
6544 ASSERT(donor_grp_rcnt
>= need_rings
);
6545 err
= i_mac_group_allocate_rings(mip
, MAC_RING_TYPE_RX
,
6546 donorgrp
, grp
, share
, need_rings
);
6549 * For a share i_mac_group_allocate_rings gets
6550 * the rings from the driver, let's populate
6551 * the property for the client now.
6553 if (share
!= NULL
) {
6554 mac_client_set_rings(
6555 (mac_client_handle_t
)mcip
,
6556 grp
->mrg_cur_count
, -1);
6558 DTRACE_PROBE2(rx__group__reserved
,
6559 char *, mip
->mi_name
, int, grp
->mrg_index
);
6562 DTRACE_PROBE3(rx__group__reserve__alloc__rings
, char *,
6563 mip
->mi_name
, int, grp
->mrg_index
, int, err
);
6564 mac_stop_group(grp
);
6568 ASSERT(grp
!= NULL
);
6570 DTRACE_PROBE2(rx__group__reserved
,
6571 char *, mip
->mi_name
, int, grp
->mrg_index
);
/*
 * mac_release_rx_group()
 *
 * This is called when there are no clients left for the group.
 * The group is stopped and marked MAC_GROUP_STATE_REGISTERED,
 * and if it is a non-default group, the shares are removed and
 * all rings are assigned back to the default group.
 */
void
mac_release_rx_group(mac_client_impl_t *mcip, mac_group_t *group)
{
	mac_impl_t	*mip = mcip->mci_mip;
	mac_ring_t	*ring;
6589 ASSERT(group
!= MAC_DEFAULT_RX_GROUP(mip
));
6591 if (mip
->mi_rx_donor_grp
== group
)
6592 mip
->mi_rx_donor_grp
= MAC_DEFAULT_RX_GROUP(mip
);
	/*
	 * This is the case where there are no clients left. Any
	 * SRS etc. on this group have also been quiesced.
	 */
6598 for (ring
= group
->mrg_rings
; ring
!= NULL
; ring
= ring
->mr_next
) {
6599 if (ring
->mr_classify_type
== MAC_HW_CLASSIFIER
) {
6600 ASSERT(group
->mrg_state
== MAC_GROUP_STATE_RESERVED
);
6602 * Remove the SRS associated with the HW ring.
6603 * As a result, polling will be disabled.
6605 ring
->mr_srs
= NULL
;
6607 ASSERT(group
->mrg_state
< MAC_GROUP_STATE_RESERVED
||
6608 ring
->mr_state
== MR_INUSE
);
6609 if (ring
->mr_state
== MR_INUSE
) {
6610 mac_stop_ring(ring
);
6615 /* remove group from share */
6616 if (mcip
->mci_share
!= NULL
) {
6617 mip
->mi_share_capab
.ms_sremove(mcip
->mci_share
,
6621 if (mip
->mi_rx_group_type
== MAC_GROUP_TYPE_DYNAMIC
) {
6625 * Rings were dynamically allocated to group.
6626 * Move rings back to default group.
6628 while ((ring
= group
->mrg_rings
) != NULL
) {
6629 (void) mac_group_mov_ring(mip
, mip
->mi_rx_donor_grp
,
6633 mac_stop_group(group
);
	/*
	 * Possible improvement: See if we can assign the group just released
	 * to another client of the mip.
	 */
6641 * When we move the primary's mac address between groups, we need to also
6642 * take all the clients sharing the same mac address along with it (VLANs)
6643 * We remove the mac address for such clients from the group after quiescing
6644 * them. When we add the mac address we restart the client. Note that
6645 * the primary's mac address is removed from the group after all the
6646 * other clients sharing the address are removed. Similarly, the primary's
6647 * mac address is added before all the other client's mac address are
6648 * added. While grp is the group where the clients reside, tgrp is
6649 * the group where the addresses have to be added.
6652 mac_rx_move_macaddr_prim(mac_client_impl_t
*mcip
, mac_group_t
*grp
,
6653 mac_group_t
*tgrp
, uint8_t *maddr
, boolean_t add
)
6655 mac_impl_t
*mip
= mcip
->mci_mip
;
6656 mac_grp_client_t
*mgcp
= grp
->mrg_clients
;
6657 mac_client_impl_t
*gmcip
;
6660 prim
= (mcip
->mci_state_flags
& MCIS_UNICAST_HW
) != 0;
	/*
	 * If the clients are in a non-default group, we just have to
	 * walk the group's client list. If it is in the default group
	 * (which will be shared by other clients as well), we need to
	 * check if the unicast address matches mcip's unicast.
	 */
6668 while (mgcp
!= NULL
) {
6669 gmcip
= mgcp
->mgc_client
;
6670 if (gmcip
!= mcip
&&
6671 (grp
!= MAC_DEFAULT_RX_GROUP(mip
) ||
6672 mcip
->mci_unicast
== gmcip
->mci_unicast
)) {
6674 mac_rx_client_quiesce(
6675 (mac_client_handle_t
)gmcip
);
6676 (void) mac_remove_macaddr(mcip
->mci_unicast
);
6678 (void) mac_add_macaddr(mip
, tgrp
, maddr
, prim
);
6679 mac_rx_client_restart(
6680 (mac_client_handle_t
)gmcip
);
6683 mgcp
= mgcp
->mgc_next
;
/*
 * Move the MAC address from fgrp to tgrp. If this is the primary client,
 * we need to take any VLANs etc. together too.
 */
static int
mac_rx_move_macaddr(mac_client_impl_t *mcip, mac_group_t *fgrp,
    mac_group_t *tgrp)
{
	mac_impl_t		*mip = mcip->mci_mip;
	uint8_t			maddr[MAXMACADDRLEN];
	int			err = 0;
	boolean_t		prim;
	boolean_t		multiclnt = B_FALSE;

	mac_rx_client_quiesce((mac_client_handle_t)mcip);
	ASSERT(mcip->mci_unicast != NULL);
	bcopy(mcip->mci_unicast->ma_addr, maddr, mcip->mci_unicast->ma_len);

	prim = (mcip->mci_state_flags & MCIS_UNICAST_HW) != 0;
	if (mcip->mci_unicast->ma_nusers > 1) {
		mac_rx_move_macaddr_prim(mcip, fgrp, NULL, maddr, B_FALSE);
		multiclnt = B_TRUE;
	}
	ASSERT(mcip->mci_unicast->ma_nusers == 1);
	err = mac_remove_macaddr(mcip->mci_unicast);
	if (err != 0) {
		mac_rx_client_restart((mac_client_handle_t)mcip);
		if (multiclnt) {
			mac_rx_move_macaddr_prim(mcip, fgrp, fgrp, maddr,
			    B_TRUE);
		}
		return (err);
	}
	/*
	 * Program the H/W Classifier first, if this fails we need
	 * not proceed with the other stuff.
	 */
	if ((err = mac_add_macaddr(mip, tgrp, maddr, prim)) != 0) {
		/* Revert back the H/W Classifier */
		if ((err = mac_add_macaddr(mip, fgrp, maddr, prim)) != 0) {
			/*
			 * This should not fail now since it worked earlier,
			 * should we panic?
			 */
			cmn_err(CE_WARN,
			    "mac_rx_switch_group: switching %p back"
			    " to group %p failed!!", (void *)mcip,
			    (void *)fgrp);
		}
		mac_rx_client_restart((mac_client_handle_t)mcip);
		if (multiclnt) {
			mac_rx_move_macaddr_prim(mcip, fgrp, fgrp, maddr,
			    B_TRUE);
		}
		return (err);
	}
	mcip->mci_unicast = mac_find_macaddr(mip, maddr);
	mac_rx_client_restart((mac_client_handle_t)mcip);
	if (multiclnt)
		mac_rx_move_macaddr_prim(mcip, fgrp, tgrp, maddr, B_TRUE);
	return (err);
}
/*
 * Switch the MAC client from one group to another. This means we need
 * to remove the MAC address from the group, remove the MAC client,
 * teardown the SRSs and revert the group state. Then, we add the client
 * to the destination group, set the SRSs, and add the MAC address to the
 * group.
 */
int
mac_rx_switch_group(mac_client_impl_t *mcip, mac_group_t *fgrp,
    mac_group_t *tgrp)
{
	int			err;
	mac_group_state_t	next_state;
	mac_client_impl_t	*group_only_mcip;
	mac_client_impl_t	*gmcip;
	mac_impl_t		*mip = mcip->mci_mip;
	mac_grp_client_t	*mgcp;

	ASSERT(fgrp == mcip->mci_flent->fe_rx_ring_group);

	if ((err = mac_rx_move_macaddr(mcip, fgrp, tgrp)) != 0)
		return (err);

	/*
	 * The group might be reserved, but SRSs may not be set up, e.g.
	 * primary and its vlans using a reserved group.
	 */
	if (fgrp->mrg_state == MAC_GROUP_STATE_RESERVED &&
	    MAC_GROUP_ONLY_CLIENT(fgrp) != NULL) {
		mac_rx_srs_group_teardown(mcip->mci_flent, B_TRUE);
	}
	if (fgrp != MAC_DEFAULT_RX_GROUP(mip)) {
		mgcp = fgrp->mrg_clients;
		while (mgcp != NULL) {
			gmcip = mgcp->mgc_client;
			mgcp = mgcp->mgc_next;
			mac_group_remove_client(fgrp, gmcip);
			mac_group_add_client(tgrp, gmcip);
			gmcip->mci_flent->fe_rx_ring_group = tgrp;
		}
		mac_release_rx_group(mcip, fgrp);
		ASSERT(MAC_GROUP_NO_CLIENT(fgrp));
		mac_set_group_state(fgrp, MAC_GROUP_STATE_REGISTERED);
	} else {
		mac_group_remove_client(fgrp, mcip);
		mac_group_add_client(tgrp, mcip);
		mcip->mci_flent->fe_rx_ring_group = tgrp;
		/*
		 * If there are other clients (VLANs) sharing this address
		 * we should be here only for the primary.
		 */
		if (mcip->mci_unicast->ma_nusers > 1) {
			/*
			 * We need to move all the clients that are using
			 * this unicast address.
			 */
			mgcp = fgrp->mrg_clients;
			while (mgcp != NULL) {
				gmcip = mgcp->mgc_client;
				mgcp = mgcp->mgc_next;
				if (mcip->mci_unicast == gmcip->mci_unicast) {
					mac_group_remove_client(fgrp, gmcip);
					mac_group_add_client(tgrp, gmcip);
					gmcip->mci_flent->fe_rx_ring_group =
					    tgrp;
				}
			}
		}
		/*
		 * The default group will still take the multicast,
		 * broadcast traffic etc., so it won't go to
		 * MAC_GROUP_STATE_REGISTERED.
		 */
		if (fgrp->mrg_state == MAC_GROUP_STATE_RESERVED)
			mac_rx_group_unmark(fgrp, MR_CONDEMNED);
		mac_set_group_state(fgrp, MAC_GROUP_STATE_SHARED);
	}
	next_state = mac_group_next_state(tgrp, &group_only_mcip,
	    MAC_DEFAULT_RX_GROUP(mip), B_TRUE);
	mac_set_group_state(tgrp, next_state);
	/*
	 * If the destination group is reserved, setup the SRSs etc.
	 */
	if (tgrp->mrg_state == MAC_GROUP_STATE_RESERVED) {
		mac_rx_srs_group_setup(mcip, mcip->mci_flent, SRST_LINK);
		mac_fanout_setup(mcip, mcip->mci_flent,
		    MCIP_RESOURCE_PROPS(mcip), mac_rx_deliver, mcip, NULL,
		    NULL);
		mac_rx_group_unmark(tgrp, MR_INCIPIENT);
	} else {
		mac_rx_switch_grp_to_sw(tgrp);
	}
	return (0);
}
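
/*
 * Illustrative sketch (editor's addition, not part of the original source):
 * the typical calling pattern around mac_rx_switch_group(), mirroring how
 * mac_check_primary_relocation() later in this file uses it -- reserve a
 * target group first, switch the client, and stop the new group if the
 * switch fails. The wrapper function and variable names are hypothetical.
 */
#if 0
static int
example_relocate_rx_client(mac_client_impl_t *mcip, mac_group_t *fgrp,
    uint8_t *mac_addr)
{
	mac_group_t	*ngrp;

	/* Reserve an exclusive RX group for this client's MAC address. */
	ngrp = mac_reserve_rx_group(mcip, mac_addr, B_TRUE);
	if (ngrp == NULL)
		return (ENOSPC);

	/* Move the address, the client and its SRSs to the new group. */
	if (mac_rx_switch_group(mcip, fgrp, ngrp) != 0) {
		mac_stop_group(ngrp);
		return (ENOSPC);
	}
	return (0);
}
#endif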
/*
 * Reserves a TX group for the specified share. Invoked by mac_tx_srs_setup()
 * when a share was allocated to the client.
 */
mac_group_t *
mac_reserve_tx_group(mac_client_impl_t *mcip, boolean_t move)
{
	mac_impl_t		*mip = mcip->mci_mip;
	mac_group_t		*grp = NULL;
	int			rv;
	int			i;
	int			err;
	mac_group_t		*defgrp;
	mac_share_handle_t	share = mcip->mci_share;
	mac_resource_props_t	*mrp = MCIP_RESOURCE_PROPS(mcip);
	int			nrings;
	int			defnrings;
	boolean_t		need_exclgrp = B_FALSE;
	int			need_rings = 0;
	mac_group_t		*candidate_grp = NULL;
	mac_client_impl_t	*gclient;
	mac_resource_props_t	*gmrp;
	boolean_t		txhw = mrp->mrp_mask & MRP_TX_RINGS;
	boolean_t		unspec = mrp->mrp_mask & MRP_TXRINGS_UNSPEC;
	boolean_t		isprimary;

	isprimary = mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC;
	/*
	 * When we come here for a VLAN on the primary (dladm create-vlan),
	 * we need to pair it along with the primary (to keep it consistent
	 * with the RX side). So, we check if the primary is already assigned
	 * to a group and return the group if so. The other way is also
	 * true, i.e. the VLAN is already created and now we are plumbing
	 * the primary.
	 */
	if (!move && isprimary) {
		for (gclient = mip->mi_clients_list; gclient != NULL;
		    gclient = gclient->mci_client_next) {
			if (gclient->mci_flent->fe_type & FLOW_PRIMARY_MAC &&
			    gclient->mci_flent->fe_tx_ring_group != NULL) {
				return (gclient->mci_flent->fe_tx_ring_group);
			}
		}
	}

	if (mip->mi_tx_groups == NULL || mip->mi_tx_group_count == 0)
		return (NULL);

	/* For dynamic groups, default unspec to 1 */
	if (txhw && unspec &&
	    mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
		mrp->mrp_ntxrings = 1;
	}
	/*
	 * For static grouping we allow only specifying rings=0 and
	 * unspecified.
	 */
	if (txhw && mrp->mrp_ntxrings > 0 &&
	    mip->mi_tx_group_type == MAC_GROUP_TYPE_STATIC) {
		return (NULL);
	}
	if (txhw) {
		/*
		 * We have explicitly asked for a group (with ntxrings,
		 * if unspec).
		 */
		if (unspec || mrp->mrp_ntxrings > 0) {
			need_exclgrp = B_TRUE;
			need_rings = mrp->mrp_ntxrings;
		} else if (mrp->mrp_ntxrings == 0) {
			/*
			 * We have asked for a software group.
			 */
			return (NULL);
		}
	}
	defgrp = MAC_DEFAULT_TX_GROUP(mip);
	/*
	 * The number of rings that the default group can donate.
	 * We need to leave at least one ring - the default ring - in
	 * this group.
	 */
	defnrings = defgrp->mrg_cur_count - 1;

	/*
	 * Primary gets default group unless explicitly told not
	 * to (i.e. rings > 0).
	 */
	if (isprimary && !need_exclgrp)
		return (NULL);

	nrings = (mrp->mrp_mask & MRP_TX_RINGS) != 0 ? mrp->mrp_ntxrings : 1;
	for (i = 0; i < mip->mi_tx_group_count; i++) {
		grp = &mip->mi_tx_groups[i];
		if ((grp->mrg_state == MAC_GROUP_STATE_RESERVED) ||
		    (grp->mrg_state == MAC_GROUP_STATE_UNINIT)) {
			/*
			 * Select a candidate for replacement if we don't
			 * get an exclusive group. A candidate group is one
			 * that didn't ask for an exclusive group, but got
			 * one and it has enough rings (combined with what
			 * the default group can donate) for the new MAC
			 * client.
			 */
			if (grp->mrg_state == MAC_GROUP_STATE_RESERVED &&
			    candidate_grp == NULL) {
				gclient = MAC_GROUP_ONLY_CLIENT(grp);
				if (gclient == NULL)
					gclient = mac_get_grp_primary(grp);
				gmrp = MCIP_RESOURCE_PROPS(gclient);
				if (gclient->mci_share == NULL &&
				    (gmrp->mrp_mask & MRP_TX_RINGS) == 0 &&
				    (unspec ||
				    (grp->mrg_cur_count + defnrings) >=
				    need_rings)) {
					candidate_grp = grp;
				}
			}
			continue;
		}
		/*
		 * If the default can't donate let's just walk and
		 * see if someone can vacate a group, so that we have
		 * enough rings for this.
		 */
		if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC ||
		    nrings <= defnrings) {
			if (grp->mrg_state == MAC_GROUP_STATE_REGISTERED) {
				rv = mac_start_group(grp);
				ASSERT(rv == 0);
			}
			break;
		}
	}

	/* The default group */
	if (i >= mip->mi_tx_group_count) {
		/*
		 * If we need an exclusive group and have identified a
		 * candidate group we switch the MAC client from the
		 * candidate group to the default group and give the
		 * candidate group to this client.
		 */
		if (need_exclgrp && candidate_grp != NULL) {
			/*
			 * Switch the MAC client from the candidate group
			 * to the default group.
			 */
			grp = candidate_grp;
			gclient = MAC_GROUP_ONLY_CLIENT(grp);
			if (gclient == NULL)
				gclient = mac_get_grp_primary(grp);
			mac_tx_client_quiesce((mac_client_handle_t)gclient);
			mac_tx_switch_group(gclient, grp, defgrp);
			mac_tx_client_restart((mac_client_handle_t)gclient);

			/*
			 * Give the candidate group with the specified number
			 * of rings to this MAC client.
			 */
			ASSERT(grp->mrg_state == MAC_GROUP_STATE_REGISTERED);
			rv = mac_start_group(grp);
			ASSERT(rv == 0);

			if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC)
				return (grp);

			ASSERT(grp->mrg_cur_count == 0);
			ASSERT(defgrp->mrg_cur_count > need_rings);

			err = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_TX,
			    defgrp, grp, share, need_rings);
			if (err == 0) {
				/*
				 * For a share i_mac_group_allocate_rings gets
				 * the rings from the driver, let's populate
				 * the property for the client now.
				 */
				if (share != NULL) {
					mac_client_set_rings(
					    (mac_client_handle_t)mcip, -1,
					    grp->mrg_cur_count);
				}
				mip->mi_tx_group_free--;
				return (grp);
			}
			DTRACE_PROBE3(tx__group__reserve__alloc__rings, char *,
			    mip->mi_name, int, grp->mrg_index, int, err);
			mac_stop_group(grp);
		}
		return (NULL);
	}
	/*
	 * We got an exclusive group, but it is not dynamic.
	 */
	if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC) {
		mip->mi_tx_group_free--;
		return (grp);
	}

	rv = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_TX, defgrp, grp,
	    share, nrings);
	if (rv != 0) {
		DTRACE_PROBE3(tx__group__reserve__alloc__rings,
		    char *, mip->mi_name, int, grp->mrg_index, int, rv);
		mac_stop_group(grp);
		return (NULL);
	}
	/*
	 * For a share i_mac_group_allocate_rings gets the rings from the
	 * driver, let's populate the property for the client now.
	 */
	if (share != NULL) {
		mac_client_set_rings((mac_client_handle_t)mcip, -1,
		    grp->mrg_cur_count);
	}
	mip->mi_tx_group_free--;
	return (grp);
}
void
mac_release_tx_group(mac_client_impl_t *mcip, mac_group_t *grp)
{
	mac_impl_t		*mip = mcip->mci_mip;
	mac_share_handle_t	share = mcip->mci_share;
	mac_ring_t		*ring;
	mac_soft_ring_set_t	*srs = MCIP_TX_SRS(mcip);
	mac_group_t		*defgrp;

	defgrp = MAC_DEFAULT_TX_GROUP(mip);
	if (srs->srs_soft_ring_count > 0) {
		for (ring = grp->mrg_rings; ring != NULL;
		    ring = ring->mr_next) {
			ASSERT(mac_tx_srs_ring_present(srs, ring));
			mac_tx_invoke_callbacks(mcip,
			    (mac_tx_cookie_t)
			    mac_tx_srs_get_soft_ring(srs, ring));
			mac_tx_srs_del_ring(srs, ring);
		}
	} else {
		ASSERT(srs->srs_tx.st_arg2 != NULL);
		srs->srs_tx.st_arg2 = NULL;
		mac_srs_stat_delete(srs);
	}
	if (share != NULL)
		mip->mi_share_capab.ms_sremove(share, grp->mrg_driver);

	/* move the ring back to the pool */
	if (mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
		while ((ring = grp->mrg_rings) != NULL)
			(void) mac_group_mov_ring(mip, defgrp, ring);
	}
	mac_stop_group(grp);
	mip->mi_tx_group_free++;
}
/*
 * Disassociate a MAC client from a group, i.e. go through the rings in the
 * group and delete all the soft rings tied to them.
 */
static void
mac_tx_dismantle_soft_rings(mac_group_t *fgrp, flow_entry_t *flent)
{
	mac_client_impl_t	*mcip = flent->fe_mcip;
	mac_soft_ring_set_t	*tx_srs;
	mac_srs_tx_t		*tx;
	mac_ring_t		*ring;

	tx_srs = flent->fe_tx_srs;
	tx = &tx_srs->srs_tx;

	/* Single ring case we haven't created any soft rings */
	if (tx->st_mode == SRS_TX_BW || tx->st_mode == SRS_TX_SERIALIZE ||
	    tx->st_mode == SRS_TX_DEFAULT) {
		tx->st_arg2 = NULL;
		mac_srs_stat_delete(tx_srs);
	/* Fanout case, where we have to dismantle the soft rings */
	} else {
		for (ring = fgrp->mrg_rings; ring != NULL;
		    ring = ring->mr_next) {
			ASSERT(mac_tx_srs_ring_present(tx_srs, ring));
			mac_tx_invoke_callbacks(mcip,
			    (mac_tx_cookie_t)mac_tx_srs_get_soft_ring(tx_srs,
			    ring));
			mac_tx_srs_del_ring(tx_srs, ring);
		}
		ASSERT(tx->st_arg2 == NULL);
	}
}
/*
 * Switch the MAC client from one group to another. This means we need
 * to remove the MAC client, teardown the SRSs and revert the group state.
 * Then, we add the client to the destination group, set the SRSs etc.
 */
void
mac_tx_switch_group(mac_client_impl_t *mcip, mac_group_t *fgrp,
    mac_group_t *tgrp)
{
	mac_client_impl_t	*group_only_mcip;
	mac_impl_t		*mip = mcip->mci_mip;
	flow_entry_t		*flent = mcip->mci_flent;
	mac_group_t		*defgrp;
	mac_grp_client_t	*mgcp;
	mac_client_impl_t	*gmcip;
	flow_entry_t		*gflent;

	defgrp = MAC_DEFAULT_TX_GROUP(mip);
	ASSERT(fgrp == flent->fe_tx_ring_group);

	if (fgrp == defgrp) {
		/*
		 * If this is the primary we need to find any VLANs on
		 * the primary and move them too.
		 */
		mac_group_remove_client(fgrp, mcip);
		mac_tx_dismantle_soft_rings(fgrp, flent);
		if (mcip->mci_unicast->ma_nusers > 1) {
			mgcp = fgrp->mrg_clients;
			while (mgcp != NULL) {
				gmcip = mgcp->mgc_client;
				mgcp = mgcp->mgc_next;
				if (mcip->mci_unicast != gmcip->mci_unicast)
					continue;
				mac_tx_client_quiesce(
				    (mac_client_handle_t)gmcip);

				gflent = gmcip->mci_flent;
				mac_group_remove_client(fgrp, gmcip);
				mac_tx_dismantle_soft_rings(fgrp, gflent);

				mac_group_add_client(tgrp, gmcip);
				gflent->fe_tx_ring_group = tgrp;
				/* We could directly set this to SHARED */
				tgrp->mrg_state = mac_group_next_state(tgrp,
				    &group_only_mcip, defgrp, B_FALSE);

				mac_tx_srs_group_setup(gmcip, gflent,
				    SRST_LINK);
				mac_fanout_setup(gmcip, gflent,
				    MCIP_RESOURCE_PROPS(gmcip), mac_rx_deliver,
				    gmcip, NULL, NULL);

				mac_tx_client_restart(
				    (mac_client_handle_t)gmcip);
			}
		}
		if (MAC_GROUP_NO_CLIENT(fgrp)) {
			mac_ring_t	*ring;
			int		cnt;
			int		ringcnt;

			fgrp->mrg_state = MAC_GROUP_STATE_REGISTERED;
			/*
			 * Additionally, we also need to stop all
			 * the rings in the default group, except
			 * the default ring. The reason being
			 * this group won't be released since it is
			 * the default group, so the rings won't
			 * be stopped otherwise.
			 */
			ringcnt = fgrp->mrg_cur_count;
			ring = fgrp->mrg_rings;
			for (cnt = 0; cnt < ringcnt; cnt++) {
				if (ring->mr_state == MR_INUSE &&
				    ring !=
				    (mac_ring_t *)mip->mi_default_tx_ring) {
					mac_stop_ring(ring);
					ring->mr_flag = 0;
				}
				ring = ring->mr_next;
			}
		} else if (MAC_GROUP_ONLY_CLIENT(fgrp) != NULL) {
			fgrp->mrg_state = MAC_GROUP_STATE_RESERVED;
		} else {
			ASSERT(fgrp->mrg_state == MAC_GROUP_STATE_SHARED);
		}
	} else {
		/*
		 * We could have VLANs sharing the non-default group with
		 * the primary.
		 */
		mgcp = fgrp->mrg_clients;
		while (mgcp != NULL) {
			gmcip = mgcp->mgc_client;
			mgcp = mgcp->mgc_next;
			if (gmcip == mcip)
				continue;
			mac_tx_client_quiesce((mac_client_handle_t)gmcip);
			gflent = gmcip->mci_flent;

			mac_group_remove_client(fgrp, gmcip);
			mac_tx_dismantle_soft_rings(fgrp, gflent);

			mac_group_add_client(tgrp, gmcip);
			gflent->fe_tx_ring_group = tgrp;
			/* We could directly set this to SHARED */
			tgrp->mrg_state = mac_group_next_state(tgrp,
			    &group_only_mcip, defgrp, B_FALSE);
			mac_tx_srs_group_setup(gmcip, gflent, SRST_LINK);
			mac_fanout_setup(gmcip, gflent,
			    MCIP_RESOURCE_PROPS(gmcip), mac_rx_deliver,
			    gmcip, NULL, NULL);

			mac_tx_client_restart((mac_client_handle_t)gmcip);
		}
		mac_group_remove_client(fgrp, mcip);
		mac_release_tx_group(mcip, fgrp);
		fgrp->mrg_state = MAC_GROUP_STATE_REGISTERED;
	}

	/* Add it to the tgroup */
	mac_group_add_client(tgrp, mcip);
	flent->fe_tx_ring_group = tgrp;
	tgrp->mrg_state = mac_group_next_state(tgrp, &group_only_mcip,
	    defgrp, B_FALSE);

	mac_tx_srs_group_setup(mcip, flent, SRST_LINK);
	mac_fanout_setup(mcip, flent, MCIP_RESOURCE_PROPS(mcip),
	    mac_rx_deliver, mcip, NULL, NULL);
}
/*
 * This is a 1-time control path activity initiated by the client (IP).
 * The mac perimeter protects against other simultaneous control activities,
 * for example an ioctl that attempts to change the degree of fanout and
 * increase or decrease the number of softrings associated with this Tx SRS.
 */
static mac_tx_notify_cb_t *
mac_client_tx_notify_add(mac_client_impl_t *mcip,
    mac_tx_notify_t notify, void *arg)
{
	mac_cb_info_t *mcbi;
	mac_tx_notify_cb_t *mtnfp;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	mtnfp = kmem_zalloc(sizeof (mac_tx_notify_cb_t), KM_SLEEP);
	mtnfp->mtnf_fn = notify;
	mtnfp->mtnf_arg = arg;
	mtnfp->mtnf_link.mcb_objp = mtnfp;
	mtnfp->mtnf_link.mcb_objsize = sizeof (mac_tx_notify_cb_t);
	mtnfp->mtnf_link.mcb_flags = MCB_TX_NOTIFY_CB_T;

	mcbi = &mcip->mci_tx_notify_cb_info;
	mutex_enter(mcbi->mcbi_lockp);
	mac_callback_add(mcbi, &mcip->mci_tx_notify_cb_list, &mtnfp->mtnf_link);
	mutex_exit(mcbi->mcbi_lockp);
	return (mtnfp);
}
static void
mac_client_tx_notify_remove(mac_client_impl_t *mcip, mac_tx_notify_cb_t *mtnfp)
{
	mac_cb_info_t	*mcbi;
	mac_cb_t	**cblist;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	if (!mac_callback_find(&mcip->mci_tx_notify_cb_info,
	    &mcip->mci_tx_notify_cb_list, &mtnfp->mtnf_link)) {
		cmn_err(CE_WARN,
		    "mac_client_tx_notify_remove: callback not "
		    "found, mcip 0x%p mtnfp 0x%p", (void *)mcip, (void *)mtnfp);
		return;
	}

	mcbi = &mcip->mci_tx_notify_cb_info;
	cblist = &mcip->mci_tx_notify_cb_list;
	mutex_enter(mcbi->mcbi_lockp);
	if (mac_callback_remove(mcbi, cblist, &mtnfp->mtnf_link))
		kmem_free(mtnfp, sizeof (mac_tx_notify_cb_t));
	else
		mac_callback_remove_wait(&mcip->mci_tx_notify_cb_info);
	mutex_exit(mcbi->mcbi_lockp);
}
/*
 * mac_client_tx_notify():
 * call to add and remove the flow control callback routine.
 */
mac_tx_notify_handle_t
mac_client_tx_notify(mac_client_handle_t mch, mac_tx_notify_t callb_func,
    void *ptr)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;
	mac_tx_notify_cb_t	*mtnfp = NULL;

	i_mac_perim_enter(mcip->mci_mip);

	if (callb_func != NULL) {
		/* Add a notify callback */
		mtnfp = mac_client_tx_notify_add(mcip, callb_func, ptr);
	} else {
		mac_client_tx_notify_remove(mcip, (mac_tx_notify_cb_t *)ptr);
	}
	i_mac_perim_exit(mcip->mci_mip);

	return ((mac_tx_notify_handle_t)mtnfp);
}
void
mac_bridge_vectors(mac_bridge_tx_t txf, mac_bridge_rx_t rxf,
    mac_bridge_ref_t reff, mac_bridge_ls_t lsf)
{
	mac_bridge_tx_cb = txf;
	mac_bridge_rx_cb = rxf;
	mac_bridge_ref_cb = reff;
	mac_bridge_ls_cb = lsf;
}
int
mac_bridge_set(mac_handle_t mh, mac_handle_t link)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	int		retv;

	mutex_enter(&mip->mi_bridge_lock);
	if (mip->mi_bridge_link == NULL) {
		mip->mi_bridge_link = link;
		retv = 0;
	} else {
		retv = EBUSY;
	}
	mutex_exit(&mip->mi_bridge_lock);
	if (retv == 0) {
		mac_poll_state_change(mh, B_FALSE);
		mac_capab_update(mh);
	}
	return (retv);
}
/*
 * Disable bridging on the indicated link.
 */
void
mac_bridge_clear(mac_handle_t mh, mac_handle_t link)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	mutex_enter(&mip->mi_bridge_lock);
	ASSERT(mip->mi_bridge_link == link);
	mip->mi_bridge_link = NULL;
	mutex_exit(&mip->mi_bridge_lock);
	mac_poll_state_change(mh, B_TRUE);
	mac_capab_update(mh);
}
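
/*
 * Illustrative sketch (editor's addition, not part of the original source):
 * a bridge module is expected to pair mac_bridge_set() and
 * mac_bridge_clear() around the lifetime of a bridge link, checking the
 * nonzero (busy) return from mac_bridge_set() when a bridge link is
 * already attached. The wrapper function name is hypothetical.
 */
#if 0
static int
example_attach_bridge(mac_handle_t mh, mac_handle_t blink)
{
	int	err;

	/* Fails if a bridge link is already set on this MAC. */
	if ((err = mac_bridge_set(mh, blink)) != 0)
		return (err);

	/* ... bridge is active; on teardown, undo the association. */
	mac_bridge_clear(mh, blink);
	return (0);
}
#endif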
void
mac_no_active(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	i_mac_perim_enter(mip);
	mip->mi_state_flags |= MIS_NO_ACTIVE;
	i_mac_perim_exit(mip);
}
/*
 * Walk the primary VLAN clients whenever the primary's rings property
 * changes and update the mac_resource_props_t for the VLAN's client.
 * We need to do this since we don't support setting these properties
 * on the primary's VLAN clients, but the VLAN clients have to
 * follow the primary w.r.t. the rings property.
 */
void
mac_set_prim_vlan_rings(mac_impl_t *mip, mac_resource_props_t *mrp)
{
	mac_client_impl_t	*vmcip;
	mac_resource_props_t	*vmrp;

	for (vmcip = mip->mi_clients_list; vmcip != NULL;
	    vmcip = vmcip->mci_client_next) {
		if (!(vmcip->mci_flent->fe_type & FLOW_PRIMARY_MAC) ||
		    mac_client_vid((mac_client_handle_t)vmcip) ==
		    VLAN_ID_NONE) {
			continue;
		}
		vmrp = MCIP_RESOURCE_PROPS(vmcip);

		vmrp->mrp_nrxrings = mrp->mrp_nrxrings;
		if (mrp->mrp_mask & MRP_RX_RINGS)
			vmrp->mrp_mask |= MRP_RX_RINGS;
		else if (vmrp->mrp_mask & MRP_RX_RINGS)
			vmrp->mrp_mask &= ~MRP_RX_RINGS;

		vmrp->mrp_ntxrings = mrp->mrp_ntxrings;
		if (mrp->mrp_mask & MRP_TX_RINGS)
			vmrp->mrp_mask |= MRP_TX_RINGS;
		else if (vmrp->mrp_mask & MRP_TX_RINGS)
			vmrp->mrp_mask &= ~MRP_TX_RINGS;

		if (mrp->mrp_mask & MRP_RXRINGS_UNSPEC)
			vmrp->mrp_mask |= MRP_RXRINGS_UNSPEC;
		else
			vmrp->mrp_mask &= ~MRP_RXRINGS_UNSPEC;

		if (mrp->mrp_mask & MRP_TXRINGS_UNSPEC)
			vmrp->mrp_mask |= MRP_TXRINGS_UNSPEC;
		else
			vmrp->mrp_mask &= ~MRP_TXRINGS_UNSPEC;
	}
}
/*
 * We are adding or removing ring(s) from a group. The source for taking
 * rings is the default group. The destination for giving rings back is
 * the default group.
 */
int
mac_group_ring_modify(mac_client_impl_t *mcip, mac_group_t *group,
    mac_group_t *defgrp)
{
	mac_resource_props_t	*mrp = MCIP_RESOURCE_PROPS(mcip);
	uint_t			modify;
	int			count;
	mac_ring_t		*ring;
	mac_ring_t		*next;
	mac_impl_t		*mip = mcip->mci_mip;
	mac_ring_t		**rings;
	uint_t			ringcnt;
	int			i = 0;
	boolean_t		rx_group = group->mrg_type == MAC_RING_TYPE_RX;
	int			start;
	int			end;
	mac_group_t		*tgrp;
	int			j;
	int			rv = 0;

	/*
	 * If we are asked for just a group, we give 1 ring, else
	 * the specified number of rings.
	 */
	if (rx_group) {
		ringcnt = (mrp->mrp_mask & MRP_RXRINGS_UNSPEC) ? 1:
		    mrp->mrp_nrxrings;
	} else {
		ringcnt = (mrp->mrp_mask & MRP_TXRINGS_UNSPEC) ? 1:
		    mrp->mrp_ntxrings;
	}

	/* don't allow modifying rings for a share for now. */
	ASSERT(mcip->mci_share == NULL);

	if (ringcnt == group->mrg_cur_count)
		return (0);

	if (group->mrg_cur_count > ringcnt) {
		modify = group->mrg_cur_count - ringcnt;
		if (rx_group) {
			if (mip->mi_rx_donor_grp == group) {
				ASSERT(mac_is_primary_client(mcip));
				mip->mi_rx_donor_grp = defgrp;
			} else {
				defgrp = mip->mi_rx_donor_grp;
			}
		}
		ring = group->mrg_rings;
		rings = kmem_alloc(modify * sizeof (mac_ring_handle_t),
		    KM_SLEEP);
		j = 0;
		for (count = 0; count < modify; count++) {
			next = ring->mr_next;
			rv = mac_group_mov_ring(mip, defgrp, ring);
			if (rv != 0) {
				/* cleanup on failure */
				for (j = 0; j < count; j++) {
					(void) mac_group_mov_ring(mip, group,
					    rings[j]);
				}
				break;
			}
			rings[j++] = ring;
			ring = next;
		}
		kmem_free(rings, modify * sizeof (mac_ring_handle_t));
		return (rv);
	}
	if (ringcnt >= MAX_RINGS_PER_GROUP)
		return (EINVAL);

	modify = ringcnt - group->mrg_cur_count;

	if (rx_group) {
		if (group != mip->mi_rx_donor_grp) {
			defgrp = mip->mi_rx_donor_grp;
		} else {
			/*
			 * This is the donor group with all the remaining
			 * rings. Default group now gets to be the donor.
			 */
			mip->mi_rx_donor_grp = defgrp;
		}
		start = 1;
		end = mip->mi_rx_group_count;
	} else {
		start = 0;
		end = mip->mi_tx_group_count - 1;
	}
	/*
	 * If the default doesn't have any rings, let's see if we can
	 * take rings given to an h/w client that doesn't need them.
	 * For now, we just see if there is any one client that can donate
	 * all the required rings.
	 */
	if (defgrp->mrg_cur_count < (modify + 1)) {
		for (i = start; i < end; i++) {
			if (rx_group) {
				tgrp = &mip->mi_rx_groups[i];
				if (tgrp == group || tgrp->mrg_state <
				    MAC_GROUP_STATE_RESERVED) {
					continue;
				}
				mcip = MAC_GROUP_ONLY_CLIENT(tgrp);
				if (mcip == NULL)
					mcip = mac_get_grp_primary(tgrp);
				ASSERT(mcip != NULL);
				mrp = MCIP_RESOURCE_PROPS(mcip);
				if ((mrp->mrp_mask & MRP_RX_RINGS) != 0)
					continue;
				if ((tgrp->mrg_cur_count +
				    defgrp->mrg_cur_count) < (modify + 1)) {
					continue;
				}
				if (mac_rx_switch_group(mcip, tgrp,
				    defgrp) != 0) {
					return (ENOSPC);
				}
			} else {
				tgrp = &mip->mi_tx_groups[i];
				if (tgrp == group || tgrp->mrg_state <
				    MAC_GROUP_STATE_RESERVED) {
					continue;
				}
				mcip = MAC_GROUP_ONLY_CLIENT(tgrp);
				if (mcip == NULL)
					mcip = mac_get_grp_primary(tgrp);
				mrp = MCIP_RESOURCE_PROPS(mcip);
				if ((mrp->mrp_mask & MRP_TX_RINGS) != 0)
					continue;
				if ((tgrp->mrg_cur_count +
				    defgrp->mrg_cur_count) < (modify + 1)) {
					continue;
				}
				/* OK, we can switch this to s/w */
				mac_tx_client_quiesce(
				    (mac_client_handle_t)mcip);
				mac_tx_switch_group(mcip, tgrp, defgrp);
				mac_tx_client_restart(
				    (mac_client_handle_t)mcip);
			}
		}
		if (defgrp->mrg_cur_count < (modify + 1))
			return (ENOSPC);
	}
	if ((rv = i_mac_group_allocate_rings(mip, group->mrg_type, defgrp,
	    group, mcip->mci_share, modify)) != 0) {
		return (rv);
	}
	return (0);
}
/*
 * Given the poolname in mac_resource_props, find the cpupart
 * that is associated with this pool. The cpupart will be used
 * later for finding the cpus to be bound to the networking threads.
 *
 * use_default is set B_TRUE if pools are enabled and pool_default
 * is returned. This avoids a 2nd lookup to set the poolname
 * for pool-effective.
 *
 * returns:
 *
 *    NULL - pools are disabled or the 'cpus' property is set.
 *    cpupart of pool_default - pools are enabled and the pool
 *             is not available or poolname is blank.
 *    cpupart of named pool - pools are enabled and the pool
 *             is available.
 */
cpupart_t *
mac_pset_find(mac_resource_props_t *mrp, boolean_t *use_default)
{
	pool_t		*pool;
	cpupart_t	*cpupart;

	*use_default = B_FALSE;

	/* CPUs property is set */
	if (mrp->mrp_mask & MRP_CPUS)
		return (NULL);

	ASSERT(pool_lock_held());

	/* Pools are disabled, no pset */
	if (pool_state == POOL_DISABLED)
		return (NULL);

	/* Pools property is set */
	if (mrp->mrp_mask & MRP_POOL) {
		if ((pool = pool_lookup_pool_by_name(mrp->mrp_pool)) == NULL) {
			/* Pool not found */
			DTRACE_PROBE1(mac_pset_find_no_pool, char *,
			    mrp->mrp_pool);
			*use_default = B_TRUE;
			pool = pool_default;
		}
	/* Pools property is not set */
	} else {
		*use_default = B_TRUE;
		pool = pool_default;
	}

	/* Find the CPU pset that corresponds to the pool */
	mutex_enter(&cpu_lock);
	if ((cpupart = cpupart_find(pool->pool_pset->pset_id)) == NULL) {
		DTRACE_PROBE1(mac_find_pset_no_pset, psetid_t,
		    pool->pool_pset->pset_id);
	}
	mutex_exit(&cpu_lock);

	return (cpupart);
}
void
mac_set_pool_effective(boolean_t use_default, cpupart_t *cpupart,
    mac_resource_props_t *mrp, mac_resource_props_t *emrp)
{
	ASSERT(pool_lock_held());

	if (cpupart != NULL) {
		emrp->mrp_mask |= MRP_POOL;
		if (use_default) {
			(void) strcpy(emrp->mrp_pool,
			    "pool_default");
		} else {
			ASSERT(strlen(mrp->mrp_pool) != 0);
			(void) strcpy(emrp->mrp_pool,
			    mrp->mrp_pool);
		}
	} else {
		emrp->mrp_mask &= ~MRP_POOL;
		bzero(emrp->mrp_pool, MAXPATHLEN);
	}
}
struct mac_pool_arg {
	char		mpa_poolname[MAXPATHLEN];
	pool_event_t	mpa_what;
};
static uint_t
mac_pool_link_update(mod_hash_key_t key, mod_hash_val_t *val, void *arg)
{
	struct mac_pool_arg	*mpa = arg;
	mac_impl_t		*mip = (mac_impl_t *)val;
	mac_client_impl_t	*mcip;
	mac_resource_props_t	*mrp, *emrp;
	boolean_t		pool_update = B_FALSE;
	boolean_t		pool_clear = B_FALSE;
	boolean_t		use_default = B_FALSE;
	cpupart_t		*cpupart = NULL;

	mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP);
	i_mac_perim_enter(mip);
	for (mcip = mip->mi_clients_list; mcip != NULL;
	    mcip = mcip->mci_client_next) {
		pool_update = B_FALSE;
		pool_clear = B_FALSE;
		use_default = B_FALSE;
		mac_client_get_resources((mac_client_handle_t)mcip, mrp);
		emrp = MCIP_EFFECTIVE_PROPS(mcip);

		/*
		 * When pools are enabled
		 */
		if ((mpa->mpa_what == POOL_E_ENABLE) &&
		    ((mrp->mrp_mask & MRP_CPUS) == 0)) {
			mrp->mrp_mask |= MRP_POOL;
			pool_update = B_TRUE;
		}

		/*
		 * When pools are disabled
		 */
		if ((mpa->mpa_what == POOL_E_DISABLE) &&
		    ((mrp->mrp_mask & MRP_CPUS) == 0)) {
			mrp->mrp_mask |= MRP_POOL;
			pool_clear = B_TRUE;
		}

		/*
		 * Look for links with the pool property set and the poolname
		 * matching the one which is changing.
		 */
		if (strcmp(mrp->mrp_pool, mpa->mpa_poolname) == 0) {
			/*
			 * The pool associated with the link has changed.
			 */
			if (mpa->mpa_what == POOL_E_CHANGE) {
				mrp->mrp_mask |= MRP_POOL;
				pool_update = B_TRUE;
			}
		}

		/*
		 * This link is associated with pool_default and
		 * pool_default has changed.
		 */
		if ((mpa->mpa_what == POOL_E_CHANGE) &&
		    (strcmp(emrp->mrp_pool, "pool_default") == 0) &&
		    (strcmp(mpa->mpa_poolname, "pool_default") == 0)) {
			mrp->mrp_mask |= MRP_POOL;
			pool_update = B_TRUE;
		}

		/*
		 * Get new list of cpus for the pool, bind network
		 * threads to new list of cpus and update resources.
		 */
		if (pool_update) {
			if (MCIP_DATAPATH_SETUP(mcip)) {
				pool_lock();
				cpupart = mac_pset_find(mrp, &use_default);
				mac_fanout_setup(mcip, mcip->mci_flent, mrp,
				    mac_rx_deliver, mcip, NULL, cpupart);
				mac_set_pool_effective(use_default, cpupart,
				    mrp, emrp);
				pool_unlock();
			}
			mac_update_resources(mrp, MCIP_RESOURCE_PROPS(mcip),
			    B_FALSE);
		}

		/*
		 * Clear the effective pool and bind network threads
		 * to any available CPU.
		 */
		if (pool_clear) {
			if (MCIP_DATAPATH_SETUP(mcip)) {
				emrp->mrp_mask &= ~MRP_POOL;
				bzero(emrp->mrp_pool, MAXPATHLEN);
				mac_fanout_setup(mcip, mcip->mci_flent, mrp,
				    mac_rx_deliver, mcip, NULL, NULL);
			}
			mac_update_resources(mrp, MCIP_RESOURCE_PROPS(mcip),
			    B_FALSE);
		}
	}
	i_mac_perim_exit(mip);
	kmem_free(mrp, sizeof (*mrp));
	return (MH_WALK_CONTINUE);
}
static void
mac_pool_update(void *arg)
{
	mod_hash_walk(i_mac_impl_hash, mac_pool_link_update, arg);
	kmem_free(arg, sizeof (struct mac_pool_arg));
}
/*
 * Callback function to be executed when a noteworthy pool event
 * takes place.
 */
static void
mac_pool_event_cb(pool_event_t what, poolid_t id, void *arg)
{
	pool_t			*pool;
	char			*poolname = NULL;
	struct mac_pool_arg	*mpa;

	pool_lock();
	mpa = kmem_zalloc(sizeof (struct mac_pool_arg), KM_SLEEP);

	switch (what) {
	case POOL_E_ENABLE:
	case POOL_E_DISABLE:
		break;

	case POOL_E_CHANGE:
		pool = pool_lookup_pool_by_id(id);
		if (pool == NULL) {
			kmem_free(mpa, sizeof (struct mac_pool_arg));
			pool_unlock();
			return;
		}
		pool_get_name(pool, &poolname);
		(void) strlcpy(mpa->mpa_poolname, poolname,
		    sizeof (mpa->mpa_poolname));
		break;

	default:
		kmem_free(mpa, sizeof (struct mac_pool_arg));
		pool_unlock();
		return;
	}
	pool_unlock();

	mpa->mpa_what = what;

	mac_pool_update(mpa);
}
/*
 * Set effective rings property. This could be called from datapath_setup/
 * datapath_teardown or set-linkprop.
 * If the group is reserved we just go ahead and set the effective rings.
 * Additionally, for TX this could mean the default group has lost/gained
 * some rings, so if the default group is reserved, we need to adjust the
 * effective rings for the default group clients. For RX, if we are working
 * with the non-default group, we just need to reset the effective props
 * for the default group clients.
 */
void
mac_set_rings_effective(mac_client_impl_t *mcip)
{
	mac_impl_t		*mip = mcip->mci_mip;
	mac_group_t		*grp;
	mac_group_t		*defgrp;
	flow_entry_t		*flent = mcip->mci_flent;
	mac_resource_props_t	*emrp = MCIP_EFFECTIVE_PROPS(mcip);
	mac_grp_client_t	*mgcp;
	mac_client_impl_t	*gmcip;

	grp = flent->fe_rx_ring_group;
	if (grp != NULL) {
		defgrp = MAC_DEFAULT_RX_GROUP(mip);
		/*
		 * If we have reserved a group, set the effective rings
		 * to the ring count in the group.
		 */
		if (grp->mrg_state == MAC_GROUP_STATE_RESERVED) {
			emrp->mrp_mask |= MRP_RX_RINGS;
			emrp->mrp_nrxrings = grp->mrg_cur_count;
		}

		/*
		 * We go through the clients in the shared group and
		 * reset the effective properties. It is possible this
		 * might have already been done for some client (i.e.
		 * if some client is being moved to a group that is
		 * already shared). The case where the default group is
		 * RESERVED is taken care of above (note in the RX side if
		 * there is a non-default group, the default group is always
		 * SHARED).
		 */
		if (grp != defgrp ||
		    grp->mrg_state == MAC_GROUP_STATE_SHARED) {
			if (grp->mrg_state == MAC_GROUP_STATE_SHARED)
				mgcp = grp->mrg_clients;
			else
				mgcp = defgrp->mrg_clients;
			while (mgcp != NULL) {
				gmcip = mgcp->mgc_client;
				emrp = MCIP_EFFECTIVE_PROPS(gmcip);
				if (emrp->mrp_mask & MRP_RX_RINGS) {
					emrp->mrp_mask &= ~MRP_RX_RINGS;
					emrp->mrp_nrxrings = 0;
				}
				mgcp = mgcp->mgc_next;
			}
		}
	}

	/* Now the TX side */
	grp = flent->fe_tx_ring_group;
	if (grp != NULL) {
		defgrp = MAC_DEFAULT_TX_GROUP(mip);

		if (grp->mrg_state == MAC_GROUP_STATE_RESERVED) {
			emrp->mrp_mask |= MRP_TX_RINGS;
			emrp->mrp_ntxrings = grp->mrg_cur_count;
		} else if (grp->mrg_state == MAC_GROUP_STATE_SHARED) {
			mgcp = grp->mrg_clients;
			while (mgcp != NULL) {
				gmcip = mgcp->mgc_client;
				emrp = MCIP_EFFECTIVE_PROPS(gmcip);
				if (emrp->mrp_mask & MRP_TX_RINGS) {
					emrp->mrp_mask &= ~MRP_TX_RINGS;
					emrp->mrp_ntxrings = 0;
				}
				mgcp = mgcp->mgc_next;
			}
		}

		/*
		 * If the group is not the default group and the default
		 * group is reserved, the ring count in the default group
		 * might have changed, update it.
		 */
		if (grp != defgrp &&
		    defgrp->mrg_state == MAC_GROUP_STATE_RESERVED) {
			gmcip = MAC_GROUP_ONLY_CLIENT(defgrp);
			emrp = MCIP_EFFECTIVE_PROPS(gmcip);
			emrp->mrp_ntxrings = defgrp->mrg_cur_count;
		}
	}
	emrp = MCIP_EFFECTIVE_PROPS(mcip);
}
/*
 * Check if the primary is in the default group. If so, see if we
 * can give it an exclusive group now that another client is
 * being configured. We take the primary out of the default group
 * because the multicast/broadcast packets for all the clients
 * will land in the default ring in the default group, which means
 * any client in the default group, even if it is the only one in
 * the group, will lose exclusive access to the rings, hence
 * performance degradation.
 */
int
mac_check_primary_relocation(mac_client_impl_t *mcip, boolean_t rxhw)
{
	mac_impl_t		*mip = mcip->mci_mip;
	mac_group_t		*defgrp = MAC_DEFAULT_RX_GROUP(mip);
	flow_entry_t		*flent = mcip->mci_flent;
	mac_resource_props_t	*mrp = MCIP_RESOURCE_PROPS(mcip);
	uint8_t			*mac_addr;
	mac_group_t		*ngrp;

	/*
	 * Check if the primary is in the default group, if not
	 * or if it is explicitly configured to be in the default
	 * group OR set the RX rings property, return.
	 */
	if (flent->fe_rx_ring_group != defgrp || mrp->mrp_mask & MRP_RX_RINGS)
		return (0);

	/*
	 * If the new client needs an exclusive group and we
	 * don't have another for the primary, return.
	 */
	if (rxhw && mip->mi_rxhwclnt_avail < 2)
		return (0);

	mac_addr = flent->fe_flow_desc.fd_dst_mac;
	/*
	 * We call this when we are setting up the datapath for
	 * the first non-primary.
	 */
	ASSERT(mip->mi_nactiveclients == 2);

	/*
	 * OK, now we have the primary that needs to be relocated.
	 */
	ngrp = mac_reserve_rx_group(mcip, mac_addr, B_TRUE);
	if (ngrp == NULL)
		return (ENOSPC);

	if (mac_rx_switch_group(mcip, defgrp, ngrp) != 0) {
		mac_stop_group(ngrp