                             SCSI FC Transport
                 =============================================

Kernel Revisions for features:
  bsg support : 2.6.30 (?TBD?)


Introduction
============
This file documents the features and components of the SCSI FC Transport.
It also documents the API between the transport and FC LLDDs.
The FC transport can be found at:
  drivers/scsi/scsi_transport_fc.c
  include/scsi/scsi_transport_fc.h
  include/scsi/scsi_netlink_fc.h
  include/scsi/scsi_bsg_fc.h

This file is found at Documentation/scsi/scsi_fc_transport.txt

FC Remote Ports (rports)
========================================================================


FC Virtual Ports (vports)
========================================================================

Overview:
-------------------------------

New FC standards have defined mechanisms which allow a single physical
port to appear as multiple communication ports. Using the N_Port Id
Virtualization (NPIV) mechanism, a point-to-point connection to a Fabric
can be assigned more than 1 N_Port_ID. Each N_Port_ID appears as a
separate port to other endpoints on the fabric, even though it shares one
physical link to the switch for communication. Each N_Port_ID can have a
unique view of the fabric based on fabric zoning and array lun-masking
(just like a normal non-NPIV adapter). Using the Virtual Fabric (VF)
mechanism, adding a fabric header to each frame allows the port to
interact with the Fabric Port to join multiple fabrics. The port will
obtain an N_Port_ID on each fabric it joins. Each fabric will have its
own unique view of endpoints and configuration parameters. NPIV may be
used together with VF so that the port can obtain multiple N_Port_IDs
on each virtual fabric.

The FC transport now recognizes a new object - a vport. A vport is
an entity that has a world-wide unique World Wide Port Name (wwpn) and
World Wide Node Name (wwnn). The transport also allows the FC4 roles
to be specified for the vport, with FCP_Initiator being the primary role
expected. Once instantiated by one of the above methods, it will have a
distinct N_Port_ID and view of fabric endpoints and storage entities.
The fc_host associated with the physical adapter will export the ability
to create vports. The transport will create the vport object within the
Linux device tree, and instruct the fc_host's driver to instantiate the
virtual port. Typically, the driver will create a new scsi_host instance
on the vport, resulting in a unique <H,C,T,L> namespace for the vport.
Thus, whether an FC port is based on a physical port or on a virtual port,
each will appear as a unique scsi_host with its own target and lun space.

Note: At this time, the transport is written to create only NPIV-based
  vports. However, consideration was given to VF-based vports and it
  should be a minor change to add support if needed. The remaining
  discussion will concentrate on NPIV.

Note: World Wide Name assignment (and uniqueness guarantees) are left
  up to an administrative entity controlling the vport. For example,
  if vports are to be associated with virtual machines, a Xen mgmt
  utility would be responsible for creating wwpn/wwnn's for the vport,
  using its own naming authority and OUI. (Note: it already does this
  for virtual MAC addresses).


Device Trees and Vport Objects:
-------------------------------

Today, the device tree typically contains the scsi_host object,
with rports and scsi target objects underneath it. Currently the FC
transport creates the vport object and places it under the scsi_host
object corresponding to the physical adapter. The LLDD will allocate
a new scsi_host for the vport and link its object under the vport.
The remainder of the tree under the vport's scsi_host is the same
as the non-NPIV case. The transport is currently written to easily
allow the parent of the vport to be something other than the scsi_host.
This could be used in the future to link the object onto a vm-specific
device tree. If the vport's parent is not the physical port's scsi_host,
a symbolic link to the vport object will be placed in the physical
port's scsi_host.

Here's what to expect in the device tree:
  The typical Physical Port's Scsi_Host:
    /sys/devices/.../host17/
  and it has the typical descendant tree:
    /sys/devices/.../host17/rport-17:0-0/target17:0:0/17:0:0:0:
  and then the vport is created on the Physical Port:
    /sys/devices/.../host17/vport-17:0-0
  and the vport's Scsi_Host is then created:
    /sys/devices/.../host17/vport-17:0-0/host18
  and then the rest of the tree progresses, such as:
    /sys/devices/.../host17/vport-17:0-0/host18/rport-18:0-0/target18:0:0/18:0:0:0:

Here's what to expect in the sysfs tree:
  scsi_hosts:
    /sys/class/scsi_host/host17                 physical port's scsi_host
    /sys/class/scsi_host/host18                 vport's scsi_host
  fc_hosts:
    /sys/class/fc_host/host17                   physical port's fc_host
    /sys/class/fc_host/host18                   vport's fc_host
  fc_vports:
    /sys/class/fc_vports/vport-17:0-0           the vport's fc_vport
  fc_remote_ports:
    /sys/class/fc_remote_ports/rport-17:0-0     rport on the physical port
    /sys/class/fc_remote_ports/rport-18:0-0     rport on the vport


Vport Attributes:
-------------------------------

The new fc_vport class object has the following attributes:

    node_name:         Read_Only
        The WWNN of the vport

    port_name:         Read_Only
        The WWPN of the vport

    roles:             Read_Only
        Indicates the FC4 roles enabled on the vport.

    symbolic_name:     Read_Write
        A string, appended to the driver's symbolic port name string, which
        is registered with the switch to identify the vport. For example,
        a hypervisor could set this string to "Xen Domain 2 VM 5 Vport 2",
        and this set of identifiers can be seen on switch management screens
        to identify the port.

    vport_delete:      Write_Only
        When written with a "1", will tear down the vport.

    vport_disable:     Write_Only
        When written with a "1", will transition the vport to a disabled
        state. The vport will still be instantiated with the Linux kernel,
        but it will not be active on the FC link.
        When written with a "0", will enable the vport.

    vport_last_state:  Read_Only
        Indicates the previous state of the vport. See the section below on
        Vport States.

    vport_state:       Read_Only
        Indicates the state of the vport. See the section below on
        Vport States.

    vport_type:        Read_Only
        Reflects the FC mechanism used to create the virtual port.
        Only NPIV is supported currently.

For the fc_host class object, the following attributes are added for vports:

    max_npiv_vports:   Read_Only
        Indicates the maximum number of NPIV-based vports that the
        driver/adapter can support on the fc_host.

    npiv_vports_inuse: Read_Only
        Indicates how many NPIV-based vports have been instantiated on the
        fc_host.

    vport_create:      Write_Only
        A "simple" create interface to instantiate a vport on an fc_host.
        A "<WWPN>:<WWNN>" string is written to the attribute. The transport
        then instantiates the vport object and calls the LLDD to create the
        vport with the role of FCP_Initiator. Each WWN is specified as 16
        hex characters and may *not* contain any prefixes (e.g. 0x, x, etc).

    vport_delete:      Write_Only
        A "simple" delete interface to teardown a vport. A "<WWPN>:<WWNN>"
        string is written to the attribute. The transport will locate the
        vport on the fc_host with the same WWNs and tear it down. Each WWN
        is specified as 16 hex characters and may *not* contain any prefixes
        (e.g. 0x, x, etc).
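
For example, to create a vport with a hypothetical, administrator-assigned
WWPN of 2001000000000001 and WWNN of 2000000000000001, the string
"2001000000000001:2000000000000001" would be written to the fc_host's
vport_create attribute; writing the same string to vport_delete would later
tear that vport down.
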
Vport States:
-------------------------------

Vport instantiation consists of two parts:
  - Creation with the kernel and LLDD. This means all transport and
    driver data structures are built up, and device objects created.
    This is equivalent to a driver "attach" on an adapter, which is
    independent of the adapter's link state.
  - Instantiation of the vport on the FC link via ELS traffic, etc.
    This is equivalent to a "link up" and successful link initialization.
Further information can be found in the interfaces section below for
Vport Creation.

Once a vport has been instantiated with the kernel/LLDD, a vport state
can be reported via the sysfs attribute. The following states exist:

    FC_VPORT_UNKNOWN            - Unknown
        A temporary state, typically set only while the vport is being
        instantiated with the kernel and LLDD.

    FC_VPORT_ACTIVE             - Active
        The vport has been successfully created on the FC link.
        It is fully functional.

    FC_VPORT_DISABLED           - Disabled
        The vport is instantiated, but "disabled". The vport is not
        instantiated on the FC link. This is equivalent to a physical
        port with the link "down".

    FC_VPORT_LINKDOWN           - Linkdown
        The vport is not operational as the physical link is not operational.

    FC_VPORT_INITIALIZING       - Initializing
        The vport is in the process of instantiating on the FC link.
        The LLDD will set this state just prior to starting the ELS traffic
        to create the vport. This state will persist until the vport is
        successfully created (state becomes FC_VPORT_ACTIVE) or it fails
        (state is one of the values below). As this state is transitory,
        it will not be preserved in the "vport_last_state".

    FC_VPORT_NO_FABRIC_SUPP     - No Fabric Support
        The vport is not operational. One of the following conditions was
        encountered:
         - The FC topology is not Point-to-Point
         - The FC port is not connected to an F_Port
         - The F_Port has indicated that NPIV is not supported.

    FC_VPORT_NO_FABRIC_RSCS     - No Fabric Resources
        The vport is not operational. The Fabric failed FDISC with a status
        indicating that it does not have sufficient resources to complete
        the operation.

    FC_VPORT_FABRIC_LOGOUT      - Fabric Logout
        The vport is not operational. The Fabric has LOGO'd the N_Port_ID
        associated with the vport.

    FC_VPORT_FABRIC_REJ_WWN     - Fabric Rejected WWN
        The vport is not operational. The Fabric failed FDISC with a status
        indicating that the WWNs are not valid.

    FC_VPORT_FAILED             - VPort Failed
        The vport is not operational. This is a catchall for all other
        error conditions.

The following state table indicates the different state transitions:

    State              Event                            New State
    --------------------------------------------------------------------
     n/a                Initialization                  Unknown
    Unknown:            Link Down                       Linkdown
                        Link Up & Loop                  No Fabric Support
                        Link Up & no Fabric             No Fabric Support
                        Link Up & FLOGI response        No Fabric Support
                          indicates no NPIV support
                        Link Up & FDISC being sent      Initializing
                        Disable request                 Disabled
    Linkdown:           Link Up                         Unknown
    Initializing:       FDISC ACC                       Active
                        FDISC LS_RJT w/ no resources    No Fabric Resources
                        FDISC LS_RJT w/ invalid         Fabric Rejected WWN
                          pname or invalid nport_id
                        FDISC LS_RJT failed for         Vport Failed
                          other reasons
                        Disable request                 Disabled
    Disabled:           Enable request                  Unknown
    Active:             LOGO received from fabric       Fabric Logout
                        Disable request                 Disabled
    Fabric Logout:      Link still up                   Unknown

        The following 4 error states all have the same transitions:
    No Fabric Support:
    No Fabric Resources:
    Fabric Rejected WWN:
    Vport Failed:
                        Disable request                 Disabled
                        Link goes down                  Linkdown


Transport <-> LLDD Interfaces:
-------------------------------

Vport support by LLDD:

The LLDD indicates support for vports by supplying a vport_create()
function in the transport template. The presence of this function will
cause the creation of the new attributes on the fc_host. As part of
the physical port completing its initialization relative to the
transport, it should set the max_npiv_vports attribute to indicate the
maximum number of vports the driver and/or adapter supports.
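
As an illustration, a driver might advertise vport support roughly as
follows. The lldd_* names are hypothetical; the .vport_create,
.vport_disable, and .vport_delete template members and the
fc_host_max_npiv_vports() accessor are from include/scsi/scsi_transport_fc.h:

    #include <scsi/scsi_host.h>
    #include <scsi/scsi_transport_fc.h>

    int lldd_vport_create(struct fc_vport *vport, bool disable);
    int lldd_vport_disable(struct fc_vport *vport, bool disable);
    int lldd_vport_delete(struct fc_vport *vport);

    /* template used for the physical port's fc_host */
    static struct fc_function_template lldd_physical_transport_functions = {
        /* ... other transport callbacks and show_xxx flags ... */
        .vport_create  = lldd_vport_create,   /* enables the vport attributes */
        .vport_disable = lldd_vport_disable,
        .vport_delete  = lldd_vport_delete,
    };

    /* called once the physical port has finished initializing */
    static void lldd_post_init(struct Scsi_Host *shost, u16 hw_max_vports)
    {
        fc_host_max_npiv_vports(shost) = hw_max_vports;
    }
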
Vport Creation:

The LLDD vport_create() syntax is:

    int vport_create(struct fc_vport *vport, bool disable)

    where:
        vport:    Is the newly allocated vport object
        disable:  If "true", the vport is to be created in a disabled state.
                  If "false", the vport is to be enabled upon creation.

When a request is made to create a new vport (via sgio/netlink, or the
vport_create fc_host attribute), the transport will validate that the LLDD
can support another vport (e.g. max_npiv_vports > npiv_vports_inuse).
If not, the create request will be failed. If space remains, the transport
will increment the vport count, create the vport object, and then call the
LLDD's vport_create() function with the newly allocated vport object.

As mentioned above, vport creation is divided into two parts:
  - Creation with the kernel and LLDD. This means all transport and
    driver data structures are built up, and device objects created.
    This is equivalent to a driver "attach" on an adapter, which is
    independent of the adapter's link state.
  - Instantiation of the vport on the FC link via ELS traffic, etc.
    This is equivalent to a "link up" and successful link initialization.

The LLDD's vport_create() function will not synchronously wait for both
parts to be fully completed before returning. It must validate that the
infrastructure exists to support NPIV, and complete the first part of
vport creation (data structure build up) before returning. We do not
hinge vport_create() on the link-side operation mainly because:
  - The link may be down. It is not a failure if it is. It simply
    means the vport is in an inoperable state until the link comes up.
    This is consistent with the link bouncing post vport creation.
  - The vport may be created in a disabled state.
  - This is consistent with a model where the vport equates to an
    FC adapter. The vport_create is synonymous with driver attachment
    to the adapter, which is independent of link state.

Note: special error codes have been defined to delineate infrastructure
failure cases for quicker resolution.

The expected behavior for the LLDD's vport_create() function is as follows
(a skeleton sketch is shown after this list):
  - Validate Infrastructure:
    - If the driver or adapter cannot support another vport, whether
      due to improper firmware, an overstated max_npiv_vports value, or
      a lack of some other resource - return VPCERR_UNSUPPORTED.
    - If the driver validates the WWNs against those already active on
      the adapter and detects an overlap - return VPCERR_BAD_WWN.
    - If the driver detects the topology is loop, non-fabric, or the
      FLOGI did not support NPIV - return VPCERR_NO_FABRIC_SUPP.
  - Allocate data structures. If errors are encountered, such as out
    of memory conditions, return the respective negative Exxx error code.
  - If the role is FCP Initiator, the LLDD is to:
    - Call scsi_host_alloc() to allocate a scsi_host for the vport.
    - Call scsi_add_host(new_shost, &vport->dev) to start the scsi_host
      and bind it as a child of the vport device.
    - Initialize the fc_host attribute values.
  - Kick off further vport state transitions based on the disable flag and
    link state - and return success (zero).
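
Putting the steps above together, a minimal vport_create() skeleton might
look like the sketch below. The lldd_* helpers, lldd_vport_sht, and
struct lldd_vport are hypothetical driver-private pieces; the VPCERR_*
codes, vport_to_shost(), scsi_host_alloc()/scsi_add_host(), and the
fc_host_* accessors come from the midlayer and FC transport headers:

    int lldd_vport_create(struct fc_vport *vport, bool disable)
    {
        struct Scsi_Host *phys_shost = vport_to_shost(vport);
        struct Scsi_Host *new_shost;

        /* 1) validate infrastructure */
        if (!lldd_can_support_another_vport(phys_shost))
            return VPCERR_UNSUPPORTED;
        if (lldd_wwn_in_use(phys_shost, vport->port_name, vport->node_name))
            return VPCERR_BAD_WWN;
        if (!lldd_fabric_supports_npiv(phys_shost))
            return VPCERR_NO_FABRIC_SUPP;

        /* 2) build up driver and midlayer data structures */
        new_shost = scsi_host_alloc(&lldd_vport_sht, sizeof(struct lldd_vport));
        if (!new_shost)
            return -ENOMEM;
        if (scsi_add_host(new_shost, &vport->dev)) {
            scsi_host_put(new_shost);
            return -EIO;
        }

        /* 3) initialize the new scsi_host's fc_host attributes */
        fc_host_node_name(new_shost) = vport->node_name;
        fc_host_port_name(new_shost) = vport->port_name;

        /* 4) kick off link-side instantiation unless created disabled */
        if (!disable)
            lldd_start_vport_fdisc(vport);  /* moves to FC_VPORT_INITIALIZING */

        return 0;
    }
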
LLDD Implementers Notes:
  - It is suggested that there be different fc_function_templates for
    the physical port and the virtual port (see the example following
    these notes). The physical port's template would have the
    vport_create, vport_delete, and vport_disable functions, while the
    vports would not.
  - It is suggested that there be different scsi_host_templates
    for the physical port and virtual port. Likely, there are driver
    attributes, embedded into the scsi_host_template, that are applicable
    for the physical port only (link speed, topology setting, etc). This
    ensures that the attributes are applicable to the respective scsi_host.
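
For illustration only, the vport-side transport template would simply omit
the vport management entry points (names hypothetical; compare with the
physical-port template sketched earlier):

    /* template used for fc_hosts created on vports */
    static struct fc_function_template lldd_vport_transport_functions = {
        /* same show_xxx flags and I/O-related callbacks as the physical
         * port's template, but no .vport_create, .vport_delete, or
         * .vport_disable entries */
    };
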
Vport Disable/Enable:

The LLDD vport_disable() syntax is:

    int vport_disable(struct fc_vport *vport, bool disable)

    where:
        vport:    Is the vport to be enabled or disabled
        disable:  If "true", the vport is to be disabled.
                  If "false", the vport is to be enabled.

When a request is made to change the disabled state on a vport, the
transport will validate the request against the existing vport state.
If the request is to disable and the vport is already disabled, the
request will fail. Similarly, if the request is to enable, and the
vport is not in a disabled state, the request will fail. If the request
is valid for the vport state, the transport will call the LLDD to
change the vport's state.

Within the LLDD, if a vport is disabled, it remains instantiated with
the kernel and LLDD, but it is not active or visible on the FC link in
any way (see Vport Creation and the two-part instantiation discussion).
The vport will remain in this state until it is deleted or re-enabled.
When enabling a vport, the LLDD reinstantiates the vport on the FC
link - essentially restarting the LLDD state machine (see Vport States
above).
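
A rough sketch of the handler's shape, assuming hypothetical lldd_* helpers
for the link-side work; fc_vport_set_state() and the FC_VPORT_* states are
provided by the transport:

    int lldd_vport_disable(struct fc_vport *vport, bool disable)
    {
        if (disable) {
            /* quiesce I/O and log the vport's N_Port_ID out of the
             * fabric, but keep all kernel/driver structures allocated */
            lldd_vport_logout(vport);
            fc_vport_set_state(vport, FC_VPORT_DISABLED);
        } else {
            /* restart link-side instantiation via FDISC */
            fc_vport_set_state(vport, FC_VPORT_INITIALIZING);
            lldd_start_vport_fdisc(vport);
        }
        return 0;
    }
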
Vport Deletion:

The LLDD vport_delete() syntax is:

    int vport_delete(struct fc_vport *vport)

    where:
        vport:    Is the vport to delete

When a request is made to delete a vport (via sgio/netlink, or via the
fc_host or fc_vport vport_delete attributes), the transport will call
the LLDD to terminate the vport on the FC link, and teardown all other
datastructures and references. If the LLDD completes successfully,
the transport will teardown the vport objects and complete the vport
removal. If the LLDD delete request fails, the vport object will remain,
but will be in an indeterminate state.

Within the LLDD, the normal code paths for a scsi_host teardown should
be followed. E.g., if the vport has a FCP Initiator role, the LLDD
will call fc_remove_host() for the vport's scsi_host, followed by
scsi_remove_host() and scsi_host_put() for the vport's scsi_host.
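
A sketch of that teardown order, assuming the driver can recover the vport's
Scsi_Host from its private data (the lldd_* names are hypothetical):

    int lldd_vport_delete(struct fc_vport *vport)
    {
        struct Scsi_Host *vport_shost = lldd_vport_to_shost(vport);

        /* terminate the vport on the FC link (LOGO the N_Port_ID, etc.) */
        lldd_vport_logout(vport);

        /* normal scsi_host teardown order for the vport's scsi_host */
        fc_remove_host(vport_shost);
        scsi_remove_host(vport_shost);
        scsi_host_put(vport_shost);

        /* free any remaining driver-private vport resources */
        lldd_free_vport_resources(vport);
        return 0;
    }
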
fc_host port_type attribute:
  There is a new fc_host port_type value - FC_PORTTYPE_NPIV. This value
  must be set on all vport-based fc_hosts. Normally, on a physical port,
  the port_type attribute would be set to NPORT, NLPORT, etc based on the
  topology type and existence of the fabric. As this is not applicable to
  a vport, it makes more sense to report the FC mechanism used to create
  the vport.
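
  For example, while initializing the vport's fc_host attributes the LLDD
  would set (vport_shost being a hypothetical local name for the vport's
  Scsi_Host):

      fc_host_port_type(vport_shost) = FC_PORTTYPE_NPIV;
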
Driver unload:
  FC drivers are required to call fc_remove_host() prior to calling
  scsi_remove_host(). This allows the fc_host to tear down all remote
  ports prior to the scsi_host being torn down. The fc_remove_host()
  call was updated to remove all vports for the fc_host as well.
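
  In the physical port's unload/remove path this is simply (shost being the
  physical port's Scsi_Host):

      fc_remove_host(shost);      /* also removes any vports and rports */
      scsi_remove_host(shost);

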
Transport supplied functions
----------------------------

The following functions are supplied by the FC-transport for use by LLDDs.

   fc_vport_create      - create a vport
   fc_vport_terminate   - detach and remove a vport

/**
 * fc_vport_create - Admin App or LLDD requests creation of a vport
 * @shost:      scsi host the virtual port is connected to.
 * @ids:        The world wide names, FC4 port roles, etc for the vport.
 *
 * Notes:
 *      This routine assumes no locks are held on entry.
 */
struct fc_vport *
fc_vport_create(struct Scsi_Host *shost, struct fc_vport_identifiers *ids)

/**
 * fc_vport_terminate - Admin App or LLDD requests termination of a vport
 * @vport:      fc_vport to be terminated
 *
 * Calls the LLDD vport_delete() function, then deallocates and removes
 * the vport from the shost and object tree.
 *
 * Notes:
 *      This routine assumes no locks are held on entry.
 */
int
fc_vport_terminate(struct fc_vport *vport)
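
As a usage illustration only, following the fc_vport_create() prototype
documented above; the struct fc_vport_identifiers fields are from
include/scsi/scsi_transport_fc.h, while the WWN values and
lldd_new_vm_vport() are hypothetical:

    struct fc_vport *lldd_new_vm_vport(struct Scsi_Host *phys_shost)
    {
        struct fc_vport_identifiers ids = {
            .port_name  = 0x2001000000000001ULL,  /* hypothetical WWPN */
            .node_name  = 0x2000000000000001ULL,  /* hypothetical WWNN */
            .roles      = FC_PORT_ROLE_FCP_INITIATOR,
            .vport_type = FC_PORTTYPE_NPIV,
            .disable    = false,
        };

        return fc_vport_create(phys_shost, &ids);
    }

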
FC BSG support (CT & ELS passthru, and more)
========================================================================

Credits
=======
The following people have contributed to this document:

James Smart
james.smart@emulex.com