WO2017023271A1 - Disk array having controller to allocate ports - Google Patents


Info

Publication number
WO2017023271A1
WO2017023271A1 (application PCT/US2015/043245)
Authority
WO
WIPO (PCT)
Prior art keywords
ports
initiator
initiators
paths
path
Prior art date
Application number
PCT/US2015/043245
Other languages
French (fr)
Inventor
Krishna PUTTAGUNTA
Rupin T. Mohan
Vivek Agarwal
Navaruparajah NADARAJAH
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/043245 priority Critical patent/WO2017023271A1/en
Publication of WO2017023271A1 publication Critical patent/WO2017023271A1/en


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 — Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 — Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 — Server selection for load balancing
    • H04L67/1017 — Server selection for load balancing based on a round robin mechanism

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A system and method for a disk array in a storage network. The storage network may include multiple host servers comprising initiators, wherein each host server comprises at least one initiator. The storage network may include a fabric coupling the multiple host servers to the disk array. The disk array has a storage array controller to configure zones of the fabric and to allocate ports of the storage array controller to the initiators.

Description

DISK ARRAY HAVING CONTROLLER TO ALLOCATE PORTS
BACKGROUND
[0001] Storage systems, such as storage networks, storage area networks, and other storage systems, have controllers and storage disks for storing data. Client or host devices may request to access the data in storage. A storage network, such as a storage area network (SAN), may be a dedicated network that provides access to consolidated data storage. A SAN enables a host client device to access data volumes stored in a storage array or disk array. As technologies advance and the demand for efficient and rapid access to stored data expands, there is a need for continuous improvement in the provision of data via storage networks.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Certain exemplary embodiments are described in the following detailed description and in reference to the drawings, in which:
[0003] FIG. 1 is a block diagram of a disk array having a storage array controller in accordance with examples;
[0004] FIG. 2 is a block diagram of a storage network having a disk array in accordance with examples;
[0005] FIG. 3 is a block diagram of a storage network with dual fabrics and groups of hosts in accordance with examples;
[0006] FIG. 4 is a block diagram of a method of arranging a storage network in accordance with examples;
[0007] FIG. 5 is a block flow diagram of a method of allocating ports of a storage array controller to initiators in a storage network in accordance with examples;
[0008] FIG. 6 is a block flow diagram of a method of allocating ports of a storage array controller to initiators in a storage network in accordance with examples;
[0009] FIG. 7 is a block flow diagram of a method of allocating ports of a storage array controller to initiators in a storage network in accordance with examples; and
[0010] FIG. 8 is a block diagram showing a tangible, non-transitory, computer-readable medium that stores code configured to direct a processor to arrange a network in accordance with examples.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
[0011] Examples of the present techniques accommodate automatic path load balancing in target-driven peer zoning in a Storage Area Network (SAN) or similar network, with or without multipath software. A SAN may play a beneficial role in a datacenter by providing access and connectivity between storage arrays and servers. The network or fabric of the SAN may include one or more switches, where the number of switches may depend on the size of the SAN. Multiple switches in a SAN fabric, and SAN dual fabrics for redundancy and high availability, may be typical for an enterprise class datacenter.
[0012] Fibre Channel (FC) is a storage protocol for SAN deployments. Fibre channel may be applicable because of its enterprise class performance, availability, and security. As part of security, FC zoning is a technique that generally restricts access to a select set of devices in a pre-configured group known as a zone. The zoning may prevent unauthorized access by those outside of the zone membership. These zones may be configured manually by SAN administrators prior to regular communication between end devices.
However, configuring these zones can be a complex task and typically requires knowledge of the available devices in a fabric. Configuring the zones also typically requires knowledge of the paths via which these accessible devices can be reached, including when multiple paths are available to access the devices.
[0013] An approach of handling multiple paths may be to include most or all paths available in the zones and configure multipath software to handle load balancing. Yet, this approach may require multipath software installation, and hence makes the SAN dependent on the multipath software. In addition, including most or all available paths in each zone for the SAN configured with multipath software may not be efficient from a switch resource utilization perspective, e.g., with regard to Content Addressable Memory (CAM) or CAM entries.
[0014] In response, examples of the present techniques load balance available paths efficiently, and uniquely automate this process, with or without the presence of multipath software. There may be at least two aspects to load balancing. One is to distribute paths within a host. A second is to distribute paths among most or all available hosts in that fabric of the SAN. In contrast, multipath software generally balances load within a host among most or all available preconfigured paths, and does not distribute available target ports across the hosts in a given fabric.
[0015] Examples of the present techniques utilize target-based peer zoning, or similar types of zoning, which distribute available target ports across the hosts in a fabric. Certain peer zoning is defined in FC-GS-x standards. Also, target-based peer zoning may be employed. It should be understood that the present techniques are not limited to a particular standard.
[0016] In target-based peer zoning, a preference may be to have one target port in a zone. Although peer zoning, in general, permits multiple principal members, a practice employs one target port and multiple initiators in a zone, for security reasons as well as for effective utilization of switch hardware resources. Thus, with target-based peer zoning where initiators are distributed among available target ports, the distribution may not be well-balanced if the user manually selects target ports for a given host. Hence, as indicated, examples of the present techniques implement auto path load balancing in target-driven peer zoning in a SAN. Therefore, for multiple paths available to reach logical units (LUNs) within a storage array, zones may be configured to substantially equally distribute the paths among multiple zones so that most or all paths are effectively utilized. Such may avoid or reduce inefficient load distribution in a given SAN. In summary, examples herein address inefficient load distribution by way of automatically distributing servers among most or all available paths so that load distribution is substantially uniform and significantly error free. Further, such may work well in conjunction with multipath software when available.
[0017] As indicated above, zones may be configured using switch
management tools and by manually mapping initiators to targets. Typically, for switch management tools, the manual mapping of initiators to targets may be accomplished considering the number of target paths available and which hosts (servers) need to communicate with which storage arrays, and then distributing (e.g., equally) the available paths among groups of servers. This is a manual process and prone to human error or oversight, which may result in under-utilization of available paths and, in turn, under-utilization of SAN bandwidth itself. As indicated, another approach is to include most or all paths available in every zone and have multipath software handle load balancing. Again, the downside may be that multipath software might be mandatory, and also more resources on the switch (CAM entries in the hardware) may be consumed if every zone includes all available paths. As hardware resources may be at a premium in the application-specific integrated circuit (ASIC) of the switch, the inclusion of most or all available paths in every zone, as via multipath software, may not be efficient.
[0018] With examples of the present techniques, zoning and path distribution may be combined and automated while provisioning logical unit numbers (LUNs) to an initiator on the storage array end. A LUN may identify one or more physical or virtual storage devices to an initiator, e.g., Small Computer System Interface (SCSI) initiator, in a host server to facilitate data exchange. In computer storage, a logical unit number, or LUN, is a number used to identify a logical unit, which is a device addressed by the SCSI protocol or SAN protocols. A LUN may most often be used to refer to a logical disk as created on a SAN. The term "LUN" may also refer to the logical disk itself.
[0019] Examples herein employ target-based peer zoning. In a peer zone, there may be one principal member and a set of peer members so that when a peer zone is configured, a switch may allow most or all peer members to communicate with the principal member, but in certain examples, limited or no access is allowed between any two peer members. When this peer zoning is performed from a storage array device, the technique may become target-based peer zoning. In another example, on a storage array, one of the functions may be provisioning of LUNs to hosts or initiator ports. Generally, this is an independent task and performed outside of zoning operation, and transparent to the administrator who configures the zones.
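The peer-zone access rule described above — peer members may communicate with the single principal member (the target port), but not with each other — can be sketched as follows. This is an illustrative model only; the function and member names are hypothetical and are not taken from the patent or from the FC-GS-x standards.

```python
# Illustrative peer-zone model: one principal member (a target port)
# and a set of peer members (initiator ports). Access is permitted
# only between the principal and a peer, never peer-to-peer.

def peer_zone(principal, peers):
    """Represent a peer zone as a simple dict (hypothetical structure)."""
    return {"principal": principal, "peers": set(peers)}

def access_allowed(zone, a, b):
    """Allow communication only between the principal and one peer."""
    members = {a, b}
    return (zone["principal"] in members
            and len(members & zone["peers"]) == 1)

zone = peer_zone("target-port-1", ["init-a", "init-b", "init-c"])
print(access_allowed(zone, "target-port-1", "init-a"))  # peer <-> principal
print(access_allowed(zone, "init-a", "init-b"))         # peer <-> peer denied
```

Under this model, a switch enforcing the zone would forward frames between `init-a` and `target-port-1`, but would block frames between `init-a` and `init-b`.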
[0020] Examples of the present techniques automate the zoning operation while provisioning the LUNs to initiators. Generally, in a storage array, there may be multiple target ports from which LUNs can be accessed via the SAN. The assumption in some examples is that generally any LUN can be accessed from any target port on the storage array, which is true in certain cases.
Examples herein employ instructions or code executable by a processor to identify most or all available paths in a given storage array. Then, the examples may pick one initiator at a time and allocate one of the available paths to the initiator. The procedure may be repeated for most or all initiators, but allocating a different path each time, such as on a round robin basis. However, of course, exceptions and additional rules may be applied. If a host has multiple initiator ports, the examples may ensure the respective initiators get a different path so that when a host employs multipath software, additional load balancing may be enforced using host side software.
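The allocation loop just described — one initiator at a time, round robin over the available paths, preferring a path not already held by another initiator on the same host — can be sketched as below. This is an assumed illustration, not the patent's actual code; the host, initiator, and path names are invented for the example.

```python
from itertools import cycle

# Illustrative sketch of round-robin path allocation: walk the
# initiators one at a time and hand out target paths round-robin,
# skipping a path already assigned to another initiator on the
# same host (when enough paths exist to do so).

def allocate_paths(initiators, paths):
    """initiators: list of (host, initiator_port); paths: target paths."""
    assignment = {}    # initiator_port -> assigned path
    used_by_host = {}  # host -> set of paths already used by that host
    rr = cycle(paths)  # round-robin iterator over available paths
    for host, init in initiators:
        path = next(rr)
        # If this host already uses the path, try the next candidates;
        # after one full cycle, accept a repeat (more initiators than paths).
        for _ in range(len(paths)):
            if path not in used_by_host.get(host, set()):
                break
            path = next(rr)
        assignment[init] = path
        used_by_host.setdefault(host, set()).add(path)
    return assignment

hosts = [("h1", "i1"), ("h1", "i2"), ("h2", "i3"), ("h2", "i4")]
print(allocate_paths(hosts, ["T1", "T2"]))  # each host spans both paths
```

With two hosts of two initiators each and two paths, each host's initiators land on different paths, so host-side multipath software (if present) can balance further across both.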
[0021] FIG. 1 is a disk array 100 having a storage array controller 102. The disk array 100 may be utilized in a storage network, such as a storage area network (SAN). In one example, a storage array controller generally has a processor, memory, and other components and circuitry. The storage array controller 102 has a network arranger 104 which may be instructions or code stored in nonvolatile memory of the storage array controller 102, and executed by the controller 102 to implement various techniques described herein. For instance, the network arranger 104 may configure zones of a fabric of the storage network, allocate ports 106 of the storage array controller 102 to the initiators or host servers in the network, perform LUN provisioning of the disk array, iteratively assign the ports 106 or fabric paths to initiators, and so forth. The storage array controller 102 may have one or more ports 106, and indeed, generally may have multiple ports 106. Further, the disk array 100 may have more than one storage array controller 102. In one example, disk array 100 may have two storage array controllers 102 to accompany, respectively, the two fabrics in a dual fabric network.
[0022] Moreover, the disk array 100 has an array 108 of disks 110. The storage array controller 102 may manage the disks 110. The disks 110 may be disk drives, hard disk drives (HDD), solid state disks (SSD), flash media, and so on. The disks 110 may store data including "user data" which may refer to data that a person or entity might use in the course of business performing a job function or for personal use. Such user data may be business data and reports, Web pages, user files, image files, video files, audio files, software applications, or any other similar type of data that a user may wish to save to or retrieve from storage.
[0023] In sum, the storage array controller 102 may include nonvolatile memory, and code stored in the nonvolatile memory. The code may be executable by the storage array controller 102 to configure the zones of a storage network, and allocate ports of the storage array controller to the initiators during LUN provisioning. The storage array controller 102 may include the network arranger 104, which may be the code and nonvolatile memory, to configure the zones of the fabric and to allocate ports of the storage array controller to the initiators during the LUN provisioning.
[0024] FIG. 2 is a storage network 200 such as a SAN and having a disk array 100. A fabric 202 couples multiple host servers 204 to the disk array 100. The fabric 202 may include switches and may have, for example, a FC topology. The host servers 204 have initiators 206 which may be, for example, ports of the host servers 204. In certain examples, the fabric 202 may couple the initiators 206 to the ports 106 of the storage array controller 102. An initiator may be a host requesting data. A target may be a disk or array having the data, and/or a port of the disk array or storage array controller.
[0025] As indicated, the storage array controller 102 may configure zones through the fabric 202 and allocate the ports 106 of the storage array controller 102 to the initiators 206 during the LUN provisioning of the disk array 100. The fabric 202 includes paths to the ports 106, and allocating the ports 106 includes distributing the paths to the initiators 206. The paths may be from the initiators through the fabric 202 to the ports 106. The configuring of the zones and the allocation of the ports 106 may be performed contemporaneously and/or automatically by the storage array controller 102. The allocation of the ports 106 by the storage array controller 102 may include distributing the paths to the initiators 206, and assigning a path to one initiator 206 at a time. Indeed, to allocate the ports may include to iteratively: (1) select an initiator 206 during the LUN provisioning; (2) identify paths from the initiator 206 through the fabric to the ports 106; and (3) choose and assign a path to the initiator 206.
[0026] A storage area network, such as a SAN, may be configured for automatic path load balancing and multipath software coexistence. The multipath software, if employed, may balance load across two fabrics. In the illustrated example of FIG. 3 discussed below, each host has one connection to each fabric. Target load balancing may balance load automatically across hosts within a fabric. Thus, examples may accommodate auto target-path load balancing and host multipath software co-existence in a dual fabric. FIG. 3 may be an example of four peer zones configured automatically in each fabric, while LUNs are provisioned to individual host servers within a given fabric.
[0027] As discussed below for FIG. 3, there are four sets of hosts configured into four zones in each fabric in this example. On the target side, each set may access their LUNs via one target port. Also, the same set of hosts can access the same LUNs via another target port in the second fabric. Multipath software can handle load balancing between two fabrics, while automatic load balancing of the present techniques has already distributed the load substantially equally among most or all available target ports within each fabric. In one example, these techniques may distribute the available target ports within a single fabric automatically while target based zones are being created. Thus, with this approach, an advantage of distributing load with or without multipath software may be realized and, therefore, improve multipath software when multipath software exists.
[0028] However, when most or all available paths in each zone are included for operation of multipath software, the zone database size may increase significantly on the switch side and may ultimately reach scalability issues unnecessarily. Also, the CAM entries in a switch that define access controls within a zone may be limited, and thus adding multiple targets in a given zone may consume these entries relatively quickly. Hence, scalability limits may be reached. Yet, in certain examples of the present techniques, unnecessary change notifications can be reduced or avoided with, for instance, a single target port within a given zone. Accordingly, examples may work efficiently with or without multipath software, and may also advantageously reduce use of resources on switch hardware while configuring zones.
[0029] FIG. 3 is a storage network 300 with dual fabrics 302 and 304 and four groups 306 of host servers 308. Of course, more or fewer groups 306 of host servers 308 may be employed. Each group 306 has its respective "n" number of host servers 308. Each host server 308 has one or more ports or initiators. The two fabrics 302 and 304 may provide redundancy in the storage network 300, which may be a SAN. In one example, the fabrics 302 and 304 employ a FC protocol. In the illustrated example, the fabrics 302 and 304 may have switches 310 and be configured with zones 312. The zones 312 in the first fabric 302 are labeled Z11, Z12, Z13, and Z14. The four zones 312 in the second fabric 304 are labeled Z21, Z22, Z23, and Z24. Of course, more or fewer than four zones 312 per fabric 302 and 304 may be configured.
[0030] The fabrics 302 and 304 couple the host servers 308 to the disk array 314. In the illustrated example, the disk array 314 has two storage array controllers 316 and 318, each dedicated, respectively, to the two fabrics 302 and 304. Each controller has a network arranger 320 and 322 to configure the zones 312 in their respective fabrics 302 and 304, and to allocate their respective ports 319 and paths through the zones 312 to the host servers 308 as initiators during the LUN provisioning. The network arrangers 320 and 322 may be instructions or code stored in the nonvolatile memory of the respective storage array controller, where the code is executable by the storage array controller to implement the network arranger and perform actions of the network arranger. The two network arrangers 320 and 322 may be the same or similar, work independently or in conjunction, and so forth. Further, the storage array controllers 316 and 318 may manage the array 324 of disks 326 in the disk array 314. As indicated above, the disks 326 may be disk drives, hard disk drives (HDD), solid state disks (SSD), flash media, and the like.
[0031] In operation, the storage array controllers 316 and 318 may configure the multiple initiators from each host group 306 to a single target port 319 on the respective controllers 316 and 318. In the illustration, each line 328 from a host group 306 may represent "n" number of initiators or paths for that host group 306. Each line 330 from the fabrics 302 and 304 to the storage array controllers 316 and 318 may represent targeting a dedicated single port 319 on the respective controller 316 and 318. Thus, the storage array controllers 316 and 318 may configure zones 312 in the respective fabrics 302 and 304 with multiple "n" initiators from a host group 306 to a single target port 319. For example, the multiple "n" initiators of the host group 306 labeled HostGroup-1 may be configured as the zone 312 labeled Z11 to one target port 319 on the first storage array controller 316. Likewise, the multiple "n" initiators of the same host group 306 labeled HostGroup-1 may be configured as the zone 312 labeled Z21 in the second fabric 304 to one target port 319 on the second storage array controller 318. Similarly, the fabrics 302 and 304 may be configured by the storage array controllers 316 and 318 with the remaining zones Z12, Z13, Z14 (in fabric 302) and zones Z22, Z23, Z24 (in fabric 304) with respective initiators from the remaining host groups 306.
[0032] FIG. 4 is a method 400 of arranging a storage network. In examples, the arranging is performed by a storage array controller in a disk array. The storage network may be SAN, and includes the disk array and a fabric coupling multiple host servers to the disk array. The multiple host servers are, or have, initiators. The initiators may be ports, or involve ports, of the host servers. The actions by the storage array controller represented in blocks 404, 406, and/or 408 may be performed contemporaneously and/or automatically by the storage array controller.
[0033] At block 404, the arranging of the network by the storage array controller includes the storage array controller provisioning LUNs of the disk array to the initiators. At block 406, the arranging of the network by the storage array controller includes the storage array controller allocating ports of the storage array controller to the initiators, which may involve distributing paths to the initiators. The ports as allocated may each be the target port for a respective zone. The paths may be through the fabric from the initiators to the storage array controller, wherein each path may include one port of the storage array controller. The distributing of the paths to the initiators may include identifying paths from the initiators to the storage array controller, and assigning a path to one initiator at a time, wherein the path is one of the paths identified.
[0034] At block 408, the arranging of the network by the storage array controller includes configuring zones from the initiators through the fabric to the storage array controller, wherein each zone includes one port of the storage array controller and at least two initiators. In certain examples, the one port in a respective zone may be a target port for that zone and may be associated with multiple initiators or host servers.
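Block 408 — one zone per target port, each with that port as the single principal and its allocated initiators as peers — can be sketched as follows. This is an illustrative reading of the method, not code from the patent; the allocation map is an assumed input, e.g., produced during the LUN provisioning of block 406.

```python
# Illustrative sketch of configuring zones from an allocation map:
# group initiators by their allocated target port and emit one peer
# zone per port, with the port as principal and the initiators as peers.

def build_zones(allocation):
    """allocation: initiator -> target port. Returns port -> zone."""
    zones = {}
    for init, port in allocation.items():
        zones.setdefault(port, {"principal": port, "peers": []})
        zones[port]["peers"].append(init)
    return zones

# Assumed example allocation from a prior provisioning step.
alloc = {"i1": "T1", "i2": "T2", "i3": "T1", "i4": "T2"}
zones = build_zones(alloc)
print(sorted(zones["T1"]["peers"]))  # initiators sharing target port T1
```

Each resulting zone then satisfies the stated shape: one port of the storage array controller as the target (principal) member and at least two initiators as peer members.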
[0035] FIG. 5 is a method 500 by a storage array controller of a disk array. The method is allocating ports of the storage array controller to initiators in a storage network, which involves distributing fabric paths to the initiators. The storage network includes one or more fabrics coupling multiple host servers comprising the initiators to the disk array. The storage network may be SAN and involve FC protocol. In certain examples, the method 500 is analogous to block 406 of method 400.
[0036] The method 500, including distributing paths to the initiators, includes iteratively selecting (block 502) an initiator during the LUN provisioning, identifying (block 504) the paths from the initiator through the fabric to the ports, and choosing and assigning (block 506) a path to the initiator. The procedure may be repeated for most or all initiators, but allocating a different path each time, such as on a round robin basis. However, of course, exceptions and additional rules may be applied. If a host has multiple initiator ports, the examples may ensure the respective initiators get a different path so that when a host employs multipath software, additional load balancing may be enforced using host side software.
[0037] FIG. 6 is a method 600 of allocating ports of a storage array controller of a disk array to initiators, which involves distributing fabric paths to the initiators. The method 600 may be performed, including automatically, by the storage array controller. As with method 500 of FIG. 5, the storage network includes one or more fabrics coupling multiple host servers comprising the initiators to the disk array. In certain examples, the fabric(s) may include FC switches. Moreover, in some examples, the method 600 is analogous to block 406 of method 400.
[0038] The method 600 includes distributing paths to the initiators, which may include iteratively performing the actions in blocks 602-610. At block 602, the method includes the storage array controller picking an initiator during the LUN provisioning of the disk array. At block 604, the method includes identifying paths from the initiator through the fabric to the ports of the storage array controller. At block 606, the method includes selecting a path, e.g., on a round robin basis, in response to the initiator not being provisioned. At block 608, the method includes assigning the path to the initiator in response to the path not being assigned to another initiator on the same host server as the initiator, wherein the path is one of the paths identified. The method of distributing paths to the initiators may also include, as indicated in block 610, iteratively selecting another path, e.g., also on a round robin basis, in response to the selected path being assigned to another initiator from the same host server as the initiator, and assigning the other path to the initiator, wherein the other path is one of the paths identified.
[0039] Thus, examples of the present techniques may involve path allocation performed by the storage array controller, and which may be performed automatically. As mentioned, a practice in target-based peer zoning may be to configure zones on a per target port basis. The target port that is configuring the zone may act as the "principal" member, and most or all initiators that communicate or talk to this target port may be "peer" members. Thus, in a given zone, there may be only one target port and multiple initiator ports.
Therefore, examples herein provide for the storage array controller to pick a target port among the potentially many available ports. Again, such may be performed automatically by the storage array controller.
[0040] FIG. 7 is a method 700 of allocating ports of a storage array controller of a disk array to initiators in a storage network, which involves distributing fabric paths to the initiators. In certain examples, the method 700 is analogous to block 406 of method 400. The method 700 may allocate available paths on a round robin basis to one initiator at a time until most or all initiators are provisioned with their LUNs. The actions may involve identifying most or all available paths (ports) on the disk array or storage array controller, picking a path on a round robin basis while provisioning a LUN to an initiator (first time), checking if there are other initiators from the same host that are already allocated to this path, and if yes, picking another path (may follow round robin basis here again). The technique may assign the path, as selected above, to the initiator, and repeat the same until most or all initiators are provisioned. In certain examples, peer zoning may be started once all initiators are provisioned.
[0041] At block 702, the method 700 begins. As indicated, the method 700 may be performed by a storage array controller of a disk array in a storage network. The storage network may be a SAN, and have one or more fabrics coupling multiple host servers comprising initiators to the disk array, such as to ports of the storage array controller. At block 704, the method picks an initiator during LUN provisioning of the disk array. At block 706, the method includes identifying most or all available paths to the disk array. In the illustrated example, the paths identified are those available in a single fabric if more than one fabric is employed, e.g., if the SAN is a dual fabric network. At decision block 708, the method determines whether the initiator is being provisioned for the first time. If no, i.e., if the initiator has already been provisioned, then the method proceeds to block 710, which determines if all initiators have been provisioned. If all initiators have been provisioned, the method concludes, as indicated by reference numeral 712. If all initiators have not been provisioned, then the method returns to block 704 to select or pick an initiator.
[0042] However, if at decision block 708 the answer is yes, i.e., the initiator is being provisioned for the first time, the method picks a path, e.g., on a round robin basis, as indicated in block 714. At block 716, the method determines if the selected path is already assigned to another initiator from the same host as the initiator picked in block 704. If the path is not already assigned to another initiator from the same host, the path is assigned to the initiator, as noted in block 718. Conversely, if the path is already assigned to another initiator from the same host, another path is picked, e.g., also on a round-robin basis, as noted in block 720, prior to assigning the path (the other path picked in block 720) to the initiator. After a path is assigned to the initiator, the method proceeds to decision block 710 to determine if all initiators are provisioned, as discussed.
[0043] Lastly, some examples accommodate path distribution for multiple initiators on a single host. In some examples, multiple initiators from the same host may be distributed to different paths, for instance, on each of two switches in the fabric. If the paths available are more than the number of initiators per host, then each initiator may thus be assigned a unique path. Otherwise, the paths may be distributed, e.g., equally, on a round robin basis again to the initiators to provide increased load distribution. Another factor that can be taken into account for load distribution is the speed of each link if ports on a storage array are not of the same link speed. If the ports are of different link speeds, then appropriate weight may be given while distributing initiators on available paths. For example, if one port supports 16 Gb/s and another 8 Gb/s, the 16 Gb/s port can generally support twice as many initiators as the 8 Gb/s port, and such may be taken into consideration when distributing the initiators. Similarly, if there are host ports or initiators that require increased or maximum possible bandwidth, as another example, the host server ports or initiators can be allocated an independent target port, for instance, on a 1:1 basis.
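The link-speed weighting above — a faster port receiving proportionally more initiators — can be sketched as a weighted round robin. The port names and speeds are assumed for the example; this is an illustrative sketch, not the patent's implementation.

```python
# Illustrative weighted round robin: expand each target port into
# "slots" proportional to its link speed (a 16 Gb/s port gets twice
# the slots of an 8 Gb/s port), then deal initiators across the slots.

def weighted_slots(ports):
    """ports: dict of port -> link speed. Returns a slot list where
    faster ports appear proportionally more often."""
    base = min(ports.values())
    slots = []
    for port, speed in ports.items():
        slots.extend([port] * (speed // base))
    return slots

def distribute(initiators, ports):
    """Assign each initiator a port, round robin over weighted slots."""
    slots = weighted_slots(ports)
    return {init: slots[i % len(slots)] for i, init in enumerate(initiators)}

ports = {"T1": 16, "T2": 8}  # assumed link speeds in Gb/s
print(distribute(["i1", "i2", "i3"], ports))
```

With these speeds the slot list is T1, T1, T2, so over many initiators port T1 carries roughly twice the load of T2, matching the 2x relationship described in the text.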
[0044] In sum, examples may provide for allocation of paths (mapping initiators to target ports) when multiple paths are available for efficient utilization of SAN bandwidth. The allocation may be performed by a storage array controller of a disk array, and may be automatic. Multipath software may not be required. Some examples combine peer zoning, target-based zoning, and path selection into a single operation. Certain examples provide for automatic visibility to available target paths prior to creation of zones, and reduce or eliminate errors in load balancing. Further, multipath software may handle load balancing within a host, while examples herein auto load balance across multiple hosts in a fabric.
[0045] FIG. 8 is a block diagram showing a tangible, non-transitory, computer-readable medium that stores code configured to operate a data storage system. The computer-readable medium is referred to by the reference number 800. The computer-readable medium 800 can include RAM, a hard disk drive, an array of hard disk drives, an optical drive, an array of optical drives, a non-volatile memory, a flash drive, a digital versatile disk (DVD), or a compact disk (CD), among others. The computer-readable medium 800 may be accessed by a processor 802 over a computer bus 804. Furthermore, the computer-readable medium 800 may include code configured to perform the methods and techniques described herein. The computer-readable medium 800 may be the nonvolatile memory in the storage array or disk array 100 of FIG. 1, memory in the storage network 200 of FIG. 2, and so forth. The computer-readable medium 800 may include firmware executed by a processor or storage array controller of FIGS. 1 and 2.
[0046] The various software components discussed herein may be stored on the computer-readable medium 800. A portion 806 of the computer-readable medium 800 can include a network arranger, which may be a module or executable code that directs a processor or controller in arranging a storage network, such as a SAN. As discussed above, the arranging may be to configure zones, allocate paths (ports) to the initiators during LUN provisioning, and so on. The arranging may be iterative.
[0047] An example includes a disk array having a storage array controller and an array of disks to store data for a storage network, the storage network having a fabric coupling multiple host servers comprising initiators to the disk array, wherein each host server comprises at least one initiator. The storage array controller is to manage the disks, to configure zones of the fabric, and to allocate ports of the storage array controller to the initiators during LUN provisioning of the disk array to the initiators, wherein the ports as allocated are target ports for respective zones. The configuring of the zones and allocating of the ports may be performed contemporaneously and/or
automatically by the storage array controller. To allocate the ports may include to identify paths from the initiators through the fabric to the ports and to assign a path to one initiator at a time, wherein the path is one of the identified paths, and wherein to configure the zones and allocate the ports may comprise target- based peer zoning.
[0048] Another example is a storage network, such as a SAN, which includes multiple host servers comprising initiators, wherein each host server comprises at least one initiator. The storage network may include a fabric coupling the multiple host servers to a disk array. The disk array has an array of disks and a storage array controller to manage the disks, wherein the storage array controller is to configure zones of the fabric and to allocate ports of the storage array controller to the initiators during LUN provisioning of the disk array to the initiators. The ports as allocated are target ports for respective zones. The configuring of the zones and allocating of the ports may be performed contemporaneously and/or automatically by the storage array controller.
Moreover, in examples, the fabric has paths to the ports, wherein to allocate the ports includes distributing the paths to the initiators, and wherein the ports as allocated each are a target port for a respective zone. In examples, to allocate the ports includes to identify paths from the initiators through the fabric to the ports and to assign a path to one initiator at a time, and wherein the path is one of the identified paths. Thus, to allocate the ports may include to iteratively: select an initiator during the LUN provisioning; identify paths from the initiator through the fabric to the ports; and choose and assign a path to the initiator, wherein the path is one of the paths identified, wherein to configure the zones and allocate the ports may comprise target-based peer zoning.
[0049] In yet another example, a method includes arranging a storage network having a fabric coupling multiple host servers comprising initiators to a disk array, the arranging by a storage array controller in the disk array and including: provisioning logical unit numbers (LUNs) of the disk array to the initiators; configuring zones from the initiators through the fabric to the storage array controller, wherein each zone comprises one port of the storage array controller as a target port, and wherein each zone comprises at least two initiators; and allocating ports of the storage array controller to the initiators, wherein the ports as allocated each comprise the target port for a respective zone, the allocating comprising distributing paths to the initiators, the paths through the fabric from the initiators to the storage array controller. The configuring of the zones, provisioning the LUNs, and allocating the ports may be performed contemporaneously and/or automatically by the storage array controller, wherein distributing paths to the initiators may include identifying paths from the initiators to the storage array controller, and assigning a path to one initiator at a time, and wherein the path is one of the paths identified. Thus, distributing paths to the initiators may include iteratively: selecting an initiator during the LUN provisioning; identifying the paths from the initiator through the fabric to the ports; and choosing and assigning a path to the initiator. The storage network may be a storage area network (SAN), the fabric may have switches and a FC topology. The configuring of the zones and allocating the ports may comprise target-based peer zoning, wherein each path has one port of the storage array controller, the one port as the target port for the respective zone.
[0050] Lastly, another example includes a tangible, non-transitory, computer- readable medium comprising instructions that direct a processor to arrange a storage network, including to configure zones of a fabric and to allocate ports of a storage array controller of a disk array to initiators during logical unit number (LUN) provisioning of the disk array, wherein the ports as allocated are target ports. The processor may be the storage array controller. The storage array controller may be directed by the instructions to configure the zones
contemporaneous with allocating the ports to the initiators during the LUN provisioning. The fabric may have paths through the fabric to the ports, and wherein to allocate the ports includes distributing the paths to the initiators, including to assign a path to one initiator at a time.
[0051] While the present techniques may be susceptible to various modifications and alternative forms, the exemplary examples discussed above have been shown only by way of example. It is to be understood that the technique is not intended to be limited to the particular examples disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the true spirit and scope of the appended claims.

Claims

What is claimed is:
1. A disk array comprising:
an array of disks to store data for a storage network, the storage network comprising a fabric coupling multiple host servers comprising initiators to the disk array, wherein each host server comprises at least one initiator; and
a storage array controller to manage the disks and to configure zones of the fabric and to allocate ports of the storage array controller to the initiators during logical unit number (LUN) provisioning of the disk array to the initiators, and wherein the ports as allocated comprise target ports for respective zones.
2. The disk array of claim 1, wherein the storage array controller to configure the zones contemporaneous with allocating the ports to the initiators during the LUN provisioning, wherein the fabric comprises paths to the ports, wherein to allocate the ports comprises distributing the paths to the initiators, and wherein the ports as allocated each comprise a target port for a respective zone.
3. The disk array of claim 1, wherein the storage array controller to automatically configure the zones and allocate the ports, wherein to allocate the ports comprises to identify paths from the initiators through the fabric to the ports and to assign a path to one initiator at a time, and wherein the path comprises one of the identified paths.
4. The disk array of claim 1, wherein the storage network comprises a storage area network (SAN), and wherein to allocate the ports comprises to iteratively:
select an initiator during the LUN provisioning;
identify paths from the initiator through the fabric to the ports; and choose and assign a path to the initiator, wherein the path comprises one of the paths identified.
5. The disk array of claim 1, wherein the fabric comprises switches, and wherein to allocate the ports comprises iteratively to:
pick an initiator during the LUN provisioning;
identify paths from the initiator through the fabric to the ports;
select a path on a round robin basis in response to the initiator not being provisioned, wherein the path is one of the paths identified; and
assign the path to the initiator in response to the path not being assigned to another initiator on a same host server as the initiator.
6. The disk array of claim 5, wherein to allocate the ports comprises to iteratively select another path on a round robin basis in response to the path being assigned to another initiator from the same host server as the initiator, and assign the another path to the initiator, wherein the another path comprises one of the paths identified.
7. The disk array of claim 1, wherein to configure the zones and allocate the ports comprises target-based peer zoning.
8. A method comprising:
arranging a storage network having a fabric coupling multiple host servers comprising initiators to a disk array, the arranging by a storage array controller in the disk array and comprising:
provisioning logical unit numbers (LUNs) of the disk array to the initiators;
configuring zones from the initiators through the fabric to the storage array controller, wherein each zone comprises one port of the storage array controller as a target port, and wherein each zone comprises at least two initiators; and
allocating ports of the storage array controller to the initiators, wherein the ports as allocated each comprise the target port for a respective zone, the allocating comprising distributing paths to the initiators, the paths through the fabric from the initiators to the storage array controller.
9. The method of claim 8, wherein configuring the zones, provisioning the LUNs, and allocating the ports are performed
contemporaneously by the storage array controller, wherein distributing paths to the initiators comprises identifying paths from the initiators to the storage array controller, and assigning a path to one initiator at a time, and wherein the path comprises one of the paths identified.
10. The method of claim 8, wherein configuring the zones, provisioning the LUNs, and allocating the ports are performed automatically by the storage array controller, and wherein distributing paths to the initiators comprises iteratively:
selecting an initiator during the LUN provisioning;
identifying the paths from the initiator through the fabric to the ports; and choosing and assigning a path to the initiator.
11. The method of claim 8, wherein the storage network comprises a storage area network (SAN) and the fabric comprises switches, and wherein distributing paths to the initiators comprises iteratively:
picking an initiator during the LUN provisioning;
identifying paths from the initiator through the fabric to the ports of the storage array controller, the ports comprising target ports;
selecting a path on a round robin basis in response to the initiator not being provisioned, wherein the path is one of the paths identified; and
assigning the path to the initiator in response to the path not being assigned to another initiator on a same host server as is the initiator.
12. The method of claim 11, wherein distributing paths to the initiators comprises iteratively selecting another path on the round robin basis in response to the path being assigned to another initiator from the same host server as the initiator picked, and assigning the another path to the initiator picked.
13. The method of claim 8, wherein configuring the zones and allocating the ports comprises target-based peer zoning, wherein each path comprises one port of the storage array controller, the one port comprising the target port for the respective zone.
14. A tangible, non-transitory, computer-readable medium comprising instructions that direct a processor to:
arrange a storage network, comprising to configure zones of a fabric of the storage network and to allocate ports of a storage array controller of a disk array to initiators during logical unit number (LUN) provisioning of the disk array, wherein the ports as allocated comprise target ports.
15. The computer-readable medium of claim 14, wherein the processor comprises the storage array controller, and wherein the storage array controller directed by the instructions to configure the zones contemporaneous with allocating the ports to the initiators during the LUN provisioning, wherein the fabric comprises paths through the fabric to the ports, and wherein to allocate the ports comprises distributing the paths to the initiators, comprising to assign a path to one initiator at a time.
PCT/US2015/043245 2015-07-31 2015-07-31 Disk array having controller to allocate ports WO2017023271A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/043245 WO2017023271A1 (en) 2015-07-31 2015-07-31 Disk array having controller to allocate ports

Publications (1)

Publication Number Publication Date
WO2017023271A1 2017-02-09

Family

ID=57943985


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10733131B1 (en) 2019-02-01 2020-08-04 Hewlett Packard Enterprise Development Lp Target port set selection for a connection path based on comparison of respective loads
US10873626B2 (en) 2016-04-29 2020-12-22 Hewlett Packard Enterprise Development Lp Target driven peer-zoning synchronization
US10897506B2 (en) 2014-07-02 2021-01-19 Hewlett Packard Enterprise Development Lp Managing port connections
US11050825B1 (en) * 2020-01-30 2021-06-29 EMC IP Holding Company LLC Storage system port usage information sharing between host devices

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090168772A1 (en) * 2003-07-21 2009-07-02 Dropps Frank R Lun based hard zoning in fibre channel switches
US20100077408A1 (en) * 2008-09-24 2010-03-25 Sun Microsystems, Inc. Storage area network and method for provisioning therein
US7990994B1 (en) * 2004-02-13 2011-08-02 Habanero Holdings, Inc. Storage gateway provisioning and configuring
US20130262811A1 (en) * 2012-03-27 2013-10-03 Hitachi, Ltd. Method and apparatus of memory management by storage system
US20130332614A1 (en) * 2012-06-12 2013-12-12 Centurylink Intellectual Property Llc High Performance Cloud Storage

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number 15900539; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number 15900539; country of ref document: EP; kind code of ref document: A1)