WO2016018251A1 - Processing nodes - Google Patents

Processing nodes

Info

Publication number
WO2016018251A1
WO2016018251A1 (PCT/US2014/048614)
Authority
WO
WIPO (PCT)
Prior art keywords
processing nodes
direct
computer system
attached storage
coupled
Prior art date
Application number
PCT/US2014/048614
Other languages
French (fr)
Inventor
Chui Ching CHIU
Tse-Jen SUNG
Jim KUO
Ku-Hsu NIEN
Kang-Jong PENG
Hung-Chu LEE
Original Assignee
Hewlett-Packard Development Company, L.P.
Application filed by Hewlett-Packard Development Company, L.P.
Priority to PCT/US2014/048614
Publication of WO2016018251A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629: Configuration or reconfiguration of storage systems
    • G06F 3/0635: Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G06F 3/0607: Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0683: Plurality of storage devices
    • G06F 3/0689: Disk arrays, e.g. RAID, JBOD

Definitions

  • Direct-attached storage (DAS) refers to a data storage device that is directly attached to a computing device without a storage area network (SAN) in between.
  • the data capacity and connectivity of a DAS system can be improved through the use of switches or expanders, which enable a number of storage devices to be coupled to multiple computing devices.
  • components of the DAS system are implemented as blade devices deployed in a blade enclosure.
  • a single blade enclosure may include several blade servers, storage controllers, and switches, among other components.
  • the blade enclosure can provide a variety of services such as power, cooling, networking, various interconnects, and system management.
  • the DAS system may use the Serial Attached Small Computer System Interface (Serial Attached SCSI (SAS)) protocol for physically connecting and transferring data between the computing devices and the storage devices.
  • Non-blade server solutions are also possible.
  • FIG. 1 is a block diagram of a computer system with direct attached storage
  • FIG. 2 is a block diagram of a computer system showing circuitry for routing a control signal bus between processing nodes and the storage drives;
  • FIG. 3 is a block diagram of a computer system showing circuitry for routing a data storage bus between the processing nodes and the storage drives;
  • FIG. 4 is a process flow diagram of a method for configuring a computer system.
  • the present disclosure provides techniques for configuring a multi-node computer system that includes direct-attached storage (DAS) drives.
  • the multi-node system includes two or more processing nodes and a number of storage drives.
  • the processing nodes and the storage drives are housed within the same enclosure.
  • the multi-node system may be configured, for example, as a blade server.
  • each storage drive can be coupled to one of the processing nodes using a storage protocol such as Serial Attached Small Computer System Interface (Serial Attached SCSI (SAS)) for transferring data between the computing device and the storage device.
  • Processing nodes can be coupled to specific storage drives through a SAS switch, which includes an expander.
  • the storage resources of the DAS devices are made accessible to specific processing nodes by configuring zone groups, which control how the expanders route connections through the switch to couple specific processing nodes to specific storage drives.
  • the configuration of the expander is usually accomplished through storage management firmware loaded onto a processing device of the expander.
  • the storage management firmware may be operatively coupled through a network connection to a storage administration device that enables an administrator to configure the switches remotely.
  • the SAS switch contributes significant additional cost to the system. Additionally, significant cost and effort are involved in creating and maintaining the expander firmware and other firmware and software components used in the storage system configuration process.
  • the data storage resources of a DAS-based computer system can be coupled to the processing nodes without the use of a SAS switch, expander, or computer code.
  • the coupling of storage drives to processing nodes is accomplished through hardware, such as manual switches, multiplexers, programmable logic devices, and the like.
  • the elimination of the SAS switch can reduce the cost of the system as well as eliminate the time and effort of generating and maintaining the firmware and software used to accomplish the storage system configuration.
  • Fig. 1 is a block diagram of a computer system with direct attached storage.
  • the computer system 100 may include a number of processing nodes 102.
  • One or more of the processing nodes 102 may be general purpose processing devices, referred to herein as compute nodes.
  • the compute nodes may be blade servers.
  • One or more of the processing nodes 102 may be specialized processing nodes that are configured to perform a dedicated function.
  • one or more of the processing nodes 102 may be storage nodes that are configured to process data storage transactions received from other processing nodes 102.
  • the computer system 100 also includes a number of storage drives 104, which may be any suitable type of storage drive, including disk drives, solid state drives, tape drives, and the like.
  • a particular processing node 102 can be coupled to one, several, or even all of the storage drives 104.
  • Each storage drive 104 can be coupled to only one of the processing nodes 102 at a time.
  • the processing nodes 102 and the storage drives 104 can use any suitable storage protocol for connecting and transferring data on the storage bus, including SAS, Serial ATA (SATA), SCSI, and others.
  • the storage bus refers to the bus that transfers storage protocol instructions and storage data between the processing nodes 102 and the storage drives 104.
  • processing nodes 102 may be coupled to storage drives 104 through an additional data bus, referred to herein as a control signal bus.
  • the control signal bus can be used to transfer control messages and signals between the processing nodes 102 and the storage drives 104.
  • some or all of the storage drives 104 may include or be coupled to visual indicators, such as Light Emitting Diode (LED) indicators, that indicate a status of the storage drive 104.
  • the processing nodes 102 may be configured to control the LEDs disposed on any storage drive 104 to which they are coupled. The LEDs can be activated to indicate that a particular storage drive 104 has failed or is actively storing or reading data, or to help a user locate a particular storage drive, for example.
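The LED signaling described above can be sketched as a small state table. A minimal Python model follows; the state names and the assumption of two LEDs per drive are illustrative, not taken from the patent, which only says the LEDs can indicate failure, activity, and drive location:

```python
# Illustrative model of per-drive LED status signaling. The state names and
# the two-LED layout are assumptions for illustration only.
DRIVE_LED_PATTERNS = {
    "ok":     {"fault_led": False, "activity_led": False},
    "active": {"fault_led": False, "activity_led": True},   # storing/reading data
    "failed": {"fault_led": True,  "activity_led": False},
    "locate": {"fault_led": True,  "activity_led": True},   # help user find the drive
}

def led_lines(drive_state):
    """Return the logic level for each LED control line for a drive state."""
    return DRIVE_LED_PATTERNS[drive_state]
```
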
  • the computer system 100 also includes routing circuitry 106, which is used to couple the storage drives 104 to assigned processing nodes 102 based on the storage system configuration selected by the user.
  • the routing circuitry 106 is configured to route the storage bus and/or the control signal bus.
  • the user can determine the storage system configuration through a switch 108, which may be a manual switch.
  • a manual switch is a physical switch that can be actuated manually and does not use programming code to operate. Examples of manual switches include toggle buttons, toggle switches, DIP switches, rotary switches, and the like.
  • At least one of the storage drives 104 is coupled to the processing nodes 102 through the routing circuitry 106.
  • some of the storage drives 104 may be coupled to the processing nodes 102 directly.
  • Various configurations are possible in addition to the particular configuration shown in Fig. 2.
  • all of the storage drives 104 may be coupled to respective processing nodes 102 through the routing circuitry 106.
  • some or all of the components of the computer system 100 are housed within a common enclosure, sometimes referred to as a chassis.
  • the processing nodes 102 and storage drives 104 may be disposed on blades that are inserted into slots of the enclosure.
  • the enclosure can include a backplane with conductive traces that provide the communications paths between the processing nodes 102 and the storage drives 104.
  • the routing circuitry 106 can be included in the backplane.
  • the switch 108 can be disposed on an external surface of the enclosure to enable user access.
  • the computer system 100 is configured to provide a number of configuration choices that can be selected by the user by adjusting the switch 108.
  • the setting of the switch 108 determines a configuration of the routing between the plurality of processing nodes 102 and the plurality of storage drives 104. For example, the user may want to assign a greater amount of storage capacity to some processing nodes 102 compared to others. Accordingly, the switch 108 can be configured to change the number of storage drives 104 assigned to a particular processing node 102.
  • Fig. 2 is a block diagram of a computer system showing circuitry for routing a control signal bus between the processing nodes and the storage drives.
  • the computer system 100 shown in Fig. 2 includes four processing nodes 102, two of which are compute nodes 202 and two of which are storage nodes 204.
  • One of the storage nodes 204 is directly coupled to four storage drives 104, meaning that the routing circuitry is not located between the storage node 204 and the storage drives 104.
  • The other storage node 204 is directly coupled to two storage drives 104.
  • the two compute nodes 202 are not directly coupled to any of the storage drives 104.
  • In addition to the storage drives 104 that are directly coupled to a storage node 204, the computer system 100 also includes six additional storage drives 104, each of which can be selectively coupled to one of the four processing nodes 102 through the routing circuitry 106.
  • the storage drives 104 that cannot be re-routed to another processing node 102 are referred to as dedicated storage drives 206, and the storage drives 104 that can be selectively routed to a different processing node 102 are referred to as routable storage drives 208.
  • the computer system 100 of Figs. 2 and 3 is one example of a computer system in accordance with the present techniques. For example, various modifications can be made in accordance with the design considerations of a particular implementation, such as the number of dedicated storage drives, the number of routable storage drives, and the number of processing nodes, among others.
  • Each processing node 102 can include two connections to a control signal bus, which may be used to send control messages to one or more LEDs associated with the storage drives 104, for example.
  • the storage protocol bus used to transfer storage instructions and storage data between the processing nodes 102 and the storage drives 104 is not shown in Fig. 2, but is described further below in relation to Fig. 3.
  • each processing node 102 can send control messages to a corresponding Peripheral Interface Controller (PIC) 210 over a serial bus such as I2C, for example.
  • the PIC 210 can decode the commands and send the corresponding control signals to the LEDs of the storage drives 104.
  • Each control signal can be a logic high or logic low signal that turns on a particular LED.
  • the control signals can be used to indicate a drive operating status, identify a drive selected by a management application, indicate drive failure, or identify a particular drive as being coupled to a particular node.
  • Each storage drive 104 can have any suitable number of LEDs, with corresponding control signals.
  • each PIC 210 is coupled to a memory 212 such as a Non-Volatile Random Access Memory (NVRAM).
  • the memory 212 is to store backplane design information, such as NVRAM version, backplane identifier, and bay counts, among other information.
  • the memory 212 can be read by the PIC 210 and enables the PIC software to accommodate different hardware architectures.
  • each PIC 210 is coupled to the LEDs of four storage drives. However, additional configurations are also possible.
  • Although a single line is shown between the PIC 210 and each storage drive 104, it will be appreciated that each storage drive 104 can have any suitable number of LEDs, each of which may receive control signals through the PIC 210.
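The PIC's decoding step can be sketched as follows. The one-byte command format used here (high nibble selects the drive bay, low nibble is an LED bitmap) is an assumption for illustration; the patent does not specify the command encoding:

```python
# Sketch of the PIC's role: decode a command received over the serial bus into
# per-LED control lines. The one-byte format (high nibble = drive bay, low
# nibble = LED bitmap) is an assumed encoding, not taken from the patent.

def decode_command(command_byte):
    """Split a command byte into (drive_bay, led_bitmap)."""
    drive_bay = (command_byte >> 4) & 0x0F
    led_bitmap = command_byte & 0x0F
    return drive_bay, led_bitmap

def led_levels(led_bitmap, num_leds=4):
    """Expand the LED bitmap into one logic-high/low level per control line."""
    return [bool(led_bitmap & (1 << i)) for i in range(num_leds)]
```
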
  • the user can select the storage configuration through the switch 108, which is shown in Fig. 2 as a dual-inline package (DIP) switch.
  • the routing circuitry 106 can include a programmable logic device (PLD) 214, such as a Field Programmable Gate Array (FPGA).
  • the switch 108 is operatively coupled to the programmable logic device 214 and, in this example, provides two selection inputs to the programmable logic device 214, labeled “SEL 0” and “SEL 1”.
  • the selection inputs received by the programmable logic device 214 determine which processing nodes 102 the routable storage drives 208 are coupled to.
  • control signals from the PIC 210 can be routed directly to the dedicated storage drives 206.
  • The remaining storage drives 104 are coupled to the processing nodes 102 through the routing circuitry 106 so that they can be selectively coupled to different processing nodes 102 depending on the user selection.
  • the programmable logic device 214 of Fig. 2 includes routing inputs, A0 through A11, and routing outputs, B0 through B5. Command signals are received from the PICs 210 at the routing inputs and output at corresponding routing outputs. The coupling between the routing inputs and routing outputs depends on the system configuration selected by the user.
  • the programmable logic device 214 can include any suitable number of routing inputs and routing outputs, depending on the design details of a particular implementation. For a detailed example of the routing configuration provided by each possible configuration of the switch, see Table 1. Table 1: Selectable Storage Configurations for the Example Computer System shown in Fig. 2.
  • The column labeled “No. of Drives” refers to the total number of storage drives 104 assigned to the two storage nodes 204.
  • The columns labeled “PLD input” refer to the logic state of the two selection inputs provided by the switch 108, and the columns labeled “PLD output” refer to the resulting configuration of the programmable logic device 214.
  • The columns labeled B0 through B5 indicate which routing input, A0 through A11, will be connected to each routing output, B0 through B5, for each configuration.
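The behavior Table 1 describes can be modeled as a lookup from the two selection inputs to an input-to-output pairing. Since Table 1 itself is not reproduced in this text, the pairings below are placeholders, not the actual table contents:

```python
# Model of the programmable logic device: the two selection inputs choose
# which routing input (A0..A11) drives each routing output (B0..B5). The
# pairings below are placeholders, NOT the actual Table 1 contents.
ROUTING_TABLES = {
    (0, 0): {f"B{i}": f"A{i}" for i in range(6)},          # placeholder pairing
    (1, 0): {f"B{i}": f"A{i + 6}" for i in range(6)},      # placeholder pairing
    (0, 1): {f"B{i}": f"A{2 * i}" for i in range(6)},      # placeholder pairing
    (1, 1): {f"B{i}": f"A{2 * i + 1}" for i in range(6)},  # placeholder pairing
}

def pld_route(sel0, sel1, inputs):
    """Drive each routing output from the input selected by (SEL 0, SEL 1)."""
    table = ROUTING_TABLES[(sel0, sel1)]
    return {output: inputs[source] for output, source in table.items()}
```
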
  • the computer system 100 can be set to one of four possible configurations. Each configuration is characterized by the number of storage drives 104 assigned to the storage nodes 204. In the first configuration, only the six dedicated storage drives 206 are coupled to the storage nodes 204, which is the lowest number of storage drives 104 that can be assigned to the storage nodes 204 in this example.
  • the routable storage drives 208 are coupled to the compute nodes 202, with three routable storage drives 208 coupled to each compute node 202.
  • some of the routable storage drives 208 can be re-routed to the storage nodes 204 simply by adjusting the switch 108.
  • With SEL 0 set to H (logical high) and SEL 1 set to L (logical low), four of the routable storage drives 208 are routed to the storage nodes 204, and each compute node 202 is coupled to only one routable storage drive 208.
  • The other routing configurations that are possible in the example of Fig. 2 are shown in Table 1.
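A drive-count model of the four configurations can be sketched as follows. The 0-drive and 4-drive entries follow the text (all routable drives stay on the compute nodes by default; SEL 0 = H, SEL 1 = L moves four to the storage nodes); the 2- and 6-drive entries are assumptions made only to complete the four settings:

```python
# Number of routable drives (out of six) re-routed to the storage nodes for
# each (SEL 0, SEL 1) setting. Entries marked "assumed" are not from the text.
ROUTABLE_TO_STORAGE = {
    ("L", "L"): 0,  # all six routable drives stay with the compute nodes
    ("L", "H"): 2,  # assumed
    ("H", "L"): 4,  # per the text: one routable drive left per compute node
    ("H", "H"): 6,  # assumed
}

DEDICATED_TO_STORAGE = 6  # dedicated drives are always coupled to the storage nodes

def storage_node_drive_count(sel0, sel1):
    """Total storage drives assigned to the two storage nodes for a setting."""
    return DEDICATED_TO_STORAGE + ROUTABLE_TO_STORAGE[(sel0, sel1)]
```
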
  • the programmable logic device 214 also outputs three multiplexer signals labeled MUX SEL 0, MUX SEL 1, and MUX SEL 2.
  • the multiplexer signals are used to control the routing of the storage protocol bus, as described below in relation to Fig. 3.
  • Fig. 3 is a block diagram of a computer system showing circuitry for routing a storage protocol bus between the processing nodes and the storage drives.
  • Fig. 3 shows the same compute nodes 202, storage nodes 204, dedicated storage drives 206, and routable storage drives 208 that were shown in Fig. 2. However, rather than showing the control signal bus that is used for controlling the storage drive LEDs, Fig. 3 shows the routing of the storage protocol bus.
  • the routing circuitry 106 can include a set of multiplexers 302, labeled MUX 0 through MUX 5. Each multiplexer 302 is coupled to one of the routable storage drives 208 and selectively couples the routable storage drive 208 to one of two possible processing nodes 102.
  • each routable storage drive 208 can be routed to one of the compute nodes 202 or one of the storage nodes 204.
  • the multiplexer signals output by the programmable logic device 214 (Fig. 2) are input to the multiplexers 302 and determine which of the two possible processing nodes 102 each storage drive 104 will be coupled to.
  • For a detailed example of the routing configuration provided by each possible configuration of the switch, see Table 2. Table 2: Selectable Storage Configurations for the Example Computer System shown in Fig. 2.
  • The column labeled “No. of Drives” refers to the total number of storage drives 104 assigned to the two storage nodes 204.
  • The columns labeled “PLD input” refer to the logic state of the two selection inputs provided by the switch 108, and the multiplexer signals labeled MUX SEL 0, MUX SEL 1, and MUX SEL 2 represent the logic state of the multiplexer signals output by the programmable logic device 214.
  • The columns labeled MUX 0 through MUX 5 indicate which multiplexer output, B or C, is coupled to the multiplexer input, A. As shown in Fig. 3, multiplexer output B is the output on the left and multiplexer output C is the output on the right.
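The storage-bus routing can be sketched as six 1-to-2 multiplexers driven by the three MUX SEL signals. The pairing of select signals to multiplexers used below (signal i controls multiplexers 2i and 2i+1) is an assumption; Table 2 holds the actual mapping:

```python
# Sketch of the storage protocol bus routing: each multiplexer couples a
# routable drive (multiplexer input A) to one of two candidate processing
# nodes (outputs B and C). The signal-to-multiplexer pairing is assumed.

def route_storage_bus(mux_sel, candidates):
    """candidates[i] is the (node on output B, node on output C) pair for
    routable drive i; return the node each drive is coupled to."""
    routing = {}
    for i, (node_b, node_c) in enumerate(candidates):
        select = mux_sel[i // 2]  # assumed pairing of MUX SEL signals to muxes
        routing[f"drive_{i}"] = node_c if select else node_b
    return routing
```
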
  • Fig. 4 is a process flow diagram of a method for configuring a computer system.
  • the method 400 can be performed by the routing circuitry 106 shown in Figs. 1-3 without the use of computer-readable code such as software or firmware.
  • the method can begin at block 402.
  • a change is detected in a setting of a manual switch.
  • the switch may be disposed in a server enclosure and configured to enable a user to select a different zoning configuration for the server enclosure.
  • each setting of the switch corresponds to a different number of storage drives to be coupled to a particular processing node or group of processing nodes.
  • each switch setting may correspond to a number of storage drives to be assigned to the storage nodes of the server.
  • one or more storage drives may be coupled to a selected one of a pair of processing nodes as described below in relation to blocks 404, 406, and 408.
  • a control signal bus can be coupled from the selected one of the pair of processing nodes to the storage drive, using a programmable logic device, such as the programmable logic device 214 shown in Figs. 1-3.
  • the control signal bus can be a communication bus used by the coupled processing node to send control signals to an indicator disposed on the storage drive, such as an LED or set of LEDs.
  • the programmable logic device can generate one or more multiplexer inputs and send the multiplexer inputs to a multiplexer or set of multiplexers.
  • Each of the multiplexers can be configured to couple the storage drive to one of the pair of processing nodes.
  • the multiplexer or set of multiplexers couples the storage protocol bus from the selected one of the pair of processing nodes to the storage drive. Storage instructions and data can then be sent between the storage drive and the processing node to which it is coupled.
  • The process flow diagram of Fig. 4 is not intended to indicate that the elements of method 400 are to be executed in any particular order, or that all of the elements of method 400 are to be included in every case. Further, any number of additional elements not shown in Fig. 4 can be included in the method 400, depending on the details of the specific implementation.
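The blocks of method 400 can be summarized as a short software model of the hardware's reaction to a switch change. The rule deriving the three MUX SEL bits from (SEL 0, SEL 1) below is an assumption for illustration; in the patent this logic is implemented in the programmable logic device, not in code:

```python
# End-to-end sketch of method 400: a switch change is detected (block 402),
# the PLD routes the control signal bus (block 404) and generates multiplexer
# select signals (block 406), and the multiplexers route the storage protocol
# bus (block 408). The (SEL 0, SEL 1) -> MUX SEL rule is assumed.

def on_switch_change(sel0, sel1):
    """React to a new switch setting; purely a software model of the hardware."""
    # Block 404: the PLD re-routes the control signal bus for the new setting.
    control_bus_routed = True
    # Block 406: the PLD derives the multiplexer select signals (assumed rule).
    mux_sel = (sel0, sel1, sel0 & sel1)
    # Block 408: the multiplexers couple the storage protocol bus accordingly.
    return {"control_bus_routed": control_bus_routed, "mux_sel": mux_sel}
```
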

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bus Control (AREA)

Abstract

A computer system includes a plurality of processing nodes and a plurality of direct-attached storage devices coupled to the plurality of processing nodes. Each direct-attached storage device is coupled to one of the processing nodes. The computer system includes a manual switch, where the setting of the switch determines a configuration of the routing between the plurality of processing nodes and the plurality of direct-attached storage devices.

Description

PROCESSING NODES
BACKGROUND
[0001] Direct-attached storage (DAS) refers to a data storage device that is directly attached to a computing device without a storage area network (SAN) in between. The data capacity and connectivity of a DAS system can be improved through the use of switches or expanders, which enable a number of storage devices to be coupled to multiple computing devices. Often, components of the DAS system are implemented as blade devices deployed in a blade enclosure. For example, a single blade enclosure may include several blade servers, storage controllers, and switches, among other components. The blade enclosure can provide a variety of services such as power, cooling, networking, various interconnects, and system management. The DAS system may use the Serial Attached Small Computer System Interface (Serial Attached SCSI (SAS)) protocol for physically connecting and transferring data between the computing devices and the storage devices. Non- blade server solutions are also possible.
BRIEF DESCRIPTION OF THE DRAWINGS
[0001] Certain exemplary embodiments are described in the following detailed description and in reference to the drawings, in which:
[0002] Fig. 1 is a block diagram of a computer system with direct attached storage;
[0003] Fig. 2 is a block diagram of a computer system showing circuitry for routing a control signal bus between processing nodes and the storage drives;
[0004] Fig. 3 is a block diagram of a computer system showing circuitry for routing a data storage bus between the processing nodes and the storage drives; and
[0005] Fig. 4 is a process flow diagram of a method for configuring a computer system.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
[0006] The present disclosure provides techniques for configuring a multi-node computer system that includes direct-attached storage (DAS) drives. The multi-node system includes two or more processing nodes and a number of storage drives. In some examples, the processing nodes and the storage drives are housed within the same enclosure. The multi-node system may be configured, for example, as a blade server.
[0007] In a typical DAS system, each storage drive can be coupled to one of the processing nodes using a storage protocol such as Serial Attached Small Computer System Interface (Serial Attached SCSI (SAS)) for transferring data between the computing device and the storage device. Processing nodes can be coupled to specific storage drives through a SAS switch, which includes an expander. The storage resources of the DAS devices are made accessible to specific processing nodes by configuring zone groups, which control how the expanders route
connections through the switch to couple specific processing nodes to specific storage drives. The configuration of the expander is usually accomplished through storage management firmware loaded onto a processing device of the expander. The storage management firmware may be operatively coupled through a network connection to a storage administration device that enables an administrator to configure the switches remotely. The SAS switch contributes significant additional cost to the system. Additionally, significant cost and effort is involved in creating and maintaining the expander firmware and other firmware and software components used in the storage system configuration process.
[0008] In accordance with the techniques disclosed herein, the data storage resources of a DAS-based computer system can be coupled to the processing nodes without the use of a SAS switch, expander, or computer code. As described below, the coupling of storage drives to processing nodes is accomplished through hardware, such as manual switches, multiplexers, programmable logic devices, and the like. The elimination of the SAS switch can reduce the cost of the system as well as eliminate the time and effort of generating and maintaining the firmware and software used to accomplish the storage system configuration.
[0009] Fig. 1 is a block diagram of a computer system with direct attached storage. The computer system 100 may include a number or processing nodes 102. One or more of the processing nodes 102 may be general purpose processing devices, referred to herein as compute nodes. In some examples, the compute nodes may be blade servers. One or more of the processing nodes 102 may be specialized processing nodes that are configured to perform a dedicated function. For example, one or more of the processing nodes 1 02 may be storage nodes that are configured to process data storage transactions received from other processing nodes 102.
[0010] The computer system 100 also includes a number of storage drives 104, which may be any suitable type of storage drive, including disk drives, solid state drives, tape drives, and the like. A particular processing node 1 02 can be coupled to one, several, or even all of the storage drives 104. Each storage drive 1 04 can be coupled to one of the processing nodes 102 at a time. In some examples, the processing nodes 102 and the storage drives 104 can use any suitable storage protocol for connecting and transferring data on the storage bus, including SAS, Serial ATA (SATA), SCSI, and others. As used herein the storage bus refers to the bus that transfers storage protocol instructions and storage data between the processing nodes 102 and the storage drives 104.
[0011] In some examples, processing nodes 102 may be coupled to storage drives 1 04 through an additional data bus, referred to herein as a control signal bus. The control signal bus can be used to transfer control messages and signals between the processing nodes 1 02 and the storage drives 104. For example, some or all of the storage drives 1 04 may include or be coupled to visual indicators, such as Light Emitting Diode (LED) indicators, that indicate a status of the storage drive 104. The processing nodes 1 02 may be configured to control the LEDs disposed on any storage drive 104 to which it is coupled. The LEDs can be activated to indicate that a particular storage drive 104 has failed or is actively storing or reading data or to help a user locate a particular storage drive, for example.
[0012] The computer system 100 also includes routing circuitry 1 06, which is used to couple the storage drives 104 to assigned processing nodes 102 based on the storage system configuration selected by the user. In some examples, the routing circuitry 106 is configured to route the storage bus and/or the control signal bus. The user can determine the storage system configuration through a switch 108, which may be a manual switch. As used herein, a manual switch is a physical switch that can be actuated manually and does not use programming code to operate. Examples of manual switches includes, toggle buttons, toggle switches, dip switches, rotary switches, and the like. At least one of the storage drives 104 is coupled to the processing nodes 1 02 through the routing circuitry 106. Additionally, some of the storage drives 104 may be coupled to the processing nodes 102 directly. Various configurations are possible in additional to the particular configuration shown in Fig. 2. For example, in some examples, all of the storage drives 1 04 may be coupled to respective processing nodes 102 through the routing circuitry 106.
[0013] In some examples, some or all of the components of the computer system 100 are housed within a common enclosure, sometimes referred to as a chassis. For example, the processing nodes 102 and storage drives 104 may be disposed on blades that are inserted into slots of the enclosure. The enclosure can include a backplane with conductive traces that provide the communications paths between the processing nodes 102 and the storage drives 1 04. The routing circuitry 106 can be included in the backplane. The switch 108 can be disposed on an external surface of the enclosure to enable user access.
[0014] In some examples, the computer system 100 is configured to provide a number of configuration choices that can be selected by the user by adjusting the switch 1 08. The setting of the switch 108 determines a configuration of the routing between the plurality of processing nodes 102 and the plurality of storage drives 104. For example, the user may want to assign a greater amount of storage capacity to some processing nodes 102 compared to others. Accordingly, the switch 1 08 can be configured to change the number of storage drives 104 assigned to a particular processing node 102.
[0015] Fig. 2 is a block diagram of a computer system showing circuitry for routing a control signal bus between the processing nodes and the storage drives. The computer system 100 shown in Fig. 2 includes four processing nodes 102, two of which are compute nodes 202 and two of which are storage nodes 204. One of the storage nodes 204 is directly coupled to four storage drives 104, meaning that the routing circuitry is not located between the storage node 204 and the storage drives 104. The other storage node 204 is directly coupled to two storage drives 104. The two compute nodes 202 are not directly coupled to any of the storage drives 104. In addition to the storage drives 104 that are directly coupled to a storage node 204, the computer system 100 also includes six additional storage drives 104, each of which can be selectively coupled to one of the four processing nodes 102 through the routing circuitry 106. The storage drives 104 that cannot be re-routed to another processing node 102 are referred to as dedicated storage drives 206, and the storage drives 104 that can be selectively routed to a different processing node 102 are referred to as routable storage drives 208. It will be appreciated that the computer system 100 of Figs. 2 and 3 is one example of a computer system in accordance with the present techniques. For example, various modifications can be made in accordance with the design considerations of a particular implementation, such as the number of dedicated storage drives, the number of routable storage drives, and the number of processing nodes, among others.
[0016] Each processing node 102 can include two connections to a control signal bus, which may be used to send control messages to one or more LEDs associated with the storage drives 104, for example. The storage protocol bus used to transfer storage instructions and storage data between the processing nodes 102 and the storage drives 104 is not shown in Fig. 2, but is described further below in relation to Fig. 3. To control the LEDs, each processing node 102 can send control messages to a corresponding Peripheral Interface Controller (PIC) 210 over a serial bus such as I2C, for example. The PIC 210 can decode the commands and send corresponding control signals to the LEDs on the storage drives 104. Each control signal can be a logic high or logic low signal that turns on a particular LED. The control signals can be used to indicate a drive operating status, identify a drive selected by a management application, indicate drive failure, or identify a particular drive as being coupled to a particular node. Each storage drive 104 can have any suitable number of LEDs, with corresponding control signals.
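The command-decode step performed by the PIC can be sketched as a small behavioral model. This is illustrative only: the document does not specify the PIC's actual message format, so the status names and LED roles below are hypothetical, chosen to match the indications listed in paragraph [0016].

```python
# Illustrative model of a PIC decoding a control message into per-LED
# logic levels. Status names and LED indices are hypothetical; the
# patent only states that each control signal is a logic-high or
# logic-low level that turns a particular LED on.

# Hypothetical LED roles on each storage drive.
LED_ACTIVITY, LED_LOCATE, LED_FAULT = 0, 1, 2

def decode_command(drive_index, status):
    """Map a drive status to logic levels for that drive's LEDs.

    Returns (drive_index, {led_index: level}); a logic-high (1) level
    turns the corresponding LED on.
    """
    levels = {LED_ACTIVITY: 0, LED_LOCATE: 0, LED_FAULT: 0}
    if status == "operating":
        levels[LED_ACTIVITY] = 1   # indicate drive operating status
    elif status == "locate":
        levels[LED_LOCATE] = 1     # identify a drive selected by management
    elif status == "failed":
        levels[LED_FAULT] = 1      # indicate drive failure
    return drive_index, levels
```

In the real system this decoding happens in the PIC firmware, and the resulting levels are driven out on the control signal bus to the drive's LEDs.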
[0017] In some examples, each PIC 210 is coupled to a memory 212 such as a Non-Volatile Random Access Memory (NVRAM). The memory 212 is to store backplane design information, such as NVRAM version, backplane identifier, and bay counts, among other information. The memory 212 can be read by the PIC 210 and enables the PIC software to accommodate different hardware architectures. In the system shown in Fig. 2, each PIC 210 is coupled to the LEDs of four storage drives. However, additional configurations are also possible. Furthermore, although a single line is shown between each PIC 210 and each storage drive 104, it will be appreciated that each storage drive 104 can have any suitable number of LEDs, each of which may receive control signals through the PIC 210.
[0018] The user can select the storage configuration through the switch 108, which is shown in Fig. 2 as a dual-inline package (DIP) switch. The routing circuitry 106 can include a programmable logic device (PLD) 214, such as a Field Programmable Gate Array (FPGA), a Complex Programmable Logic Device (CPLD), or Programmable Array Logic (PAL), among others. The switch 108 is operatively coupled to the programmable logic device 214 and, in this example, provides two selection inputs to the programmable logic device 214, labeled "SEL 0" and "SEL 1." The selection inputs received by the programmable logic device 214 determine which processing nodes 102 the routable storage drives 208 are coupled to.
[0019] For those storage drives 104 that are coupled directly to a processing node 102, control signals from the PIC 210 can be routed directly to the corresponding storage drive, i.e., without passing through the routing circuitry 106 or any other routing or switching device. The remaining storage drives 104 are coupled to the processing nodes 102 through the routing circuitry 106 so that they can be selectively coupled to different processing nodes 102 depending on the user selection. The programmable logic device 214 of Fig. 2 includes routing inputs, A0 through A11, and routing outputs, B0 through B5. Command signals are received from the PICs 210 at the routing inputs and output at corresponding routing outputs. The coupling between the routing inputs and routing outputs depends on the system configuration selected by the user. The programmable logic device 214 can include any suitable number of routing inputs and routing outputs, depending on the design details of a particular implementation. For a detailed example of the routing configuration provided by each possible configuration of the switch, see Table 1.

Table 1: Selectable Storage Configurations for the Example Computer System shown in Fig. 2.
[0020] In Table 1, the column labeled "No. of Drives" refers to the total number of storage drives 104 assigned to the two storage nodes 204. The columns labeled "PLD input" refer to the logic state of the two selection inputs provided by the switch 108, and the columns labeled "PLD output" refer to the resulting configuration of the programmable logic device 214. The columns labeled B0 through B5 indicate which routing input, A0 through A11, will be connected to each routing output, B0 through B5, for each configuration.
[0021] In the example shown in Fig. 2 and Table 1, the computer system 100 can be set to one of four possible configurations. Each configuration is characterized by the number of storage drives 104 assigned to the storage nodes 204. In the first configuration, only the six dedicated storage drives 206 are coupled to the storage nodes 204, which is the lowest number of storage drives 104 that can be assigned to the storage nodes 204 in this example. The routable storage drives 208 are coupled to the compute nodes 202, with three routable storage drives 208 coupled to each compute node 202. If the user decides that additional storage drives 104 should be assigned to the storage nodes 204, some of the routable storage drives 208 can be re-routed to the storage nodes 204 simply by adjusting the switch 108. For example, to assign a total of ten storage drives 104 to the storage nodes 204, the user can adjust the switch to SEL 0 = H (logical high) and SEL 1 = L (logical low). In that configuration, four of the routable storage drives 208 are routed to the storage nodes 204, and each compute node 202 is now coupled to only one routable storage drive 208. The other routing configurations that are possible in the example of Fig. 2 are shown in Table 1.
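The relationship between the switch setting and the resulting drive assignment can be sketched as a lookup. This is a behavioral model only: the two populated entries come from the configurations stated in the text (six drives at the lowest setting, ten at SEL 0 = H, SEL 1 = L); mapping the six-drive configuration to SEL 0 = L, SEL 1 = L is an assumption, and the remaining two entries of Table 1 are left unspecified rather than guessed.

```python
# Behavioral model of the switch-to-configuration mapping of paragraph
# [0021]. Two entries are taken from the text; assigning (L, L) to the
# six-drive configuration is an assumption, and the other two entries
# appear only in Table 1 (not reproduced as text), so they are None.

H, L = 1, 0  # logic high / logic low on the selection inputs

DRIVES_TO_STORAGE_NODES = {
    (L, L): 6,    # assumed lowest setting: only the six dedicated drives
    (H, L): 10,   # stated example: four routable drives re-routed
    (L, H): None, # given in Table 1, not reproduced here
    (H, H): None, # given in Table 1, not reproduced here
}

DEDICATED = 6       # dedicated drives, always on the storage nodes
TOTAL_ROUTABLE = 6  # routable drives shared by the two compute nodes

def drives_per_compute_node(sel0, sel1):
    """Routable drives left per compute node for a given switch setting."""
    total = DRIVES_TO_STORAGE_NODES[(sel0, sel1)]
    if total is None:
        raise ValueError("configuration not specified in the source text")
    routed_to_storage = total - DEDICATED
    return (TOTAL_ROUTABLE - routed_to_storage) // 2
```

This reproduces the two cases described in the text: three routable drives per compute node at the lowest setting, and one per compute node when ten drives are assigned to the storage nodes.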
[0022] The programmable logic device 214 also outputs three multiplexer signals labeled MUX SEL 0, MUX SEL 1, and MUX SEL 2. The multiplexer signals are used to control the routing of the storage protocol bus, as described below in relation to Fig. 3.
[0023] Fig. 3 is a block diagram of a computer system showing circuitry for routing a storage protocol bus between the processing nodes and the storage drives. Fig. 3 shows the same compute nodes 202, storage nodes 204, dedicated storage drives 206, and routable storage drives 208 that were shown in Fig. 2. However, rather than showing the control signal bus that is used for controlling the storage drive LEDs, Fig. 3 shows the routing of the storage protocol bus. As shown in Fig. 3, the routing circuitry 106 can include a set of multiplexers 302, labeled MUX 0 through MUX 5. Each multiplexer 302 is coupled to one of the routable storage drives 208 and selectively couples the routable storage drive 208 to one of two possible processing nodes 102. In the examples of Figs. 2 and 3, each routable storage drive 208 can be routed to one of the compute nodes 202 or one of the storage nodes 204. The multiplexer signals output by the programmable logic device 214 (Fig. 2) are input to the multiplexers 302 and determine which of the two possible processing nodes 102 each storage drive 104 will be coupled to. For a detailed example of the routing configuration provided by each possible configuration of the switch, see Table 2.
Table 2: Selectable Storage Configurations for the Example Computer System shown in Fig. 2.
[0024] In Table 2, the column labeled "No. of Drives" refers to the total number of storage drives 104 assigned to the two storage nodes 204. The columns labeled "PLD input" refer to the logic state of the two selection inputs provided by the switch 108, and the columns labeled MUX SEL 0, MUX SEL 1, and MUX SEL 2 represent the logic state of the multiplexer signals output by the programmable logic device 214. The columns labeled MUX 0 through MUX 5 indicate which multiplexer output, B or C, is coupled to the multiplexer input, A. As shown in Fig. 3, multiplexer output B is the output on the left and multiplexer output C is the output on the right.
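Each per-drive multiplexer described above is a simple one-to-two router: a single select bit determines whether the drive's storage protocol bus connects to output B or output C. A minimal sketch follows; the node names and the convention that a low select bit picks output B are assumptions for illustration (Table 2, not reproduced as text, gives the real mapping).

```python
# Minimal model of one storage-protocol-bus multiplexer from Fig. 3.
# Input A is the routable drive; outputs B and C lead to the two
# candidate processing nodes (which two depends on the mux's wiring).

def mux_route(select_bit, node_b, node_c):
    """Return the processing node coupled to the drive for this select bit.

    Assumption: 0 selects output B and 1 selects output C; the actual
    polarity is defined by Table 2 in the patent.
    """
    return node_c if select_bit else node_b

# Example: a mux wired between a compute node and a storage node
# (hypothetical names) routes its drive according to the select bit.
assert mux_route(0, "compute-0", "storage-0") == "compute-0"
assert mux_route(1, "compute-0", "storage-0") == "storage-0"
```

Three MUX SEL signals driving six multiplexers implies that some multiplexers share a select signal, which is consistent with drives being re-routed in groups as the drive count changes.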
[0025] Fig. 4 is a process flow diagram of a method for configuring a computer system. The method 400 can be performed by the routing circuitry 106 shown in Figs. 1-3 without the use of computer-readable code such as software or firmware. The method can begin at block 402.
[0026] At block 402, a change is detected in a setting of a manual switch. The switch may be disposed in a server enclosure and configured to enable a user to select a different zoning configuration for the server enclosure. In some examples, each setting of the switch corresponds to a different number of storage drives to be coupled to a particular processing node or group of processing nodes. For example, each switch setting may correspond to a number of storage drives to be assigned to the storage nodes of the server. In response to the change in the switch setting, one or more storage drives may be coupled to a selected one of a pair of processing nodes as described below in relation to blocks 404, 406, and 408.
[0027] At block 404, a control signal bus can be coupled from the selected one of the pair of processing nodes to the storage drive, using a programmable logic device, such as the programmable logic device 214 shown in Figs. 2 and 3. The control signal bus can be a communication bus used by the coupled processing node to control an indicator disposed on the storage drive, such as an LED or set of LEDs.
[0028] At block 406, the programmable logic device can generate one or more multiplexer inputs and send the multiplexer inputs to a multiplexer or set of multiplexers. Each of the multiplexers can be configured to couple the storage drive to one of the pair of processing nodes.
[0029] At block 408, the multiplexer or set of multiplexers couples the storage protocol bus from the selected one of the pair of processing nodes to the storage drive. Storage instructions and data can then be exchanged between the storage drive and the processing node to which it is coupled.
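The flow of blocks 402 through 408 can be summarized as a sketch. Note that in the actual system this logic is realized in hardware (the PLD and the multiplexers) without software; the Python below is only a behavioral model, and all function and object names are invented for illustration.

```python
# Behavioral model of method 400: react to a manual-switch change by
# re-routing the control signal bus (via the PLD) and the storage
# protocol bus (via the multiplexers). All names are illustrative.

def on_switch_change(sel_bits, pld, muxes):
    # Block 402: a change in the manual switch setting was detected;
    # sel_bits carries the new (SEL 0, SEL 1) values.

    # Block 404: the PLD routes the control signal bus from the selected
    # processing node to each routable drive's indicator (LEDs).
    pld.route_control_bus(sel_bits)

    # Block 406: the PLD derives the multiplexer select inputs from the
    # switch setting and drives them out (MUX SEL 0..2 in Fig. 2).
    mux_inputs = pld.generate_mux_inputs(sel_bits)

    # Block 408: each multiplexer couples the storage protocol bus from
    # the selected processing node to its drive.
    for mux, sel in zip(muxes, mux_inputs):
        mux.couple(sel)
    return mux_inputs
```

The `pld` and `mux` objects stand in for the programmable logic device 214 and the multiplexers 302; in hardware, each step is combinational logic rather than a method call.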
[0030] The process diagram of Fig. 4 is not intended to indicate that the elements of method 400 are to be executed in any particular order, or that all of the elements of the method 400 are to be included in every case. Further, any number of additional elements not shown in Fig. 4 can be included in the method 400, depending on the details of the specific implementation.
[0031] While the present techniques may be susceptible to various modifications and alternative forms, the examples discussed above have been shown only by way of example. It is to be understood that the technique is not intended to be limited to the particular examples disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the true spirit and scope of the application.

Claims

What is claimed is:
1. A computer system comprising:
an enclosure;
a plurality of processing nodes housed within the enclosure;
a plurality of direct-attached storage devices housed within the enclosure and coupled to the plurality of processing nodes, wherein each direct-attached storage device is coupled to one of the processing nodes; and
a manual switch disposed in the enclosure, wherein a setting of the switch determines a configuration of the routing between the plurality of processing nodes and the plurality of direct-attached storage devices.
2. The computer system of claim 1, comprising a programmable logic device coupled to the manual switch, the programmable logic device to route a control signal bus between the plurality of processing nodes and the plurality of direct-attached storage devices based on the switch setting.
3. The computer system of claim 1, comprising a programmable logic device coupled to the manual switch, the programmable logic device to control one or more multiplexers based on the switch setting, the one or more multiplexers to route a storage protocol bus between the plurality of processing nodes and the plurality of direct-attached storage devices.
4. The computer system of claim 3, wherein the programmable logic device and the one or more multiplexers are included in a backplane of the enclosure.
5. The computer system of claim 3, wherein each setting of the manual switch indicates a number of storage drives to couple to one or more processing nodes.
6. The computer system of claim 1, wherein the direct-attached storage drives are coupled to the processing nodes through a Small Computer System Interface (SCSI) bus.
7. A method of configuring a server, comprising:
detecting a change in a setting of a manual switch disposed in a server enclosure; and
in response to the change, coupling a direct-attached storage drive to a selected one of a pair of processing nodes.
8. The method of claim 7, wherein coupling the direct-attached storage drive to the selected one of the pair of processing nodes comprises:
generating a multiplexer input and sending the multiplexer input to a multiplexer configured to couple the direct-attached storage drive to the selected one of the pair of processing nodes.
9. The method of claim 7, wherein coupling the direct-attached storage drive to the selected one of the pair of processing nodes comprises:
coupling, via a programmable logic device, a control signal bus from the selected one of the pair of processing nodes to an indicator of the direct-attached storage drive.
10. The method of claim 7, wherein coupling the direct-attached storage drive to the selected one of the pair of processing nodes comprises:
coupling, via a programmable logic device, a control signal bus from the selected one of the pair of processing nodes to the direct-attached storage drive; sending a multiplexer signal from the programmable logic device to a multiplexer; and
coupling, via the multiplexer, a storage protocol bus from the selected one of the pair of processing nodes to the direct-attached storage drive.
11. A computer system, comprising:
a plurality of processing nodes;
a plurality of direct-attached storage drives coupled to the plurality of processing nodes through a set of multiplexers, wherein each storage drive of the plurality of direct-attached storage drives is coupled to one of the processing nodes; and
a manual switch coupled to a logic device, wherein a setting of the switch determines a set of multiplexer inputs generated by the logic device and sent to the set of multiplexers, wherein the set of multiplexer inputs determine which one of the plurality of processing nodes each storage drive is coupled to.
12. The computer system of claim 11, wherein the multiplexers are serial attached Small Computer System Interface (serial attached SCSI (SAS)) multiplexers.
13. The computer system of claim 11, wherein the logic device is configured to couple LED control signals from the plurality of processing nodes to the plurality of direct-attached storage drives, wherein the LED control signals are routed based on the configuration of the switch.
14. The computer system of claim 11, wherein the logic device is a complex programmable logic device (CPLD) included in a backplane of a chassis that houses the plurality of processing nodes and the plurality of direct-attached storage drives.
15. The computer system of claim 11, wherein each setting of the manual switch indicates a number of storage drives to be coupled to one or more of the plurality of processing nodes.
PCT/US2014/048614 2014-07-29 2014-07-29 Processing nodes WO2016018251A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2014/048614 WO2016018251A1 (en) 2014-07-29 2014-07-29 Processing nodes

Publications (1)

Publication Number Publication Date
WO2016018251A1

Family

ID=55217975

Country Status (1)

Country Link
WO (1) WO2016018251A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003330762A (en) * 2002-05-09 2003-11-21 Hitachi Ltd Control method for storage system, storage system, switch and program
US20040088297A1 (en) * 2002-10-17 2004-05-06 Coates Joshua L. Distributed network attached storage system
US20100064169A1 (en) * 2003-04-23 2010-03-11 Dot Hill Systems Corporation Network storage appliance with integrated server and redundant storage controllers
US20110267188A1 (en) * 2010-04-29 2011-11-03 Wilson Larry E Configurable Control Of Data Storage Device Visual Indicators In A Server Computer System
US20130198350A1 (en) * 2001-06-05 2013-08-01 Daniel Moore Multi-Class Heterogeneous Clients in a Clustered Filesystem

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14898508

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14898508

Country of ref document: EP

Kind code of ref document: A1