EP2100415B1 - System and method for networking computing clusters - Google Patents


Info

Publication number
EP2100415B1
Authority
EP
European Patent Office
Prior art keywords
switch
interfaces
package
switch package
packages
Prior art date
Legal status
Active
Application number
EP07865860.6A
Other languages
German (de)
French (fr)
Other versions
EP2100415A2 (en)
Inventor
Shannon V. Davidson
James D. Ballew
Current Assignee
Raytheon Co
Original Assignee
Raytheon Co
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (Darts-ip global patent litigation dataset, CC BY 4.0)
Application filed by Raytheon Co
Publication of EP2100415A2
Application granted
Publication of EP2100415B1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/40: Constructional details, e.g. power supply, mechanical construction or backplane
    • H04L49/35: Switches specially adapted for specific applications
    • H04L49/356: Switches specially adapted for specific applications for storage area networks
    • H04L49/358: Infiniband Switches

Definitions

  • Motherboard 202 generally refers to any suitable circuit board having connectors 214 and receptors 206 and 210 that together make up at least a portion of an electronic system.
  • Connectors 214 generally refer to any interconnecting medium capable of transmitting audio, video, signals, data, messages, or any combination of the preceding.
  • Connectors 214 in this particular embodiment are communicative paths or traces that electrically couple the switch receptors 206, the daughter card receptor 210, and the interfaces 212 as shown. Although illustrated as a single line for simplicity, each connector 214 actually comprises three independent connectors.
  • Connectors 214 may be formed, for example, using photolithographic techniques on a surface of motherboard 202.
  • Switch receptors 206 and daughter card receptor 210 generally refer to any mounting surface or socket operable to receive and electrically couple to switches 204 and daughter cards 208 respectively.
  • Switches 204 generally refer to any device capable of routing between respective switch ports any audio, video, signals, data, messages, or any combination of the preceding.
  • In the example embodiment, switches 204a and 204b are each 24-port Infiniband switches mounted on switch receptors 206a and 206b respectively; however, any appropriate switch or router may be used.
  • Each switch 204a and 204b comprises an integrated circuit that allows communication between each of the respective switch ports.
  • For example, switch 204a may route data from connectors 214d to connectors 214c.
  • Although the switches 204 in this example each have twenty-four ports, any appropriate number of ports may be used without departing from the scope of the present disclosure.
  • Connectors 214c enable communication between switch 204a and 204b, the communication internal to switch package 200.
  • Thus, switch nodes 204a and 204b are able to communicate without the use of external interfaces 212 and associated cabling, which enhances bandwidth capabilities and simplifies network fabric 104 implementation.
  • Connectors 214a, 214b, and 214d enable communication between each switch 204 and a plurality of interfaces 212.
  • Interfaces 212 generally enable switch package 200 to communicate externally.
  • In this particular embodiment, interfaces 212 include twenty-four client interfaces 212a and 212b and four network interfaces 212c and 212d; however, any appropriate number of interfaces may be used.
  • Each client interface 212a and 212b is a 4X Infiniband port that can be coupled to a commodity computer; however, other types of interfaces may be used.
  • Each 4X Infiniband port is associated with one port of a respective 24-port switch 204.
  • Interfaces 212a and 212b may alternatively use, for example, 12X Infiniband connectors for higher density or any other appropriate connector.
  • Each network interface 212c, 212d, 212e, and 212f is a 12X Infiniband port that can be coupled to other switch packages; however, other types of interfaces may be used.
  • Each 12X Infiniband port is associated with three switch ports of a respective switch 204a or 204b.
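The port arithmetic implied above can be double-checked: a 12X Infiniband interface aggregates three 4X-wide switch ports, while each 4X client interface consumes one. The sketch below is an assumption pieced together from the example embodiment (twelve clients per switch, one 12X-wide internal link, and three external 12X interfaces per switch), not a layout stated verbatim in the text:

```python
# Hedged check of the port budget for one 24-port switch in package 200,
# assuming a 12X link consumes three 4X switch ports and a 4X link one.
PORTS_PER_12X = 3
PORTS_PER_4X = 1

budget = {
    "client_4x": 12 * PORTS_PER_4X,     # half of interfaces 212a/212b
    "internal_12x": 1 * PORTS_PER_12X,  # connectors 214c to the peer switch
    "network_12x": 3 * PORTS_PER_12X,   # e.g. a share of 212c/212d plus 212e or 212f
}
total = sum(budget.values())
print(budget, "total:", total)  # total: 24, exactly one 24-port switch
```

Under these assumptions the budget closes exactly, which is consistent with every switch port being either a client port, an internal trace, or an external 12X lane.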
  • In the example embodiment, a daughter card 208 mounts on motherboard 202 to provide two additional network interfaces 212e and 212f, each of which is a 12X Infiniband port; however, other types and/or numbers of interfaces may be used.
  • Daughter card 208 generally refers to any secondary circuit board capable of coupling to daughter card receptor 210.
  • Daughter card receptor 210 is operable to receive any of a variety of daughter cards 208, thus providing a modular switch package 200 that may be configured and optimized for any particular need or network architecture.
  • The daughter card 208 of various embodiments may include one or more switches mounted to the daughter card 208.
  • In this example, daughter card 208 comprises connectors 216a and 216b that respectively enable communication between interfaces 212e and 212f and connectors 214a and 214b.
  • Various other embodiments may not include a daughter card 208 and associated daughter card receptor 210.
  • In such embodiments, connectors 214a and 214b may couple directly to interfaces 212e and 212f respectively without coupling to connectors 216a and 216b within a daughter card 208.
  • Connectors 216a and 216b may be, for example, traces on motherboard 202.
  • Interfaces 212c, 212d, 212e, and 212f are physically positioned on the opposite side of switch package 200 from interfaces 212a and 212b.
  • In this manner, two different sides of switch package 200 are used for connections, maximizing the density of interfaces 212.
  • Example embodiments of the physical layout of interfaces 212 are illustrated in FIGURES 2B and 2C respectively.
  • FIGURE 2B is a side view illustrating one embodiment of the front of the modular network switch package 200 of FIGURE 2A .
  • In this particular embodiment, switch package 200 fits within a standard 1U enclosure operable to mount horizontally in a standard nineteen-inch equipment rack.
  • Each of the motherboard 202, daughter card 208, and switches 204 fit within the 1U enclosure of switch package 200.
  • The front of switch package 200 generally includes six 12X Infiniband interfaces 212c, 212d, 212e, and 212f, each operable to provide connections to other switch packages; however, other types and/or numbers of interfaces may be used.
  • Multiple interconnected switch packages 200 may form at least a portion of the network array fabric 104 of FIGURE 1.
  • The back of switch package 200 generally includes twenty-four 4X Infiniband client interfaces 212a and 212b, each operable to provide connections to the HCA port of a computer (not explicitly shown); however, other types and/or numbers of interfaces may be used.
  • The computers may be mounted in the same equipment rack.
  • The back of switch package 200 also includes two power jacks 260.
  • Switch package 200 may support any of a variety of network architectures.
  • For example, switch package 200 may support two-dimensional and/or dual-rail architectures by interconnecting switch package 200 with other similarly configured switch packages 200 using network interfaces 212c, 212d, 212e, and 212f.
  • Various other embodiments may use alternative switch package 200 configurations to support other network architectures.
  • For example, switch package 200 may interconnect with other similarly configured switch packages 200 to form a one-dimensional network architecture.
  • The one-dimensional network architecture may have individual switch nodes 204a and 204b extending theoretically in positive and negative directions along a single axis.
  • Switches 204a and 204b may communicate with other respective switch packages 200 through interfaces 212e and 212f respectively.
  • The remaining interfaces 212a, 212b, 212c, and 212d may include a total of thirty-six 4X Infiniband connections, enabling each switch 204a and 204b in the one-dimensional network configuration to communicate with up to eighteen client nodes; however, connections other than 4X Infiniband may be used.
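The figure of thirty-six connections can be reproduced from the counts given above; the only equivalence assumed here is that one 12X interface can be broken out into three 4X connections:

```python
# Reproducing the one-dimensional configuration count: with 212e/212f
# dedicated to the network axis, the remaining interfaces break out as
# 4X client connections (three per 12X port).
client_4x = 24       # interfaces 212a and 212b
network_12x = 4      # interfaces 212c and 212d, repurposed for clients
total_4x = client_4x + network_12x * 3
print(total_4x)      # 36 connections across the package
print(total_4x // 2) # 18 client nodes per switch
```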
  • Switch package 200 supports multi-dimensional arrays with network connections both inside and outside the switch package 200 enclosure, in one embodiment.
  • In various embodiments, the modular daughter card receptor 210 and associated daughter card 208 enable alternative configurations with greater complexity than what is illustrated in FIGURE 2A.
  • For example, switch package 200 may alternatively be configured with a daughter card 208 operable to additionally support a three-dimensional network architecture, as illustrated in FIGURES 3A and 3B.
  • FIGURE 3A is a block diagram illustrating an alternative embodiment of a portion of a modular network switch package 300 that may form a portion of the network fabric 104 of FIGURE 1 .
  • This particular embodiment differs from the example embodiment illustrated in FIGURE 2A in that switch package 300 more conveniently supports various network configurations, including, for example, dual-rail, two-dimensional and/or three-dimensional network cluster architectures.
  • Switch package 300 generally provides a standardized, compact network switch that can be used to construct very large scale, fault tolerant, and high performance computing clusters, such as the computing cluster 100 of FIGURE 1 .
  • Switch package 300 generally includes the following components coupled to a common motherboard 302: a plurality of switches 304 coupled to respective switch receptors 306, one or more daughter cards 308 coupled to respective daughter card receptors 310, and a plurality of interfaces 312.
  • In various embodiments, switch package 300 and associated components may all fit within a standard 1U enclosure having interfaces on both sides of the enclosure.
  • In such embodiments, the 1U enclosure may be operable to mount horizontally in a standard nineteen-inch equipment rack.
  • In addition, such embodiments may greatly enhance the networking capability and routing density associated with space typically dedicated to a standard 1U enclosure. A more detailed description of example physical layouts for a 1U enclosure is provided below with reference to FIGURES 3B and 3C.
  • One difference between the example embodiments of FIGURE 3A and FIGURE 2A is the configuration of daughter card 308 and respective interfaces 312e, 312f, 312g, and 312h.
  • The other features of switch package 300 are substantially similar to the respective features of switch package 200.
  • For example, motherboard 302, daughter card receptor 310, switches 304, switch receptors 306, connectors 314a, 314b, 314c, and 314d, and interfaces 312a, 312b, 312c, and 312d are substantially similar in structure and function to motherboard 202, daughter card receptor 210, switches 204, switch receptors 206, connectors 214a, 214b, 214c, and 214d, and interfaces 212a, 212b, 212c, and 212d respectively of FIGURE 2A.
  • Various other embodiments may not include a modular daughter card 308 and associated daughter card receptor 310.
  • In such embodiments, connectors 314a and 314b may communicatively couple to switch receptors 352a and 352b respectively and/or switches 350a and 350b respectively without communicatively coupling to connectors 316 within a daughter card 308.
  • In such embodiments, these connectors may simply be, for example, traces on motherboard 302.
  • The support of a three-dimensional network architecture may be effected by replicating the base design of FIGURE 2A onto a printed circuit board of daughter card 308. That is, daughter card 308 includes two 24-port Infiniband switches 350a and 350b coupled by switch receptors 352a and 352b to connectors 310; however, other types and/or numbers of interfaces may be used. In addition, daughter card 308 couples switches 350a and 350b to interfaces 312g and 312h respectively, thus doubling the network connectivity of switch package 300 over that of switch package 200. In operation, each switch 304a, 304b, 350a, and 350b may communicate with each other switch 304a, 304b, 350a, and 350b within switch package 300.
  • Each switch 304a, 304b, 350a, and 350b may communicate with up to six client nodes through respective interfaces 312c, 312d, 312e, and 312f. Example embodiments of the physical layout of interfaces 312 are illustrated in FIGURES 3B and 3C respectively.
  • FIGURE 3B is a side view illustrating one embodiment of the front of the modular network switch package 300 of FIGURE 3A .
  • In this particular embodiment, switch package 300 comprises a standard 1U enclosure operable to mount horizontally in a standard nineteen-inch equipment rack.
  • Each of the motherboard 302, daughter card 308, and switches 304 and 350 may fit within the 1U enclosure of switch package 300.
  • The front of switch package 300 generally includes sixteen 12X Infiniband interfaces 312a, 312b, 312g, and 312h, each operable to provide connections to other network switch packages; however, other types and/or numbers of interfaces may be used.
  • The back of switch package 300 generally includes twenty-four 4X Infiniband interfaces 312c, 312d, 312e, and 312f, each operable to provide connections to the HCA port on a computer (not explicitly shown); however, other types and/or numbers of interfaces may be used.
  • the computers may be mounted in the same equipment rack.
  • Switch packages 300 may be configured and interconnected in any of a variety of network cluster architectures. For example, pairs of switch packages 300 may be used to construct network nodes for a three-dimensional, dual-rail network. In addition, switch package 300 may interconnect with other similarly configured switch packages 300 to form a three-dimensional, mesh network architecture.
  • The three-dimensional, mesh network architecture may have individual switch nodes 350a, 350b, 304a, and 304b extending theoretically in positive and negative directions along three orthogonal axes, X, Y, and Z.
  • Switch 304b may communicate with four other switches in a theoretical X-Y plane using interfaces 312a, the four other switches residing in one or more other similarly configured switch packages 300.
  • Switch 304b may also communicate with switches 304a and 350a in a theoretical positive and negative Z direction respectively. Up to six of the remaining switch ports of switch 304b may be used to connect to six clients 102 through interfaces 312d.
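A minimal sketch of this three-dimensional arrangement, assuming the axis assignment described above (Z-axis neighbors internal to the enclosure, X and Y crossing package boundaries) and three switch ports per 12X mesh link; the coordinate scheme is an illustration, not the patent's routing:

```python
# Classify the six mesh neighbors of a switch at (x, y, z) in a package-300
# style node: Z neighbors share the enclosure, X/Y neighbors need cables.
def neighbors(x, y, z):
    links = {}
    for axis, (dx, dy, dz) in {"X": (1, 0, 0), "Y": (0, 1, 0), "Z": (0, 0, 1)}.items():
        for sign in (+1, -1):
            kind = "internal" if axis == "Z" else "external"
            links[(x + sign * dx, y + sign * dy, z + sign * dz)] = kind
    return links

n = neighbors(0, 0, 0)
internal = sum(1 for k in n.values() if k == "internal")
external = sum(1 for k in n.values() if k == "external")
# Port budget for one 24-port switch: six neighbors at three ports each
# leaves six ports for clients, consistent with "up to six client nodes".
print(internal, external, 24 - 3 * (internal + external))
```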
  • Alternatively, switch package 300 may interconnect with other similarly configured switch packages 300 to form a two-dimensional network architecture.
  • The two-dimensional network architecture may have individual switch nodes 350a, 350b, 304a, and 304b extending theoretically in positive and negative directions along two orthogonal axes, X and Y.
  • For example, switch 304b may communicate with four switches in a theoretical X-Y plane, two of the four switches (350b and 304a) internal to switch package 300, and the other two switches residing in one or more other similarly configured switch packages 300.
  • The communication between switch packages 300 may be effected, for example, using two 12X Infiniband connectors for each of the interfaces 312c, 312d, 312e, and 312f; however, other types and/or numbers of interfaces may be used.
  • The communication between each switch package 300 and up to forty-eight respectively coupled client nodes 102 may be effected using, for example, up to sixteen 12X Infiniband connectors for each of the interfaces 312a, 312b, 312g, and 312h; however, other types and/or numbers of interfaces may be used. In such a configuration, half of the network connections are internal to switch packages 300.
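The claim that half of the network connections stay internal can be checked with a toy model; the 2x2-switches-per-package tiling and the torus wrap-around are illustrative assumptions, not the patent's exact topology:

```python
# Count mesh links in an N x N tiling of packages, each holding a 2x2 tile
# of switches; a link is internal when both endpoints share a package.
def link_counts(n_packages_per_side, tile=2):
    side = n_packages_per_side * tile  # switches per side
    internal = external = 0
    for x in range(side):
        for y in range(side):
            # +X and +Y links only, so each link is counted once (torus wrap)
            for nx, ny in (((x + 1) % side, y), (x, (y + 1) % side)):
                same_pkg = (x // tile, y // tile) == (nx // tile, ny // tile)
                if same_pkg:
                    internal += 1
                else:
                    external += 1
    return internal, external

i, e = link_counts(4)
print(i, e)  # equal counts: half of all mesh links stay inside packages
```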
  • Since the physical size of a switch enclosure is typically determined by the space required for the interfaces, such embodiments may reduce the overall size of switch package 300 by a factor of two.
  • In addition, a two-dimensional network architecture can be linearly scaled to almost any size while minimizing the length of interconnecting cables. This is very desirable for Double Data Rate and Quad Data Rate networks, where long copper cables are not an option and fiber optic connections are very expensive.
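The cable-length argument can be illustrated with a deliberately simplified model; the unit rack spacing and the centrally placed fat-tree core are assumptions for illustration only:

```python
# In a row of racks, a 2-D mesh needs only nearest-neighbor cables, so its
# longest cable stays constant as the cluster grows; a central-spine
# fat-tree needs a run from every rack to the core, which grows with size.
def longest_cable(racks, topology):
    if topology == "mesh":
        return 1            # adjacent racks only
    if topology == "fat-tree":
        return racks // 2   # farthest rack to a centrally placed core
    raise ValueError(topology)

for n in (8, 64, 512):
    print(n, longest_cable(n, "mesh"), longest_cable(n, "fat-tree"))
```

This is why the text singles out Double Data Rate and Quad Data Rate networks: short, constant-length copper runs remain feasible in the mesh, while the growing runs of a fat-tree force expensive fiber.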

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Small-Scale Networks (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)
  • Multi Processors (AREA)

Description

    TECHNICAL FIELD
  • This invention relates generally to networking computing clusters and, in particular, networking computing clusters using a switch package.
  • BACKGROUND
  • The computational needs of high performance computing continue to grow. Commodity processors have become powerful enough to apply to some problems, but often must be scaled to thousands or even tens of thousands of processors in order to solve the largest of problems. However, traditional methods of interconnecting these processors to form computing clusters are problematic for a variety of reasons. For example, some conventional interconnecting switches have limited scalability and fault tolerance characteristics that inadequately take advantage of low cost commodity computers. Examples of switch architectures can be found in US 7061907, EP 1737253, and "Building high performance linux clusters", published 01 June 2004 by Harbaugh.
  • SUMMARY OF THE EXAMPLE EMBODIMENTS
  • It is an object of the present invention to provide a method for networking a computer cluster, a modular computer network switch package, and a computer cluster network. This object can be achieved by the features as defined in the independent claims. Further enhancements are characterized in the dependent claims.
  • In certain embodiments, a method for networking a computer cluster includes communicatively coupling together each of a plurality of client nodes through a plurality of switches, each switch comprising a plurality of switch ports, to establish a network fabric. The method also includes positioning at least two of the plurality of switches inside a switch package. In addition, the method includes electrically interconnecting at least a subset of the plurality of switch ports of the at least two switches within the switch package. The switch package (200) is provided in an enclosure with a plurality of interfaces coupled to the switches. One set of interfaces is provided on a first side of the enclosure and a second set of interfaces is provided on an opposite side of the enclosure.
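The packaged-switch concept summarized above can be sketched as a small data model. This is an illustrative assumption: the class names, the three-port width of the internal link, and the 24-port figure come from the example embodiments, not from the claims themselves:

```python
# Sketch of the claimed arrangement: two or more multi-port switches in one
# enclosure, a subset of their ports interconnected internally, and the
# remaining ports exposed on two opposite sides of the package.
from dataclasses import dataclass, field

@dataclass
class Switch:
    name: str
    ports: int = 24  # e.g. a 24-port Infiniband switch
    used: int = 0

    def claim(self, n):
        assert self.used + n <= self.ports, "port budget exceeded"
        self.used += n

@dataclass
class SwitchPackage:
    switches: list = field(default_factory=list)
    internal_links: list = field(default_factory=list)  # switch-to-switch, no cables

    def interconnect(self, a, b, width=3):
        """Electrically couple a subset of ports of two enclosed switches."""
        a.claim(width)
        b.claim(width)
        self.internal_links.append((a.name, b.name, width))

pkg = SwitchPackage(switches=[Switch("204a"), Switch("204b")])
pkg.interconnect(pkg.switches[0], pkg.switches[1])  # internal traces 214c
print(pkg.internal_links)  # [('204a', '204b', 3)]
```

Because the internal link consumes ports on both switches, any remaining ports map naturally to the client interfaces on one side of the enclosure and the network interfaces on the other.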
  • Particular embodiments of the present invention may provide one or more technical advantages. Some embodiments include a network fabric having highly compact and modular switch packages that provide a more flexible, optimized, and cost-efficient solution for building high performance computing arrays. In addition, in some embodiments the switch packages may have a compact form factor and enhanced accessibility that is compatible with commodity computing equipment. Various embodiments may support network connections that have a higher bandwidth than the direct computer connections.
  • Certain embodiments of the present invention may provide some, all, or none of the above advantages. Certain embodiments may provide one or more other technical advantages, one or more of which may be readily apparent to those skilled in the art from the figures, descriptions, and claims included herein.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of the present invention and its advantages, reference is made, by way of example, to the following description, taken in conjunction with the accompanying drawings, in which:
    • FIGURE 1 is a block diagram illustrating an example embodiment of a portion of a computer cluster;
    • FIGURE 2A is a block diagram illustrating one embodiment of a portion of a modular network switch package that may form a portion of the computer cluster of FIGURE 1;
    • FIGURE 2B is a side view illustrating one embodiment of a front face of the modular network switch package of FIGURE 2A;
    • FIGURE 2C is a side view illustrating one embodiment of a back face of the modular network switch package of FIGURE 2A;
    • FIGURE 3A is a block diagram illustrating one embodiment of a portion of a modular network switch package that may form a portion of the computing cluster of FIGURE 1;
    • FIGURE 3B is a side view illustrating one embodiment of a front face of the modular network switch package of FIGURE 3A; and
    • FIGURE 3C is a side view illustrating one embodiment of a back face of the modular network switch package of FIGURE 3A.
    DESCRIPTION OF EXAMPLE EMBODIMENTS
  • In accordance with the teachings of the present invention, a system and method for networking computer clusters are provided. By utilizing a modular switch package, particular embodiments may provide a more flexible, optimized, and cost-efficient solution for building high performance computing arrays. Embodiments of the present invention and its advantages are best understood by referring to FIGURES 1 through 3C of the drawings, like numerals being used for like and corresponding parts of the various drawings. Particular examples specified throughout this document are intended for example purposes only, and are not intended to limit the scope of the present disclosure. Moreover, the illustrations in FIGURES 1 through 3C are not necessarily drawn to scale.
  • FIGURE 1 is a block diagram illustrating an example embodiment of a portion of a computing cluster 100. Computing cluster 100 generally includes a plurality of client nodes 102 interconnected by a network fabric 104. As will be shown, the network fabric 104 in some embodiments of the present invention may include a plurality of standardized, compact, and modular switch packages that can be used to construct large scale, fault tolerant, and high performance computing clusters using commodity computers coupled at each of the plurality of client nodes 102.
  • Client nodes 102 generally refer to any suitable device or devices operable to communicate with each other through network fabric 104, including one or more of the following: switches, processing elements, memory elements, and I/O elements. In the example embodiment, client nodes 102 include commodity computers. Network fabric 104 generally refers to any interconnecting system capable of transmitting audio, video, signals, data, messages, or any combination of the preceding. In this particular embodiment, network fabric 104 comprises a plurality of switches interconnected by copper cables.
  • Supercomputers and fat-tree network clusters are generally used to solve large-scale computing problems. Some computing clusters are scaled to thousands and even tens of thousands of processors in order to solve the largest of problems. Conventional network computing arrays typically include multiple network array switches, each switch individually packaged within a rack-mountable 1U enclosure having 24-port connectors physically positioned on one side of the package. In addition, conventional computing networks are typically formed using fat-tree architectures. However, such conventional computing clusters are problematic for a variety of reasons. For example, this type of network fabric typically does not scale well, has limited performance due in part to long cable lengths, typically has a short mean time between failure ("MTBF"), and is often cost prohibitive.
  • Accordingly, teachings of some of the embodiments of the present invention recognize that a network fabric including highly compact and modular switch packages may provide a more flexible, optimized, and cost-efficient solution for building high performance computing arrays using commodity computers. In various embodiments, the modular switch packages may support multi-dimensional, mesh network arrays with network connections inside and outside of the switch package, thereby reducing the number of external cables and space requirements for the network fabric. In addition, the network connections of various embodiments may support a higher bandwidth than the direct computer connections. As will be shown, the switch packages of some embodiments may have enhanced switch density and accessibility, thereby maximizing the space available to commodity computing equipment. In various embodiments, the switch packages are modular in that they may be configured to support any of a variety of network cluster architectures.
  • According to the teachings of the invention, in some embodiments certain of these advantages are achieved by enclosing a plurality of switches within the switch package, communicatively coupling together each of the switches within the switch package, and providing interfaces to the switches on opposite sides of the switch package. In addition, in some embodiments certain of these advantages are achieved by coupling one or more modular daughter cards to each switch package, the daughter cards configurable for particularized needs.
  • An example embodiment of a modular switch package operable to support, for example, single-rail, single-dimensional and/or two-dimensional network cluster architectures is illustrated in FIGURES 2A through 2C, while FIGURES 3A through 3C illustrate an example embodiment of a modular switch package operable to support, for example, two-dimensional or three-dimensional network architectures.
  • FIGURE 2A is a block diagram illustrating one embodiment of a portion of a modular network switch package 200 that may form a portion of the network fabric 104 of FIGURE 1. Switch package 200 generally provides a standardized, compact network switch that can be used to construct very large scale, fault tolerant, and high performance computing clusters, such as the computing cluster 100 of FIGURE 1. In this particular embodiment, switch package 200 is operable to support, for example, dual-rail, one-dimensional, and/or two-dimensional network cluster architectures. Switch package 200 generally includes the following components attached to a common motherboard 202: a plurality of switches 204 coupled to respective switch receptors 206, one or more daughter cards 208 coupled to respective daughter card receptors 210, and a plurality of interfaces 212. As explained further below, in various embodiments, switch package 200 and associated components, including multiple switch 204 nodes, may all fit within a standard 1U enclosure having interfaces on both sides of the enclosure. In such embodiments, the 1U enclosure may be operable to mount horizontally in a standard nineteen-inch equipment rack. In addition, such embodiments may greatly enhance the networking capability and routing density associated with space typically dedicated to a standard 1U enclosure. A more detailed description of example physical layouts for a 1U enclosure is explained further below with reference to FIGURES 2B, 2C, 3B and 3C.
  • Motherboard 202 generally refers to any suitable circuit board having connectors 214 and receptors 206 and 210 that together make up at least a portion of an electronic system. Connectors 214 generally refer to any interconnecting medium capable of transmitting audio, video, signals, data, messages, or any combination of the preceding. In this particular embodiment, connectors 214 are communicative paths or traces that electrically couple the switch receptors 206, the daughter card receptor 210, and the interfaces 212 as shown. Although illustrated as a single line for simplicity, in this particular embodiment, each connector 214 actually comprises three independent connectors. Connectors 214 may be formed, for example, using photolithographic techniques on a surface of motherboard 202. Switch receptors 206 and daughter card receptor 210 generally refer to any mounting surface or socket operable to receive and electrically couple to switches 204 and daughter cards 208 respectively.
  • Switches 204 generally refer to any device capable of routing between respective switch ports any audio, video, signals, data, messages, or any combination of the preceding. In this particular example embodiment, switches 204a and 204b are each 24-port Infiniband switches mounted on switch receptors 206a and 206b respectively; however, any appropriate switch or router may be used. Each switch 204a and 204b comprises an integrated circuit that allows communication between each of the respective switch ports. For example, switch 204a may route data from connectors 214d to connectors 214c. Although the switches 204 in this example each have twenty-four ports, any appropriate number of ports may be used without departing from the scope of the present disclosure. Connectors 214c enable communication between switch 204a and 204b, the communication internal to switch package 200. Thus, switch nodes 204a and 204b are able to communicate without the use of external interfaces 212 and associated cabling, which enhances bandwidth capabilities and simplifies network fabric 104 implementation. Connectors 214a, 214b, and 214d enable communication between each switch 204 and a plurality of interfaces 212.
  • Interfaces 212 generally enable switch package 200 to communicate externally. In this particular embodiment, interfaces 212 include twenty-four client interfaces 212a and 212b and four network interfaces 212c and 212d; however, any appropriate number of interfaces may be used. Each client interface 212a and 212b is a 4X Infiniband port that can be coupled to a commodity computer; however, other types of interfaces may be used. In addition, each 4X Infiniband port is associated with one port of a respective 24-port switch 204. However, as described further below, interfaces 212a and 212b may alternatively use, for example, 12X Infiniband connectors for higher density or any other appropriate connector. Each network interface 212c, 212d, 212e, and 212f is a 12X Infiniband port that can be coupled to other switch packages; however, other types of interfaces may be used. Each 12X Infiniband port is associated with three switch ports of a respective switch 204a or 204b. In this particular example configuration, a daughter card 208 mounts on motherboard 202 to provide two additional network interfaces 212e and 212f, each interface 212e and 212f a 12X Infiniband port; however, other types and/or numbers of interfaces may be used.
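The port arithmetic implied above can be sanity-checked with a short sketch. The figures (24 ports per switch, 4X and 12X Infiniband widths) come from the text; treating a 4X interface as one switch port and a 12X interface as three, and splitting each switch's 24 ports into client, internal, and network groups as below, is an inference:

```python
# Port budget for one 24-port switch in package 200 (FIGURE 2A).
# Inference: a 4X interface consumes 1 switch port, a 12X consumes 3.
PORTS_PER_SWITCH = 24

client_ports = 12      # half of the twenty-four 4X client interfaces (212a or 212b)
internal_ports = 3     # connectors 214c: one 12X-wide link between 204a and 204b
network_ports = 3 * 3  # three 12X network interfaces per switch (e.g. 212c + 212e)

used = client_ports + internal_ports + network_ports
assert used == PORTS_PER_SWITCH   # 12 + 3 + 9 = 24
```

Under this accounting, every port of each 24-port switch is consumed, which is consistent with the interface counts given for FIGURES 2B and 2C.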
  • Daughter card 208 generally refers to any secondary circuit board capable of coupling to daughter card receptor 210. In this particular embodiment, daughter card receptor 210 is operable to receive any of a variety of daughter cards 208, thus providing a modular switch package 200 that may be configured and optimized to any particular need or network architecture. As described below with reference to FIGURE 3A, the daughter card 208 of various embodiments may include one or more switches mounted to the daughter card 208. However, in this particular embodiment, daughter card 208 comprises connectors 216a and 216b that respectively enable communication between interfaces 212e and 212f and connectors 214a and 214b. Various other embodiments may not include a daughter card 208 and associated daughter card receptor 210. For example, in various other embodiments, connectors 214a and 214b may couple directly to interfaces 212e and 212f respectively without coupling to connectors 216a and 216b within a daughter card 208. In such embodiments, connectors 216a and 216b may be, for example, traces on motherboard 202.
  • As shown in FIGURE 2A, interfaces 212c, 212d, 212e and 212f are physically positioned on opposite sides of switch package 200 from interfaces 212a and 212b. Thus, in this particular embodiment, two different sides of switch package 200 are used for connections to maximize the density of interfaces 212. Example embodiments of the physical layout of interfaces 212 are illustrated in FIGURES 2B and 2C respectively.
  • FIGURE 2B is a side view illustrating one embodiment of the front of the modular network switch package 200 of FIGURE 2A. In this particular embodiment, switch package 200 fits within a standard 1U enclosure operable to mount horizontally in a standard nineteen-inch equipment rack. Each of the motherboard 202, daughter card 208, and switches 204 fit within the 1U enclosure of switch package 200. As shown in FIGURE 2B, the front of switch package 200 generally includes six 12X Infiniband interfaces 212c, 212d, 212e, and 212f, each operable to provide connections to other switch packages; however, other types and/or numbers of interfaces may be used. In various embodiments, multiple interconnected switch packages 200 may form at least a portion of the network array fabric 104 of FIGURE 1. As shown in FIGURE 2C, the back of switch package 200 generally includes twenty-four 4X Infiniband client interfaces 212a and 212b, each operable to provide connections to the HCA port of a computer (not explicitly shown); however, other types and/or numbers of interfaces may be used. In various embodiments, the computers may be mounted in the same equipment rack. In this particular embodiment, the back of switch package 200 also includes two power jacks 260.
  • Switch package 200 may support any of a variety of network architectures. For example, switch package 200 may support two-dimensional and/or dual-rail architectures by interconnecting switch package 200 with other similarly configured switch packages 200 using network interfaces 212c, 212d, 212e, and 212f. However, various other embodiments may use alternative switch package 200 configurations to support other network architectures. For example, switch package 200 may interconnect with other similarly configured switch packages 200 to form a one-dimensional network architecture. The one-dimensional network architecture may have individual switch nodes 204a and 204b extending theoretically in positive and negative directions along a single axis. To illustrate, in some embodiments, switches 204a and 204b may communicate with other respective switch packages 200 through interfaces 212e and 212f respectively. The remaining interfaces 212a, 212b, 212c, and 212d may include a total of thirty-six 4X Infiniband connections, enabling each switch 204a and 204b in the one-dimensional network configuration to communicate with up to eighteen client nodes; however, connections other than 4X Infiniband may be used.
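The one-dimensional counts above follow from the same port accounting. The sketch below checks them; the assumption that a 12X link consumes three of a switch's 24 ports is an inference from the text, not a statement in it:

```python
# One-dimensional configuration of package 200 (inferred from the text):
# each switch keeps its internal 12X link (connectors 214c) plus one 12X
# chain link (212e or 212f); every remaining port can serve a client.
PORTS = 24
internal = 3                     # 12X-wide link between switches 204a and 204b
chain = 3                        # one 12X link to a neighboring package

clients_per_switch = PORTS - internal - chain
assert clients_per_switch == 18  # "up to eighteen client nodes"

# The remaining external interfaces, counted in 4X units:
remaining_4x = 24 + 4 * 3        # twenty-four 4X (212a/b) + four 12X (212c/d)
assert remaining_4x == 36        # "a total of thirty-six 4X Infiniband connections"
```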
  • Switch package 200 supports multi-dimensional arrays with network connections both inside and outside the switch package 200 enclosure, in one embodiment. The modular daughter card receptor 210 and associated daughter card 208 enables alternative configurations with greater complexity than what is illustrated in FIGURE 2A. For example, in various other embodiments, switch package 200 may be alternatively configured with a daughter card 208 operable to additionally support three-dimensional network architecture, as illustrated in FIGURES 3A and 3B.
  • FIGURE 3A is a block diagram illustrating an alternative embodiment of a portion of a modular network switch package 300 that may form a portion of the network fabric 104 of FIGURE 1. This particular embodiment differs from the example embodiment illustrated in FIGURE 2A in that switch package 300 more conveniently supports various network configurations, including, for example, dual-rail, two-dimensional and/or three-dimensional network cluster architectures. Switch package 300 generally provides a standardized, compact network switch that can be used to construct very large scale, fault tolerant, and high performance computing clusters, such as the computing cluster 100 of FIGURE 1. Switch package 300 generally includes the following components coupled to a common motherboard 302: a plurality of switches 304 coupled to respective switch receptors 306, one or more daughter cards 308 coupled to respective daughter card receptors 310, and a plurality of interfaces 312. As explained further below, in various embodiments, switch package 300 and associated components, including multiple switch 304 nodes, may all fit within a standard 1U enclosure having interfaces on both sides of the enclosure. In such embodiments, the 1U enclosure may be operable to mount horizontally in a standard nineteen-inch equipment rack. In addition, such embodiments may greatly enhance the networking capability and routing density associated with space typically dedicated to a standard 1U enclosure. A more detailed description of example physical layouts for a 1U enclosure is explained further below with reference to FIGURES 3B and 3C.
  • One difference between the example embodiments of FIGURE 3A and FIGURE 2A is the configuration of daughter card 308 and respective interfaces 312e, 312f, 312g, and 312h. The other features of switch package 300 are substantially similar to respective features of switch package 200. That is, motherboard 302, daughter card receptor 310, switches 304, switch receptors 306, connectors 314a, 314b, 314c, and 314d and interfaces 312a, 312b, 312c, and 312d are substantially similar in structure and function to motherboard 202, daughter card receptor 210, switches 204, switch receptors 206, connectors 214a, 214b, 214c, and 214d and interfaces 212a, 212b, 212c, and 212d respectively of FIGURE 2A. Various other embodiments may not include a modular daughter card 308 and associated daughter card receptor 310. For example, in various other embodiments, connectors 314a and 314b may communicatively couple to switch receptors 352a and 352b respectively and/or switches 350a and 350b respectively without communicatively coupling to connectors 316 within a daughter card 308. In such embodiments, connectors 316 may simply be, for example, traces on motherboard 302.
  • In the example embodiment of FIGURE 3A, the support of a three-dimensional network architecture may be effected by replicating the base design of FIGURE 2A onto a printed circuit board of daughter card 308. That is, daughter card 308 includes two 24-port Infiniband switches 350a and 350b coupled by switch receptors 352a and 352b to connectors 316; however, other types and/or numbers of switches may be used. In addition, daughter card 308 couples switches 350a and 350b to interfaces 312g and 312h respectively, thus doubling the network connectivity of switch package 300 over that of switch package 200. In operation, each switch 304a, 304b, 350a, and 350b may communicate with each other switch 304a, 304b, 350a, and 350b within switch package 300. In addition, each switch 304a, 304b, 350a, and 350b may communicate with up to six client nodes through respective interfaces 312c, 312d, 312e, and 312f. Example embodiments of the physical layout of interfaces 312 are illustrated in FIGURES 3B and 3C respectively.
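The doubled connectivity of package 300 can be checked the same way. The full internal mesh among the four switches, and the assumption that each switch-to-switch link is 12X wide (three of a switch's 24 ports), are inferences from the text:

```python
# Full internal interconnection of the four switches in package 300
# (FIGURE 3A). Inference: each internal link is 12X wide, i.e. consumes
# 3 of a switch's 24 ports.
from itertools import combinations

switches = ["304a", "304b", "350a", "350b"]
internal_links = list(combinations(switches, 2))
assert len(internal_links) == 6        # every switch talks to every other

ports = 24
internal = 3 * 3                       # three internal links per switch
clients = 6                            # "up to six client nodes" per switch
network = ports - internal - clients
assert network == 9                    # three 12X network interfaces remain
```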
  • FIGURE 3B is a side view illustrating one embodiment of the front of the modular network switch package 300 of FIGURE 3A. In this particular embodiment, switch package 300 comprises a standard 1U enclosure operable to mount horizontally in a standard nineteen-inch equipment rack. Each of the motherboard 302, daughter card 308, and switches 304 and 350 may fit within the 1U enclosure of switch package 300. As shown in FIGURE 3B, the front of switch package 300 generally includes sixteen 12X Infiniband interfaces 312a, 312b, 312g and 312h, each operable to provide connections to other network switch packages; however, other types and/or numbers of interfaces may be used. As shown in FIGURE 3C, the back of switch package 300 generally includes twenty-four 4X Infiniband interfaces 312c, 312d, 312e and 312f, each operable to provide connections to the HCA port on a computer (not explicitly shown); however, other types and/or numbers of interfaces may be used. In various embodiments, the computers may be mounted in the same equipment rack.
  • Switch packages 300 may be configured and interconnected in any of a variety of network cluster architectures. For example, pairs of switch packages 300 may be used to construct network nodes for a three-dimensional, dual-rail network. In addition, switch package 300 may interconnect with other similarly configured switch packages 300 to form a three-dimensional, mesh network architecture. The three-dimensional, mesh network architecture may have individual switch nodes 350a, 350b, 304a, and 304b extending theoretically in positive and negative directions along three orthogonal axes, X, Y, and Z. To illustrate, in some embodiments, switch 304b may communicate with four other switches in a theoretical X-Y plane using interfaces 312a, the four other switches residing in one or more other similarly configured switch packages 300. Switch 304a may also communicate with switch 304b and 350a in a theoretical positive and negative Z direction respectively. Up to six of the remaining switch ports of switch 304b may be used to connect to six clients 102 through interfaces 312d.
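The three-dimensional arrangement described above can be sketched as follows. The split of a node's six mesh neighbors into four external (X-Y) and two internal (±Z) links is taken from the text; the 12X-link-equals-three-ports accounting is an inference:

```python
# Neighbors of a 3D-mesh node at (x, y, z): four X-Y neighbors reached
# over external cables, and the +/-Z neighbors inside the enclosure
# (the four switches of a package stacked along Z), per the text.
def mesh_neighbors(x, y, z):
    external = [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z), (x, y - 1, z)]
    internal = [(x, y, z + 1), (x, y, z - 1)]
    return external, internal

ext, intern = mesh_neighbors(0, 0, 0)
assert len(ext) == 4 and len(intern) == 2

# Port budget per 24-port switch (12X link = 3 ports is an inference):
assert 4 * 3 + 2 * 3 + 6 == 24   # four X-Y links + two Z links + six clients
```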
  • In various other embodiments, switch package 300 may interconnect with other similarly configured switch packages 300 to form a two-dimensional network architecture. The two-dimensional network architecture may have individual switch nodes 350a, 350b, 304a, and 304b extending theoretically in positive and negative directions along two orthogonal axes, X and Y. To illustrate, in some embodiments, switch 304b may communicate with four switches in a theoretical X-Y plane, two of the four switches 350b and 304a internal to switch package 300, and the other two switches residing in one or more other similarly configured switch packages 300. In such embodiments, the communication between switch packages 300 may be effected, for example, using two 12X Infiniband connectors for each of the interfaces 312c, 312d, 312e, and 312f; however, other types and/or numbers of interfaces may be used. In addition, the communication between each switch package 300 and up to forty-eight respectively coupled client nodes 102 may be effected using, for example, up to sixteen 12X Infiniband connectors for each of the interfaces 312a, 312b, 312g, and 312h; however, other types and/or numbers of interfaces may be used. In such a configuration, half of the network connections are internal to switch packages 300. Since the physical size of a switch enclosure is typically determined by the space required for the interfaces, such embodiments reduce the overall size of switch package 300 by a factor of two. In addition, in various embodiments, such two-dimensional network architecture can be linearly scaled to almost any size while minimizing the length of interconnecting cables. This is very desirable for Double Data Rate and Quad Data Rate networks, where long copper cables are not an option and fiber optic connections are very expensive.
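The two-dimensional counts stated above check out under the same inferred port accounting (a 12X link consuming three of a switch's 24 ports, a 4X client connection consuming one):

```python
# Two-dimensional configuration of package 300: of each switch's four
# mesh links, two stay inside the enclosure, so half the network
# connections need no external cable (counts from the text, widths inferred).
switches_per_package = 4
mesh_links_per_switch = 4
internal_links = 2                     # e.g. 304b <-> 350b and 304b <-> 304a

assert internal_links / mesh_links_per_switch == 0.5   # "half ... internal"

# Client capacity: ports left after mesh links, per switch and per package.
clients_per_switch = 24 - mesh_links_per_switch * 3
assert clients_per_switch == 12
assert clients_per_switch * switches_per_package == 48  # "forty-eight ... client nodes"

# External mesh cabling per package, counted in 12X connectors:
external_12x = (mesh_links_per_switch - internal_links) * switches_per_package
assert external_12x == 8
```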
  • Although the present invention has been described with several embodiments, diverse changes, substitutions, variations, alterations, and modifications may be suggested to one skilled in the art, and it is intended that the invention encompass all such changes, substitutions, variations, alterations, and modifications as fall within the scope of the appended claims.

Claims (14)

  1. A method for networking a computer cluster, comprising:
    communicatively coupling together each of a plurality of client computer nodes (102) through a plurality of switches (204), each switch (204) comprising a plurality of switch ports to establish a network fabric (104);
    positioning at least two of the plurality of switches (204) inside a switch package (200); and
    electrically interconnecting at least a subset of the plurality of switch ports of the at least two of the switches (204) within the switch package (200);
    providing the switch package (200) in an enclosure with a plurality of interfaces (212) coupled to the switches (204), one set of interfaces (212) on a first side of the enclosure and a second set of interfaces (212) on an opposite side of the enclosure; and
    providing the switch package (200) with at least one modular card receptor (210); and
    coupling a modular card (208) to each of the at least one modular card receptor (210), the modular card being coupled to an interface (212).
  2. The method of Claim 1, wherein the first set of interfaces (212) couple to client computer nodes (102) and the second set of interfaces (212) couple to one or more other switch packages (200) in the network fabric (104), the first set of interfaces (212) being different than the second set of interfaces (212).
  3. The method of Claim 1 or claim 2, and further comprising interconnecting the switch package (200) to a plurality of other switch packages (200) using the interfaces (212) to form a multi-dimensional network architecture.
  4. The method of Claim 3 or the switch package of claim 11, wherein the multi-dimensional network architecture is selected from the group consisting of:
    two-dimensional;
    three-dimensional;
    two-dimensional, dual-rail; and
    three-dimensional, dual-rail.
  5. The method of Claim 1, and further comprising:
    providing a plurality of the switch packages (200);
    routing each of the communication paths through at least one of the plurality of switch packages (200).
  6. The method of Claim 5, and further comprising:
    mounting at least a subset of the plurality of switch packages (200) within an equipment rack; and
    mounting at least one of the plurality of client computer nodes (102) within the equipment rack.
  7. The method of Claim 6, and further comprising:
    communicating between the at least one of the plurality of client computer nodes (102) and at least one switch package (200) of the at least a subset of the plurality of switch packages (200) at a first bandwidth; and
    communicating between the at least one switch package (200) of the at least a subset of the plurality of switch packages (200) and at least one other switch package (200) of the at least a subset of the plurality of switch packages (200) at a second bandwidth greater than the first bandwidth.
  8. The method of Claim 7 or the switch package of claim 14, wherein communicating at a second bandwidth comprises communicating using a communication link from the group consisting of:
    Infiniband;
    Infiniband Double Data Rate;
    Infiniband Quad Data Rate; and
    10GigE.
  9. A switch package for networking a computer cluster, comprising:
    a plurality of switches (204) operable to communicatively couple together each of a plurality of client computer nodes (102), each switch (204) comprising a plurality of switch ports to establish a network fabric (104), at least a subset of the plurality of switch ports of the at least two of the switches (204) are electrically interconnected;
    an enclosure with a plurality of interfaces (212) coupled to a subset of the plurality of switch ports, one set of interfaces (212) disposed on a first side of the enclosure and a second set of interfaces (212) disposed on a second side of the enclosure opposite the first side; and
    at least one modular card receptor (210); and
    a modular card (208) coupled to each of the at least one modular card receptor (210), the modular card being coupled to an interface (212).
  10. The switch package of Claim 9, wherein the first set of interfaces (212) couple to client computer nodes (102) and the second set of interfaces (212) couple to one or more other switch packages (200) in the network fabric (104).
  11. The switch package of Claim 9 or claim 10, wherein the second set of interfaces (212) couple to a plurality of switch packages (200) to form a multi-dimensional network architecture.
  12. The switch package of Claim 9, wherein each communication path in the network fabric (104) is routed through the switch package (200).
  13. The switch package of any of Claims 4 or 9 to 12, and further comprising:
    an equipment rack wherein the switch package (200) is mounted, at least one of the plurality of client computer nodes (102) being mounted in the equipment rack.
  14. The switch package of Claim 9, wherein:
    at least one of the plurality of client computer nodes (102) communicates with the switch package (200) at a first bandwidth; and
    the switch package (200) communicates with at least one other switch package (200) at a second bandwidth greater than the first bandwidth.
EP07865860.6A 2007-01-12 2007-12-19 System and method for networking computing clusters Active EP2100415B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/622,921 US8144697B2 (en) 2007-01-12 2007-01-12 System and method for networking computing clusters
PCT/US2007/088091 WO2008088651A2 (en) 2007-01-12 2007-12-19 System and method for networking computing clusters

Publications (2)

Publication Number Publication Date
EP2100415A2 EP2100415A2 (en) 2009-09-16
EP2100415B1 true EP2100415B1 (en) 2013-09-18

Family

ID=39512566

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07865860.6A Active EP2100415B1 (en) 2007-01-12 2007-12-19 System and method for networking computing clusters

Country Status (4)

Country Link
US (1) US8144697B2 (en)
EP (1) EP2100415B1 (en)
JP (1) JP5384369B2 (en)
WO (1) WO2008088651A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101945050B (en) * 2010-09-25 2014-03-26 中国科学院计算技术研究所 Dynamic fault tolerance method and system based on fat tree structure
US9762505B2 (en) * 2014-01-07 2017-09-12 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Collaborative route reservation and ranking in high performance computing fabrics
US9391845B2 (en) * 2014-09-24 2016-07-12 Intel Corporation System, method and apparatus for improving the performance of collective operations in high performance computing

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5485455A (en) * 1994-01-28 1996-01-16 Cabletron Systems, Inc. Network having secure fast packet switching and guaranteed quality of service
JPH07230434A (en) * 1994-02-18 1995-08-29 Gijutsu Kenkyu Kumiai Shinjiyohou Shiyori Kaihatsu Kiko Mutual coupling network device
JP3709322B2 (en) * 2000-03-10 2005-10-26 株式会社日立製作所 Multidimensional crossbar network and parallel computer system
US6591285B1 (en) * 2000-06-16 2003-07-08 Shuo-Yen Robert Li Running-sum adder networks determined by recursive construction of multi-stage networks
JP4397109B2 (en) * 2000-08-14 2010-01-13 富士通株式会社 Information processing apparatus and crossbar board unit / back panel assembly manufacturing method
US7061907B1 (en) * 2000-09-26 2006-06-13 Dell Products L.P. System and method for field upgradeable switches built from routing components
US20020167902A1 (en) * 2001-04-27 2002-11-14 Foster Michael S. Method and system for performing security via virtual addressing in a communications network
US7139267B2 (en) * 2002-03-05 2006-11-21 Industrial Technology Research Institute System and method of stacking network switches
US7406038B1 (en) * 2002-04-05 2008-07-29 Ciphermax, Incorporated System and method for expansion of computer network switching system without disruption thereof
JP2004120042A (en) * 2002-09-24 2004-04-15 Toshiba Corp Data transmission system for duplicated system
IL152676A0 (en) * 2002-11-06 2003-06-24 Teracross Ltd Method and apparatus for high performance single block scheduling in distributed systems
US7527155B2 (en) * 2004-02-11 2009-05-05 International Business Machines Corporation Apparatus and system for vertically storing computing devices
JP2006146391A (en) * 2004-11-17 2006-06-08 Hitachi Ltd Multiprocessor system
DE602005005974T2 (en) 2005-06-20 2009-06-18 Alcatel Lucent Fault-tolerant one-level switching matrix for a telecommunications system
US7720377B2 (en) * 2006-01-23 2010-05-18 Hewlett-Packard Development Company, L.P. Compute clusters employing photonic interconnections for transmitting optical signals between compute cluster nodes
US20070253437A1 (en) * 2006-04-28 2007-11-01 Ramesh Radhakrishnan System and method for intelligent information handling system cluster switches

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BHANOTA ET AL: "The BlueGene/L supercomputer", NUCLEAR PHYSICS B. PROCEEDINGS SUPPLEMENT, NORTH-HOLLAND, AMSTERDAM, NL, vol. 119, 1 May 2003 (2003-05-01), pages 114 - 121, XP005025475, ISSN: 0920-5632 *

Also Published As

Publication number Publication date
US8144697B2 (en) 2012-03-27
US20080170581A1 (en) 2008-07-17
JP2010515997A (en) 2010-05-13
EP2100415A2 (en) 2009-09-16
JP5384369B2 (en) 2014-01-08
WO2008088651A2 (en) 2008-07-24
WO2008088651A3 (en) 2008-11-27

Similar Documents

Publication Publication Date Title
US7452236B2 (en) Cabling for rack-mount devices
US9461768B2 (en) Terabit top-of-rack switch
US7766692B2 (en) Cable interconnect systems with cable connectors implementing storage devices
US6205532B1 (en) Apparatus and methods for connecting modules using remote switching
EP3038440B1 (en) High density cabled midplanes and backplanes
US8103137B2 (en) Optical network for cluster computing
US20150293867A1 (en) Computer system with groups of processor boards
US8948166B2 (en) System of implementing switch devices in a server system
US20080112133A1 (en) Switch chassis
US8060682B1 (en) Method and system for multi-level switch configuration
US20100254703A1 (en) Optical Network for Cluster Computing
US6814582B2 (en) Rear interconnect blade for rack mounted systems
WO2002028158A1 (en) System and method for cartridge-based, geometry-variant scalable electronic systems
US20070230148A1 (en) System and method for interconnecting node boards and switch boards in a computer system chassis
US20080123552A1 (en) Method and system for switchless backplane controller using existing standards-based backplanes
US9160686B2 (en) Method and apparatus for increasing overall aggregate capacity of a network
US20080101395A1 (en) System and Method for Networking Computer Clusters
EP2100415B1 (en) System and method for networking computing clusters
US20230010285A1 (en) Auxiliary cable organization structure for network rack system
US9750135B2 (en) Dual faced ATCA backplane
US20160154196A1 (en) Modular optical backplane and enclosure
US20060123021A1 (en) Hierarchical packaging for telecommunications and computing platforms
WO2023093105A1 (en) Optical backplane interconnection apparatus and communication device
KR100440590B1 (en) A backplane apparatus for connection of the high density cables
WO2002027437A2 (en) Hexagonal structures for scalable electronic systems

Legal Events

Code  Event
PUAI  Public reference made under Article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
17P   Request for examination filed (effective 2009-05-05)
AK    Designated contracting states; kind code of ref document: A2; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR
DAX   Request for extension of the European patent (deleted)
17Q   First examination report despatched (effective 2010-10-27)
REG   Reference to a national code: DE, legal event code R079, document 602007032976; previous main class H04L0012560000, new IPC H04L0012931000
RIC1  Information provided on IPC code assigned before grant: H04L 12/931 20130101AFI20130411BHEP
GRAP  Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
INTG  Intention to grant announced (effective 2013-05-17)
GRAS  Grant fee paid (original code: EPIDOSNIGR3)
GRAA  (Expected) grant (original code: 0009210)
AK    Designated contracting states; kind code of ref document: B1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR
REG   Reference to a national code: GB, legal event code FG4D
REG   Reference to a national code: CH, legal event code EP
REG   Reference to a national code: IE, legal event code FG4D
REG   Reference to a national code: AT, legal event code REF, document 633277 (kind T), effective 2013-10-15
REG   Reference to a national code: DE, legal event code R096, document 602007032976, effective 2013-11-14
PG25  Lapsed in contracting states (failure to submit a translation of the description or to pay the fee within the prescribed time limit): CY (effective 2013-08-07); LT (2013-09-18); SE (2013-09-18)
REG   Reference to a national code: NL, legal event code VDEP, effective 2013-09-18
REG   Reference to a national code: AT, legal event code MK05, document 633277 (kind T), effective 2013-09-18
REG   Reference to a national code: LT, legal event code MG4D
PG25  Lapsed in contracting states (translation/fee not filed in time): SI (2013-09-18); GR (2013-12-19); FI (2013-09-18); LV (2013-09-18)
PG25  Lapsed in contracting states (translation/fee not filed in time): CY (2013-09-18); BE (2013-09-18)
PG25  Lapsed in contracting states (translation/fee not filed in time): NL (2013-09-18); CZ (2013-09-18); IS (2014-01-18); EE (2013-09-18); SK (2013-09-18); RO (2013-09-18)
PG25  Lapsed in contracting states (translation/fee not filed in time): PL (2013-09-18); ES (2013-09-18); AT (2013-09-18)
REG   Reference to a national code: DE, legal event code R097, document 602007032976
PG25  Lapsed in contracting states (translation/fee not filed in time): PT (2014-01-20)
PLBE  No opposition filed within time limit (original code: 0009261)
STAA  Status of the granted EP patent: no opposition filed within time limit
REG   Reference to a national code: CH, legal event code PL
26N   No opposition filed (effective 2014-06-19)
PG25  Lapsed in contracting states (translation/fee not filed in time): MC (2013-09-18); LU (2013-12-19); IT (2013-09-18)
REG   Reference to a national code: IE, legal event code MM4A
PG25  Lapsed in contracting states (translation/fee not filed in time): DK (2013-09-18)
REG   Reference to a national code: DE, legal event code R097, document 602007032976, effective 2014-06-19
PG25  Lapsed in contracting states (non-payment of due fees): IE (2013-12-19); LI (2013-12-31); CH (2013-12-31)
PG25  Lapsed in contracting states (translation/fee not filed in time): TR (2013-09-18)
PG25  Lapsed in contracting states (translation/fee not filed in time): BG (2013-09-18); HU (invalid ab initio, effective 2007-12-19)
PG25  Lapsed in contracting states (translation/fee not filed in time): MT (2013-09-18)
REG   Reference to a national code: FR, legal event code PLFP, year of fee payment 9
REG   Reference to a national code: FR, legal event code PLFP, year of fee payment 10
REG   Reference to a national code: DE, legal event code R082, document 602007032976; representative: KLUNKER IP PATENTANWAELTE PARTG MBB, DE
REG   Reference to a national code: FR, legal event code PLFP, year of fee payment 11
REG   Reference to a national code: DE, legal event code R079, document 602007032976; previous main class H04L0012931000, new IPC H04L0049100000
P01   Opt-out of the competence of the Unified Patent Court (UPC) registered (effective 2023-05-30)
PGFP  Annual fee paid to national office: GB, payment date 2023-11-24, year of fee payment 17
PGFP  Annual fee paid to national office: FR, payment date 2023-11-22, year of fee payment 17; DE, payment date 2023-11-21, year of fee payment 17