WO2013160684A1 - High density computer enclosure for efficient hosting of accelerator processors - Google Patents


Info

Publication number
WO2013160684A1
Authority
WO
WIPO (PCT)
Application number
PCT/GB2013/051056
Other languages
French (fr)
Inventor
Gianni DE FABRITIIS
Matthew Harvey
Original Assignee
Acellera Ltd
Application filed by Acellera Ltd
Publication of WO2013160684A1 (patent/WO2013160684A1/en)


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 — Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 — Information transfer, e.g. on bus
    • G06F13/40 — Bus structure
    • G06F13/4063 — Device-to-bus coupling
    • G06F13/409 — Mechanical coupling

Definitions

  • IO: input/output
  • PSU: power supply unit
  • PCI: peripheral component interconnect
  • BMC: baseboard management controller (management processor)
  • AP: accelerator processor
  • A lip running the width of the midplane minimizes air from the front fans bleeding off over the CPU rather than passing through the APs, and provides structural strength against bending under the weight of the PSU.
  • Rack mounting: Servers in a 19" equipment rack are typically mounted horizontally, attached to rails or sliders that run between the front and rear rack posts.
  • The computer enclosure (1) disclosed herein can instead be mounted vertically, for instance upon a standard 19" shelf, as shown in Figure 6.
  • The shelf (56) is attached to the rack posts (58) and the computer enclosure (1) is set upon it.
  • The standard-width chassis (2) is sufficiently narrow to allow 3 units to be placed side by side on a single shelf, between the 19" rack posts.
  • The shelf (56) may have a lip along its edges to minimize movement of the chasses.
  • The combined height of the chassis and shelf occupies 8 rack units, allowing 5 shelves of chasses to be accommodated in a standard 42U 19" rack (Figure 7).
  • An alternative variant of the chassis is sufficiently narrow to permit 4 chasses to sit on a single shelf (Figure 8).
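The shelf arithmetic above can be made concrete. A short sketch: the shelf height, rack height and chasses-per-shelf figures come from the text, while the per-rack AP total assumes every chassis carries the full complement of 8 single-slot APs described earlier.

```python
# Rack-density arithmetic for the vertical shelf-mounting scheme described
# above. 42U rack and 8U shelf height are from the text; APs per chassis
# assumes the full 8-slot configuration.

RACK_UNITS = 42          # standard 42U 19" rack
SHELF_UNITS = 8          # combined chassis + shelf height, in rack units

def chasses_per_rack(per_shelf: int) -> int:
    """Whole shelves that fit in the rack, times chasses per shelf."""
    shelves = RACK_UNITS // SHELF_UNITS   # 42 // 8 = 5 shelves
    return shelves * per_shelf

standard = chasses_per_rack(3)   # standard-width chassis: 3 per shelf
narrow = chasses_per_rack(4)     # narrow variant: 4 per shelf

print(standard)      # 15 chasses per rack
print(narrow)        # 20 chasses per rack (matches Figure 7)
print(narrow * 8)    # up to 160 single-slot APs per rack
```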
  • Narrow variant: The narrow variant of the chassis allows 4 units to sit on a single shelf in a 19" rack. To accomplish this, the maximum width of the chassis must be 120 mm. This necessitates several internal changes.
  • The mainboard standoffs (28) are further reduced in profile.
  • The AP L-shaped IO brackets (Fig 11, 60) must be removed and the custom mounting holes (Fig 2, 52) used. The front, back, top and bottom panels are reduced in width. The midplane is reduced in width, losing the two top ventilation holes (Fig 5, 62).
  • The PSU (42) therefore sits flush with the base panel (26) and the removable side panel (24).
  • The PCI Express electromechanical specification permits the power inputs to an AP to be positioned on the top or side of the unit. If these sockets are positioned on the top, flush with the edge, there may be insufficient clearance between them, the side panel of the chassis and the accelerator processors (Fig 4, 30) to accommodate a standard PCI Express power plug (Fig 9, 66).
  • Instead, a low-profile version of the PCI Express power plug can be used, as shown in Fig 9 (68).
  • This plug (68) has the same pin and electrical configuration as the standard plug, but a reduced-height body. Modified plugs can be manufactured for such an enclosure.
  • Assembly: The 7 major panels of the chassis are all fabricated separately and attached together with screw fixings.
  • Figure 5 shows an exploded view of the chassis (2).
  • This construction technique allows the bare chasses to be shipped unassembled, so reducing shipping costs where charges are levied on the volume of freight rather than its mass.
  • This modular construction also facilitates in-field upgrade of individual panels, if required (for example to accommodate a new PSU mounting). This may be useful if a chassis is expected to house several generations of APs over its operational life.
  • Reset circuit: The reset circuit (Figure 10) is designed to allow the server to which it is attached to be reset by means of a remotely generated electrical signal. It is also able to produce such a signal for the purposes of driving the reset of a second server similarly equipped with this circuit.
  • The intended mode of operation is to connect pairs of servers together via their remote reset circuits, allowing each to reset the other. In this manner, a failed and unresponsive system can be remotely reset by sending a remote reset signal from the afflicted system's peer. Only a simultaneous failure of both machines in a pair will require manual intervention, so reducing maintenance overhead.
  • The reset circuit contains four signaling lines: two are inputs that activate the reset, and two are outputs for activating the reset circuit of the peer computer. These four electrical signal lines may be presented for external connection by means of an appropriate socket connector, for example a 4-pole 3.5 mm jack socket, for which a cutout is provided on the rear panel (Fig 2, 22).
  • A pair of servers equipped with the reset circuit are connected together using a 4-pole plug complementary to the socket and wired as a cross-over: 1<->3, 2<->4. This configuration connects the reset and signal-generation lines of each system, allowing each system in the pair to reset and be reset by the other.
  • The reset circuit provides optical isolation, preventing a ground loop from forming when the two servers are connected.
  • The reset signal is initiated by applying a positive voltage to pin 1.
  • This signal may be generated by operation of a general-purpose IO line on the mainboard or, if none are present, by re-purposing an existing control line, for example a control line on a parallel printer port, the DTR/RTS lines of an RS-232 serial port, or the system beeper.
  • Contemporary mainboards, despite typically not having externally presented serial or parallel ports, retain these functions and provide the appropriate electrical signaling on mainboard header connectors.
  • The RS-232 DTR line is the preferred source for the reset signal.
  • The reset signal is generated under software control from the activating system.
  • The circuit can be assembled in physically compact packaging, on a small PCB that may plug directly into the RS-232 or front-panel IO header connectors.
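As a sketch of the software side described above, the activating system need only pulse its DTR line under program control. The helper below is duck-typed so it works with any object exposing a `dtr` attribute (with the pyserial library, `serial.Serial` provides exactly such an attribute); the pulse width and the `DummyPort` recorder used for demonstration are illustrative assumptions, not part of the disclosure.

```python
import time

def pulse_reset(port, pulse_s: float = 0.5) -> None:
    """Assert DTR for pulse_s seconds, then release it.

    With pyserial this could be called as pulse_reset(serial.Serial("/dev/ttyS0"));
    the positive voltage on the asserted DTR line drives pin 1 of the peer's
    reset circuit through the cross-over cable.
    """
    port.dtr = True       # apply positive voltage -> reset asserted
    time.sleep(pulse_s)   # hold long enough to trigger the mainboard reset
    port.dtr = False      # release the line

class DummyPort:
    """Stand-in for a serial port, recording DTR transitions for testing."""
    def __init__(self):
        self.history = []
    @property
    def dtr(self):
        return self.history[-1] if self.history else False
    @dtr.setter
    def dtr(self, value):
        self.history.append(value)

port = DummyPort()
pulse_reset(port, pulse_s=0.01)
print(port.history)  # [True, False]
```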

Abstract

Disclosed is a computer containing a chassis having a front panel, a rear panel and side panels. A mainboard is positioned proximate to the rear panel, and one or more accelerator processors (AP) are coupled to the mainboard. Also provided is a power supply unit (PSU) positioned proximate to the front panel. The computer can provide a server having a high density of APs. Further disclosed is a server rack having such computers.

Description

HIGH-DENSITY COMPUTER ENCLOSURE FOR EFFICIENT HOSTING OF ACCELERATOR PROCESSORS
CROSS-REFERENCE TO RELATED APPLICATION
This application claims the benefit of and priority to United States Provisional Patent Application No. 61/638,002, filed April 25, 2012, under the title HIGH-DENSITY COMPUTER ENCLOSURE FOR EFFICIENT HOSTING OF ACCELERATOR PROCESSORS. The content of the above patent application is hereby expressly incorporated by reference into the detailed description hereof.
FIELD
[0001] This specification relates to the manner in which computers with accelerator processors on input/output (IO) expansion cards are configured and provided in a given area to conserve space and deal with cooling issues associated with high-density packaging. Aspects of the specification relate to minimizing the packaging of such computer systems without compromising the ability to assemble the system from standardized commodity components.
BACKGROUND
[0002] Many computers today are assembled from commodity components built to one or more industry standards. The use of standardized components using specific interfaces allows for ease of supply, efficiency of production and competitiveness in pricing.
[0003] The per-dollar performance advantage of many standardized components over similar custom pieces arises from their larger sales volumes, thereby allowing more resources to be committed to their development. Competition on performance grounds in the commodity component market also leads to rapid technology development and product refreshes.
[0004] In the field of scientific and technical computing, many computational activities remain too costly in terms of the required computing resources to be completed within an acceptable timeframe by a software program running on a single central processing unit (CPU). Frequent use is therefore made of parallel computing programming techniques that reduce the time taken to complete a computational task by dividing the work across multiple CPUs simultaneously.
[0005] To support these activities it has become common to deploy "clusters" or "farms" of servers as a platform for parallel computing. Since the practical size of a cluster system may be limited not only by the capital cost but also by the electrical power and physical space available at the colocation site, there is a need for high-density, power-efficient hardware that is nevertheless cost-optimized by virtue of being assembled from commodity components.
[0006] A contemporary computer cluster may be assembled from servers provided in a high density enclosure that houses several independent CPU systems and provides a cost reduction through common power supply unit (PSU) and cooling infrastructure. Such a configuration optimizes for density of CPUs and memory. Servers of this type are typically installed within an industry standard 19" equipment rack.
[0007] A contemporary cluster installation comprising one or more 19" racks configured with high-density servers may approach an electrical load of up to 30 kW per rack. Therefore, careful attention to cooling can be required. Airflow through such systems is typically front-to-back: cold air is drawn in through the front of each server, passes over heat exchangers on the server mainboard and CPU, and the hot air is exhausted at the rear.
[0008] For large high-density cluster deployments, specialized 19" racks with integrated cooling circuits or heat exchangers are often employed in order to effectively address the heat dissipation.
[0009] This cooling and racking infrastructure is often obtained independently of the computing equipment itself and, if so obtained, will have a longer operational lifetime than any cost-sensitive computing system housed within it.
[0010] Systems that are engineered for density can impose limitations on the selection of commodity components suitable to use in their construction. For example, it can be challenging to construct systems from mainboards or power supplies conforming to the most-commonly used commodity form factors (ATX), leading to an increase in part cost through reduced component choice.
[0011] A recent development in the high performance and scientific computing field is the use of accelerator processors (AP) for the purpose of increasing computational performance. Accelerator processors are specialized computational units connected as peripheral devices to host CPU systems by means of a standard IO interconnect, such as Peripheral Component Interconnect (PCI) Express. The most common class of contemporary attached processors has arisen from graphics processing technology and is known as GPUs (graphics processing units), e.g. the GeForce and Tesla GPUs from Nvidia Corp and the Radeon and FireStream products from Advanced Micro Devices Corp. Other attached processors include the Intel MIC and the IBM PowerXCell Accelerator Board.
[0012] Accelerator processors are optimized for high computational performance and frequently have a high electrical power load. For those attached via PCI Express, the PCI-SIG High Power Card Electromechanical Specification Rev 1.0 permits a maximum power draw of 300 W per AP in a package up to 55x312x111 mm in size. High-power AP cards may be actively cooled by a blower assembly integrated into the packaging, or be passively cooled, relying on the enclosing chassis to provide adequate airflow over their heat exchangers.
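For orientation, the figures above combine into a rough power budget per chassis and per rack. A sketch: the 300 W per AP, 8-AP chassis and 20-chasses-per-rack figures come from this disclosure, while the 500 W allowance for the host CPU, fans and PSU conversion losses is an illustrative assumption.

```python
# Rough power budget from the figures in the text: 300 W per AP (PCI-SIG
# High Power Card spec), up to 8 APs per chassis, 20 narrow-variant chasses
# per 42U rack. The 500 W host/cooling/conversion allowance is an
# illustrative assumption, not from the disclosure.

AP_WATTS = 300
APS_PER_CHASSIS = 8
HOST_OVERHEAD_WATTS = 500     # assumed host + fans + PSU losses
CHASSES_PER_RACK = 20

chassis_w = AP_WATTS * APS_PER_CHASSIS + HOST_OVERHEAD_WATTS
rack_kw = chassis_w * CHASSES_PER_RACK / 1000

print(chassis_w)  # 2900 W per fully populated chassis
print(rack_kw)    # 58.0 kW per rack at maximum population -- compare the
                  # ~30 kW/rack cited earlier for conventional servers
```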
[0013] For the software programs that have been developed to maximally exploit APs, the computational power of the host system CPU can be of secondary importance to that of the AP itself, as the latter is used for the majority of the computational task, with the host CPU used for the program's nominal control and IO tasks.
[0014] Programs may use one or more APs at once. In the case where several APs are used in parallel, data exchange between them is most likely to be performed over the host's PCI Express interconnect. Therefore, the host system must provide adequate PCI Express IO bandwidth (for AP<->AP and AP<->host transfers) to support this mode of operation. In the case where a program uses only a single AP, an additional, independent instance of the program may be run for each AP attached to the system.
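The single-AP mode described above is commonly realised in software by launching one independent program instance per attached AP, pinning each instance to its own device. The sketch below only builds the per-instance environments and command lines rather than executing anything; the program name `my_ap_program` is hypothetical, and the use of NVIDIA's `CUDA_VISIBLE_DEVICES` variable is an illustrative assumption for GPU-class APs.

```python
# Build one command + environment per AP, so each program instance sees
# exactly one device. CUDA_VISIBLE_DEVICES is NVIDIA's device-masking
# mechanism; "./my_ap_program" is a hypothetical executable name.

def per_ap_launch_plan(n_aps: int, program: str = "./my_ap_program"):
    plan = []
    for dev in range(n_aps):
        env = {"CUDA_VISIBLE_DEVICES": str(dev)}   # instance sees one AP only
        plan.append((env, [program, "--log", f"ap{dev}.log"]))
    return plan

plan = per_ap_launch_plan(4)
print(len(plan))   # 4 -- one independent instance per AP
print(plan[2][0])  # {'CUDA_VISIBLE_DEVICES': '2'}
# Each entry could then be started with
# subprocess.Popen(cmd, env={**os.environ, **env}).
```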
[0015] Given the need to use APs in a high-density cluster computing context, and the differences in power draw and physical arrangement of components between servers with and without attached APs, there exists a need for a server chassis that provides high-density packaging for attached processors and their associated host system, and that can optimize for the number of APs per host system and the IO bandwidth per AP. The density at which computers with APs may be provided in a given space must be optimized by whatever means possible while retaining the ability to use standardized components. As such there exists a need to increase the density at which computers with APs may be provided in a given space.
[0016] Furthermore, AP technology appears currently to be developing at a greater rate than the more mature CPU/server technologies, so in many cases it could be appropriate for the operator of AP equipment to depreciate the host and APs at different rates. Thus a single host system may provide a platform for several generations of AP before requiring a technology refresh itself. This mode of operation will yield cost reductions by reducing the cost of an AP technology refresh and by delivering more computing capability within a fixed power envelope. Therefore there exists a need for an optimized server chassis to provide easy access to the installed APs to facilitate field replacement and upgrading. There is the additional need for a chassis to allow maximum choice in AP selection by supporting both actively and passively cooled APs designed to the limit of the relevant electromechanical specification.
[0017] Since the installation of any AP-equipped systems in a cluster configuration is likely to be within existing or independently-obtained rack and cooling infrastructure, there is a need for a chassis design to conform to these industry standard expectations for physical dimensions and airflow.
[0018] In large cluster installations, the failure of individual computers from hardware or software faults is a routine occurrence. Many faults can be recovered through the use of automatic monitoring software, or by intervention of a systems administrator. Faults which render the computer unresponsive to network connection attempts and so inaccessible to the administrator require a hardware reset or power-cycle of the equipment. This can be particularly problematic to accomplish if the system is co-located in a remote site.
[0019] To improve the maintenance of cluster systems, the servers employed in their construction are typically equipped with management processors (also known as "baseboard management controllers" or BMCs) that operate independently of the host system and which can be used to access its console, monitor environmental and performance sensors or perform remote power cycling.
[0020] Although convenient, BMCs represent an additional cost in the construction of the server system. BMCs are only available for certain server mainboards, further restricting the choice of components. In particular, BMC functions are seldom available on mainboards targeted at the workstation or desktop market. The monitoring and console redirection functions of a BMC might, however, be adequately performed in-band by the host's own operating system or associated software services. Software resetting of a server under operating system control is also possible, but often does not cause the full re-initialization of hardware that a hardware reset or power cycle does. Furthermore, if the operating system is unresponsive, it may not be possible to effect the software reset at all. Therefore, there is a need in the art for a method and apparatus for performing a remote-control hardware reset of a computer system that is not equipped with a BMC.
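The in-band monitoring mentioned above can be sketched in software. A minimal example reading CPU temperatures through the Linux hwmon sysfs interface: the path layout is the standard Linux one, but sensor availability varies by mainboard, so the scan degrades gracefully to an empty list; the 90 °C alarm threshold is an illustrative assumption.

```python
import glob

OVERHEAT_C = 90.0  # illustrative alarm threshold, not from the disclosure

def millidegrees_to_c(raw: str) -> float:
    """hwmon exposes temperatures in millidegrees Celsius."""
    return int(raw.strip()) / 1000.0

def read_hwmon_temps(pattern="/sys/class/hwmon/hwmon*/temp*_input"):
    """Best-effort in-band sensor scan; returns [] on boards without hwmon."""
    temps = []
    for path in glob.glob(pattern):
        try:
            with open(path) as f:
                temps.append(millidegrees_to_c(f.read()))
        except OSError:
            continue  # sensor vanished or unreadable; skip it
    return temps

def overheating(temps, limit=OVERHEAT_C):
    """True if any reading exceeds the alarm threshold."""
    return any(t > limit for t in temps)

# Pure-logic demonstration (no hardware needed):
print(millidegrees_to_c("47000"))   # 47.0
print(overheating([42.0, 47.0]))    # False
print(overheating([42.0, 95.5]))    # True
```

A monitoring daemon built along these lines can report sensor readings over the network, replacing one BMC function in-band; it cannot, of course, recover a hung host, which is the gap the reset circuit below addresses.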
[0021] Server mainboards have a hardware reset capability that can be activated by electrically connecting two pins on a header connector. Typically, these pins are connected to a push-to-make switch mounted on the chassis to allow for manual reset by a systems administrator. Alternatively, the connection between the pins may be effected by using an electronic circuit. There is a need in the art for an electronic circuit for performing a reset of the server to which it is attached and which permits remote activation by means of an electrical signal. Further, there is a need in the art to address the labour cost of system maintenance by reducing the circumstances under which on-site visits to a co-location facility are required for fault fixing.
SUMMARY OF THE INVENTION
[0022] In one aspect, the specification relates to a computer, containing: a chassis having a front panel, a rear panel and side panels; a mainboard positioned proximate to the rear panel; one or more accelerator processors (AP) coupled to the mainboard; and a power supply unit (PSU) positioned proximate to the front panel.
[0023] In another aspect, the specification relates to a server rack, having the computer disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] Reference will now be made, by way of example, to the accompanying drawings which show example embodiments of the present application, and in which:
[0025] Figure 1 shows a front perspective view of a chassis in accordance with an embodiment disclosed in the specification;
[0026] Figure 2 shows a rear perspective view of a chassis in accordance with an embodiment disclosed in the specification;
[0027] Figure 3 shows a side perspective view of chassis with side panel removed in accordance with an embodiment disclosed in the specification;
[0028] Figure 4 shows a plan view of a chassis interior with side panel removed in accordance with an embodiment disclosed in the specification;
[0029] Figure 5 shows an exploded view of chassis components in accordance with an embodiment disclosed in the specification;
[0030] Figure 6 shows a diagram of three chasses installed on a 19" equipment rack shelf in accordance with an embodiment disclosed in the specification;
[0031] Figure 7 shows a diagram of 20 chasses installed in a 19" equipment rack in accordance with an embodiment disclosed in the specification;
[0032] Figure 8 shows a diagram of four narrow variant chasses installed on a 19" equipment rack shelf in accordance with an embodiment disclosed in the specification;
[0033] Figure 9 shows a diagram of the standard-height and the innovative low-profile 6-pin PCI Express power plugs in accordance with an embodiment disclosed in the specification;
[0034] Figure 10 shows a schematic of a reset circuit in accordance with an embodiment disclosed in the specification;
[0035] Figure 11 shows a PCI Express card, indicating the IO bracket, in accordance with an embodiment disclosed in the specification.
[0036] Similar reference numerals may have been used in different figures to denote similar components.
DESCRIPTION OF EXAMPLE EMBODIMENTS
[0037] Description of computer enclosure (1): Figures 1 and 2 show front and rear external aspects of a chassis (2) in assembled form, in accordance with an embodiment disclosed in the specification. In the front panel (3) (Fig 1), a front-panel cutout (4) is provided for a power switch, and grilles (6) for cool air intake. On the rear panel (8) (Fig 2), cutouts are provided for the mainboard IO ports (12) and 8 PCI Express IO cards (14). The positioning of these cutouts conforms to the ATX specification to allow the use of standardized mainboards. An additional grille (16) is provided for hot air exhaust. An input socket (18) for mains electricity is mounted on the rear panel (8). As a variation, a cutout on the rear panel (20) is provided to allow a mains electricity cable to pass directly through the chassis (2) to the internal PSU (42), so minimizing the component count. A cutout (22) is provided for the reset circuit socket.
[0038] Figures 3 and 4 show internal views of the chassis (2). Access to the chassis (2) is by removal of a removable side panel (24), which is attached by means of securing screws. Mounting stand-offs (28) are attached to the fixed side panel in a configuration compatible with the ATX mainboard mounting specification. The volume (30) indicates the space occupied by up to 8 single-slot or 4 double-slot width APs. The volume (32) indicates the space occupied by the CPU heat exchanger.
[0039] Two axial fans (34) are attached directly to the front panel (3) via through-hole mountings. An "S"-shaped bracket (36) is provided for mounting a hard disk drive (38) and for providing a rest (40) for the edge of the PSU (42), which itself can be attached to a midplane (44, Figure 5). The midplane (44) can provide structural support for the chassis (2), preventing flexion. A flange (46) provides additional strength and resistance to bending, and contains a set of holes (46) for securing the power cables that pass to the APs (30). Apertures (50) provide for power cabling and are sized to permit the passage of ATX power connectors.

[0040] APs conforming to the PCI Express mechanical specification have an L-shaped rear bracket (60) for secure mounting to the chassis (2), as shown in Figure 11. To minimize the width of the chassis (2), the mainboard is mounted on low-profile standoffs (28). These reduced standoffs (28) provide insufficient clearance for the end of the bracket, causing it to clash with the base panel (26). Cutouts (64) can be provided in the base panel (26), allowing the ends of the brackets (60) to pass through.
[0041] An alternative narrow variant of the chassis provides mounting holes (52, Fig 2) in the metalwork between the PCI Express cutouts (Fig 2, 14), by which a double-width AP with its L-bracket removed may be mounted.

[0042] Airflow: Air for cooling flows from the front of the chassis (2) to the rear and is driven by the front axial fans (Fig 4, 34). Air flow across the AP heat exchangers may be driven completely by the front panel fans or be augmented by blowers integrated into the AP heat exchanger inside the AP itself.

[0043] The use of an ATX power supply (Fig. 4, 42) can introduce some complexity, as these units contain their own fans. The PSU is mounted on the midplane such that its air intake is on the side closest to the fixed side panel. The air outlet is on the side attached to the midplane and blows across the CPU heat exchanger before passing out of the rear grille.
[0044] A lip running the width of the midplane (Fig 5, 54) acts to minimize air flow from the front fans bleeding off over the CPU rather than passing through the APs, and provides structural strength to avoid bending under the weight of the PSU.
[0045] Rack mounting: Servers mounted in a 19" equipment rack are typically mounted horizontally, attached to rails or sliders that run between front and rear rack posts. In contrast, the computer enclosure (1) disclosed herein can be mounted vertically, for instance upon a standard 19" shelf, as shown in Figure 6. The shelf (56) is attached to the rack posts (58) and the computer enclosure (1) set upon it. The standard-width chassis (2) is sufficiently narrow to allow 3 units to be placed side by side on a single shelf, between the 19" rack posts. The shelf (56) may have a lip along its edges to minimize movement of the chasses. The combined height of the chassis and shelf occupies 8 rack units, allowing 5 shelves of chasses to be accommodated in a standard 42U 19" rack (Figure 7). An alternative variant of the chassis is sufficiently narrow to permit 4 chasses to sit on a single shelf (Figure 8).
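The density figures above can be checked with a short calculation. This is a sketch only, using the quantities stated in the text: 8U per shelf including the chassis, a 42U rack, and up to 8 single-slot APs per chassis.

```python
# Rack density arithmetic for the vertically mounted chasses described above.
RACK_HEIGHT_U = 42      # standard full-height 19" rack
SHELF_HEIGHT_U = 8      # chassis + shelf, as stated in the text
APS_PER_CHASSIS = 8     # up to 8 single-slot APs per chassis

def rack_capacity(chasses_per_shelf):
    """Return (shelves, chasses, APs) for a full rack."""
    shelves = RACK_HEIGHT_U // SHELF_HEIGHT_U   # 5 shelves fit in 42U
    chasses = shelves * chasses_per_shelf
    return shelves, chasses, chasses * APS_PER_CHASSIS

print(rack_capacity(3))  # standard width, 3 per shelf -> (5, 15, 120)
print(rack_capacity(4))  # narrow variant, 4 per shelf -> (5, 20, 160)
```

Per 8U shelf this corresponds to 24 APs (standard width) or 32 APs (narrow variant), consistent with the figures recited in claim 12.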
[0046] Narrow variant: The narrow variant of the chassis allows 4 units to sit on a single shelf in a 19" rack. In order to accomplish this, the maximum width of the chassis must be 120 mm. This necessitates several internal changes. The mainboard standoffs (28) are further reduced in profile. The AP L-shaped IO brackets (Fig 11, 60) must be removed and the custom mounting holes (Fig 2, 52) used. The front, back, top and bottom panels are reduced in width. The midplane is reduced in width, losing the two top ventilation holes (Fig 5, 62). The PSU (42) therefore sits flush with the base panel (26) and the removable side panel (24).

[0047] The PCI Express electromechanical specification permits the power inputs to an AP to be positioned on the top or side of the unit. If these sockets are positioned on the top, flush with the edge, there may be insufficient clearance between the side panel of the chassis and the accelerator processors (Fig 4, 30) to accommodate a standard PCI Express power plug (Fig 9, 66). A low-profile version of the PCI Express power plug can be used, as disclosed in Fig 9 (68). This plug (68) has the same pin and electrical configuration as the standard plug, but a reduced-height body. Modified plugs can be manufactured for such an enclosure.

[0048] Assembly: The 7 major panels of the chassis are all fabricated separately and attached together with screw fixings. Figure 5 shows an exploded view of the chassis (2). This construction technique allows the bare chasses to be shipped unassembled, so reducing shipping costs where charges are levied on the volume of freight rather than its mass. This modular construction also facilitates in-field upgrade of individual panels, if required (for example, to accommodate a new PSU mounting). This may be useful if a chassis is expected to house several generations of APs over its operational life.
[0049] Reset Circuit: The reset circuit (Figure 10) is designed to allow the server to which it is attached to be reset by means of a remotely generated electrical signal. It is also able to produce such a signal for the purpose of driving the reset of a second server similarly equipped with this circuit.
[0050] The intended mode of operation is to connect pairs of servers together via their remote reset circuits, so allowing each to reset the other. In this manner, a failed and unresponsive system can be remotely reset by sending a remote reset signal from the afflicted system's peer. Only a simultaneous failure of both machines in a pair will require manual intervention, so reducing maintenance overhead.
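As an illustration only, the paired-reset behaviour described above can be modelled in a few lines. The class and method names are hypothetical, not part of the disclosed circuit.

```python
# Toy model of the paired remote-reset arrangement: two servers are
# cross-connected so that each can drive the other's reset line.
class Server:
    def __init__(self, name):
        self.name = name
        self.responsive = True
        self.peer = None

    def pair_with(self, other):
        # Cross-connect the two remote reset circuits.
        self.peer, other.peer = other, self

    def reset_peer(self):
        # Assert the remote reset line of the paired machine,
        # returning it to a responsive state.
        if self.peer is not None:
            self.peer.responsive = True

a, b = Server("a"), Server("b")
a.pair_with(b)
b.responsive = False   # b hangs and stops responding
a.reset_peer()         # its peer drives the remote reset line
print(b.responsive)    # True
```

As the description notes, manual intervention is needed only when both machines in a pair fail at once, since neither can then drive the other's reset line.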
[0051] The reset circuit contains four signaling lines. Two lines are inputs used to activate the reset, and two lines are outputs for activating the reset circuit of the peer computer.

[0052] These 4 electrical signal lines may be presented for external connection by means of an appropriate socket connector, for example a 4-pole 3.5 mm jack socket, for which a cutout is provided on the rear panel (Fig 2, 22).
[0053] A pair of servers equipped with the reset circuit are connected together using a 4-pole plug complementary to the socket and wired in a cross-over configuration: 1↔3, 2↔4. This configuration connects the reset and signal generation lines of each system, allowing each system in the pair to reset, and be reset by, the other.
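The cross-over wiring can be sketched as a pin mapping. The assignment of pins 1–2 as inputs and pins 3–4 as outputs is an assumption for illustration; the description fixes only that a positive voltage on pin 1 initiates the reset.

```python
# The 4-pole reset-link cable is wired cross-over: 1<->3, 2<->4.
CROSSOVER = {1: 3, 2: 4, 3: 1, 4: 2}

# Assumed pin roles (illustrative): pins 1-2 are the reset inputs,
# pins 3-4 are the outputs that drive the peer's reset circuit.
INPUTS, OUTPUTS = {1, 2}, {3, 4}

# With this wiring every output pin lands on an input pin at the far end,
# so each system's signal-generation lines reach its peer's reset lines.
routed_to_inputs = all(CROSSOVER[p] in INPUTS for p in OUTPUTS)
print(routed_to_inputs)  # True
```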
[0054] Aspects of operation:
• The reset circuit provides optical isolation, preventing a ground loop from forming when the two servers are connected.
• The reset signal is initiated by applying a positive voltage to pin 1. This signal may be generated by operation of a general-purpose IO line on the mainboard or, if none is present, by re-purposing an existing control line, for example a control line on a parallel printer port, the DTR/RTS lines of an RS-232 serial port, or the system beeper. Contemporary mainboards, despite typically not having externally presented serial or parallel ports, retain these functions and provide the appropriate electrical signaling on mainboard header connectors. The RS-232 DTR line is the preferred source for the reset signal.
• The reset signal is generated under software control from the activating system.
• The circuit can be assembled in a physically compact package. It can be assembled on a small PCB that may plug directly into the RS-232 or front-panel IO header connectors.

[0055] Certain adaptations and modifications of the described embodiments can be made. Therefore, the above-discussed embodiments are considered to be illustrative and not restrictive.
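The software-controlled reset described above (with the RS-232 DTR line as the preferred source) could be driven by a short routine such as the following. This is a sketch only: it assumes a pyserial-style port object with a writable `dtr` attribute, and the port name and pulse width are illustrative, not taken from the specification.

```python
import time

def pulse_reset(port, width_s=0.5):
    """Pulse the DTR line to trigger the peer's reset circuit.

    `port` is any object exposing a writable boolean `dtr` attribute,
    such as a pyserial `serial.Serial` instance.
    """
    port.dtr = True       # apply the positive voltage that initiates the reset
    time.sleep(width_s)   # hold long enough for the peer circuit to latch
    port.dtr = False      # release the line

# Hypothetical usage with pyserial (port name is illustrative):
#   import serial
#   with serial.Serial("/dev/ttyS0") as port:
#       pulse_reset(port)
```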

Claims

WHAT IS CLAIMED IS:
1. A computer, comprising:
a chassis having a front panel, a rear panel and side panels;
a mainboard positioned proximate to the rear panel;
one or more accelerator processors (AP) coupled to the mainboard; and
a power supply unit (PSU) positioned proximate to the front panel.
2. The computer according to claim 1, wherein the side panels comprise a removable panel and a base panel.
3. The computer according to claim 1 or 2, further comprising a PSU power cable cut-out on the rear panel.
4. The computer according to any one of claims 1 to 3, further comprising AP mounting holes adapted for mounting an AP to the rear panel.
5. The computer according to any one of claims 1 to 4, further comprising a midplane panel.
6. The computer according to claim 5, wherein the PSU is coupled to the midplane panel.
7. The computer according to claim 5 or 6, further comprising one or more fans coupled to the front panel.
8. The computer according to claim 7, wherein the midplane panel comprises apertures for permitting air flow from the front panel to the rear panel.
9. The computer according to any one of claims 1 to 8, further comprising an S-shaped bracket, the S-shaped bracket providing a rest for the PSU.
10. The computer according to any one of claims 1 to 9, further comprising cutouts in the base panel for receiving one or more brackets of the one or more APs.
11. A server rack comprising the computer as defined in any one of claims 1 to 10.
12. A server rack comprising a standard server rack and a plurality of
computers, wherein the computers provide 24 or 32 APs per 8U of the server rack.
PCT/GB2013/051056 2012-04-25 2013-04-25 High density computer enclosure for efficient hosting of accelerator processors WO2013160684A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261638002P 2012-04-25 2012-04-25
US61/638,002 2012-04-25

Publications (1)

Publication Number Publication Date
WO2013160684A1 true WO2013160684A1 (en) 2013-10-31


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266247B1 (en) * 1998-08-24 2001-07-24 Racal Instruments Inc. Power supply connection system
US20030227755A1 (en) * 2002-06-10 2003-12-11 Haworth Stephen Paul Electronics assembly
US20040264123A1 (en) * 2003-06-26 2004-12-30 International Business Machines Corporation Server packaging architecture utilizing a blind docking processor-to-midplane mechanism



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13720499

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13720499

Country of ref document: EP

Kind code of ref document: A1