US20150254205A1 - Low Cost, High Performance and High Data Throughput Server Blade - Google Patents
- Publication number
- US20150254205A1 (U.S. patent application Ser. No. 14/198,510)
- Authority
- US
- United States
- Prior art keywords
- circuit board
- main circuit
- connectors
- sata
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4063—Device-to-bus coupling
- G06F13/4068—Electrical coupling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4063—Device-to-bus coupling
- G06F13/409—Mechanical coupling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4282—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/161—Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present invention relates generally to computer servers and processing. More particularly, the invention relates to low cost, high performance and high data throughput server blades.
- Certain server applications, such as video streaming, may be suitable for physicalization due to relatively high input/output (I/O) bandwidth requirements coupled with relatively low processing power requirements.
- I/O input/output
- existing blade servers based on virtualization may not be well suited for these applications, as these blade servers may have higher processing power than needed along with limited I/O bandwidth across the few large physical servers.
- the cost of processors and system components for traditional server applications tends to decrease more slowly than the cost of processors and components for high volume consumer applications.
- a server blade insertable into a chassis of a blade server system comprises: (1) a main circuit board coupled to the chassis upon insertion; (2) a plurality of connectors residing on the main circuit board; (3) a plurality of grouped hard disk drives; and (4) a plurality of computer modules, each insertable into a corresponding one of the plurality of connectors.
- Each of the plurality of grouped hard disk drives couples to one or more of the plurality of computer modules.
- Each of the plurality of grouped hard disk drives includes a first hard disk drive exposed proximate to a front side of the chassis, and a second hard disk drive positioned between the first hard disk drive and a back side of the chassis.
- a first subset of the plurality of grouped hard disk drives includes a first grouped hard disk drive and a second grouped hard disk drive stacked on the first grouped hard disk drive.
- the server blade insertable into the chassis of the blade server system comprises: (1) a main circuit board that couples to the chassis upon insertion; (2) a plurality of computer modules; (3) a plurality of connectors residing on the main circuit board, each adapted to connect to a corresponding one of the plurality of computer modules; and (4) a plurality of hot-plug hard drive storage modules, each removable from a front side of the server blade while the server blade is installed in the chassis.
- Each of the plurality of hot-plug hard drive storage modules comprises a frame, a first hard disk drive attached to a front portion of the frame, and a second hard disk drive attached to a rear portion of the frame. Each of the first hard disk drive and the second hard disk drive are coupled to at least one of the plurality of computer modules.
- the server blade insertable into the chassis of the blade server system comprises: (1) a main circuit board coupled to the chassis upon insertion; (2) a plurality of connectors disposed on the main circuit board; (3) a plurality of computer modules, each insertable into a corresponding one of the plurality of connectors; and (4) a hub disposed on the main circuit board that couples to the chassis and to a communication controller included in each of the plurality of computer modules.
- the hub processes input data to obtain requests distributed to the plurality of computer modules, each of the plurality of computer modules generates an output data stream in response to a corresponding request, each output data stream has a first bandwidth higher than a second bandwidth of the corresponding request, and the hub aggregates the output data streams of the plurality of computer modules to obtain output data.
- FIG. 1 illustrates a top view of a server blade, according to an embodiment of the invention
- FIG. 2 illustrates a front view of a server blade, according to an embodiment of the invention
- FIG. 3 illustrates a logical view of a computer module, according to an embodiment of the invention.
- FIG. 4 illustrates a logical view of a blade server system, according to an embodiment of the invention.
- Processors designed for use in high volume consumer applications can provide a higher performance per cost than processors designed for low volume, high performance server applications.
- aggressive competition in high volume consumer computer systems can drive cost of processors and components for consumer applications down more rapidly than that of high end server processors and components.
- Embodiments of the invention include low cost computer modules incorporating these highly integrated consumer processors and system components, and at the same time take advantage of blade server design concepts. The use of a large number of these low cost computer modules within a blade server system for certain server applications, such as video streaming applications, can result in reduced cost and increased performance.
- the server blade 100 includes a main circuit board 110 that can couple to a chassis.
- the main circuit board 110 can insert into connectors 103 A- 103 N coupled to a backplane or midplane 111 .
- Computer modules 101 ( 101 A- 101 H in the illustrated embodiment) are electrically connected to the main circuit board 110 , such as by inserting into connectors (see FIG. 2 ) on the main circuit board 110 .
- the computer modules 101 are coupled to a management controller 112 .
- a mezzanine plug-on card 105 is disposed on the main circuit board 110 .
- the mezzanine plug-on card 105 can also insert into one or more connectors 103 to the backplane or midplane 111 .
- each of the hard disk drives 116 is coupled to a corresponding one of the computer modules 101 through a high speed SATA (Serial Advanced Technology Attachment) Rev. 2 interface.
- An input/output (IO) hub 324 included in the computer module 101 may provide one SATA interface port.
- the SATA interface port included in the computer module 101 can connect through connectors and the main circuit board 110 to the corresponding hard disk drive 116 .
- the bandwidth requirements of each of the computer modules 101 can be met by the corresponding one of the hard disk drives 116 over the SATA Rev. 2 interface, and by corresponding switching bandwidth (see discussion with reference to FIG. 4 ).
- the effective data bandwidth per computer module 101 can be approximately 150 Mbyte/s.
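The ~150 Mbyte/s figure can be sanity-checked against the SATA Rev. 2 line rate. A minimal sketch follows; the 8b/10b line coding is part of the SATA specification, while the 50% protocol/media efficiency factor is our assumption, not a figure from the patent:

```python
# Sanity check of the ~150 Mbyte/s effective bandwidth per computer module.
# SATA Rev. 2 signals at 3.0 Gbit/s with 8b/10b encoding, giving a raw
# payload ceiling of 300 Mbyte/s; the 0.5 factor (protocol overhead plus
# sustained platter-rate limits) is an assumed round number.
SATA2_LINE_RATE_GBPS = 3.0          # Gbit/s signaling rate
ENCODING_EFFICIENCY = 8 / 10        # 8b/10b line coding
ASSUMED_DRIVE_EFFICIENCY = 0.5      # assumption: protocol + media limits

raw_mbyte_s = SATA2_LINE_RATE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY
effective_mbyte_s = raw_mbyte_s * ASSUMED_DRIVE_EFFICIENCY

print(f"raw payload ceiling: {raw_mbyte_s:.0f} Mbyte/s")        # 300 Mbyte/s
print(f"effective per module: {effective_mbyte_s:.0f} Mbyte/s")  # 150 Mbyte/s
```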
- it may be desirable to increase the number of computer modules 101 and corresponding hard disk drives 116 per server blade 100 .
- the Server System Infrastructure (SSI) Forum has provided mechanical, electrical, and power specifications for a standardized server blade. These specifications include a 407.9 mm blade depth (corresponding to a length of lateral sides 122 and 124 of the server blade 100 , shown in FIG. 1 ), a 279.4 mm blade width (corresponding to a length of front side 120 of the server blade 100 , shown in FIG. 1 ), and a 41.7 mm blade height.
- the front side 120 of the server blade 100 corresponds to a front side of the main circuit board 110
- the lateral sides 122 and 124 of the server blade 100 correspond to lateral sides of the main circuit board 110 .
- a first side 130 of each computer module 101 has a length of approximately 62 mm, and a second side 132 of each computer module 101 has a length of approximately 86 mm.
- eight computer modules 101 A- 101 H can be coupled to the server blade 100 .
- the second side 132 of the computer module 101 A is positioned adjacent to the front side 120 of the server blade 100 so that there is a path for front-to-back airflow over the computer modules 101 . In this embodiment, there is then sufficient space along the front side 120 of the server blade 100 to dispose two hard disk drives 116 A and 116 C of typical dimensions adjacent to the front side 120 .
- each pair of hard disk drives 116 is included in a corresponding grouped hard disk drive 102 .
- This structural arrangement is to overcome the limited front surface area of the server blade 100 .
- grouped hard disk drive 102 A includes the hard disk drives 116 A and 116 B
- grouped hard disk drive 102 B includes the hard disk drives 116 C and 116 D.
- the grouped hard disk drives 102 may be oriented such that the hard disk drives 116 A and 116 C are exposed proximate to the front side 120 of the server blade 100 , and therefore proximate to a front side of the chassis to which the server blade 100 is coupled.
- the hard disk drives 116 A and 116 C may be exposed at the front side of the chassis to which the server blade 100 is coupled.
- the hard disk drive 116 B may be positioned between the hard disk drive 116 A and a back side of the chassis to which the server blade 100 is coupled, and the hard disk drive 116 D may be positioned between the hard disk drive 116 C and a back side of the chassis to which the server blade 100 is coupled.
- a front view of the server blade 100 is illustrated.
- a grouped hard disk drive 102 C may be stacked on the grouped hard disk drive 102 A
- a grouped hard disk drive 102 D may be stacked on the grouped hard disk drive 102 B.
- the grouped hard disk drives 102 C and 102 D may be oriented similarly to the grouped hard disk drives 102 A and 102 B, respectively.
- eight hard disk drives 116 can be included in the server blade 100 while maintaining the front-to-back airflow path over the computer modules 101 , and while staying within the SSI specifications for blade height.
- an SSI specified server blade can expose at most four hard disk drives 116 , in two stacks of two, at the front side 120 of the main circuit board 110 .
- eight hard disk drives 116 may be used to support eight computer modules 101 (one hard disk drive 116 per computer module 101 ). This can yield an aggregate hard disk drive data bandwidth of up to 1.5 Gbyte/s with eight hard disk drives 116 concurrently being accessed by eight corresponding computer modules 101 .
- eight hard disk drives 116 may be used to support four computer modules 101 (for example, the two hard disk drives 116 in one of the grouped hard disk drives 102 per computer module 101 ).
- the grouped hard disk drives 102 may correspond to hot-plug hard drive storage modules 106 that are removable from the front side 120 of the server blade 100 .
- a hot-plug hard drive storage module 106 includes a grouped hard disk drive 102 , and is positioned similarly on the main circuit board 110 .
- Frames 107 can be attached to the main circuit board 110 , and each of the hot-plug hard drive storage modules 106 may be adapted to be placed in a corresponding one of the frames 107 .
- Each of the hot-plug hard drive storage modules 106 may include a first hard disk drive (such as hard disk drives 116 A and 116 C) disposed in a front portion of the frame 107 adjacent to the front side 120 of the server blade 100 , and may include a second hard disk drive (such as hard disk drives 116 B and 116 D) disposed in a rear portion of the frame 107 .
- each of the hot-plug hard drive storage modules 106 includes a connector that supports two SATA connections for the two hard disk drives 116 to connect to one or two computer modules 101 via the main circuit board 110 .
- a hot-plug hard drive storage module 106 C may be stacked on the hot-plug hard drive storage module 106 A, and a hot-plug hard drive storage module 106 D may be stacked on the hot-plug hard drive storage module 106 B.
- At least one of the computer modules 101 may be positioned between the grouped hard disk drive 102 A and the grouped hard disk drive 102 B.
- the grouped hard disk drive 102 A may be positioned proximate to the lateral side 124 of the server blade 100
- the grouped hard disk drive 102 B may be positioned proximate to the lateral side 122 of the server blade 100 . This can allow airflow from the front side of the chassis to pass between the grouped hard disk drive 102 A and the grouped hard disk drive 102 B, so that the front-to-back airflow passes over the computer modules 101 A- 101 F.
- an air baffle 104 disposed on the main circuit board 110 is positioned to direct the front-to-back airflow toward the lateral side 124 of the server blade 100 .
- airflow can be provided for cooling the computer modules 101 G- 101 H, which are positioned at least partially behind the grouped hard disk drive 102 A.
- the front-to-back airflow is substantially centrally positioned over the main circuit board 110 . This may facilitate efficient direction of the airflow toward the computer modules 101 G and 101 H adjacent to the lateral side 124 of the server blade 100 .
- the grouped hard disk drive 102 A may be positioned next to the grouped hard disk drive 102 B, such that the computer module 101 A is positioned adjacent to either the lateral side 122 or the lateral side 124 of the server blade 100 .
- the front-to-back airflow is substantially laterally positioned over the main circuit board 110 .
- operational status indicators of the hard disk drives 116 that are displaced from the front side 120 of the main circuit board 110 can be provided at a front side 140 of the corresponding grouped hard disk drives 102 .
- a visual indicator (such as an LED indicator) that the hard disk drive 116 B is operating may be provided at the front side 140 of the grouped hard disk drive 102 A, along with a visual indicator that the hard disk drive 116 A is operating.
- a processor 322 (see FIG. 3 ) on each of the computer modules 101 may communicate serially (such as via I 2 C) with the management controller 112 on the main circuit board 110 .
- the management controller 112 can monitor the operational status of the processors 322 . If processor failure is detected, the management controller 112 can alert an administrator. Information such as temperature, identification of the computer module 101 , and size of memory or storage device can also be communicated serially to the management controller 112 .
- a power switch controlled by the management controller 112 can shut off power to any one of the computer modules 101 .
- the management controller 112 can alert high level software to shift the workload of the failed computer module 101 to another computer module 101 , and subsequently shut off its power so that the failed computer module 101 does not affect the operation of the rest of the server blade 100 .
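The monitor-and-failover behavior described above can be sketched as follows. All names here (`ComputerModule`, `handle_failures`, the simple first-spare policy) are illustrative stand-ins, not interfaces from the patent:

```python
# Sketch of the management controller's failure handling: on a detected
# module failure, shift its workload to a healthy module, then cut power
# so the failed module does not affect the rest of the server blade.
from dataclasses import dataclass, field

@dataclass
class ComputerModule:
    module_id: str
    healthy: bool = True
    powered: bool = True
    workloads: list = field(default_factory=list)

def handle_failures(modules):
    """Move workloads off failed modules and power the failed modules down."""
    spares = [m for m in modules if m.healthy and m.powered]
    for m in modules:
        if not m.healthy and m.powered:
            target = spares[0]                    # simplest N+1 policy: first healthy module
            target.workloads.extend(m.workloads)  # "alert high level software" to shift work
            m.workloads.clear()
            m.powered = False                     # management controller shuts off its power
    return modules

mods = [ComputerModule("101A", workloads=["stream-1"]),
        ComputerModule("101B", healthy=False, workloads=["stream-2"])]
handle_failures(mods)
print(mods[0].workloads)   # ['stream-1', 'stream-2']
print(mods[1].powered)     # False
```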
- the management controller 112 supports at least one of 1:1, 1+1, and N+1 redundancy of the computer modules 101 .
- the management controller 112 can support load-balancing between two or more of the computer modules 101 .
- a battery 114 on the main circuit board 110 can provide power to each of the computer modules 101 for maintaining data in memory, such as a static CMOS memory, or for keeping a portion of circuitry on each of the computer modules 101 active when the remainder of the circuitry on the computer modules 101 is powered down.
- the power from the battery 114 can be kept on even if the main power to one or more of the computer modules 101 is shut off by the management controller 112 for saving power when the one or more of the computer modules 101 are not in use.
- two batteries 114 can reside on the main circuit board 110 so that battery power is continuously available during replacement of one of the two batteries 114 .
- Embodiments of the present invention can use different numbers of computer modules 101 to populate the server blade 100 .
- Other embodiments can use server blades 100 of different form factors, electrical, and power specifications.
- An embodiment of the present invention uses plug-in computer modules 101 to simplify manufacturing and facilitate ease of repair. In this embodiment, if a computer module 101 fails, only the failed computer module 101 needs to be unplugged and replaced, saving the rest of the server blade 100 . This also allows the server blade 100 to be populated partially with computer modules 101 , with the option of plugging in additional computer modules 101 later.
- the grouped hard disk drives 102 A and 102 C may be proximate to the lateral side 124 of the server blade 100
- the grouped hard disk drives 102 B and 102 D may be proximate to the lateral side 122 of the server blade 100
- the computer module 101 A may be positioned between the grouped hard disk drive 102 A and the grouped hard disk drive 102 B.
- a connector 212 is disposed on the main circuit board 110 .
- the computer module 101 A is insertable into the connector 212 , which is adapted to connect to the computer module 101 A, and to couple the computer module 101 A to the main circuit board 110 .
- the connector 212 may be a vertical connector.
- the computer module 101 includes an integrated system on chip 321 comprising a processor 322 and a memory controller 323 , a main memory 327 coupled to the memory controller 323 , an input/output hub 324 , a communication controller 325 , and a mass storage device 326 .
- the main memory 327 is directly coupled to the memory controller 323 .
- the computer module 101 is low in height to fit within the SSI height limitation of 41.7 mm. It is contemplated that the computer module 101 may be of even lower height.
- One embodiment of the present invention uses a horizontally fitted double data rate (DDR) DDR2/3 small outline dual in-line memory module (SODIMM) as the main memory 327 within the computer module 101 to meet the SSI height limitation.
- the SODIMM memory may be a plug-in unit to improve reusability.
- Another embodiment uses DDR2/3 Micro-DIMM as the main memory 327 to reduce the size of the computer module 101 .
- embodiments of the invention can use low power double data rate (LPDDR) LPDDR2 memory or future generations of low power DDR memory with low voltage swing low-voltage differential signaling (LVDS) data links.
- the processor 322 may be a low power processor or system chip, e.g., system on chip (SOC), that can operate with a low profile top mounted heat sink to fit within the SSI height limitation.
- SOC system on chip
- the low profile heat sink can be sufficient for air cooling for low power system chips.
- the computer module 101 can be a small printed circuit board populated on both sides with major components such as a system chip, the input/output hub 324 , a SODIMM memory horizontal socket, and the connector 212 (see FIG. 2 ).
- the flash drive may be a plug-in unit to improve reusability.
- the system chip can be soldered directly on the small printed circuit board to reduce cost and to remove the additional height of an expensive socket. Without the socket, the top mounted heat sink can increase in height to increase cooling for the system chip.
- the computer module 101 includes one USB flash drive or one solid state drive (SSD).
- USB 3.0, released in 2008, has a signaling rate of 4.8 Gbit/s, versus 480 Mbit/s for USB 2.0.
- a USB 3.0 flash drive interfaces to the input/output hub 324 or the processor 322 in the computer module 101 .
- a USB flash drive or a SATA SSD can serve as local cache on the computer module 101 to store frequently accessed content and video streams.
- USB 3.0 connections can have an effective data bandwidth of over 2.4 Gb/s or 300 MByte/s.
- a single SATA SSD can yield an effective data bandwidth of around 150 to 300 MByte/s.
- the computer module 101 includes a flash drive or a local SSD as cache. This can provide a higher storage data bandwidth than the hard disk drives 116 included in the server blade 100 (see FIG. 1 ).
- two USB flash drives or two SATA SSDs can be included in a single computer module 101 to further increase data bandwidth. Either the USB flash drive or the SATA SSD can be used to store an operating system or virtualization software to allow the computer module 101 to boot upon power up.
- Referring to FIG. 4 , a logical view of a blade server system 400 according to an embodiment of the invention is illustrated.
- One or more main circuit boards 110 couple to a chassis 401 upon insertion into connectors 103 of the midplane or backplane 111 .
- the circuit boards 110 can be powered, at least in part, by the power supply 422 .
- the computer modules 101 each insert into a corresponding connector 212 , and are each coupled to a hard disk drive 116 .
- the computer modules 101 can be powered, at least in part, by the power supply 422 via the power regulator 424 .
- a hub 402 on the main circuit board 110 couples to the communication controller 325 on each of the computer modules 101 .
- the hub 402 can include switches such as an Ethernet switch or a PCI Express switch.
- the hub 412 can include switches such as a 10 Gigabit Ethernet (10 GbE) switch or a PCI Express switch.
- two 1 Gigabit Ethernet (GbE) connections are provided from each computer module 101 to the main circuit board 110 .
- the two GbE connections can provide approximately 200 Mbyte/s of network bandwidth.
- These GbE links from each computer module 101 connect to the Ethernet switching hub 402 on the main circuit board 110 with separate connections.
- the Ethernet switch 402 can have 16 GbE ports and 2 10 GbE ports.
- the 10 GbE ports can connect to the 10 GbE switch 412 within the console midplane 111 .
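The port counts quoted above can be checked against the module count. A small sketch using only figures from the text (the non-blocking comparison is our own framing):

```python
# Port-budget check for the per-blade Ethernet switch: eight computer
# modules with two 1 GbE links each feed a switch that has 16 GbE ports
# and two 10 GbE uplinks toward the midplane switch 412.
NUM_MODULES = 8
GBE_LINKS_PER_MODULE = 2
SWITCH_GBE_PORTS = 16
UPLINKS_10GBE = 2

downlink_ports_needed = NUM_MODULES * GBE_LINKS_PER_MODULE
downlink_gbit = downlink_ports_needed * 1   # 1 Gbit/s per GbE port
uplink_gbit = UPLINKS_10GBE * 10            # 10 Gbit/s per uplink

print(downlink_ports_needed)        # 16 -- exactly fills the switch's GbE ports
print(uplink_gbit >= downlink_gbit) # True -- uplink capacity covers all downlinks
```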
- high speed PCI Express channels are provided from either the system chip 321 or the input/output hub 324 to provide data communication to the main circuit board 110 , and eventually to an external network.
- PCI Express 2.0 can have an effective 400 Mbyte/s per link data throughput.
- PCI Express 3.0 can have an effective per link data throughput about twice that of PCI Express Rev. 2.0.
- a x1 PCI Express 2.0 link is provided from the computer module 101 coupled through the connector 212 to the PCI Express switch 402 on the main circuit board 110 of the server blade 100 (see FIG. 1 ) to serve as the communication channel.
- a x1 PCI Express 2.0 channel can provide an approximately 400 Mbyte/s data transfer rate sufficient to handle the storage data transfer rate of both the local flash drive and the external SATA hard disk drive 116 for the computer module 101 .
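The ~400 Mbyte/s figure for a x1 PCI Express 2.0 link can likewise be derived from the per-lane rate. A minimal sketch; the 8b/10b coding is part of the PCIe 2.0 specification, while the 0.8 packet-overhead factor is an assumed round number, not from the patent:

```python
# Effective throughput of a x1 PCI Express 2.0 link, matching the
# ~400 Mbyte/s figure in the text. PCIe 2.0 runs at 5 GT/s per lane
# with 8b/10b encoding; the protocol-efficiency factor (TLP/DLLP
# headers, flow control) is an assumption.
PCIE2_GT_PER_S = 5.0                 # transfers per second per lane
ENCODING = 8 / 10                    # 8b/10b line coding
ASSUMED_PROTOCOL_EFFICIENCY = 0.8    # assumption: packet overhead

raw_mbyte_s = PCIE2_GT_PER_S * 1000 / 8 * ENCODING   # 500 Mbyte/s payload ceiling
effective_mbyte_s = raw_mbyte_s * ASSUMED_PROTOCOL_EFFICIENCY

print(f"{effective_mbyte_s:.0f} Mbyte/s")   # 400 Mbyte/s
```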
- the PCI Express switch 402 functions similarly to an Ethernet switching hub to direct data communication between the various PCI Express links. It provides communication among the computer modules 101 , and to one or two x2 PCI Express links that couple to the switch 412 through the midplane 111 of the blade server system 400 .
- two PCI Express switches 402 can be provided on the main circuit board 110 . Each of the PCI Express switches 402 has a x1 PCI Express link to each of the computer modules 101 .
- the PCI switch 402 has a x2 PCI Express link to another PCI Express switch 412 in the chassis 401 through the midplane 111 .
- the PCI Express switch 412 in the chassis 401 then can connect to a 10 GbE controller (not shown) for external network communication.
- Other embodiments are contemplated in which any combination of links of PCI Express Rev. 2 or Rev. 3 with GbE links are used to carry data within the chassis 401 .
- input data 410 to the chassis 401 is processed by the switch 412 to obtain data distributed to each of the main circuit boards 110 .
- the data traverses link 414 .
- the link 414 is a 10 Gigabit Ethernet link or a x2 PCI Express link.
- the switch 402 processes the input data to the main circuit board 110 to obtain requests. In a video streaming application, for example, these requests may be requests for on-demand video programming. These requests are then distributed by the switch 402 to the corresponding computer modules 101 via the link 404 .
- the link 404 includes 2 GbE links or a x1 PCI Express 2.0 link.
- Each of the computer modules 101 generates an output data stream in response to a corresponding request, where the output data stream has a first bandwidth higher than a second bandwidth of the corresponding request.
- the output data stream may be the requested video stream.
- the output data stream originates on the computer module 101 and traverses the link 404 to the switch 402 .
- the switch 402 then aggregates the output data streams from the computer modules 101 to obtain output data that traverses the link 416 to the switch 412 , then is output from the switch 412 as output data 420 .
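The request fan-out and stream fan-in performed by the switch 402 can be sketched as a simple dispatcher. The queue-free round-based model and every name below are illustrative, not the patent's implementation:

```python
# Sketch of the hub's role in FIG. 4: small requests are distributed to
# computer modules, each module answers with a much larger output stream,
# and the hub aggregates the streams into the blade's output data.
def serve_request(module_id: str, request: str) -> bytes:
    # Stand-in for a computer module generating a high-bandwidth stream
    # (e.g. the requested video) in response to a low-bandwidth request.
    return f"{module_id}:{request}".encode() * 1000

def hub_round(requests):
    """Distribute one request per module, then aggregate the output streams."""
    modules = ["101A", "101B", "101C"]            # hypothetical module IDs
    streams = [serve_request(m, r) for m, r in zip(modules, requests)]
    return b"".join(streams)                      # fan-in toward link 416

out = hub_round(["video-7", "video-9", "video-2"])
# Each output stream is far larger than its request, so the aggregated
# output bandwidth exceeds the aggregate request bandwidth.
print(len(out) > sum(len(r) for r in ["video-7", "video-9", "video-2"]))  # True
```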
- each of the computer modules 101 is connected to the management controller 112 by a link 406 .
- the link 406 is a GbE link.
- the management controller 112 may be connected to an external network via links 418 and 419 and the switch 412 , and may be connected to other main circuit boards 110 within the chassis 401 by the links 418 and 419 .
- the links 418 and 419 may be GbE links.
- remote Keyboard/Video/Mouse (KVM) functions for each computer module 101 can be supported through Ethernet communication.
- a Gigabit Ethernet switch 112 on the main circuit board 110 can select KVM from a particular computer module 101 by selecting data from a dedicated Ethernet link 406 (such as a 1 GbE link) from the computer module 101 .
- An administrator on an external network can access the KVM function of each computer module 101 one at a time through the Ethernet switch 112 .
- the multiple computer module server blade 100 can be used for a multiple client blade application.
- Each client can be assigned to one computer module at a time, e.g. time sharing between multiple clients.
- 3D graphics information from multiple computer modules 101 can be directed through the high speed 10 GbE switch 412 .
- an “eight computer module” server blade has essentially eight separate computers that can be assigned individually to each of eight remote clients. If one computer module 101 fails, a client user can be switched to another computer module 101 utilizing the Ethernet switching hub 412 .
- each computer module 101 can support more than one client user at a time through virtualization.
Abstract
A server blade insertable into a chassis of a blade server system includes a main circuit board coupled to the chassis upon insertion, a plurality of connectors residing on the main circuit board, a plurality of grouped hard disk drives, and a plurality of computer modules, each insertable into a corresponding one of the connectors. Each of the grouped hard disk drives couples to one or more of the computer modules. Each of the grouped hard disk drives includes a first hard disk drive exposed proximate to a front side of the chassis, and a second hard disk drive positioned between the first hard disk drive and a back side of the chassis. A subset of the grouped hard disk drives includes a first grouped hard disk drive and a second grouped hard disk drive stacked on the first grouped hard disk drive.
Description
- This application is a continuation of U.S. patent application Ser. No. 13/214,020 filed Aug. 19, 2011, which claims priority to U.S. Provisional Application Ser. No. 61/375,356, filed on Aug. 20, 2010, the contents of which are incorporated herein by reference.
- The present invention relates generally to computer servers and processing. More particularly, the invention relates to low cost, high performance and high data throughput server blades.
- As processing power, memory capacity, and data bandwidth increases, there are limitations on computing efficiency under a single operating system (OS) instance. In the server space, one answer has been virtualization, which allows many OS instances to share the resources of a few large physical servers. However, for many consumers, this high level of computing power may not be necessary. Smaller processors that provide good performance at lower cost can be used to disaggregate the OS instances onto many smaller servers, a concept called physicalization that can be an alternative to virtualization for smaller data centers.
- Certain server applications, such as video streaming, may be suitable for physicalization due to relatively high input/output (I/O) bandwidth requirements coupled with relatively low processing power requirements. However, existing blade servers based on virtualization may not be well suited for these applications, as these blade servers may have higher processing power than needed along with limited I/O bandwidth across the few large physical servers. In addition, the cost of processors and system components for traditional server applications tends to decrease more slowly than the cost of processors and components for high volume consumer applications. Thus, there remains a need in the blade server space for compact, low cost, high data throughput computer modules that incorporate highly integrated consumer processors and system components for applications such as video streaming.
- It is against this background that a need arose to develop the server blade described herein.
- One aspect of the invention relates to a server blade. In one embodiment, a server blade insertable into a chassis of a blade server system comprises: (1) a main circuit board coupled to the chassis upon insertion; (2) a plurality of connectors residing on the main circuit board; (3) a plurality of grouped hard disk drives; and (4) a plurality of computer modules, each insertable into a corresponding one of the plurality of connectors. Each of the plurality of grouped hard disk drives couples to one or more of the plurality of computer modules. Each of the plurality of grouped hard disk drives includes a first hard disk drive exposed proximate to a front side of the chassis, and a second hard disk drive positioned between the first hard disk drive and a back side of the chassis. A first subset of the plurality of grouped hard disk drives includes a first grouped hard disk drive and a second grouped hard disk drive stacked on the first grouped hard disk drive.
- In another embodiment, the server blade insertable into the chassis of the blade server system comprises: (1) a main circuit board that couples to the chassis upon insertion; (2) a plurality of computer modules; (3) a plurality of connectors residing on the main circuit board, each adapted to connect to a corresponding one of the plurality of computer modules; and (4) a plurality of hot-plug hard drive storage modules, each removable from a front side of the server blade while the server blade is installed in the chassis. Each of the plurality of hot-plug hard drive storage modules comprises a frame, a first hard disk drive attached to a front portion of the frame, and a second hard disk drive attached to a rear portion of the frame. Each of the first hard disk drive and the second hard disk drive is coupled to at least one of the plurality of computer modules.
- In another embodiment, the server blade insertable into the chassis of the blade server system comprises: (1) a main circuit board coupled to the chassis upon insertion; (2) a plurality of connectors disposed on the main circuit board; (3) a plurality of computer modules, each insertable into a corresponding one of the plurality of connectors; and (4) a hub disposed on the main circuit board that couples to the chassis and to a communication controller included in each of the plurality of computer modules. The hub processes input data to obtain requests distributed to the plurality of computer modules, each of the plurality of computer modules generates an output data stream in response to a corresponding request, each output data stream has a first bandwidth higher than a second bandwidth of the corresponding request, and the hub aggregates the output data streams of the plurality of computer modules to obtain output data.
-
FIG. 1 illustrates a top view of a server blade, according to an embodiment of the invention; -
FIG. 2 illustrates a front view of a server blade, according to an embodiment of the invention; -
FIG. 3 illustrates a logical view of a computer module, according to an embodiment of the invention; and -
FIG. 4 illustrates a logical view of a blade server system, according to an embodiment of the invention.
- Processors designed for use in high volume consumer applications can provide higher performance per cost than processors designed for low volume, high performance server applications. In addition, aggressive competition in high volume consumer computer systems can drive the cost of processors and components for consumer applications down more rapidly than that of high end server processors and components. Embodiments of the invention include low cost computer modules that incorporate these highly integrated consumer processors and system components while taking advantage of blade server design concepts. The use of a large number of these low cost computer modules within a blade server system for certain server applications, such as video streaming, can result in reduced cost and increased performance.
- Referring to
FIG. 1, a top view of a server blade 100 according to an embodiment of the invention is illustrated. The server blade 100 includes a main circuit board 110 that can couple to a chassis. For example, the main circuit board 110 can insert into connectors 103A-103N coupled to a backplane or midplane 111. Computer modules 101 (101A-101H in the illustrated embodiment) are electrically connected to the main circuit board 110, such as by inserting into connectors (see FIG. 2) on the main circuit board 110. The computer modules 101 are coupled to a management controller 112. In one embodiment, a mezzanine plug-on card 105 is disposed on the main circuit board 110. The mezzanine plug-on card 105 can also insert into one or more connectors 103 to the backplane or midplane 111. - For high volume video-streaming server applications, sufficient hard disk drive data bandwidth and corresponding network bandwidth is specified at each of the
computer modules 101 to support a large number of real-time video streams. The hard disk drive data bandwidth can be provided by coupling each of the computer modules 101 to a corresponding hard disk drive 116 disposed on the server blade 100. In one embodiment, each of the hard disk drives 116 is coupled to a corresponding one of the computer modules 101 through a high speed SATA (Serial Advanced Technology Attachment) Rev. 2 interface. An input/output (IO) hub 324 (see FIG. 3) included in the computer module 101 may provide one SATA interface port. The SATA interface port included in the computer module 101 can connect through connectors and the main circuit board 110 to the corresponding hard disk drive 116. As SATA Rev. 2 can support 2.4 Gb/s of actual transfer rate, and as conventional hard disk drives typically can saturate the original SATA 1.5 Gb/s bandwidth, the bandwidth requirements of each of the computer modules 101 can be met by the corresponding one of the hard disk drives 116 over the SATA Rev. 2 interface, and by corresponding switching bandwidth (see discussion with reference to FIG. 4). In this embodiment, the effective data bandwidth per computer module 101 can be approximately 150 Mbyte/s. - Referring to
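The per-module link budget above can be checked with a short back-of-envelope calculation. This is an illustrative sketch by the editor, not part of the patent; the helper name `sata_payload_MBps` and the 8b/10b line-coding assumption behind the 2.4 Gb/s effective figure are the editor's.

```python
# Illustrative link-budget check for one computer module, using figures quoted
# in the description. The 8b/10b encoding overhead is an assumption; the
# function name is the editor's, not the patent's.

def sata_payload_MBps(line_rate_gbps: float) -> float:
    """Payload bandwidth in MByte/s after SATA's 8b/10b encoding (8/10 efficiency)."""
    return line_rate_gbps * 1000.0 * 8.0 / 10.0 / 8.0

# SATA Rev. 2 signals at 3.0 Gbit/s; 8b/10b leaves 2.4 Gbit/s of payload,
# i.e. 300 MByte/s -- above the ~150 MByte/s a conventional drive sustains.
sata2_MBps = sata_payload_MBps(3.0)
hdd_MBps = 150.0
```

On these numbers the SATA Rev. 2 link has roughly twice the headroom of the drive behind it, which is why the drive, not the interface, sets the ~150 Mbyte/s per-module figure.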
FIGS. 1 and 2, to increase the effective data bandwidth per server blade 100, it may be desirable to increase the number of computer modules 101 and corresponding hard disk drives 116 per server blade 100. At the same time, it may be desirable to design the server blade 100 to comply with industry standards. For example, the Server System Infrastructure (SSI) Forum has provided mechanical, electrical, and power specifications for a standardized server blade. These specifications include a 407.9 mm blade depth (corresponding to a length of lateral sides 122 and 124 of the server blade 100, shown in FIG. 1), a 279.4 mm blade width (corresponding to a length of front side 120 of the server blade 100, shown in FIG. 1), a 41.7 mm blade height (corresponding to a height 224 of the server blade 100, shown in FIG. 2), and a maximum power of 450 Watts. In one embodiment, the front side 120 of the server blade 100 corresponds to a front side of the main circuit board 110, and the lateral sides 122 and 124 of the server blade 100 correspond to lateral sides of the main circuit board 110. - It can be advantageous to design the
server blade 100 to increase the number of computer modules 101 and corresponding hard disk drives 116 per server blade 100, taking into account limitations on blade size associated with mechanical specifications for server blades such as those of SSI, and other considerations such as airflow paths for cooling and operational requirements. In one embodiment, a first side 130 of each computer module 101 has a length of approximately 62 mm, and a second side 132 of each computer module 101 has a length of approximately 86 mm. As shown in FIG. 1, and in an embodiment corresponding to the SSI mechanical specifications, eight computer modules 101A-101H can be coupled to the server blade 100. The second side 132 of the computer module 101A is positioned adjacent to the front side 120 of the server blade 100 so that there is a path for front-to-back airflow over the computer modules 101. In this embodiment, there is then sufficient space along the front side 120 of the server blade 100 to dispose two hard disk drives 116A and 116C at the front side 120. - In one embodiment, each pair of
hard disk drives 116 is included in a corresponding grouped hard disk drive 102. This structural arrangement overcomes the limited front surface area of the server blade 100. For example, grouped hard disk drive 102A includes the hard disk drives 116A and 116B, and grouped hard disk drive 102B includes the hard disk drives 116C and 116D. The hard disk drives 116A and 116C are exposed at the front side 120 of the server blade 100, and therefore proximate to a front side of the chassis to which the server blade 100 is coupled. The hard disk drive 116B may be positioned between the hard disk drive 116A and a back side of the chassis to which the server blade 100 is coupled, and the hard disk drive 116D may be positioned between the hard disk drive 116C and a back side of the chassis to which the server blade 100 is coupled. By orienting the grouped hard disk drives 102A and 102B in this way, four hard disk drives 116 can be positioned adjacent to the main circuit board 110. - In addition, referring to
FIG. 2, a front view of the server blade 100 according to an embodiment of the invention is illustrated. In one embodiment, a grouped hard disk drive 102C may be stacked on the grouped hard disk drive 102A, and a grouped hard disk drive 102D may be stacked on the grouped hard disk drive 102B. The grouped hard disk drives 102C and 102D may be oriented similarly to the grouped hard disk drives 102A and 102B. In this way, eight hard disk drives 116 can be included in the server blade 100 while maintaining the front-to-back airflow path over the computer modules 101, and while staying within the SSI specifications for blade height. Note that for typical hard disk drive sizes, an SSI specified server blade can expose at most four hard disk drives 116, in two stacks of two, at the front side 120 of the main circuit board 110. In one embodiment, eight hard disk drives 116 may be used to support eight computer modules 101 (one hard disk drive 116 per computer module 101). This can yield an aggregate hard disk drive data bandwidth of up to 1.5 Gbyte/s with eight hard disk drives 116 concurrently being accessed by eight corresponding computer modules 101. Alternatively, eight hard disk drives 116 may be used to support four computer modules 101 (for example, the two hard disk drives 116 in one of the grouped hard disk drives 102 per computer module 101). - Referring to
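The aggregate figure above can be reproduced with simple arithmetic. This sketch is the editor's illustration; the assumption that each drive is credited with the original SATA 1.5 Gbit/s line rate is inferred from the quoted total, not stated in the patent.

```python
# Illustrative arithmetic behind the aggregate-bandwidth figure. Assumption
# (the editor's): each of the eight concurrently accessed drives is credited
# with the original SATA 1.5 Gbit/s rate, giving 12 Gbit/s = 1.5 Gbyte/s.

drives = 8
per_drive_gbit_s = 1.5
aggregate_gbyte_s = drives * per_drive_gbit_s / 8.0  # 8 bits per byte
```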
FIG. 1, the grouped hard disk drives 102 may correspond to hot-plug hard drive storage modules 106 that are removable from the front side 120 of the server blade 100. In one embodiment, a hot-plug hard drive storage module 106 includes a grouped hard disk drive 102, and is positioned similarly on the main circuit board 110. Frames 107 can be attached to the main circuit board 110, and each of the hot-plug hard drive storage modules 106 may be adapted to be placed in a corresponding one of the frames 107. Each of the hot-plug hard drive storage modules 106 may include a first hard disk drive (such as hard disk drives 116A and 116C) disposed in a front portion of the frame 107 adjacent to the front side 120 of the server blade 100, and may include a second hard disk drive (such as hard disk drives 116B and 116D) disposed in a rear portion of the frame 107. In one embodiment, each of the hot-plug hard drive storage modules 106 includes a connector that supports two SATA connections for the two hard disk drives 116 to connect to one or two computer modules 101 via the main circuit board 110. In addition, referring to FIG. 2, in one embodiment a hot-plug hard drive storage module 106C may be stacked on the hot-plug hard drive storage module 106A, and a hot-plug hard drive storage module 106D may be stacked on the hot-plug hard drive storage module 106B. - Referring to
FIG. 1, at least one of the computer modules 101 may be positioned between the grouped hard disk drive 102A and the grouped hard disk drive 102B. The grouped hard disk drive 102A may be positioned proximate to the lateral side 124 of the server blade 100, and the grouped hard disk drive 102B may be positioned proximate to the lateral side 122 of the server blade 100. This can allow airflow from the front side of the chassis to pass between the grouped hard disk drive 102A and the grouped hard disk drive 102B, so that the front-to-back airflow passes over the computer modules 101A-101F. In one embodiment, an air baffle 104 disposed on the main circuit board 110 is positioned to direct the front-to-back airflow toward the lateral side 124 of the server blade 100. In this way, airflow can be provided for cooling the computer modules 101G-101H, which are positioned at least partially behind the grouped hard disk drive 102A. In this embodiment, the front-to-back airflow is substantially centrally positioned over the main circuit board 110. This may facilitate efficient direction of the airflow toward the computer modules 101G and 101H proximate to the lateral side 124 of the server blade 100. - Alternatively, the grouped
hard disk drive 102A may be positioned next to the grouped hard disk drive 102B, such that the computer module 101A is positioned adjacent to either the lateral side 122 or the lateral side 124 of the server blade 100. In this embodiment, the front-to-back airflow is substantially laterally positioned over the main circuit board 110. - In one embodiment, operational status indicators of the
hard disk drives 116 that are displaced from the front side 120 of the main circuit board 110 (such as the hard disk drives 116B and 116D) can be provided at a front side 140 of the corresponding grouped hard disk drives 102. For example, a visual indicator (such as an LED indicator) that the hard disk drive 116B is operating may be provided at the front side 140 of the grouped hard disk drive 102A, along with a visual indicator that the hard disk drive 116A is operating. - Referring to
FIGS. 1 and 3, a processor 322 (see FIG. 3) on each of the computer modules 101 may communicate serially (such as via I2C) with the management controller 112 on the main circuit board 110. The management controller 112 can monitor the operational status of the processors 322. If processor failure is detected, the management controller 112 can alert an administrator. Information such as temperature, identification of the computer module 101, and size of memory or storage device can also be communicated serially to the management controller 112. A power switch controlled by the management controller 112 can shut off power to any one of the computer modules 101. If a failed computer module 101 is detected, the management controller 112 can alert higher-level software to shift the workload of the failed computer module 101 to another computer module 101, and can subsequently shut off power to the failed computer module 101 so that it does not affect the operation of the rest of the server blade 100. In one embodiment, the management controller 112 supports at least one of 1:1, 1+1, and N+1 redundancy of the computer modules 101. Alternatively or in addition, the management controller 112 can support load-balancing between two or more of the computer modules 101. - In one embodiment, a
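The failover sequence just described can be sketched in Python. All names here (`ComputerModule`, `ManagementController`, `handle_failure`) are the editor's illustrations; the patent describes the behavior, not an implementation, and real status reporting would travel over the per-module I2C links.

```python
# Illustrative sketch of the management controller's failover policy: shift a
# failed module's workload to a healthy module, then cut its power via the
# controller's power switch. Class and method names are the editor's.

class ComputerModule:
    def __init__(self, slot: int):
        self.slot = slot
        self.healthy = True
        self.powered = True
        self.workload = []

class ManagementController:
    def __init__(self, modules):
        self.modules = modules

    def handle_failure(self, failed: ComputerModule) -> None:
        # The "alert higher-level software" step is modeled directly as the
        # workload hand-off to the first healthy spare.
        spare = next(m for m in self.modules if m.healthy and m is not failed)
        spare.workload.extend(failed.workload)
        failed.workload = []
        failed.powered = False  # power switch controlled by the controller

modules = [ComputerModule(i) for i in range(8)]
modules[2].workload = ["stream-A", "stream-B"]
modules[2].healthy = False
ManagementController(modules).handle_failure(modules[2])
```

The same hand-off primitive covers the 1:1, 1+1, and N+1 redundancy modes mentioned above; only the choice of spare differs.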
battery 114 on the main circuit board 110 can provide power to each of the computer modules 101 for maintaining data in memory, such as a static CMOS memory, or for keeping a portion of circuitry on each of the computer modules 101 active when the remainder of the circuitry on the computer modules 101 is powered down. The power from the battery 114 can be kept on even if the main power to one or more of the computer modules 101 is shut off by the management controller 112 to save power when the one or more of the computer modules 101 are not in use. In one embodiment, two batteries 114 can reside on the main circuit board 110 so that battery power is continuously available during replacement of one of the two batteries 114. - Embodiments of the present invention can use different numbers of
computer modules 101 to populate the server blade 100. Other embodiments can use server blades 100 of different form factors, electrical specifications, and power specifications. An embodiment of the present invention uses plug-in computer modules 101 to simplify manufacturing and facilitate ease of repair. In this embodiment, if a computer module 101 fails, only the failed computer module 101 needs to be unplugged and replaced, saving the rest of the server blade 100. This also allows the server blade 100 to be populated partially with computer modules 101, with the option of plugging in additional computer modules 101 later. - Referring to
FIG. 2, the grouped hard disk drives 102A and 102C may be proximate to the lateral side 124 of the server blade 100, and the grouped hard disk drives 102B and 102D may be proximate to the lateral side 122 of the server blade 100. The computer module 101A may be positioned between the grouped hard disk drive 102A and the grouped hard disk drive 102B. In one embodiment, a connector 212 is disposed on the main circuit board 110. The computer module 101A is insertable into the connector 212, which is adapted to connect to the computer module 101A and to couple the computer module 101A to the main circuit board 110. In one embodiment, the connector 212 may be a vertical connector. There is a similar connector (not shown) corresponding to each of the computer modules 101 that is adapted to couple the corresponding one of the computer modules 101 to the main circuit board 110. - Referring to
FIG. 3, a logical view of the computer module 101 according to an embodiment of the invention is illustrated. The computer module 101 includes an integrated system on chip 321 comprising a processor 322 and a memory controller 323, a main memory 327 coupled to the memory controller 323, an input/output hub 324, a communication controller 325, and a mass storage device 326. In one embodiment, the main memory 327 is directly coupled to the memory controller 323. - Referring to
FIGS. 1-3, in one embodiment, the computer module 101 is low in height to fit within the SSI height limitation of 41.7 mm. It is contemplated that the computer module 101 may be of even lower height. One embodiment of the present invention uses a horizontally fitted double data rate (DDR) DDR2/3 small outline dual in-line memory module (SODIMM) as the main memory 327 within the computer module 101 to meet the SSI height limitation. The SODIMM memory may be a plug-in unit to improve reusability. Another embodiment uses a DDR2/3 Micro-DIMM as the main memory 327 to reduce the size of the computer module 101. To reduce power consumption, embodiments of the invention can use low power double data rate (LPDDR) LPDDR2 memory, or future generations of low power DDR memory with low-voltage-swing LVDS (low-voltage differential signaling) data links. In one embodiment, the processor 322 may be a low power processor or system chip, e.g., system on chip (SOC), that can operate with a low profile top mounted heat sink to fit within the SSI height limitation. The low profile heat sink can be sufficient for air cooling of low power system chips. The computer module 101 can be a small printed circuit board populated on both sides with major components such as a system chip, the input/output hub 324, a horizontal SODIMM memory socket, the connector 212 (see FIG. 2) to the main circuit board 110 of the server blade 100 (see FIG. 1), and a USB or SATA flash drive as the mass storage device 326. The flash drive may be a plug-in unit to improve reusability. The system chip can be soldered directly on the small printed circuit board to reduce cost and to remove the additional height of an expensive socket. Without the socket, the top mounted heat sink can increase in height to improve cooling for the system chip. - In one embodiment, the
computer module 101 includes one USB flash drive or one solid state drive (SSD). USB 3.0, released in 2008, has a signaling rate of 4.8 Gbit/s versus 480 Mbit/s for USB 2.0. In one embodiment, a USB 3.0 flash drive interfaces to the input/output hub 324 or the processor 322 in the computer module 101. In another embodiment, a USB flash drive or a SATA SSD can serve as local cache on the computer module 101 to store frequently accessed content and video streams. USB 3.0 connections can have an effective data bandwidth of over 2.4 Gb/s, or 300 MByte/s. A single SATA SSD can yield an effective data bandwidth of around 150 to 300 MByte/s. In one embodiment, the computer module 101 includes a flash drive or a local SSD as cache. This can provide a higher storage data bandwidth than the hard disk drives 116 included in the server blade 100 (see FIG. 1). In other embodiments, two USB flash drives or two SATA SSDs can be included in a single computer module 101 to further increase data bandwidth. Either the USB flash drive or the SATA SSD can be used to store operating system or virtualization software to allow the computer module 101 to boot up upon power up. - Referring to
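The storage-tier comparison above can be summarized numerically. This sketch is the editor's illustration; the helper name `best_cache` and the chosen values (the low end of each quoted range for the SATA devices) are assumptions, not figures fixed by the patent.

```python
# Illustrative comparison of effective storage bandwidth, in MByte/s, using
# figures quoted in the description. Values and the helper name are the
# editor's assumptions (low end of each quoted range for the SATA devices).

effective_MBps = {
    "usb3_flash": 300.0,  # "over 2.4 Gb/s" effective -> 300 MByte/s
    "sata_ssd": 150.0,    # low end of the 150-300 MByte/s SSD range
    "sata_hdd": 150.0,    # conventional drive over SATA, per the description
}

def best_cache(options):
    """Pick the highest-bandwidth device to act as the local content cache."""
    return max(options, key=lambda name: options[name])
```

On these figures the USB 3.0 flash drive is the natural cache tier, which matches the description's rationale for caching frequently accessed video streams locally.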
FIG. 4, a logical view of a blade server system 400 according to an embodiment of the invention is illustrated. One or more main circuit boards 110 couple to a chassis 401 upon insertion into connectors 103 of the midplane or backplane 111. The main circuit boards 110 can be powered, at least in part, by the power supply 422. The computer modules 101 each insert into a corresponding connector 212, and are each coupled to a hard disk drive 116. The computer modules 101 can be powered, at least in part, by the power supply 422 via the power regulator 424. - Referring to
FIGS. 3 and 4, in one embodiment, a hub 402 on the main circuit board 110 couples to the communication controller 325 on each of the computer modules 101. The hub 402 can include switches such as an Ethernet switch or a PCI Express switch. Similarly, the hub 412 can include switches such as a 10 Gigabit Ethernet (10 GbE) switch or a PCI Express switch. - In one embodiment, two 1 Gigabit Ethernet (GbE) connections are provided from each
computer module 101 to the main circuit board 110. The two GbE connections can provide approximately 200 Mbyte/s of network bandwidth. These GbE links from each computer module 101 connect to the Ethernet switching hub 402 on the main circuit board 110 with separate connections. The Ethernet switch 402 can have 16 GbE ports and two 10 GbE ports. The 10 GbE ports can connect to the 10 GbE switch 412 within the console midplane 111. - Referring to
FIGS. 3 and 4, in another embodiment, high speed PCI Express channels are provided from either the system chip 321 or the input/output hub 324 to provide data communication to the main circuit board 110, and eventually to an external network. PCI Express 2.0 can have an effective data throughput of 400 Mbyte/s per link. PCI Express 3.0 can have an effective per link data throughput about twice that of PCI Express Rev. 2.0. In one embodiment, a x1 PCI Express 2.0 link is provided from the computer module 101, coupled through the connector 212 to the PCI Express switch 402 on the main circuit board 110 of the server blade 100 (see FIG. 1), to serve as the communication channel. A x1 PCI Express 2.0 channel can provide an approximately 400 Mbyte/s data transfer rate, sufficient to handle the storage data transfer rate of both the local flash drive and the external SATA hard disk drive 116 for the computer module 101. The PCI Express switch 402 functions similarly to an Ethernet switching hub to direct data communication between the various PCI Express links. It provides communication between the computer modules 101 and to one or two x2 PCI Express links that couple to the switch 412 through the midplane 111 of the blade server system 400. To support fault tolerance, two PCI Express switches 402 can be provided on the main circuit board 110. Each of the PCI Express switches 402 has a x1 PCI Express link to each of the computer modules 101. The PCI Express switch 402 has a x2 PCI Express link to another PCI Express switch 412 in the chassis 401 through the midplane 111. The PCI Express switch 412 in the chassis 401 can then connect to a 10 GbE controller (not shown) for external network communication. Other embodiments are contemplated in which any combination of PCI Express Rev. 2 or Rev. 3 links and GbE links is used to carry data within the chassis 401. - In one embodiment,
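The per-link budget above can be tabulated in a short sketch. The constants restate figures quoted in the description; the constant names and the side-by-side comparison are the editor's illustration, not the patent's.

```python
# Illustrative per-link budget using throughput figures from the description.
# Constant names are the editor's; values are as quoted in the text.

PCIE2_X1_MBps = 400.0              # effective x1 PCI Express 2.0 throughput
PCIE2_X2_MBps = 2 * PCIE2_X1_MBps  # x2 uplink toward the chassis switch 412
FLASH_MBps = 300.0                 # local USB 3.0 flash / SSD cache
HDD_MBps = 150.0                   # external SATA hard disk drive

# A x1 link covers either storage stream on its own; sustained peak access to
# both at once (450 MByte/s) would slightly exceed one x1 link, so the text's
# "both" presumably assumes the two streams are not simultaneously at peak.
covers_each = PCIE2_X1_MBps >= max(FLASH_MBps, HDD_MBps)
covers_both_at_peak = PCIE2_X1_MBps >= FLASH_MBps + HDD_MBps
```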
input data 410 to the chassis 401 is processed by the switch 412 to obtain data distributed to each of the main circuit boards 110. Upon arrival at a main circuit board 110, the data traverses link 414. In one embodiment, the link 414 is a 10 Gigabit Ethernet link or a x2 PCI Express link. The switch 402 processes the input data to the main circuit board 110 to obtain requests. In a video streaming application, for example, these requests may be requests for on-demand video programming. These requests are then distributed by the switch 402 to the corresponding computer modules 101 via the link 404. In one embodiment, the link 404 includes two GbE links or a x1 PCI Express 2.0 link. Each of the computer modules 101 generates an output data stream in response to a corresponding request, where the output data stream has a first bandwidth higher than a second bandwidth of the corresponding request. For example, the output data stream may be the requested video stream. In one embodiment, the output data stream originates on the computer module 101 and traverses the link 404 to the switch 402. The switch 402 then aggregates the output data streams from the computer modules 101 to obtain output data that traverses the link 416 to the switch 412, and is then output from the switch 412 as output data 420. - In one embodiment, each of the
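The request fan-out and stream aggregation just described can be sketched as follows. The round-robin assignment and all names are the editor's assumptions; the patent specifies the asymmetry (small requests in, much larger streams out), not a particular distribution policy.

```python
# Illustrative sketch of the hub's role: small requests fan out to computer
# modules, and the much larger response streams are aggregated back. The
# round-robin policy and names are the editor's assumptions.

from itertools import cycle

def distribute_and_aggregate(requests, num_modules, stream_bytes=1 << 20):
    """Assign requests round-robin; return (assignments, total output bytes)."""
    assignments = {m: [] for m in range(num_modules)}
    for module, request in zip(cycle(range(num_modules)), requests):
        assignments[module].append(request)
    # Each request yields an output stream far larger than the request itself,
    # so aggregate output bandwidth dwarfs aggregate request bandwidth.
    total_output = len(requests) * stream_bytes
    return assignments, total_output
```

For eight modules and a burst of requests, each module receives roughly one eighth of the load, and the hub's uplink carries the sum of the per-module output streams.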
computer modules 101 is connected to the management controller 112 by a link 406. In one embodiment, the link 406 is a GbE link. The management controller 112 may be connected to an external network via links 418 and 419 and the switch 412, and may be connected to other main circuit boards 110 within the chassis 401 by the links 418 and 419. In one embodiment, the links 418 and 419 may be GbE links. - In one embodiment, remote Keyboard/Video/Mouse (KVM) functions for each
computer module 101 can be supported through Ethernet communication. A Gigabit Ethernet switch 112 on the main circuit board 110 can select KVM from a particular computer module 101 by selecting data from a dedicated Ethernet link 406 (such as a 1 GbE link) from the computer module 101. An administrator on an external network can access the KVM function of each computer module 101 one at a time through the Ethernet switch 112. - Referring to
FIG. 1, in another embodiment, the multiple computer module server blade 100 can be used for a multiple client blade application. Each client can be assigned to one computer module at a time, e.g., time sharing between multiple clients. Transmitting compressed high performance three-dimensional (3D) graphics information from the computer module 101 to a remote client demands high network bandwidth. 3D graphics information from multiple computer modules 101 can be directed through the high speed 10 GbE switch 412. In addition, an "eight computer module" server blade has essentially eight separate computers that can be assigned individually to each of eight remote clients. If one computer module 101 fails, a client user can be switched to another computer module 101 utilizing the Ethernet switching hub 412. In one embodiment, each computer module 101 can support more than one client user at a time through virtualization.
- The figures provided are merely representational and may not be drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. The figures are intended to illustrate various implementations of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.
- The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that these specific details are not required in order to practice the invention. Thus, the foregoing descriptions of specific embodiments of the invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention.
Claims (25)
1. A modular server, comprising:
a main circuit board housed in a chassis, comprising
a plurality of connectors residing on the main circuit board, each adapted to connect to a computer module; and
a plurality of computer modules, each insertable into one of the plurality of connectors, each computer module comprising
an integrated system on a chip (SoC) comprising a processor and a memory controller;
a main memory coupled to the memory controller;
a first low voltage differential signal (LVDS) channel directly extending from the SoC, the first LVDS channel comprising two unidirectional, serial bit channels to transmit data in opposite directions, wherein the first LVDS channel is configured to convey a serial bit stream of address and data bits of a Peripheral Component Interconnect (PCI) bus transaction; and
a second LVDS channel adapted to couple to the main circuit board through one of the plurality of connectors, comprising two unidirectional, serial bit channels to transmit data in opposite directions, wherein the second LVDS channel conveys a serial bit stream of address and data bits of a PCI bus transaction.
2. The modular server of claim 1 wherein the main memory comprises a Small Outline Dual In-line Memory Module (SODIMM) Double Data Rate (DDR) memory socket and a SODIMM DDR memory module.
3. The modular server of claim 1 wherein each computer module further comprises a third LVDS channel adapted to couple to the main circuit board through one of the plurality of connectors, comprising two unidirectional, serial bit channels to transmit data in opposite directions, and wherein the third LVDS channel conveys Ethernet protocol traffic.
4. The modular server of claim 1 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) bus directly extending from the SoC, wherein the SATA bus couples to the main circuit board through one of the plurality of connectors of the computer module.
5. The modular server of claim 1 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) flash drive.
6. A modular server, comprising:
a main circuit board housed in a chassis comprising
a plurality of connectors residing on the main circuit board, each adapted to connect to a computer module; and
a plurality of computer modules, each insertable into one of the plurality of connectors, each computer module comprising
an integrated system on a chip (SoC) comprising a processor, a graphics controller and a memory controller;
a main memory directly coupled to the integrated SoC;
a serial communication link I2C bus directly extending from the SoC, coupled to the main circuit board through one of the plurality of connectors;
a first low voltage differential signal (LVDS) channel directly extending from the SoC, the first LVDS channel comprising two unidirectional, serial bit channels to convey data in opposite directions, wherein the LVDS channel is configured to output a serial bit stream of address and data bits of a Peripheral Component Interconnect (PCI) bus transaction, wherein the first LVDS channel couples directly to the main circuit board; and
a second LVDS channel comprising two unidirectional, serial bit channels to convey data in opposite directions, wherein the second LVDS channel couples to the main circuit board.
7. The modular server of claim 6 wherein the main memory comprises a Small Outline Dual In-line Memory Module (SODIMM) Double Data Rate (DDR) memory socket and a SODIMM DDR memory module.
8. The modular server of claim 6 wherein the second LVDS channel conveys Ethernet protocol traffic.
9. The modular server of claim 6 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) bus directly extending from the SoC, wherein the SATA bus directly couples to the main circuit board through one of the plurality of connectors.
10. The modular server of claim 6 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) flash drive.
11. A modular server, comprising:
a chassis comprising a power supply; and
a main circuit board housed in the chassis, and coupled to the power supply, comprising
a plurality of connectors residing on the main circuit board, each adapted to connect to a computer module;
a plurality of computer modules, each insertable into one of the plurality of connectors for operation and for receiving power from the power supply, each computer module comprising
an integrated system on a chip (SoC) comprising a processor and a memory controller;
a main memory directly coupled to the memory controller, comprising a Small Outline Dual In-line Memory Module (SODIMM) Double Data Rate (DDR) memory socket and a SODIMM DDR memory module;
a first low voltage differential signal (LVDS) channel directly extending from the SoC, the first LVDS channel comprising two unidirectional, serial bit channels to convey data in opposite directions, wherein the LVDS channel is configured to output a serial bit stream of address and data bits of a Peripheral Component Interconnect (PCI) bus transaction, wherein the first LVDS channel couples directly to the main circuit board.
12. The modular server of claim 11 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) bus, wherein the SATA bus connects to the main circuit board through one of the plurality of connectors of the computer module.
13. The modular server of claim 12 wherein the SATA bus connects to a SATA disk drive coupled to the main circuit board.
14. The modular server of claim 12 wherein the computer module further comprises a second LVDS channel adapted to couple to the main circuit board through one of the plurality of connectors, wherein the second LVDS channel comprises two unidirectional, serial bit channels to transmit data in opposite directions, and wherein the second LVDS channel conveys Ethernet protocol traffic.
15. The modular server of claim 11 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) flash drive.
16. A modular server, comprising:
a main circuit board housed in a chassis comprising
a plurality of connectors residing on the main circuit board, each adapted to connect to a computer module;
an Ethernet Switching Hub coupled to the main circuit board;
a plurality of computer modules, each insertable into one of the plurality of connectors, each computer module comprising
an integrated system on a chip (SoC) comprising a processor, a graphics controller and a memory controller;
a main memory directly coupled to the memory controller;
a first low voltage differential signal (LVDS) channel connected to the main circuit board through one of the plurality of connectors, wherein the first LVDS channel comprises two unidirectional, serial bit channels to convey data in opposite directions, wherein the LVDS channel is configured to output a serial bit stream of address and data bits of a Peripheral Component Interconnect (PCI) bus transaction; and
a second LVDS channel adapted to convey Ethernet protocol traffic to the Ethernet Switching Hub with a point-to-point connection through one of the plurality of connectors, wherein the second LVDS channel comprises two unidirectional, serial bit channels to transmit data in opposite directions.
17. The modular server of claim 16 wherein the main memory comprises a Small Outline Dual In-line Memory Module (SODIMM) Double Data Rate (DDR) memory socket and a SODIMM DDR memory module.
18. The modular server of claim 16 wherein each computer module further comprises a serial communication link I2C bus extending directly from the integrated SoC, adapted to connect to the main circuit board.
19. The modular server of claim 16 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) bus interface directly extending from the integrated SoC, wherein the SATA bus interface directly couples to the main circuit board through one of the plurality of connectors.
20. The modular server of claim 16 wherein one of the computer modules further comprises a Serial Advanced Technology Attachment (SATA) flash drive.
21. A modular server, comprising:
a chassis comprising a power supply; and
a main circuit board housed in the chassis, comprising
a plurality of connectors residing on the main circuit board, each adapted to connect to a computer module;
a plurality of computer modules, each insertable into one of the plurality of connectors for operation and for receiving power from the power supply, each computer module comprising
an integrated system on a chip (SoC) comprising a processor, a graphics controller and a memory controller;
a main memory directly coupled to the memory controller;
a Serial Advanced Technology Attachment (SATA) bus interface directly extending from the integrated SoC, wherein the SATA bus interface directly couples to the main circuit board through one of the plurality of connectors; and
a first low voltage differential signal (LVDS) channel adapted to connect to the main circuit board, comprising two unidirectional, serial bit channels to convey data in opposite directions, wherein the first LVDS channel is configured to output a serial bit stream of address and data bits of a Peripheral Component Interconnect (PCI) bus transaction.
22. The modular server of claim 21 wherein the SATA bus interface connects to a SATA disk drive coupled to the main circuit board.
23. The modular server of claim 21 wherein the main memory comprises a Small Outline Dual In-line Memory Module (SODIMM) Double Data Rate (DDR) memory socket and a SODIMM DDR memory module.
24. The modular server of claim 21 wherein one of the computer modules further comprises a SATA flash drive.
25. The modular server of claim 21 wherein the computer module further comprises a serial communication link I2C bus extending directly from the integrated SoC, adapted to connect to the main circuit board.
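The claims above repeatedly recite a first LVDS channel configured to "output a serial bit stream of address and data bits" of a PCI bus transaction. As an illustrative software sketch of that serialization step (not part of the patent: the 32-bit field widths, LSB-first bit order, and function names here are assumptions for illustration only):

```python
def serialize_pci_transaction(address: int, data: int) -> list[int]:
    """Flatten a 32-bit address word and a 32-bit data word into one
    serial bit stream (LSB first), as a SERDES front-end might before
    driving a unidirectional LVDS pair."""
    bits = []
    for word in (address, data):
        for i in range(32):
            bits.append((word >> i) & 1)
    return bits


def deserialize_pci_transaction(bits: list[int]) -> tuple[int, int]:
    """Inverse operation: rebuild the address and data words at the
    receiving end of the channel."""
    words = []
    for w in range(2):
        value = 0
        for i in range(32):
            value |= bits[w * 32 + i] << i
        words.append(value)
    return words[0], words[1]


addr, data = 0x8000_0040, 0xDEAD_BEEF
stream = serialize_pci_transaction(addr, data)
assert len(stream) == 64                                  # 2 x 32 bits
assert deserialize_pci_transaction(stream) == (addr, data)  # round trip
```

In the claimed hardware this framing is done by dedicated serializer/deserializer logic rather than software; the sketch only shows the bit-level transformation the first LVDS channel is recited as performing.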
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/198,510 US20150254205A1 (en) | 2010-08-20 | 2014-03-05 | Low Cost, High Performance and High Data Throughput Server Blade |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US37535610P | 2010-08-20 | 2010-08-20 | |
US13/214,020 US8671153B1 (en) | 2010-08-20 | 2011-08-19 | Low cost, high performance and high data throughput server blade |
US14/198,510 US20150254205A1 (en) | 2010-08-20 | 2014-03-05 | Low Cost, High Performance and High Data Throughput Server Blade |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150254205A1 true US20150254205A1 (en) | 2015-09-10 |
Family
ID=50192845
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/214,020 Expired - Fee Related US8671153B1 (en) | 2010-08-20 | 2011-08-19 | Low cost, high performance and high data throughput server blade |
US14/198,510 Abandoned US20150254205A1 (en) | 2010-08-20 | 2014-03-05 | Low Cost, High Performance and High Data Throughput Server Blade |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/214,020 Expired - Fee Related US8671153B1 (en) | 2010-08-20 | 2011-08-19 | Low cost, high performance and high data throughput server blade |
Country Status (1)
Country | Link |
---|---|
US (2) | US8671153B1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6643777B1 (en) | 1999-05-14 | 2003-11-04 | Acquis Technology, Inc. | Data security method and device for computer modules |
US6718415B1 (en) | 1999-05-14 | 2004-04-06 | Acqis Technology, Inc. | Computer system and method including console housing multiple computer modules having independent processing units, mass storage devices, and graphics controllers |
US9456506B2 (en) | 2013-12-20 | 2016-09-27 | International Business Machines Corporation | Packaging for eight-socket one-hop SMP topology |
US10366036B2 (en) * | 2014-04-04 | 2019-07-30 | Hewlett Packard Enterprise Development Lp | Flexible input/output zone in a server chassis |
JP6900233B2 (en) * | 2017-05-01 | 2021-07-07 | Dynabook株式会社 | Computer systems and electronics |
CN115705270A (en) * | 2021-08-06 | 2023-02-17 | 富联精密电子(天津)有限公司 | Hard disk in-place detection device and method |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040268015A1 (en) * | 2003-01-21 | 2004-12-30 | Nextio Inc. | Switching apparatus and method for providing shared I/O within a load-store fabric |
US7480303B1 (en) * | 2005-05-16 | 2009-01-20 | Pericom Semiconductor Corp. | Pseudo-ethernet switch without ethernet media-access-controllers (MAC's) that copies ethernet context registers between PCI-express ports |
US20090157858A1 (en) * | 2007-12-15 | 2009-06-18 | International Business Machines Corporation | Managing Virtual Addresses Of Blade Servers In A Data Center |
US20090164684A1 (en) * | 2007-12-20 | 2009-06-25 | International Business Machines Corporation | Throttling A Point-To-Point, Serial Input/Output Expansion Subsystem Within A Computing System |
US20090292854A1 (en) * | 2008-05-22 | 2009-11-26 | Khoo Ken | Use of bond option to alternate between pci configuration space |
US20100082874A1 (en) * | 2008-09-29 | 2010-04-01 | Hitachi, Ltd. | Computer system and method for sharing pci devices thereof |
US20100167557A1 (en) * | 2008-12-29 | 2010-07-01 | Virtium Technology, Inc. | Multi-function module |
US7783818B1 (en) * | 2007-12-28 | 2010-08-24 | Emc Corporation | Modularized interconnect between root complexes and I/O modules |
US20110145618A1 (en) * | 2009-12-11 | 2011-06-16 | International Business Machines Corporation | Reducing Current Draw Of A Plurality Of Solid State Drives At Computer Startup |
US20120072633A1 (en) * | 2010-09-22 | 2012-03-22 | Wilocity, Ltd. | Hot Plug Process in a Distributed Interconnect Bus |
US8230145B2 (en) * | 2007-07-31 | 2012-07-24 | Hewlett-Packard Development Company, L.P. | Memory expansion blade for multiple architectures |
US20130013957A1 (en) * | 2011-07-07 | 2013-01-10 | International Business Machines Corporation | Reducing impact of a switch failure in a switch fabric via switch cards |
US20130346665A1 (en) * | 2012-06-20 | 2013-12-26 | International Business Machines Corporation | Versatile lane configuration using a pcie pie-8 interface |
US20140059266A1 (en) * | 2012-08-24 | 2014-02-27 | Simoni Ben-Michael | Methods and apparatus for sharing a network interface controller |
US8739179B2 (en) * | 2008-06-30 | 2014-05-27 | Oracle America Inc. | Method and system for low-overhead data transfer |
US20150039871A1 (en) * | 2013-07-31 | 2015-02-05 | Sudhir V. Shetty | Systems And Methods For Infrastructure Template Provisioning In Modular Chassis Systems |
Family Cites Families (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5086499A (en) | 1989-05-23 | 1992-02-04 | Aeg Westinghouse Transportation Systems, Inc. | Computer network for real time control with automatic fault identification and by-pass |
US5689654A (en) | 1992-06-29 | 1997-11-18 | Elonex F.P. Holdings, Ltd. | Digital assistant system including a host computer with a docking bay for the digital assistant wherein a heat sink is moved into contact with a docked digital assistant for cooling the digital assistant |
US5640302A (en) | 1992-06-29 | 1997-06-17 | Elonex Ip Holdings | Modular portable computer |
DE59209363D1 (en) | 1992-10-12 | 1998-07-09 | Leunig Gmbh | Device for the optional data transfer and file transfer |
US5764924A (en) | 1995-08-24 | 1998-06-09 | Ncr Corporation | Method and apparatus for extending a local PCI bus to a remote I/O backplane |
US5721842A (en) | 1995-08-25 | 1998-02-24 | Apex Pc Solutions, Inc. | Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch |
US5815681A (en) | 1996-05-21 | 1998-09-29 | Elonex Plc Ltd. | Integrated network switching hub and bus structure |
KR100287091B1 (en) | 1996-08-19 | 2001-04-16 | 포만 제프리 엘 | Single Pointer / Keyboard for Multiple Computers |
US6029183A (en) | 1996-08-29 | 2000-02-22 | Xybernaut Corporation | Transferable core computer |
US5999952A (en) | 1997-08-15 | 1999-12-07 | Xybernaut Corporation | Core computer unit |
JP3617884B2 (en) | 1996-09-18 | 2005-02-09 | 株式会社東芝 | Portable information equipment |
US6038621A (en) | 1996-11-04 | 2000-03-14 | Hewlett-Packard Company | Dynamic peripheral control of I/O buffers in peripherals with modular I/O |
AU6260398A (en) | 1997-02-03 | 1998-08-25 | Curt A. Schmidt | A computer system using a plurality of remote workstations and centralized computer modules |
US5971804A (en) | 1997-06-30 | 1999-10-26 | Emc Corporation | Backplane having strip transmission line ethernet bus |
US6304895B1 (en) | 1997-08-22 | 2001-10-16 | Apex Inc. | Method and system for intelligently controlling a remotely located computer |
US6332180B1 (en) | 1998-06-10 | 2001-12-18 | Compaq Information Technologies Group, L.P. | Method and apparatus for communication in a multi-processor computer system |
US6202169B1 (en) | 1997-12-31 | 2001-03-13 | Nortel Networks Corporation | Transitioning between redundant computer systems on a network |
DE19805299A1 (en) | 1998-02-10 | 1999-08-12 | Deutz Ag | Electronic control device |
US6025989A (en) | 1998-04-21 | 2000-02-15 | International Business Machines Corporation | Modular node assembly for rack mounted multiprocessor computer |
US6216185B1 (en) | 1998-05-01 | 2001-04-10 | Acqis Technology, Inc. | Personal computer peripheral console with attached computer module |
US6345330B2 (en) | 1998-05-01 | 2002-02-05 | Acqis Technology, Inc. | Communication channel and interface devices for bridging computer interface buses |
US6378009B1 (en) | 1998-08-25 | 2002-04-23 | Avocent Corporation | KVM (keyboard, video, and mouse) switch having a network interface circuit coupled to an external network and communicating in accordance with a standard network protocol |
US6161157A (en) | 1998-10-27 | 2000-12-12 | Intel Corporation | Docking system |
US6321335B1 (en) | 1998-10-30 | 2001-11-20 | Acqis Technology, Inc. | Password protected modular computer method and device |
US6311268B1 (en) | 1998-11-06 | 2001-10-30 | Acqis Technology, Inc. | Computer module device and method for television use |
TW392111B (en) | 1998-12-16 | 2000-06-01 | Mustek Systems Inc | Sharing system for sharing peripheral device via network |
US6314522B1 (en) | 1999-01-13 | 2001-11-06 | Acqis Technology, Inc. | Multi-voltage level CPU module |
GB2350212B (en) | 1999-02-09 | 2003-10-08 | Adder Tech Ltd | Data routing device and system |
US6453344B1 (en) | 1999-03-31 | 2002-09-17 | Amdahl Corporation | Multiprocessor servers with controlled numbered of CPUs |
US6643777B1 (en) | 1999-05-14 | 2003-11-04 | Acquis Technology, Inc. | Data security method and device for computer modules |
US6718415B1 (en) | 1999-05-14 | 2004-04-06 | Acqis Technology, Inc. | Computer system and method including console housing multiple computer modules having independent processing units, mass storage devices, and graphics controllers |
US6452790B1 (en) | 1999-07-07 | 2002-09-17 | Acquis Technology, Inc. | Computer module device and method |
US6430000B1 (en) * | 2000-04-13 | 2002-08-06 | General Dynamics Information Systems, Inc. | Hermetically sealed plural disk drive housing |
US20060265361A1 (en) | 2005-05-23 | 2006-11-23 | Chu William W | Intelligent search agent |
US20090083811A1 (en) * | 2007-09-26 | 2009-03-26 | Verivue, Inc. | Unicast Delivery of Multimedia Content |
US7822895B1 (en) * | 2007-12-28 | 2010-10-26 | Emc Corporation | Scalable CPU (central processing unit) modules for enabling in-place upgrades of electronics systems |
US8289692B2 (en) * | 2008-03-14 | 2012-10-16 | Hewlett-Packard Development Company, L.P. | Blade server for increased processing capacity |
TWI439843B (en) * | 2008-04-23 | 2014-06-01 | Ibm | Printed circuit assembly with automatic selection of storage configuration based on installed paddle board |
- 2011-08-19: US 13/214,020 filed; patented as US8671153B1 (not active, Expired - Fee Related)
- 2014-03-05: US 14/198,510 filed; published as US20150254205A1 (not active, Abandoned)
Non-Patent Citations (1)
Title |
---|
PCI Express Base Specification Revision 3.0, November 10, 2010 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180011811A1 (en) * | 2015-01-28 | 2018-01-11 | Hewlett-Packard Development Company, L.P. | Redirection of lane resources |
US10210128B2 (en) * | 2015-01-28 | 2019-02-19 | Hewlett-Packard Development Company, L.P. | Redirection of lane resources |
GB2552208A (en) * | 2016-07-14 | 2018-01-17 | Nebra Micro Ltd | Clustering system |
US20180062293A1 (en) * | 2016-08-23 | 2018-03-01 | American Megatrends, Inc. | Backplane controller module using small outline dual in-line memory module (sodimm) connector |
US10044123B2 (en) * | 2016-08-23 | 2018-08-07 | American Megatrends, Inc. | Backplane controller module using small outline dual in-line memory module (SODIMM) connector |
Also Published As
Publication number | Publication date |
---|---|
US8671153B1 (en) | 2014-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8671153B1 (en) | Low cost, high performance and high data throughput server blade | |
US11615044B2 (en) | Graphics processing unit peer-to-peer arrangements | |
GB2524140A (en) | Computer system with groups of processor boards | |
US20020080575A1 (en) | Network switch-integrated high-density multi-server system | |
US8599564B2 (en) | Server architecture | |
US20130077223A1 (en) | Server | |
US11100040B2 (en) | Modular remote direct memory access interfaces | |
US10545901B2 (en) | Memory card expansion | |
KR102146301B1 (en) | Two-headed switch including a drive bay for fabric-attached devices | |
US10251303B2 (en) | Server display for displaying server component information | |
US20150089100A1 (en) | Inter-device data-transport via memory channels | |
CN110134206B (en) | Computing board card | |
WO2018011425A1 (en) | Clustering system | |
CN110806989A (en) | Storage server | |
CN115481068B (en) | Server and data center | |
CN113741642B (en) | High-density GPU server | |
CN113552926B (en) | Cable module | |
CN209248518U (en) | A kind of solid state hard disk expansion board clamping and server | |
CN209879419U (en) | Calculation board card | |
CN205229909U (en) | Power backplate based on multi -path server computer board and interconnection integrated circuit board | |
CN217847021U (en) | AI edge server system architecture with high performance computing power | |
CN218499155U (en) | Network switch and data center | |
US9166316B2 (en) | Data storage connecting device | |
CN217587961U (en) | Artificial intelligence server hardware architecture based on double-circuit domestic CPU | |
CN220795800U (en) | Four-way CPU server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ACQIS LLC, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHU, WILLIAM W. Y.;REEL/FRAME:032359/0793 Effective date: 20140304 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |