WO2018011425A1 - Clustering system - Google Patents

Clustering system

Info

Publication number
WO2018011425A1
WO2018011425A1 (PCT/EP2017/067943)
Authority
WO
WIPO (PCT)
Prior art keywords
connectors
interface board
interface
power
motherboard
Prior art date
2016-07-14
Application number
PCT/EP2017/067943
Other languages
English (en)
Inventor
Peter BATCHELOR
Original Assignee
Nebra Micro Ltd
Priority date
2016-07-14
Filing date
2017-07-14
Publication date
2018-01-18
Priority claimed from IT102016000073909A (published as IT201600073909A1)
Priority claimed from GB1612223.6A (published as GB2552208A)
Application filed by Nebra Micro Ltd
Publication of WO2018011425A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/18Packaging or power distribution
    • G06F1/183Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
    • G06F1/185Mounting of expansion boards

Definitions

  • The present invention relates to the networking of computer systems and components, in particular the networking of components for use in high-density computer networks.
  • Servers are purpose-built computing devices used for performing particular tasks. They are normally mounted on standardised 19-inch server racks which provide a high-density support structure in which to house multiple servers. Conventional server systems are run twenty-four hours a day in order to provide uninterrupted availability of resources. They can be used for several different computing purposes, for example, providing additional computing power over a network, storage, communications, mail, web content, printing and gaming.
  • Blade servers provide stripped-down server computers with modular designs arranged to make good use of space.
  • Several servers can be housed within a single blade, which greatly increases the performance per unit volume, and decreases cooling requirements, increasing performance per watt.
  • Due to the standardisation of server sizes, the size of a server is referred to in rack units, U, a rack being 19 inches (480mm) wide and each unit 1.75 inches (44mm) tall.
  • Common server racks have a form-factor of 42U high, which limits the number of servers that can be placed in a single rack.
  • Some high-end blade systems today can achieve a density of 2352 processing cores per 42U rack.
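  • As a rough, illustrative comparison (a sketch in Python, not a benchmark), the module and core counts quoted elsewhere in this description imply roughly an eighteen-fold core-density increase over the quoted prior-art figure:

```python
# Worked comparison of the quoted prior-art density with the system
# described later in this document (10752 quad-core compute modules
# per rack). All constants are taken from this description.
PRIOR_ART_CORES_PER_RACK = 2352  # high-end blade systems, per the text
MODULES_PER_RACK = 10752         # from the chassis arithmetic below
CORES_PER_MODULE = 4             # quad-core CPUs, one per compute module

proposed = MODULES_PER_RACK * CORES_PER_MODULE  # 43008 cores per 42U rack
print(proposed / PRIOR_ART_CORES_PER_RACK)      # ~18x the prior-art density
```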
  • The present invention aims to reduce the density constraints of current systems by providing an ultra-high density modular cluster server system.
  • The present invention provides several advantages over the prior art. For example, due to the hierarchical structure of the system, the cluster system may use low power components with a relatively small footprint compared to typical server systems. This leads to a much higher density of modules than present systems, greatly increasing the amount of computing power or storage space available per unit area. Further, the use of lower electrical power modules reduces the cooling requirements of the system. As the components are smaller, self-retaining connectors can be used to allow freestanding interface boards, reducing the complexity of the structure and further increasing density.
  • The interface board connector may be a self-retaining connector.
  • The interface board connector may be a SODIMM connector.
  • The plurality of interface boards may be arranged to communicate with each other via a network pass-through. At least one of the modules may be engaged with the module connector such that it is positioned substantially parallel to the interface board.
  • The motherboard may further include a network controller. The network controller may be arranged to receive inputs from the plurality of interface boards.
  • The interface boards may comprise pins relating to power, Ethernet, serial data, enabling, power fault and grounding.
  • The module connectors may comprise pins relating to power, Ethernet, serial data, enabling, powering down, universal serial bus, serial peripheral interfacing and grounding.
  • A cluster system may comprise at least one computing cluster, in accordance with the above, and a networking board in data communication with the, or each of the, computing cluster(s).
  • An interface board, arranged to be inserted into and powered by a motherboard, comprises: a plurality of connectors arranged to receive a plurality of compute modules; and a network controller arranged to control networking between the interface board and the plurality of connectors, wherein the network controller is also arranged to act as a network pass-through between one or more other interface boards.
  • In this way, any interface board may directly communicate with any other interface board within the server system without external overheads, allowing for much greater scalability beyond a single unit.
  • The interface board may be arranged to be inserted into and powered by the motherboard using a self-retaining connector.
  • The interface board may be arranged to be inserted into and powered by the motherboard using a SODIMM connector.
  • The plurality of connectors may be further arranged to hold one or each of the modules positioned substantially parallel to the interface board.
  • The network controller may be further arranged to send inputs to the motherboard.
  • The connectors may comprise pins relating to power, Ethernet, serial data, enabling, powering down, power faults, universal serial bus, serial peripheral interfacing and grounding.
  • FIGURE 1 shows an isometric view of a cluster, in accordance with the present invention.
  • FIGURE 2 shows a circuit diagram of a motherboard according to the present invention.
  • FIGURE 3 shows a circuit diagram of a daughterboard according to the present invention.
  • FIGURE 4 shows a circuit diagram of a compute module according to the present invention.
  • FIGURE 5 shows a circuit diagram of a storage module according to the present invention.
  • FIGURE 6 shows a circuit diagram of a networking board according to the present invention.
  • FIGURE 7 shows a schematic of the overall network structure according to the present invention.
  • FIGURE 8 shows a schematic of a cluster structure according to the present invention.
  • A clustering system in accordance with an exemplary embodiment of the present invention is shown in Figures 1 to 8.
  • Figure 1 shows a cluster 100 comprising a motherboard 200, designed to act as the structural base, as well as the networking and power core of each cluster 100.
  • The motherboard includes four 80-pin slot connectors 205 and a modular connector 201 to allow for external Ethernet connections to the cluster 100.
  • The cluster 100 further comprises four interface boards (also referred to as daughterboards) 300 individually attached to the motherboard via the 80-pin slot connectors 205.
  • Each daughterboard 300 includes four M.2 slot connectors 303 and is connected to four modules 400, 500, via the M.2 slot connectors 303.
  • The purpose of the motherboard 200 is to distribute power and to control data and communications channels within the cluster 100.
  • The motherboard 200 has several communication interfaces as well as other components.
  • The modular connector 201 has an enhanced bandwidth capable of supporting data and control information for the motherboard 200 on a single channel.
  • In this embodiment the modular connector 201 is an RJ45 connector; however, it may be any type of modular connector.
  • Alternative embodiments may use modular connectors 201 which provide two data channels to the motherboard 200, one channel being used as a command channel and the other for data communication external to the local network.
  • The motherboard includes Ethernet hubs 202 which control the data channel and allow access to the daughterboards 300.
  • The Ethernet hubs 202 operate at 10/100/1000 speeds, allowing for a maximum data transfer rate of 1Gb per second per cluster 100.
  • A microcontroller 206, or network controller 206, controls the Ethernet hubs 202, although the input for the microcontroller 206 comes from the daughterboards 300 owing to limitations in the Ethernet network.
  • The microcontroller 206 on the motherboard 200 is Ethernet enabled.
  • The microcontroller 206 may receive inputs directly, allowing a cluster 100 to be monitored and controlled even if the interface boards 300 are in an off state.
  • The motherboard 200 also includes power connectors 203 to supply a 12V power input to the motherboard 200.
  • A power regulator 204 takes the 12V input from the power connectors 203 and provides a regulated 3.3V output to the electronics of the motherboard 200.
  • The 80-pin slot connectors 205 provide the communication interface between the motherboard 200 and the daughterboards 300.
  • Each of the 80-pin slot connectors 205 has the following connections: four 12V power pins; eight Ethernet pins forming two channels; three I²C (or other serial bus) pins; an enable pin for each daughterboard; a power fault pin; and seven ground pins.
  • The remaining pins of the connectors 205 are unallocated, available for use when configuring or customising the system, as the pin budget sketched below illustrates.
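  • As a minimal illustrative sketch (not the normative pin-out), the 80-pin budget just described can be tabulated as follows; only the counts come from the description, while the group names and structure are assumptions:

```python
# Illustrative pin budget for the 80-pin slot connector 205.
# Only the counts come from the description; names are assumed.
SLOT_80PIN_ALLOCATION = {
    "12V_power": 4,
    "ethernet": 8,        # eight pins forming two channels
    "i2c_serial_bus": 3,  # I2C or other serial bus
    "enable": 1,          # one enable pin per daughterboard
    "power_fault": 1,
    "ground": 7,
}

allocated = sum(SLOT_80PIN_ALLOCATION.values())  # 24
unallocated = 80 - allocated                     # 56 pins free for customisation
print(f"{allocated} pins allocated, {unallocated} unallocated")
```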
  • The daughterboard 300, shown in Figure 3, can be connected to the 80-pin slot connector 205 on the motherboard 200 using an 80-pin edge connector 301.
  • The 80-pin edge connector 301 has corresponding specifications to the 80-pin slot connector 205 described above.
  • The daughterboard 300 has further communication interfaces in four M.2 slot connectors 303, two on each face of the daughterboard 300, with B keying, and a 6-pin programming header 307.
  • The M.2 slot connectors 303 are arranged with the slots at right angles such that the modules 400, 500, when connected, sit substantially parallel to the daughterboard 300.
  • The B keying of the M.2 slot connectors 303 specifies where the orientation notch is placed.
  • Although M.2 slot connectors 303 have standardised pin formats, in the present embodiment they are used in a bespoke configuration, which is: two 12V power pins; four Ethernet pins for one channel; three I²C (or other serial bus) pins; an enable pin; a power down pin; a power fault pin; two universal serial bus (USB) pins; three serial peripheral interface bus (SPI) pins; and six ground pins.
  • This pin configuration allows for the possibility of multiple alternate communication lines to be utilised in alternate embodiments beyond those described here. It should be noted, however, that alternate keying arrangements may be used.
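  • For comparison with the motherboard-side connector, the same kind of illustrative sketch for the bespoke M.2 configuration above; again, only the counts come from the description and the group names are assumed:

```python
# Illustrative bespoke pin budget for the M.2 slot connectors 303.
# Only the counts come from the description; names are assumed.
M2_BESPOKE_ALLOCATION = {
    "12V_power": 2,
    "ethernet": 4,        # four pins forming one channel
    "i2c_serial_bus": 3,  # I2C or other serial bus
    "enable": 1,
    "power_down": 1,
    "power_fault": 1,
    "usb": 2,             # universal serial bus
    "spi": 3,             # serial peripheral interface bus
    "ground": 6,
}
print(sum(M2_BESPOKE_ALLOCATION.values()), "pins allocated")  # 23
```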
  • The daughterboard 300 also comprises a 5-port Ethernet hub 302 for distributing the data channel received through the 80-pin edge connector 301 to each of the four M.2 slot connectors 303.
  • In an alternative embodiment, the Ethernet hub 202 of the motherboard 200 communicates directly with the modules 400, 500, removing the need for an Ethernet hub 302 on the daughterboard 300.
  • The 80-pin connectors 205, 301 may be replaced with other types of connector.
  • The connector 205, 301 may be a small outline dual in-line memory module (SODIMM) connector, as this allows for orientation and retention control.
  • A SODIMM connector is one of several connectors with a self-retaining locking mechanism which locks a connected board in place.
  • Other examples of self-retaining connectors are: DIMM connectors, screw connectors and multiple in-line pin connectors.
  • Conventionally, computing clusters of this type would use external retention mechanisms, as the systems are generally too large and the components too heavy to rely on self-retaining mechanisms.
  • The present invention uses smaller boards and, as such, self-retaining connectors may be used.
  • The self-retaining nature of the connectors allows the interface boards to be freestanding, removing the need for any external retention mechanism. Therefore, the boards can be placed closer together, increasing cluster, and therefore computing, density.
  • The freestanding nature of the interface boards also reduces the complexity of the structure, and facilitates the use of the M.2 connectors for parallel connection of the modules to the interface boards.
  • In a preferred embodiment, the present invention uses a SODIMM connection.
  • The choice of a SODIMM connection is due to the size of the connector, the cost, and the nature of the self-retaining mechanism which, for SODIMM connectors, is a pair of simple-to-use lugs.
  • The SODIMM connection therefore allows for the simplest means of adding/removing interface boards at low cost.
  • The small size of SODIMM connectors makes them generally unsuited to large computing cluster systems; however, the present system relies on much smaller components, atypical of a computing cluster, which allows for the use of SODIMM connectors.
  • An Ethernet-to-SPI controller 305 takes the control channel from the motherboard 200 and converts it to the SPI communication standard used by a microcontroller 306 (or network controller 306).
  • The microcontroller 306 then routes the control data to each M.2 slot connector 303 via the I²C communication lines.
  • The microcontroller 306 is also capable of communicating with all other microcontrollers 306, 206 in the cluster 100 via I²C communication lines.
  • In this way, the microcontroller 306 of the daughterboard 300 can communicate external instructions to the microcontroller 206 of the motherboard 200, which, in one embodiment, does not have a direct external connection, and can transfer data to or from other daughterboards 300 in the cluster 100 or within the wider system.
  • The microcontroller 306 of the daughterboard 300 may act as a network pass-through (or network interface), shunting data through to other daughterboards 300 or motherboards 200.
  • Each daughterboard 300 is able to directly network with any other daughterboard 300 or motherboard 200, allowing for a simpler network structure.
  • The pass-through is enabled using a direct Ethernet connection between each daughterboard 300.
  • When a daughterboard 300 receives data through the pass-through, it simply redirects it to the intended destination. Therefore, a first module 400, 500 may directly communicate with a second module 400, 500 utilising the pass-through connection of a daughterboard 300.
  • The pass-through function may also be enabled by the Ethernet-connected microcontrollers 206 in the motherboard 200, which would perform the same function as the Ethernet-enabled daughterboard microcontrollers 306 in acting as a point which facilitates inter-component communication at high speed.
  • An external system into which the motherboard 200 is inserted may also perform the pass-through function, as sketched below.
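  • A minimal sketch of the pass-through behaviour described above, assuming hypothetical names throughout (Frame, peers, deliver_locally); the real logic would live in the firmware of the network controller 306 or 206:

```python
# Sketch of daughterboard pass-through forwarding. Frame, peers and
# deliver_locally are hypothetical; only the behaviour (deliver locally
# or redirect to the board owning the destination) follows the text.
from dataclasses import dataclass

@dataclass
class Frame:
    destination: str  # identifier of the target module
    payload: bytes

class Daughterboard:
    def __init__(self, board_id: str, local_modules: set):
        self.board_id = board_id
        self.local_modules = local_modules  # modules on this board's M.2 slots
        self.peers = {}                     # destination -> directly linked board

    def receive(self, frame: Frame) -> None:
        if frame.destination in self.local_modules:
            self.deliver_locally(frame)     # hand to the module via the hub
        else:
            # Pass-through: redirect over the direct Ethernet link to the
            # daughterboard that owns the destination module.
            self.peers[frame.destination].receive(frame)

    def deliver_locally(self, frame: Frame) -> None:
        print(f"{self.board_id}: {len(frame.payload)} bytes to {frame.destination}")
```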
  • The daughterboard 300 also includes a 12V to 3.3V power regulator 304 for the daughterboard's electronics.
  • The daughterboard 300 also comprises hot swap controllers 308. These sit on the main power lines into each daughterboard 300 and act as a switch allowing power to be enabled and disabled.
  • The controller 308 also provides over-voltage and over-current protection, as well as delayed start-up functionality.
  • The hot swap controller 308 allows each daughterboard 300 to be fully turned off; this enables power saving when a daughterboard 300 is not required, but also allows the daughterboard 300 to be replaced without powering down the whole system.
  • The hot swap controller 308 receives control inputs from the microcontroller 206 of the motherboard 200.
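  • An illustrative control-flow sketch for the hot swap controller 308 described above; the thresholds, delay and method names are assumptions, not taken from this document:

```python
# Illustrative flow for powering a daughterboard on/off through its hot
# swap controller 308. Thresholds and delay values are assumed examples.
import time

class HotSwapController:
    STARTUP_DELAY_S = 0.1  # assumed delayed start-up interval
    MAX_VOLTS = 13.0       # assumed over-voltage threshold on the 12V rail
    MAX_AMPS = 2.0         # assumed over-current threshold

    def __init__(self):
        self.enabled = False

    def enable(self) -> None:
        time.sleep(self.STARTUP_DELAY_S)  # delayed start-up functionality
        self.enabled = True

    def disable(self) -> None:
        self.enabled = False              # board can now be swapped safely

    def check(self, volts: float, amps: float) -> None:
        # Over-voltage / over-current protection: cut power on a fault.
        if volts > self.MAX_VOLTS or amps > self.MAX_AMPS:
            self.disable()
```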
  • The modules 400, 500 comprise M.2 edge connectors 403, 503 arranged to connect with the M.2 slot connectors 303 of the daughterboard 300.
  • A module 400, 500 may be used as a compute module or as a storage module.
  • Figure 4 shows a compute module 400.
  • Figure 5 shows a storage module 500.
  • A compute module 400 allows the cluster 100 to operate as an ultra-high density compute platform. As mentioned above, this could be for server hardware, or it could be, for example, for desktop rendering.
  • The specific processor used will change depending on the system requirements; for example, it may be a mobile phone/tablet CPU (due to the high performance-to-cost ratio), a high-spec GPU or an FPGA.
  • The compute module 400 has a single communication interface, being the M.2 edge connector 403, the pin-out for which corresponds to the pin-out described for the daughterboard 300 above.
  • The compute module 400 also comprises all the necessary electronics required for a processor 406 to function: a power regulator 404; RAM 401 for the processor; communications conversion 402; and storage 405 for firmware.
  • The communications conversion 402 may be an Ethernet to USB hub 402.
  • An Ethernet to USB hub 402 may be used in conjunction with a quad-core processor to facilitate communication.
  • Other processors do not require this form of communications conversion.
  • The storage module 500 acts as an ultra-high density storage array where, for example, each module may comprise one or more solid state drives (SSDs) associated with corresponding NAND Flash integrated chips 505.
  • The storage module 500 has an M.2 pin-out 503 which corresponds to the pin-out described for the daughterboard 300 above, and also includes all the necessary electronics for a functioning storage module: a power regulator 504; a CPU 501 to interface with the rest of the cluster 100; and one or more NAND Flash integrated chips 505 for storing data.
  • The SSD chips associated with the NAND Flash integrated chips 505 may be up to 2TB in size with current technology.
  • The CPU 501 need only be a basic storage controller or a low-spec processor used to direct data toward the memory of the module 500.
  • Figure 6 shows a networking board 600 according to the present invention.
  • The networking board 600 comprises modular connectors 601 for receiving data and control signals and cluster connectors 602 for transmitting the data and control signals to the clusters 100.
  • The networking board 600 further comprises communication hubs 604 for controlling the flow of the signals.
  • The networking board 600 also comprises inter-board connectors 605 arranged to facilitate communication with other networking boards 600. In this manner, a number of identical networking boards 600 may be connected together to control networking of systems of any size.
  • The networking boards 600 are arranged to communicate directly with any cluster 100 attached to the board 600, through the cluster connectors 602, and also with any other cluster 100 attached to any other board 600, through the inter-board connectors 605.
  • Each communications hub 604 comprises four ports to facilitate communication within the networking board 600.
  • One port supports a data feed to/from a cluster 100 via a cluster connector 602.
  • One port supports an external communication feed via the modular connectors 601.
  • Two ports support communication to/from other networking boards 600 via the inter-board connectors 605.
  • In the illustrated embodiment, the networking board 600 is directly connected to two clusters 100, and therefore requires two communication hubs 604 and two cluster connectors 602; a port-budget sketch follows below.
  • The networking board 600 may be arranged to directly connect to any number of clusters 100, with differing numbers of communication hubs 604 and cluster connectors 602 as required.
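  • A back-of-envelope port budget following the four-port hub arrangement described above; the function and field names are illustrative assumptions:

```python
# Port budget for a networking board 600: one four-port communications
# hub 604 is needed per directly attached cluster, per the description.
def hub_port_map(num_clusters: int) -> list:
    return [
        {
            "hub": i,
            "cluster_feed": 1,   # to/from a cluster via a cluster connector 602
            "external_feed": 1,  # via the modular connectors 601
            "inter_board": 2,    # to/from other boards via the connectors 605
        }
        for i in range(num_clusters)
    ]

# The board in Figure 6 connects two clusters, so it needs two hubs:
print(len(hub_port_map(2)))  # 2
```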
  • The data is transmitted via Ethernet using the Ethernet hubs 604 through RJ45 connectors 601, 602.
  • Some of the cluster connectors 602 are dual RJ45 connectors, arranged to transmit the data and control signals over a single channel.
  • The networking board 600 further comprises a power connector 606, a power regulator 607 and a hot swap controller 608.
  • The hot swap controller 608 allows clusters 100 to be powered on/off and removed without affecting the rest of the system.
  • Figure 7 shows an example of a server system 700 comprising ten clusters 100 of the type described above.
  • The server system 700 is designed to use the clusters 100 in a rack-mounted format.
  • Several clusters 100 are provided with additional infrastructure to operate in a server environment.
  • A power supply unit (PSU) 701 receives power from an external power input 702 and provides power directly to each cluster 100 through the power connectors 203 of the motherboards.
  • The PSU 701 also supplies power directly to a networking board 600 (or a series of interconnected networking boards 600) and a fan and system management board 705.
  • The networking board(s) 600 directly interface(s) with each cluster 100 and the fan and system management board 705.
  • The fan and system management board 705 sends and receives data through the networking board(s) 600 in order to control the temperature of the server system 700.
  • The fan and system management board 705 also provides controls for system power on/off as well as fan-controlled temperature regulation.
  • The server system further comprises a chassis to house the clusters.
  • Each server system 700 has a width of 450mm, which includes mounting rails for mounting the system 700 within a cabinet.
  • Each cluster is 80mm in height, and the server system can be standardised to a height of 2U, where U is a unit of height in a rack-mount server (about 45mm).
  • The depth of the system 700 is dictated by the number of clusters 100.
  • Four clusters 100 can fit side by side along the width of the cabinet.
  • Server systems 700 are designed in groups of four clusters 100 up to a maximum of 32 clusters 100, comprising a total of 512 modules 400, 500, in a single 2U chassis. Therefore, a 42U rack may contain as many as 672 clusters 100, comprising 10752 modules 400, 500; these figures are checked in the sketch below.
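  • A worked check (in Python) of the density figures quoted above; the constants are taken from the description, and the variable names are for illustration only:

```python
# Worked check of the quoted density figures. All constants come from
# the description above; names are chosen for illustration.
MODULES_PER_BOARD = 4      # modules per interface board (daughterboard)
BOARDS_PER_CLUSTER = 4     # interface boards per motherboard
CLUSTERS_PER_CHASSIS = 32  # maximum clusters in a single 2U chassis
RACK_HEIGHT_U = 42
CHASSIS_HEIGHT_U = 2

modules_per_cluster = MODULES_PER_BOARD * BOARDS_PER_CLUSTER      # 16
modules_per_chassis = modules_per_cluster * CLUSTERS_PER_CHASSIS  # 512
chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U              # 21
clusters_per_rack = chassis_per_rack * CLUSTERS_PER_CHASSIS       # 672
modules_per_rack = clusters_per_rack * modules_per_cluster        # 10752
```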
  • Figure 8 shows a schematic of a cluster 100.
  • The power connectors 203 of the motherboard 200 provide power to the components of the motherboard 200, to each daughterboard 300 and to each module 400, 500.
  • The 12V input is regulated to a 3.3V output before being sent to board components.
  • Data and control signals are received at the modular connector 201 of the motherboard 200. In a preferred embodiment, these signals are transmitted through 10G Ethernet to the Ethernet hub 202.
  • The Ethernet hub 202 sends control signals and data signals to the microcontroller 206.
  • The data signals are transmitted via an Ethernet-to-SPI controller.
  • The control signals may be sent directly to the microcontroller 206 as it is Ethernet enabled.
  • Control signals are sent to the hot swap controller 308 of each daughterboard 300, from the microcontroller 206, to control the on/off state.
  • Ethernet communications are sent directly to the daughterboards 300 and modules 400, 500 from the Ethernet hub 202 on the motherboard 200.
  • The data and control signals are sent to the microcontroller 306, which further routes control data to the modules 400, 500 via I²C communication lines. Data signals are sent to the modules 400, 500 through 1G Ethernet.
  • A daughterboard 300 may comprise four quad-core ARM® CPUs with 4GB of RAM, one CPU per compute module 400.
  • Each cluster 100 of four daughterboards 300 requires roughly 16W of power. Taking all the periphery components into account, a system 700 comprising 32 clusters 100 will have a theoretical power envelope of about 600W. Therefore, a rack of 10752 compute modules 400 will have a theoretical total power envelope of 12.6kW, providing 43TB of RAM.
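  • A worked check of the power and memory figures quoted above, on the same basis; note that the ~600W chassis envelope includes periphery overhead beyond the bare 32 × 16W = 512W of the clusters themselves:

```python
# Worked check of the quoted power and RAM figures. Constants come from
# the description; the 600W chassis envelope includes periphery overhead.
POWER_PER_CLUSTER_W = 16
CLUSTERS_PER_CHASSIS = 32
CHASSIS_PER_RACK = 21       # 42U rack / 2U chassis
CHASSIS_ENVELOPE_W = 600    # clusters (512W) plus periphery components
RAM_PER_MODULE_GB = 4
MODULES_PER_RACK = 10752

rack_power_kw = CHASSIS_PER_RACK * CHASSIS_ENVELOPE_W / 1000  # 12.6 kW
rack_ram_tb = MODULES_PER_RACK * RAM_PER_MODULE_GB / 1000     # ~43 TB
print(f"{rack_power_kw} kW, {rack_ram_tb} TB RAM")
```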
  • Each cluster 100 may contain any combination of compute modules 400 and storage modules 500 according to a user's requirements.
  • A single daughterboard 300 might carry two compute modules 400 and two storage modules 500, four compute modules 400, or four storage modules 500.
  • While one motherboard may comprise four interface boards, and one interface board may comprise four modules, it is to be understood that this is one example.
  • For instance, a daughterboard may comprise six storage modules.
  • Equally, a motherboard may comprise five interface boards. The present invention should not be considered as limited to the specific structure described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Power Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Power Sources (AREA)

Abstract

The present invention relates to modular computer hardware and networking software. More specifically, it relates to a modular system in which a motherboard comprises several interface boards, and each interface board comprises multiple modules which in turn comprise processing or storage hardware. Each interface board is networked such that it can communicate directly with any other interface board in the system, allowing any board in the system to communicate with any other board in the system. This provides a system which is highly scalable and can use low-power components to reduce cooling requirements, for example in a server system.
PCT/EP2017/067943 (priority 2016-07-14, filed 2017-07-14) - Clustering system - WO2018011425A1

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
IT102016000073909A (IT201600073909A1) | 2016-07-14 | 2016-07-14 | Sistema di clustering (Clustering system)
GB1612223.6A (GB2552208A) | 2016-07-14 | 2016-07-14 | Clustering system

Publications (1)

Publication Number | Publication Date
WO2018011425A1 | 2018-01-18

Family

ID=59501401

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/EP2017/067943 (WO2018011425A1) | Clustering system | 2016-07-14 | 2017-07-14

Country Status (1)

Country Link
WO (1) WO2018011425A1 (fr)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060140179A1 (en) * 2004-12-29 2006-06-29 Edoardo Campini Method and apparatus to couple a module to a management controller on an interconnect
US20070133188A1 (en) * 2005-12-05 2007-06-14 Yi-Hsiung Su Adapting Apparatus, Method Thereof, and Computer System Thereof
CN201111027Y (zh) * 2007-11-22 2008-09-03 深圳市昭营科技有限公司 Plug-in computer card
WO2011034900A1 (fr) * 2009-09-15 2011-03-24 Bae Systems Information And Electronic Systems Integration Inc. Carte mezzanine avancée pour héberger un pmc ou un xmc
CN202512473U (zh) * 2012-03-14 2012-10-31 浪潮电子信息产业股份有限公司 Control board based on a DDR3 SODIMM design
US20140173157A1 (en) * 2012-12-14 2014-06-19 Microsoft Corporation Computing enclosure backplane with flexible network support
US20150169013A1 (en) * 2013-12-17 2015-06-18 Giga-Byte Technology Co., Ltd. Stacked expansion card assembly

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108683680A (zh) * 2018-05-31 2018-10-19 广东公信智能会议股份有限公司 Line splitter for a conference system
CN108683680B (zh) * 2018-05-31 2024-03-26 广东公信智能会议股份有限公司 Line splitter for a conference system
CN111694788A (zh) * 2020-04-21 2020-09-22 恒信大友(北京)科技有限公司 Motherboard circuit
CN113918495A (zh) * 2021-10-11 2022-01-11 北京小米移动软件有限公司 Power supply daughterboard
CN114976794A (zh) * 2022-05-30 2022-08-30 禾多科技(北京)有限公司 Modular central domain controller and vehicle control method

Similar Documents

Publication | Title
EP2535786B1 | Server, server assembly and method for regulating fan speed
US7388757B2 | Monolithic backplane having a first and second portion
US7012815B2 | Computer systems
US11153986B2 | Configuring a modular storage system
US7315456B2 | Configurable IO subsystem
US20020124128A1 | Server array hardware architecture and system
US20070124529A1 | Subrack with front and rear insertion of AMC modules
US20080259555A1 | Modular blade server
WO2018011425A1 | Clustering system
US20170181311A1 | Microserver system
GB2552208A | Clustering system
KR20150049572A | System for sharing power of a rack-mount server and method of operating the same
CN209821735U | Expandable 4U eight-node computing server
US10481649B2 | Computer system, expansion component, auxiliary supply component and use thereof
WO2008052880A1 | Blade server system
US7152126B2 | Stacked 3U payload module unit
CN116319122A | Rack server, network configuration method therefor, and server cabinet
CN114340248B | Storage server and independent head control system thereof
CN115481068A | Server and data centre
CN212906134U | Processor assembly and server
CN210428236U | High-density eight-way server
US20060036794A1 | 3U hot-swappable power module and method
US20180039592A1 | System and method for distributed console server architecture
CN103677153A | Server and server rack system
CN219958163U | Blade server and server cluster

Legal Events

Code | Description
121 | EP: The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 17746014; Country of ref document: EP; Kind code of ref document: A1.
NENP | Non-entry into the national phase. Ref country code: DE.
122 | EP: PCT application non-entry in European phase. Ref document number: 17746014; Country of ref document: EP; Kind code of ref document: A1.