CN218481828U - Multi-node server and multi-node server system - Google Patents

Multi-node server and multi-node server system

Info

Publication number
CN218481828U
CN218481828U (application number CN202222289755.0U)
Authority
CN
China
Prior art keywords
board
node
pci
server
cards
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202222289755.0U
Other languages
Chinese (zh)
Inventor
刘猛
钟鹏
赵传迅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN202222289755.0U
Application granted
Publication of CN218481828U
Legal status: Active
Anticipated expiration

Landscapes

  • Multi Processors (AREA)

Abstract

The utility model relates to the technical field of multi-node servers and provides a multi-node server and a multi-node server system. The multi-node server includes a first panel, a second panel, a backplane, a central switch board, and hard disks. At least two node board cards can be arranged in the space between the first panel and the backplane; the backplane is configured with at least two board card interfaces and at least two hard disk interfaces; and at least one central switch board and at least two hard disks are arranged in the space between the second panel and the backplane. The board card interfaces receive the node board cards, the hard disks are connected to the node board cards through the hard disk interfaces, and the central switch board is connected to the node board cards through the board card interfaces. The central switch board forwards external data to the at least two node board cards and forwards data received from one node board card to the other node board cards. With the utility model, every node board card can exchange data with the others, which greatly reduces networking difficulty, allows the server to be deployed more quickly, and permits the node board cards to be configured flexibly.

Description

Multi-node server and multi-node server system
Technical Field
The utility model relates to the technical field of multi-node servers, and in particular to a multi-node server and a multi-node server system.
Background
In recent years, server product forms have evolved rapidly, from traditional architecture servers to blade servers and then to multi-node servers. Each new form optimizes and upgrades the previous form and architecture to different degrees and in different respects, so that the system performs its functions better.
A multi-node server acts as a server cluster and is highly scalable: more nodes can be added to the cluster as demand and load increase. However, in a conventional multi-node server, the nodes are independent of one another and the node board cards cannot be flexibly configured.
SUMMARY OF THE UTILITY MODEL
The utility model provides a multi-node server and a multi-node server system to overcome the defects in the prior art that the nodes are independent of one another and the node board cards cannot be flexibly configured, so that every node board card can exchange data with the others, networking difficulty is greatly reduced, the server can be deployed more quickly, and the node board cards can be configured flexibly.
The utility model provides a multi-node server, including: a first panel, a second panel, a backplane, a central switch board, and a hard disk. At least two node board cards can be arranged in a space between the first panel and the backplane; the backplane is configured with at least one central switch board interface, at least two board card interfaces, and at least two hard disk interfaces; and at least one central switch board and at least two hard disks are arranged in a space between the second panel and the backplane. The board card interfaces receive the node board cards, the hard disk interfaces receive the hard disks, the hard disks are connected to the node board cards through the hard disk interfaces, and the central switch board is connected to the node board cards through the board card interfaces. The central switch board forwards external data to the at least two node board cards and forwards data received from one node board card to other node board cards.
According to the multi-node server provided by the utility model, the node board cards are of a type including: a computing board card or an intelligent board card, wherein the intelligent board card is configured with a plurality of GPU accelerator cards.
According to the multi-node server provided by the utility model, the board card interface is a PCI-E interface, and the central switch board includes a 10-gigabit network card, a CPU chip, and a PCI-E switch chip, wherein the 10-gigabit network card is connected to the CPU chip, the CPU chip is connected to the PCI-E switch chip, and the PCI-E switch chip is connected to the node board cards through the PCI-E interfaces.
According to the multi-node server provided by the utility model, the node board cards are of types including: a computing board card and an intelligent board card, wherein the intelligent board card is configured with a plurality of GPU accelerator cards.
According to the multi-node server provided by the utility model, the board card interface is a PCI-E interface, and the central switch board includes a 10-gigabit network card, a CPU chip, a first PCI-E switch chip, a second PCI-E switch chip, and a third PCI-E switch chip, wherein the 10-gigabit network card is connected to the CPU chip, the CPU chip is connected to the first PCI-E switch chip, the first PCI-E switch chip is connected to a first computing board card and a second computing board card through corresponding PCI-E interfaces, the second PCI-E switch chip is connected to the first computing board card and a first group of intelligent board cards through corresponding PCI-E interfaces, and the third PCI-E switch chip is connected to the second computing board card and a second group of intelligent board cards through corresponding PCI-E interfaces.
According to the multi-node server provided by the utility model, the central switch board is disposed at the upper side in the height direction of the multi-node server, and the plurality of hard disks are disposed at the lower side in the height direction of the multi-node server.
According to the multi-node server provided by the utility model, the plurality of board card interfaces are arranged along the width direction of the multi-node server.
According to the multi-node server provided by the utility model, each node board card corresponds to two hard disks.
The utility model further provides a multi-node server system, including:
the multi-node server described in any of the above; and
a client in communication connection with the central switch board in the multi-node server and used to input external data into the central switch board.
According to the multi-node server system provided by the utility model, the system further includes:
an external storage device in communication connection with the central switch board and used to store data processed by the plurality of node board cards.
With the multi-node server and the multi-node server system provided by the utility model, on the one hand, a central switch board is arranged in the space between the second panel and the backplane, the backplane is configured with at least one central switch board interface that receives the central switch board, the central switch board is connected to the node board cards through the board card interfaces, and the central switch board forwards external data to the plurality of node board cards and forwards data received from one node board card to the other node board cards; thus every node board card can exchange data with the others, networking difficulty is greatly reduced, and the server can be deployed more quickly. On the other hand, at least two node board cards can be arranged in the space between the first panel and the backplane, the backplane is configured with at least two board card interfaces and at least two hard disk interfaces, the board card interfaces receive the node board cards, the hard disk interfaces receive the hard disks, and the hard disks are connected to the node board cards through the hard disk interfaces; thus the node board cards can be configured flexibly according to requirements.
Drawings
In order to illustrate the technical solutions of the present invention or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a multi-node server provided by the present invention;
FIG. 2 is a schematic structural diagram of a data center server provided by the present invention;
FIG. 3 is a schematic structural diagram of a deep learning GPU server provided by the present invention;
FIG. 4 is a schematic structural diagram of an HPC-type GPU server provided by the present invention;
FIG. 5 is a schematic structural diagram of a multi-node server system provided by the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The multi-node server of the present invention is described below with reference to fig. 1 to 4.
Referring to fig. 1, fig. 1 is a schematic structural diagram of the multi-node server provided by the present invention. As shown in fig. 1, the present invention provides a multi-node server including: a first panel (not shown in fig. 1), a second panel (not shown in fig. 1), a backplane 1, a central switch board 2, and a hard disk 3.
The first panel may be a rear panel of the multi-node server, that is, the cover plate at the rear of the multi-node server. At least two node board cards 4 can be arranged in the space between the first panel and the backplane 1, and they can be configured flexibly according to requirements. For example, in the 4U multi-node server shown in fig. 1, eight node board cards 4 may be arranged in the space between the first panel and the backplane 1.
The backplane 1 is configured with at least one central switch board interface (not shown in fig. 1), at least two board card interfaces (not shown in fig. 1), and at least two hard disk interfaces (not shown in fig. 1), where the central switch board interface is used to plug in the central switch board 2, the board card interface is used to plug in the node board card 4, and the hard disk interface is used to plug in the hard disk 3.
The second panel may be a front panel of the multi-node server, that is, the cover plate at the front of the multi-node server. At least one central switch board 2 and at least two hard disks 3 are arranged in the space between the second panel and the backplane 1.
Optionally, the central switch board 2 is disposed at the upper side in the height direction of the multi-node server, and the hard disks 3 are disposed at the lower side in the height direction of the multi-node server. For example, in the 4U multi-node server shown in fig. 1, the space between the second panel and the backplane 1 is divided into an upper 1.5U first subspace, which holds the central switch board 2, and a lower 2.5U second subspace, which holds the hard disks 3. Arranging the central switch board 2 and the hard disks 3 on the same side, one above the other, makes reasonable use of the space between the second panel and the backplane 1.
The central switch board 2 is connected to the node board cards 4 through the board card interfaces. The central switch board 2 forwards external data to the node board cards 4 and forwards data received from one node board card 4 to the other node board cards 4, so every node board card 4 can exchange data with the others. This greatly reduces networking difficulty and allows the server to be deployed more quickly.
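Purely as an illustration of the forwarding role described above, the behavior of the central switch board 2 can be modeled with the short Python sketch below. The class and method names are hypothetical; the utility model defines a hardware arrangement, not a software interface.

    class CentralSwitchBoard:
        """Minimal model of the forwarding behavior described above (illustrative only)."""

        def __init__(self, node_boards):
            # node_boards: identifiers of the node board cards plugged into the backplane
            self.node_boards = list(node_boards)

        def forward_external(self, data):
            # External data is forwarded to every node board card.
            return {board: data for board in self.node_boards}

        def relay(self, source_board, data):
            # Data received from one node board card is forwarded to all the others.
            return {board: data for board in self.node_boards if board != source_board}

    switch = CentralSwitchBoard(f"node_board_{i}" for i in range(8))  # eight boards, as in the 4U example
    print(switch.forward_external("external request"))
    print(switch.relay("node_board_0", "intermediate result"))

In the hardware, this fan-out and relay takes place over the board card interfaces of the backplane 1 rather than in software.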
Optionally, a plurality of board card interfaces are arranged along the width direction of the multi-node server. For example, in the 4U multi-node server shown in fig. 1, the eight node board cards 4 are divided into two rows, and each row of four node board cards 4 is plugged transversely into its corresponding board card interfaces, so that more node board cards 4 can be accommodated in the space between the first panel and the backplane 1, increasing the number of nodes of the multi-node server.
The hard disk interface is used for inserting a hard disk 3, and the hard disk 3 is connected with the node board card 4 through the hard disk interface. Optionally, each node board card 4 corresponds to two hard disks 3. The two hard disks 3 are respectively connected with the node board card 4 through respective hard disk interfaces, and can store the operation data of the node board card 4.
In this embodiment, on one hand, a central switch board is arranged in the space between the second panel and the backplane, the backplane is configured with at least one central switch board interface that receives the central switch board, the central switch board is connected to the node board cards through the board card interfaces, and the central switch board forwards external data to the plurality of node board cards and forwards data received from one node board card to the other node board cards; thus every node board card can exchange data with the others, networking difficulty is greatly reduced, and the server can be deployed more quickly. On the other hand, at least two node board cards can be arranged in the space between the first panel and the backplane, the backplane is configured with at least two board card interfaces and at least two hard disk interfaces, the board card interfaces receive the node board cards, the hard disk interfaces receive the hard disks, and the hard disks are connected to the node board cards through the hard disk interfaces; thus the node board cards can be configured flexibly according to requirements.
Referring to fig. 2, fig. 2 is a schematic structural diagram of the data center server provided by the present invention. As shown in fig. 2, the configured node board cards are all of the computing board card type. Because a data center server needs more computing nodes, every node board card is configured as a computing board card. A 4U server can hold up to eight computing board cards; each computing board card is a two-socket board whose two CPUs are interconnected through the Ultra Path Interconnect (UPI), so the multi-node server can be configured with up to 16 nodes.
The board card interface is a PCI-E (Peripheral Component Interconnect Express, a high-speed serial expansion bus standard) interface, and the central switch board includes at least two 10-gigabit network cards (which may be aggregated as 2 x 10G), a CPU chip, and a PCI-E switch chip. Each 10-gigabit network card is connected to the CPU chip, the CPU chip is connected to the PCI-E switch chip (for example, through a PCI-E x8 slot), and the PCI-E switch chip is connected to each computing board card through a PCI-E interface (for example, a PCI-E x4 slot).
After receiving external data through the 10-gigabit network cards, the central switch board forwards the data to each computing board card through the PCI-E interfaces for data processing, retrieval, and similar operations.
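As a sketch only, the fig. 2 connectivity can be written down as a small table and walked to confirm that data arriving at either 10-gigabit network card reaches every computing board card through the PCI-E switch chip. The identifiers and the helper function are hypothetical, and the link widths in the comments follow the optional slot choices mentioned above.

    # Sketch of the fig. 2 data center configuration: two 10-gigabit network cards feed
    # the CPU chip, the CPU chip feeds one PCI-E switch chip, and the switch chip fans
    # out to up to eight computing board cards. All identifiers are illustrative.
    topology = {
        "nic_10g_0": ["cpu_chip"],
        "nic_10g_1": ["cpu_chip"],
        "cpu_chip": ["pcie_switch"],                                # e.g. a PCI-E x8 slot
        "pcie_switch": [f"computing_board_{i}" for i in range(8)],  # e.g. PCI-E x4 per board
    }

    def reachable_from(start, graph):
        # Depth-first walk over the connectivity table.
        seen, stack = set(), [start]
        while stack:
            for nxt in graph.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    print(sorted(reachable_from("nic_10g_0", topology)))
    # ['computing_board_0', ..., 'computing_board_7', 'cpu_chip', 'pcie_switch']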
In this embodiment, all of the node board cards may be configured as computing board cards, so the multi-node server can serve as a data center server. The board card interface is a PCI-E interface, which supports hot plugging; therefore, if a single computing board card fails, the whole machine does not need to be powered off, and only the failed computing board card needs to be replaced, so the other devices in the machine keep working.
Referring to fig. 3, fig. 3 is a schematic structural diagram of the deep learning GPU (graphics processing unit) server provided by the present invention. As shown in fig. 3, the configured node board cards are intelligent board cards. A deep learning GPU server is mainly used for applications such as algorithm training and inference and therefore needs more GPU accelerator cards to provide computing power, so all of the node board cards are configured as intelligent board cards, and each intelligent board card is configured with a plurality of GPU accelerator cards. A 4U server can hold up to eight intelligent board cards, and each intelligent board card can be configured with four GPU accelerator cards, so up to 32 GPU accelerator cards can be configured.
The board card interface is a PCI-E interface, and the central switch board includes two 10-gigabit network cards (which may be aggregated as 2 x 10G), a CPU chip, and a PCI-E switch chip. Each 10-gigabit network card is connected to the CPU chip, the CPU chip is connected to the PCI-E switch chip (for example, through a PCI-E x8 slot), and the PCI-E switch chip is connected to each intelligent board card through a PCI-E interface (for example, a PCI-E x4 slot).
After receiving external data through the 10-gigabit network cards, the central switch board forwards the data to each intelligent board card through the PCI-E interfaces for algorithm training, inference, and similar operations.
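The maximum accelerator count of this embodiment follows directly from the board layout (up to eight intelligent board cards with four GPU accelerator cards each). The sketch below simply enumerates the cards; the identifiers are hypothetical.

    # Sketch of the fig. 3 deep learning configuration: up to eight intelligent board
    # cards in a 4U chassis, each configured with four GPU accelerator cards, giving
    # a maximum of 32 accelerator cards. Identifiers are illustrative.
    INTELLIGENT_BOARDS = 8
    GPUS_PER_BOARD = 4

    gpu_cards = {
        f"intelligent_board_{b}": [f"gpu_{b}_{g}" for g in range(GPUS_PER_BOARD)]
        for b in range(INTELLIGENT_BOARDS)
    }

    total = sum(len(cards) for cards in gpu_cards.values())
    assert total == 32  # matches the maximum stated above
    print(total)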
In this embodiment, all of the node board cards may be configured as intelligent board cards, so the multi-node server can serve as a deep learning GPU server. The board card interface is a PCI-E interface, which supports hot plugging; therefore, if a single intelligent board card fails, the whole machine does not need to be powered off, and only the failed intelligent board card needs to be replaced, so the other devices in the machine keep working.
Referring to fig. 4, fig. 4 is a schematic structural diagram of the HPC (High Performance Computing) GPU server provided by the present invention. As shown in fig. 4, the configured node board cards include both computing board cards and intelligent board cards. An HPC-type GPU server is mainly used for highly concurrent GPU computation: GPU accelerator cards are attached to high-performance CPUs to carry out data analysis, so the performance requirement on a single machine is high.
As shown in fig. 4, for example, two node board cards in a 4U multi-node server are configured as computing board cards and six are configured as intelligent board cards; each computing board card drives three intelligent board cards, and each intelligent board card is configured with a plurality of GPU accelerator cards.
The board card interface is a PCI-E interface, and the central switch board includes two 10-gigabit network cards (which may be aggregated as 2 x 10G), a CPU chip, a first PCI-E switch chip, a second PCI-E switch chip, and a third PCI-E switch chip. Each 10-gigabit network card is connected to the CPU chip, and the CPU chip is connected to the first PCI-E switch chip. The first PCI-E switch chip is connected to the first computing board card (computing board card 1 in fig. 4) and the second computing board card (computing board card 0 in fig. 4) through corresponding PCI-E interfaces; the second PCI-E switch chip is connected to the first computing board card and the first group of intelligent board cards (intelligent board cards 3, 5, and 7 in fig. 4) through corresponding PCI-E interfaces; and the third PCI-E switch chip is connected to the second computing board card and the second group of intelligent board cards (intelligent board cards 2, 4, and 6 in fig. 4) through corresponding PCI-E interfaces.
After receiving external data through the 10-gigabit network cards, the central switch board forwards the data to the two computing board cards through the PCI-E interfaces for data analysis, and the computing board cards then forward the data to the corresponding intelligent board cards for GPU computation.
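The two-stage data path of fig. 4 (central switch board to a computing board card, then on to the attached intelligent board cards) can be sketched as follows. Board numbering follows fig. 4; the identifiers and the helper function are hypothetical.

    # Sketch of the fig. 4 HPC configuration: the first PCI-E switch chip links the CPU
    # chip to both computing board cards, while the second and third switch chips each
    # link one computing board card to its group of three intelligent board cards.
    second_stage = {
        "pcie_switch_2": {"compute": "computing_board_1",
                          "intelligent": ["intelligent_board_3", "intelligent_board_5", "intelligent_board_7"]},
        "pcie_switch_3": {"compute": "computing_board_0",
                          "intelligent": ["intelligent_board_2", "intelligent_board_4", "intelligent_board_6"]},
    }

    def data_path(intelligent_board):
        # Path taken by external data: NIC -> CPU chip -> first switch chip -> computing
        # board card -> second or third switch chip -> intelligent board card.
        for switch_name, links in second_stage.items():
            if intelligent_board in links["intelligent"]:
                return ["nic_10g", "cpu_chip", "pcie_switch_1", links["compute"], switch_name, intelligent_board]
        raise ValueError(f"unknown intelligent board: {intelligent_board}")

    print(data_path("intelligent_board_5"))
    # ['nic_10g', 'cpu_chip', 'pcie_switch_1', 'computing_board_1', 'pcie_switch_2', 'intelligent_board_5']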
In this embodiment, the node board cards may be configured as a combination of computing board cards and intelligent board cards; because the CPUs of the computing board cards are multi-core, they can provide highly concurrent data processing, and the multi-node server can serve as an HPC-type GPU server. The board card interface is a PCI-E interface, which supports hot plugging; therefore, if a single computing board card or intelligent board card fails, the whole machine does not need to be powered off, and only the failed board card needs to be replaced, so the other devices in the machine keep working.
This embodiment further provides a multi-node server system. As shown in fig. 5, the multi-node server system includes any one of the multi-node servers described above and a client. The client is connected to the central switch board in the multi-node server over a network, and external data can be input into the central switch board through the network.
Optionally, the multi-node server system further includes an external storage device. The external storage device is connected to the central switch board over the network and can store the data processed by the plurality of node board cards.
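A minimal sketch of the fig. 5 system composition is given below: the client hands external data to the central switch board over the network, and the external storage device keeps the data processed by the node board cards. All names are hypothetical; the utility model only specifies that both devices are in communication connection with the central switch board.

    # Sketch of the fig. 5 multi-node server system (illustrative names only).
    from dataclasses import dataclass, field

    @dataclass
    class MultiNodeServerSystem:
        node_boards: list
        external_storage: list = field(default_factory=list)

        def receive_from_client(self, data):
            # The central switch board fans the client's external data out to every node board card.
            return {board: data for board in self.node_boards}

        def store_result(self, board, result):
            # Data processed by a node board card is written to the external storage device.
            self.external_storage.append((board, result))

    system = MultiNodeServerSystem([f"node_board_{i}" for i in range(8)])
    system.receive_from_client("external data")
    system.store_result("node_board_0", "processed data")
    print(system.external_storage)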
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A multi-node server, comprising: a first panel, a second panel, a backplane, a central switch board, and a hard disk; wherein at least two node board cards can be arranged in a space between the first panel and the backplane; the backplane is configured with at least one central switch board interface, at least two board card interfaces, and at least two hard disk interfaces; at least one central switch board and at least two hard disks are arranged in a space between the second panel and the backplane; the central switch board interface is configured to receive the central switch board, the board card interfaces are configured to receive the node board cards, and the hard disk interfaces are configured to receive the hard disks; the hard disks are connected to the node board cards through the hard disk interfaces, and the central switch board is connected to the node board cards through the board card interfaces; and the central switch board is configured to forward external data to the at least two node board cards and to forward data received from one node board card to other node board cards.
2. The multi-node server of claim 1, wherein the node board cards are of a type comprising: a computing board card or an intelligent board card, wherein the intelligent board card is configured with a plurality of GPU accelerator cards.
3. The multi-node server of claim 2, wherein the board card interface is a PCI-E interface, and the central switch board comprises: a 10-gigabit network card, a CPU chip, and a PCI-E switch chip, wherein the 10-gigabit network card is connected to the CPU chip, the CPU chip is connected to the PCI-E switch chip, and the PCI-E switch chip is connected to the node board cards through the PCI-E interfaces.
4. The multi-node server of claim 1, wherein the node board cards are of types comprising: a computing board card and an intelligent board card, wherein the intelligent board card is configured with a plurality of GPU accelerator cards.
5. The multi-node server of claim 4, wherein the board card interface is a PCI-E interface, and the central switch board comprises: a 10-gigabit network card, a CPU chip, a first PCI-E switch chip, a second PCI-E switch chip, and a third PCI-E switch chip, wherein the 10-gigabit network card is connected to the CPU chip, the CPU chip is connected to the first PCI-E switch chip, the first PCI-E switch chip is connected to a first computing board card and a second computing board card through corresponding PCI-E interfaces, the second PCI-E switch chip is connected to the first computing board card and a first group of intelligent board cards through corresponding PCI-E interfaces, and the third PCI-E switch chip is connected to the second computing board card and a second group of intelligent board cards through corresponding PCI-E interfaces.
6. The multinode server of claim 1, wherein the central switch board is disposed at an upper side in a height direction of the multinode server, and the plurality of hard disks are disposed at a lower side in the height direction of the multinode server.
7. The multi-node server of claim 1, wherein the plurality of board card interfaces are arranged along a width direction of the multi-node server.
8. The multi-node server of claim 1, wherein each of the node board cards corresponds to two of the hard disks.
9. A multi-node server system, comprising:
the multi-node server of any one of claims 1 to 8; and
a client in communication connection with the central switch board in the multi-node server and configured to input external data into the central switch board.
10. The multi-node server system of claim 9, wherein the system further comprises:
an external storage device in communication connection with the central switch board and configured to store data processed by the plurality of node board cards.
Application CN202222289755.0U, filed 2022-08-29 (priority date 2022-08-29): Multi-node server and multi-node server system. Granted as CN218481828U (Active).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202222289755.0U CN218481828U (en) 2022-08-29 2022-08-29 Multi-node server and multi-node server system

Publications (1)

Publication Number Publication Date
CN218481828U (en) 2023-02-14

Family

ID=85166533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202222289755.0U Active CN218481828U (en) 2022-08-29 2022-08-29 Multi-node server and multi-node server system

Country Status (1)

Country Link
CN (1) CN218481828U (en)

Legal Events

Date Code Title Description
GR01 Patent grant