CN115268581A - AI edge server system architecture with high performance computing power - Google Patents

AI edge server system architecture with high performance computing power

Info

Publication number
CN115268581A
Authority
CN
China
Prior art keywords
module
computing node
node module
accelerator card
card
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210709739.4A
Other languages
Chinese (zh)
Inventor
林增权
吴戈
吕腾
李鸿强
莫良伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baode Computer System Co., Ltd.
Original Assignee
Baode Computer System Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baode Computer System Co ltd filed Critical Baode Computer System Co ltd
Priority to CN202210709739.4A
Publication of CN115268581A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/18 Packaging or power distribution
    • G06F1/183 Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
    • G06F1/185 Mounting of expansion boards
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/18 Packaging or power distribution
    • G06F1/183 Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
    • G06F1/186 Securing of expansion boards in correspondence to slots provided at the computer enclosure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/40 Bus structure
    • G06F13/4004 Coupling between buses
    • G06F13/4022 Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/40 Bus structure
    • G06F13/4063 Device-to-bus coupling
    • G06F13/4068 Electrical coupling
    • G06F13/4081 Live connection to bus, e.g. hot-plugging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0026 PCI express
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Power Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Multi Processors (AREA)

Abstract

The application discloses an AI edge server system architecture with high-performance computing power, used to improve flexibility of use and to meet growing requirements for data real-time performance and security. The AI edge server system architecture in the present application includes a chassis, a CPU computing node module, an accelerator card computing node module, a routing board, and a power supply module. The CPU computing node module includes a PCIe channel. A first accelerator card hot-plug module is arranged on the accelerator card computing node module, and the accelerator cards inserted into it are connected over a network; the first accelerator card hot-plug module is switched and aggregated, through a switch module, at a network interface of the accelerator card computing node module. A second accelerator card hot-plug module is also arranged on the accelerator card computing node module and is connected to the CPU computing node module through a cable; the accelerator cards inserted into it are aggregated at the network interface of the accelerator card computing node module through the PCIe channel and the switch module. That network interface is connected with the network interface of the routing board.

Description

AI edge server system architecture with high performance computing power
Technical Field
The present application relates to the field of computer technologies, and in particular, to a high-performance computing AI edge server system architecture.
Background
With the rapid growth in the number of Internet of Things terminal devices and the increasing demands for data real-time performance and security, edge computing has become crucial in application scenarios across many industries, such as road management and autonomous driving in intelligent transportation, quality inspection and equipment monitoring in intelligent manufacturing, and disease monitoring and auxiliary diagnosis in intelligent healthcare. Edge computing in China is still at an early stage of development; as data grow larger and more complex, the edge computing servers widely used in the market today support only a limited number of accelerator cards, which limits the computing power of edge computing.
At present, the limited number of accelerator cards in an edge computing server is addressed by expanding a large number of accelerator cards to increase edge computing power, usually through Peripheral Component Interconnect Express (PCIe) expansion.
However, this expansion approach requires a complex backplane structure; because the PCIe bus rate is very high, many high-speed connectors and signal-reconditioning or signal-amplification chips are needed, which makes operation and maintenance difficult and changing requirements hard to meet.
Disclosure of Invention
To solve the above technical problems, the present application provides an AI edge server system architecture with high-performance computing power, which addresses the difficult operation and maintenance, the limited number of expandable accelerator cards, and the limited computing power of conventional standards-based servers, improves flexibility of use, and meets growing requirements for data real-time performance and security.
The application provides an AI edge server system architecture with high-performance computing power, comprising:
a chassis, a CPU (Central Processing Unit) computing node module, an accelerator card computing node module, a routing board, and a power supply module;
the power supply module is electrically connected with the CPU computing node module, the accelerator card computing node module and the routing board respectively;
the CPU computing node module is arranged on the lower layer of the chassis and comprises a PCIe channel;
the accelerator card computing node module is installed on the upper layer of the chassis; a first accelerator card hot-plug module is arranged on the accelerator card computing node module, and the accelerator cards inserted into the first accelerator card hot-plug module are connected over a network; a switch module is arranged on the accelerator card, and the first accelerator card hot-plug module is switched and aggregated, through the switch module, at a network interface of the accelerator card computing node module; a second accelerator card hot-plug module is also arranged on the accelerator card computing node module and is connected to the CPU computing node module through a cable; the accelerator cards inserted into the second accelerator card hot-plug module are aggregated at the network interface of the accelerator card computing node module through the PCIe channel and the switch module; and the network interface of the accelerator card computing node module is connected with the network interface of the routing board.
Optionally, a PCIe slot is disposed on the CPU computing node module for inserting a GPU card.
Optionally, the CPU computing node module further includes a hard disk module for providing a storage function.
Optionally, a fan is further disposed on the CPU computing node module to dissipate its heat.
Optionally, the CPU computing node module is provided with memory for storing data.
Optionally, a midplane is installed on the accelerator card computing node module;
and the network interface of the accelerator card computing node module is connected to the midplane through a connector, and the network interface on the midplane is connected by network cable to the network interface of the routing board.
Optionally, an OCP3.0 module is disposed on the CPU computing node module.
Optionally, the second accelerator card hot-plug module includes 18 accelerator card modules supporting PCIe.
Optionally, the chassis is a 4U double-layer chassis.
Optionally, the chassis further includes a chassis upper cover;
the upper cover of the chassis is arranged at the opening of the chassis and is used for sealing the chassis.
According to the technical scheme, the embodiment of the application has the following advantages:
In the application, the CPU computing node module includes a PCIe channel; a first accelerator card hot-plug module is arranged on the accelerator card computing node module, the accelerator cards inserted into it are connected over a network, and a switch module arranged on each accelerator card switches and aggregates the first accelerator card hot-plug module at a network interface of the accelerator card computing node module; a second accelerator card hot-plug module is also arranged on the accelerator card computing node module and connected to the CPU computing node module through a cable, the accelerator cards inserted into it are aggregated at the network interface of the accelerator card computing node module through the PCIe channel and the switch module, and the network interface of the accelerator card computing node module is connected with the network interface of the routing board. The accelerator cards are switched by the switch modules, connected over the network, and aggregated at the network interfaces of the routing board; through the routing board's conversion, two IP addresses and terminal connections can be provided externally. Each accelerator card works independently, and many signal channels can be provided, which greatly improves AI computing capability, increases flexibility of use, simplifies operation and maintenance, and meets ever-growing data real-time and security requirements.
Drawings
To illustrate the technical solutions in the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a high performance computing AI edge server system architecture according to the present application;
FIG. 2 is a schematic top view of a CPU compute node module according to the present application;
FIG. 3 is a schematic top view of an acceleration card compute node module of the present application;
FIG. 4 is a schematic top view of an accelerator card of the present application;
FIG. 5 is a schematic diagram of the power supply lines of the AI edge server system architecture with high-performance computing power according to the present application.
Detailed Description
In the present application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "transverse", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are used only for explaining relative positional relationships between the respective components or constituent parts, and do not particularly limit specific mounting orientations of the respective components or constituent parts.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as the case may be.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "coupled" are to be construed broadly. For example, a connection may be a fixed connection, a removable connection, or a unitary construction; a mechanical connection or an electrical connection; a direct connection, an indirect connection through intervening media, or internal communication between two devices, elements, or components. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art as the case may be.
In addition, the structures, proportions, and sizes illustrated in the accompanying drawings are intended only to aid in understanding and reading this disclosure, not to limit it. Any modification of structure, change of proportion, or adjustment of size that does not affect the efficacy or purpose this application can achieve remains within the scope of the technical content disclosed herein.
The embodiment of the application provides an AI edge server system architecture with high-performance computing power, used to solve the problems of difficult operation and maintenance, a limited number of expandable accelerator cards, and limited computing power in conventional standards-based servers, to improve flexibility of use, and to meet growing requirements for data real-time performance and security.
Typically, edge computing is closely tied to the Internet of Things. IoT devices take part in increasingly powerful processing, so the large amounts of data they generate need to be handled at the "edge" of the network instead of being continuously transferred back and forth to centralized servers for processing. Edge computing is thus more efficient at managing large amounts of data from IoT devices, with lower latency, faster response, and better scalability. Edge computing in China is still at an early stage of development. As data grow larger and more complex, the edge computing servers widely used in the market today support only a limited number of accelerator cards, which limits the computing power of edge computing. Previously, expanding a large number of accelerator cards meant using PCIe expansion with a complex backplane structure; the PCIe bus rate is very high, reaching 2.5 GT/s to 16 GT/s per lane, so many high-speed connectors and signal-reconditioning or signal-amplification chips were needed, making operation and maintenance difficult and changing requirements hard to meet. The AI edge server system architecture with high-performance computing power of the present application can effectively solve the above problems.
referring to fig. 1, fig. 1 is a schematic structural diagram of a high-performance computing AI edge server system architecture according to the present application, including:
the computer comprises a case, a CPU computing node module 1, an accelerator card computing node module 2, a routing board and a power module 11;
the power supply module 11 is respectively electrically connected with the CPU computing node module 1, the accelerator card computing node module 2 and the routing board;
the CPU computing node module 1 is arranged on the lower layer of the chassis, and the CPU computing node module 1 comprises a PCIe channel;
the accelerator card node module is installed on the upper layer of a chassis, a first accelerator card hot plug module 21 is arranged on the accelerator card computing node module 2, an accelerator card connected network is arranged on the first accelerator card hot plug module 21, a switching module 24 is arranged on the accelerator card, the first accelerator card hot plug module 21 is switched and converged on a network interface of the accelerator card computing node module 2 through the switching module 24, a second accelerator card hot plug module is further arranged on the accelerator card computing node module 2 and connected to the CPU computing node module 1 through a cable, an accelerator card arranged on the second accelerator card hot plug module is converged on a network interface of the accelerator card computing node module 2 through a PCIe channel and the switching module 24, and the network interface of the accelerator card computing node module 2 is connected with a network interface of a routing board.
The PCIe standard developed as network devices demanded ever more bandwidth, flexibility, and performance. PCIe (Peripheral Component Interconnect Express) is a high-speed serial computer expansion bus standard. It provides high-speed serial, point-to-point, dual-channel, high-bandwidth transmission: connected devices are allocated dedicated channel bandwidth rather than sharing bus bandwidth, and the standard supports active power management, error reporting, end-to-end reliable transmission, hot plugging, quality of service, and other functions, with high data transfer rates. In general, a channel is a path or interface that carries an external signal, one channel per signal; for example, when force, temperature, and humidity are measured at multiple points, the signals acquired on the channels are usually passed in turn to a signal-conditioning circuit, A/D-converted, and then sent to a microprocessor. A PCIe channel (lane) carries PCIe signals, and the number of channels is typically a multiple of 8, i.e., 8 channels, 16 channels, and so on.
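To make the "very high bus rate" concrete, the sketch below estimates usable one-direction link bandwidth from lane count. This is an illustrative aid, not part of the patent; the per-generation signaling rates and line codes are public PCIe figures, since the document does not fix a generation:

```python
# Illustrative sketch: estimate usable PCIe bandwidth per link width.
# Assumption: standard public per-generation figures; not specified by this patent.
RATE_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}  # signaling rate per lane
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}  # line-code efficiency

def effective_gb_per_s(gen: int, lanes: int) -> float:
    """Approximate one-direction payload bandwidth in GB/s."""
    gbits = RATE_GT_S[gen] * ENCODING[gen] * lanes
    return gbits / 8  # bits -> bytes

for lanes in (1, 8, 16):
    print(f"PCIe 4.0 x{lanes}: ~{effective_gb_per_s(4, lanes):.1f} GB/s")
```

For a PCIe 4.0 x16 link this gives roughly 31.5 GB/s per direction, which is why the text stresses high-speed connectors and signal-reconditioning chips.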
Referring to fig. 2, fig. 3 and fig. 4, the power module 11 is connected to the motherboard connector in a hot-pluggable manner to supply power to the motherboard. Hot plugging (hot swapping) means inserting or removing modules and boards while the system power stays on and without affecting normal operation, which improves the system's reliability, speed of maintenance, redundancy, and ability to recover promptly from disasters. The motherboard is provided with PCIe slots 13 for inserting GPU cards; typically 3 PCIe 5.0 slots are provided, so 3 A100 GPU cards can be added and the utilization of computing resources optimized. The CPU computing node module 1 also includes a hard disk module 12, consisting of 12 3.5-inch hard disk modules; the backplane of the hard disk module 12 is connected through the PCIe channel to a SlimSAS interface cable on the motherboard, providing a storage function and improving the computer's operating performance and flexibility of use. A fan 15 is also provided on the CPU computing node module to dissipate heat and limit the loss of CPU computing performance at high temperatures. The CPU computing node module is provided with memory 14, namely 32 DDR5 memory modules for storing data. The CPU computing node module 1 further includes the motherboard, an OCP3.0 module, 2 Sapphire Rapids series processors, the routing board, and a group of 1+1 hot-swap power modules 11; in a 1+1 pair each power module carries 50% of the load, and if one power module 11 fails the other bears the entire load, preventing a shutdown caused by the failure of a single power module 11. The accelerator card computing node module 2 includes 25 first accelerator card hot-plug modules arranged at the front and 16 arranged at the rear, plus second accelerator card hot-plug modules carrying 18 accelerator cards over PCIe channels, for 100 accelerator cards in total (two groups each of the front and rear sets, as detailed below), together with the switch modules 24 arranged on the accelerator cards, a group of 1+1 hot-swap power modules 11, and cooling fans. Optionally, a midplane 23 is installed on the accelerator card computing node module 2; the network interface of the accelerator card computing node module 2 is connected to the midplane 23 through a connector, and the network interface on the midplane 23 is connected by network cable to the network interface of the routing board.
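As a minimal sketch (illustrative only, not from the patent), the 1+1 load-sharing behavior described above can be modeled as follows: each supply carries half the load in normal operation, and a single surviving supply must be able to carry all of it. The total-load figure used below is a made-up example value:

```python
# Illustrative 1+1 redundancy model; the 2000 W rating follows the text.
def psu_load_per_unit(total_load_w: float, units_ok: int, rating_w: float = 2000.0) -> float:
    """Per-unit load for a 1+1 power pair; raises if the survivors are overloaded."""
    if units_ok == 0:
        raise RuntimeError("both power modules failed: system down")
    per_unit = total_load_w / units_ok
    if per_unit > rating_w:
        raise RuntimeError(f"overload: {per_unit:.0f} W exceeds the {rating_w:.0f} W rating")
    return per_unit

print(psu_load_per_unit(1800, units_ok=2))  # normal: 900 W each (a 50% share)
print(psu_load_per_unit(1800, units_ok=1))  # one failed: the survivor carries 1800 W
```

This is why each unit of a 1+1 pair must be rated for the full system load rather than half of it.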
Referring to fig. 5, the power supply scheme may be as follows. The 16 first accelerator card hot-plug modules and the 25 first accelerator card hot-plug modules are connected directly over the network. The 18 second accelerator card hot-plug modules use PCIe channels on the motherboard of the CPU computing node module, and each channel has a different clock sequence number. The 18 second accelerator card hot-plug modules of the accelerator card computing node module 2 draw their power from the power supply of the CPU computing node module 1: the 2000 W 1+1 redundant hot-swap power supply of the CPU computing node module powers the motherboard and, through cables, the 18 second accelerator card hot-plug modules and the fans of the CPU computing node module. The 16 first accelerator card hot-plug modules and the 25 first accelerator card hot-plug modules of the accelerator card computing node module 2 are both powered directly by the 2000 W 1+1 hot-swap power supply described above.
Regarding the accelerator card node: the 25 first accelerator card hot-plug modules and the 16 first accelerator card hot-plug modules of the accelerator card node are directly connected to one another over the network. One switch module can switch at most 9 accelerator cards, so the two groups of 25 first accelerator card hot-plug modules and the two groups of 16 first accelerator card hot-plug modules are switched and aggregated at 10 network interfaces through 10 switch modules arranged at the back of the accelerator cards, and connected through a connector to the first midplane; the 10 network interfaces on the first midplane are connected by network cable to 10 network interfaces of the routing board. The 18 second accelerator card hot-plug modules of the upper-layer accelerator card node are connected through MCIO interfaces to the MCIO interfaces of the motherboard on the lower-layer CPU computing node and carry PCIe signals; they are then switched and aggregated at 2 network interfaces through 2 switch modules and connected through a connector to the second midplane, whose 2 network interfaces are connected by network cable to 2 network interfaces of the routing board. Finally, all accelerator cards are aggregated at the routing board through 12 network interfaces in total, and the routing board externally provides 2 IP addresses for connecting to terminals. Each accelerator card works independently, and up to 2000 signal channels can be provided at maximum, which greatly improves AI computing capability, increases flexibility of use, simplifies operation and maintenance, and meets ever-growing data real-time and security requirements.
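As a back-of-the-envelope check (illustrative only; the nine-cards-per-switch-module limit and the group sizes come from the description above), the switch-module and network-interface counts follow directly from the card counts:

```python
# Illustrative check of the aggregation arithmetic described above.
import math

CARDS_PER_SWITCH = 9  # stated limit: one switch module serves at most 9 accelerator cards

first_cards = 2 * 25 + 2 * 16   # two groups each of 25 front and 16 rear hot-plug modules
second_cards = 18               # second hot-plug modules, reached over PCIe/MCIO

switches_first = math.ceil(first_cards / CARDS_PER_SWITCH)    # -> 10 switch modules
switches_second = math.ceil(second_cards / CARDS_PER_SWITCH)  # -> 2 switch modules

print("total accelerator cards:", first_cards + second_cards)                        # 100
print("network interfaces to the routing board:", switches_first + switches_second)  # 12
```

The 12 aggregated interfaces terminate at the routing board, which presents only 2 IP addresses externally.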
Optionally, the chassis is a 4U double-layer chassis. In the server field, "U" denotes the thickness of a rack server; it is an abbreviation of "unit" and expresses the server's external dimensions, with the detailed sizes maintained by the Electronic Industries Alliance (EIA), an industry group, and the thickness given in centimeters as the basic unit. 1U is 4.45 cm, and 4U is four times 1U, i.e. 17.8 cm. In this embodiment, the size of the chassis may be adjusted according to the equipment actually to be installed.
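A tiny conversion sketch (illustrative; 1U = 1.75 inches is the public EIA figure, not taken from this document):

```python
# Rack-unit height conversion; 1U = 1.75 in is a public EIA figure.
INCH_TO_CM = 2.54

def rack_units_to_cm(u: int) -> float:
    return u * 1.75 * INCH_TO_CM

print(rack_units_to_cm(1))  # ~4.445 cm (the text rounds this to 4.45 cm)
print(rack_units_to_cm(4))  # ~17.78 cm (the text rounds this to 17.8 cm)
```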
Optionally, the chassis further comprises a chassis upper cover;
the upper cover of the case is arranged at the opening of the case and used for sealing the case. The situation that dust enters the machine box to cause damage to components in the machine box is reduced.
It should be noted that the above summary and detailed description are intended to demonstrate practical applications of the technical solutions provided by the present application, and should not be construed as limiting its scope of protection. Within the spirit and principles of the present application, those skilled in the art may make various modifications, equivalent substitutions, or improvements. The scope of protection of the present application is defined by the appended claims.

Claims (10)

1. An AI edge server system architecture with high-performance computing power, comprising:
a chassis, a CPU (Central Processing Unit) computing node module, an accelerator card computing node module, a routing board, and a power supply module;
the power supply module is electrically connected with the CPU computing node module, the accelerator card computing node module and the routing board respectively;
the CPU computing node module is arranged on the lower layer of the chassis and comprises a PCIe channel;
the utility model discloses a network interface of accelerating card computing node module, including accelerating card computing node module, PCIe passageway, switching module, CPU, switching module, accelerating card module, switching module, PCIe channel and switching module, accelerating card computing node module installs quick-witted case upper strata, be provided with first accelerating card hot plug module on the accelerating card computing node module, the accelerating card that inserts connects on the first accelerating card hot plug module connects the network with the accelerating card of inserting, be provided with switching module on the accelerating card, first accelerating card hot plug module passes through the switching module switching assemble in the network interface of accelerating card computing node module, the network interface of accelerating card computing node module with the network interface of routing board connects.
2. The AI edge server system architecture of claim 1, wherein the CPU computing node module is provided with a PCIe slot for receiving a GPU card.
3. The AI edge server system architecture of claim 1, wherein the CPU compute node module further includes a hard disk module thereon for providing storage functionality.
4. The AI edge server system architecture of claim 1, wherein the CPU compute node module is further provided with a fan for dissipating heat from the CPU compute node module.
5. The AI edge server system architecture of claim 1, wherein the CPU compute node module has a memory disposed thereon for storing data.
6. The AI edge server system architecture of claim 1, wherein the accelerator card compute node module has a midplane installed thereon;
and the network interface of the accelerator card computing node module is connected to the midplane through a connector, and the network interface on the midplane is connected by network cable to the network interface of the routing board.
7. The AI edge server system architecture of claim 1, wherein an OCP3.0 module is provided on the CPU compute node module.
8. The AI edge server system architecture of claim 1, wherein the second accelerator card hot-plug module includes 18 PCIe-enabled accelerator card modules.
9. The AI edge server system architecture of any of claims 1-8, wherein the chassis is a 4U double-layer chassis.
10. The AI edge server system architecture of claim 9, wherein the chassis further includes a chassis top cover;
the upper cover of the chassis is arranged at the opening of the chassis and is used for sealing the chassis.
CN202210709739.4A 2022-06-22 2022-06-22 AI edge server system architecture with high performance computing power Pending CN115268581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210709739.4A CN115268581A (en) 2022-06-22 2022-06-22 AI edge server system architecture with high performance computing power

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210709739.4A CN115268581A (en) 2022-06-22 2022-06-22 AI edge server system architecture with high performance computing power

Publications (1)

Publication Number Publication Date
CN115268581A true CN115268581A (en) 2022-11-01

Family

ID=83760689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210709739.4A Pending CN115268581A (en) 2022-06-22 2022-06-22 AI edge server system architecture with high performance computing power

Country Status (1)

Country Link
CN (1) CN115268581A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117931722A (en) * 2024-03-20 2024-04-26 苏州元脑智能科技有限公司 Computing device and server system
CN117931722B (en) * 2024-03-20 2024-06-07 苏州元脑智能科技有限公司 Computing device and server system

Similar Documents

Publication Publication Date Title
US8116332B2 (en) Switch arbitration
US10482051B2 (en) Data storage device carrier system
US20150254205A1 (en) Low Cost, High Performance and High Data Throughput Server Blade
CN1901530B (en) Server system
US7111120B2 (en) Scalable disk array controller
CN115268581A (en) AI edge server system architecture with high performance computing power
TW493293B (en) Method and system for directly interconnecting storage devices to controller cards within a highly available storage system
CN202443354U (en) A multi-node cable-free modular computer
CN212569645U (en) Flexibly configurable edge server system architecture
CN100541387C (en) A kind of server system based on the Opteron processor
WO2024045752A1 (en) Server and electronic device
CN115481068B (en) Server and data center
CN217847021U (en) AI edge server system architecture with high performance computing power
CN116501678A (en) Topological board card and on-board system
CN114340248B (en) Storage server and independent machine head control system thereof
CN216352292U (en) Server mainboard and server
CN214011980U (en) Server with RAS (remote server system) characteristic
CN113840489A (en) Blade computer system based on hybrid architecture
CN210428236U (en) High-density eight-path server
CN112260969B (en) Blade type edge computing equipment based on CPCI framework
CN214256754U (en) PCB connecting plate module for data synchronization of fault-tolerant computer
CN217587961U (en) Artificial intelligence server hardware architecture based on double-circuit domestic CPU
US20240232119A9 (en) Scaling midplane bandwidth between storage processors via network devices
US20240134814A1 (en) Scaling midplane bandwidth between storage processors via network devices
CN218630661U (en) 4U server supporting 8GPU modules

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination