CN209879419U - Calculation board card - Google Patents

Calculation board card

Info

Publication number
CN209879419U
CN209879419U (application CN201920852931.2U)
Authority
CN
China
Prior art keywords
data
chip
computing
interface
board
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201920852931.2U
Other languages
Chinese (zh)
Inventor
闫骏
阮剑
宋粮勇
刘云
刘青青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yunduo Data Technology Co Ltd
Original Assignee
Shenzhen Yunduo Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yunduo Data Technology Co Ltd filed Critical Shenzhen Yunduo Data Technology Co Ltd
Priority to CN201920852931.2U priority Critical patent/CN209879419U/en
Application granted granted Critical
Publication of CN209879419U publication Critical patent/CN209879419U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Multi Processors (AREA)

Abstract

The utility model discloses a computing board card comprising an adapter board card and a computing-power board card. The adapter board card comprises an M.2 socket, a bridge chip and a PCIE interface; the bridge chip comprises a first interface and a second interface, the first interface being connected with the M.2 socket and the second interface with the PCIE interface. The computing-power board card comprises an M.2 plug and an AI chip; the AI chip comprises a data interface connected with the M.2 plug, and the M.2 plug is detachably connected with the M.2 socket. By bridging through the M.2 socket and the bridge chip, the utility model allows a server host to conveniently configure its computing power.

Description

Calculation board card
Technical Field
Embodiments of the utility model relate to artificial-intelligence computing in the computer field, and in particular to a computing board card.
Background
With the rapid development of the internet and the information industry, voice, image and video data have grown explosively. Traditional manual data processing is gradually being replaced by big-data processing, and the application of artificial intelligence (AI) technology has again leapfrogged big-data analysis and processing capability.
Deep learning technology has triggered the rapid development of artificial-intelligence applications, leading humanity from the information era into the intelligent era. Deep learning is essentially a machine-learning technique that requires powerful hardware computing capability to perform complex data processing and operations. For such huge volumes of data, existing artificial-intelligence solutions use dedicated AI acceleration chips to perform deep-learning operations, but even a single ultra-high-performance AI acceleration chip falls far short of the computational requirement.
In the prior art, AI computing servers are large-scale devices in which a computing array is generally formed by a large number of GPUs. At present there is no powerful AI computing server that uses a single chassis and can be configured with computing board cards in whatever number is required.
SUMMARY OF THE UTILITY MODEL
To solve the above problems, the utility model provides a computing board card, which comprises an adapter board card and a computing-power board card.
The adapter board card comprises an M.2 socket, a bridge chip and a PCIE interface; the bridge chip comprises a first interface and a second interface, the first interface is connected with the M.2 socket, and the second interface is connected with the PCIE interface.
The computing-power board card comprises an M.2 plug and an AI chip; the AI chip comprises a third interface connected with the M.2 plug, and the M.2 plug is detachably connected with the M.2 socket.
The bridge chip acquires first data from an external device through the PCIE interface, transmits the first data to the AI chip for calculation, and transmits the calculation result based on the first data back to the external device. Alternatively, the bridge chip acquires a plurality of second data from the external device, transmits them in parallel to a plurality of AI chips for calculation, and transmits the calculation result based on the first data back to the external device. Here the first data is feature data of a preset event, and the calculation result is the AI judgment result for that preset event.
Further, there may be a plurality of computing-power board cards, connected to the bridge chip in parallel.
Further, the computing-power board cards further comprise control chips; each computing-power board card comprises a plurality of AI chips connected to the M.2 plug through its control chip.
Further, the plurality of AI chips are connected in series to the control chip.
Further, the PCIE interface includes a power supply end, and is configured to provide a working power supply for the bridge chip and the AI chip.
Further, the first data is image data, and the second data is one or more of an object, a human face, and a fingerprint.
Further, the computing board card further comprises a power circuit for supplying power to the AI chip through the M.2 socket and the M.2 plug.
Further, where there are a plurality of computing board cards, each comprises a PCIE adapter ribbon cable through which it is electrically connected to the server motherboard.
Further, the end of the computing board card close to the side wall of the server host further comprises a fixing clip for fixing the computing board card to the server host.
Further, the computing board card further comprises a heat dissipation device covering its surface.
By connecting an M.2 socket through a bridge chip, the utility model enables a server host to conveniently configure its computing power.
Drawings
Fig. 1 is a schematic structural diagram of a computing board in the first embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a computing-power board card in the second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a server host in the third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Abbreviations appearing in the examples are explained below:
M.2 interface: a new interface specification proposed by Intel to replace mSATA. M.2 interfaces come in two types, supporting a SATA channel and an NVMe channel respectively. SATA 3.0 offers only 6 Gb/s of bandwidth, while the latter uses a PCIE channel and can provide up to 32 Gb/s. NVMe, as a new-generation storage specification running over a PCIE channel with ample bandwidth, has great headroom and faster transmission speed.
AI chip: a chip for executing AI algorithms in the field of artificial-intelligence computing, used mainly in image processing, speech processing, face recognition and similar fields.
FPGA: the Field-Programmable Gate Array is a product of further development on the basis of Programmable devices. The circuit is a semi-custom circuit in the field of application-specific integrated circuits, not only overcomes the defects of the custom circuit, but also overcomes the defect that the number of gate circuits of the original programmable device is limited.
PCIE: a packet, serial, point-to-point based high performance interconnect bus protocol. Which defines a layered architecture comprising a software layer, a processing layer, a data link layer and a physical layer. The software layer is the key for keeping compatibility with the PCI bus, and the PCIE adopts the same use model and read-write communication model as the PCI and the PCI-X. Various common things are supported, such as memory read-write things, I/O read-write things and configuration read-write things. Moreover, since the address space model is not changed, the existing operating system and driver software can run on the PCIE system without modification. PCIE also supports a new transaction type called messaging transaction. This is because the PCIE protocol requires an alternative method to notify the host system to service device interrupts, power management, hot plug support, etc., without many sideband signals.
Example one
The embodiment of the utility model provides a computing board card that can be assembled in a small space while providing sufficient computing power for a server.
As shown in fig. 1, the computing board card 1 includes an adapter board card 100 and a computing-power board card 200.
The adapter board card 100 includes an M.2 socket 101, a bridge chip 102 and a PCIE interface 103, where the bridge chip 102 includes a first interface 112 and a second interface 122; the first interface 112 is connected to the M.2 socket 101, and the second interface 122 is connected to the PCIE interface 103.
The computing-power board card 200 includes an M.2 plug 201 and an AI chip 202; the AI chip 202 includes a third interface 212 connected to the M.2 plug 201, and the M.2 plug 201 is detachably connected to the M.2 socket 101.
The computing board card 1, also called an AI accelerator or computing card, is a module dedicated to processing the large number of computing tasks in artificial-intelligence applications (other, non-computing tasks are still handled by the processor); it computes on the input data and accelerates AI computation.
The adapter board card 100 is used to provide merging or splitting of data in the computing board card 1. Such a board can also be used to lead signals in or out for debugging and maintenance, to connect test instruments, or to provide a signal source; in the utility model, the adapter board card 100 is used to provide the merging or splitting of data.
Both the M.2 socket 101 and the M.2 plug 201 employ the M.2 data interface. In this embodiment, the PCIE interface 103 is connected to the M.2 socket 101 through the bridge chip 102, which improves the data transmission rate.
The PCIE interface 103 is a conductive-contact interface that adopts the point-to-point serial connection now common in the industry. Compared with the shared parallel architecture of PCI and earlier computer buses, each device has its own dedicated connection, does not need to request bandwidth from the whole bus, and can raise the data transmission rate to a very high frequency, achieving a high bandwidth that PCI cannot provide. Compared with the traditional PCI bus, which supports only one direction of transmission in a given period, PCIE's dual-simplex connection provides higher transmission rate and quality. Because AI-chip computation places high demands on the data interface, the utility model adopts a PCIE interface so that the interface can bear the real-time transmission of large volumes of image-processing data and the server host can function normally. The server may have a 1U, 2U or 4U specification, and the PCIE interface 103 may be chosen among PCIE x4, PCIE x8 or PCIE x16 interfaces according to the server specification. In this embodiment a PCIE x16 interface is preferred.
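As a rough sizing aid for the x4/x8/x16 choice described above, theoretical per-direction bandwidth scales linearly with lane count. The per-lane figures below are standard PCIE values (encoding overhead included), not measurements from this utility model:

```python
# Usable Gb/s per lane after encoding overhead, per PCIe generation
PER_LANE_GBPS = {
    "gen2": 4.0,    # 5 GT/s line rate, 8b/10b encoding
    "gen3": 7.877,  # 8 GT/s line rate, 128b/130b encoding
}

def slot_gbps(gen, lanes):
    """Theoretical per-direction bandwidth of a slot with `lanes` lanes."""
    return PER_LANE_GBPS[gen] * lanes

for lanes in (4, 8, 16):
    print(f"PCIe gen3 x{lanes}: {slot_gbps('gen3', lanes):.1f} Gb/s per direction")
```

An x16 slot thus offers four times the bandwidth of x4, which is why the embodiment prefers it for bulk image transfers.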
In a specific working process, the bridge chip 102 acquires first data from an external device through the PCIE interface 103, transmits the first data to the AI chip 202 for calculation, and then transmits the calculation result based on the first data to the external device. Alternatively, the bridge chip 102 acquires a plurality of second data from the external device, transmits them in parallel to a plurality of AI chips 202 for calculation, and then combines the calculation results based on the second data into the calculation result of the first data and transmits it to the external device. Here the first data is feature data of a preset event; in this embodiment it refers to image data and other AI algorithm tasks that need to be processed by the AI chip. The second data is the image data to be processed, or data obtained by the bridge chip 102 decomposing those tasks, specifically one or more of object, face and fingerprint recognition. The first/second calculation results are the judgment results of the AI chips 202 for the preset event.
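The scatter/gather data flow just described can be sketched in software: the batch ("first data") is split into per-chip work items ("second data"), the AI chips process them in parallel, and the results are merged before being returned to the host. The chip behaviour below is a stub, and all function names are illustrative, not part of the utility model:

```python
from concurrent.futures import ThreadPoolExecutor

def ai_chip_compute(item):
    # Stand-in for one AI chip's inference on one work item.
    return {"item": item, "result": f"processed:{item}"}

def bridge_dispatch(first_data, n_chips=4):
    """Split first_data across n_chips workers, compute in parallel, merge."""
    with ThreadPoolExecutor(max_workers=n_chips) as pool:
        # map() preserves input order, so the merged result is deterministic
        results = list(pool.map(ai_chip_compute, first_data))
    return results  # the merged "calculation result based on the first data"

frames = [f"frame{i}" for i in range(8)]
merged = bridge_dispatch(frames)
print(len(merged), merged[0]["result"])
```

The hardware analogue replaces the thread pool with the bridge chip's parallel links to the AI chips, but the split/compute/merge structure is the same.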
Optionally, the PCIE interface 103 includes a first power supply terminal 113 configured to provide working power for the bridge chip 102.
Optionally, the computing board card 1 further includes a second power supply terminal 111 for supplying power to the AI chip 202 through the M.2 socket 101 and the M.2 plug 201.
The computing board card of the first embodiment plugs a computing-power board card formed by a plurality of AI chips and a control chip onto the adapter board through the M.2 interface, so the computing power of the server can be conveniently configured as required.
Example two
As shown in fig. 2, the second embodiment is the same as the first except that a different computing-power board card 300 is provided.
The computing-power board card 300 includes an M.2 plug 301 and AI chips 302, and further includes a control chip 303 that manages the plurality of AI chips 302.
The AI chip 302 includes a fourth interface 313 connected to the control chip 303, and the control chip 303 includes a fifth interface 314 connected to the M.2 plug 301. In this embodiment, each computing-power board card 300 includes a plurality of AI chips 302 connected to the M.2 plug 301 through the control chip 303; specifically, the AI chips 302 are connected in series to the control chip 303.
The AI chip 302 and the control chip 303 are connected through the fourth interface 313, which is used to transmit large amounts of data between them. Because of this heavy data exchange, the fourth interface 313 adopts a dedicated data interface; in this embodiment an FIP data interface is preferred.
The control chip 303 of this embodiment may be a Field-Programmable Gate Array (FPGA) chip for artificial-intelligence computation, an Application-Specific Integrated Circuit (ASIC) chip for artificial-intelligence computation, a Graphics Processing Unit (GPU) chip, or the like; this embodiment adopts an FPGA control chip. It should be noted that the control chip 303 and the AI chips 302 may adopt any suitable interconnection; optionally, in this embodiment, the plurality of AI chips 302 are connected to the control chip 303 in series.
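The series topology can be modelled as a daisy chain in which the control chip talks only to the first AI chip and data is forwarded chip to chip. The forwarding behaviour below is an illustrative assumption used to show the topology, not the actual FPGA design:

```python
class AIChip:
    def __init__(self, chip_id, downstream=None):
        self.chip_id = chip_id
        self.downstream = downstream  # next chip in the series chain

    def process(self, data):
        data = data + [self.chip_id]              # each chip adds its stage
        if self.downstream is not None:
            return self.downstream.process(data)  # forward down the chain
        return data                               # last chip returns the result

def build_chain(n):
    """The control chip's view: one head chip leading n-1 downstream chips."""
    head = None
    for chip_id in reversed(range(n)):
        head = AIChip(chip_id, head)
    return head

chain = build_chain(3)
print(chain.process([]))   # data visits chips 0, 1, 2 in order
```

A series chain keeps the control chip's pin count low (one link) at the cost of latency growing with chain length, in contrast to the parallel attachment used between the bridge chip and the boards.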
The computing-power board card 300 further includes a power management chip 304 for converting, distributing and managing electric power for the control chip 303 and the plurality of AI chips 302. A power management chip (Power Management Integrated Circuit) performs conversion, distribution, detection and other power-management functions in an electronic system and is mainly responsible for driving the subsequent stage to output power; its performance directly affects the performance of the server host. Commonly used power management chips include the HIP6301, IS6537, RT9237, ADP3168, KA7500, TL494 and/or SLG46722 CPLD; in this embodiment the power management chip 304 is preferably an SLG46722 CPLD.
In the second embodiment, a control chip and a power circuit are added to the computing-power board card, so that it can reasonably distribute computing tasks and electric energy.
Example three
In the third embodiment, the structure of the computing board card 1 within the server host is further refined on the basis of the first and second embodiments.
As shown in fig. 3, each computing board card 1 further includes a PCIE adapter ribbon cable 400, a fixing board 500 and a heat dissipation device 600.
The server host 2 comprises PCIE slots 3, a server motherboard 4, a power supply 5, a memory 6, a processor 7, a disk array 8 and a fixed card seat 9. The PCIE slots 3 are disposed on the surface of the server motherboard 4; the power supply 5 is electrically connected with the server motherboard 4; the memory 6 and the processor 7 are disposed on the surface of the server motherboard 4; the disk array 8 is electrically connected with the memory 6 and the processor 7; and the fixed card seat 9 is disposed on the side wall of the server host 2.
The plurality of computing boards 1 are stacked in the server host 2.
The PCIE adapter ribbon cable 400 electrically connects the computing board cards 1 to the server motherboard 4 through the PCIE slots 3. One end of the cable 400 is connected to the computing board card 1 and the other end is inserted into a PCIE slot 3, so that the computing board card 1 is electrically connected to the server host 2.
The end of the computing board card 1 close to the side wall of the server host 2 further includes the stacked fixing board 500. The fixing board 500 is located at that end and can be connected with the fixed card seat 9 to mechanically fix the computing board card 1 to the side of the server host, preventing the computing board card 1 from moving inside the server host 2 and damaging components. Meanwhile, because the interior of the server host is cramped, stacking the computing board cards 1 reduces the space they occupy in the server host 2; with the fixing board 500, computing board cards 1 can be installed in the server host 2 as needed, increasing computing power.
The heat dissipation device 600 is a heat-dissipating cover fixedly covering the surface of the computing board card 1, used to promote heat dissipation and to protect the components on the board surface. The computing board card 1 generates heat when executing computing tasks, and that heat may exceed the warning temperature, causing unstable circuit operation, shortened service life, and even damage to components; a heat dissipation device is therefore required to absorb the heat and keep each component at a normal temperature. The heat dissipation device 600 may be a fan and/or a water-cooled radiator: a fan accelerates heat dissipation by accelerating air convection, while a water-cooled radiator uses pump-driven forced circulation of liquid to carry heat away, with the advantages of stable cooling and little dependence on the environment. A water-cooled radiator has a large heat capacity, relatively small thermal fluctuation and better stability, so in this embodiment the heat dissipation device 600 is preferably a water-cooled radiator covering the computing board card. In addition, the cover on the surface of the computing board card 1 also isolates dust, preventing dust from falling onto the board surface and causing short circuits and the like, and keeping the computing board card 1 in a normal working state.
The PCIE slots 3 are used to fix the PCIE adapter ribbon cables 400. The PCIE slots 3 specifically form an array of several slots, and each PCIE slot 3 can hold one computing board card 1 by plugging in its PCIE adapter ribbon cable 400. When a plurality of computing board cards 1 are installed, the server can form a resource pool for AI computation. It should be noted that, through the PCIE slots 3, computing board cards 1 can be installed on the server motherboard 4 in a hot-pluggable manner and their number can be adjusted as needed, so the server host 2 can conveniently scale the AI computing resource pool and configure the number of computing board cards 1 on demand, improving the computing power of the server host.
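The hot-pluggable resource pool described above behaves like a registry of slots whose aggregate computing power grows and shrinks with the boards installed. A toy model, in which the per-board TOPS figures are placeholders and not values from this utility model:

```python
class ComputePool:
    def __init__(self, n_slots):
        self.slots = [None] * n_slots  # one PCIE slot per computing board card

    def plug(self, slot, tops):
        """Hot-plug a board with the given compute rating into a free slot."""
        if self.slots[slot] is not None:
            raise ValueError(f"slot {slot} occupied")
        self.slots[slot] = tops

    def unplug(self, slot):
        """Hot-remove the board in the given slot."""
        self.slots[slot] = None

    def total_tops(self):
        """Aggregate computing power of the pool."""
        return sum(t for t in self.slots if t is not None)

pool = ComputePool(n_slots=8)
pool.plug(0, tops=16)
pool.plug(1, tops=16)
print(pool.total_tops())   # 32
pool.unplug(0)
print(pool.total_tops())   # 16
```

Scaling the pool is then just a matter of plugging or unplugging boards, which mirrors the on-demand configuration the embodiment claims.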
The power supply 5 is electrically connected with the computing board card 1, the server motherboard 4, the memory 6, the processor 7 and the disk array 8 respectively, and supplies power to these components. The power supply 5 also supplies power to the control chips and AI chips of the first and second embodiments.
The memory 6 is electrically connected with the processor 7. The memory 6 may be used to store server programs and/or modules, and the processor 7 implements the various functions of the server by running or executing the server programs and/or modules stored in the memory 6. The memory 6 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, the application programs required for at least one function, and the like, while the data storage area may store data created during use of the terminal, and the like. Further, the memory 6 may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 7 is electrically connected to the computing board card 1 and is configured to coordinate and control the interfaces of the AI chips 202. The processor 7 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or any conventional processor. The processor 7 is the control centre of the server, connecting the various parts of the whole computer apparatus through various interfaces and lines.
Since the computing board cards in the first and second embodiments do not include a storage unit, the server host of this third embodiment further provides a disk array 8. The disk array 8 is a high-capacity disk group formed by combining a plurality of independent disks; the performance of the whole disk system is improved by the additive effect of the individual disks supplying data. As shown in the figure, an internal disk array card is provided in the server host 2 in this embodiment for storing data after the computing board card 1 completes its calculation.
In the working process of the computing board card 1, the processor calls a server program and/or module in the memory to obtain first data from the disk array. The control chip 203 receives the first data from the processor, decomposes it into a plurality of second data according to the number of AI chips 202, and distributes the second-data computing tasks to the AI chips 202 through the first interface 112 for calculation; the AI chips 202 return the second-data results to the control chip 203. The control chip 203 merges the second-data results into the first-data result, and the PCIE interface 103 transmits the received first-data result to the external device, that is, the disk array. Here the first data refers to feature data of a preset event; in this embodiment, image data and other AI algorithm tasks that need to be processed by the AI chips 202. The second data is obtained by the control chip 203 decomposing that image data or those tasks, specifically one or more of object, face and fingerprint recognition. The first/second calculation results are the judgment results of the AI chips 202 for the preset event.
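The decomposition step in this workflow, splitting the first data into as many second-data chunks as there are AI chips and merging the results back in order, can be sketched as follows. This is a partitioning sketch only, with illustrative function names, not the control chip's actual logic:

```python
def split_for_chips(data, n_chips):
    """Divide data into n_chips near-equal contiguous chunks."""
    k, r = divmod(len(data), n_chips)
    chunks, start = [], 0
    for i in range(n_chips):
        end = start + k + (1 if i < r else 0)  # first r chunks get one extra item
        chunks.append(data[start:end])
        start = end
    return chunks

def merge_results(chunks):
    """Concatenate per-chip results back into the first-data result."""
    return [x for chunk in chunks for x in chunk]

pixels = list(range(10))
chunks = split_for_chips(pixels, 4)
print(chunks)                           # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
print(merge_results(chunks) == pixels)  # True
```

Because the chunks are contiguous and merged in the same order they were issued, the round trip reconstructs the original first data exactly, which is what the control chip's merge step relies on.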
In the third embodiment, a fixing board and a heat dissipation device are added to the computing board card 1, so that it occupies less space in the server chassis, dissipates heat promptly, and works normally.
It should be noted that the foregoing is only a preferred embodiment of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail with reference to the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the scope of the present invention.

Claims (10)

1. A computing board card, characterized by comprising an adapter board card and a computing-power board card, wherein
the adapter board card comprises an M.2 socket, a bridge chip and a PCIE interface; the bridge chip comprises a first interface and a second interface; the first interface is connected with the M.2 socket and the second interface with the PCIE interface;
the computing-power board card comprises an M.2 plug and an AI chip; the AI chip comprises a third interface connected with the M.2 plug, and the M.2 plug is detachably connected with the M.2 socket;
the bridge chip acquires first data from an external device through the PCIE interface, transmits the first data to the AI chip for calculation, and transmits a calculation result based on the first data to the external device; or the bridge chip acquires a plurality of second data from the external device, transmits the plurality of second data in parallel to a plurality of AI chips for calculation, and transmits a calculation result based on the first data to the external device, wherein the first data is feature data of a preset event, and the calculation result is the AI judgment result of the preset event.
2. The computing board card of claim 1, wherein there are a plurality of computing-power board cards, connected to the bridge chip in parallel.
3. The computing board card of claim 2, wherein each computing-power board card further comprises a control chip and a plurality of AI chips, the plurality of AI chips being connected to the M.2 plug through the control chip.
4. The computing board of claim 3, wherein the AI chips are serially connected to the control chip.
5. The computing board of claim 2, wherein the PCIE interface includes a power supply terminal configured to provide a working power supply for the bridge chip and the AI chip.
6. The computing board of claim 1, wherein the first data is image data and the second data is one or more of an object, a human face, and a fingerprint.
7. The computing board card of claim 1, further comprising a power circuit for powering the AI chip via the M.2 socket and the M.2 plug.
8. The computing board card of claim 1, wherein there are a plurality of computing board cards, each comprising a PCIE adapter ribbon cable through which the computing board cards are electrically connected to the server motherboard.
9. The computing board of claim 1, wherein the end of the computing board proximate to the side wall of the server chassis further comprises a securing clip for securing the computing board to the surface of the server chassis.
10. The computing board card of claim 1, further comprising a heat dissipation device covering the surface of the computing board card.
CN201920852931.2U 2019-06-06 2019-06-06 Calculation board card Active CN209879419U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201920852931.2U CN209879419U (en) 2019-06-06 2019-06-06 Calculation board card


Publications (1)

Publication Number Publication Date
CN209879419U true CN209879419U (en) 2019-12-31

Family

ID=68948429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201920852931.2U Active CN209879419U (en) 2019-06-06 2019-06-06 Calculation board card

Country Status (1)

Country Link
CN (1) CN209879419U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134206A (en) * 2019-06-06 2019-08-16 深圳云朵数据科技有限公司 A kind of calculating board
CN110134206B (en) * 2019-06-06 2024-04-23 深圳云朵数据科技有限公司 Computing board card


Legal Events

Date Code Title Description
GR01 Patent grant