CN210534653U - AI calculation server

AI calculation server

Info

Publication number
CN210534653U
Authority
CN
China
Prior art keywords: server, area, heat dissipation, data, board
Legal status
Active
Application number
CN201920852920.4U
Other languages
Chinese (zh)
Inventor
宋粮勇
刘云
刘青青
闫骏
阮剑
Current Assignee
Shenzhen Yunduo Data Technology Co ltd
Original Assignee
Shenzhen Yunduo Data Technology Co ltd
Priority date: 2019-06-06
Filing date: 2019-06-06
Publication date: 2020-05-15
Application filed by Shenzhen Yunduo Data Technology Co ltd
Priority to CN201920852920.4U
Application granted
Publication of CN210534653U

Landscapes

  • Multi Processors (AREA)

Abstract

An embodiment of the utility model discloses an AI calculation server, comprising: a case comprising a first area, a second area and a third area arranged in sequence; a server motherboard arranged in the first area; an AI computing board card arranged parallel to the server motherboard and electrically connected to it through a PCIE (peripheral component interconnect express) adapter flat cable; a hard disk array arranged in the third area and electrically connected to the server motherboard through a data flat cable; and a first heat dissipation device arranged in the second area. The first heat dissipation device comprises a heat dissipation bracket separating the first area from the third area; the bracket contains a heat dissipation channel communicating the first area with the third area, and a heat dissipation fan is fixed in the channel. Multiple AI computing board cards can thus be stacked transversely in the narrow space of the server, and the heat dissipation channel provides a good airflow path between the densely packed boards in the case, ensuring that the case always operates within its working temperature.

Description

AI calculation server
Technical Field
Embodiments of the utility model relate to the field of AI computation, and in particular to an AI calculation server.
Background
With the rapid development of the internet and the information industry, voice, image and video data have grown explosively. Traditional manual data processing is gradually being replaced by big-data processing, and the application of artificial intelligence (AI) technology has once again leaped the capability of big-data analysis and processing.
Deep learning has triggered the rapid development of artificial-intelligence applications, leading humanity from the information era into the intelligent era. Deep learning is essentially a machine-learning technique that requires powerful hardware computing capability to perform complex data processing and operations. For data processing and operations on this scale, existing artificial-intelligence solutions use dedicated AI acceleration chips to perform deep-learning operations; but even a single ultra-high-performance AI acceleration chip falls far short of the computational requirement.
In the prior art, AI calculation servers are large-scale installations that generally build a compute array out of a large number of GPUs; at present there is no powerful AI calculation server that fits in a single chassis.
SUMMARY OF THE UTILITY MODEL
The utility model provides an AI calculation server in which a plurality of computing power board cards can be stacked transversely in the narrow space of the server, forming a miniature image-feature processing center.
An embodiment of the utility model provides an AI calculation server, including:
a case comprising a first area, a second area and a third area which are sequentially arranged;
the server motherboard and the image processing board card are arranged in the first area, the image processing board card being connected to the graphics card interface of the server motherboard;
the AI computing board card is arranged in the third area and is electrically connected to the server motherboard through a PCIE (peripheral component interconnect express) adapter flat cable;
the hard disk array is arranged in the third area and is electrically connected to the server motherboard through a data flat cable;
the first heat dissipation device is arranged in the second area and comprises a heat dissipation bracket separating the first area from the third area; the heat dissipation bracket contains a heat dissipation channel communicating the first area with the third area, and the first heat dissipation device further comprises a heat dissipation fan fixed in the heat dissipation channel.
Further, the AI calculation server further comprises a power supply module arranged in the first area; one end of the power supply module is fixed at an air inlet of the case, and the air outlet at its other end faces the first heat dissipation device.
Further, there are a plurality of AI computing board cards; the AI computing board cards are stacked, and the side of each AI computing board card is fixed to the side wall of the server.
Further, the AI calculation server further comprises a processor and a memory mounted on the server motherboard.
Further, the AI calculation server further comprises a second heat dissipation device connected to the processor.
Further, the AI computing board card comprises an adapter board card and a computing power board card;
the adapter board card comprises an M.2 socket, a bridge chip and a PCIE interface, wherein the bridge chip comprises a first interface and a second interface; the first interface is connected to the M.2 socket, and the second interface is connected to the PCIE interface;
the computing power board card comprises an M.2 plug and an AI chip; the AI chip comprises a data interface connected to the M.2 plug, and the M.2 plug is detachably connected to the M.2 socket;
the bridge chip acquires first data from an external device through the PCIE interface, transmits the first data to the AI chip for calculation, and transmits the calculation result based on the first data back to the external device; alternatively, the bridge chip acquires a plurality of second data from the external device, transmits them in parallel to a plurality of AI chips for calculation, and transmits the calculation result based on the first data back to the external device, where the first data is feature data of a preset event and the calculation result is the AI judgment result for the preset event.
Further, the computing power board card further comprises a control chip; each computing power board card comprises a plurality of AI chips, which are connected to the M.2 plug through the control chip.
Further, there are a plurality of computing power board cards, connected in parallel to the bridge chip.
Further, each computing power board card comprises a plurality of AI chips, connected in parallel to the control chip.
Further, the heat dissipation bracket also comprises a first cable hole and/or a second cable hole; the first cable hole is for passing the PCIE adapter flat cable, and the second cable hole is for passing the data flat cable.
The utility model integrates an image processing board card and an AI computing board card inside the chassis, so the server of the utility model can form a miniature image-feature processing center: for example, serving a specific district as an image-feature computing center that analyzes the image features of each frame of surveillance video in real time, tags the video with features, and compresses it for storage.
Drawings
Fig. 1 is a schematic structural diagram of an AI calculation server according to the first embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an AI computing board card in the second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a computing power board card in the second embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but the orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, the first speed difference may be referred to as a second speed difference, and similarly, the second speed difference may be referred to as a first speed difference, without departing from the scope of the present application. The first speed difference and the second speed difference are both speed differences, but they are not the same speed difference. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
Fig. 1 is a schematic structural diagram of the AI calculation server provided by the first embodiment of the present invention; the embodiment is applicable where a plurality of computing power board cards must be installed in the narrow space of a server.
The AI calculation server provided by the embodiment includes a case 1, a server motherboard 2, an AI calculation board 3, a hard disk array 4, and a first heat dissipation device 5.
The case 1 comprises a first area 101, a second area 102 and a third area 103, which are sequentially arranged.
The server motherboard 2 and the image processing board card 6 are arranged in the first area 101, and the image processing board card 6 is connected to the graphics card interface of the server motherboard 2.
The AI computing board card 3 is arranged in the third area 103 and is electrically connected to the server motherboard 2 through a PCIE (peripheral component interconnect express) adapter flat cable 7.
The hard disk array 4 is arranged in the third area 103 and is electrically connected to the server motherboard 2 through a data flat cable.
The first heat dissipation device 5 is arranged in the second area 102 and comprises a heat dissipation bracket 501 separating the first area 101 from the third area 103; the heat dissipation bracket 501 contains a heat dissipation channel 5011 communicating the first area 101 with the third area 103, and the first heat dissipation device 5 further comprises a heat dissipation fan fixed in the heat dissipation channel 5011.
In this embodiment, the case 1 generally includes a housing, brackets, and the various switches and indicator lights on the panel. The housing, made of steel plate combined with plastic, is hard and mainly protects the components inside the case 1. The brackets mainly fix the motherboard, the power supply and the various components, and partition the case 1 into the first area 101, the second area 102 and the third area 103.
The server motherboard 2 is fixed in the first area 101 of the case 1, and the image processing board card 6 communicates with the CPU through the graphics card interface on the server motherboard 2. The AI computing board card 3 is installed in the third area 103, parallel to the server motherboard 2, and is electrically connected to the server motherboard 2 through the PCIE adapter flat cable 7. The PCIE adapter flat cable 7 plugs into a PCIE slot of the server motherboard 2, receives data sent by the CPU of the server motherboard 2, and forwards it to the AI computing board card 3; it also returns the operation results of the AI computing board card 3 to the CPU. In the prior art, the PCIE interface of an AI computing board connects directly to a PCIE slot of the host and receives data from the host CPU, so the computing capability of the AI board is fixed; to raise image processing capability, external GPUs have to be added. The PCIE adapter flat cable 7 of this embodiment can provide several PCIE slots for AI computing board cards 3, i.e. several AI computing board cards can be used in one server at the same time, which greatly improves flexibility and reduces hardware cost.
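By way of illustration only (this sketch is not part of the utility model): on a Linux host, each AI computing board card attached through such a PCIE slot appears as an ordinary PCIe endpoint, so host software can discover however many boards are present. A minimal sketch, assuming Linux sysfs and an invented vendor ID:

```python
# Sketch: enumerate AI computing board cards on the PCIe bus via Linux sysfs.
# The vendor ID is a placeholder; a real driver would match the board's
# actual PCI vendor/device IDs.
from pathlib import Path

AI_BOARD_VENDOR_ID = "0x1e3d"  # hypothetical vendor ID, for illustration only

def find_ai_boards(sysfs_root: str = "/sys/bus/pci/devices") -> list[str]:
    """Return PCI addresses (e.g. '0000:03:00.0') of all matching boards."""
    root = Path(sysfs_root)
    if not root.is_dir():  # not on Linux, or sysfs unavailable
        return []
    boards = []
    for dev in root.iterdir():
        vendor_file = dev / "vendor"
        if vendor_file.is_file() and vendor_file.read_text().strip() == AI_BOARD_VENDOR_ID:
            boards.append(dev.name)
    return sorted(boards)

if __name__ == "__main__":
    boards = find_ai_boards()
    print(f"found {len(boards)} AI computing board card(s): {boards}")
```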
The hard disk array 4 is arranged in the third area 103 of the case 1; the same data is stored redundantly in different places across multiple hard disks. By placing data on multiple hard disks, input and output operations can be overlapped in a balanced manner, improving performance. Storing redundant data also increases fault tolerance, since the array can survive the failure of an individual disk, effectively raising the mean time between failures (MTBF) of the storage as a whole. The hard disk array can provide functions such as online capacity expansion, dynamic modification of the array level, automatic data recovery, drive roaming and caching, offering a solution for performance, data protection, reliability, availability and manageability.
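The fault-tolerance claim can be made concrete with standard reliability arithmetic; a back-of-envelope sketch (not from the patent), assuming independent disk failures and a fixed rebuild window:

```python
# Sketch: mean time to data loss (MTTDL) of a two-disk mirror. Data is lost
# only if the second disk fails while the first is still being rebuilt.
# Figures are illustrative spec-sheet values, not measurements.

def mttdl_mirror(disk_mtbf_h: float, rebuild_h: float) -> float:
    """Classic RAID-1 approximation: MTBF^2 / (2 * rebuild time)."""
    return disk_mtbf_h ** 2 / (2 * rebuild_h)

disk_mtbf = 1_000_000.0  # hours, a typical spec-sheet MTBF for one disk
rebuild = 24.0           # hours to replace and rebuild a failed disk

print(f"single disk MTBF : {disk_mtbf:,.0f} h")
print(f"mirrored MTTDL   : {mttdl_mirror(disk_mtbf, rebuild):,.0f} h")
```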
Servers under heavy computational load use at least two CPUs, and their interiors mostly adopt SCSI disk arrays, so a great deal of heat is generated inside; good heat dissipation is a prerequisite for an excellent server case. Heat dissipation performance shows itself mainly in three aspects: the number and placement of fans, the rationality of the heat dissipation channel, and the choice of chassis material. The first heat dissipation device 5 of this embodiment is arranged in the second area 102 and includes the heat dissipation bracket 501 that partitions the case 1 into the first area 101, the second area 102 and the third area 103. The heat dissipation bracket 501 isolates the hard disks from the motherboard, while its heat dissipation channels 5011 communicate the first area 101 and the third area 103, giving the hard disks and the motherboard independent heat dissipation channels 5011. Heat dissipation fans fixed in the channels 5011 exhaust air through them, so the air reaching the motherboard and the power supply is no longer hot air; heat transfer and mutual interference between zones are avoided, each partition dissipates heat independently and optimally, the cooling of the hard disks, motherboard and power supply is improved, and shutdowns are avoided.
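The zoned-cooling idea has a simple software analogue: each thermal zone gets its own sensor and fan, controlled independently. A minimal sketch (illustrative only; the zone names, temperatures and thresholds are invented, and a real chassis would read sensors and drive fans through its BMC or PWM hardware):

```python
# Sketch: independent fan control per thermal zone, echoing the separate
# heat dissipation channels for the motherboard and disk areas.

def read_zone_temp(zone: str) -> float:
    """Placeholder for a real sensor read (e.g. hwmon or a BMC query)."""
    return {"motherboard": 55.0, "disk_array": 42.0}[zone]

def set_fan_duty(zone: str, duty: float) -> None:
    """Placeholder for a real PWM/BMC write."""
    print(f"[{zone}] fan duty -> {duty:.0%}")

def fan_duty_for(temp_c: float, lo: float = 35.0, hi: float = 70.0) -> float:
    """Linear ramp: 20% floor below `lo`, full speed at or above `hi`."""
    return min(1.0, max(0.2, (temp_c - lo) / (hi - lo)))

# Each zone is cooled independently, so hot air from one zone never
# degrades the control decision for another.
for zone in ("motherboard", "disk_array"):
    set_fan_duty(zone, fan_duty_for(read_zone_temp(zone)))
```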
In the embodiment of the utility model, the AI computing board card 3 is electrically connected to the server motherboard 2 through the PCIE adapter flat cable 7, and the heat dissipation channel 5011 communicating the first area 101 and the third area 103 is provided. This solves the problems of wasted server hardware resources or insufficient system computing power, and of insufficient server heat dissipation: a plurality of computing power board cards 302 can be stacked transversely in the narrow space of the server, while the heat dissipation channel 5011 forms a good airflow path between the densely packed boards in the case, ensuring that the case 1 always operates within its working temperature. With an image processing board card 6 and an AI computing board card 3 integrated in the case 1, the server of the utility model can form a miniature image-feature processing center, e.g. serving a specific district as an image-feature computing center that analyzes the image features of each frame of surveillance video in real time, tags the video with features, and compresses it for storage.
In an alternative embodiment, the AI calculation server further includes a power supply module 8 arranged in the first area 101; one end of the power supply module 8 is fixed at the air inlet of the case 1, and the air outlet at its other end faces the first heat dissipation device 5.
The AI calculation server further comprises a processor 9 and a memory 10 mounted on the server motherboard 2.
In this embodiment, the processor 9 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. The processor 9 is the control center of the server, connecting all parts of the whole computer apparatus through various interfaces and lines.
The memory 10 may be used to store server programs and/or modules; the processor 9 implements the various functions of the server by running or executing the programs and/or modules stored in the memory 10 and calling up the data stored in it. The memory 10 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, the application programs required for at least one function, and so on; the data storage area may store data created according to the use of the terminal. In addition, the memory 10 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The AI calculation server further comprises a second heat dissipation device connected to the processor 9.
In this embodiment, the power supply module 8 is arranged in the first area 101, with its air inlet located at the center of the lower part of the rear side wall of the server case. Placing the air inlet in the center means the incoming air also enters at the center of the power supply; the air outlet at the other end faces the first heat dissipation device 5, so together with the heat dissipation channel 5011 and the fan inside the first heat dissipation device 5, the ventilation achieves comprehensive, balanced cooling and further improves the heat dissipation effect.
The second heat dissipation device, connected to the processor 9, cools the processor 9 independently and with high efficiency, avoiding downtime.
Further, the heat dissipation bracket 501 also includes a first cable hole and/or a second cable hole; the first cable hole is for passing the PCIE adapter flat cable 7, and the second cable hole is for passing the data flat cable.
Example two
The second embodiment refines part of the structure on the basis of the first embodiment, specifically as follows.
As shown in Fig. 2, there are a plurality of AI computing board cards 3; the AI computing board cards 3 are stacked, and the side of each AI computing board card 3 is fixed to the side wall of the server.
The AI computing board card 3 includes an adapter board card 301 and a computing power board card 302. The adapter board card 301 includes an M.2 socket 3011, a bridge chip 3012 and a PCIE interface 3013; the bridge chip 3012 includes a first interface connected to the M.2 socket 3011 and a second interface connected to the PCIE interface 3013.
There are a plurality of computing power board cards 302, connected in parallel to the bridge chip 3012.
In this embodiment, the M.2 interface is a new interface specification introduced by Intel to replace mSATA. M.2 interfaces come in two types, supporting the SATA channel and the NVMe channel respectively: SATA 3.0 offers only 6 Gb/s of bandwidth, whereas the NVMe type runs over the PCI-E channel and can provide up to 32 Gb/s. As a new-generation storage specification, NVMe benefits from the ample PCI-E bandwidth, so transmission is much faster. Therefore, in this embodiment the PCIE interface 3013 is connected to the M.2 socket 3011 through the bridge chip 3012 to improve the data transmission rate.
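The bandwidth gap can be checked with simple line-rate arithmetic (general interface facts, not figures taken from the patent):

```python
# Sketch: effective payload bandwidth of SATA 3.0 vs. an M.2 NVMe link on
# PCIe 3.0 x4, accounting for each link's line encoding.
sata3_gbps = 6.0                     # SATA 3.0 line rate
sata3_payload = sata3_gbps * 8 / 10  # 8b/10b encoding -> ~4.8 Gb/s usable

pcie3_lane_gtps = 8.0                # PCIe 3.0: 8 GT/s per lane
lanes = 4
pcie3_payload = pcie3_lane_gtps * lanes * 128 / 130  # 128b/130b encoding

print(f"SATA 3.0    : ~{sata3_payload:.1f} Gb/s (~{sata3_payload / 8:.2f} GB/s)")
print(f"PCIe 3.0 x4 : ~{pcie3_payload:.1f} Gb/s (~{pcie3_payload / 8:.2f} GB/s)")
```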
The PCIE interface 3013 connected to the second interface of the bridge chip 3012 receives data sent by the host CPU; the data travels through the bridge chip 3012 and the M.2 socket 3011 connected to its first interface to the computing power board card 302 for processing, and the processing results returned by the computing power board card 302 travel back the same way. The AI computing board cards 3 are stacked, with the side of each fixed to the side wall of the server.
Further, there are a plurality of computing power board cards 302, connected in parallel to the bridge chip 3012. With multiple computing power board cards 302 installed, the AI calculation server can form a resource pool for artificial-intelligence computation. Note that a computing power board card 302 is installed in the AI calculation server by plugging into an M.2 interface, so the server can adjust the number of installed computing power board cards 302 as needed and thereby conveniently adjust the scale of the resource pool of the AI computing board card 3.
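Purely as an illustration of the resource-pool idea (not code from the patent; board discovery and dispatch are stubbed out): a dispatcher can size itself to however many computing power board cards are plugged in, so adding or removing an M.2 board simply changes the pool scale.

```python
# Sketch: a compute pool whose size tracks the number of installed
# computing power board cards. Discovery and dispatch are stubs.
from concurrent.futures import ThreadPoolExecutor

def discover_boards() -> list[str]:
    """Stub: in reality this would enumerate the M.2/PCIe-attached boards."""
    return ["board0", "board1", "board2"]

def run_on_board(board: str, task: int) -> str:
    """Stub for submitting one compute task to one board."""
    return f"task {task} done on {board}"

boards = discover_boards()
# One worker per installed board: the pool scale follows the hardware.
with ThreadPoolExecutor(max_workers=len(boards)) as pool:
    for result in pool.map(lambda t: run_on_board(boards[t % len(boards)], t), range(8)):
        print(result)
```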
Further, to increase the computing power within one AI calculation server, there may be a plurality of AI computing board cards 3: the boards are stacked, and the side of each board is fixed to the side wall of the server.
The second embodiment further develops the function of the AI computing board card 3 on the basis of the above technical solution: because a server CPU usually exposes standard PCIE interfaces, the adapter board card 301 converts the M.2 interface to a PCIE interface, and since the computing power board cards 302 are installed by plugging into M.2 interfaces, the scale of the resource pool of the AI computing board card 3 can be adjusted conveniently.
EXAMPLE III
The third embodiment further refines the AI computing board card 3 on the basis of the second embodiment, specifically as follows.
As shown in Fig. 3, the computing power board card 302 includes an M.2 plug 3021 and an AI chip 3022; the AI chip 3022 includes a data interface connected to the M.2 plug 3021, and the M.2 plug 3021 is detachably connected to the M.2 socket 3011.
The bridge chip 3012 obtains first data from an external device through the PCIE interface 3013, transmits the first data to the AI chip 3022 for calculation, and then transmits the calculation result based on the first data back to the external device. Alternatively, the bridge chip 3012 obtains a plurality of second data from the external device, transmits them in parallel to a plurality of AI chips 3022 for calculation, and then transmits the calculation result based on the first data back to the external device, where the first data is feature data of a preset event and the calculation result is the AI judgment result for the preset event.
The computing power board card 302 further includes a control chip 3023; each computing power board card 302 includes a plurality of AI chips 3022, which are connected to the M.2 plug 3021 through the control chip 3023.
Each computing power board card 302 includes a plurality of AI chips 3022, and the plurality of AI chips 3022 are connected in parallel to the control chip 3023.
In this embodiment, the computing power board card 302 includes a plurality of AI chips 3022. Because a large amount of data is exchanged between the AI chips 3022 and the control chip 3023, a special data interface is used: the embodiment of the utility model adopts an FIP interface. The plurality of AI chips 3022 are connected to the M.2 plug 3021 through the control chip 3023, and are connected in parallel to the control chip 3023 through the FIP interface.
The PCIE interface 3013 connected to the second interface of the bridge chip 3012 receives data sent by the host CPU; the data passes through the M.2 socket 3011 connected to the first interface of the bridge chip 3012, through the M.2 plug 3021 and its data interface, to the AI chip 3022 for processing; the operation results of the AI chip 3022 are then returned to the CPU along the original transmission path.
The bridge chip 3012 obtains first data from an external device through the PCIE interface 3013 and transmits it through the M.2 socket 3011 to the AI chip 3022 for calculation. The first data is feature data of a preset event: the control chip 3023 takes out an unprocessed column, performs a feature-column check on it and identifies the mark of the feature-data type it carries; the AI chip 3022 then looks up the corresponding feature-processing algorithm in the feature-engineering knowledge base and processes the column with that algorithm. AI operations can be optimized by reducing the number of calls and the volume of data computed. In the embodiment of the utility model, the control chip 3023 may also split one complete piece of first data into a plurality of second data. Data dependencies exist between the second data, a save point can be set for the result of each second datum, and the processing of each second datum can be restarted individually; each second datum can reside on the same compute node and be processed by an AI chip 3022, using distributed parallel computation as far as possible to raise the degree of execution concurrency. After processing completes, the control chip 3023 merges the AI judgment results of the plurality of second data into the AI judgment result of the preset event.
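A minimal software analogue of this split-compute-merge flow (illustrative only: the chunking rule, the savepoint store and the merge step are invented for the sketch; the patent describes this as hardware behavior of the control chip 3023 and the AI chips 3022):

```python
# Sketch: split one "first data" item into several "second data" chunks,
# process the chunks in parallel with a savepoint per chunk, then merge
# the per-chunk results into one overall judgment.
from concurrent.futures import ProcessPoolExecutor

savepoints: dict[int, float] = {}  # chunk index -> saved result

def process_chunk(args: tuple[int, list[float]]) -> tuple[int, float]:
    idx, chunk = args
    return idx, sum(x * x for x in chunk)  # stand-in for an AI-chip kernel

def analyze(first_data: list[float], n_chunks: int = 4) -> float:
    step = len(first_data) // n_chunks  # assumes an even split, for brevity
    chunks = [(i, first_data[i * step:(i + 1) * step]) for i in range(n_chunks)]
    todo = [c for c in chunks if c[0] not in savepoints]  # a restart skips saved work
    with ProcessPoolExecutor() as pool:
        for idx, result in pool.map(process_chunk, todo):
            savepoints[idx] = result  # savepoint per chunk
    return sum(savepoints.values())   # merge into one judgment

if __name__ == "__main__":
    print(analyze([0.5] * 1024))
```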
The third embodiment further develops the function of the computing power board card 302 on the basis of the above technical solution: a large amount of data is exchanged between the AI chips 3022 and the control chip 3023, so a special data interface is required, and the control chip 3023 converts the FIP interface to the M.2 interface, allowing the computing power of the computing power board card 302 to be scaled.
It should be noted that the foregoing is only a preferred embodiment of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail with reference to the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the scope of the present invention.

Claims (10)

1. An AI computation server, comprising:
a case including a first area, a second area and a third area which are sequentially arranged;
a server motherboard and an image processing board card arranged in the first area, the image processing board card being connected to a graphics card interface of the server motherboard;
an AI computing board card arranged in the third area and electrically connected to the server motherboard through a PCIE (peripheral component interconnect express) adapter flat cable;
a hard disk array arranged in the third area and electrically connected to the server motherboard through a data flat cable; and
a first heat dissipation device arranged in the second area, comprising a heat dissipation bracket separating the first area from the third area, the heat dissipation bracket containing a heat dissipation channel communicating the first area with the third area, and the first heat dissipation device further comprising a heat dissipation fan fixed in the heat dissipation channel.
2. The AI calculation server of claim 1, further comprising a power supply module arranged in the first area, wherein one end of the power supply module is fixed at an air inlet of the case and the air outlet at the other end faces the first heat dissipation device.
3. The AI calculation server of claim 1, wherein there are a plurality of AI computing board cards; the AI computing board cards are stacked, and the side of each AI computing board card is fixed to the side wall of the server.
4. The AI calculation server of claim 1, further comprising a processor and a memory mounted on the server motherboard.
5. The AI calculation server of claim 4, further comprising a second heat dissipation device connected to the processor.
6. The AI calculation server of claim 1, wherein the AI computing board card includes an adapter board card and a computing power board card,
the adapter board card comprises an M.2 socket, a bridge chip and a PCIE interface, wherein the bridge chip comprises a first interface and a second interface; the first interface is connected to the M.2 socket, and the second interface is connected to the PCIE interface;
the computing power board card comprises an M.2 plug and an AI chip; the AI chip comprises a data interface connected to the M.2 plug, and the M.2 plug is detachably connected to the M.2 socket;
the bridge chip acquires first data from an external device through the PCIE interface, transmits the first data to the AI chip for calculation, and transmits the calculation result based on the first data back to the external device; alternatively, the bridge chip acquires a plurality of second data from the external device, transmits them in parallel to a plurality of AI chips for calculation, and transmits the calculation result based on the first data back to the external device, wherein the first data is feature data of a preset event and the calculation result is the AI judgment result for the preset event.
7. The AI calculation server of claim 6, wherein the computing power board card further includes a control chip; each computing power board card includes a plurality of AI chips, the plurality of AI chips being connected to the M.2 plug through the control chip.
8. The AI calculation server of claim 7, wherein there are a plurality of computing power board cards, the plurality of computing power board cards being connected in parallel to the bridge chip.
9. The AI calculation server of claim 8, wherein each computing power board card includes a plurality of AI chips, the plurality of AI chips being connected in parallel to the control chip.
10. The AI calculation server of claim 1, wherein the heat dissipation bracket further comprises a first cable hole for passing the PCIE adapter flat cable, and/or a second cable hole for passing the data flat cable.
CN201920852920.4U 2019-06-06 2019-06-06 AI calculation server Active CN210534653U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201920852920.4U CN210534653U (en) 2019-06-06 2019-06-06 AI calculation server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201920852920.4U CN210534653U (en) 2019-06-06 2019-06-06 AI calculation server

Publications (1)

Publication Number Publication Date
CN210534653U (en) 2020-05-15

Family

ID=70594508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201920852920.4U Active CN210534653U (en) 2019-06-06 2019-06-06 AI calculation server

Country Status (1)

Country Link
CN (1) CN210534653U (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134205A (en) * 2019-06-06 2019-08-16 深圳云朵数据科技有限公司 A kind of AI calculation server
CN110134205B (en) * 2019-06-06 2024-03-29 深圳云朵数据科技有限公司 AI calculates server
CN113655860A (en) * 2021-07-30 2021-11-16 中国长城科技集团股份有限公司 Heat dissipation machine case and server


Legal Events

Date Code Title Description
GR01 Patent grant