CN110069111A - A kind of AI calculation server - Google Patents
- Publication number
- CN110069111A (application CN201910492704.8A)
- Authority
- CN
- China
- Prior art keywords
- board
- chip
- data
- area
- radiator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS · G06—COMPUTING; CALCULATING OR COUNTING · G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/182—Enclosures with special features, e.g. for use in industrial environments; grounding or shielding against radio frequency interference [RFI] or electromagnetic interference [EMI]
- G06F1/183—Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
- G06F1/20—Cooling means
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
- G06F2213/0026—PCI express
Abstract
An embodiment of the invention discloses an AI calculation server, comprising: a cabinet divided into a first region, a second region and a third region arranged in sequence; a server mainboard disposed in the first region; AI computing cards arranged parallel to the server mainboard and electrically connected to it through a PCIe adapter ribbon cable; a hard disk array disposed in the third region and electrically connected to the server mainboard through a data ribbon cable; and a first cooling device disposed in the second region. The first cooling device includes a cooling bracket that separates the first region from the third region; the bracket contains a cooling channel connecting the two regions, and a cooling fan is fixed inside the channel. Multiple AI computing cards can thus be stacked horizontally in the limited space of the server, and the cooling channel forms a good airflow path between the densely packed cards inside the cabinet, ensuring that the cabinet always operates within its working temperature range.
Description
Technical field
Embodiments of the present invention relate to the field of AI computing, and in particular to an AI calculation server.
Background technique
With the rapid development of the internet and the information industry, audio, image and video data have grown explosively. Big-data processing is gradually replacing traditional manual data handling, and the application of artificial intelligence (AI) technology has brought another leap in big-data analysis and processing capability.
Deep learning has driven the rapid development of AI applications, leading humanity from the information age into the age of intelligence. Deep learning is essentially a machine learning technique that requires powerful hardware computing capability to complete complex data processing and computation. For data processing and computation at this scale, existing AI solutions use dedicated AI computing cards to execute deep learning operations, but even a single very-high-performance AI computing card falls far short of the required computing capacity.
Prior-art AI calculation servers are all large-scale installations, generally built from a large number of GPUs grouped into a computing array; there is as yet no powerful AI calculation server that fits in a single chassis.
Summary of the invention
The present invention provides an AI calculation server, so that multiple compute boards can be stacked horizontally in the limited space of a server.
An embodiment of the present invention proposes an AI calculation server, comprising:
a cabinet, the cabinet including a first region, a second region and a third region arranged in sequence;
a server mainboard disposed in the first region;
an AI computing card arranged parallel to the server mainboard, the AI computing card being electrically connected to the server mainboard through a PCIe adapter ribbon cable;
a hard disk array disposed in the third region, the hard disk array being electrically connected to the server mainboard through a data ribbon cable; and
a first cooling device disposed in the second region, the first cooling device including a cooling bracket separating the first region from the third region, the cooling bracket including a cooling channel connecting the first region and the third region, the first cooling device further including a cooling fan fixed in the cooling channel.
Further, the AI calculation server also includes a power supply module disposed in the first region; one end of the power supply module is fixed at the air inlet of the cabinet, and the air outlet at the other end of the power supply module faces the first cooling device.
Further, there are a plurality of AI computing cards; the plurality of AI computing cards are stacked, and one side of each AI computing card is fixed to a side wall of the server.
Further, the AI calculation server also includes a processor and a memory mounted on the server mainboard.
Further, the AI calculation server also includes a second cooling device connected to the processor.
Further, the AI computing card includes an adapter board and a compute board.
The adapter board includes an M.2 socket, a bridging chip and a PCIe interface; the bridging chip includes a first interface and a second interface, the first interface is connected to the M.2 socket, and the second interface is connected to the PCIe interface.
The compute board includes an M.2 plug and an AI chip; the AI chip includes a data interface connected to the M.2 plug, and the M.2 plug is detachably connected to the M.2 socket.
The bridging chip obtains first data from an external device through the PCIe interface, transmits it to the AI chip for computation, and then transmits the computation result based on the first data back to the external device. Alternatively, the bridging chip decomposes the first data obtained from the external device into multiple items of second data, transmits the multiple items of second data in parallel to multiple AI chips for computation, and then transmits the computation result based on the first data back to the external device. The first data is the feature data of a preset event, and the computation result is the AI judgment result for that preset event.
Further, the compute board includes a control chip; each compute board includes a plurality of AI chips, and the plurality of AI chips are connected to the M.2 plug through the control chip.
Further, there are a plurality of compute boards, and the plurality of compute boards are connected in parallel to the bridging chip.
Further, each compute board includes a plurality of AI chips, and the plurality of AI chips are connected in parallel to the control chip.
By electrically connecting the AI computing card to the server mainboard through a PCIe adapter ribbon cable and providing a cooling channel connecting the first region and the third region, the present invention solves the problems of wasted server hardware resources, insufficient system computing power and insufficient server cooling. Multiple AI computing cards can be stacked horizontally in the limited space of the server, and the cooling channel forms a good airflow path between the densely packed cards inside the cabinet, ensuring that the cabinet always operates within its working temperature range.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of an AI calculation server according to Embodiment 1 of the present invention.
Fig. 2 is a schematic structural diagram of an AI computing card according to Embodiment 2 of the present invention.
Fig. 3 is a schematic structural diagram of a compute board according to Embodiment 2 of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention rather than the entire structure.
Before the exemplary embodiments are discussed in greater detail, it should be mentioned that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as a sequence, many of the steps can be performed in parallel, concurrently or simultaneously, and the order of the steps can be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawing. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
In addition, the terms "first", "second" and the like may be used here to describe various directions, actions, steps or elements, but these directions, actions, steps or elements are not limited by the terms. The terms serve only to distinguish one direction, action, step or element from another. For example, without departing from the scope of this application, a first speed difference could be called a second speed difference, and similarly a second speed difference could be called a first speed difference; both are speed differences, but they are not the same speed difference. The terms "first", "second" and the like are not to be understood as indicating or implying relative importance, or as implicitly indicating the number of the technical features referred to; a feature qualified by "first" or "second" may thus explicitly or implicitly include one or more of that feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless otherwise specifically defined.
Embodiment one
Fig. 1 is a schematic structural diagram of an AI (Artificial Intelligence) calculation server provided by Embodiment 1 of the present invention. This embodiment is applicable to the situation in which multiple compute boards must be loaded into the limited space inside a server.
The AI calculation server provided by this embodiment includes a cabinet 1, a server mainboard 2, AI computing cards 3, a hard disk array 4 and a first cooling device 5.
The cabinet 1 includes a first region 101, a second region 102 and a third region 103 arranged in sequence.
The server mainboard 2 is disposed in the first region 101.
The AI computing cards 3 are arranged parallel to the server mainboard 2 and are electrically connected to the server mainboard 2 through a PCIe adapter ribbon cable 6.
The hard disk array 4 is disposed in the third region 103 and is electrically connected to the server mainboard 2 through a data ribbon cable.
The first cooling device 5 is disposed in the second region 102 and includes a cooling bracket 501 separating the first region 101 from the third region 103. The cooling bracket 501 includes a cooling channel 5011 connecting the first region 101 and the third region 103, and the first cooling device 5 further includes a cooling fan fixed in the cooling channel 5011.
In this embodiment, the cabinet 1 generally comprises a housing, brackets, and the switches and indicator lights on the panel. The housing is made of steel plate combined with plastic; its high hardness mainly serves to protect the components inside the cabinet 1. The brackets are mainly used to fix the mainboard, the power supply and other components, and divide the cabinet 1 into the first region 101, the second region 102 and the third region 103.
The server mainboard 2 is fixed in the first region 101 of the cabinet 1, and the AI computing cards 3 are also installed in the first region 101, parallel to the server mainboard 2, and electrically connected to the server mainboard 2 through the PCIe adapter ribbon cable 6. The PCIe adapter ribbon cable 6 connects to a PCIe slot of the server mainboard 2, receives the data sent by the host CPU, and forwards the data sent by the CPU of the server mainboard 2 to the AI computing cards 3. The PCIe adapter ribbon cable 6 also returns the computation results of the AI computing cards 3 to the CPU of the server mainboard 2. In the prior art, the PCIe interface of an AI computing card connects to a PCIe slot of the host and receives the data sent by the host CPU; since the computing capability of such a card is fixed, increasing GPU computing power requires attaching external GPUs. In this embodiment, by contrast, the PCIe adapter ribbon cable 6 can provide multiple PCIe slots for connecting AI computing cards 3, so multiple AI computing cards can be used in one server at the same time, which greatly improves the flexibility of using AI computing cards and reduces hardware cost.
The hard disk array 4 is disposed in the third region 103 of the cabinet 1. A hard disk array stores the same data in different places on multiple hard disks (that is, redundantly). By placing data on multiple hard disks and interleaving input/output operations in a balanced way, performance is improved. Using multiple hard disks increases the mean time between failures (MTBF), and storing redundant data increases fault tolerance. A hard disk array can provide functions such as online expansion, dynamic modification of the array level, automatic data recovery, drive roaming and cache buffering, offering a complete solution for performance, data protection, reliability, availability and manageability. The hard disk array 4 is electrically connected to the server mainboard 2 through a data ribbon cable. A ribbon cable reduces the hardware needed for internal connections, such as the solder joints, junction wires, backplane traces and cables common in traditional electronic packaging, and therefore provides higher assembly reliability and quality.
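The two array behaviors described above — storing the same data redundantly for fault tolerance, and interleaving I/O across disks for performance — can be sketched as a toy illustration. This is not the patent's implementation; the function names, chunk size and list-of-lists "disks" are assumptions made only for the example.

```python
# Toy sketch of redundancy (mirroring) and interleaving (striping) in a disk
# array. Each "disk" is just a Python list of stored chunks.

def mirror_write(disks, data):
    """Redundancy: store an identical copy of `data` on every disk."""
    for disk in disks:
        disk.append(data)

def mirror_read(disks):
    """Any surviving copy is sufficient to recover the data."""
    for disk in disks:
        if disk:                      # skip a failed (emptied) disk
            return disk[-1]
    raise IOError("all copies lost")

def stripe_write(disks, data, chunk=4):
    """Interleaving: split `data` into chunks placed round-robin on the disks."""
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % len(disks)].append(data[i:i + chunk])

def stripe_read(disks):
    """Collect the chunks back in round-robin order to rebuild the data."""
    out, idx, d = [], [0] * len(disks), 0
    while any(idx[j] < len(disks[j]) for j in range(len(disks))):
        if idx[d] < len(disks[d]):
            out.append(disks[d][idx[d]])
            idx[d] += 1
        d = (d + 1) % len(disks)
    return b"".join(out)
```

Mirroring trades capacity for fault tolerance (a read survives the loss of all but one disk), while striping trades fault tolerance for throughput; real array levels combine both.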
High-intensity servers run at least two CPUs and mostly use SCSI plus an internal disk array, so the heat generated inside the server is considerable; good heat dissipation is therefore a necessary condition for an excellent server cabinet. Cooling performance shows mainly in three aspects: the number and position of the fans, the reasonableness of the airflow path, and the choice of cabinet material. The first cooling device 5 of this embodiment is disposed in the second region 102 and includes the cooling bracket 501 that separates the cabinet 1 into the first region 101, the second region 102 and the third region 103. Through the cooling channel 5011 connecting the first region 101 and the third region 103, the cooling bracket 501 isolates the hard disks from the mainboard and gives each a separate cooling channel 5011. The cooling fan fixed inside the cooling channel 5011 drives air out through the channel, so the air entering the mainboard and the power supply is no longer hot air; this avoids the degraded cooling caused by heat transfer and interference between sections. Each partition works and is cooled independently, so the cooling of each section is optimized, the cooling of the hard disks, the mainboard and the power supply is improved, and crashes from overheating are avoided.
In the embodiment of the present invention, the AI computing cards 3 are electrically connected to the server mainboard 2 through the PCIe adapter ribbon cable 6, and the cooling channel 5011 connecting the first region 101 and the third region 103 is provided. This solves the problems of wasted server hardware resources, insufficient system computing power and insufficient server cooling; multiple AI computing cards 3 can be stacked horizontally in the limited space of the server, and the cooling channel 5011 forms a good airflow path between the densely packed cards in the cabinet, ensuring that the cabinet 1 always operates within its working temperature range.
In an alternative embodiment, the AI calculation server further includes a power supply module 7 disposed in the first region 101; one end of the power supply module 7 is fixed at the air inlet of the cabinet 1, and the air outlet at the other end of the power supply module 7 faces the first cooling device 5.
It further comprises a processor 8 and a memory 9 mounted on the server mainboard 2.
In this embodiment, the processor 8 can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and so on. A general-purpose processor can be a microprocessor, or the processor can be any conventional processor. The processor 8 is the control center of the server, using various interfaces and lines to connect the various parts of the whole computer device.
The memory 9 can be used to store server programs and/or modules. The processor 8 implements the various functions of the server device by running or executing the server programs and/or modules stored in the memory 9 and calling the data stored in the memory 9. The memory 9 can mainly include a program storage area and a data storage area: the program storage area can store the operating system and the application programs required by at least one function, while the data storage area can store data created according to the use of the terminal. In addition, the memory 9 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, smart media card (SMC), secure digital (SD) card, flash card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
It further comprises a second cooling device connected to the processor 8.
In this embodiment, the power supply module 7 is disposed in the first region 101, and the air inlet of the power supply module 7 is located at the center of the lower part of the rear side wall of the server cabinet. Being centered, the incoming air also arrives at the central position of the power supply; the air outlet at the other end faces the first cooling device 5, and through the cooling channel 5011 of the first cooling device 5 and the airflow of the internal cooling fan, an overall balanced cooling effect is achieved and the cooling effect is further improved.
The second cooling device is also connected to the processor 8 and can cool the processor 8 individually, providing highly efficient and effective cooling and avoiding crashes.
Embodiment two
Embodiment 2 further refines part of the structure on the basis of Embodiment 1, as follows:
As shown in Fig. 2, there are a plurality of AI computing cards 3; the plurality of AI computing cards 3 are stacked, and one side of each AI computing card 3 is fixed to a side wall of the server.
The AI computing card 3 includes an adapter board 301 and compute boards 302. The adapter board 301 includes an M.2 socket 3011, a bridging chip 3012 and a PCIe interface 3013; the bridging chip 3012 includes a first interface and a second interface, the first interface is connected to the M.2 socket 3011, and the second interface is connected to the PCIe interface 3013.
There are a plurality of compute boards 302, and the plurality of compute boards 302 are connected in parallel to the bridging chip 3012.
In this embodiment, the M.2 interface is a new interface specification introduced by Intel to replace mSATA. M.2 interfaces come in two types, supporting the SATA channel and the NVMe channel respectively. SATA 3.0 offers only 6 Gb/s of bandwidth, whereas the latter runs over the PCIe channel and can provide up to 32 Gb/s. As a new-generation storage specification running over ample PCIe channel bandwidth, NVMe has enormous headroom for improvement and faster transfer speeds. This embodiment therefore connects the PCIe interface 3013 to the M.2 socket 3011 through the bridging chip 3012, improving the data transfer rate.
The PCIe interface 3013 connected to the second interface of the bridging chip 3012 receives the data sent by the host CPU. The data sent by the CPU passes through the PCIe interface 3013 connected to the second interface of the bridging chip 3012 and the M.2 socket 3011 connected to the first interface of the bridging chip 3012, and is sent to the compute boards 302 for data processing; the processing results returned by the compute boards 302 travel back the same way. One side of the AI computing card 3 is fixed to a side wall of the server, and the cards are stacked.
Further, there are multiple compute boards 302, connected in parallel to the bridging chip 3012. When multiple compute boards 302 are installed, the AI calculation server can form a resource pool for artificial intelligence computation. It should be noted that the compute boards 302 are mounted in the AI calculation server through the pluggable M.2 interface, so the AI calculation server can adjust the number of installed compute boards 302 as needed. Installing the compute boards 302 through the pluggable M.2 interface makes it convenient to adjust the scale of the resource pool of the AI computing card 3.
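The resource-pool behavior described above — boards plugged or unplugged at runtime, with the pool's capacity and task placement adjusting accordingly — can be sketched as follows. The class, slot names and TOPS ratings are purely illustrative assumptions, not part of the patent.

```python
# Minimal sketch of a pluggable compute-board resource pool: boards register
# into M.2-style slots, the pool's total capacity tracks what is installed,
# and tasks are spread round-robin over the installed boards.

class ComputePool:
    def __init__(self):
        self.boards = {}              # slot id -> TOPS rating of the board

    def plug(self, slot, tops):
        """Install a compute board into a slot, growing the pool."""
        self.boards[slot] = tops

    def unplug(self, slot):
        """Remove a board; the pool shrinks accordingly."""
        self.boards.pop(slot, None)

    def total_tops(self):
        """Aggregate computing power currently available."""
        return sum(self.boards.values())

    def dispatch(self, tasks):
        """Assign tasks round-robin over installed boards (task -> slot)."""
        slots = sorted(self.boards)
        if not slots:
            raise RuntimeError("no compute boards installed")
        return {t: slots[i % len(slots)] for i, t in enumerate(tasks)}
```

Because installation is hot-pluggable, scaling the pool is just a `plug`/`unplug` call rather than a chassis redesign, which mirrors the flexibility the embodiment claims for the M.2 mounting.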
Further, to improve the computing power available to the mainboard of one AI calculation server, there can be multiple AI computing cards 3; the multiple computing cards are stacked, and one side of each computing card is fixed to the side wall of the server.
On the basis of the above technical solution, Embodiment 2 further improves the function of the AI computing card 3. Since the CPU of a server is generally equipped with a standard PCIe interface, the adapter board 301 converts the M.2 interface into a PCIe interface; the AI calculation server installs the compute boards 302 through the pluggable M.2 interface, which makes it convenient to adjust the scale of the resource pool of the AI computing card 3.
Embodiment three
Embodiment 3 further refines the AI computing card 3 on the basis of Embodiment 2, as follows:
As shown in Fig. 3, the compute board 302 includes an M.2 plug 3021 and AI chips 3022; each AI chip 3022 includes a data interface connected to the M.2 plug 3021, and the M.2 plug 3021 is detachably connected to the M.2 socket 3011.
The bridging chip 3012 obtains first data from an external device through the PCIe interface 3013 and transmits it to the AI chip 3022 for computation, then transmits the computation result based on the first data back to the external device. Alternatively, the bridging chip 3012 decomposes the first data obtained from the external device into multiple items of second data, transmits them in parallel to multiple AI chips 3022 for computation, and then transmits the computation result based on the first data back to the external device. The first data is the feature data of a preset event, and the computation result is the AI judgment result for that preset event.
The compute board 302 further includes a control chip 3023; each compute board 302 includes a plurality of AI chips 3022, and the plurality of AI chips 3022 are connected to the M.2 plug 3021 through the control chip 3023.
Each compute board 302 includes a plurality of AI chips 3022, and the plurality of AI chips 3022 are connected in parallel to the control chip 3023.
In this embodiment, the compute board 302 includes multiple AI chips 3022. Because a large amount of data is exchanged between the AI chips 3022 and the control chip 3023, a special data interface is used; this embodiment uses the FIP interface. The multiple AI chips 3022 are connected to the M.2 plug 3021 through the control chip 3023, and are connected in parallel to the control chip 3023 through the FIP interface.
The PCIe interface 3013 connected to the second interface of the bridging chip 3012 receives the data sent by the host CPU. The data sent by the CPU passes through the M.2 socket 3011 connected to the first interface of the bridging chip 3012 and reaches, via their data interfaces, the AI chips 3022 on the inserted M.2 plug 3021; the AI chips 3022 process the data, and their computation results are returned to the CPU along the original data transmission route.
The bridging chip 3012 obtains the first data from the external device through the PCIe interface 3013 and transmits it through the M.2 socket 3011 to the AI chips 3022 for computation. The first data is the feature data of a preset event. The control chip 3023 takes out an as-yet-unprocessed data column and performs a feature-column check on it to identify the labels of the feature data types it contains; the AI chip 3022 finds the corresponding feature-processing algorithm in the feature-engineering knowledge base and processes the column with that algorithm. Optimizing the AI computation can mean reducing the number of calls and the amount of data computed. In this embodiment, the control chip 3023 can also split one complete item of first data into multiple items of second data. There are data dependencies between the items of second data, a save point can be set for the result of each item, and the processing of each item can be restarted individually. The items of second data may reside in the same compute node and are processed by the AI chips 3022, with the computation distributed and parallel as far as possible to improve the concurrency of execution. After processing is complete, the control chip 3023 merges the AI judgment results of the multiple items of second data into the AI judgment result of the preset event.
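The split/compute/merge flow above can be sketched in software, with threads standing in for AI chips. This is a hedged illustration of the data flow only — the chunking rule, the stand-in per-chip computation (a sum) and all names are assumptions for the example, not the patent's hardware behavior.

```python
# Sketch of the control-chip flow: decompose the "first data" into "second
# data" chunks, process the chunks in parallel (threads stand in for AI
# chips), keep a per-chunk result "save point" so a failed chunk could be
# redone alone, then merge the partial results into one final result.
from concurrent.futures import ThreadPoolExecutor

def split_first_data(first_data, n_chips):
    """Decompose the first data into one 'second data' chunk per AI chip."""
    step = -(-len(first_data) // n_chips)      # ceiling division
    return [first_data[i:i + step] for i in range(0, len(first_data), step)]

def ai_chip_compute(chunk):
    """Stand-in for one AI chip's computation on its chunk."""
    return sum(chunk)

def compute_first_result(first_data, n_chips=4):
    chunks = split_first_data(first_data, n_chips)
    saved = {}                                 # per-chunk result save points
    with ThreadPoolExecutor(max_workers=n_chips) as pool:
        futures = {pool.submit(ai_chip_compute, c): i
                   for i, c in enumerate(chunks)}
        for fut, i in futures.items():
            saved[i] = fut.result()            # a failed chunk could be resubmitted alone
    return sum(saved[i] for i in sorted(saved))  # merge into the final result
```

Because each chunk's result is saved independently, only the failed chunk needs reprocessing on error, and the merge step is the software analogue of the control chip combining the second-data judgments into the preset event's judgment.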
On the basis of the above technical solution, Embodiment 3 further improves the function of the compute board 302. A large amount of data is exchanged between the AI chips 3022 and the control chip 3023, which requires a special data interface; the control chip 3023 converts the FIP interface into the M.2 interface, so that the computing power of the compute board 302 becomes adjustable.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the invention is not limited to the specific embodiments described here; various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the invention. Therefore, although the invention has been described in some detail through the above embodiments, it is not limited to them; without departing from the inventive concept, it may include more other equivalent embodiments, and its scope is determined by the scope of the appended claims.
Claims (9)
1. An AI calculation server, comprising:
a cabinet, the cabinet comprising a first area, a second area and a third area arranged in sequence;
a server mainboard arranged in the first area;
an AI calculation board arranged parallel to the server mainboard, the AI calculation board being electrically connected to the server mainboard through a PCIE adapter ribbon cable;
a hard disk array arranged in the third area, the hard disk array being electrically connected to the server mainboard through a data ribbon cable; and
a first heat dissipation device arranged in the second area, the first heat dissipation device comprising a radiator bracket separating the first area and the third area, the radiator bracket comprising a heat dissipation channel communicating the first area with the third area, and the first heat dissipation device further comprising a radiator fan fixed in the heat dissipation channel.
2. The AI calculation server according to claim 1, further comprising a power module arranged in the first area, one end of the power module being fixed at the air inlet of the cabinet, and the air outlet at the other end of the power module facing the first heat dissipation device.
3. The AI calculation server according to claim 1, wherein there are multiple AI calculation boards, the multiple AI calculation boards are stacked, and a side of each AI calculation board is fixed to a side wall of the server.
4. The AI calculation server according to claim 1, further comprising a processor and a memory mounted on the server mainboard.
5. The AI calculation server according to claim 1, further comprising a second heat dissipation device connected with the processor.
6. The AI calculation server according to claim 1, wherein the AI calculation board comprises an adapter board and a calculation power board;
the adapter board comprises an M.2 socket, a bridging chip and a PCIE interface; the bridging chip comprises a first interface and a second interface; the first interface is connected to the M.2 socket, and the second interface is connected to the PCIE interface;
the calculation power board comprises an M.2 plug and an AI chip; the AI chip comprises a data interface connected with the M.2 plug, and the M.2 plug is detachably connected to the M.2 socket;
wherein the bridging chip obtains first data from an external device through the PCIE interface and transmits them to the AI chip for calculation, then transmits the calculation result based on the first data to the external device; or the bridging chip decomposes the first data obtained from the external device into multiple second data, transmits the multiple second data in parallel to multiple AI chips for calculation, and then transmits the calculation result based on the first data to the external device; the first data are characteristic data of a preset event, and the calculation result is the AI judging result of the preset event.
7. The AI calculation server according to claim 1, wherein the calculation power board further comprises a control chip; each calculation power board comprises multiple AI chips, and the multiple AI chips are connected to the M.2 plug through the control chip.
8. The AI calculation server according to claim 1, wherein there are multiple calculation power boards, and the multiple calculation power boards are connected in parallel to the bridging chip.
9. The AI calculation server according to claim 1, wherein each calculation power board comprises multiple AI chips, and the multiple AI chips are connected in parallel to the control chip.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910492704.8A CN110069111A (en) | 2019-06-06 | 2019-06-06 | A kind of AI calculation server |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110069111A | 2019-07-30 |
Family
ID=67372508
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910492704.8A Pending CN110069111A (en) | 2019-06-06 | 2019-06-06 | A kind of AI calculation server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110069111A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853059A (en) * | 2010-05-12 | 2010-10-06 | 姚学民 | Cloud computing server system for heat dissipation, energy conservation and safe data storage |
CN105183103A (en) * | 2015-08-31 | 2015-12-23 | 浪潮(北京)电子信息产业有限公司 | Server chassis |
CN206594594U (en) * | 2016-11-18 | 2017-10-27 | 国源君安(北京)科技有限公司 | A kind of many board concurrent operation equipment |
CN108304341A (en) * | 2018-03-13 | 2018-07-20 | 算丰科技(北京)有限公司 | AI chip high speeds transmission architecture, AI operations board and server |
CN108388532A (en) * | 2018-03-13 | 2018-08-10 | 算丰科技(北京)有限公司 | The AI operations that configurable hardware calculates power accelerate board and its processing method, server |
CN108646890A (en) * | 2018-05-31 | 2018-10-12 | 北京比特大陆科技有限公司 | A kind of radiator, computing device and dig mine machine |
US20180357047A1 (en) * | 2016-01-27 | 2018-12-13 | Bonsai AI, Inc. | Interface for working with simulations on premises |
US20190138908A1 (en) * | 2018-12-28 | 2019-05-09 | Francesc Guim Bernat | Artificial intelligence inference architecture with hardware acceleration |
CN209911891U (en) * | 2019-06-06 | 2020-01-07 | 深圳云朵数据科技有限公司 | AI calculation server |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110414457A (en) * | 2019-08-01 | 2019-11-05 | 深圳云朵数据技术有限公司 | A kind of calculation Force system for video monitoring |
CN111104459A (en) * | 2019-08-22 | 2020-05-05 | 华为技术有限公司 | Storage device, distributed storage system, and data processing method |
WO2021031619A1 (en) * | 2019-08-22 | 2021-02-25 | 华为技术有限公司 | Storage device, distributed storage system, and data processing method |
CN115269717A (en) * | 2019-08-22 | 2022-11-01 | 华为技术有限公司 | Storage device, distributed storage system, and data processing method |
CN115422284A (en) * | 2019-08-22 | 2022-12-02 | 华为技术有限公司 | Storage device, distributed storage system, and data processing method |
CN115422284B (en) * | 2019-08-22 | 2023-11-10 | 华为技术有限公司 | Storage device, distributed storage system, and data processing method |
CN110609595A (en) * | 2019-10-18 | 2019-12-24 | 引力互联国际有限公司 | Artificial intelligence device |
CN113900509A (en) * | 2021-09-03 | 2022-01-07 | 重庆科创职业学院 | Artificial intelligence computing device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110069111A (en) | A kind of AI calculation server | |
CN110134205A (en) | A kind of AI calculation server | |
US10873521B2 (en) | Methods and apparatus for SDI support for fast startup | |
US8105882B2 (en) | Processing a memory request in a chip multiprocessor having a stacked arrangement | |
US20180027376A1 (en) | Configurable Computing Resource Physical Location Determination | |
US20080089034A1 (en) | Heat dissipation apparatus utilizing empty component slot | |
KR20140101338A (en) | System and method for flexible storage and networking provisioning in large scalable processor installations | |
US10856441B1 (en) | System and method for bi-side heating vapor chamber structure in an information handling system | |
US6874014B2 (en) | Chip multiprocessor with multiple operating systems | |
CN110134206A (en) | A kind of calculating board | |
CN110414457A (en) | A kind of calculation Force system for video monitoring | |
CN210534653U (en) | AI calculation server | |
CN102045989A (en) | Heat pipe radiating module | |
CN210428286U (en) | Modular edge server structure | |
CN114039919A (en) | Traffic scheduling method, medium, device and computing equipment | |
CN209911891U (en) | AI calculation server | |
US20160026589A1 (en) | Adaptive Circuit Board Assembly and Flexible PCI Express Bus | |
US20070148019A1 (en) | Method and device for connecting several types of fans | |
US20180081729A1 (en) | Methods and modules relating to allocation of host machines | |
CN109284108A (en) | Date storage method, device, electronic equipment and storage medium | |
CN103677152A (en) | Storage server and rack system thereof | |
US20180285311A1 (en) | Server having ip miniature computing unit | |
CN206460456U (en) | A kind of hard-disk interface expanding unit and hard-disk system | |
CN213957979U (en) | Isolated switching matrix reinforcing server | |
Watts et al. | Implementing an IBM system x iDataPlex solution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||