CN108304341A - AI chip high-speed transmission architecture, AI computing board, and server - Google Patents

AI chip high-speed transmission architecture, AI computing board, and server Download PDF

Info

Publication number
CN108304341A
CN108304341A CN201810205669.2A
Authority
CN
China
Prior art keywords
chip
high-speed
transmission architecture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810205669.2A
Other languages
Chinese (zh)
Inventor
梁思达
范靖
李超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing suneng Technology Co.,Ltd.
Original Assignee
Feng Feng Technology (beijing) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feng Feng Technology (beijing) Co Ltd filed Critical Feng Feng Technology (beijing) Co Ltd
Priority to CN201810205669.2A priority Critical patent/CN108304341A/en
Publication of CN108304341A publication Critical patent/CN108304341A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4282 Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/382 Information transfer, e.g. on bus using universal interface adapter
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/40 Bus structure
    • G06F 13/4004 Coupling between buses
    • G06F 13/4027 Coupling between buses using bus bridges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F 13/4221 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus

Abstract

Embodiments of the invention disclose an AI chip high-speed transmission architecture, an AI computing board, and a server. The AI chip high-speed transmission architecture uses board-to-board high-speed connectors and high-speed differential serial (SERDES) links to realize bidirectional high-speed serial interconnection of multiple AI chips. Embodiments of the invention make the hardware compute power of the AI computing board flexibly configurable, and greatly improve the board's data transmission reliability and computational performance.

Description

AI chip high-speed transmission architecture, AI computing board, and server
Technical field
The present invention relates to chip data transmission technology, and in particular to an AI chip high-speed transmission architecture, an AI computing board, and a server.
Background technology
With the rapid development of the Internet and the information industry, audio, image, and video data have grown explosively, and big data processing is gradually replacing traditional manual data processing. The analysis and processing of big data can bring unprecedented breakthroughs in many fields, such as security management, healthcare, and discovery and innovation. The demands of big data analysis and processing have driven the rapid development of artificial intelligence (AI) chips, and the application of AI technology has in turn produced another leap in big data analysis and processing capability.
Deep learning technology has driven the rapid development of AI applications, leading humanity from the information age into the intelligent age. Deep learning is in essence a machine learning technique; it requires powerful hardware computing capability to complete the complex processing and computation of large-scale data. For data processing and computation at this scale, existing AI solutions use dedicated AI chips to execute deep learning operations, but even a single very-high-performance AI chip falls far short of the required computing capacity. To meet the processing demands of large-scale data, engineers have begun to build AI computing boards from computing clusters of multiple AI chips, and to assemble them into deep learning server systems, greatly improving deep learning processing capability.
However, for an AI computing board built from multiple interconnected AI chips, the ultra-high data throughput poses a great challenge to the data transfer bandwidth of the AI chips. How to increase the chip-to-chip transmission bandwidth while guaranteeing accurate and reliable data transmission has become the key problem in realizing AI chip interconnection and communication.
Summary of the invention
To solve the above problems, according to one aspect of the present invention, an AI chip high-speed transmission architecture is proposed. The AI chip high-speed transmission architecture comprises multiple AI chips and multiple high-speed connectors; each AI chip includes two SERDES interfaces, and every two adjacent AI chips are interconnected by coupling their SERDES interfaces to a high-speed connector, forming a bidirectional linked-list (daisy-chain) interconnection architecture. Of the two SERDES interfaces of each AI chip, one is used for data communication with the upstream AI chip and the other for data communication with the downstream AI chip.
In some embodiments, the high-speed connector is a pluggable high-speed signal connector.
In some embodiments, the high-speed connector includes an EdgeLine CoEdge connector.
In some embodiments, the AI chip includes an ASIC processing chip.
In some embodiments, the AI chip includes a tensor processing unit (TPU).
In some embodiments, the SERDES interface includes uplink and downlink transmission channels.
In some embodiments, the uplink and downlink transmission channels each comprise 20 transmission lanes.
In some embodiments, the per-lane transmission rate of the transmission channels is 10 Gbps.
According to another aspect of the present invention, an AI computing board is proposed. The AI computing board includes a PCIE interface, an interface bridge circuit, and the AI chip high-speed transmission architecture of any of the foregoing embodiments. The PCIE interface is used to connect a host PCIE slot; one side of the interface bridge circuit couples to the PCIE interface, and the other side connects through a high-speed connector to the AI chip high-speed transmission architecture, converting the PCIE interface into the SERDES interface used by the AI chip high-speed transmission architecture. The board further includes multiple power management modules that respectively power the multiple AI chips in the AI chip high-speed transmission architecture.
In some embodiments, the PCIE interface is used to pass data to be processed, sent by the host CPU, to the interface bridge circuit.
In some embodiments, the interface bridge circuit is used to send the data to be processed from the host, via the SERDES interface, to the multiple AI chips in the AI chip high-speed transmission architecture for computation.
In some embodiments, the interface bridge circuit is further used to receive the computation result data returned by the multiple AI chips and transfer it to the host CPU via the PCIE interface.
In some embodiments, the interface bridge circuit is further used to control the power-up sequencing of the multiple power management modules.
According to another aspect of the present invention, a server is also proposed, the server comprising:
a host including a PCIE slot; and
the AI computing board of any of the foregoing embodiments, connected to the PCIE slot of the host.
Embodiments of the present invention use board-to-board high-speed connectors and high-speed differential serial SERDES links to realize a high-throughput serial interconnection architecture for multiple AI chips. On the basis of this architecture, the hardware compute power of the AI computing board becomes flexibly configurable, greatly improving the board's data transmission reliability and computational performance.
Description of the drawings
Fig. 1 is a structural schematic diagram of an AI chip high-speed transmission architecture according to an embodiment of the invention;
Fig. 2 is a communication link schematic diagram of an AI chip high-speed transmission architecture according to an embodiment of the invention;
Fig. 3 is a structural schematic diagram of an AI computing board realized with an AI chip high-speed transmission architecture according to an embodiment of the invention;
Fig. 4 is a structural schematic diagram of a server according to an embodiment of the invention.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is a structural schematic diagram of an AI chip high-speed transmission architecture according to an embodiment of the invention. As shown in Fig. 1, the AI chip high-speed transmission architecture includes multiple AI chips 1 and multiple high-speed connectors 2. Each AI chip has two SERDES interfaces; the SERDES interfaces of two adjacent AI chips are interconnected through a high-speed connector 2, forming a bidirectional linked-list interconnection architecture. Of the two SERDES interfaces, one is used for data communication with the upstream AI chip and the other for data communication with the downstream AI chip.
In some embodiments, the high-speed connector is pluggable; for example, the EdgeLine CoEdge connector from Molex may be used.
In some embodiments, the AI chip, which executes AI computation, may be realized with an ASIC processing chip; the AI computation includes deep learning computation.
In some embodiments, the AI chip may be realized with Google's tensor processing unit (TPU).
In embodiments of the present invention, on the one hand, because large volumes of data are exchanged between each AI chip and the other AI chips, the high-speed transmission IP between AI chips uses a serial SERDES architecture, with signals specified according to the IEEE 10GBASE-KR backplane transmission standard. SERDES is a time-division-multiplexed (TDM), point-to-point (P2P) serial communication technology: at the transmitter, multiple low-speed parallel signals are converted into a high-speed differential serial signal and transmitted through the transmission medium; at the receiver, the high-speed differential serial signal is converted back into low-speed parallel signals. This point-to-point serial communication technology makes full use of the channel capacity of the transmission medium, reduces the number of transmission channels and device pins required, increases signal transmission speed, and substantially lowers communication cost.
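As a rough software illustration of the serialize/deserialize principle just described (not the patented circuit), the sketch below interleaves bits from low-speed parallel lanes into a single serial stream and recovers them at the other end; the 20-lane width matches the per-direction lane count given later in the description, and all names are illustrative:

```python
def serialize(parallel_words, width=20):
    """Interleave bits from `width` parallel lanes into one serial stream (TDM)."""
    serial = []
    for word in parallel_words:          # one `width`-bit sample per cycle
        for lane in range(width):        # time-division multiplex the lane bits
            serial.append((word >> lane) & 1)
    return serial

def deserialize(serial, width=20):
    """Regroup the serial bit stream back into `width`-bit parallel words."""
    words = []
    for i in range(0, len(serial), width):
        bits = serial[i:i + width]
        words.append(sum(bit << lane for lane, bit in enumerate(bits)))
    return words

data = [0b1010, 0b0111, 0b11111]
assert deserialize(serialize(data)) == data   # lossless round trip
```

The round trip being lossless is what lets a SERDES link trade pin count for lane rate without changing the data seen by the chip logic.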
On the other hand, adjacent AI chips are interconnected at high speed through pluggable connectors: the board-to-board high-speed connectors form an interconnection pattern similar to a high-speed backplane, and AI chips can be plugged in and removed without soldering. This configuration allows each AI chip to share data over the high-speed serial links while processing data at high speed, and through the daisy-chain pattern an (in theory) unlimited number of AI chips can be interconnected, meeting customized compute-power demands and realizing flexible configuration of the AI computing board's hardware compute power.
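The daisy-chain (bidirectional linked-list) topology described above can be modeled abstractly; the class and function names below are illustrative assumptions, not taken from the patent:

```python
class AIChip:
    """A chip node in the bidirectional daisy chain: one SERDES port faces
    the upstream neighbor, the other faces the downstream neighbor."""
    def __init__(self, chip_id):
        self.chip_id = chip_id
        self.upstream = None     # previous chip in the chain
        self.downstream = None   # next chip in the chain

def build_chain(n):
    """Link n chips pairwise, as the high-speed connectors do on the board."""
    chips = [AIChip(i) for i in range(n)]
    for prev, nxt in zip(chips, chips[1:]):
        prev.downstream = nxt
        nxt.upstream = prev
    return chips

def route(chips, src, dst):
    """Hop-by-hop path a data item takes along the chain from src to dst."""
    node, path = chips[src], [src]
    while node.chip_id != dst:
        node = node.downstream if dst > node.chip_id else node.upstream
        path.append(node.chip_id)
    return path

chips = build_chain(4)            # compute power scales by adding chips
assert route(chips, 0, 3) == [0, 1, 2, 3]
assert route(chips, 3, 1) == [3, 2, 1]
```

Note the structural point the patent relies on: extending the chain only requires plugging in one more chip and connector pair; no existing link changes.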
Fig. 2 is a communication link schematic diagram of an AI chip high-speed transmission architecture according to an embodiment of the invention. As shown in Fig. 2, each of the two SERDES interfaces of an AI chip comprises a 40-lane high-speed differential serial transmission link at 10 Gbps per lane. The link includes uplink and downlink transmission channels, arranged symmetrically, i.e. the uplink and the downlink each comprise 20 lanes.
The information throughput of the transmission channel is 40 × 10 Gbps = 400 Gbps, with 200 Gbps in each of the uplink and downlink directions, which meets the real-time transmission demands of ultra-high-speed computation data.
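The throughput figures can be checked with a few lines of arithmetic, using the lane counts and per-lane rate stated in the description:

```python
LANES_PER_DIRECTION = 20   # uplink and downlink are symmetric
LANE_RATE_GBPS = 10        # per-lane line rate

per_direction_gbps = LANES_PER_DIRECTION * LANE_RATE_GBPS   # one direction
total_gbps = 2 * per_direction_gbps                         # both directions

assert per_direction_gbps == 200
assert total_gbps == 400
```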
In embodiments of the present invention, because the AI chips are interconnected through board-level high-speed connectors, the lengths of the 10 Gbps high-speed differential traces all exceed 10 inches. Reflections caused by the impedance discontinuities of vias, connectors, and pads on the board, together with noise from crosstalk between signals, all affect the reliability of long-distance transmission of high-density, high-rate signals. Embodiments of the present invention use extensive simulation and measurement to assess signal quality and ensure the reliability of this high-speed interconnection pattern.
Fig. 3 is a structural schematic diagram of an AI computing board 10 realized with an AI chip high-speed transmission architecture according to an embodiment of the invention. As shown in Fig. 3, the AI computing board 10 includes multiple AI chips 1 and multiple high-speed connectors 2. Each AI chip executes AI computation and has two SERDES interfaces; the SERDES interfaces of adjacent AI chips are interconnected through high-speed connectors 2, forming a bidirectional linked-list interconnection architecture. Of the two SERDES interfaces, one is used for data communication with the upstream AI chip and the other for data communication with the downstream AI chip.
The AI computing board 10 further includes a PCIE interface 3, an interface bridge circuit 4, and multiple power management modules 5 that power the multiple AI chips respectively. The PCIE interface 3 connects to the PCIE slot of the host, receives the data sent by the host CPU, and passes that data to the interface bridge circuit 4. The PCIE interface 3 is also used to return the computation result data of the multiple AI chips to the host CPU.
The interface bridge circuit 4 connects to the PCIE interface and, through a high-speed connector, to the first AI chip in the downlink direction. It converts the PCIE interface into the SERDES interface used by the AI chips, and sends the data to be processed from the host, via the SERDES interface, to the multiple AI chips for computation. The interface bridge circuit 4 also receives the computation result data returned by the AI chips and transfers it to the host CPU via the PCIE interface 3.
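A minimal software analogy of the bridge's data path follows. The round-robin dispatch policy, class names, and the dummy per-chip operation are illustrative assumptions; the patent does not specify how work is scheduled across the chips:

```python
class Chip:
    """Stand-in for one AI chip on the SERDES chain."""
    def __init__(self, cid):
        self.cid = cid

    def compute(self, data):
        return (self.cid, data * 2)   # dummy operation in place of real AI work

class InterfaceBridge:
    """Sketch of the interface bridge circuit: the host side speaks PCIE,
    the chip side speaks SERDES toward the first chip of the chain."""
    def __init__(self, chips):
        self.chips = chips

    def dispatch(self, workloads):
        """Fan PCIE-side workloads out to chips (round-robin, an assumption)
        and collect the results that travel back up the chain."""
        results = []
        for i, data in enumerate(workloads):
            chip = self.chips[i % len(self.chips)]
            results.append(chip.compute(data))
        return results                 # returned to the host CPU over PCIE

bridge = InterfaceBridge([Chip(0), Chip(1)])
assert bridge.dispatch([1, 2, 3]) == [(0, 2), (1, 4), (0, 6)]
```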
In some embodiments, the interface bridge circuit 4 also provides board control functions, such as controlling the power-up sequencing of the power management modules and the control and scheduling of the AI chips.
The power management modules 5 power the multiple AI chips independently, realizing the power management function.
In some embodiments, the power management modules 5 are connected to the interface bridge circuit 4 through a low-frequency serial bus, and the interface bridge circuit 4 is further used to control the power-up sequencing of the power management modules so that the corresponding AI chips are powered up in order.
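The bridge's power-up sequencing role can be sketched as follows. Modeling each power management module as a callable, and the hold-off delay parameter, are illustrative assumptions for this sketch only:

```python
import time

def power_up_sequence(modules, delay_s=0.0):
    """Enable each chip's power management module in order, as the bridge
    would over the low-frequency serial bus, staggering inrush current.
    `modules` is a list of callables that each switch one module on."""
    powered = []
    for index, enable in enumerate(modules):
        enable()                 # e.g. a serial-bus write in real hardware
        powered.append(index)
        time.sleep(delay_s)     # hold-off before the next rail comes up
    return powered

log = []
modules = [lambda i=i: log.append(i) for i in range(3)]
assert power_up_sequence(modules) == [0, 1, 2]
assert log == [0, 1, 2]       # modules were enabled strictly in order
```

Sequencing matters here because enabling every chip's rail at once would stack the inrush currents of all chips on the board.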
In embodiments of the present invention, on the one hand, because large volumes of data are exchanged between each AI chip and the other AI chips, the high-speed transmission IP between AI chips uses a serial SERDES architecture, with signals specified according to the IEEE 10GBASE-KR backplane transmission standard.
On the other hand, adjacent AI chips are interconnected at high speed through pluggable connectors, and the board-to-board high-speed connectors form an interconnection pattern similar to a high-speed backplane. This configuration allows each AI chip to share data over the high-speed serial links while processing data at high speed, and through the daisy-chain pattern an (in theory) unlimited number of AI chips can be interconnected, meeting customized compute-power demands and realizing flexible configuration of the AI computing board's hardware compute power.
Further, the CPU of a host (such as a PC or server) is generally equipped with a standard PCIE interface but generally not with a SERDES interface. To adapt to the SERDES interfaces of the AI chips, embodiments of the present invention configure an interface bridge circuit on the AI computing board to convert the PCIE interface into SERDES interfaces. This both guarantees data exchange between the AI chips and hosts with existing standard interfaces, and realizes the unlimited-interconnection high-speed data transmission architecture between AI chips, so that the number of AI chips carried on an AI computing board of embodiments of the invention is flexibly configurable.
Fig. 4 is a structural schematic diagram of a server 100 according to an embodiment of the invention. As shown in Fig. 4, the server 100 of an embodiment of the invention includes:
a host 20 including a PCIE slot;
and the AI computing board 10 of the foregoing embodiments, connected to the PCIE slot of the host.
Through the customized design of the AI chip high-speed transmission architecture, embodiments of the present invention turn the number of chips, fixed on a traditional AI computing board, into a configurable quantity, so that the hardware compute power of the AI computing board is flexibly configurable, greatly improving the board's data transmission reliability and computational performance.
The specific embodiments described above further explain in detail the objectives, technical solutions, and advantageous effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (14)

1. An AI chip high-speed transmission architecture, characterized in that the AI chip high-speed transmission architecture comprises multiple AI chips and multiple high-speed connectors; each AI chip includes two SERDES interfaces, and every two adjacent AI chips are interconnected by coupling their SERDES interfaces to a high-speed connector, forming a bidirectional linked-list interconnection architecture; of the two SERDES interfaces of each AI chip, one is used for data communication with the upstream AI chip and the other for data communication with the downstream AI chip.
2. The AI chip high-speed transmission architecture of claim 1, characterized in that the high-speed connector is a pluggable high-speed signal connector.
3. The AI chip high-speed transmission architecture of claim 2, characterized in that the high-speed connector comprises an EdgeLine CoEdge connector.
4. The AI chip high-speed transmission architecture of claim 1, characterized in that the AI chip comprises an ASIC processing chip.
5. The AI chip high-speed transmission architecture of claim 4, characterized in that the AI chip comprises a tensor processing unit (TPU).
6. The AI chip high-speed transmission architecture of claim 1, characterized in that the SERDES interface comprises uplink and downlink transmission channels.
7. The AI chip high-speed transmission architecture of claim 6, characterized in that the uplink and downlink transmission channels each comprise 20 transmission lanes.
8. The AI chip high-speed transmission architecture of claim 7, characterized in that the per-lane transmission rate of the transmission channels is 10 Gbps.
9. An AI computing board, characterized in that the AI computing board comprises a PCIE interface, an interface bridge circuit, and the AI chip high-speed transmission architecture of any one of claims 1-8; the PCIE interface is used to connect a host PCIE slot; one side of the interface bridge circuit couples to the PCIE interface, and the other side connects through a high-speed connector to the AI chip high-speed transmission architecture, for converting the PCIE interface into the SERDES interface used by the AI chip high-speed transmission architecture; the board further comprises multiple power management modules that respectively power the multiple AI chips in the AI chip high-speed transmission architecture.
10. The AI computing board of claim 9, characterized in that the PCIE interface is used to pass data to be processed, sent by the host CPU, to the interface bridge circuit.
11. The AI computing board of claim 10, characterized in that the interface bridge circuit is used to send the data to be processed from the host, via the SERDES interface, to the multiple AI chips in the AI chip high-speed transmission architecture for computation.
12. The AI computing board of claim 11, characterized in that the interface bridge circuit is further used to receive the computation result data returned by the multiple AI chips and transfer it to the host CPU via the PCIE interface.
13. The AI computing board of claim 9, characterized in that the interface bridge circuit is further used to control the power-up sequencing of the multiple power management modules.
14. A server, characterized by comprising:
a host including a PCIE slot; and
the AI computing board of any one of claims 9-13, connected to the PCIE slot of the host.
CN201810205669.2A 2018-03-13 2018-03-13 AI chip high-speed transmission architecture, AI computing board, and server Pending CN108304341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810205669.2A CN108304341A (en) 2018-03-13 2018-03-13 AI chip high-speed transmission architecture, AI computing board, and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810205669.2A CN108304341A (en) 2018-03-13 2018-03-13 AI chip high-speed transmission architecture, AI computing board, and server

Publications (1)

Publication Number Publication Date
CN108304341A true CN108304341A (en) 2018-07-20

Family

ID=62850020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810205669.2A Pending CN108304341A (en) AI chip high-speed transmission architecture, AI computing board, and server

Country Status (1)

Country Link
CN (1) CN108304341A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191364A (en) * 2018-08-01 2019-01-11 南京天数智芯科技有限公司 Hardware architecture for accelerating an artificial intelligence processor
CN110069111A (en) * 2019-06-06 2019-07-30 深圳云朵数据科技有限公司 An AI computing server
CN110134205A (en) * 2019-06-06 2019-08-16 深圳云朵数据科技有限公司 An AI computing server
CN110515590A (en) * 2019-08-30 2019-11-29 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN110515586A (en) * 2019-08-30 2019-11-29 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN110618956A (en) * 2019-08-01 2019-12-27 苏州浪潮智能科技有限公司 BMC cloud platform resource pooling method and system
CN111131095A (en) * 2019-12-24 2020-05-08 杭州迪普科技股份有限公司 Message forwarding method and device
CN111260044A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Data comparator, data processing method, chip and electronic equipment
CN111258540A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111258542A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111258539A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111258543A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
WO2020258917A1 (en) * 2019-06-28 2020-12-30 华为技术有限公司 Data exchange chip and server
CN113312304A (en) * 2021-06-04 2021-08-27 海光信息技术股份有限公司 Interconnection device, mainboard and server
CN114088224A (en) * 2021-11-22 2022-02-25 上海聪链信息科技有限公司 Computer board chip temperature monitoring system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4729944B2 (en) * 2005-03-01 2011-07-20 パナソニック電工株式会社 Information monitoring system
CN102520769A (en) * 2011-12-31 2012-06-27 曙光信息产业股份有限公司 Server
CN205210880U (en) * 2015-12-16 2016-05-04 山东海量信息技术研究院 PCIEBox keysets based on high -end server
CN105760324A (en) * 2016-05-11 2016-07-13 北京比特大陆科技有限公司 Data processing device and server
CN205983537U (en) * 2016-05-11 2017-02-22 北京比特大陆科技有限公司 Data processing device and system, server

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4729944B2 (en) * 2005-03-01 2011-07-20 パナソニック電工株式会社 Information monitoring system
CN102520769A (en) * 2011-12-31 2012-06-27 曙光信息产业股份有限公司 Server
CN205210880U (en) * 2015-12-16 2016-05-04 山东海量信息技术研究院 PCIEBox keysets based on high -end server
CN105760324A (en) * 2016-05-11 2016-07-13 北京比特大陆科技有限公司 Data processing device and server
CN205983537U (en) * 2016-05-11 2017-02-22 北京比特大陆科技有限公司 Data processing device and system, server

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11669715B2 (en) 2018-08-01 2023-06-06 Shanghai Iluvatar Corex Semiconductor Co., Ltd. Hardware architecture for accelerating artificial intelligent processor
CN109191364A (en) * 2018-08-01 2019-01-11 南京天数智芯科技有限公司 Hardware architecture for accelerating an artificial intelligence processor
WO2020026157A3 (en) * 2018-08-01 2021-10-07 南京天数智芯科技有限公司 Hardware architecture of accelerated artificial intelligence processor
CN111258542A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111258540B (en) * 2018-11-30 2023-01-20 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111258542B (en) * 2018-11-30 2022-06-17 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111260044A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Data comparator, data processing method, chip and electronic equipment
CN111258540A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111258539A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111258543A (en) * 2018-11-30 2020-06-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN110134205B (en) * 2019-06-06 2024-03-29 深圳云朵数据科技有限公司 AI computing server
CN110134205A (en) * 2019-06-06 2019-08-16 深圳云朵数据科技有限公司 An AI computing server
CN110069111A (en) * 2019-06-06 2019-07-30 深圳云朵数据科技有限公司 An AI computing server
WO2020258917A1 (en) * 2019-06-28 2020-12-30 华为技术有限公司 Data exchange chip and server
CN110618956B (en) * 2019-08-01 2021-06-29 苏州浪潮智能科技有限公司 BMC cloud platform resource pooling method and system
CN110618956A (en) * 2019-08-01 2019-12-27 苏州浪潮智能科技有限公司 BMC cloud platform resource pooling method and system
CN110515586A (en) * 2019-08-30 2019-11-29 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN110515590A (en) * 2019-08-30 2019-11-29 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN110515586B (en) * 2019-08-30 2024-04-09 上海寒武纪信息科技有限公司 Multiplier, data processing method, chip and electronic equipment
CN111131095B (en) * 2019-12-24 2021-08-24 杭州迪普科技股份有限公司 Message forwarding method and device
CN111131095A (en) * 2019-12-24 2020-05-08 杭州迪普科技股份有限公司 Message forwarding method and device
CN113312304A (en) * 2021-06-04 2021-08-27 海光信息技术股份有限公司 Interconnection device, mainboard and server
CN114088224A (en) * 2021-11-22 2022-02-25 上海聪链信息科技有限公司 Computer board chip temperature monitoring system
CN114088224B (en) * 2021-11-22 2024-04-05 上海聪链信息科技有限公司 Calculating plate chip temperature monitoring system

Similar Documents

Publication Publication Date Title
CN108304341A (en) AI chip high-speed transmission architecture, AI computing board, and server
US20150304248A1 (en) 50 gb/s ethernet using serializer/deserializer lanes
CN109120624B (en) Multi-plane loose coupling high-bandwidth data exchange system
CN108388532A (en) AI computation acceleration board with configurable hardware compute power, processing method therefor, and server
DE202013105453U1 (en) Training frame in PMA size for 100GBASE-KP4
CN101169771B (en) Multiple passage internal bus external interface device and its data transmission method
CN106155959A (en) Data transmission method and data transmission system
DE202013104344U1 (en) Fast PMA alignment device in 100GBASE-KP4
US6260092B1 (en) Point to point or ring connectable bus bridge and an interface with method for enhancing link performance in a point to point connectable bus bridge system using the fiber channel
CN108345555A (en) Interface bridge circuit based on high-speed serial communication and method therefor
CN100573490C (en) Modular interconnect structure
CN1964285A (en) A master control device with dual CPUs and realization method thereof
CN107748726A (en) A GPU chassis
CN109840231A (en) A PCIe-SRIO interconnection device and method therefor
CN108614797A (en) A kind of high low-frequency serial bus integrated interface of polymorphic type
WO2020258917A1 (en) Data exchange chip and server
CN109407574A (en) A multi-bus selectable output control device and method therefor
CN109086238A (en) A USB-redirection-based server serial interface management system and method
CN206348789U (en) A kind of embedded signal processing system based on CPCIE and OpenVPX frameworks
CN104598430A (en) Network interface interconnection design and control system for CPU (Central Processing Unit) interconnection expansion systems
CN208386577U (en) Communication system based on M-LVDS how main high-speed bus in real time
CN210428438U (en) Interface conversion board
CN107332841A (en) Multi-protocols hybrid switching module based on PowerPC
WO2019079645A1 (en) Systems, apparatus and methods for managing connectivity of networked devices
CN202025310U (en) Serial port debugging system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20190418

Address after: 100192 2nd Floor, Building 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing

Applicant after: BEIJING BITMAIN TECHNOLOGY CO., LTD.

Address before: 100192 No.25 Building, No.1 Hospital, Baosheng South Road, Haidian District, Beijing

Applicant before: Feng Feng Technology (Beijing) Co., Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210811

Address after: 100192 Building No. 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing, No. 301

Applicant after: SUANFENG TECHNOLOGY (BEIJING) Co.,Ltd.

Address before: 100192 2nd Floor, Building 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing

Applicant before: BITMAIN TECHNOLOGIES Inc.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220224

Address after: 100176 901, floor 9, building 8, courtyard 8, KEGU 1st Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing (Yizhuang group, high-end industrial area of Beijing Pilot Free Trade Zone)

Applicant after: Beijing suneng Technology Co.,Ltd.

Address before: 100192 Building No. 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing, No. 301

Applicant before: SUANFENG TECHNOLOGY (BEIJING) CO.,LTD.