CN109408455A - Artificial intelligence SoC processor chip - Google Patents

Artificial intelligence SoC processor chip

Info

Publication number
CN109408455A
CN109408455A (application CN201811429970.8A)
Authority
CN
China
Prior art keywords
interface
bus
artificial intelligence
unit
processor chips
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811429970.8A
Other languages
Chinese (zh)
Inventor
颜军
龚永红
唐芳福
许怡冰
蒋晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Oubite Aerospace Polytron Technologies Inc
Original Assignee
Zhuhai Oubite Aerospace Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Oubite Aerospace Polytron Technologies Inc filed Critical Zhuhai Oubite Aerospace Polytron Technologies Inc
Priority to CN201811429970.8A priority Critical patent/CN109408455A/en
Publication of CN109408455A publication Critical patent/CN109408455A/en
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/76 Architectures of general purpose stored program computers
    • G06F 15/78 Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F 15/7807 System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an artificial intelligence SoC processor chip. The chip adopts a heterogeneous architecture in which a main processor is paired with an AI co-processing unit: the main processor is a quad-core SPARC LEON4, and the AI co-processing unit is composed of four GPU cores and four neural network accelerators. The chip is particularly suited to running neural-network-based prototype software, processes neural network algorithms quickly, and has low power consumption.

Description

Artificial intelligence SoC processor chip
Technical field
The present invention relates to the field of integrated circuits, and in particular to an artificial intelligence SoC processor chip.
Background technique
Artificial intelligence is a strategic industry of the future and an important development direction in China's science and technology sector. AI chips, as the key enabling technology of the entire AI field, are the foundation of China's AI industry and a critical pass for achieving breakthroughs in artificial intelligence. As AI applications land in fields such as security, logistics, autonomous driving, healthcare, and education, demand for AI chips is growing rapidly. Core chips will determine the architecture and future ecosystem of a new computing era, so global IT giants are investing heavily to accelerate the development of core AI chips, aiming to seize the strategic high ground of the new computing era and command the age of artificial intelligence.
The greatest difference between deep learning and traditional computing is that it requires not large-scale logic programming but massive parallel computation. This new computing paradigm, together with the heavy computational demands of the AI era, has given rise to new special-purpose computing chips.
Maturing deep learning algorithms, rising compute power, and big data have jointly propelled artificial intelligence into leapfrog development, and the continuous emergence of AI applications further drives demand for compute power; yet today's mainstream CPU and GPU chips consume too much power and run too slowly when processing neural network algorithms.
Summary of the invention
To solve the above problems, the object of the present invention is to provide an artificial intelligence SoC processor chip that processes neural network algorithms quickly, has low power consumption, and is particularly suited to running neural-network-based prototype software.
An artificial intelligence SoC processor chip comprises:
a main processor: a quad-core SPARC LEON4;
an AI co-processing unit: for performing neural network computation, composed of four GPU cores and four neural network accelerators;
a peripheral interface unit: for meeting the data-interaction demands of image processing, including high-speed and low-speed interfaces; and
an AMBA bus: for connecting the functional units of the chip, separately connecting the main processor, the AI co-processing unit, and the peripheral interface unit.
Further, the neural network accelerator is composed of 512 multiply-accumulators and can compute in 8-bit fixed point and 16-bit floating point.
Further, the SPARC LEON4 processor uses a seven-stage pipeline, a branch prediction mechanism, a memory management unit (MMU), and a floating-point unit (FPU), and supports symmetric multiprocessing (SMP).
Further, the GPU unit is responsible for floating-point computation and can provide double-, single-, and half-precision floating-point arithmetic; each GPU contains four shader cores, and each shader core in turn consists of a Computing Instruction part and an EVIS Instruction part.
Further, the AMBA bus includes an AXI bus, an AHB bus, and an APB bus; the AXI bus and the AHB bus are connected through an AXI/AHB bridge unit, and the AHB bus and the APB bus are connected through an AHB/APB bridge unit.
Further, the main processor and the AI co-processing unit are each connected to the AXI bus.
Further, the high-speed interfaces include a PCIe 3.0 interface, a RapidIO interface, a Gigabit Ethernet interface, and a USB 2.0 interface, each of which is connected to the AXI bus.
Further, the low-speed interfaces include MIPI, CAN, JTAG, 1553B, GPIO, I2C, SPI, and UART interfaces; the MIPI, CAN, and JTAG interfaces are connected to the AHB bus, and the 1553B, GPIO, I2C, SPI, and UART interfaces are connected to the APB bus.
Further, the chip also includes a graphics/image codec unit, which is connected to the AXI bus.
Further, the chip also includes a DDR controller, which is connected to the AXI bus.
The one or more technical solutions provided in the embodiments of the present invention have at least the following beneficial effects: the chip adopts a heterogeneous architecture of a main processor plus an AI co-processing unit, using a quad-core SPARC LEON4 with an AI co-processing unit composed of four GPU cores and four neural network accelerators; it is particularly suited to running neural-network-based prototype software, processes neural network algorithms quickly, and has low power consumption.
Detailed description of the invention
The invention is further described below by way of example with reference to the accompanying drawing.
Fig. 1 is an architectural block diagram of an artificial intelligence SoC processor chip according to one embodiment of the invention.
Specific embodiment
Referring to Fig. 1, an embodiment of the present invention provides an artificial intelligence SoC processor chip, comprising:
a main processor: a quad-core SPARC LEON4, shown in Fig. 1 as CPU0, CPU1, CPU2, and CPU3;
an AI co-processing unit: for performing neural network computation, composed of four GPU cores and four neural network accelerators, shown in Fig. 1 as GPU0-GPU3 and NN0-NN3;
a peripheral interface unit: for meeting the data-interaction demands of image processing, including high-speed and low-speed interfaces; and
an AMBA bus: for connecting the functional units of the chip, separately connecting the main processor, the AI co-processing unit, and the peripheral interface unit.
In this embodiment, the chip adopts a heterogeneous architecture of a main processor plus an AI co-processing unit, using a quad-core SPARC LEON4 with an AI co-processing unit composed of four GPU cores and four neural network accelerators; it is particularly suited to running neural-network-based prototype software, processes neural network algorithms quickly, and has low power consumption.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip in which the neural network accelerator is composed of 512 multiply-accumulators and can compute in 8-bit fixed point and 16-bit floating point.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip in which the SPARC LEON4 processor uses a seven-stage pipeline, a branch prediction mechanism, a memory management unit (MMU), and a floating-point unit (FPU), and supports symmetric multiprocessing (SMP).
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip in which the GPU unit is responsible for floating-point computation and can provide double-, single-, and half-precision floating-point arithmetic; each GPU contains four shader cores, and each shader core consists of a Computing Instruction part and an EVIS Instruction part.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip in which the AMBA bus includes an AXI bus, an AHB bus, and an APB bus; the AXI bus and the AHB bus are connected through an AXI/AHB bridge unit, and the AHB bus and the APB bus are connected through an AHB/APB bridge unit.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip in which the main processor and the AI co-processing unit are each connected to the AXI bus.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip in which the high-speed interfaces include a PCIe 3.0 interface, a RapidIO interface, a Gigabit Ethernet interface, and a USB 2.0 interface, each of which is connected to the AXI bus.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip in which the low-speed interfaces include MIPI, CAN, JTAG, 1553B, GPIO, I2C, SPI, and UART interfaces; the MIPI, CAN, and JTAG interfaces are connected to the AHB bus, and the 1553B, GPIO, I2C, SPI, and UART interfaces are connected to the APB bus.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip that also includes a graphics/image codec unit connected to the AXI bus.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip that also includes a DDR controller connected to the AXI bus.
Further, another embodiment of the invention provides an artificial intelligence SoC processor chip comprising:
a main processor: a quad-core SPARC LEON4 processor, shown in Fig. 1 as CPU0, CPU1, CPU2, and CPU3; the SPARC LEON4 processor uses a seven-stage pipeline, a branch prediction mechanism, a memory management unit (MMU), and a floating-point unit (FPU), and supports symmetric multiprocessing (SMP);
an AI co-processing unit: for performing neural network computation, composed of four GPU cores and four neural network accelerators, shown in Fig. 1 as GPU0-GPU3 and NN0-NN3; the neural network accelerator is composed of 512 multiply-accumulators and can compute in 8-bit fixed point and 16-bit floating point; the GPU unit is responsible for floating-point computation and can provide double-, single-, and half-precision floating-point arithmetic; each GPU contains four shader cores, and each shader core consists of a Computing Instruction part and an EVIS Instruction part;
a peripheral interface unit: for meeting the data-interaction demands of image processing, including high-speed and low-speed interfaces;
an AMBA bus: for connecting the functional units of the chip, separately connecting the main processor, the AI co-processing unit, and the peripheral interface unit; the AMBA bus includes a 128-bit AXI bus, a 128-bit AHB bus, and a 32-bit APB bus; the AXI bus and the AHB bus are connected through an AXI/AHB bridge unit, and the AHB bus and the APB bus are connected through an AHB/APB bridge unit, realizing the connection of every on-chip functional unit so that the whole system is interconnected;
the main processor and the AI co-processing unit are each connected to the AXI bus;
the high-speed interfaces include a PCIe 3.0 interface, a RapidIO interface, a Gigabit Ethernet interface, and a USB 2.0 interface, each of which is connected to the AXI bus;
the low-speed interfaces include MIPI, CAN, JTAG, 1553B, GPIO, I2C, SPI, and UART interfaces; the MIPI, CAN, and JTAG interfaces are connected to the AHB bus, and the 1553B, GPIO, I2C, SPI, and UART interfaces are connected to the APB bus;
the chip also includes a graphics/image codec unit connected to the AXI bus; a DDR controller (the memory controller with EDAC shown in Fig. 1), connected to the AXI bus through a CACHE unit; a debug unit connected to the APB bus; and a DMA unit connected to the AXI bus.
Explanation of some key components of the system:
GPU unit: also known as the programmable engine (Programmable Engine). The GPU unit is responsible for floating-point computation and can provide double-, single-, and half-precision floating-point arithmetic, with compute capability up to 8 GFLOPS @ 1 GHz. The programmable engine contains four shader cores, so four instructions can be executed per cycle. Each shader core has four register files; each register file holds 128 registers, each 128 bits wide. Registers hold environment variables during kernel execution, and temporary variables have exclusive use of registers. Each shader core consists of a Computing Instruction part and an EVIS Instruction part.
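The quoted figures (four shader cores, 8 GFLOPS @ 1 GHz) are mutually consistent if each shader core retires two floating-point operations per cycle, e.g. one fused multiply-add counted as two FLOPs. That per-core rate is an assumption for illustration, not stated in the patent; a minimal sanity check:

```python
# Back-of-the-envelope check of the peak-throughput figure quoted above.
# Assumption (not stated in the patent): each shader core retires one
# fused multiply-add (counted as 2 FLOPs) per cycle.
CLOCK_HZ = 1_000_000_000      # 1 GHz
SHADER_CORES = 4              # four shader cores per GPU
FLOPS_PER_CORE_PER_CYCLE = 2  # one FMA = 2 FLOPs

peak_flops = CLOCK_HZ * SHADER_CORES * FLOPS_PER_CORE_PER_CYCLE
print(peak_flops / 1e9, "GFLOPS")  # 8.0 GFLOPS, matching the quoted figure
```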
Neural network accelerator: composed of multiple NN cores, each containing 512 multiply-accumulators, which together form a large-scale parallel operation matrix especially suited to convolution and matrix operations. An NN core can be configured for 8-bit fixed-point mode or 16-bit floating-point mode. The neural network accelerator mainly targets fixed-point computation, with compute capability up to 3 TOPS @ 1 GHz.
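Each multiply-accumulator computes a running sum of products; in 8-bit fixed-point mode the operands are quantized integers with a shared scale factor, and the wide accumulator result is rescaled back to real units. The sketch below illustrates that primitive; the symmetric quantization scheme and the scale values are illustrative assumptions, not the chip's actual numerics:

```python
# Illustrative 8-bit fixed-point multiply-accumulate, the primitive that a
# 512-MAC array replicates in parallel. The quantization scheme shown is a
# common symmetric one and is an assumption, not taken from the patent.

def quantize_int8(values, scale):
    """Map real values to int8 with a shared scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def mac_dot(weights_q, inputs_q):
    """Accumulate int8 products in a wide accumulator, as MAC arrays do."""
    acc = 0
    for w, x in zip(weights_q, inputs_q):
        acc += w * x          # each term is one multiply-accumulate step
    return acc

w_scale, x_scale = 0.02, 0.05
weights = [0.4, -0.2, 0.1]
inputs = [1.0, 0.5, -2.0]

acc = mac_dot(quantize_int8(weights, w_scale), quantize_int8(inputs, x_scale))
result = acc * w_scale * x_scale   # rescale accumulator back to real units
print(result)                      # close to the true dot product 0.1
```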
Graphics/image codec unit: a hardware video and graphics codec that implements encoding and decoding of formats such as H.264, H.265, and JPEG in hardware, providing compression and decompression of data.
Gigabit Ethernet interface: as the latest fast Ethernet technology, Gigabit Ethernet's greatest advantage is that it inherits the low cost of traditional Ethernet. Gigabit Ethernet is still Ethernet: it uses the same frame format, frame structure, network protocols, full/half-duplex operation, flow control, and cabling system as 10M Ethernet. Upgrading to Gigabit Ethernet does not require changing network applications, network management components, or the network operating system, so existing investment is protected to the greatest extent. In addition, the IEEE standard supports multimode fiber up to 550 m, single-mode fiber up to 70 km, and coaxial cable up to 100 m. Gigabit Ethernet fills the gaps left by the 802.3 Ethernet/Fast Ethernet standards. This IP block contains only the Ethernet controller (MAC); because Ethernet interfaces come in many forms, the physical-layer transceiver (PHY) is generally external for flexibility.
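The 1 Gbps line rate translates into somewhat less usable payload throughput, because every frame carries fixed overhead on the wire (preamble and start delimiter, MAC header, frame check sequence, and the inter-frame gap). The figures below are textbook IEEE 802.3 values, not taken from the patent; a quick calculation for full-size frames:

```python
# Usable payload throughput of Gigabit Ethernet for a given payload size.
# Per-frame overheads are the standard IEEE 802.3 values.
LINE_RATE_BPS = 1_000_000_000
PREAMBLE_SFD = 8      # bytes on the wire before each frame
HEADER_FCS = 18       # 14-byte MAC header + 4-byte frame check sequence
IFG = 12              # inter-frame gap, expressed in byte times

def payload_throughput(payload_bytes):
    wire_bytes = PREAMBLE_SFD + HEADER_FCS + IFG + payload_bytes
    return LINE_RATE_BPS * payload_bytes / wire_bytes

print(round(payload_throughput(1500) / 1e6, 1), "Mbps")  # ~975.3 Mbps
```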
RapidIO interface: RapidIO is mainly used for interconnect inside embedded systems, supporting chip-to-chip and board-to-board communication, and can serve as the backplane connection of embedded devices. The RapidIO 1.x standard supports signal rates of 1.25 GHz, 2.5 GHz, and 3.125 GHz; the RapidIO 2.x standard adds support for 5 GHz and 6.25 GHz. Almost all mainstream embedded vendors worldwide now support RapidIO.
PCIe 3.0 interface: PCIe is a high-speed serial computer expansion bus standard providing point-to-point, dual-simplex, high-bandwidth transmission. Each connected device has a dedicated link and does not share bus bandwidth. PCIe supports active power management, error reporting, end-to-end reliable transmission, hot plugging, quality of service, and other functions. Its main advantage is a high data transfer rate: the latest interface, PCIe 3.0, has a bit rate of 8 Gbps per lane, about twice the bandwidth of the previous generation.
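The "about twice the previous generation" claim follows from the line encodings: PCIe 3.0 raised the transfer rate from 5 GT/s to 8 GT/s and replaced 8b/10b encoding (20% overhead) with 128b/130b (about 1.5% overhead). The encoding parameters below are the standard ones from the PCIe specifications; a per-lane comparison:

```python
# Per-lane effective bandwidth of PCIe 3.0 vs. 2.0, supporting the
# "about twice the previous generation" claim in the text.
def lane_bandwidth_mbs(transfer_rate_gts, enc_payload, enc_total):
    bits_per_s = transfer_rate_gts * 1e9 * enc_payload / enc_total
    return bits_per_s / 8 / 1e6   # convert to megabytes per second

gen3 = lane_bandwidth_mbs(8, 128, 130)   # 8 GT/s, 128b/130b encoding
gen2 = lane_bandwidth_mbs(5, 8, 10)      # 5 GT/s, 8b/10b encoding
print(round(gen3, 1), round(gen2, 1), round(gen3 / gen2, 2))
# ~984.6 MB/s vs 500.0 MB/s per lane, a ratio of ~1.97
```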
DDR controller: the DDR controller is the interface between external DDR memory and the internal processor. It determines important system parameters such as the maximum DDR capacity the system can use, the number of DDR memory banks, the memory type and speed, the data depth of the memory chips, and the data width, and thus has a major effect on overall system performance. The module also provides ECC (Error Checking and Correcting) error detection and correction, improving system stability.
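Memory ECC works by storing extra check bits alongside each data word so that a flipped bit can be located and corrected on read. The toy Hamming(7,4) code below illustrates the principle only; real DDR ECC uses a wider SECDED code (typically 72 bits protecting 64), and this sketch is not the chip's implementation:

```python
# Toy Hamming(7,4) single-error-correcting code, illustrating the ECC
# principle mentioned above (not the chip's actual code).

def encode(d):                       # d: list of 4 data bits
    code = [0] * 8                   # positions 1..7 used (1-based)
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]   # parity over positions with bit 0 set
    code[2] = code[3] ^ code[6] ^ code[7]   # parity over positions with bit 1 set
    code[4] = code[5] ^ code[6] ^ code[7]   # parity over positions with bit 2 set
    return code[1:]                  # 7-bit codeword

def correct(bits):                   # bits: 7 received bits
    code = [0] + list(bits)
    syndrome = 0
    for p in (1, 2, 4):              # recompute each parity check
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= code[i]
        if parity:
            syndrome += p
    if syndrome:                     # syndrome = position of the flipped bit
        code[syndrome] ^= 1
    return [code[3], code[5], code[6], code[7]]

data = [1, 0, 1, 1]
word = encode(data)
word[2] ^= 1                         # simulate a single-bit memory error
assert correct(word) == data         # the error is located and corrected
print("corrected OK")
```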
The above are merely preferred embodiments of the present invention, and the invention is not limited to them; any implementation that achieves the technical effects of the invention by the same means shall fall within the protection scope of the invention.

Claims (10)

1. An artificial intelligence SoC processor chip, characterized by comprising:
a main processor: a quad-core SPARC LEON4;
an AI co-processing unit: for performing neural network computation, composed of four GPU cores and four neural network accelerators;
a peripheral interface unit: for meeting the data-interaction demands of image processing, including high-speed and low-speed interfaces; and
an AMBA bus: for connecting the functional units of the chip, separately connecting the main processor, the AI co-processing unit, and the peripheral interface unit.
2. The artificial intelligence SoC processor chip according to claim 1, characterized in that the neural network accelerator is composed of 512 multiply-accumulators and can compute in 8-bit fixed point and 16-bit floating point.
3. The artificial intelligence SoC processor chip according to claim 1, characterized in that the SPARC LEON4 processor uses a seven-stage pipeline, a branch prediction mechanism, a memory management unit (MMU), and a floating-point unit (FPU), and supports symmetric multiprocessing (SMP).
4. The artificial intelligence SoC processor chip according to claim 1, characterized in that the GPU unit is responsible for floating-point computation and can provide double-, single-, and half-precision floating-point arithmetic; each GPU contains four shader cores, and each shader core consists of a Computing Instruction part and an EVIS Instruction part.
5. The artificial intelligence SoC processor chip according to claim 1, characterized in that the AMBA bus includes an AXI bus, an AHB bus, and an APB bus; the AXI bus and the AHB bus are connected through an AXI/AHB bridge unit, and the AHB bus and the APB bus are connected through an AHB/APB bridge unit.
6. The artificial intelligence SoC processor chip according to claim 5, characterized in that the main processor and the AI co-processing unit are each connected to the AXI bus.
7. The artificial intelligence SoC processor chip according to claim 5, characterized in that the high-speed interfaces include a PCIe 3.0 interface, a RapidIO interface, a Gigabit Ethernet interface, and a USB 2.0 interface, each of which is connected to the AXI bus.
8. The artificial intelligence SoC processor chip according to claim 5, characterized in that the low-speed interfaces include MIPI, CAN, JTAG, 1553B, GPIO, I2C, SPI, and UART interfaces; the MIPI, CAN, and JTAG interfaces are connected to the AHB bus, and the 1553B, GPIO, I2C, SPI, and UART interfaces are connected to the APB bus.
9. The artificial intelligence SoC processor chip according to claim 5, characterized in that it further includes a graphics/image codec unit connected to the AXI bus.
10. The artificial intelligence SoC processor chip according to claim 5, characterized in that it further includes a DDR controller connected to the AXI bus.
CN201811429970.8A 2018-11-27 2018-11-27 A kind of artificial intelligence SOC processor chips Pending CN109408455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811429970.8A CN109408455A (en) 2018-11-27 2018-11-27 A kind of artificial intelligence SOC processor chips


Publications (1)

Publication Number Publication Date
CN109408455A true CN109408455A (en) 2019-03-01

Family

ID=65456026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811429970.8A Pending CN109408455A (en) 2018-11-27 2018-11-27 A kind of artificial intelligence SOC processor chips

Country Status (1)

Country Link
CN (1) CN109408455A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070187A (en) * 2019-04-18 2019-07-30 山东超越数控电子股份有限公司 A kind of design method of the portable computer towards artificial intelligence application
CN110569173A (en) * 2019-09-16 2019-12-13 山东超越数控电子股份有限公司 Server health management chip based on Loongson IP core and implementation method
CN112069115A (en) * 2020-09-18 2020-12-11 上海燧原科技有限公司 Data transmission method, equipment and system
CN112199322A (en) * 2020-09-30 2021-01-08 中国电力科学研究院有限公司 Electric power intelligent control terminal and SOC electric power chip architecture based on RISC-V
TWI729491B (en) * 2019-09-11 2021-06-01 立端科技股份有限公司 Ethernet network communication system using gipo pins and network server having the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN207458128U (en) * 2017-09-07 2018-06-05 哈尔滨理工大学 A kind of convolutional neural networks accelerator based on FPGA in vision application
KR20180075913A (en) * 2016-12-27 2018-07-05 삼성전자주식회사 A method for input processing using neural network calculator and an apparatus thereof
CN209149302U (en) * 2018-11-27 2019-07-23 珠海欧比特宇航科技股份有限公司 A kind of artificial intelligence SOC processor chips

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180075913A (en) * 2016-12-27 2018-07-05 삼성전자주식회사 A method for input processing using neural network calculator and an apparatus thereof
CN207458128U (en) * 2017-09-07 2018-06-05 哈尔滨理工大学 A kind of convolutional neural networks accelerator based on FPGA in vision application
CN209149302U (en) * 2018-11-27 2019-07-23 珠海欧比特宇航科技股份有限公司 A kind of artificial intelligence SOC processor chips

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070187A (en) * 2019-04-18 2019-07-30 山东超越数控电子股份有限公司 A kind of design method of the portable computer towards artificial intelligence application
TWI729491B (en) * 2019-09-11 2021-06-01 立端科技股份有限公司 Ethernet network communication system using gipo pins and network server having the same
CN110569173A (en) * 2019-09-16 2019-12-13 山东超越数控电子股份有限公司 Server health management chip based on Loongson IP core and implementation method
CN110569173B (en) * 2019-09-16 2022-12-27 超越科技股份有限公司 Server health management chip based on Loongson IP core and implementation method
CN112069115A (en) * 2020-09-18 2020-12-11 上海燧原科技有限公司 Data transmission method, equipment and system
CN112069115B (en) * 2020-09-18 2021-06-25 上海燧原科技有限公司 Data transmission method, equipment and system
CN112199322A (en) * 2020-09-30 2021-01-08 中国电力科学研究院有限公司 Electric power intelligent control terminal and SOC electric power chip architecture based on RISC-V

Similar Documents

Publication Publication Date Title
CN109408455A (en) A kind of artificial intelligence SOC processor chips
US11537532B2 (en) Lookahead priority collection to support priority elevation
US11663135B2 (en) Bias-based coherency in an interconnect fabric
CN107111582B (en) Multi-core bus architecture with non-blocking high performance transaction credit system
KR102413593B1 (en) Methods and circuits for deadlock avoidance
CN109582605A (en) Pass through the consistency memory devices of PCIe
CN104583937B (en) For the power and the method and apparatus that optimize of delay to chain road
CN206649376U (en) One kind is applied in the road server PCH configuration structures of purley platforms eight
CN209149302U (en) A kind of artificial intelligence SOC processor chips
US20210112132A1 (en) System, apparatus and method for handling multi-protocol traffic in data link layer circuitry
EP4235441A1 (en) System, method and apparatus for peer-to-peer communication
CN114253889A (en) Approximate data bus inversion techniques for delay sensitive applications
CN112115080A (en) Techniques for generating input/output performance metrics
Otani et al. Peach: A multicore communication system on chip with PCI Express
RU167666U1 (en) Processor Module (MBE2S-PC)
Xu et al. A feasible architecture for ARM-based microserver systems considering energy efficiency
CN210572737U (en) Secondary radar signal processing device
CN112313633B (en) Hardware-based virtual-to-physical address translation for programmable logic hosts in a system-on-chip
CN113434445A (en) Management system and server for I3C to access DIMM
Litz Improving the scalability of high performance computer systems
Andersson et al. Development of a functional prototype of the quad core NGMP space processor
Solokhina et al. Radiation tolerant heterogeneous Multicore “system on chip” with built-in multichannel SpaceFibre switch for onboard data management and mass storage device: Components, short paper
CN209433292U (en) A kind of MicroATX mainboard based on 421 processor of Shen prestige and ICH2 nest plate
CN206532284U (en) A kind of universal interface processor
Vogt et al. IBM BladeCenter QS22: Design, performance, and utilization in hybrid computing systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination