CN110163361A - Computing device and method - Google Patents

Computing device and method

Info

Publication number
CN110163361A
CN110163361A (application CN201910195820.3A)
Authority
CN
China
Prior art keywords
data
input data
unit
operation instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910195820.3A
Other languages
Chinese (zh)
Other versions
CN110163361B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Cambricon Information Technology Co Ltd
Original Assignee
Shanghai Cambricon Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201810149287.2A (external priority, CN110163350B)
Priority claimed from CN201810207915.8A (external priority, CN110276447A)
Application filed by Shanghai Cambricon Information Technology Co Ltd
Priority claimed from CN201880002628.1A (external priority, CN110383300B)
Publication of CN110163361A
Application granted
Publication of CN110163361B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/57: Arithmetic logic units [ALU], i.e. arrangements or devices for performing two or more of the operations covered by groups G06F7/483 – G06F7/556 or for performing logical operations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Abstract

A computing device, comprising: a storage unit (10) for obtaining input data and a computation instruction; a controller unit (11) for extracting the computation instruction from the storage unit (10), decoding the computation instruction to obtain one or more operation instructions, and sending the one or more operation instructions and the input data to an arithmetic unit (12); and the arithmetic unit (12) for performing computations on the input data according to the one or more operation instructions to obtain the result of the computation instruction. The computing device represents the data participating in machine-learning computation as fixed-point data, which can improve the processing speed and processing efficiency of training operations.

Description

Computing device and method
Technical field
This application relates to the technical field of information processing, and in particular to a computing device and method.
Background
With the continuous development of information technology and people's growing demands, the requirements on the timeliness of information are increasingly high. At present, terminals acquire and process information based on general-purpose processors.
In practice, it has been found that this manner of processing information by running software on a general-purpose processor is limited by the operating speed of the general-purpose processor. Especially when the load on the general-purpose processor is large, information processing efficiency is low and latency is high. For computation models used in information processing, such as training models, the amount of computation involved in training operations is even larger, so a general-purpose processor takes a long time to complete the training operations and is inefficient.
Summary of the invention
Embodiments of the present application provide a computing device and method that can increase the processing speed and efficiency of such operations.
In a first aspect, an embodiment of the invention provides a computing device comprising a storage unit, a controller unit, and a conversion unit;
The controller unit is configured to obtain one or more operation instructions, wherein each operation instruction includes an operation field and an operation code; the operation code indicates the function of the operation instruction, and the operation field includes the start address of the first input data, the length of the first input data, the start address of the output data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier. The controller unit parses the operation instruction to obtain the start address of the first input data, the start address of the output data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier; obtains the first input data from the storage unit according to the start address of the first input data; transmits the first input data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier to the conversion unit; and transmits the plurality of operation instructions to the arithmetic unit;
The conversion unit is configured to convert the first input data into second input data according to the decimal point position, the data type flag bit, and the operation type identifier, the second input data being fixed-point data.
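For concreteness, the sketch below models the operation field just described as a plain record and shows how a controller unit might use the start address and length to fetch the first input data from storage. It is an illustrative reading of this paragraph only, not the claimed implementation; all field and function names are invented for the example.

```python
# Minimal sketch of one operation instruction's operation field
# (field names are illustrative, not taken from the patent).
from dataclasses import dataclass

@dataclass
class OperationInstruction:
    opcode: int          # indicates the function of the instruction
    in_addr: int         # start address of the first input data
    in_length: int       # length of the first input data
    out_addr: int        # start address of the output data
    point_location: int  # decimal point position for fixed-point data
    dtype_flag: int      # 0 = fixed-point, 1 = floating-point
    op_type_id: int      # operation type identifier

def fetch_first_input(storage: bytes, instr: OperationInstruction) -> bytes:
    # The controller unit reads the first input data from the storage unit
    # using the start address and length parsed from the instruction.
    return storage[instr.in_addr : instr.in_addr + instr.in_length]
```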
In a possible embodiment, the controller unit obtains the one or more operation instructions as follows:
The controller unit obtains a computation instruction and parses the computation instruction to obtain the one or more operation instructions.
In a possible embodiment, the operation field of the operation instruction further includes the length of the first input data, and the controller unit is further configured to parse the operation instruction to obtain the length of the first input data;
The controller unit obtaining the first input data from the storage unit according to the start address of the first input data comprises:
the controller unit obtaining the first input data from the storage unit according to the start address of the first input data and the length of the first input data.
In a possible embodiment, the computing device is configured to perform machine-learning computation and further includes an arithmetic unit;
The controller unit is further configured to transmit the one or more operation instructions to the arithmetic unit;
The conversion unit is further configured to transmit the second input data to the arithmetic unit;
The arithmetic unit is configured to operate on the second input data according to the one or more operation instructions to obtain the calculated result of the computation instruction, and to store the calculated result at the position in the storage unit corresponding to the start address of the output data.
In a possible embodiment, the machine-learning computation includes an artificial neural network operation, the first input data includes input neuron data and weight data, and the calculated result is output neuron data.
In a possible embodiment, the arithmetic unit includes a master processing circuit and a plurality of slave processing circuits;
The master processing circuit is configured to perform preamble processing on the second input data and to transfer data and the plurality of operation instructions between itself and the plurality of slave processing circuits;
The plurality of slave processing circuits are configured to perform intermediate operations according to the second input data and the plurality of operation instructions transmitted from the master processing circuit to obtain a plurality of intermediate results, and to transfer the plurality of intermediate results to the master processing circuit;
The master processing circuit is configured to perform subsequent processing on the plurality of intermediate results to obtain the calculated result of the computation instruction.
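The master-slave division of labor described above can be summarized in a few lines. The sketch below assumes an even split of the prepared data across the slave circuits and summation as the master's subsequent processing; the text leaves both choices open, so this is only one possible reading.

```python
# Illustrative master-slave flow; the even split and the final summation
# are assumptions, not quoted from the patent.
def master_slave_compute(data, num_slaves, slave_op):
    prepared = data                                   # preamble processing (identity here)
    chunks = [prepared[i::num_slaves] for i in range(num_slaves)]
    intermediates = [slave_op(c) for c in chunks]     # would run in parallel in hardware
    return sum(intermediates)                         # subsequent processing

# Example: a dot product split across 4 slave processing circuits.
xs, ws = [1, 2, 3, 4, 5, 6, 7, 8], [1, 1, 2, 2, 3, 3, 4, 4]
pairs = list(zip(xs, ws))
result = master_slave_compute(pairs, 4, lambda chunk: sum(x * w for x, w in chunk))
assert result == sum(x * w for x, w in zip(xs, ws))
```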
In a possible embodiment, the computing device further includes a direct memory access (DMA) unit, and the storage unit includes a register, a cache, or any combination thereof;
The cache is configured to store the first input data, wherein the cache includes a scratchpad cache;
The register is configured to store scalar data within the first input data;
The DMA unit is configured to read data from the storage unit or store data to the storage unit.
In a possible embodiment, the controller unit includes an instruction cache unit, an instruction processing unit, and a storage queue unit;
The instruction cache unit is configured to store computation instructions associated with the artificial neural network operation;
The instruction processing unit is configured to parse the computation instruction to obtain the data conversion instruction and the plurality of operation instructions, and to parse the data conversion instruction to obtain its operation code and operation field;
The storage queue unit is configured to store an instruction queue comprising a plurality of operation instructions or computation instructions to be executed in the order of the queue.
In a possible embodiment, the controller unit further includes:
a dependency processing unit, configured to determine whether a first operation instruction is associated with a zeroth operation instruction preceding the first operation instruction; if the first operation instruction is associated with the zeroth operation instruction, the first operation instruction is buffered in the instruction cache unit, and after the zeroth operation instruction has finished executing, the first operation instruction is fetched from the instruction cache unit and transmitted to the arithmetic unit.
Determining whether the first operation instruction is associated with the zeroth operation instruction preceding the first operation instruction comprises:
extracting a first storage address interval of the data required by the first operation instruction according to the first operation instruction, and extracting a zeroth storage address interval of the data required by the zeroth operation instruction according to the zeroth operation instruction; if the first storage address interval and the zeroth storage address interval have an overlapping region, it is determined that the first operation instruction and the zeroth operation instruction are associated; if they have no overlapping region, it is determined that the first operation instruction and the zeroth operation instruction are not associated.
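The association test above reduces to an interval-overlap check on storage addresses. A minimal sketch, assuming half-open address intervals [start, start + length):

```python
def instructions_associated(addr0: int, len0: int, addr1: int, len1: int) -> bool:
    # Overlap test for the zeroth and first storage address intervals,
    # assuming half-open intervals [addr, addr + len).
    return addr0 < addr1 + len1 and addr1 < addr0 + len0
```

If this returns True, the first operation instruction waits in the instruction cache unit until the zeroth operation instruction has finished executing.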
In a possible embodiment, when the first input data is fixed-point data, the arithmetic unit further includes:
a derivation unit, configured to derive the decimal point position of one or more intermediate results according to the decimal point position of the first input data, wherein the one or more intermediate results are obtained by operating on the first input data.
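The patent states that the derivation unit derives the intermediate results' decimal point positions from that of the inputs but does not spell out the rule. The sketch below uses standard fixed-point conventions (products add their operands' point positions; sums keep the finer of the two scales) purely as an assumed example.

```python
def derive_point_location(op: str, s_a: int, s_b: int) -> int:
    # value = integer * 2**(-s); these derivation rules are assumptions.
    if op == "mul":            # (a*2**-s_a) * (b*2**-s_b) = a*b * 2**-(s_a + s_b)
        return s_a + s_b
    if op in ("add", "sub"):   # operands are first aligned to the finer scale
        return max(s_a, s_b)
    raise ValueError(f"unsupported operation: {op}")
```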
In a possible embodiment, the arithmetic unit further includes a data cache unit for caching the one or more intermediate results.
In a possible embodiment, the arithmetic unit includes a tree module having one root port and a plurality of branch ports; the root port of the tree module is connected to the master processing circuit, and the plurality of branch ports of the tree module are each connected to one of the plurality of slave processing circuits;
The tree module is configured to forward data and operation instructions between the master processing circuit and the plurality of slave processing circuits, wherein the tree module is an n-ary tree structure and n is an integer greater than or equal to 2.
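As a toy model of the tree module, the sketch below broadcasts data from the root toward the leaves and reduces intermediate results back up, n children per node; the reduction operator (summation here) is an assumption for illustration.

```python
# Toy n-ary tree module: broadcast down toward the slave circuits,
# reduce intermediate results back up toward the master circuit.
def tree_broadcast(data, num_slaves):
    return [data] * num_slaves          # every slave receives the forwarded data

def tree_reduce(intermediates, n=2):
    results = list(intermediates)
    while len(results) > 1:             # each internal node combines n children
        results = [sum(results[i:i + n]) for i in range(0, len(results), n)]
    return results[0]

assert tree_reduce([1, 2, 3, 4, 5], n=2) == 15
```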
In a possible embodiment, the arithmetic unit further includes a branch processing circuit;
The master processing circuit is specifically configured to determine that the input neurons are broadcast data and the weights are distribution data, to partition one piece of distribution data into a plurality of data blocks, and to send at least one of the plurality of data blocks, the broadcast data, and at least one of the plurality of operation instructions to the branch processing circuit;
The branch processing circuit is configured to forward the data blocks, broadcast data, and operation instructions between the master processing circuit and the plurality of slave processing circuits;
The plurality of slave processing circuits are configured to operate on the received data blocks and broadcast data according to the operation instruction to obtain intermediate results, and to transfer the intermediate results to the branch processing circuit;
The master processing circuit is further configured to perform subsequent processing on the intermediate results sent by the branch processing circuit to obtain the result of the operation instruction, and to send the result of the computation instruction to the controller unit.
In a possible embodiment, the plurality of slave processing circuits are distributed in an array; each slave processing circuit is connected to the adjacent slave processing circuits, and the master processing circuit is connected to K of the plurality of slave processing circuits, the K slave processing circuits being: the n slave processing circuits of the first row, the n slave processing circuits of the m-th row, and the m slave processing circuits of the first column;
The K slave processing circuits are configured to forward data and instructions between the master processing circuit and the plurality of slave processing circuits;
The master processing circuit is further configured to determine that the input neurons are broadcast data and the weights are distribution data, to partition one piece of distribution data into a plurality of data blocks, and to send at least one of the plurality of data blocks and at least one of the plurality of operation instructions to the K slave processing circuits;
The K slave processing circuits are configured to convert the data passing between the master processing circuit and the plurality of slave processing circuits;
The plurality of slave processing circuits are configured to operate on the received data blocks according to the operation instruction to obtain intermediate results, and to transfer the operation results to the K slave processing circuits;
The master processing circuit is configured to process the intermediate results sent by the K slave processing circuits to obtain the result of the computation instruction, and to send the result of the computation instruction to the controller unit.
In a possible embodiment, the master processing circuit is specifically configured to combine and sort the intermediate results sent by the plurality of slave processing circuits to obtain the result of the computation instruction;
or the master processing circuit is specifically configured to combine and sort the intermediate results sent by the plurality of slave processing circuits and then apply activation processing to obtain the result of the computation instruction.
In a possible embodiment, the master processing circuit includes one of, or any combination of, an activation processing circuit and an addition processing circuit;
The activation processing circuit is configured to perform the activation operation on data in the master processing circuit;
The addition processing circuit is configured to perform addition or accumulation operations.
Each slave processing circuit includes:
a multiplication processing circuit, configured to perform product operations on the received data blocks to obtain product results; and
an accumulation processing circuit, configured to perform accumulation operations on the product results to obtain the intermediate result.
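One slave processing circuit as just described is a multiply-then-accumulate pipeline. A minimal sketch, taking the data block to be a weight vector and the broadcast data a neuron vector (an assumption consistent with the artificial-neural-network embodiment):

```python
def slave_processing_circuit(data_block, broadcast_data):
    # Multiplication processing circuit: elementwise products of the
    # received data block and the broadcast data.
    products = [w * x for w, x in zip(data_block, broadcast_data)]
    # Accumulation processing circuit: fold the products into one intermediate result.
    return sum(products)

assert slave_processing_circuit([1, 2, 3], [4, 5, 6]) == 32
```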
In a second aspect, an embodiment of the invention provides a calculation method, comprising:
a controller unit obtaining one or more operation instructions; parsing each operation instruction to obtain the start address of the first input data, the length of the first input data, the start address of the output data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier; and obtaining the first input data from the storage unit according to the start address of the first input data;
a conversion unit converting the first input data into second input data according to the decimal point position, the data type flag bit, and the operation type identifier, the second input data being fixed-point data.
In a possible embodiment, the controller unit obtaining the one or more operation instructions comprises:
obtaining a computation instruction and parsing the computation instruction to obtain the one or more operation instructions.
In a possible embodiment, the operation field of the operation instruction further includes the length of the first input data;
the controller unit obtaining the first input data according to the start address of the first input data comprises: obtaining the first input data according to the start address of the first input data and the length of the first input data.
In a possible embodiment, the calculation method is a method for performing machine-learning computation, and the method further comprises:
an arithmetic unit performing, according to the plurality of operation instructions, the computation indicated by the computation instruction on the second input data to obtain the calculated result, and storing the calculated result in the memory space corresponding to the output data address.
In a possible embodiment, the machine-learning computation includes an artificial neural network operation, the first input data includes input neurons and weights, and the calculated result is output neurons.
In a possible embodiment, when the first input data and the second input data are both fixed-point data, the decimal point position of the first input data and the decimal point position of the second input data are inconsistent.
In a possible embodiment, when the first input data is fixed-point data, the method further comprises:
the arithmetic unit deriving the decimal point position of one or more intermediate results according to the decimal point position of the first input data, wherein the one or more intermediate results are obtained by operating on the first input data.
In a third aspect, an embodiment of the invention provides a machine-learning arithmetic device comprising one or more computing devices according to the first aspect. The machine-learning arithmetic device is configured to obtain the data to be operated on and control information from other processing devices, execute the specified machine-learning operation, and pass the execution result to other processing devices through an I/O interface;
When the machine-learning arithmetic device includes a plurality of computing devices, the computing devices can be linked through a specific structure and transmit data to one another;
Specifically, the plurality of computing devices are interconnected and transmit data via a PCIE bus to support larger-scale machine-learning operations; the plurality of computing devices share a single control system or have their own control systems; the plurality of computing devices share memory or have their own memories; and the interconnection of the plurality of computing devices may follow any interconnection topology.
In a fourth aspect, an embodiment of the invention provides a combined processing device comprising the machine-learning arithmetic device according to the third aspect, a universal interconnection interface, and other processing devices. The machine-learning arithmetic device interacts with the other processing devices to jointly complete the operation specified by the user. The combined processing device may further include a storage device connected to the machine-learning arithmetic device and the other processing devices, respectively, for saving the data of the machine-learning arithmetic device and the other processing devices.
In a fifth aspect, an embodiment of the invention provides a neural network chip comprising the computing device according to the first aspect, the machine-learning arithmetic device according to the third aspect, or the combined processing device according to the fourth aspect.
In a sixth aspect, an embodiment of the invention provides a neural network chip package structure comprising the neural network chip according to the fifth aspect.
In a seventh aspect, an embodiment of the invention provides a board card comprising a memory device, an interface device, a control device, and the neural network chip according to the fifth aspect;
wherein the neural network chip is connected to the memory device, the control device, and the interface device, respectively;
the memory device is configured to store data;
the interface device is configured to realize data transmission between the chip and external equipment;
the control device is configured to monitor the state of the chip.
Further, the memory device includes a plurality of groups of storage units, each group of storage units being connected to the chip by a bus, the storage units being DDR SDRAM;
the chip includes a DDR controller for controlling data transmission to, and data storage in, each storage unit;
the interface device is a standard PCIE interface.
In an eighth aspect, an embodiment of the invention provides an electronic device comprising the neural network chip according to the fifth aspect, the neural network chip package structure according to the sixth aspect, or the board card according to the seventh aspect.
In some embodiments, the electronic device includes a data processing device, robot, computer, printer, scanner, tablet computer, intelligent terminal, mobile phone, driving recorder, navigator, sensor, webcam, server, cloud server, camera, video camera, projector, watch, earphone, mobile storage, wearable device, vehicle, household appliance, and/or medical device.
In some embodiments, the vehicle includes an aircraft, ship, and/or car; the household appliance includes a television, air conditioner, microwave oven, refrigerator, rice cooker, humidifier, washing machine, electric lamp, gas stove, or range hood; the medical device includes a nuclear magnetic resonance instrument, B-ultrasound instrument, and/or electrocardiograph.
These and other aspects of the invention will be more readily apparent from the following description.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic diagram of the data structure of fixed-point data provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of the data structure of another fixed-point data provided by an embodiment of the present application;
Fig. 3 is a structural schematic diagram of a computing device provided by an embodiment of the present application;
Fig. 3A is a structural schematic diagram of a computing device provided by one embodiment of the present application;
Fig. 3B is a structural schematic diagram of a computing device provided by another embodiment of the present application;
Fig. 3C is a structural schematic diagram of a computing device provided by yet another embodiment of the present application;
Fig. 3D is a structural schematic diagram of a master processing circuit provided by an embodiment of the present application;
Fig. 3E is a structural schematic diagram of a computing device provided by yet another embodiment of the present application;
Fig. 3F is a structural schematic diagram of a tree module provided by an embodiment of the present application;
Fig. 3G is a structural schematic diagram of a computing device provided by yet another embodiment of the present application;
Fig. 3H is a structural schematic diagram of a computing device provided by yet another embodiment of the present application;
Fig. 4 is a flowchart of a single-layer artificial neural network forward operation provided by an embodiment of the present application;
Fig. 5 is a flowchart of a neural network forward operation and reverse training provided by an embodiment of the present application;
Fig. 6 is a structural diagram of a combined processing device provided by an embodiment of the present application;
Fig. 6A is a structural schematic diagram of a computing device provided by yet another embodiment of the present application;
Fig. 7 is a structural diagram of another combined processing device provided by an embodiment of the present application;
Fig. 8 is a structural schematic diagram of a board card provided by an embodiment of the present application;
Fig. 9 is a schematic flowchart of a calculation method provided by an embodiment of the present application.
Detailed description of embodiments
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first", "second", "third", "fourth", and the like in the description, claims, and drawings of the present application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The structure of fixed-point data is described below with reference to Fig. 1, which is a schematic diagram of the data structure of fixed-point data provided by an embodiment of the present application. As shown in Fig. 1, a signed fixed-point datum occupies X bits and may also be called an X-bit fixed-point datum. The X-bit fixed-point datum consists of a sign bit occupying 1 bit, an integer part of M bits, and a fractional part of N bits, where X - 1 = M + N. An unsigned fixed-point datum consists only of an M-bit integer part and an N-bit fractional part, i.e., X = M + N.
Compared with a 32-bit floating-point representation, the short-bit fixed-point representation used in the present invention not only occupies fewer bits, but also provides, for data of the same layer and same type in a network model (for example, all convolution kernels, input neurons, or bias data of the first convolutional layer), a flag bit that records the decimal point position of the fixed-point data, called the Point Location. The value of this flag bit can thus be adjusted according to the distribution of the input data, so as to adjust both the precision and the representable range of the fixed-point data.
For example, the floating-point number 68.6875 converts to a signed 16-bit fixed-point datum with decimal point position 5. For a signed 16-bit fixed-point datum with decimal point position 5, the integer part occupies 10 bits, the fractional part occupies 5 bits, and the sign bit occupies 1 bit. The conversion unit converts the floating-point number 68.6875 into the signed 16-bit fixed-point datum 0000100010010110, as shown in Fig. 2.
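The conversion in this example can be checked with a short sketch. The rounding and saturation behavior below are assumptions (the patent does not specify them), but the bit pattern for 68.6875 at decimal point position 5 comes out as stated above.

```python
# Sketch of float <-> fixed-point conversion with a given decimal point
# position; round-to-nearest and saturation are assumed conventions.
def float_to_fixed(x: float, point_location: int, bits: int = 16) -> int:
    q = round(x * (1 << point_location))        # scale by 2**point_location
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = max(lo, min(hi, q))                     # saturate to the signed range
    return q & ((1 << bits) - 1)                # two's-complement bit pattern

def fixed_to_float(q: int, point_location: int, bits: int = 16) -> float:
    if q >= 1 << (bits - 1):                    # restore the sign
        q -= 1 << bits
    return q / (1 << point_location)

assert format(float_to_fixed(68.6875, 5), "016b") == "0000100010010110"
assert fixed_to_float(0b0000100010010110, 5) == 68.6875
```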
The computing device used in the present application is introduced first. Referring to Fig. 3, a computing device is provided, comprising a controller unit 11, an arithmetic unit 12, and a conversion unit 13, wherein the controller unit 11 is connected to the arithmetic unit 12, and the conversion unit 13 is connected to both the controller unit 11 and the arithmetic unit 12;
In a possible embodiment, the controller unit 11 is configured to obtain the first input data and a computation instruction.
In one embodiment, the first input data is machine-learning data. Further, the machine-learning data includes input neuron data and weight data. Output neuron data is the final output result or intermediate data.
In an optional scheme, the first input data and the computation instruction may specifically be obtained through a data input/output unit, which may be one or more data I/O interfaces or I/O pins.
The computation instruction includes, but is not limited to, a forward operation instruction, a reverse training instruction, or other neural network operation instructions such as a convolution operation instruction; the specific embodiments of the present application do not limit the specific form of the computation instruction.
The controller unit 11 is further configured to parse the computation instruction to obtain a data conversion instruction and/or one or more operation instructions, wherein the data conversion instruction includes an operation field and an operation code; the operation code indicates the function of the data type conversion instruction, and the operation field of the data type conversion instruction includes the decimal point position, a flag bit indicating the data type of the first input data, and a conversion mode identifier of the data type.
When the operation field of the data conversion instruction is the address of a memory space, the controller unit 11 obtains the decimal point position, the flag bit indicating the data type of the first input data, and the conversion mode identifier of the data type from the memory space corresponding to that address.
The controller unit 11 transmits the operation code and operation field of the data conversion instruction, together with the first input data, to the conversion unit 13, and transmits the plurality of operation instructions to the arithmetic unit 12;
The conversion unit 13 is configured to convert the first input data into second input data according to the operation code and operation field of the data conversion instruction, the second input data being fixed-point data, and to transmit the second input data to the arithmetic unit 12;
The arithmetic unit 12 is configured to operate on the second input data according to the plurality of operation instructions to obtain the result of the computation instruction.
In a possible embodiment, the technical solution provided by the present application arranges the arithmetic unit 12 in a master-slave structure. For the computation instruction of a forward operation, the data can be split according to that computation instruction, so that the plurality of slave processing circuits 102 perform parallel operations on the computation-intensive parts, thereby increasing the operation speed, saving operation time, and in turn reducing power consumption. As shown in Fig. 3A, the arithmetic unit 12 includes a master processing circuit 101 and a plurality of slave processing circuits 102;
The master processing circuit 101 is configured to perform preamble processing on the second input data and to transfer data and the plurality of operation instructions between itself and the plurality of slave processing circuits 102;
The plurality of slave processing circuits 102 are configured to perform intermediate operations according to the second input data and the plurality of operation instructions transmitted from the master processing circuit 101 to obtain a plurality of intermediate results, and to transfer the plurality of intermediate results to the master processing circuit 101;
The master processing circuit 101 is configured to perform subsequent processing on the plurality of intermediate results to obtain the calculated result of the computation instruction.
In one embodiment, the machine-learning operation includes a deep-learning operation (i.e., an artificial neural network operation), and the machine-learning data (i.e., the first input data) includes input neurons and weights (i.e., neural network model data). Output neurons are the calculated result or intermediate results of the computation instruction. The following takes the deep-learning operation as an example, but it should be understood that the application is not limited to deep-learning operations.
Optionally, the computing device may further include a storage unit 10 and a direct memory access (DMA) unit 50. The storage unit 10 may include one of, or any combination of, a register and a cache. Specifically, the cache is configured to store the computation instruction, and the register 201 is configured to store the first input data and scalars, the first input data including input neurons, weights, and output neurons.
The cache 202 is a scratchpad cache.
The DMA unit 50 is configured to read data from, or store data to, the storage unit 10.
In a possible embodiment, the register 201 stores the operation instruction, the first input data, the decimal point position, the flag bit indicating the data type of the first input data, and the conversion mode identifier of the data type. The controller unit 11 obtains these directly from the register 201, transmits the first input data, the decimal point position, the flag bit indicating the data type of the first input data, and the conversion mode identifier of the data type to the conversion unit 13, and transmits the operation instruction to the arithmetic unit 12;
The conversion unit 13 converts the first input data into the second input data according to the decimal point position, the flag bit indicating the data type of the first input data, and the conversion mode identifier of the data type, and then transmits the second input data to the arithmetic unit 12;
The arithmetic unit 12 operates on the second input data according to the operation instruction to obtain the operation result.
Optionally, the controller unit 11 includes an instruction cache unit 110, an instruction processing unit 111, and a storage queue unit 113;
The instruction cache unit 110 is configured to store computation instructions associated with the artificial neural network operation;
The instruction processing unit 111 is configured to parse the computation instruction to obtain the data conversion instruction and the plurality of operation instructions, and to parse the data conversion instruction to obtain its operation code and operation field;
The storage queue unit 113 is configured to store an instruction queue comprising a plurality of operation instructions or computation instructions to be executed in the order of the queue.
For example, in an optional technical solution, the master processing circuit 101 may also include a control unit, which may include a master instruction processing unit specifically configured to decode instructions into microinstructions. Of course, in another optional scheme, the slave processing circuit 102 may also include another control unit, which includes a slave instruction processing unit specifically configured to receive and process microinstructions. A microinstruction may be the next-level instruction of an instruction; it can be obtained by splitting or decoding an instruction, and can be further decoded into control signals for the various components, units, or processing circuits.
In an optional scheme, the structure of the computation instruction may be as shown in Table 1 below.

Table 1

Operation code | Register or immediate | Register/immediate | ……

The ellipsis in the table above indicates that the instruction may include multiple registers or immediates.
In an optional scheme, the computation instruction may include one or more operation fields and one operation code. The computation instruction may include a neural network operation instruction. Taking a neural network operation instruction as an example, as shown in Table 2, register number 0, register number 1, register number 2, register number 3, and register number 4 may each be an operation field, and each of these register numbers may be the number of one or more registers.
Table 2

Operation code | Register number 0 | Register number 1 | Register number 2 | Register number 3 | Register number 4
The registers may be off-chip memories or, in practical applications, on-chip memories for storing data. The data may specifically be n-dimensional data, where n is an integer greater than or equal to 1; for example, when n = 1 the data is 1-dimensional, i.e., a vector; when n = 2 it is 2-dimensional, i.e., a matrix; and when n is 3 or more it is a multidimensional tensor.
Optionally, the controller unit may further include:
a dependency processing unit 112, configured to, when there are multiple operation instructions, determine whether a first operation instruction is associated with a zeroth operation instruction preceding the first operation instruction; if the first operation instruction is associated with the zeroth operation instruction, the first operation instruction is buffered in the instruction cache unit 110, and after the zeroth operation instruction has finished executing, the first operation instruction is fetched from the instruction cache unit 110 and transmitted to the arithmetic unit.
Determining whether the first operation instruction is associated with the zeroth operation instruction preceding the first operation instruction comprises: extracting a first storage address interval of the data (e.g., a matrix) required by the first operation instruction according to the first operation instruction, and extracting a zeroth storage address interval of the matrix required by the zeroth operation instruction according to the zeroth operation instruction; if the first storage address interval and the zeroth storage address interval have an overlapping region, it is determined that the first operation instruction and the zeroth operation instruction are associated; if they have no overlapping region, it is determined that the first operation instruction and the zeroth operation instruction are not associated.
In another alternative embodiment, as shown in Fig. 3B, the arithmetic unit 12 includes a master processing circuit 101, a plurality of slave processing circuits 102, and a plurality of branch processing circuits 103.
The master processing circuit 101 is specifically configured to determine that the input neurons are broadcast data and the weights are distribution data, to partition one piece of distribution data into a plurality of data blocks, and to send at least one of the plurality of data blocks, the broadcast data, and at least one of the plurality of operation instructions to the branch processing circuits 103;
The branch processing circuits 103 are configured to forward the data blocks, broadcast data, and operation instructions between the master processing circuit 101 and the plurality of slave processing circuits 102;
The plurality of slave processing circuits 102 are configured to operate on the received data blocks and broadcast data according to the operation instruction to obtain intermediate results, and to transfer the intermediate results to the branch processing circuits 103;
The master processing circuit 101 is further configured to perform subsequent processing on the intermediate results sent by the branch processing circuits 103 to obtain the result of the operation instruction, and to send the result of the computation instruction to the controller unit 11.
In another alternative embodiment, the arithmetic unit 12 may include a master processing circuit 101 and a plurality of slave processing circuits 102 as shown in Fig. 3C. As shown in Fig. 3C, the plurality of slave processing circuits 102 are distributed in an array; each slave processing circuit 102 is connected to the adjacent slave processing circuits 102, and the master processing circuit 101 is connected to K of the plurality of slave processing circuits 102, the K slave processing circuits 102 being: the n slave processing circuits 102 of the first row, the n slave processing circuits 102 of the m-th row, and the m slave processing circuits 102 of the first column. Note that, as shown in Fig. 3C, the K slave processing circuits 102 include only the n slave processing circuits 102 of the first row, the n slave processing circuits 102 of the m-th row, and the m slave processing circuits 102 of the first column; that is, the K slave processing circuits 102 are the slave processing circuits 102 directly connected to the master processing circuit 101.
The K slave processing circuits 102 are configured to forward data and instructions between the master processing circuit 101 and the plurality of slave processing circuits 102;
The master processing circuit 101 is further configured to determine that the input neurons are broadcast data and the weights are distribution data, to partition one piece of distribution data into a plurality of data blocks, and to send at least one of the plurality of data blocks and at least one of the plurality of operation instructions to the K slave processing circuits 102;
The K slave processing circuits 102 are configured to convert the data passing between the master processing circuit 101 and the plurality of slave processing circuits 102;
The plurality of slave processing circuits 102 are configured to operate on the received data blocks according to the operation instruction to obtain intermediate results, and to transfer the operation results to the K slave processing circuits 102;
The master processing circuit 101 is configured to process the intermediate results sent by the K slave processing circuits 102 to obtain the result of the computation instruction, and to send the result of the computation instruction to the controller unit 11.
Optionally, as shown in Fig. 3D, the master processing circuit 101 in Figs. 3A to 3C may further include one of, or any combination of, an activation processing circuit 1011 and an addition processing circuit 1012;
The activation processing circuit 1011 is configured to perform the activation operation on data in the master processing circuit 101;
The addition processing circuit 1012 is configured to perform addition or accumulation operations.
Each slave processing circuit 102 includes: a multiplication processing circuit, configured to perform product operations on the received data blocks to obtain product results; a forwarding processing circuit (optional), configured to forward the received data blocks or product results; and an accumulation processing circuit, configured to perform accumulation operations on the product results to obtain the intermediate result.
In a feasible embodiment, the controller unit 11 obtains one or more operation instructions, each of which may be a variable-format operation instruction or a fixed-point-format operation instruction.
The controller unit may obtain a computation instruction from the storage unit 10 and parse the computation instruction to obtain the one or more operation instructions.
A variable-format operation instruction includes an operation code and an operation field; the operation code indicates the function of the variable-format operation instruction, and the operation field includes the start address of the first input data, the length of the first input data (optional), the start address of the output data, the decimal point position, the data type flag bit of the first input data (optional), and the operation type identifier.
When the operation instruction is a variable-format operation instruction, the controller unit 11 parses the variable-format operation instruction to obtain the start address of the first input data, the length of the first input data, the start address of the output data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier. It then obtains the first input data from the storage unit 10 according to the start address and length of the first input data, transmits the first input data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier to the conversion unit 13, and transmits the start address of the output data to the arithmetic unit 12;
The conversion unit 13 converts the first input data into the second input data according to the data type flag bit, the decimal point position, and the operation type indicated by the operation type identifier, and then transmits the second input data to the arithmetic unit 12.
The master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 operate on the second input data to obtain the result of the computation instruction, and store the result at the position in the storage unit 10 corresponding to the start address of the output data.
The operation type identifier indicates the type of the data participating in the operation when the arithmetic unit 12 performs the operation. The types include fixed-point data, floating-point data, integer data, discrete data, and the like.
In a possible embodiment, the storage unit 10 stores the start address of the first input data, the length of the first input data, the start address of the output data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier; the controller unit 11 obtains these directly from the storage unit 10 and then proceeds as described above.
For example, the operation type identifier may be 0 or 1. When the identifier is 1, the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 perform floating-point operations, i.e., the data type participating in the operation is floating-point data; when the operation type identifier is 0, the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 perform fixed-point operations, i.e., the data type participating in the operation is fixed-point data.
The arithmetic unit 12 can determine the type of the input data and the type of the operation to perform according to the data type flag bit and the operation type identifier.
Specifically, see Table 3, which is the mapping table between the data type flag bit and the operation type identifier.
Table 3

Operation type identifier | Data type flag bit | First input data | Conversion by the conversion unit 13
0 | 0 | fixed-point | none (fixed-point operation)
0 | 1 | floating-point | none (floating-point operation)
1 | 0 | fixed-point | converted to floating-point second input data
1 | 1 | floating-point | converted to fixed-point second input data
As shown in Table 3, when the operation type identifier is 0 and the data type flag bit is 0, the first input data is fixed-point data, and the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 perform fixed-point operations without data conversion; when the operation type identifier is 0 and the data type flag bit is 1, the first input data is floating-point data, and the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 perform floating-point operations without data conversion; when the operation type identifier is 1 and the data type flag bit is 0, the first input data is fixed-point data, the conversion unit 13 first converts the first input data into the second input data according to the decimal point position, the second input data is floating-point data, and the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 operate on the second input data; when the operation type identifier is 1 and the data type flag bit is 1, the first input data is floating-point data, the conversion unit 13 first converts the first input data into the second input data according to the decimal point position, the second input data is fixed-point data, and the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 operate on the second input data.
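Putting Table 3 into code form, a dispatch sketch might look as follows; it reuses the float_to_fixed and fixed_to_float helpers from the earlier sketch and is illustrative only, not the patent's implementation.

```python
def convert_first_input(x, dtype_flag: int, op_type_id: int, point_location: int):
    # Table 3: identifier 0 means operate on x as-is (fixed-point if
    # dtype_flag == 0, floating-point if dtype_flag == 1);
    # identifier 1 means the conversion unit converts first.
    if op_type_id == 0:
        return x
    if dtype_flag == 0:                               # fixed-point -> floating-point
        return fixed_to_float(x, point_location)      # helper from the earlier sketch
    return float_to_fixed(x, point_location)          # floating-point -> fixed-point
```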
The fixed-point data includes 64-bit fixed-point data, 32-bit fixed-point data, and 16-bit fixed-point data; the floating-point data includes 64-bit floating-point data, 32-bit floating-point data, and 16-bit floating-point data. The mapping between the flag bit and the operation type identifier is detailed in Table 4 below.
Table 4

Operation type identifier | Flag bit 0 (first input data / conversion) | Flag bit 1 (first input data / conversion)
0000 | 64-bit fixed-point / none | 64-bit floating-point / none
0001 | 32-bit fixed-point / none | 32-bit floating-point / none
0010 | 16-bit fixed-point / none | 16-bit floating-point / none
0011 | 64-bit fixed-point / to 64-bit floating-point | 64-bit floating-point / to 64-bit fixed-point
0100 | 32-bit fixed-point / to 64-bit floating-point | 32-bit floating-point / to 64-bit fixed-point
0101 | 16-bit fixed-point / to 64-bit floating-point | 16-bit floating-point / to 64-bit fixed-point
0110 | 32-bit fixed-point / to 32-bit floating-point | 32-bit floating-point / to 32-bit fixed-point
0111 | 16-bit fixed-point / to 32-bit floating-point | 16-bit floating-point / to 32-bit fixed-point
1000 | 16-bit fixed-point / to 16-bit floating-point | 16-bit floating-point / to 16-bit fixed-point
1001 | 64-bit fixed-point / to 32-bit floating-point | 64-bit floating-point / to 32-bit fixed-point
As shown in Table 4, when the operation type identifier is 0000 and the data type flag bit is 0, the first input data is 64-bit fixed-point data, and the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 perform 64-bit fixed-point operations without data type conversion; when the operation type identifier is 0000 and the data type flag bit is 1, the first input data is 64-bit floating-point data, and they perform 64-bit floating-point operations without data type conversion. Likewise, for operation type identifier 0001 the operations are 32-bit fixed-point (flag bit 0) or 32-bit floating-point (flag bit 1) without data type conversion, and for operation type identifier 0010 they are 16-bit fixed-point (flag bit 0) or 16-bit floating-point (flag bit 1) without data type conversion.
When the operation type identifier is 0011 and the data type flag bit is 0, the first input data is 64-bit fixed-point data; the conversion unit 13 first converts the first input data into the second input data according to the decimal point position, the second input data being 64-bit floating-point data, and the master processing circuit 101 and slave processing circuits 102 of the arithmetic unit 12 then perform 64-bit floating-point operations on the second input data. When the operation type identifier is 0011 and the data type flag bit is 1, the first input data is 64-bit floating-point data; the conversion unit 13 first converts the first input data into the second input data according to the decimal point position, the second input data being 64-bit fixed-point data, and the master processing circuit 101 and slave processing circuits 102 then perform 64-bit fixed-point operations on the second input data.
The remaining operation type identifiers work in the same way, differing only in the source and target widths: for 0100, 32-bit fixed-point data is converted to 64-bit floating-point data (flag bit 0), or 32-bit floating-point data to 64-bit fixed-point data (flag bit 1), followed by 64-bit operations on the second input data; for 0101, 16-bit fixed-point data is converted to 64-bit floating-point data (flag bit 0), or 16-bit floating-point data to 64-bit fixed-point data (flag bit 1), followed by 64-bit operations; for 0110, 32-bit fixed-point data is converted to 32-bit floating-point data (flag bit 0), or 32-bit floating-point data to 32-bit fixed-point data (flag bit 1), followed by 32-bit operations; for 0111, 16-bit fixed-point data is converted to 32-bit floating-point data (flag bit 0), or 16-bit floating-point data to 32-bit fixed-point data (flag bit 1), followed by 32-bit operations; for 1000, 16-bit fixed-point data is converted to 16-bit floating-point data (flag bit 0), or 16-bit floating-point data to 16-bit fixed-point data (flag bit 1), followed by 16-bit operations; and for 1001, 64-bit fixed-point data is converted to 32-bit floating-point data (flag bit 0), or 64-bit floating-point data to 32-bit fixed-point data (flag bit 1), followed by 32-bit operations.
When aforesaid operations type identification is 1010 and above-mentioned data type flag bit is 0, above-mentioned first input data is 64 be fixed-point data, and it is defeated that above-mentioned first input data is first converted to second according to above-mentioned scaling position by above-mentioned converting unit 13 Enter data, it is floating data that the second input data, which is 16, then the main process task circuit 101 of above-mentioned arithmetic element 12 and from processing electricity Road 102 carries out 16 floating-point operations to second input data;When aforesaid operations type identification is 1010 and above-mentioned data type When flag bit is 1, it is floating data that above-mentioned first input data, which is 64, and above-mentioned converting unit 13 is first according to above-mentioned scaling position Above-mentioned first input data is converted into the second input data, it is fixed-point data that the second input data, which is 16, then above-mentioned operation The main process task circuit 101 of unit 12 and from processing circuit 102 to second input data carry out 16 fixed-point calculations.
When aforesaid operations type identification is 1011 and above-mentioned data type flag bit is 0, above-mentioned first input data is 32 be fixed-point data, and it is defeated that above-mentioned first input data is first converted to second according to above-mentioned scaling position by above-mentioned converting unit 13 Enter data, it is floating data that the second input data, which is 16, then the main process task circuit 101 of above-mentioned arithmetic element 12 and from processing electricity Road 102 carries out 16 floating-point operations to second input data;When aforesaid operations type identification is 1011 and above-mentioned data type When flag bit is 1, it is floating data that above-mentioned first input data, which is 32, and above-mentioned converting unit 13 is first according to above-mentioned scaling position Above-mentioned first input data is converted into the second input data, it is fixed-point data that the second input data, which is 16, then above-mentioned operation The main process task circuit 101 of unit 12 and from processing circuit 102 to second input data carry out 16 fixed-point calculations.
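Because the dispatch is mechanical, it can be summarized compactly in software. The following is a minimal sketch of the decode step, not the patented circuit; the dictionary encoding and the function name are illustrative assumptions:

    # Illustrative encoding of Table 4 (not the patent's binary layout):
    # (operation type identifier, data type flag) ->
    #   (first-input format, conversion target or None), format = (bits, kind).
    TABLE_4 = {
        ("0000", 0): ((64, "fixed"), None),
        ("0000", 1): ((64, "float"), None),
        ("0001", 0): ((32, "fixed"), None),
        ("0001", 1): ((32, "float"), None),
        ("0010", 0): ((16, "fixed"), None),
        ("0010", 1): ((16, "float"), None),
        ("0011", 0): ((64, "fixed"), (64, "float")),
        ("0011", 1): ((64, "float"), (64, "fixed")),
        ("0100", 0): ((32, "fixed"), (64, "float")),
        ("0100", 1): ((32, "float"), (64, "fixed")),
        # ... the remaining rows follow Table 4 in the same pattern.
    }

    def decode(op_type, flag):
        # Returns the format the circuits operate in and whether the
        # converting unit must first produce a second input data.
        src, dst = TABLE_4[(op_type, flag)]
        return {"input": src, "operate_in": dst or src, "convert": dst is not None}

    decode("0100", 1)  # {'input': (32, 'float'), 'operate_in': (64, 'fixed'), 'convert': True}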
In another embodiment, the computation instruction is a matrix computation instruction, such as a matrix-multiply-matrix instruction, an accumulate instruction, an activation instruction, and the like.
In an optional embodiment, as shown in Fig. 3E, the arithmetic unit includes a tree module 40. The tree module includes a root port 401 and multiple branch ports 404; the root port of the tree module connects the main processing circuit 101, and the multiple branch ports of the tree module connect the multiple slave processing circuits 102, one branch port per slave processing circuit 102.

The tree module has both sending and receiving functions: Fig. 3E shows the tree module in its sending role, and Fig. 6A shows it in its receiving role.

The tree module is configured to forward data blocks, weights, and operation instructions between the main processing circuit 101 and the multiple slave processing circuits 102.
Optionally, the tree module is an optional component of the computing device. It may include at least one layer of nodes, each node being a line structure with a forwarding function; the node itself may have no computing function. If the tree module has zero layers of nodes, the tree module is omitted.

Optionally, the tree module may have an n-ary tree structure, for example the binary tree structure shown in Fig. 3F; it may also be a ternary tree, where n may be any integer greater than or equal to 2. The specific embodiments of this application do not limit the specific value of n. The number of layers may also be 2, and the slave processing circuits 102 may connect to nodes of layers other than the second-to-last layer, for example to nodes of the last layer shown in Fig. 3F.
Optionally, the arithmetic unit may carry a separate cache. As shown in Fig. 3G, it may include a neuron cache unit 63, which caches the input neuron vector data and output neuron value data of the slave processing circuits 102.

As shown in Fig. 3H, the arithmetic unit may also include a weight cache unit 64 for caching the weight data that the slave processing circuits 102 need during computation.
In an alternative embodiment, taking the fully connected operation in a neural network computation as an example, the process may be y = f(wx + b), where x is the input neuron matrix, w is the weight matrix, b is the bias scalar, and f is the activation function, specifically any one of the sigmoid, tanh, relu, and softmax functions. Assuming a binary tree structure with 8 slave processing circuits 102, the method may be implemented as follows (a software sketch of this flow appears after these steps):

The controller unit 11 obtains the input neuron matrix x, the weight matrix w, and the fully connected operation instruction from the storage unit 10, and transmits the input neuron matrix x, the weight matrix w, and the fully connected operation instruction to the main processing circuit 101;

The main processing circuit 101 splits the input neuron matrix x into 8 sub-matrices, distributes the 8 sub-matrices to the 8 slave processing circuits 102 through the tree module, and broadcasts the weight matrix w to the 8 slave processing circuits 102;

The slave processing circuits 102 execute the multiplication and accumulation of the 8 sub-matrices with the weight matrix w in parallel to obtain 8 intermediate results, and send the 8 intermediate results to the main processing circuit 101;

The main processing circuit 101 sorts the 8 intermediate results to obtain the operation result of wx, performs the bias-b operation on that result, then performs the activation operation to obtain the final result y, and sends the final result y to the controller unit 11; the controller unit 11 outputs the final result y or stores it into the storage unit 10.
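The data flow just described can be mimicked in software. Below is a minimal NumPy sketch assuming row-wise splitting of x and the x-times-w matrix orientation for convenience; it illustrates the split/broadcast/accumulate pattern, not the patented hardware:

    import numpy as np

    def fully_connected(x, w, b, f=np.tanh, n_slaves=8):
        # Main processing circuit: split the input neuron matrix x into 8 sub-matrices.
        sub_matrices = np.array_split(x, n_slaves, axis=0)
        # Each slave processing circuit multiplies its sub-matrix by the
        # broadcast weight matrix w (multiply + accumulate), in parallel in hardware.
        intermediates = [sub @ w for sub in sub_matrices]
        # Main processing circuit: reassemble ("sort") the intermediate results
        # into wx, add the bias b, then apply the activation function f.
        wx = np.vstack(intermediates)
        return f(wx + b)

    y = fully_connected(np.random.randn(16, 32), np.random.randn(32, 8), 0.1)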
In one embodiment, the arithmetic unit 12 includes, but is not limited to: one or more multipliers of a first part; one or more adders of a second part (more specifically, the adders of the second part may form an adder tree); an activation function unit of a third part; and/or a vector processing unit of a fourth part. More specifically, the vector processing unit can handle vector operations and/or pooling operations. The first part multiplies input data 1 (in1) by input data 2 (in2) to obtain an output (out), the process being out = in1 * in2. The second part adds the input data in1 through adders to obtain output data (out); more specifically, when the second part is an adder tree, it adds in1 stage by stage through the adder tree to obtain the output data (out), where in1 is a vector of length N with N greater than 1, the process being out = in1[1] + in1[2] + ... + in1[N]; and/or it accumulates the input data (in1) through the adder tree and then adds the input data (in2), the process being out = in1[1] + in1[2] + ... + in1[N] + in2; or it adds the input data (in1) to the input data (in2) to obtain the output data (out), the process being out = in1 + in2. The third part passes the input data (in) through an activation function (active) to obtain the activation output data (out), the process being out = active(in); the activation function active may be sigmoid, tanh, relu, softmax, and so on. Besides the activation operation, the third part may implement other nonlinear functions, obtaining the output data (out) from the input data (in) through an operation (f), the process being out = f(in). The vector processing unit obtains the output data (out) after a pooling operation from the input data (in), the process being out = pool(in), where pool is the pooling operation; the pooling operation includes but is not limited to average pooling, max pooling, and median pooling, and the input data in is the data in a pooling kernel related to the output out.

The operations the arithmetic unit executes thus include a first part that multiplies the input data 1 by the input data 2 to obtain the product; and/or a second part that performs addition (more specifically, an adder-tree operation that adds the input data 1 stage by stage through the adder tree), or adds the input data 1 to the input data 2 to obtain output data; and/or a third part that performs the activation function operation, obtaining output data from the input data through the activation function (active); and/or a fourth part that performs the pooling operation, out = pool(in), where pool is the pooling operation, the pooling operation includes but is not limited to average pooling, max pooling, and median pooling, and the input data in is the data in a pooling kernel related to the output out. The operations of these parts can be freely combined, one or more parts at a time and in different orders, to realize operations with various functions; the computation units accordingly form a two-stage, three-stage, or four-stage pipeline architecture.
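For concreteness, here is a minimal software sketch of the four functional parts, assuming NumPy arrays, a 1-D pooling window, and illustrative function names (none of this is the patent's circuit definition):

    import numpy as np

    def part1_multiply(in1, in2):            # first part: multipliers
        return in1 * in2

    def part2_adder_tree(in1, in2=None):     # second part: adder tree
        out = np.sum(in1)                    # stage-by-stage sum of in1[1..N]
        return out if in2 is None else out + in2

    def part3_activate(x, active=np.tanh):   # third part: activation function unit
        return active(x)

    def part4_pool(x, width=2, mode="max"):  # fourth part: pooling over 1-D windows
        windows = x.reshape(-1, width)
        ops = {"max": np.max, "avg": np.mean, "median": np.median}
        return ops[mode](windows, axis=1)

    # Parts chained in order form a multi-stage pipeline:
    prod = part1_multiply(np.arange(4.0), np.full(4, 0.5))    # stage 1
    total = part2_adder_tree(prod)                            # stage 2
    act = part3_activate(total)                               # stage 3
    pooled = part4_pool(np.arange(8.0), width=2, mode="avg")  # stage 4 (on a vector)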
It should be noted that the first input data is long-bit-width non-fixed-point data, for example 32-bit floating-point data; it may also be standard 64-bit or 16-bit floating-point numbers, and so on. Only the 32-bit case is used here for the specific illustration. The second input data is short-bit-width fixed-point data, also called lower-bit-width fixed-point data, i.e. fixed-point data represented with fewer bits than the first input data represented by the long-bit-width non-fixed-point data.

In a feasible embodiment, the first input data is non-fixed-point data and the second input data is fixed-point data, and the number of bits occupied by the first input data is greater than or equal to the number of bits occupied by the second input data. For example, the first input data is a 32-bit floating-point number and the second input data is 32-bit fixed-point data; or the first input data is a 32-bit floating-point number and the second input data is 16-bit fixed-point data.
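A minimal sketch of such a conversion, under the assumption (used throughout these examples) that a decimal point position s means values are stored as signed integers scaled by 2^(-s); the saturation policy is an illustrative choice, not specified by the text:

    import numpy as np

    def float_to_fixed(x, point_location, width=16):
        # Scale by 2^point_location, round, and saturate to the signed range.
        scaled = np.rint(np.asarray(x) * (1 << point_location))
        lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
        return np.clip(scaled, lo, hi).astype(np.int64)

    def fixed_to_float(q, point_location):
        return np.asarray(q, dtype=np.float64) / (1 << point_location)

    q = float_to_fixed([0.75, -1.5], point_location=8)   # -> [192, -384]
    x = fixed_to_float(q, point_location=8)              # -> [0.75, -1.5]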
Specifically, for the different layers of different network models, the first input data includes different types of data. The decimal point positions of these different types of data are not identical, i.e. the precision of the corresponding fixed-point data differs. For a fully connected layer, the first input data includes data such as input neurons, weights, and bias data; for a convolutional layer, the first input data includes data such as convolution kernels, input neurons, and bias data.

For a fully connected layer, for example, the decimal point positions include the decimal point position of the input neurons, the decimal point position of the weights, and the decimal point position of the bias data. The decimal point positions of the input neurons, the weights, and the bias data may be all identical, partially identical, or all different.
In a possible embodiment, after the controller unit 11 or the arithmetic unit 12 obtains the decimal point position of the first input data as described above, it stores the decimal point position of the first input data into the cache 202 of the storage unit 10.

When the computation instruction uses immediate addressing, the main processing circuit 101 converts the first input data into the second input data directly according to the decimal point position indicated by the operation field of the computation instruction. When the computation instruction uses direct addressing or indirect addressing, the main processing circuit 101 obtains the decimal point position of the first input data from the storage space indicated by the operation field of the computation instruction, and then converts the first input data into the second input data according to that decimal point position.
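The addressing distinction fits in a few lines. In this sketch the instruction encoding and cache interface are invented purely for illustration:

    def resolve_point_location(instr, cache):
        # Immediate addressing: the operation field carries the value itself.
        if instr["mode"] == "immediate":
            return instr["operand"]
        # Direct/indirect addressing: the operation field names a storage
        # location (with one extra indirection in the indirect case).
        addr = instr["operand"]
        if instr["mode"] == "indirect":
            addr = cache[addr]
        return cache[addr]

    cache = {0x10: 0x20, 0x20: 7}
    resolve_point_location({"mode": "immediate", "operand": 7}, cache)    # -> 7
    resolve_point_location({"mode": "direct", "operand": 0x20}, cache)    # -> 7
    resolve_point_location({"mode": "indirect", "operand": 0x10}, cache)  # -> 7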
The computing device further includes a rounding unit. During computation, the precision of an operation result obtained by adding, multiplying, and/or otherwise operating on the second input data (the operation result includes intermediate operation results and the results of computation instructions) can exceed the precision range of the current fixed-point data, so an operation cache unit caches the intermediate operation results. After the operation, the rounding unit performs a rounding operation on any operation result that exceeds the precision range of the fixed-point data to obtain a rounded operation result, and the data conversion unit then converts the rounded operation result into data of the current fixed-point data type.

Specifically, the rounding unit performs the rounding operation on the intermediate operation results, the rounding operation being any one of a random rounding operation, a round-to-nearest operation, a round-up operation, a round-down operation, and a truncation rounding operation.
When the rounding unit executes the random rounding operation, it specifically performs:

y = ⌊x⌋        with probability 1 − (x − ⌊x⌋)/ε,
y = ⌊x⌋ + ε    with probability (x − ⌊x⌋)/ε,

where y denotes the data obtained by randomly rounding the pre-rounding operation result x, i.e. the rounded operation result; ε is the smallest positive number representable in the current fixed-point data format, i.e. 2^(−point_location); and ⌊x⌋ denotes the value obtained by directly truncating the fixed-point data from the pre-rounding operation result x (analogous to rounding a decimal down to the nearest multiple of ε). The formula expresses that random rounding of the pre-rounding operation result x yields ⌊x⌋ with probability 1 − (x − ⌊x⌋)/ε, and yields ⌊x⌋ + ε with probability (x − ⌊x⌋)/ε.
When the rounding unit executes the round-to-nearest operation, it specifically performs:

y = ⌊x⌋        if ⌊x⌋ ≤ x < ⌊x⌋ + ε/2,
y = ⌊x⌋ + ε    if ⌊x⌋ + ε/2 ≤ x ≤ ⌊x⌋ + ε,

where y denotes the data obtained by rounding the pre-rounding operation result x to the nearest representable value, i.e. the rounded operation result; ε is the smallest positive number representable in the current fixed-point data format, i.e. 2^(−point_location); and ⌊x⌋ is the integral multiple of ε whose value is the largest number less than or equal to x. The formula expresses that when the pre-rounding operation result x satisfies ⌊x⌋ ≤ x < ⌊x⌋ + ε/2, the rounded operation result is ⌊x⌋, and when x satisfies ⌊x⌋ + ε/2 ≤ x ≤ ⌊x⌋ + ε, the rounded operation result is ⌊x⌋ + ε.
When the rounding unit executes the round-up operation, it specifically performs:

y = ⌈x⌉,

where y denotes the data obtained by rounding up the pre-rounding operation result x, i.e. the rounded operation result; ⌈x⌉ is the integral multiple of ε whose value is the smallest number greater than or equal to x; and ε is the smallest positive number representable in the current fixed-point data format, i.e. 2^(−point_location).
When the rounding unit executes the round-down operation, it specifically performs:

y = ⌊x⌋,

where y denotes the data obtained by rounding down the pre-rounding operation result x, i.e. the rounded operation result; ⌊x⌋ is the integral multiple of ε whose value is the largest number less than or equal to x; and ε is the smallest positive number representable in the current fixed-point data format, i.e. 2^(−point_location).
When the rounding unit executes the truncation rounding operation, it specifically performs:

y = [x],

where y denotes the data obtained by truncation rounding of the pre-rounding operation result x, i.e. the rounded operation result, and [x] denotes the value obtained by directly truncating the fixed-point data from the operation result x.
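The five rounding modes map directly to code. A minimal sketch, assuming the multiple-of-ε interpretation above and Python's random module for the stochastic case:

    import math, random

    def round_fixed(x, point_location, mode="nearest"):
        eps = 2.0 ** (-point_location)       # smallest representable positive step
        floor_x = math.floor(x / eps) * eps  # largest multiple of eps <= x
        if mode == "down":
            return floor_x
        if mode == "up":
            return math.ceil(x / eps) * eps
        if mode == "truncate":
            return math.trunc(x / eps) * eps  # drop fractional bits, toward zero
        frac = (x - floor_x) / eps            # in [0, 1)
        if mode == "random":                  # stochastic rounding
            return floor_x + eps if random.random() < frac else floor_x
        if mode == "nearest":                 # round half up to the next multiple
            return floor_x + eps if frac >= 0.5 else floor_x
        raise ValueError(mode)

    round_fixed(0.30, point_location=3)       # eps = 0.125 -> 0.25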
After the rounding unit obtains the rounded intermediate operation result, the arithmetic unit 12 converts the rounded intermediate operation result into data of the current fixed-point data type according to the decimal point position of the first input data.

In a feasible embodiment, the arithmetic unit 12 does not truncate any intermediate result, among the one or more intermediate results, whose data type is floating-point data.

The slave processing circuits 102 of the arithmetic unit 12 obtain intermediate results by performing operations according to the above method. Because operations such as multiplication and division can produce intermediate results that exceed the storage range of the memory, intermediate results that exceed the memory storage range would normally be truncated. However, since the intermediate results produced during computation in this application do not need to be stored in memory, intermediate results that exceed the memory storage range need not be truncated, which greatly reduces the precision loss of the intermediate results and improves the precision of the computed result.
In a feasible embodiment, the arithmetic unit 12 further includes a derivation unit. When the arithmetic unit 12 receives the decimal point position of the input data participating in a fixed-point operation, the derivation unit derives, from the decimal point position of the input data participating in the fixed-point operation, the decimal point position of one or more intermediate results produced during the fixed-point operation. When an intermediate result obtained by the operation subunit exceeds the range indicated by its corresponding decimal point position, the derivation unit shifts the decimal point position of that intermediate result left by M bits, so that the precision of the intermediate result lies within the precision range indicated by its decimal point position, where M is an integer greater than 0.

For example, suppose the first input data includes input data I1 and input data I2 with corresponding decimal point positions P1 and P2 respectively, where P1 > P2. When the operation type indicated by the operation instruction is addition or subtraction, i.e. the operation subunit computes I1 + I2 or I1 − I2, the derivation unit derives P1 as the decimal point position of the intermediate results of the computation indicated by the operation instruction; when the operation type indicated by the operation instruction is multiplication, i.e. the operation subunit computes I1 * I2, the derivation unit derives P1 + P2 as the decimal point position of the intermediate results of the computation indicated by the operation instruction.
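A sketch of that derivation rule, with the overflow handling reduced to a helper; the fraction-bit interpretation of decimal point positions is the same assumption as in the earlier conversion sketch:

    def derive_point_location(op, p1, p2):
        # Addition/subtraction keeps the larger fraction-bit count;
        # multiplication adds them (Q(p1) * Q(p2) -> Q(p1 + p2)).
        if op in ("add", "sub"):
            return max(p1, p2)
        if op == "mul":
            return p1 + p2
        raise ValueError(op)

    def rescale_on_overflow(point_location, m):
        # Shifting the decimal point left by m bits widens the integer
        # range at the cost of m fraction bits.
        assert m > 0
        return point_location - m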
In a feasible embodiment, the arithmetic unit 12 further includes a data cache unit for caching the one or more intermediate results.
In an alternative embodiment, the computing device further includes a data statistics unit. The data statistics unit performs statistics over the input data of the same type in each layer of the multilayer network model, so as to obtain the decimal point position of each type of input data in each layer.

The data statistics unit may also be part of an external device; in that case the computing device obtains the decimal point positions of the data participating in the operation from the external device before performing the data conversion.
Specifically, the data statistics unit includes:

an obtaining subunit, configured to extract the input data of the same type in each layer of the multilayer network model;

a statistics subunit, configured to count and obtain the distribution proportion of the input data of the same type in each layer of the multilayer network model over a preset interval;

an analysis subunit, configured to obtain, according to the distribution proportion, the decimal point position of the input data of the same type in each layer of the multilayer network model.
The preset interval may be [−2^(X−1−i), 2^(X−1−i) − 2^(−i)] for i = 0, 1, 2, ..., n, where n is a preset positive integer and X is the number of bits occupied by the fixed-point data; the preset interval thus comprises n + 1 subintervals. The statistics subunit counts the distribution information of the input data of the same type in each layer of the multilayer network model over these n + 1 subintervals and obtains a first distribution proportion from that distribution information. Denoting the first distribution proportion as p0, p1, p2, ..., pn, these n + 1 values are the distribution proportions of the input data of the same type in each layer of the multilayer network model over the n + 1 subintervals. The analysis subunit presets an overflow rate EPL and takes, from 0, 1, 2, ..., n, the largest i such that pi ≥ 1 − EPL; this largest i is the decimal point position of the input data of the same type in each layer of the multilayer network model. In other words, the analysis subunit takes the decimal point position of the input data of the same type in each layer of the multilayer network model as max{i | pi ≥ 1 − EPL, i ∈ {0, 1, 2, ..., n}}, i.e. among the pi greater than or equal to 1 − EPL, it selects the largest subscript i as the decimal point position.

It should be noted that pi is the ratio of the number of input data of the same type in each layer of the multilayer network model whose value lies in the interval [−2^(X−1−i), 2^(X−1−i) − 2^(−i)] to the total number of input data of the same type in each layer. For example, if m2 of m1 input data of the same type in each layer of the multilayer network model take values in the interval [−2^(X−1−i), 2^(X−1−i) − 2^(−i)], then pi = m2/m1.
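A sketch of the selection rule, under the signed X-bit interpretation of the subintervals reconstructed above (the parameter defaults are illustrative):

    import numpy as np

    def choose_point_location(data, bit_width=16, n=15, epl=0.01):
        data = np.asarray(data, dtype=np.float64)
        best = 0
        for i in range(n + 1):
            lo = -2.0 ** (bit_width - 1 - i)
            hi = 2.0 ** (bit_width - 1 - i) - 2.0 ** (-i)
            p_i = np.mean((data >= lo) & (data <= hi))  # distribution proportion p_i
            if p_i >= 1.0 - epl:
                best = i                                # keep the largest feasible i
        return best

    # Largest i whose range still covers at least 99% of the values:
    choose_point_location(np.random.randn(10000) * 4.0)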
In a feasible embodiment, to improve operation efficiency, the obtaining subunit may randomly or by sampling extract part of the input data of the same type in each layer of the multilayer network model, obtain the decimal point position of this partial data according to the method described above, and then perform the data conversion on that type of input data according to the decimal point position of the partial data (including conversion of floating-point data to fixed-point data, conversion of fixed-point data to fixed-point data with a different decimal point position, and so on). This improves calculation speed and efficiency while largely preserving precision.

Optionally, the data statistics unit may determine the bit width and decimal point position of data of the same type, or of data of the same layer, according to the median of that data, or determine the bit width and decimal point position of data of the same type, or of data of the same layer, according to the average of that data.

Optionally, when an intermediate result obtained by the arithmetic unit operating on data of the same type or data of the same layer exceeds the value range corresponding to the decimal point position and bit width of that data, the arithmetic unit does not truncate the intermediate result; instead it caches the intermediate result in the data cache unit of the arithmetic unit for use in subsequent operations.
Specifically, the operation field includes the decimal point position of the input data and a conversion-mode flag for the data type. The instruction processing unit parses the data conversion instruction to obtain the decimal point position of the input data and the conversion-mode flag of the data type. The processing unit further includes a data conversion unit, which converts the first input data into the second input data according to the decimal point position of the input data and the conversion-mode flag of the data type.
It should be noted that the network model includes multiple layers, for example fully connected layers, convolutional layers, pooling layers, and input layers. Among the at least one input data, input data belonging to the same layer have the same decimal point position, i.e. input data of the same layer share the same decimal point position.

The input data includes different types of data, for example input neurons, weights, and bias data. Input data of the same type among the input data have the same decimal point position, i.e. input data of the same type share the same decimal point position.
For example, if the operation type indicated by an operation instruction is a fixed-point operation while the input data participating in the operation indicated by that operation instruction is floating-point data, the data conversion unit converts the input data from floating-point data to fixed-point data before the fixed-point operation is performed. Conversely, if the operation type indicated by an operation instruction is a floating-point operation while the input data participating in the operation indicated by that operation instruction is fixed-point data, the data conversion unit converts the input data corresponding to that operation instruction from fixed-point data to floating-point data before the floating-point operation is performed.
For the macro-instructions involved in this application (such as computation instructions and data conversion instructions), the controller unit 11 can parse a macro-instruction to obtain its operation field and operation code, and generate the micro-instructions corresponding to that macro-instruction from the operation field and operation code; alternatively, the controller unit 11 can decode the macro-instruction to obtain the corresponding micro-instructions.

In a feasible embodiment, a system on chip (System on Chip, SoC) includes a main processor and a coprocessor, the main processor including the above computing device. The coprocessor obtains the decimal point positions of the input data of the same type in each layer of the multilayer network model according to the method described above and transmits them to the computing device; alternatively, the computing device obtains the decimal point positions of the input data of the same type in each layer of the multilayer network model from the coprocessor whenever it needs them.
In a feasible embodiment, the first input data is non-fixed-point data, where non-fixed-point data includes long-bit-width floating-point data, short-bit-width floating-point data, integer data, discrete data, and so on.

The data types of the first input data may differ. For example, the input neurons, weights, and bias data may all be floating-point data; part of the input neurons, weights, and bias data may be floating-point data and part may be integer data; or the input neurons, weights, and bias data may all be integer data. The computing device can convert non-fixed-point data to fixed-point data, i.e. it can convert data of types such as long-bit-width floating-point data, short-bit-width floating-point data, integer data, and discrete data into fixed-point data. The fixed-point data may be signed fixed-point data or unsigned fixed-point data.

In a feasible embodiment, the first input data and the second input data are both fixed-point data; they may both be signed fixed-point data, may both be unsigned fixed-point data, or one may be unsigned fixed-point data and the other signed fixed-point data, and the decimal point position of the first input data differs from the decimal point position of the second input data.

In a feasible embodiment, the first input data is fixed-point data and the second input data is non-fixed-point data; in other words, the computing device can convert fixed-point data into non-fixed-point data.
Fig. 4 is a flowchart of a single-layer neural network forward operation provided by an embodiment of the present invention. The flowchart describes the process of a single-layer neural network forward operation realized using the computing device and instruction set implemented by the present invention. For each layer, a weighted summation is first performed on the input neuron vector to compute the layer's intermediate result vector. The intermediate result vector is biased and activated to obtain the output neuron vector, which serves as the input neuron vector of the next layer.
In a specific application scenario, the computing device may be a training device. Before training a neural network model, the training device obtains the training data participating in the training of the neural network model; the training data is non-fixed-point data, and the decimal point position of the training data is obtained according to the method described above. The training device converts the training data into training data represented as fixed-point data according to the decimal point position of the training data, performs the forward neural network operation on the training data represented as fixed-point data, and obtains a neural network operation result. For any neural network operation result that exceeds the range of data precision expressible by the decimal point position of the training data, the training device performs a random rounding operation to obtain a rounded neural network operation result that lies within the range of data precision expressible by the decimal point position of the training data. According to the method described above, the training device obtains the neural network operation result, i.e. the output neurons, of every layer of the multilayer neural network. The training device obtains the gradient of the output neurons from each layer's output neurons, performs the reverse operation according to the gradient of the output neurons to obtain the weight gradients, and updates the weights of the neural network model according to the weight gradients.

The training device repeats this process to achieve the goal of training the neural network model.
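A compressed sketch of one such training step for a single fully connected layer, reusing the stochastic-rounding idea above; the squared-error loss, learning rate, and the decision to quantize only the forward pass are illustrative assumptions:

    import numpy as np

    def quantize(a, s):
        # Stochastic rounding to multiples of 2^-s (no saturation here).
        scaled = a * (1 << s)
        low = np.floor(scaled)
        return (low + (np.random.rand(*a.shape) < (scaled - low))) / (1 << s)

    def train_step(w, b, x, target, s=8, lr=0.01):
        y = quantize(quantize(x, s) @ quantize(w, s) + b, s)  # fixed-point forward pass
        grad_y = y - target                                   # output-neuron gradient
        return w - lr * (x.T @ grad_y), b - lr * grad_y.sum(axis=0)

    w, b = np.random.randn(4, 2), np.zeros(2)
    for _ in range(100):                                      # repeated training steps
        w, b = train_step(w, b, np.random.randn(8, 4), np.zeros((8, 2)))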
It should be pointed out that, before performing the forward operation and reverse training, the computing device may perform the data conversion on the data participating in the forward operation and not on the data participating in reverse training; or perform no data conversion on the data participating in the forward operation and perform the data conversion on the data participating in reverse training; or perform the data conversion on both the data participating in the forward operation and the data participating in reverse training. The specific data conversion process can be found in the descriptions of the related embodiments above and is not repeated here.

The forward operation includes the multilayer neural network operation described above. The multilayer neural network operation includes operations such as convolution, and the convolution operation is realized by a convolution operation instruction.
The convolution operation instruction is an instruction of the Cambricon instruction set. The Cambricon instruction set is characterized in that each instruction consists of an operation code and operands, and the instruction set includes four types of instructions: control instructions, data transfer instructions, computational instructions, and logical instructions.
Preferably, every instruction in the instruction set has a fixed length; for example, each instruction may be 64 bits long.

Further, the control instructions are used to control the execution process. The control instructions include jump instructions and conditional branch instructions.

Further, the data transfer instructions are used to complete data transfers between different storage media. The data transfer instructions include load instructions, store instructions, and move instructions. A load instruction loads data from main memory into the cache, a store instruction stores data from the cache to main memory, and a move instruction moves data between caches, between a cache and a register, or between registers. The data transfer instructions support three different data organization forms: matrix, vector, and scalar.

Further, the computational instructions are used to complete the arithmetic operations of the neural network. The computational instructions include matrix operation instructions, vector operation instructions, and scalar operation instructions.
Further, the matrix operation instructions complete the matrix operations in the neural network, including matrix multiply vector, vector multiply matrix, matrix multiply scalar, outer product, matrix add matrix, and matrix subtract matrix.

Further, the vector operation instructions complete the vector operations in the neural network, including vector elementary arithmetics, vector transcendental functions, dot product, random vector generator, and maximum/minimum of a vector. The vector elementary arithmetics include vector add, subtract, multiply, and divide; vector transcendental functions are functions that do not satisfy any polynomial equation taking polynomials as coefficients, including but not limited to exponential functions, logarithmic functions, trigonometric functions, and inverse trigonometric functions.

Further, the scalar operation instructions complete the scalar operations in the neural network, including scalar elementary arithmetics and scalar transcendental functions. The scalar elementary arithmetics include scalar add, subtract, multiply, and divide; scalar transcendental functions are functions that do not satisfy any polynomial equation taking polynomials as coefficients, including but not limited to exponential functions, logarithmic functions, trigonometric functions, and inverse trigonometric functions.
Further, the logical instructions are used for the logical operations of the neural network. The logical operations include vector logical operation instructions and scalar logical operation instructions.

Further, the vector logical operation instructions include vector compare, vector logical operations, and vector greater than merge. Vector compare includes but is not limited to greater than, less than, equal to, greater than or equal to, less than or equal to, and not equal to. The vector logical operations include and, or, not.

Further, the scalar logical operation instructions include scalar compare and scalar logical operations. Scalar compare includes but is not limited to greater than, less than, equal to, greater than or equal to, less than or equal to, and not equal to. The scalar logical operations include and, or, not.
For a multilayer neural network, the realization process is as follows. In the forward operation, after the forward operation of the previous layer of the artificial neural network completes, the operation instruction of the next layer takes the output neurons computed in the arithmetic unit as the input neurons of the next layer and performs its operation (or performs certain operations on those output neurons before using them as the next layer's input neurons), and at the same time replaces the weights with the weights of the next layer. In the reverse operation, after the reverse operation of the previous layer of the artificial neural network completes, the operation instruction of the next layer takes the input neuron gradients computed in the arithmetic unit as the output neuron gradients of the next layer and performs its operation (or performs certain operations on those input neuron gradients before using them as the next layer's output neuron gradients), and at the same time replaces the weights with the weights of the next layer. As shown in Fig. 5, the dotted arrows indicate the reverse operation and the solid arrows indicate the forward operation.
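That chaining can be written as two short loops. This is a conceptual sketch only; the tanh activation, squared-error gradient, and per-layer activation cache are assumptions, not part of the figure:

    import numpy as np

    def forward(layers, x):
        acts = [x]
        for w, b in layers:                           # output neurons of one layer
            acts.append(np.tanh(acts[-1] @ w + b))    # become the next layer's inputs
        return acts

    def backward(layers, acts, grad_out, lr=0.01):
        for (w, b), a_in, a_out in zip(reversed(layers),
                                       reversed(acts[:-1]), reversed(acts[1:])):
            grad_pre = grad_out * (1 - a_out ** 2)    # back through tanh
            grad_w = a_in.T @ grad_pre                # this layer's weight gradient
            grad_out = grad_pre @ w.T                 # next layer's output-neuron gradient
            w -= lr * grad_w                          # update weights in place

    layers = [(np.random.randn(4, 8), np.zeros(8)), (np.random.randn(8, 2), np.zeros(2))]
    acts = forward(layers, np.random.randn(16, 4))
    backward(layers, acts, acts[-1])                  # squared-error gradient, zero target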
In another embodiment, the computation instruction is a matrix computation instruction such as a matrix-multiply-matrix instruction, an accumulate instruction, or an activation instruction, and includes forward operation instructions and reverse training instructions.

The specific computation method of the computing device shown in Fig. 3A is illustrated below through a neural network operation instruction. For a neural network operation instruction, the formula that actually needs to be executed can be s = s(Σ w·xi + b): the weights w are multiplied by the input data xi, the products are summed, the bias b is added, and the activation operation s(h) is then performed to obtain the final output result s.
The method by which the computing device shown in Fig. 3A executes the neural network forward operation instruction may specifically be as follows:

After the converting unit 13 performs the data type conversion on the first input data, the controller unit 11 extracts the neural network forward operation instruction from the instruction cache unit 110, together with the operation field and at least one operation code corresponding to the neural network operation instruction; the controller unit 11 transmits the operation field to the data access unit and sends the at least one operation code to the arithmetic unit 12.

The controller unit 11 extracts the weight w and the bias b corresponding to the operation field from the storage unit 10 (when b is 0, the bias b does not need to be extracted) and transmits the weight w and the bias b to the main processing circuit 101 of the arithmetic unit; the controller unit 11 extracts the input data Xi from the storage unit 10 and sends the input data Xi to the main processing circuit 101.

The main processing circuit 101 splits the input data Xi into n data blocks;

The instruction processing unit 111 of the controller unit 11 determines a multiply instruction, a bias instruction, and an accumulate instruction according to the at least one operation code, and sends the multiply instruction, the bias instruction, and the accumulate instruction to the main processing circuit 101. The main processing circuit 101 broadcasts the multiply instruction and the weight w to the multiple slave processing circuits 102 and distributes the n data blocks to the multiple slave processing circuits 102 (for example, with n slave processing circuits 102, each slave processing circuit 102 is sent one data block). The multiple slave processing circuits 102 execute the multiplication of the weight w with the received data blocks according to the multiply instruction to obtain intermediate results and send the intermediate results to the main processing circuit 101. The main processing circuit 101 executes the accumulation of the intermediate results sent by the multiple slave processing circuits 102 according to the accumulate instruction to obtain the accumulation result, executes the bias-b operation on the accumulation result according to the bias instruction to obtain the final result, and sends the final result to the controller unit 11.
In addition, the order of the addition and multiplication operations can be exchanged.

It should be noted that the method by which the computing device executes the neural network reverse training instruction is similar to the process by which the computing device executes the neural network forward operation instruction; for details, refer to the related descriptions of reverse training above, which are not repeated here.

In the technical solution provided by this application, the neural network operation instruction realizes the multiplication and bias operations of a neural network with a single instruction. The intermediate results of the neural network computation need not be stored or extracted, which reduces the storage and extraction operations for intermediate data; the solution therefore has the advantages of fewer operation steps and an improved neural network computation effect.
This application also discloses a machine learning operation device, which includes one or more of the computing devices mentioned in this application. The machine learning operation device obtains the data to be operated on and control information from other processing devices, executes the specified machine learning operations, and passes the execution results to peripheral equipment through an I/O interface. Peripheral equipment includes, for example, cameras, displays, mice, keyboards, network cards, wifi interfaces, and servers. When more than one computing device is included, the computing devices can be linked through a specific structure and transmit data, for example interconnected through a PCIE bus and transmitting data, to support larger-scale machine learning operations. In that case, the computing devices may share the same control system or have their own independent control systems; they may share memory, or each may have its own memory. In addition, their interconnection mode can be any interconnection topology.

The machine learning operation device has high compatibility and can be connected to various types of servers through the PCIE interface.
This application also discloses a combined processing device, which includes the above machine learning operation device, a universal interconnection interface, and other processing devices. The machine learning operation device interacts with the other processing devices to jointly complete the operations specified by the user. Fig. 6 is a schematic diagram of the combined processing device.

The other processing devices include one or more processor types among general-purpose/special-purpose processors such as central processing units (CPU), graphics processing units (GPU), and machine learning processors. The number of processors included in the other processing devices is not limited. The other processing devices serve as the interface between the machine learning operation device and external data and control, performing data movement and basic controls such as starting and stopping the machine learning operation device; the other processing devices can also cooperate with the machine learning operation device to jointly complete computation tasks.

The universal interconnection interface is used to transmit data and control instructions between the machine learning operation device and the other processing devices. The machine learning operation device obtains the required input data from the other processing devices and writes it into the on-chip storage device of the machine learning operation device; it can obtain control instructions from the other processing devices and write them into the on-chip control cache of the machine learning operation device; it can also read the data in the memory module of the machine learning operation device and transmit it to the other processing devices.

Optionally, as shown in Fig. 7, the structure may also include a storage device, the storage device being connected to the machine learning operation device and the other processing devices respectively. The storage device is used to store the data of the machine learning operation device and the other processing devices, and is particularly suitable for data whose required operations cannot be fully saved in the internal storage of the machine learning operation device or the other processing devices.

The combined processing device can be used as the SOC (system on chip) of equipment such as mobile phones, robots, drones, and video surveillance equipment, effectively reducing the die area of the control portion, improving processing speed, and reducing overall power consumption. In this case, the universal interconnection interface of the combined processing device is connected to certain components of the equipment, such as cameras, displays, mice, keyboards, network cards, and wifi interfaces.
In a feasible embodiment, a distributed system is also claimed. The system includes n1 main processors and n2 coprocessors, where n1 is an integer greater than or equal to 0 and n2 is an integer greater than or equal to 1. The system can have various types of topology, including but not limited to the topologies shown in Fig. 3B and Fig. 3C.

The main processor sends the input data, its decimal point position, and the computation instruction to the multiple coprocessors; alternatively, the main processor sends the input data, its decimal point position, and the computation instruction to some of the slave processors, and those slave processors in turn send the input data, its decimal point position, and the computation instruction to other slave processors. The coprocessor includes the above computing device, and the computing device performs operations on the input data according to the method above and the computation instruction to obtain an operation result;

Here, the input data includes but is not limited to input neurons, weights, bias data, and so on.

The coprocessor sends the operation result directly to the main processor; alternatively, a coprocessor that has no connection relationship with the main processor first sends the operation result to a coprocessor that has a connection relationship with the main processor, and that coprocessor then sends the received operation result to the main processor.
In some embodiments, a chip is also claimed, which includes the above machine learning operation device or combined processing device.

In some embodiments, a chip packaging structure is claimed, which includes the above chip.

In some embodiments, a board card is claimed, which includes the above chip packaging structure.

In some embodiments, an electronic device is claimed, which includes the above board card. Referring to Fig. 8, Fig. 8 provides a board card; besides the above chip 389, the board card may also include other supporting components, including but not limited to: a memory device 390, an interface device 391, and a control device 392;
The memory device 390 is connected to the chip in the chip packaging structure through a bus and is used for storing data. The memory device may include multiple groups of storage units 393. Each group of storage units is connected to the chip through a bus. It can be understood that each group of storage units can be DDR SDRAM (Double Data Rate SDRAM, double data rate synchronous dynamic random access memory).

DDR can double the speed of SDRAM without raising the clock frequency. DDR allows data to be read on both the rising edge and the falling edge of the clock pulse, making DDR twice as fast as standard SDRAM. In one embodiment, the memory device may include 4 groups of storage units, and each group of storage units may include multiple DDR4 particles (chips). In one embodiment, the chip may internally include four 72-bit DDR4 controllers; of the 72 bits, 64 bits are used to transmit data and 8 bits are used for ECC checking. It can be understood that when DDR4-3200 particles are used in each group of storage units, the theoretical bandwidth of data transmission can reach 25600 MB/s.
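That bandwidth figure follows directly from the interface width: DDR4-3200 performs 3200 million transfers per second, and a 64-bit data path carries 8 bytes per transfer, so

    3200 MT/s × 64 bit ÷ 8 bit/byte = 3200 × 8 MB/s = 25600 MB/s.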
In one embodiment, each group of storage units includes multiple double data rate synchronous dynamic random access memories arranged in parallel. DDR can transmit data twice within one clock cycle. A controller for controlling the DDR is provided in the chip, for controlling the data transmission and data storage of each storage unit.
The interface device is electrically connected to the chip in the chip packaging structure. The interface device is used to realize data transmission between the chip and external equipment (such as a server or a computer). For example, in one embodiment, the interface device may be a standard PCIE interface; data to be processed is transmitted to the chip by the server through the standard PCIE interface, realizing the data transfer. Preferably, when PCIE 3.0 X16 interface transmission is used, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device may also be another interface; this application does not limit the specific form of such other interfaces, as long as the interface unit can realize the transfer function. In addition, the calculation results of the chip are transmitted back to the external equipment (such as a server) by the interface device.

The control device is electrically connected to the chip. The control device is used to monitor the state of the chip. Specifically, the chip may be electrically connected to the control device through an SPI interface. The control device may include a microcontroller (Micro Controller Unit, MCU). The chip may include multiple processing chips, multiple processing cores, or multiple processing circuits, and can drive multiple loads; the chip can therefore be in different working states such as multi-load and light-load. Through the control device, the working states of the multiple processing chips, multiple processing cores, and/or multiple processing circuits in the chip can be regulated.
Electronic equipment includes data processing devices, robots, computers, printers, scanners, tablet computers, intelligent terminals, mobile phones, driving recorders, navigators, sensors, cameras, servers, cloud servers, video cameras, projectors, watches, earphones, mobile storage, wearable devices, vehicles, household appliances, and/or medical equipment.

The vehicles include aircraft, ships, and/or cars; the household appliances include televisions, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods; the medical equipment includes nuclear magnetic resonance instruments, B-ultrasound machines, and/or electrocardiographs.
Referring to Fig. 9, Fig. 9 shows a method for performing machine learning computation provided by an embodiment of the present invention. The method includes:
S901: the computing device obtains first input data and a computation instruction.
Wherein, the first input data includes input neurons and weights.
S902: the computing device parses the computation instruction to obtain a data conversion instruction and a plurality of operation instructions.
Wherein, the data conversion instruction includes an operation field and an operation code. The operation code is used to indicate the function of the data type conversion instruction, and the operation field of the data type conversion instruction includes a decimal point position, a flag bit used to indicate the data type of the first input data, and a conversion mode of the data type.
S903: the computing device converts the first input data into second input data according to the data conversion instruction, the second input data being fixed-point data.
Wherein, converting the first input data into the second input data according to the data conversion instruction includes:
parsing the data conversion instruction to obtain the decimal point position, the flag bit used to indicate the data type of the first input data, and the conversion mode of the data type;
determining the data type of the first input data according to the data type flag bit of the first input data;
converting the first input data into the second input data according to the decimal point position and the conversion mode of the data type, the data type of the second input data being inconsistent with the data type of the first input data.
Wherein, when the first input data and the second input data are both fixed-point data, the decimal point position of the first input data is inconsistent with the decimal point position of the second input data.
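The conversion of step S903 can be illustrated with a minimal Python sketch. It assumes the common convention that a fixed-point value with decimal point position s stores round(x * 2^s); the function names and the 16-bit width are illustrative, not taken from the embodiment:

    import numpy as np

    def float_to_fixed(x, point_position, bit_width=16):
        # Quantize floating-point data to signed fixed-point integers:
        # the stored integer is round(x * 2**point_position), saturated
        # to the range representable in bit_width bits.
        scale = 2.0 ** point_position
        lo, hi = -(1 << (bit_width - 1)), (1 << (bit_width - 1)) - 1
        q = np.clip(np.round(np.asarray(x, dtype=np.float64) * scale), lo, hi)
        return q.astype(np.int32)

    def fixed_to_float(q, point_position):
        # Inverse mapping back to floating point.
        return np.asarray(q, dtype=np.float64) / (2.0 ** point_position)

    # Example: with point_position = 8, the value 0.75 is stored as 192.
    print(float_to_fixed([0.75, -1.5], 8))   # [ 192 -384]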
In a feasible embodiment, when the first input data is fixed-point data, the method further includes:
deriving the decimal point position of one or more intermediate results according to the decimal point position of the first input data, wherein the one or more intermediate results are obtained by operating on the first input data.
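One plausible reading of this derivation step, as a sketch (the rules below are assumptions consistent with the x * 2^s convention above; the embodiment does not state them at this point):

    def derive_point_position(op, pos_a, pos_b):
        # Hypothetical rules for the decimal point position of an
        # intermediate result, assuming values store x * 2**pos:
        #   product: (a * 2**pa) * (b * 2**pb) = a*b * 2**(pa + pb)
        #   sum:     operands must share a position before adding
        if op == "mul":
            return pos_a + pos_b
        if op == "add":
            return max(pos_a, pos_b)   # align to the finer position
        raise ValueError("unsupported op: " + op)

For example, multiplying two fixed-point values whose positions are both 8 yields an intermediate result whose position is 16, which the arithmetic unit can then rescale to the decimal point position of the output.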
S904: the computing device performs operations on the second input data according to the plurality of operation instructions to obtain the result of the computation instruction.
Wherein, the operation instructions include forward operation instructions and reverse training instructions. That is, in the process of executing forward operation instructions and/or reverse training instructions (i.e., while performing forward operation and/or reverse training), the computing device can convert the data participating in the operations into fixed-point data according to the embodiment shown in Fig. 9 and perform fixed-point computation.
It should be noted that, for the specific description of the above steps S901-S904, reference can be made to the related descriptions of the embodiments shown in Figs. 1-8, which are not repeated here.
It should be noted that, for the sake of simple description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described sequence of actions, because according to the present application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are optional embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference can be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed device may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical function division, and there may be other division manners in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Further, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable memory, and the memory may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disc, and the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only used to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (28)

1. A computing device, characterized in that the computing device comprises: a storage unit, a controller unit, and a conversion unit;
the controller unit is configured to obtain one or more operation instructions, wherein the operation instruction includes an operation field and an operation code, the operation code being used to indicate the function of the operation instruction, and the operation field including a start address of first input data, a length of the first input data, a start address of output data, a decimal point position, a data type flag bit of the first input data, and an operation type identifier; to parse the operation instruction to obtain the start address of the first input data, the start address of the output data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier; to obtain the first input data from the storage unit according to the start address of the first input data; to then transmit the first input data, the decimal point position, the data type flag bit of the first input data, and the operation type identifier to the conversion unit; and to transmit the plurality of operation instructions to the arithmetic unit;
the conversion unit is configured to convert the first input data into second input data according to the decimal point position, the data type flag bit, and the operation type identifier, the second input data being fixed-point data.
2. The device according to claim 1, characterized in that the controller unit obtaining one or more operation instructions specifically includes:
the controller unit obtains a computation instruction and parses the computation instruction to obtain the one or more operation instructions.
3. The device according to claim 2, characterized in that the operation field of the operation instruction further includes the length of the first input data, and the controller unit is further configured to parse the operation instruction to obtain the length of the first input data;
the controller unit obtaining the first input data from the storage unit according to the start address of the first input data includes:
the controller unit obtains the first input data from the storage unit according to the start address of the first input data and the length of the first input data.
4. The device according to claim 3, characterized in that the computing device is configured to perform machine learning computation, and the device further includes an arithmetic unit;
the controller unit is further configured to transmit the one or more operation instructions to the arithmetic unit;
the conversion unit is further configured to transmit the second input data to the arithmetic unit;
the arithmetic unit is configured to perform operations on the second input data according to the one or more operation instructions to obtain the calculation result of the computation instruction, and to store the calculation result at the position in the storage unit corresponding to the start address of the output data.
5. The device according to claim 4, characterized in that the machine learning computation includes an artificial neural network operation, the first input data includes input neuron data and weight data, and the calculation result is output neuron data.
6. The device according to claim 4 or 5, characterized in that the arithmetic unit includes a main processing circuit and a plurality of slave processing circuits;
the main processing circuit is configured to perform preprocessing on the second input data and to transmit data and the plurality of operation instructions between the main processing circuit and the plurality of slave processing circuits;
the plurality of slave processing circuits are configured to perform intermediate operations according to the second input data and the plurality of operation instructions transmitted from the main processing circuit to obtain a plurality of intermediate results, and to transmit the plurality of intermediate results to the main processing circuit;
the main processing circuit is configured to perform subsequent processing on the plurality of intermediate results to obtain the calculation result of the computation instruction.
7. The device according to claim 6, characterized in that the computing device further includes a direct memory access (DMA) unit, and the storage unit includes any combination of a register and a cache;
the cache is configured to store the first input data, wherein the cache includes a scratchpad cache;
the register is configured to store scalar data in the first input data;
the DMA unit is configured to read data from the storage unit or write data to the storage unit.
8. The device according to claim 6 or 7, characterized in that the controller unit includes an instruction cache unit, an instruction processing unit, and a storage queue unit;
the instruction cache unit is configured to store computation instructions associated with the artificial neural network operation;
the instruction processing unit is configured to parse the computation instruction to obtain the data conversion instruction and the plurality of operation instructions, and to parse the data conversion instruction to obtain the operation code and operation field of the data conversion instruction;
the storage queue unit is configured to store an instruction queue, the instruction queue including a plurality of operation instructions or computation instructions to be executed in the sequential order of the queue.
9. The device according to claim 8, characterized in that the controller unit further includes:
a dependency processing unit, configured to determine whether a first operation instruction has an association with a zeroth operation instruction preceding the first operation instruction; if the first operation instruction has an association with the zeroth operation instruction, to cache the first operation instruction in the instruction cache unit, and, after the zeroth operation instruction has finished executing, to fetch the first operation instruction from the instruction cache unit and transmit it to the arithmetic unit;
wherein determining whether the first operation instruction has an association with the zeroth operation instruction preceding the first operation instruction includes:
extracting, according to the first operation instruction, a first storage address range of the data required by the first operation instruction, and extracting, according to the zeroth operation instruction, a zeroth storage address range of the data required by the zeroth operation instruction; if the first storage address range and the zeroth storage address range have an overlapping region, determining that the first operation instruction and the zeroth operation instruction have an association; and if the first storage address range and the zeroth storage address range have no overlapping region, determining that the first operation instruction and the zeroth operation instruction have no association.
10. The device according to any one of claims 4-9, characterized in that when the first input data is fixed-point data, the arithmetic unit further includes:
a derivation unit, configured to derive the decimal point position of one or more intermediate results according to the decimal point position of the first input data, wherein the one or more intermediate results are obtained by operating on the first input data.
11. The device according to claim 10, characterized in that the arithmetic unit further includes:
a data cache unit, configured to cache the one or more intermediate results.
12. The device according to any one of claims 4-9, characterized in that the arithmetic unit includes a tree module, the tree module including one root port and multiple branch ports, the root port of the tree module being connected to the main processing circuit, and each of the multiple branch ports of the tree module being connected to one of the plurality of slave processing circuits;
the tree module is configured to forward data and operation instructions between the main processing circuit and the plurality of slave processing circuits, wherein the tree module is an n-ary tree structure, n being an integer greater than or equal to 2.
13. The device according to any one of claims 4-9, characterized in that the arithmetic unit further includes a branch processing circuit;
the main processing circuit is specifically configured to determine that the input neurons are broadcast data and the weights are distribution data, to divide the distribution data into a plurality of data blocks, and to send at least one of the plurality of data blocks, the broadcast data, and at least one of the plurality of operation instructions to the branch processing circuit;
the branch processing circuit is configured to forward the data blocks, the broadcast data, and the operation instructions between the main processing circuit and the plurality of slave processing circuits;
the plurality of slave processing circuits are configured to perform operations on the received data blocks and broadcast data according to the operation instructions to obtain intermediate results, and to transmit the intermediate results to the branch processing circuit;
the main processing circuit is further configured to perform subsequent processing on the intermediate results sent by the branch processing circuit to obtain the result of the operation instruction, and to send the result of the computation instruction to the controller unit.
14. The device according to any one of claims 4-9, characterized in that the plurality of slave processing circuits are distributed in an array; each slave processing circuit is connected to the other adjacent slave processing circuits, and the main processing circuit is connected to K slave processing circuits of the plurality of slave processing circuits, the K slave processing circuits being: the n slave processing circuits of the 1st row, the n slave processing circuits of the m-th row, and the m slave processing circuits of the 1st column;
the K slave processing circuits are configured to forward data and instructions between the main processing circuit and the plurality of slave processing circuits;
the main processing circuit is further configured to determine that the input neurons are broadcast data and the weights are distribution data, to divide the distribution data into a plurality of data blocks, and to send at least one of the plurality of data blocks and at least one of the plurality of operation instructions to the K slave processing circuits;
the K slave processing circuits are configured to forward the data between the main processing circuit and the plurality of slave processing circuits;
the plurality of slave processing circuits are configured to perform operations on the received data blocks according to the operation instructions to obtain intermediate results, and to transmit the operation results to the K slave processing circuits;
the main processing circuit is configured to process the intermediate results sent by the K slave processing circuits to obtain the result of the computation instruction, and to send the result of the computation instruction to the controller unit.
15. The device according to any one of claims 12-14, characterized in that
the main processing circuit is specifically configured to combine and sort the intermediate results sent by the plurality of processing circuits to obtain the result of the computation instruction;
or the main processing circuit is specifically configured to combine and sort the intermediate results sent by the plurality of processing circuits and perform activation processing to obtain the result of the computation instruction.
16. The device according to any one of claims 12-14, characterized in that the main processing circuit includes one of, or any combination of, an activation processing circuit and an addition processing circuit;
the activation processing circuit is configured to perform the activation operation on data in the main processing circuit;
the addition processing circuit is configured to perform addition operations or accumulation operations;
each slave processing circuit includes:
a multiplication processing circuit, configured to perform product operations on received data blocks to obtain product results;
an accumulation processing circuit, configured to perform accumulation operations on the product results to obtain the intermediate results.
17. A machine learning arithmetic device, characterized in that the machine learning arithmetic device includes one or more computing devices according to any one of claims 4-16, configured to obtain data to be operated on and control information from other processing devices, to perform the specified machine learning operations, and to pass the execution results to the other processing devices through I/O interfaces;
when the machine learning arithmetic device includes a plurality of computing devices, the plurality of computing devices can be connected to one another and transmit data through a specific structure;
wherein the plurality of computing devices are interconnected and transmit data through a PCIE (peripheral component interconnect express) bus to support larger-scale machine learning operations; the plurality of computing devices share the same control system or have their own control systems; the plurality of computing devices share a memory or have their own memories; and the interconnection manner of the plurality of computing devices is any interconnection topology.
18. A combined processing device, characterized in that the combined processing device includes the machine learning arithmetic device according to claim 17, a universal interconnect interface, a storage device, and other processing devices;
the machine learning arithmetic device interacts with the other processing devices to jointly complete the computing operations specified by the user;
the storage device is connected to the machine learning arithmetic device and the other processing devices, respectively, and is configured to save data of the machine learning arithmetic device and the other processing devices.
19. A neural network chip, characterized in that the neural network chip includes the machine learning arithmetic device according to claim 17 or the combined processing device according to claim 18.
20. An electronic device, characterized in that the electronic device includes the chip according to claim 19.
21. A board, characterized in that the board includes: a memory device, an interface device, a control device, and the neural network chip according to claim 19;
wherein the neural network chip is respectively connected with the memory device, the control device, and the interface device;
the memory device is configured to store data;
the interface device is configured to realize data transmission between the chip and an external device;
the control device is configured to monitor the state of the chip;
wherein the memory device includes multiple groups of storage units, each group of storage units being connected to the chip by a bus, the storage units being DDR SDRAM;
the chip includes a DDR controller configured to control data transfer to and data storage in each storage unit;
the interface device is a standard PCIE interface.
22. A calculation method, characterized by comprising:
a controller unit obtains one or more operation instructions, parses the operation instruction to obtain a start address of first input data, a length of the first input data, a start address of output data, a decimal point position, a data type flag bit of the first input data, and an operation type identifier, and obtains the first input data from a storage unit according to the start address of the first input data;
a conversion unit converts the first input data into second input data according to the decimal point position, the data type flag bit, and the operation type identifier, the second input data being fixed-point data.
23. The method according to claim 22, characterized in that the controller unit obtaining one or more operation instructions includes:
obtaining a computation instruction, and parsing the computation instruction to obtain the one or more operation instructions.
24. The method according to claim 23, characterized in that the operation field of the operation instruction further includes the length of the first input data;
the controller unit obtaining the first input data according to the start address of the first input data includes: obtaining the first input data according to the start address of the first input data and the length of the first input data.
25. The method according to claim 24, characterized in that the calculation method is a method for performing machine learning computation, and the method further includes:
an arithmetic unit performs operations on the second input data according to the plurality of operation instructions to obtain the calculation result of the computation instruction, and stores the calculation result in the memory space corresponding to the output data address.
26. The method according to claim 24 or 25, characterized in that the machine learning computation includes an artificial neural network operation, the first input data includes input neurons and weights, and the calculation result is output neurons.
27. The method according to claim 24 or 25, characterized in that when the first input data and the second input data are both fixed-point data, the decimal point position of the first input data is inconsistent with the decimal point position of the second input data.
28. The method according to claim 27, characterized in that when the first input data is fixed-point data, the method further includes:
the arithmetic unit derives the decimal point position of one or more intermediate results according to the decimal point position of the first input data, wherein the one or more intermediate results are obtained by operating on the first input data.
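By way of illustration only, the association test recited in claim 9 reduces to an interval-overlap check on the two instructions' storage address ranges; a minimal sketch (the function name and tuple layout are illustrative):

    def has_dependency(first_range, zeroth_range):
        # Claim 9's association test: each range is the (start, end)
        # storage addresses of the data an instruction requires; an
        # overlapping region implies an association, so the first
        # operation instruction waits in the instruction cache unit
        # until the zeroth operation instruction finishes executing.
        f_start, f_end = first_range
        z_start, z_end = zeroth_range
        return f_start <= z_end and z_start <= f_end

    # Example: [0x100, 0x1FF] overlaps [0x180, 0x27F] -> association exists.
    print(has_dependency((0x100, 0x1FF), (0x180, 0x27F)))   # True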
CN201910195820.3A 2018-02-13 2018-09-03 Computing device and method Active CN110163361B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
CN2018101492872 2018-02-13
CN201810149287.2A CN110163350B (en) 2018-02-13 2018-02-13 Computing device and method
CN2018102079158 2018-03-14
CN201810207915.8A CN110276447A (en) 2018-03-14 2018-03-14 A kind of computing device and method
CN201880002628.1A CN110383300B (en) 2018-02-13 2018-09-03 Computing device and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201880002628.1A Division CN110383300B (en) 2018-02-13 2018-09-03 Computing device and method

Publications (2)

Publication Number Publication Date
CN110163361A true CN110163361A (en) 2019-08-23
CN110163361B CN110163361B (en) 2021-06-25

Family

ID=67638324

Family Applications (11)

Application Number Title Priority Date Filing Date
CN201910195627.XA Active CN110163357B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195598.7A Active CN110163354B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195899.XA Active CN110163363B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195816.7A Active CN110163358B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195535.1A Active CN110163353B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195599.1A Active CN110163355B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195818.6A Active CN110163359B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195820.3A Active CN110163361B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195819.0A Active CN110163360B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195898.5A Active CN110163362B (en) 2018-02-13 2018-09-03 Computing device and method
CN201910195600.0A Active CN110163356B (en) 2018-02-13 2018-09-03 Computing device and method


Country Status (1)

Country Link
CN (11) CN110163357B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597756A (en) * 2019-08-26 2019-12-20 光子算数(北京)科技有限责任公司 Calculation circuit and data operation method

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11663002B2 (en) 2018-02-13 2023-05-30 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
EP3651075B1 (en) * 2018-02-13 2021-10-27 Shanghai Cambricon Information Technology Co., Ltd Computation device and method
CN112445524A (en) * 2019-09-02 2021-03-05 中科寒武纪科技股份有限公司 Data processing method, related device and computer readable medium
CN112667241B (en) * 2019-11-08 2023-09-29 安徽寒武纪信息科技有限公司 Machine learning instruction conversion method and device, board card, main board and electronic equipment
CN110929862B (en) * 2019-11-26 2023-08-01 陈子祺 Fixed-point neural network model quantification device and method
KR20210077352A (en) * 2019-12-17 2021-06-25 에스케이하이닉스 주식회사 Data Processing System and accelerating DEVICE therefor
EP4220448A1 (en) * 2020-09-27 2023-08-02 Cambricon (Xi'an) Semiconductor Co., Ltd. Data processing device, integrated circuit chip, device, and implementation method therefor

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426344A (en) * 2015-11-09 2016-03-23 南京大学 Matrix calculation method of distributed large-scale matrix multiplication based on Spark
CN107315563A (en) * 2016-04-26 2017-11-03 北京中科寒武纪科技有限公司 A kind of apparatus and method for performing vectorial comparison operation
CN107315575A (en) * 2016-04-26 2017-11-03 北京中科寒武纪科技有限公司 A kind of apparatus and method for performing vectorial union operation
CN107316078A (en) * 2016-04-27 2017-11-03 北京中科寒武纪科技有限公司 Apparatus and method for performing artificial neural network self study computing
CN107315566A (en) * 2016-04-26 2017-11-03 北京中科寒武纪科技有限公司 A kind of apparatus and method for performing vector circulant shift operation
CN107330515A (en) * 2016-04-29 2017-11-07 北京中科寒武纪科技有限公司 A kind of apparatus and method for performing artificial neural network forward operation
CN107608715A (en) * 2017-07-20 2018-01-19 上海寒武纪信息科技有限公司 For performing the device and method of artificial neural network forward operation

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6650327B1 (en) * 1998-06-16 2003-11-18 Silicon Graphics, Inc. Display system having floating point rasterization and floating point framebuffering
US6834293B2 (en) * 2001-06-15 2004-12-21 Hitachi, Ltd. Vector scaling system for G.728 annex G
CN100410871C (en) * 2003-07-23 2008-08-13 联发科技股份有限公司 Digital signal processor applying skip type floating number operational method
US7432925B2 (en) * 2003-11-21 2008-10-07 International Business Machines Corporation Techniques for representing 3D scenes using fixed point data
CN1658153B (en) * 2004-02-18 2010-04-28 联发科技股份有限公司 Compound dynamic preset number representation and algorithm, and its processor structure
CN100340972C (en) * 2005-06-07 2007-10-03 北京北方烽火科技有限公司 Method for implementing logarithm computation by field programmable gate array in digital auto-gain control
JP4976798B2 (en) * 2006-09-28 2012-07-18 株式会社東芝 Two-degree-of-freedom position control method, two-degree-of-freedom position control device, and medium storage device
CN101231632A (en) * 2007-11-20 2008-07-30 西安电子科技大学 Method for processing floating-point FFT by FPGA
CN101183873B (en) * 2007-12-11 2011-09-28 广州中珩电子科技有限公司 BP neural network based embedded system data compression/decompression method
CN101510149B (en) * 2009-03-16 2011-05-04 炬力集成电路设计有限公司 Method and apparatus for processing data
CN101754039A (en) * 2009-12-22 2010-06-23 中国科学技术大学 Three-dimensional parameter decoding system for mobile devices
US9104479B2 (en) * 2011-12-07 2015-08-11 Arm Limited Apparatus and method for rounding a floating-point value to an integral floating-point value
CN102981854A (en) * 2012-11-16 2013-03-20 天津市天祥世联网络科技有限公司 Neural network optimization method based on floating number operation inline function library
CN103019647B (en) * 2012-11-28 2015-06-24 中国人民解放军国防科学技术大学 Floating-point accumulation/gradual decrease operational method with floating-point precision maintaining function
CN103455983A (en) * 2013-08-30 2013-12-18 深圳市川大智胜科技发展有限公司 Image disturbance eliminating method in embedded type video system
CN104572011B (en) * 2014-12-22 2018-07-31 上海交通大学 Universal matrix fixed-point multiplication device based on FPGA and its computational methods
US20170061279A1 (en) * 2015-01-14 2017-03-02 Intel Corporation Updating an artificial neural network using flexible fixed point representation
CN104679719B (en) * 2015-03-17 2017-11-10 成都金本华科技股份有限公司 A kind of floating-point operation method based on FPGA
CN104679720A (en) * 2015-03-17 2015-06-03 成都金本华科技股份有限公司 Operation method for FFT
CN105094744B (en) * 2015-07-28 2018-01-16 成都腾悦科技有限公司 A kind of variable floating data microprocessor
US9977116B2 (en) * 2015-10-05 2018-05-22 Analog Devices, Inc. Scaling fixed-point fast Fourier transforms in radar and sonar applications
CN106447035B (en) * 2015-10-08 2019-02-26 上海兆芯集成电路有限公司 Processor with variable rate execution unit
CN106991476B (en) * 2016-01-20 2020-04-10 中科寒武纪科技股份有限公司 Apparatus and method for performing artificial neural network forward operations
CN107506828B (en) * 2016-01-20 2020-11-03 中科寒武纪科技股份有限公司 Artificial neural network computing device and method for sparse connection
CN107301453B (en) * 2016-04-15 2021-04-20 中科寒武纪科技股份有限公司 Artificial neural network forward operation device and method supporting discrete data representation
CN109375951B (en) * 2016-04-27 2020-10-09 中科寒武纪科技股份有限公司 Device and method for executing forward operation of full-connection layer neural network
CN107340993B (en) * 2016-04-28 2021-07-16 中科寒武纪科技股份有限公司 Arithmetic device and method
CN107341547B (en) * 2016-04-29 2021-04-20 中科寒武纪科技股份有限公司 Apparatus and method for performing convolutional neural network training
CN106502626A (en) * 2016-11-03 2017-03-15 北京百度网讯科技有限公司 Data processing method and device
CN106708780A (en) * 2016-12-12 2017-05-24 中国航空工业集团公司西安航空计算技术研究所 Low complexity branch processing circuit of uniform dyeing array towards SIMT framework
CN106775599B (en) * 2017-01-09 2019-03-01 南京工业大学 The more computing unit coarseness reconfigurable systems and method of recurrent neural network
CN107292334A (en) * 2017-06-08 2017-10-24 北京深瞐科技有限公司 Image-recognizing method and device
CN107451658B (en) * 2017-07-24 2020-12-15 杭州菲数科技有限公司 Fixed-point method and system for floating-point operation


Also Published As

Publication number Publication date
CN110163354B (en) 2020-10-09
CN110163359B (en) 2020-12-11
CN110163358B (en) 2021-01-05
CN110163360B (en) 2021-06-25
CN110163359A (en) 2019-08-23
CN110163355B (en) 2020-10-09
CN110163356A (en) 2019-08-23
CN110163358A (en) 2019-08-23
CN110163353B (en) 2021-05-11
CN110163355A (en) 2019-08-23
CN110163363B (en) 2021-05-11
CN110163362B (en) 2020-12-11
CN110163353A (en) 2019-08-23
CN110163363A (en) 2019-08-23
CN110163354A (en) 2019-08-23
CN110163356B (en) 2020-10-09
CN110163357B (en) 2021-06-25
CN110163360A (en) 2019-08-23
CN110163357A (en) 2019-08-23
CN110163361B (en) 2021-06-25
CN110163362A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
US11710041B2 (en) Feature map and weight selection method and accelerating device
CN110163361A (en) A kind of computing device and method
CN109189474A (en) Processing with Neural Network device and its method for executing vector adduction instruction
CN109740739B (en) Neural network computing device, neural network computing method and related products
CN109522052B (en) Computing device and board card
CN110383300A (en) A kind of computing device and method
CN109740754B (en) Neural network computing device, neural network computing method and related products
CN109543832A (en) A kind of computing device and board
CN111047022B (en) Computing device and related product
CN110163350A (en) A kind of computing device and method
CN109032670A (en) Processing with Neural Network device and its method for executing vector duplicate instructions
CN110276447A (en) A kind of computing device and method
CN110059809A (en) A kind of computing device and Related product
CN111047021B (en) Computing device and related product
CN111178492B (en) Computing device, related product and computing method for executing artificial neural network model
CN111047024B (en) Computing device and related product
CN111260070B (en) Operation method, device and related product
CN111382848A (en) Computing device and related product
CN110472734A (en) A kind of computing device and Related product
CN112394990A (en) Floating point to half precision floating point instruction processing device and method and related products

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant