CN109522052B - Computing device and board card - Google Patents


Info

Publication number
CN109522052B
CN109522052B (application CN201811429808.6A)
Authority
CN
China
Prior art keywords
processing circuit
output
neural network
data
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811429808.6A
Other languages
Chinese (zh)
Other versions
CN109522052A (en)
Inventor
Not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate.)
Cambricon Technologies Corp Ltd
Original Assignee
Cambricon Technologies Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion.)
Filing date
Publication date
Application filed by Cambricon Technologies Corp Ltd filed Critical Cambricon Technologies Corp Ltd
Priority to CN201811429808.6A priority Critical patent/CN109522052B/en
Publication of CN109522052A publication Critical patent/CN109522052A/en
Application granted granted Critical
Publication of CN109522052B publication Critical patent/CN109522052B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent


Abstract

The application provides a computing device and a board card. The computing device is used for carrying out recurrent neural network operations. The board card includes: a storage device, an interface device, a control device, and a neural network chip, the neural network chip comprising the computing device. The storage device is used for storing data; the interface device is used for realizing data transmission between the chip and external equipment; the control device is used for monitoring the state of the chip. The computing device has the advantages of low cost and low power consumption.

Description

Computing device and board card
Technical Field
The application relates to the technical field of information processing, in particular to a computing device and a board card.
Background
With the continuous development of information technology and people's ever-growing demands, the requirements on the timeliness of information are becoming higher and higher. Currently, terminals obtain and process information on general-purpose processors. For example, recurrent neural networks run on general-purpose processors are widely applied in speech recognition, language modeling, translation, image captioning, and other fields, and have recently received increasingly broad attention in academia and industry due to their high recognition accuracy and good parallelizability.
In practice, it is found that running a software program on a general-purpose processor to process a recurrent neural network is inefficient and consumes a large amount of power.
Disclosure of Invention
The embodiment of the application provides a computing device and related products, which can increase the processing speed of a recurrent neural network, improve efficiency, and reduce power consumption.
In a first aspect, a computing device is provided for performing a recurrent neural network operation. The recurrent neural network comprises an input layer, hidden layers, and an output layer, the number of hidden layers being H. The computing device comprises: an arithmetic unit and a controller unit; the arithmetic unit includes: a master processing circuit and a slave processing circuit. The computing device is used for executing the computation of the h-th hidden layer of the recurrent neural network, the time corresponding to the h-th hidden layer being t;
the controller unit is used for acquiring the input data X_i^t of the h-th hidden layer, the weight W of the h-th hidden layer, and the output result O_i^{t-1} of the (h-1)-th hidden layer;
The controller unit is also used for sending the input data X_i^t, the weight W, and the output result O_i^{t-1} to the main processing circuit;
the main processing circuit is used for splitting the input data X_i^t into a plurality of input data blocks, splitting the output result O_i^{t-1} into a plurality of output data blocks, distributing the plurality of input data blocks and the plurality of output data blocks to the slave processing circuit, and broadcasting the weight W to the slave processing circuit;
the slave processing circuit is used for performing multiplication operation on the received input data block and the weight to obtain an input intermediate result, performing multiplication operation on the received output data block and the weight to obtain an output intermediate result, and sending the input intermediate result and the output intermediate result to the master processing circuit;
the main processing circuit is further used for obtaining a part of output results from the input intermediate results of the auxiliary processing circuit, splicing the output intermediate results to obtain another part of output results, calculating the sum of the part of output results and the other part of output results to obtain a hidden layer output result, and performing subsequent operation on the hidden layer output result to obtain an output result O of the h output layer of the recurrent neural network operationi t
In a second aspect, an embodiment of the present application provides a recurrent neural network computing apparatus, where the recurrent neural network computing apparatus includes one or more computing apparatuses provided in the first aspect, and is configured to obtain data to be computed and control information from other processing apparatuses, execute a specified recurrent neural network operation, and transmit an execution result to the other processing apparatuses through an I/O interface;
when the recurrent neural network device comprises a plurality of computing devices, the plurality of computing devices can be connected through a specific structure and transmit data;
the computing devices are interconnected through a PCIE bus of a fast peripheral equipment interconnection bus and transmit data so as to support operation of a larger-scale recurrent neural network; a plurality of the computing devices share the same control system or own respective control systems; the computing devices share the memory or own the memory; the plurality of computing devices are interconnected in any interconnection topology.
In a third aspect, a combined processing device is provided, which includes the recurrent neural network operation device of the second aspect, a universal interconnection interface and other processing devices;
and the recurrent neural network operation device interacts with the other processing devices to jointly complete the calculation operation specified by the user.
In a fourth aspect, a neural network chip is provided, where the neural network chip includes the computing device provided in the first aspect, or the recurrent neural network operation device provided in the second aspect, or the combined processing device provided in the third aspect.
In a fifth aspect, an electronic device is provided, the electronic device comprising a chip as provided in the fourth aspect.
In a sixth aspect, a board card is provided, which includes: a memory device, an interface device and a control device and the neural network chip provided in the fourth aspect;
wherein, the neural network chip is respectively connected with the storage device, the control device and the interface device;
the storage device is used for storing data;
the interface device is used for realizing data transmission between the chip and external equipment;
and the control device is used for monitoring the state of the chip.
In a seventh aspect, an embodiment of the present application further provides a recurrent neural network operation method applied to a computing device. The recurrent neural network comprises an input layer, hidden layers, and an output layer, the number of hidden layers being H. The computing device comprises: an arithmetic unit and a controller unit; the arithmetic unit includes: a master processing circuit and a slave processing circuit. The computing device is used for executing the computation of the h-th hidden layer of the recurrent neural network, the time corresponding to the h-th hidden layer being t. The method specifically comprises the following steps:
the controller unit obtains the input data X_i^t of the h-th hidden layer, the weight W of the h-th hidden layer, and the output result O_i^{t-1} of the (h-1)-th hidden layer, and sends the input data X_i^t, the weight W, and the output result O_i^{t-1} to the main processing circuit;
the main processing circuit splits the input data X_i^t into a plurality of input data blocks, splits the output result O_i^{t-1} into a plurality of output data blocks, distributes the plurality of input data blocks and the plurality of output data blocks to the slave processing circuit, and broadcasts the weight W to the slave processing circuit;
the slave processing circuit performs multiplication operation on the received input data block and the weight to obtain an input intermediate result, performs multiplication operation on the received output data block and the weight to obtain an output intermediate result, and sends the input intermediate result and the output intermediate result to the master processing circuit;
the main processing circuit obtains a part of the output result from the input intermediate results of the slave processing circuit, splices the output intermediate results to obtain another part of the output result, calculates the sum of the two parts to obtain a hidden layer output result, and performs a subsequent operation on the hidden layer output result to obtain the output result O_i^t of the output layer of the recurrent neural network operation.
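The split-compute-combine flow of the steps above can be sketched in software. This is a minimal, illustrative model only: NumPy arrays stand in for hardware data blocks, and the number of slave circuits, the row-wise split, and the tanh "subsequent operation" are all assumptions not fixed by the text.

```python
import numpy as np

def rnn_hidden_step(x_t, o_prev, w, num_slaves=4, activation=np.tanh):
    """Illustrative master/slave flow for one hidden-layer step.

    x_t:    input data X_i^t,        shape (n,)
    o_prev: output result O_i^{t-1}, shape (n,)
    w:      weight W, shape (n, n) -- broadcast to every slave
    """
    # Master: split X_i^t and O_i^{t-1} into data blocks.
    x_blocks = np.array_split(x_t, num_slaves)
    o_blocks = np.array_split(o_prev, num_slaves)
    row_blocks = np.array_split(np.arange(len(x_t)), num_slaves)

    # Slaves: multiply each received block with the matching rows of W.
    in_partials = [blk @ w[rows, :] for blk, rows in zip(x_blocks, row_blocks)]
    out_partials = [blk @ w[rows, :] for blk, rows in zip(o_blocks, row_blocks)]

    # Master: combine the intermediate results and sum the two parts.
    part_a = np.sum(in_partials, axis=0)   # from the input data blocks
    part_b = np.sum(out_partials, axis=0)  # from the output data blocks
    return activation(part_a + part_b)     # "subsequent operation" (assumed tanh)
```

With `w` set to the identity, the result reduces to tanh(X_i^t + O_i^{t-1}), which makes the two summed partial results easy to see.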
In some embodiments, the electronic device comprises a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a dashcam, a navigator, a sensor, a camera, a server, a cloud server, a camcorder, a projector, a watch, a headset, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
In some embodiments, the vehicle comprises an aircraft, a ship, and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of a recurrent neural network.
Fig. 2 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Fig. 2a is a schematic structural diagram of an arithmetic unit according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of another computing device provided in the present application.
Fig. 3a is a schematic structural diagram of a main processing circuit provided in the present application.
Fig. 4a is a schematic structural diagram of a transmitting end of a tree module provided in the present application.
Fig. 4b is a schematic structural diagram of a receiving end of a tree module according to the present application.
Fig. 4c is a schematic diagram of a binary tree structure provided in the present application.
FIG. 5 is a block diagram of a computing device provided in one embodiment of the present application.
Fig. 6 is a flowchart illustrating a recurrent neural network operation method according to an embodiment of the present disclosure.
Fig. 7 is a structural diagram of a combined processing device according to an embodiment of the present application.
Fig. 8 is a block diagram of another combined processing device according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a board card provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic diagram of a recurrent neural network. As shown in fig. 1, the recurrent neural network includes an input layer, a hidden layer, and an output layer. To address the temporal dependence on previous inputs that a traditional neural network cannot capture, during forward operation the input data of the current time t and the hidden-layer output result of the previous time t-1 are both fed into the recurrent neural network; the input data at time t is multiplied by the weight data, the hidden-layer output result of time t-1 is multiplied by the weight, and the results are combined to obtain the output result.
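The per-step computation just described — current input times a weight, previous hidden output times a weight, the two combined — is the standard recurrent cell. A hedged sketch follows; the separate weight matrices W_x and W_h and the tanh activation are assumptions, as the text only says each part is multiplied by "a weight":

```python
import numpy as np

def rnn_forward_step(x_t, h_prev, w_x, w_h, b=None):
    """One forward step of the recurrent layer in fig. 1 (illustrative).

    Computes h_t = tanh(W_x @ x_t + W_h @ h_{t-1} [+ b]): the time-t input
    and the time-(t-1) hidden output are each multiplied by a weight and
    the products are combined.
    """
    z = w_x @ x_t + w_h @ h_prev
    if b is not None:
        z = z + b
    return np.tanh(z)
```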
Referring to fig. 2, fig. 2 is a computing device provided in the present application. Referring to fig. 2, there is provided a computing device for performing a recurrent neural network operation, the computing device comprising: a controller unit 11 and an arithmetic unit 12, wherein the controller unit 11 is connected with the arithmetic unit 12, and the arithmetic unit 12 comprises: a master processing circuit 101 and a slave processing circuit 102 (which may be one or more slave processing circuits, with multiple slave processing circuits being preferred);
it should be noted that the main processing circuit itself includes a storage (e.g. a memory or a register) which can store some data of the main processing circuit, and the slave processing circuit can optionally carry the storage.
The recurrent neural network includes: an input layer, hidden layers, and an output layer. The computing device is used for executing the h-th hidden layer calculation of the recurrent neural network, the time corresponding to the h-th hidden layer being t;
a controller unit 11 for acquiring input data X of the h-th hidden layeri tThe weight W of the h hidden layer and the output result O of the h-1 hidden layeri t-1
A controller unit 11 for inputting data Xi tWeight W and output result Oi t-1Sending to the main processing circuit 101;
a main processing circuit 101 for inputting data Xi tSplitting the data into a plurality of input data blocks and outputting a result Oi t-1Splitting the weight value W into a plurality of output data blocks, distributing the plurality of input data blocks and the plurality of output data blocks to a slave processing circuit, and broadcasting the weight value W to the slave processing circuit;
the slave processing circuit 102 is configured to perform a multiplication operation on the received input data block and the weight to obtain an input intermediate result, perform a multiplication operation on the received output data block and the weight to obtain an output intermediate result, and send the input intermediate result and the output intermediate result to the master processing circuit;
the main processing circuit 101 is further configured to obtain a partial output result from the input intermediate result of the slave processing circuit, splice the output intermediate result to obtain another partial output result, and calculate a sum of the partial output result and the another partial output result to obtain a t-time hidden layer output result.
According to the technical scheme, the operation unit is set to a master-slave structure. For the forward operation of the recurrent neural network, the input data of the current moment and the hidden-layer output result of the previous moment are split and processed in parallel, so that the part with the larger amount of computation can be computed in parallel by the master processing circuit and the slave processing circuit. This increases the operation speed, saves operation time, and reduces power consumption.
In the forward operation, after the hidden layer of the previous time t-1 has been executed, the operation instruction of the current time t takes the output result of the previous hidden-layer operation as one part of the input neurons of the current moment; the other part of the input neurons is the input data of the input layer at time t. The two parts of input neurons are then each multiplied with the weight to obtain two operation results, and the two operation results are added to obtain the output result of the hidden layer at time t. The output result of the hidden layer at time t in turn serves as part of the input neurons of the hidden-layer operation at the next time t+1.
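The time-unrolled behaviour described above — the hidden output at time t becoming part of the input neurons at time t+1 — can be sketched as a loop. Names, the tanh activation, and the two weight matrices are illustrative assumptions:

```python
import numpy as np

def rnn_forward_sequence(xs, h0, w_x, w_h):
    """Unrolled forward pass: the hidden output of each time step is fed
    back as part of the input neurons of the next time step."""
    h = h0
    outputs = []
    for x_t in xs:                        # one hidden-layer operation per time step
        h = np.tanh(w_x @ x_t + w_h @ h)  # current input combined with previous output
        outputs.append(h)
    return outputs
```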
For the operation of the recurrent neural network, if the recurrent neural network has a plurality of hidden layers, the input data and output results of those hidden-layer operations do not refer to the input neurons of the input layer or the output neurons of the output layer of the whole network; rather, for any two adjacent hidden layers, the output result of the hidden layer at the previous moment is part of the input neurons of the hidden layer at the current moment. That is, except for the 1st hidden layer, each hidden layer can serve as an input layer, with the next layer as its corresponding output layer.
Optionally, the main processing circuit is further configured to send the hidden layer output result to an h +1 th hidden layer (i.e., a hidden layer at the time t + 1).
Optionally, the main processing circuit is further configured to perform a subsequent operation on the hidden layer output result to obtain the output result O_i^t of the output layer of the recurrent neural network operation.
Optionally, the computing device may further include: the storage unit 10 and the direct memory access unit 50, the storage unit 10 may include: one or any combination of a register and a cache, specifically, the cache is used for storing a calculation instruction; the register is used for storing the input data and a scalar; the cache is a scratch pad cache. The direct memory access unit 50 is used to read or store data from the storage unit 10.
Optionally, the controller unit includes: an instruction storage unit 110, an instruction processing unit 111, and a storage queue unit 113;
an instruction storage unit 110, configured to store a computation instruction associated with the recurrent neural network operation;
the instruction processing unit 111 is configured to analyze the calculation instruction to obtain a plurality of operation instructions;
a store queue unit 113, configured to store an instruction queue, the instruction queue comprising a plurality of operation instructions or calculation instructions to be executed in the order of the queue.
In one alternative, the structure of the calculation instruction may be as shown in the following table.
Operation code | Register or immediate | Register/immediate | ...
The ellipses in the above table indicate that multiple registers or immediate numbers may be included.
In another alternative, the computing instructions may include: one or more operation domains and an opcode. The calculation instructions may include recurrent neural network instructions. As shown in table 1, register number 0, register number 1, register number 2, register number 3, and register number 4 may be operation domains. Each of register number 0, register number 1, register number 2, register number 3, and register number 4 may be a number of one or more registers.
The register may be an off-chip memory; in practical applications, it may also be an on-chip memory for storing data. The data may specifically be n-dimensional data, where n is an integer greater than or equal to 1. For example, when n = 1 the data is 1-dimensional, i.e., a vector; when n = 2 it is 2-dimensional, i.e., a matrix; and when n >= 3 it is a multidimensional tensor.
Optionally, the controller unit may further include:
the dependency processing unit 112 is configured to determine whether a first operation instruction is associated with a zeroth operation instruction before the first operation instruction when there are multiple operation instructions, if so, cache the first operation instruction in the instruction storage unit, and after the zeroth operation instruction is executed, extract the first operation instruction from the instruction storage unit and transmit the first operation instruction to the operation unit;
the determining whether the first operation instruction has an association relationship with a zeroth operation instruction before the first operation instruction comprises:
extracting, according to the first operation instruction, a first storage address interval of the data (for example, a matrix) required by the first operation instruction, and extracting, according to the zeroth operation instruction, a zeroth storage address interval of the matrix required by the zeroth operation instruction; if the first storage address interval and the zeroth storage address interval have an overlapping area, determining that the first operation instruction and the zeroth operation instruction have an association relation, and otherwise determining that they do not.
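The association test above is a plain interval-overlap check on storage addresses. A sketch, assuming half-open `(start, end)` address intervals (the text does not specify whether the interval endpoints are inclusive):

```python
def has_dependency(first_interval, zeroth_interval):
    """Return True iff the two instructions' required-data storage address
    intervals overlap, i.e. the first instruction depends on the zeroth.

    Intervals are (start, end) with start <= end, end exclusive (an assumption).
    """
    s0, e0 = zeroth_interval
    s1, e1 = first_interval
    return s1 < e0 and s0 < e1  # standard half-open interval overlap test
```

When `has_dependency` returns True, the first instruction would be cached until the zeroth has executed, as described above.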
In another alternative embodiment, the arithmetic unit 12 may include a master processing circuit 101 and a plurality of slave processing circuits 102, as shown in fig. 3. In one embodiment, as shown in FIG. 3, the plurality of slave processing circuits are distributed in an array; each slave processing circuit is connected with the adjacent slave processing circuits, and the master processing circuit is connected with k slave processing circuits among the plurality of slave processing circuits. As shown in fig. 3, the k slave processing circuits comprise only the n slave processing circuits in the 1st row, the n slave processing circuits in the m-th row, and the m slave processing circuits in the 1st column; that is, the k slave processing circuits are the slave processing circuits directly connected with the master processing circuit.
The k slave processing circuits are used for forwarding the input data blocks, the output data blocks, the weights, and the intermediate results between the master processing circuit and the plurality of slave processing circuits.
Optionally, as shown in fig. 3a, the main processing circuit may further include: one or any combination of the conversion processing circuit 110, the activation processing circuit 111, and the addition processing circuit 112;
the conversion processing circuit 110 is configured to perform conversion processing on data, specifically: input data X received by the main processing circuiti tThe weight W orOutput result Oi t-1An interchange between the first data structure and the second data structure (e.g., a conversion of continuous data to discrete data) is performed.
An activation processing circuit 111 for performing an activation operation of data in the main processing circuit;
and an addition processing circuit 112 for performing addition operation or accumulation operation.
In another embodiment, the operation instruction is a matrix-multiply-matrix instruction, an accumulation instruction, an activation instruction, or the like.
In an alternative embodiment, as shown in fig. 4a, the arithmetic unit comprises: a tree module 40, the tree module comprising: a root port 401 and a plurality of branch ports 402, wherein the root port of the tree module is connected with the main processing circuit, and the branch ports of the tree module are respectively connected with one of the plurality of slave processing circuits;
the tree module has a transceiving function, for example, as shown in fig. 4a, the tree module is a transmitting function, and as shown in fig. 4b, the tree module is a receiving function.
The tree module is used for forwarding the input data block, the output data block, the weight and the intermediate result between the main processing circuit and the plurality of slave processing circuits.
Optionally, the tree module is an optional component of the computing device and may include at least one layer of nodes. The nodes are line structures with a forwarding function, and the nodes themselves may have no computing function. If the tree module has zero layers of nodes, the tree module is not needed.
Optionally, the tree module may have an n-ary tree structure, for example the binary tree structure shown in fig. 4c, or a ternary tree structure, where n may be an integer greater than or equal to 2. The present embodiment does not limit the specific value of n; the number of layers may be 2, and the slave processing circuits may be connected to nodes of layers other than the penultimate layer, for example the nodes of the last layer shown in fig. 4c.
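As a rough illustration of the n-ary tree sizing discussed above, the following helper computes how many layers of forwarding nodes an n-ary tree needs before its widest layer can reach a given number of slave processing circuits. It is a simplification: the text notes that slaves may attach at layers other than the widest one.

```python
def tree_layers(num_slaves, n=2):
    """Minimum number of node layers an n-ary tree module needs so that
    its widest layer has at least num_slaves nodes to attach slave
    processing circuits to (illustrative sizing only)."""
    if n < 2:
        raise ValueError("n must be >= 2 for an n-ary tree")
    layers, leaves = 1, 1  # layer 1 is the single root node
    while leaves < num_slaves:
        leaves *= n  # each layer fans out n-fold
        layers += 1
    return layers

# e.g. a binary tree (n=2) reaching 8 slaves needs 4 layers of nodes
```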
Optionally, the operation unit may carry a separate cache. As shown in fig. 2a, it may include a neuron caching unit 63, which caches the input neuron vector data and the output neuron value data of the slave processing circuit.
As shown in fig. 2a, the arithmetic unit may further include: and a weight buffer unit 64, configured to buffer weight data required by the slave processing circuit in the calculation process.
In an alternative embodiment, the arithmetic unit 12, as shown in fig. 5, may include a branch processing circuit 103; the specific connection structure is shown in fig. 5, wherein,
the branch processing circuit 103 may include a memory, as shown in fig. 5, the size of the memory of the branch processing circuit 103 may be between 2 and 2.5 times of the maximum data capacity that a single slave processing circuit needs to store, after such setting, the slave processing circuit does not need to set the memory, and compared with a branch processing circuit, the slave processing circuit only needs to set 2.5 * R (the capacity value required by a single slave processing circuit), if there is no branch processing circuit, 4 * R needs to be set, and the utilization rate of the register is low, so the structure can effectively reduce the total capacity of the memory and reduce the cost.
The branch processing circuit is used for forwarding an input data block, an output data block, a weight and an intermediate result between the main processing circuit and the plurality of slave processing circuits.
The splitting of the input data is described by way of an example; because the output result has the same data type as the input data, it is split in substantially the same way. Assume the data type is a matrix of size H * W. If the value of H is small (below a set threshold, for example 100), the matrix H * W may be split into H vectors, each vector being one row of the matrix. Each vector is an input data block, and the position of its first element is marked on the block, i.e. input data block_{h,w}, where h and w are the positions of the block's first element in the H direction and the W direction respectively; for the first input data block, h = 1 and w = 1. After receiving an input data block_{h,w}, the slave processing circuit multiplies and accumulates the block element by element with each column of the weight to obtain an input intermediate result_{w,i}, where w of the intermediate result is the w value of the input data block and i is the column number of the weight column used in the computation; the main processing circuit uses w and i to determine the position of the intermediate result in the hidden-layer output result. For example, input data block_{1,1} computed with the first column of the weight yields input intermediate result_{1,1}, which the main processing circuit places in the first row and first column of the hidden-layer output result.
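As a hedged illustration of this splitting scheme (plain Python with simplified 1-based position markers; the actual circuit behaviour may differ), the matrix is split into row blocks, each block is multiply-accumulated against every weight column as a slave circuit would do, and the results are reassembled by position:

```python
def split_rows(X):
    """Split matrix X (a list of rows) into blocks, each tagged with the
    1-based (h, w) position of its first element, as in the text."""
    return [((h + 1, 1), row) for h, row in enumerate(X)]

def layer_by_blocks(X, W):
    """Recombine the per-block intermediate results into the output of X @ W."""
    K = len(W[0])
    out = []
    for (h, _w), block in split_rows(X):
        # "slave circuit": multiply-accumulate the block with weight column i
        out.append([sum(x * W[j][i] for j, x in enumerate(block))
                    for i in range(K)])
    return out

X = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]           # H = 2, W = 3
W = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]         # 3 x 2 weight
print(layer_by_blocks(X, W))  # [[3.0, 3.0], [12.0, 12.0]]
```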
The application also provides a recurrent neural network operation method, applied to a computing device. The recurrent neural network comprises: an input layer, a hidden layer, and an output layer, the hidden layer comprising H hidden layers. The computing device comprises: an arithmetic unit and a controller unit; the arithmetic unit includes: a master processing circuit and a slave processing circuit. The computing device is used for executing the computation of the h-th hidden layer of the recurrent neural network, the time corresponding to the h-th hidden layer being t. The method, as shown in fig. 6, includes the following steps:
step S601, the controller unit obtains the input data X_i^t of the h-th hidden layer, the weight W of the h-th hidden layer, and the output result O_i^{t-1} of the (h-1)-th hidden layer, and sends the input data X_i^t, the weight W, and the output result O_i^{t-1} to the main processing circuit;
step S602, the main processing circuit splits the input data X_i^t into a plurality of input data blocks, splits the output result O_i^{t-1} into a plurality of output data blocks, distributes the plurality of input data blocks and the plurality of output data blocks to the slave processing circuits, and broadcasts the weight W to the slave processing circuits;
step S603, the slave processing circuit multiplies the received input data block by the weight to obtain an input intermediate result, multiplies the received output data block by the weight to obtain an output intermediate result, and sends the input intermediate result and the output intermediate result to the master processing circuit;
and step S604, the main processing circuit obtains a part of the output result from the input intermediate results of the slave processing circuits, splices the output intermediate results to obtain another part of the output result, and calculates the sum of the two parts to obtain the hidden-layer output result.
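Steps S601-S604 can be sketched in plain Python as follows. This is a software analogy, not the hardware behaviour: one shared weight W is applied to both the input blocks and the previous-output blocks as the text describes, and the block granularity is one row for simplicity.

```python
def matmul(A, B):
    """Naive matrix product of two list-of-lists matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def hidden_layer_step(X_t, O_prev, W):
    # S602: the "main processing circuit" splits X_t and O_{t-1} into blocks
    input_blocks = [[row] for row in X_t]
    output_blocks = [[row] for row in O_prev]
    # S603: "slave circuits" multiply each block by the broadcast weight W
    input_inter = [matmul(b, W)[0] for b in input_blocks]
    output_inter = [matmul(b, W)[0] for b in output_blocks]
    # S604: the main circuit splices the two partial results and sums them
    return [[a + b for a, b in zip(ri, ro)]
            for ri, ro in zip(input_inter, output_inter)]

X_t = [[1.0, 2.0]]               # input of the h-th hidden layer at time t
O_prev = [[0.5, 0.5]]            # output result at time t-1
W = [[1.0, 0.0], [0.0, 1.0]]     # identity weight, for a readable check
print(hidden_layer_step(X_t, O_prev, W))  # [[1.5, 2.5]]
```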
The application also discloses a recurrent neural network operation device, which comprises one or more of the computing devices mentioned in this application, and is used for acquiring data to be operated on and control information from other processing devices, executing a specified recurrent neural network operation, and transmitting the execution result to peripheral equipment through an I/O interface. Peripheral devices include cameras, displays, mice, keyboards, network cards, Wi-Fi interfaces, and servers. When more than one computing device is included, the computing devices may be linked through a specific structure and transmit data, for example interconnected through a PCIE bus, to support larger-scale recurrent neural network operations. In this case, they may share the same control system or have separate control systems, and may share memory or have separate memories for each accelerator. In addition, the interconnection mode can be any interconnection topology.
The recurrent neural network device has high compatibility and can be connected with various types of servers through PCIE interfaces.
The application also discloses a combined processing device, which comprises the above recurrent neural network operation device, a universal interconnection interface, and other processing devices. The recurrent neural network operation device interacts with the other processing devices to jointly complete the operation specified by the user. Fig. 7 is a schematic view of the combined processing device.
The other processing devices include one or more types of general-purpose/special-purpose processors such as central processing units (CPUs), graphics processing units (GPUs), and neural network processors. The number of processors included in the other processing devices is not limited. The other processing devices serve as the interface between the recurrent neural network operation device and external data and control, performing data transfer and basic control of the recurrent neural network operation device such as starting and stopping; the other processing devices can also cooperate with the recurrent neural network operation device to complete operation tasks jointly.
The universal interconnection interface is used for transmitting data and control instructions between the recurrent neural network operation device and the other processing devices. The recurrent neural network operation device acquires the required input data from the other processing devices and writes it into the on-chip storage device of the recurrent neural network operation device; it can obtain control instructions from the other processing devices and write them into an on-chip control cache; it can also read the data in its storage module and transmit that data to the other processing devices.
Optionally, the structure may further include a storage device, as shown in fig. 8, connected to the recurrent neural network operation device and the other processing devices respectively. The storage device is used to store data of the recurrent neural network operation device and the other processing devices, and is particularly suitable for data that cannot be held entirely in the internal storage of the recurrent neural network operation device or the other processing devices.
The combined processing device can serve as the SoC (system on chip) of equipment such as mobile phones, robots, unmanned aerial vehicles, and video monitoring equipment, effectively reducing the core area of the control part, increasing processing speed, and reducing overall power consumption. In this case, the universal interconnection interface of the combined processing device is connected to certain components of the equipment, such as a camera, display, mouse, keyboard, network card, or Wi-Fi interface.
In some embodiments, a chip is also claimed, which includes the above recurrent neural network device or combined processing device.
In some embodiments, a chip package structure is provided, which includes the above chip.
In some embodiments, a board card is provided, which includes the above chip package structure. Referring to fig. 9, fig. 9 provides a board card that may include, in addition to the chip 389, other components including but not limited to: a memory device 390, an interface device 391, and a control device 392;
the memory device 390 is connected to the chip in the chip package structure through a bus and is used for storing data. The memory device may include a plurality of groups of memory units 393. Each group of memory units is connected with the chip through a bus. It can be understood that each group of memory units may be DDR SDRAM (double data rate synchronous dynamic random access memory).
DDR can double the speed of SDRAM without increasing the clock frequency, because it allows data to be read on both the rising and falling edges of the clock pulse; DDR is thus twice as fast as standard SDRAM. In one embodiment, the storage device may include 4 groups of memory units. Each group of memory units may include a plurality of DDR4 granules (chips). In one embodiment, the chip may internally include four 72-bit DDR4 controllers, of which 64 bits are used for data transmission and 8 bits for ECC checking. It can be understood that when DDR4-3200 granules are adopted in each group of memory units, the theoretical bandwidth of data transmission can reach 25600 MB/s.
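The 25600 MB/s figure follows directly from the stated parameters, as this small check shows (64 data bits per controller, with the 8 ECC bits excluded from the data rate):

```python
# Theoretical bandwidth of one DDR4-3200 channel:
# 3200 mega-transfers per second on a 64-bit (8-byte) data bus.
transfers_per_second = 3200   # MT/s for DDR4-3200
bus_width_bytes = 64 // 8     # 64 data bits; the 8 ECC bits carry no payload
bandwidth_mb_s = transfers_per_second * bus_width_bytes
print(bandwidth_mb_s)  # 25600
```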
In one embodiment, each group of memory units includes a plurality of double data rate synchronous dynamic random access memories arranged in parallel. DDR can transfer data twice in one clock cycle. A controller for controlling the DDR is provided in the chip, for controlling the data transmission and data storage of each memory unit.
The interface device is electrically connected with the chip in the chip package structure. The interface device is used for realizing data transmission between the chip and an external device (such as a server or a computer). For example, in one embodiment, the interface device may be a standard PCIE interface, and the data to be processed is transmitted from the server to the chip through the standard PCIE interface to realize the data transfer. Preferably, when a PCIE 3.0 x16 interface is adopted for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device may be another interface; the present application does not limit the concrete form of the other interface, as long as the interface unit can realize the switching function. In addition, the calculation result of the chip is transmitted back to the external device (e.g., a server) by the interface device.
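The 16000 MB/s figure is the raw PCIe 3.0 x16 rate before line-coding overhead; a quick calculation reproduces it (the 128b/130b adjustment is added here for context and is not stated in the text):

```python
# Raw PCIe 3.0 x16 bandwidth: 8 GT/s per lane, 16 lanes, 1 bit per transfer.
gt_per_s = 8     # PCIe 3.0 raw rate per lane, in gigatransfers per second
lanes = 16
raw_gb_s = gt_per_s * lanes / 8   # divide by 8 bits per byte
print(raw_gb_s * 1000)            # 16000.0 MB/s (the round figure quoted)

# With PCIe 3.0's 128b/130b line coding, the usable rate is slightly lower.
effective_gb_s = raw_gb_s * 128 / 130
print(round(effective_gb_s, 2))   # 15.75
```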
The control device is electrically connected with the chip and is used for monitoring the state of the chip. Specifically, the chip and the control device may be electrically connected through an SPI interface. The control device may include a single-chip microcomputer (MCU). The chip may include a plurality of processing chips, a plurality of processing cores, or a plurality of processing circuits, and may drive a plurality of loads; therefore, the chip can be in different working states such as heavy load and light load. The control device can regulate and control the working states of the plurality of processing chips, the plurality of processing cores, and/or the plurality of processing circuits in the chip.
In some embodiments, an electronic device is provided that includes the above board card.
The electronic device comprises a data processing device, a robot, a computer, a printer, a scanner, a tablet computer, an intelligent terminal, a mobile phone, a vehicle data recorder, a navigator, a sensor, a camera, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
The vehicle comprises an airplane, a ship and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (18)

1. A computing device configured to perform a recurrent neural network operation, the recurrent neural network comprising: an input layer, a hidden layer, and an output layer, the hidden layer comprising H hidden layers, the computing device comprising: an arithmetic unit and a controller unit; the arithmetic unit includes: a master processing circuit and a slave processing circuit; the computing device is used for executing the computation of the h-th hidden layer of the recurrent neural network, the time corresponding to the h-th hidden layer being t;
the controller unit is used for acquiring input data X of the h hidden layeri tThe weight W of the h hidden layer and the output result O of the h-1 hidden layeri t-1
The controller unitAnd also for inputting data Xi tWeight W and output result Oi t-1Sending the data to the main processing circuit;
the main processing circuit is used for inputting data Xi tSplitting the data into a plurality of input data blocks and outputting a result Oi t-1Splitting the weight value W into a plurality of output data blocks, distributing the plurality of input data blocks and the plurality of output data blocks to a slave processing circuit, and broadcasting the weight value W to the slave processing circuit;
the slave processing circuit is used for performing multiplication operation on the received input data block and the weight to obtain an input intermediate result, performing multiplication operation on the received output data block and the weight to obtain an output intermediate result, and sending the input intermediate result and the output intermediate result to the master processing circuit;
the main processing circuit is also used for obtaining a part of output results from the input intermediate results of the auxiliary processing circuit, splicing the output intermediate results to obtain another part of output results, and calculating the sum of the part of output results and the other part of output results to obtain a hidden layer output result at the time t;
if the number of the slave processing circuits is multiple, the arithmetic unit further comprises one or more branch processing circuits, each branch processing circuit is connected with at least one slave processing circuit,
the branch processing circuit is used for forwarding an input data block, an output data block, a weight and an intermediate result between the main processing circuit and the plurality of slave processing circuits.
2. The computing device of claim 1,
the main processing circuit is also used for sending the hidden-layer output result to the (h+1)-th hidden layer.
3. The computing device of claim 1,
the main processing circuit is also used for executing subsequent operation on the hidden layer output result to obtain the output of the h output layer of the recurrent neural network operationResults Oi t
The subsequent processing comprises one or any combination of the following operations: a bias operation or an activation operation.
4. The apparatus of claim 1,
the main processing circuit is specifically configured to combine and sort the input intermediate results sent by the multiple processing circuits to obtain a partial output result, and combine and sort the output intermediate results sent by the multiple processing circuits to obtain another partial output result.
5. The apparatus of claim 1, wherein the main processing circuit comprises: a conversion processing circuit;
the conversion processing circuit is configured to perform conversion processing on data, and specifically includes: input data X received by the main processing circuiti tWeight W or output result Oi t-1An interchange between the first data structure and the second data structure is performed.
6. The apparatus of claim 1, wherein the slave processing circuit comprises: a multiplication processing circuit and an accumulation processing circuit;
the multiplication processing circuit is used for executing multiplication operation on the element values in the received input data block and the element values at the corresponding positions in the weight to obtain a product result; performing multiplication operation on the element values in the received output data block and the element values at the corresponding positions in the weight to obtain another multiplication result;
the accumulation processing circuit is used for executing accumulation operation on the product result to obtain the input intermediate result and executing accumulation operation on the other product result to obtain the output intermediate result.
7. A recurrent neural network operation apparatus, wherein the recurrent neural network operation apparatus comprises one or more computing apparatuses according to any one of claims 1 to 6, and is configured to acquire data to be operated and control information from other processing apparatuses, execute a specified recurrent neural network operation, and transmit the execution result to the other processing apparatuses via an I/O interface;
when the recurrent neural network device comprises a plurality of computing devices, the plurality of computing devices can be connected through a specific structure and transmit data;
the computing devices are interconnected through a PCIE bus of a fast peripheral equipment interconnection bus and transmit data so as to support operation of a larger-scale recurrent neural network; a plurality of the computing devices share the same control system or own respective control systems; the computing devices share the memory or own the memory; the plurality of computing devices are interconnected in any interconnection topology.
8. A combined processing device, characterized in that the combined processing device comprises the recurrent neural network operation device of claim 7, a universal interconnection interface and other processing devices;
and the recurrent neural network operation device interacts with the other processing devices to jointly complete the calculation operation specified by the user.
9. The combined processing device according to claim 8, further comprising: and the storage device is respectively connected with the recurrent neural network operation device and the other processing devices and is used for storing the data of the recurrent neural network operation device and the other processing devices.
10. A neural network chip comprising the computing device of claim 1 or the recurrent neural network computing device of claim 7 or the combinatorial processing device of claim 9.
11. An electronic device, characterized in that it comprises a chip according to claim 10.
12. A board card, characterized in that the board card comprises: a memory device, an interface device, a control device, and the neural network chip of claim 10;
wherein, the neural network chip is respectively connected with the storage device, the control device and the interface device;
the storage device is used for storing data;
the interface device is used for realizing data transmission between the chip and external equipment;
and the control device is used for monitoring the state of the chip.
13. The board card of claim 12,
the memory device includes: a plurality of groups of memory cells, each group of memory cells is connected with the chip through a bus, and the memory cells are: DDR SDRAM;
the chip includes: the DDR controller is used for controlling data transmission and data storage of each memory unit;
the interface device is as follows: a standard PCIE interface.
14. A recurrent neural network operation method applied to a computing device, wherein the recurrent neural network comprises: an input layer, a hidden layer, and an output layer, the hidden layer comprising H hidden layers; the computing device comprises: an arithmetic unit and a controller unit; the arithmetic unit includes: a master processing circuit and a slave processing circuit; the computing device is used for executing the computation of the h-th hidden layer of the recurrent neural network, the time corresponding to the h-th hidden layer being t; the method specifically comprises the following steps:
the controller unit obtains input data X of the h hidden layeri tThe weight W of the h hidden layer and the output result O of the h-1 hidden layeri t-1(ii) a Will input data Xi tWeight W and outputResult of generation Oi t-1Sending the data to the main processing circuit;
the main processing circuit inputs data Xi tSplitting the data into a plurality of input data blocks and outputting a result Oi t-1Splitting the weight value W into a plurality of output data blocks, distributing the plurality of input data blocks and the plurality of output data blocks to a slave processing circuit, and broadcasting the weight value W to the slave processing circuit;
the slave processing circuit performs multiplication operation on the received input data block and the weight to obtain an input intermediate result, performs multiplication operation on the received output data block and the weight to obtain an output intermediate result, and sends the input intermediate result and the output intermediate result to the master processing circuit;
the main processing circuit obtains a part of output results from the input intermediate results of the slave processing circuit, splices the output intermediate results to obtain another part of output results, and calculates the sum of the part of output results and the other part of output results to obtain hidden layer output results;
if the number of the slave processing circuits is multiple, the arithmetic unit further comprises one or more branch processing circuits, each branch processing circuit being connected to at least one slave processing circuit, the method further comprising:
the branch processing circuit forwards input data blocks, output data blocks, weights and intermediate results between the master processing circuit and the plurality of slave processing circuits.
15. The method of claim 14, further comprising:
and the main processing circuit sends the output result of the hidden layer to the h +1 th hidden layer.
16. The method of claim 14, further comprising:
the main processing circuit executes subsequent operation on the output result of the hidden layer to obtain an output result O of the h output layer of the recurrent neural network operationi t
The subsequent processing comprises one or any combination of the following operations: a bias operation or an activation operation;
the activating operation includes: sigmoid, tanh, relu, softmax, or linear activation operations.
17. The method of claim 14, wherein the main processing circuit obtaining a part of the output result from the input intermediate results of the slave processing circuits and splicing the output intermediate results to obtain another part of the output result specifically comprises:
the main processing circuit combines and sorts the input intermediate results sent by the processing circuits to obtain a part of output results, and combines and sorts the output intermediate results sent by the processing circuits to obtain another part of output results.
18. The method of claim 14, wherein the main processing circuit comprises: a conversion processing circuit; the method further comprises the following steps:
the conversion processing circuit performs conversion processing on data, specifically: input data X received by the main processing circuiti tWeight W or output result Oi t-1An interchange between the first data structure and the second data structure is performed.
CN201811429808.6A 2018-11-27 2018-11-27 Computing device and board card Active CN109522052B (en)

Publications (2)

Publication Number Publication Date
CN109522052A CN109522052A (en) 2019-03-26
CN109522052B true CN109522052B (en) 2020-05-08




Similar Documents

Publication Publication Date Title
CN109543832B (en) Computing device and board card
CN109522052B (en) Computing device and board card
CN110163363B (en) Computing device and method
CN109685201B (en) Operation method, device and related product
CN110059797B (en) Computing device and related product
CN111047022A (en) Computing device and related product
CN109670581B (en) Computing device and board card
CN111488976A (en) Neural network computing device, neural network computing method and related products
CN110059809B (en) Computing device and related product
CN109711540B (en) Computing device and board card
CN109753319B (en) Device for releasing dynamic link library and related product
CN111488963B (en) Neural network computing device and method
CN111930681A (en) Computing device and related product
CN111079908A (en) Network-on-chip data processing method, storage medium, computer device and apparatus
CN109740730B (en) Operation method, device and related product
CN111368967B (en) Neural network computing device and method
CN109740729B (en) Operation method, device and related product
CN109711538B (en) Operation method, device and related product
CN111047021A (en) Computing device and related product
CN111078625B (en) Network-on-chip processing system and network-on-chip data processing method
CN111368990B (en) Neural network computing device and method
CN111368986B (en) Neural network computing device and method
CN111368987B (en) Neural network computing device and method
CN111078623B (en) Network-on-chip processing system and network-on-chip data processing method
CN111078624B (en) Network-on-chip processing system and network-on-chip data processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 Room 644, No. 6 South Road, Academy of Sciences, Beijing

Applicant after: Zhongke Cambrian Technology Co., Ltd

Address before: 100000 Room 644, No. 6 South Road, Academy of Sciences, Beijing

Applicant before: Beijing Zhongke Cambrian Technology Co., Ltd.

GR01 Patent grant