WO2017124641A1 - Apparatus and method for executing artificial neural network reverse training - Google Patents

Apparatus and method for executing artificial neural network reverse training

Info

Publication number
WO2017124641A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
weight
module
gradient
data
Prior art date
Application number
PCT/CN2016/078279
Other languages
English (en)
French (fr)
Inventor
刘少礼
郭崎
陈云霁
陈天石
Original Assignee
北京中科寒武纪科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京中科寒武纪科技有限公司
Priority to EP16885905.6A priority Critical patent/EP3407268A4/en
Priority to EP21184650.6A priority patent/EP3940606A1/en
Priority to KR1020187015433A priority patent/KR102175044B1/ko
Publication of WO2017124641A1 publication Critical patent/WO2017124641A1/zh
Priority to US16/038,872 priority patent/US10713567B2/en
Priority to US16/441,019 priority patent/US10713568B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17306Intercommunication techniques
    • G06F15/17318Parallel communications techniques, e.g. gather, scatter, reduce, broadcast, multicast, all to all
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/22Microcontrol or microprogram arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30145Instruction analysis, e.g. decoding, instruction word fields
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3838Dependency mechanisms, e.g. register scoreboarding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

Definitions

  • the present invention generally relates to artificial neural networks, and in particular to an apparatus and method for performing artificial neural network reverse training.
  • Multi-layer artificial neural networks are widely used in fields such as pattern recognition, image processing, function approximation, and optimization computation. In recent years, multi-layer artificial networks have received increasingly broad attention from academia and industry because of their high recognition accuracy and good parallelizability.
  • One known method of supporting multi-layer artificial neural network reverse training is to use a general purpose processor.
  • the method supports the above algorithm by executing general-purpose instructions using a general-purpose register file and general-purpose functional units.
  • one of the disadvantages of this approach is that the computational performance of a single general-purpose processor is low and cannot meet the performance requirements of typical multi-layer artificial neural network operations; when multiple general-purpose processors execute in parallel, communication between the general-purpose processors in turn becomes a performance bottleneck.
  • in addition, the general-purpose processor must decode the multi-layer artificial neural network reverse operation into a long sequence of arithmetic and memory-access instructions, and the processor front-end decoding brings a large power consumption overhead.
  • Another known method of supporting multi-layer artificial neural network reverse training is to use a graphics processing unit (GPU).
  • the method supports the above algorithm by executing generic SIMD instructions using a general-purpose register file and general-purpose stream processing units.
  • since the GPU is a device dedicated to performing graphics and image operations and scientific computation, it has no dedicated support for multi-layer artificial neural network operations, so a large amount of front-end decoding work is still required to perform multi-layer artificial neural network operations, which introduces substantial additional overhead.
  • in addition, the GPU has only a small on-chip cache, so the model data (weights) of a multi-layer artificial neural network must be transferred repeatedly from off-chip; off-chip bandwidth becomes the main performance bottleneck and also incurs a huge power consumption overhead.
  • An aspect of the present invention provides an apparatus for performing artificial neural network reverse training, including an instruction cache unit, a controller unit, a direct memory access unit, an H-tree module, a main operation module, and a plurality of slave operation modules, wherein: the instruction cache unit is configured to cache instructions; the controller unit is configured to read instructions from the instruction cache unit and decode them into microinstructions that control the behavior of the H-tree module, the main operation module, and the slave operation modules; and the direct memory access unit is configured to write data from memory into the corresponding data cache units of the main operation module and each slave operation module, or to read data from those cache units back to memory.
  • the H-tree module is configured such that, at the stage where the reverse training of each layer of the neural network starts computation, the main operation module transmits this layer's input gradient vector to all the slave operation modules through the H-tree module, and after the computation of the slave operation modules is complete, the H-tree module adds the output gradient vector partial sums of the slave operation modules pairwise, stage by stage, to obtain this layer's output gradient vector; the main operation module is configured to use this layer's output gradient vector to complete subsequent calculations during the computation of each layer; and each slave operation module uses the same input gradient vector and its own weight data to compute the corresponding output gradient vector partial sum in parallel.
  • Another aspect of the present invention provides a method of performing a single layer artificial neural network reverse training using the above apparatus.
  • Another aspect of the present invention provides a method of performing multi-layer artificial neural network reverse training using the above apparatus.
  • FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing artificial neural network reverse training in accordance with an embodiment of the present invention.
  • FIG. 2 is a diagram schematically showing the structure of an H-tree module in an apparatus for performing artificial neural network reverse training according to an embodiment of the present invention.
  • FIG. 3 illustrates an example block diagram of a main operational module structure in an apparatus for performing artificial neural network reverse training in accordance with an embodiment of the present invention.
  • FIG. 4 illustrates an example block diagram of a slave arithmetic module structure in an apparatus for performing artificial neural network reverse training in accordance with an embodiment of the present invention.
  • FIG. 5 illustrates an example block diagram of a neural network reverse training process in accordance with an embodiment of the present invention.
  • FIG. 6 shows a flow chart of a single layer artificial neural network operation in accordance with an embodiment of the present invention.
  • the reverse training of a multi-layer artificial neural network according to an embodiment of the present invention involves two or more layers of multiple neurons. For each layer, a weighted summation is first performed on the input gradient vector to calculate the output gradient vector of this layer.
  • multiplying this output gradient vector by the derivative values of the activation function of the next layer in the forward operation yields the input gradient vector of the next layer.
  • the input gradient vector is multiplied element-wise by the input neurons of the forward operation to obtain the gradient of this layer's weights, and the weights of this layer can then be updated according to the obtained gradient.
  • the apparatus includes an instruction cache unit 1, a controller unit 2, a direct memory access unit 3, an H-tree module 4, a main operation module 5, and a plurality of slave operation modules 6.
  • the instruction cache unit 1, the controller unit 2, the direct memory access unit 3, the H-tree module 4, the main arithmetic module 5, and the slave arithmetic module 6 can all be implemented by a hardware circuit (for example, an application-specific integrated circuit ASIC).
  • the instruction cache unit 1 reads in an instruction through the direct memory access unit 3 and caches the read instruction.
  • the controller unit 2 reads instructions from the instruction cache unit 1 and translates the instructions into micro-instructions that control the behavior of other modules, such as the direct memory access unit 3, the main arithmetic module 5, and the slave arithmetic module 6.
  • the direct memory access unit 3 can access the external address space, directly read and write data to each cache unit inside the device, and complete data loading and storage.
  • FIG. 2 schematically shows the structure of the H-tree module 4.
  • the H-tree module 4 constitutes a data path between the main arithmetic module 5 and the plurality of slave arithmetic modules 6, and has an H-tree structure.
  • the H-tree is a binary tree path composed of multiple nodes; each node sends upstream data identically to its two downstream nodes, merges the data returned by its two downstream nodes, and returns the result to its upstream node. For example, in the reverse operation of the neural network, the vectors returned by the two downstream nodes are added into one vector at the current node and returned to the upstream node.
  • at the stage where each layer of the artificial neural network starts computation, the input gradient in the main operation module 5 is sent to each slave operation module 6 through the H-tree module 4; when the computation of the slave operation modules 6 is complete, the output gradient vector partial sums output by each slave operation module 6 are added pairwise, stage by stage, in the H-tree module 4, i.e., all the output gradient vector partial sums are summed to form the final output gradient vector.
  • FIG. 3 shows an example block diagram of the structure of the main operation module 5 in the apparatus for performing artificial neural network reverse training according to an embodiment of the present invention.
  • the main operation module 5 includes an operation unit 51, a data dependency determination unit 52, and a neuron buffer unit 53.
  • the neuron buffer unit 53 is configured to buffer input data and output data used by the main operation module 5 in the calculation process.
  • the arithmetic unit 51 performs various arithmetic functions of the main arithmetic module.
  • the data dependency determination unit 52 is the port through which the operation unit 51 reads and writes the neuron buffer unit 53, and at the same time it guarantees that there are no consistency conflicts in the reading and writing of data in the neuron buffer unit 53. Specifically, the data dependency determination unit 52 determines whether there is a dependency between a microinstruction that has not yet been executed and the data of a microinstruction currently being executed; if there is none, the microinstruction is allowed to issue immediately; otherwise, the microinstruction may issue only after all the microinstructions on which it depends have completed.
  • for example, all microinstructions sent to the data dependency unit 52 are stored in an instruction queue inside the data dependency unit 52; in this queue, if the range of data read by a read instruction conflicts with the range of data written by a write instruction earlier in the queue, the read instruction can execute only after the write instruction on which it depends has been executed.
  • the data dependency determination unit 52 is also responsible for reading the input gradient vector from the neuron buffer unit 53 and sending it to the slave operation modules 6 through the H-tree module 4, while output data from the slave operation modules 6 is sent directly to the operation unit 51 through the H-tree module 4. Instructions output by the controller unit 2 are sent to the operation unit 51 and the data dependency determination unit 52 to control their behavior.
  • each slave operation module 6 includes an operation unit 61, a data dependency determination unit 62, a neuron buffer unit 63, a weight buffer unit 64, and a weight gradient buffer unit 65.
  • the arithmetic unit 61 receives the microinstructions issued by the controller unit 2 and performs an arithmetic logic operation.
  • the data dependency judging unit 62 is responsible for reading and writing operations on the cache unit in the calculation process.
  • the data dependency judging unit 62 ensures that there is no consistency conflict between the reading and writing of the cache unit.
  • the data dependency determination unit 62 determines whether there is a dependency between a microinstruction that has not yet been executed and the data of a microinstruction currently being executed; if there is none, the microinstruction is allowed to issue immediately; otherwise, the microinstruction may issue only after all the microinstructions on which it depends have completed.
  • for example, all microinstructions sent to the data dependency unit 62 are stored in an instruction queue inside the data dependency unit 62; in this queue, if the range of data read by a read instruction conflicts with the range of data written by a write instruction earlier in the queue, the read instruction can execute only after the write instruction on which it depends has been executed.
  • the neuron buffer unit 63 buffers the input gradient vector data and the output gradient vector partial sums computed by this slave operation module 6.
  • the weight buffer unit 64 buffers the weight vectors needed by this slave operation module 6 in the calculation process. Each slave operation module stores only the columns of the weight matrix corresponding to that slave operation module 6.
  • the weight gradient buffer unit 65 buffers the weight gradient data needed by the corresponding slave operation module when updating the weights.
  • the weight gradient data stored by each slave operation module 6 corresponds to the weight vectors it stores.
  • the slave operation modules 6 implement the first half, which can be done in parallel, of the process of computing the output gradient vector in the reverse training of each layer, as well as the updating of the weights. Taking a fully connected layer (MLP) of an artificial neural network as an example, the process is out_gradient = w * in_gradient, in which the multiplication of the weight matrix w and the input gradient vector in_gradient can be divided into unrelated parallel computation subtasks; each slave operation module computes only the products of the corresponding partial scalar elements of in_gradient with the columns of the weight matrix w corresponding to it. Each resulting output vector is a partial sum, still to be accumulated, of the final result; these partial sums are added pairwise, stage by stage, in the H-tree to obtain the final result, so the computation process becomes a parallel partial-sum computation followed by an accumulation.
  • each slave operation module 6 computes a partial sum of the output gradient vector, and all the partial sums are summed in the H-tree module 4 to obtain the final output gradient vector.
  • each slave operation module 6 also multiplies the input gradient vector by the outputs of each layer from the forward operation to compute the weight gradients, in order to update the weights stored by that slave operation module 6.
  • forward operation and reverse training are the two main processes of a neural network algorithm. To train (update) the weights in the network, the forward output of the input vector in the network formed by the current weights must first be computed, which is the forward process; the weights of each layer are then trained (updated) layer by layer in the reverse direction according to the difference between this output value and the label value of the input vector itself. During the forward computation, the output vector of each layer and the derivative values of the activation function are saved; these data are required by the reverse training process, so they are guaranteed to exist when reverse training begins.
  • the output values of each layer in the forward operation are data that already exist when the reverse operation begins; they can be cached in the main operation module through the direct memory access unit and sent to the slave operation modules through the H-tree.
  • the main operation module 5 performs subsequent calculation based on the output gradient vector, for example multiplying the output gradient vector by the derivative of the activation function of the forward operation to obtain the input gradient value of the next layer.
  • the derivative of the activation function of the forward operation is data that already exists when the reverse operation begins and can be cached in the main operation module through the direct memory access unit.
  • an instruction set for performing an artificial neural network inverse operation on the aforementioned apparatus includes the CONFIG instruction, the COMPUTE instruction, the IO instruction, the NOP instruction, the JUMP instruction, and the MOVE instruction, where:
  • the CONFIG command configures various constants required for current layer calculation before each layer of artificial neural network calculation begins;
  • the COMPUTE instruction completes the arithmetic and logic calculation of each layer of the artificial neural network;
  • the IO instruction realizes reading input data required for calculation from the external address space and storing the data back to the external space after the calculation is completed;
  • the NOP instruction is responsible for clearing the microinstructions currently loaded into all internal microinstruction buffer queues, ensuring that all instructions preceding the NOP instruction are completed.
  • the NOP instruction itself does not contain any operations;
  • the JUMP instruction is responsible for jumping the address of the next instruction that the controller will read from the instruction cache unit, and is used to implement control-flow jumps;
  • the MOVE instruction is responsible for carrying data of an address in the internal address space of the device to another address in the internal address space of the device.
  • the process is independent of the operation unit and does not occupy the resources of the operation unit during execution.
  • FIG. 5 illustrates an example block diagram of a neural network reverse training process in accordance with an embodiment of the present invention.
  • the process of computing the output gradient vector is out_gradient = w * in_gradient, in which the matrix-vector multiplication of the weight matrix w and the input gradient vector in_gradient can be divided into unrelated parallel computation subtasks; each slave operation module 6 computes a partial sum of the output gradient vector, and all the partial sums are summed in the H-tree module 4 to obtain the final output gradient vector. In FIG. 5, the output gradient vector of the previous layer, input gradient, is multiplied by the corresponding activation function derivative to obtain the input data of this layer, which is then multiplied by the weight matrix to obtain the output gradient vector.
  • the process of computing the weight update gradient is dw = x * in_gradient, in which each slave operation module 6 computes the update gradient of the weights of the part corresponding to this module. The slave operation module 6 multiplies the input gradient by the input neurons of the forward operation to compute the weight update gradient dw, and then updates the weight w using w, dw, and the weight update gradient dw' used in the previous weight update, according to the learning rate set by the instruction.
  • the input gradient ([input gradient0, ..., input gradient3] in FIG. 5) is the output gradient vector of the (n+1)-th layer; it is first multiplied by the derivative values of the n-th layer in the forward operation ([f'(out0), ..., f'(out3)] in FIG. 5) to obtain the input gradient vector of the n-th layer. This is completed in the main operation module 5, sent by the H-tree module 4 to the slave operation modules 6, and temporarily stored in the neuron buffer unit 63 of each slave operation module 6. Then the input gradient vector is multiplied by the weight matrix to obtain the output gradient vector of the n-th layer.
  • in this process, the i-th slave operation module computes the product of the i-th scalar of the input gradient vector and the column vector [w_i0, ..., w_iN] of the weight matrix, and the resulting output vectors are added pairwise, stage by stage, in the H-tree module 4 to obtain the final output gradient vector, output gradient ([output gradient0, ..., output gradient3] in FIG. 5).
  • at the same time, the slave operation modules 6 also need to update the weights stored in them; the process of computing the weight update gradient is dw_ij = x_j * in_gradient_i, where x_j is the j-th element of the input vector of the n-th layer in the forward operation (i.e., the output of the (n-1)-th layer), and in_gradient_i is the i-th element of the input gradient vector of the n-th layer in the reverse operation (i.e., the product of input gradient and the derivative f' in FIG. 5).
  • the input of the n-th layer in the forward operation is data that already exists when reverse training starts; it is sent to the slave operation modules 6 through the H-tree module 4 and temporarily stored in the neuron buffer unit 63.
  • in each slave operation module 6, after the computation of the output gradient vector partial sum is completed, the i-th scalar of the input gradient vector is multiplied by the input vector of the n-th layer of the forward operation to obtain the gradient vector dw for updating the weights, and the weights are updated accordingly.
  • FIG. 6 is a flow chart showing a single layer artificial neural network reverse training in accordance with one embodiment.
  • the flowchart depicts the process of implementing a single layer neural network reverse training as shown in FIG. 5 using the apparatus and instruction set of the present invention.
  • at step S1, an IO instruction is pre-stored at the first address of the instruction cache unit 1.
  • at step S2, the operation starts: the controller unit 2 reads this IO instruction from the first address of the instruction cache unit 1, and according to the decoded microinstruction, the direct memory access unit 3 reads all instructions related to this single-layer artificial neural network reverse training from the external address space and caches them in the instruction cache unit 1.
  • at step S3, the controller unit 2 then reads the next IO instruction from the instruction cache unit; according to the decoded microinstruction, the direct memory access unit 3 reads all the data required by the main operation module 5 from the external address space into the neuron buffer unit 53 of the main operation module 5, the data including the input neurons and activation function derivative values from the previous forward operation as well as the input gradient vector.
  • at step S4, the controller unit 2 then reads the next IO instruction from the instruction cache unit; according to the decoded microinstruction, the direct memory access unit 3 reads all the weight data and weight gradient data required by the slave operation modules 6 from the external address space, and stores them respectively into the weight buffer unit 64 and the weight gradient buffer unit 65 of the corresponding slave operation module 6.
  • at step S5, the controller unit 2 then reads the next CONFIG instruction from the instruction cache unit; according to the parameters in the decoded microinstruction, the operation units configure the values of their internal registers, including the various constants required by this layer's neural network calculation, the precision setting of this layer's calculation, the learning rate used when updating the weights, and so on.
  • at step S6, the controller unit 2 then reads the next COMPUTE instruction from the instruction cache unit; according to the decoded microinstruction, the main operation module 5 sends the input gradient vector and the input neurons of the forward operation to each slave operation module 6 through the H-tree module 4, and the input gradient vector and the forward-operation input neurons are stored into the neuron buffer unit 63 of each slave operation module 6.
  • at step S7, according to the microinstruction decoded from the COMPUTE instruction, the operation unit 61 of each slave operation module 6 reads the weight vector (i.e., the partial columns of the weight matrix stored by that slave operation module) from the weight buffer unit 64, completes the vector-times-scalar operation of the weight vector and the input gradient vector, and returns the output vector partial sum through the H-tree; at the same time, the slave operation module 6 multiplies the input gradient vector by the input neurons, and the resulting weight gradients are stored into the weight gradient buffer unit 65.
  • at step S8, in the H-tree module 4, the output gradient partial sums returned by each slave operation module 6 are added pairwise, stage by stage, to obtain the complete output gradient vector.
  • at step S9, the main operation module 5 obtains the value returned by the H-tree module 4; according to the microinstruction decoded from the COMPUTE instruction, it reads the activation function derivative values of the forward operation from the neuron buffer unit 53, multiplies the derivative values by the returned output vector to obtain the input gradient vector of the next layer of reverse training, and writes it back to the neuron buffer unit 53.
  • at step S10, the controller unit 2 then reads the next COMPUTE instruction from the instruction cache unit; according to the decoded microinstruction, the slave operation modules 6 read the weight w from the weight buffer unit 64, read the current weight gradient dw and the weight gradient dw' used by the previous weight update from the weight gradient buffer unit, and update the weight w.
  • at step S11, the controller unit then reads the next IO instruction from the instruction cache unit; according to the decoded microinstruction, the direct memory access unit 3 stores the output gradient vector in the neuron buffer unit 53 to the specified address in the external address space, and the operation ends.
  • for a multi-layer artificial neural network, the implementation process is similar to that of a single-layer neural network: after the previous layer of the artificial neural network finishes executing, the operation instructions of the next layer take the output gradient vector calculated in the main operation module as the input gradient vector of the next layer's training and perform the calculation process above, with the weight addresses and weight gradient addresses in the instructions changed to the addresses corresponding to that layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)
  • Complex Calculations (AREA)
  • Feedback Control In General (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An apparatus for executing artificial neural network reverse training, comprising an instruction cache unit (1), a controller unit (2), a direct memory access unit (3), an H-tree module (4), a main operation module (5), and a plurality of slave operation modules (6). The apparatus can be used to implement reverse training of a multi-layer artificial neural network. For each layer, a weighted summation is first performed on the input gradient vector to calculate the output gradient vector of this layer. Multiplying this output gradient vector by the derivative values of the activation function of the next layer in the forward operation yields the input gradient vector of the next layer. The input gradient vector is multiplied element-wise by the input neurons of the forward operation to obtain the gradient of this layer's weights, and the weights of this layer can then be updated according to the obtained gradient.

Description

Apparatus and method for executing artificial neural network reverse training

Technical Field

The present invention relates generally to artificial neural networks, and in particular to an apparatus and method for executing artificial neural network reverse training.

Background

Multi-layer artificial neural networks are widely used in fields such as pattern recognition, image processing, function approximation, and optimization computation. In recent years, multi-layer artificial networks have received increasingly broad attention from academia and industry because of their high recognition accuracy and good parallelizability.

One known method of supporting multi-layer artificial neural network reverse training is to use a general-purpose processor. This method supports the above algorithm by executing general-purpose instructions using a general-purpose register file and general-purpose functional units. One of the disadvantages of this method is that the computational performance of a single general-purpose processor is low and cannot meet the performance requirements of typical multi-layer artificial neural network operations. When multiple general-purpose processors execute in parallel, communication between the general-purpose processors in turn becomes a performance bottleneck. In addition, the general-purpose processor must decode the multi-layer artificial neural network reverse operation into a long sequence of arithmetic and memory-access instructions, and the processor front-end decoding brings a large power consumption overhead.

Another known method of supporting multi-layer artificial neural network reverse training is to use a graphics processing unit (GPU). This method supports the above algorithm by executing generic SIMD instructions using a general-purpose register file and general-purpose stream processing units. Since the GPU is a device dedicated to performing graphics and image operations and scientific computation, it has no dedicated support for multi-layer artificial neural network operations, so a large amount of front-end decoding work is still required to perform multi-layer artificial neural network operations, which introduces substantial additional overhead. In addition, the GPU has only a small on-chip cache; the model data (weights) of a multi-layer artificial neural network must be transferred repeatedly from off-chip, so off-chip bandwidth becomes the main performance bottleneck while also incurring a huge power consumption overhead.
Summary of the Invention

One aspect of the present invention provides an apparatus for executing artificial neural network reverse training, including an instruction cache unit, a controller unit, a direct memory access unit, an H-tree module, a main operation module, and a plurality of slave operation modules, wherein: the instruction cache unit is used to cache instructions; the controller unit is used to read instructions from the instruction cache unit and decode them into microinstructions that control the behavior of the H-tree module, the main operation module, and the slave operation modules; the direct memory access unit is used to write data from memory into the corresponding data cache units of the main operation module and each slave operation module, or to read data from those cache units back to memory; the H-tree module is used such that, at the stage where the reverse training of each layer of the neural network starts computation, the main operation module transmits this layer's input gradient vector to all the slave operation modules through the H-tree module, and after the computation of the slave operation modules is complete, the H-tree module adds the output gradient vector partial sums of the slave operation modules pairwise, stage by stage, to obtain this layer's output gradient vector; the main operation module is used to complete subsequent calculations with this layer's output gradient vector during the computation of each layer; and each slave operation module uses the same input gradient vector and its own weight data to compute the corresponding output gradient vector partial sum in parallel.

Another aspect of the present invention provides a method of executing single-layer artificial neural network reverse training using the above apparatus.

Another aspect of the present invention provides a method of executing multi-layer artificial neural network reverse training using the above apparatus.
Brief Description of the Drawings

For a more complete understanding of the present invention and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1 shows an example block diagram of the overall structure of an apparatus for executing artificial neural network reverse training according to an embodiment of the present invention.

FIG. 2 schematically shows the structure of the H-tree module in an apparatus for executing artificial neural network reverse training according to an embodiment of the present invention.

FIG. 3 shows an example block diagram of the structure of the main operation module in an apparatus for executing artificial neural network reverse training according to an embodiment of the present invention.

FIG. 4 shows an example block diagram of the structure of a slave operation module in an apparatus for executing artificial neural network reverse training according to an embodiment of the present invention.

FIG. 5 shows an example block diagram of a neural network reverse training process according to an embodiment of the present invention.

FIG. 6 shows a flowchart of a single-layer artificial neural network operation according to an embodiment of the present invention.

Throughout the drawings, the same devices, components, units, etc. are denoted by the same reference numerals.

Detailed Description

Other aspects, advantages, and salient features of the present invention will become apparent to those skilled in the art from the following detailed description of exemplary embodiments of the invention taken in conjunction with the accompanying drawings.

In the present invention, the terms "comprise" and "include" and their derivatives are meant to be inclusive rather than limiting; the term "or" is inclusive, meaning and/or.

In this specification, the various embodiments described below for explaining the principles of the present invention are illustrative only and should in no way be construed as limiting the scope of the invention. The following description with reference to the accompanying drawings is intended to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. The following description includes numerous specific details to assist understanding, but these details are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. Moreover, the same reference numerals are used for similar functions and operations throughout the drawings.
根据本发明实施例的多层人工神经网络的反向训练,包括两层或者两层以上的多个神经元。对于每一层来说,首先对输入梯度向量进行加权求和计算出本层的输出梯度向量。该输出梯度向量乘以下一层在正向运算时的激活函数的导数值可以得到下一层的输入梯度向量。将输入梯度向量与正向运算时的输入神经元对位相乘得到本层权值的梯度,然后可以根据所得到的本层权值的梯度来更新本层的权值。
FIG. 1 shows an example block diagram of the overall structure of an apparatus for executing artificial neural network reverse training according to an embodiment of the present invention. As shown in FIG. 1, the apparatus includes an instruction cache unit 1, a controller unit 2, a direct memory access unit 3, an H-tree module 4, a main operation module 5, and a plurality of slave operation modules 6. The instruction cache unit 1, the controller unit 2, the direct memory access unit 3, the H-tree module 4, the main operation module 5, and the slave operation modules 6 may all be implemented in hardware circuits (for example, an application-specific integrated circuit, ASIC).

The instruction cache unit 1 reads in instructions through the direct memory access unit 3 and caches the instructions that have been read in.

The controller unit 2 reads instructions from the instruction cache unit 1 and decodes them into microinstructions that control the behavior of other modules, such as the direct memory access unit 3, the main operation module 5, and the slave operation modules 6.

The direct memory access unit 3 can access the external address space and read and write data directly to each cache unit inside the apparatus, completing the loading and storing of data.

FIG. 2 schematically shows the structure of the H-tree module 4. The H-tree module 4 forms the data path between the main operation module 5 and the plurality of slave operation modules 6 and has an H-tree structure. The H-tree is a binary tree path composed of multiple nodes; each node sends upstream data identically to its two downstream nodes, merges the data returned by its two downstream nodes, and returns the result to its upstream node. For example, during the reverse operation of the neural network, the vectors returned by the two downstream nodes are added into one vector at the current node and returned to the upstream node. At the stage where each layer of the artificial neural network starts computation, the input gradient in the main operation module 5 is sent to each slave operation module 6 through the H-tree module 4; when the computation of the slave operation modules 6 is complete, the output gradient vector partial sums output by each slave operation module 6 are added pairwise, stage by stage, in the H-tree module 4, i.e., all the output gradient vector partial sums are summed to form the final output gradient vector.
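The stage-by-stage pairwise addition performed by the H-tree nodes can be modeled with the following sketch (the helper name h_tree_reduce is hypothetical, and a power-of-two number of slave modules is assumed so the binary tree is complete):

import numpy as np

def h_tree_reduce(partial_sums):
    # Each pass mimics one level of the tree: every internal node adds
    # the vectors returned by its two downstream children.
    level = list(partial_sums)
    while len(level) > 1:
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

# e.g., four slave modules each return a partial output gradient vector
parts = [np.array([1.0, 2.0]), np.array([0.5, 0.5]),
         np.array([2.0, 1.0]), np.array([1.5, 0.5])]
print(h_tree_reduce(parts))  # equals the elementwise sum of all parts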
FIG. 3 shows an example block diagram of the structure of the main operation module 5 in an apparatus for executing artificial neural network reverse training according to an embodiment of the present invention. As shown in FIG. 3, the main operation module 5 includes an operation unit 51, a data dependency determination unit 52, and a neuron buffer unit 53.

The neuron buffer unit 53 is used to buffer the input data and output data used by the main operation module 5 in the computation process. The operation unit 51 performs the various operation functions of the main operation module. The data dependency determination unit 52 is the port through which the operation unit 51 reads and writes the neuron buffer unit 53, and at the same time it can guarantee that there are no consistency conflicts in the reading and writing of data in the neuron buffer unit 53. Specifically, the data dependency determination unit 52 determines whether there is a dependency between a microinstruction that has not yet been executed and the data of a microinstruction currently being executed; if there is none, the microinstruction is allowed to issue immediately; otherwise, the microinstruction may issue only after all the microinstructions on which it depends have completed. For example, all microinstructions sent to the data dependency unit 52 are stored in an instruction queue inside the data dependency unit 52; in this queue, if the range of data read by a read instruction conflicts with the range of data written by a write instruction earlier in the queue, the read instruction can execute only after the write instruction on which it depends has been executed. At the same time, the data dependency determination unit 52 is also responsible for reading the input gradient vector from the neuron buffer unit 53 and sending it to the slave operation modules 6 through the H-tree module 4, while output data from the slave operation modules 6 is sent directly to the operation unit 51 through the H-tree module 4. Instructions output by the controller unit 2 are sent to the operation unit 51 and the data dependency determination unit 52 to control their behavior.
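The queue-based read-after-write check just described can be sketched as follows (a software model only; the type Access, the half-open address ranges, and the function names are assumptions of this illustration, not the hardware interface):

from dataclasses import dataclass

@dataclass
class Access:
    is_write: bool
    start: int   # first address touched
    end: int     # one past the last address touched

def ranges_overlap(a, b):
    return a.start < b.end and b.start < a.end

def may_issue(new_op, queue):
    # A microinstruction may issue immediately only if its data range
    # does not conflict with any earlier, still-pending access where at
    # least one of the two is a write.
    for earlier in queue:
        if (earlier.is_write or new_op.is_write) and ranges_overlap(earlier, new_op):
            return False  # wait until the conflicting instruction completes
    return True

# a read of [0, 4) must wait behind a pending write to [2, 6)
pending = [Access(is_write=True, start=2, end=6)]
print(may_issue(Access(is_write=False, start=0, end=4), pending))  # False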
FIG. 4 shows an example block diagram of the structure of a slave operation module 6 in an apparatus for executing artificial neural network reverse training according to an embodiment of the present invention. As shown in FIG. 4, each slave operation module 6 includes an operation unit 61, a data dependency determination unit 62, a neuron buffer unit 63, a weight buffer unit 64, and a weight gradient buffer unit 65.

The operation unit 61 receives the microinstructions issued by the controller unit 2 and performs arithmetic and logic operations.

The data dependency determination unit 62 is responsible for the read and write operations on the cache units in the computation process. The data dependency determination unit 62 guarantees that there are no consistency conflicts in the reading and writing of the cache units. Specifically, the data dependency determination unit 62 determines whether there is a dependency between a microinstruction that has not yet been executed and the data of a microinstruction currently being executed; if there is none, the microinstruction is allowed to issue immediately; otherwise, the microinstruction may issue only after all the microinstructions on which it depends have completed. For example, all microinstructions sent to the data dependency unit 62 are stored in an instruction queue inside the data dependency unit 62; in this queue, if the range of data read by a read instruction conflicts with the range of data written by a write instruction earlier in the queue, the read instruction can execute only after the write instruction on which it depends has been executed.

The neuron buffer unit 63 buffers the input gradient vector data and the output gradient vector partial sums computed by this slave operation module 6.

The weight buffer unit 64 buffers the weight vectors needed by this slave operation module 6 in the computation process. Each slave operation module stores only the columns of the weight matrix corresponding to that slave operation module 6.

The weight gradient buffer unit 65 buffers the weight gradient data needed by the corresponding slave operation module in the process of updating the weights. The weight gradient data stored by each slave operation module 6 corresponds to the weight vectors it stores.

The slave operation modules 6 implement the first half, which can be done in parallel, of the process of computing the output gradient vector in the reverse training of each layer of the artificial neural network, as well as the updating of the weights. Taking a fully connected layer (MLP) of an artificial neural network as an example, the process is out_gradient = w * in_gradient, in which the multiplication of the weight matrix w and the input gradient vector in_gradient can be divided into unrelated parallel computation subtasks; out_gradient and in_gradient are column vectors, and each slave operation module computes only the products of the corresponding partial scalar elements of in_gradient with the columns of the weight matrix w corresponding to it. Each resulting output vector is a partial sum, still to be accumulated, of the final result; these partial sums are added pairwise, stage by stage, in the H-tree to obtain the final result, so the computation process becomes a parallel partial-sum computation followed by an accumulation. Each slave operation module 6 computes a partial sum of the output gradient vector, and all the partial sums complete the summation in the H-tree module 4 to obtain the final output gradient vector. Each slave operation module 6 also multiplies the input gradient vector by the outputs of each layer from the forward operation to compute the weight gradients, in order to update the weights stored by that slave operation module 6. Forward operation and reverse training are the two main processes of a neural network algorithm: to train (update) the weights in the network, the forward output of the input vector in the network formed by the current weights must first be computed, which is the forward process, after which the weights of each layer are trained (updated) layer by layer in the reverse direction according to the difference between the output value and the label value of the input vector itself. During the forward computation, the output vector of each layer and the derivative values of the activation function are saved; these data are required by the reverse training process, so they are guaranteed to exist when reverse training begins. The output values of each layer in the forward operation are data that already exist when the reverse operation begins; they can be cached in the main operation module through the direct memory access unit and sent to the slave operation modules through the H-tree. The main operation module 5 performs subsequent calculation based on the output gradient vector, for example multiplying the output gradient vector by the derivative of the activation function of the forward operation to obtain the input gradient value of the next layer. The derivative of the activation function of the forward operation is data that already exists when the reverse operation begins and can be cached in the main operation module through the direct memory access unit.
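A sketch of how the matrix-vector product splits into per-slave partial sums (the function slave_partial and the row split are assumptions of this illustration; slave i is taken to hold its slice of w together with the matching scalars of in_gradient):

import numpy as np

def slave_partial(w_slice, in_gradient_slice):
    # Each slave multiplies its scalars of in_gradient by its stored
    # columns of w and returns one partial sum of the output gradient.
    return w_slice.T @ in_gradient_slice

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))      # 4 forward outputs, 3 forward inputs
in_gradient = rng.standard_normal(4)

# split the work between two slave modules, then reduce as in the H-tree
parts = [slave_partial(w[0:2], in_gradient[0:2]),
         slave_partial(w[2:4], in_gradient[2:4])]
out_gradient = parts[0] + parts[1]
assert np.allclose(out_gradient, w.T @ in_gradient)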
According to an embodiment of the present invention, an instruction set for executing artificial neural network reverse operations on the aforementioned apparatus is also provided. The instruction set includes the CONFIG instruction, the COMPUTE instruction, the IO instruction, the NOP instruction, the JUMP instruction, and the MOVE instruction, where:

the CONFIG instruction configures, before the computation of each layer of the artificial neural network begins, the various constants needed by the current layer's computation;

the COMPUTE instruction completes the arithmetic and logic computation of each layer of the artificial neural network;

the IO instruction reads in, from the external address space, the input data needed by the computation, and stores data back to the external space after the computation is complete;

the NOP instruction is responsible for flushing the microinstructions currently loaded into all internal microinstruction buffer queues, guaranteeing that all instructions before the NOP instruction have completed; the NOP instruction itself contains no operation;

the JUMP instruction is responsible for jumping the address of the next instruction that the controller will read from the instruction cache unit, and is used to implement control-flow jumps;

the MOVE instruction is responsible for moving data at one address in the apparatus's internal address space to another address in the apparatus's internal address space; this process is independent of the operation unit and occupies no operation-unit resources during execution.
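To make the roles of these instructions concrete, a single-layer flow such as the one in FIG. 6 could be modeled in software like this (the opcodes come from the instruction set above, but the list encoding and the annotations are hypothetical, not the binary instruction format):

from enum import Enum, auto

class Op(Enum):
    CONFIG = auto()   # set per-layer constants (precision, learning rate, ...)
    COMPUTE = auto()  # arithmetic/logic computation of one layer
    IO = auto()       # move data between external space and on-chip buffers
    NOP = auto()      # drain the microinstruction queues; no operation itself
    JUMP = auto()     # redirect the next instruction-cache fetch address
    MOVE = auto()     # copy data between internal addresses without the ALU

# one possible rendering of the single-layer reverse-training flow
program = [
    (Op.IO, "load the instructions for this layer"),
    (Op.IO, "load main-module data: neurons, derivatives, input gradient"),
    (Op.IO, "load slave weights and weight gradients"),
    (Op.CONFIG, "constants, precision, learning rate"),
    (Op.COMPUTE, "broadcast gradient, partial sums, derivative multiply"),
    (Op.COMPUTE, "weight update with dw and dw'"),
    (Op.IO, "store the output gradient vector"),
]
for op, note in program:
    print(f"{op.name:7s} {note}")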
FIG. 5 shows an example block diagram of a neural network reverse training process according to an embodiment of the present invention. The process of computing the output gradient vector is out_gradient = w * in_gradient, in which the matrix-vector multiplication of the weight matrix w and the input gradient vector in_gradient can be divided into unrelated parallel computation subtasks; each slave operation module 6 computes a partial sum of the output gradient vector, and all the partial sums complete the summation in the H-tree module 4 to obtain the final output gradient vector. In FIG. 5, the output gradient vector of the previous layer, input gradient, is multiplied by the corresponding activation function derivative to obtain the input data of this layer, which is then multiplied by the weight matrix to obtain the output gradient vector. The process of computing the weight update gradient is dw = x * in_gradient, in which each slave operation module 6 computes the update gradient for the weights of the part corresponding to this module. The slave operation module 6 multiplies the input gradient by the input neurons of the forward operation to compute the weight update gradient dw, and then updates the weight w using w, dw, and the weight update gradient dw' used in the previous weight update, according to the learning rate set by the instruction.

Referring to FIG. 5, input gradient ([input gradient0, ..., input gradient3] in FIG. 5) is the output gradient vector of the (n+1)-th layer. This vector is first multiplied by the derivative values of the n-th layer in the forward operation process ([f'(out0), ..., f'(out3)] in FIG. 5) to obtain the input gradient vector of the n-th layer; this is completed in the main operation module 5, sent by the H-tree module 4 to the slave operation modules 6, and temporarily stored in the neuron buffer unit 63 of each slave operation module 6. Then, the input gradient vector is multiplied by the weight matrix to obtain the output gradient vector of the n-th layer. In this process, the i-th slave operation module computes the product of the i-th scalar of the input gradient vector and the column vector [w_i0, ..., w_iN] of the weight matrix, and the resulting output vectors are added pairwise, stage by stage, in the H-tree module 4 to obtain the final output gradient vector, output gradient ([output gradient0, ..., output gradient3] in FIG. 5).

At the same time, the slave operation modules 6 also need to update the weights stored in this module. The process of computing the weight update gradient is dw_ij = x_j * in_gradient_i, where x_j is the j-th element of the input vector of the n-th layer in the forward operation (i.e., the output of the (n-1)-th layer), and in_gradient_i is the i-th element of the input gradient vector of the n-th layer in the reverse operation (i.e., the product of input gradient and the derivative f' in FIG. 5). The input of the n-th layer in the forward operation is data that already exists when reverse training starts; it is sent to the slave operation modules 6 through the H-tree module 4 and temporarily stored in the neuron buffer unit 63. Then, in each slave operation module 6, after the computation of the output gradient vector partial sum is completed, the i-th scalar of the input gradient vector is multiplied by the input vector of the n-th layer of the forward operation to obtain the gradient vector dw for updating the weights, and the weights are updated accordingly.
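The text does not spell out the exact rule that combines w, dw, the previous gradient dw', and the learning rate; the sketch below therefore assumes a common momentum-style form purely for illustration (the coefficient mu is an assumption, not taken from this description):

import numpy as np

def update_weights(w, dw, dw_prev, lr, mu=0.9):
    # Assumed momentum-style rule: blend this update's gradient dw with
    # the gradient dw' kept from the previous update, scaled by the
    # learning rate lr configured via the CONFIG instruction.
    step = lr * dw + mu * dw_prev
    return w - step, step  # new weights, plus the dw' to store for next time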
FIG. 6 is a flowchart of single-layer artificial neural network reverse training according to one embodiment. The flowchart describes the process of implementing the single-layer neural network reverse training shown in FIG. 5 using the apparatus and instruction set of the present invention.

At step S1, an IO instruction is pre-stored at the first address of the instruction cache unit 1.

At step S2, the operation starts: the controller unit 2 reads this IO instruction from the first address of the instruction cache unit 1, and according to the decoded microinstruction, the direct memory access unit 3 reads from the external address space all the instructions related to this single-layer artificial neural network reverse training and caches them in the instruction cache unit 1.

At step S3, the controller unit 2 then reads the next IO instruction from the instruction cache unit; according to the decoded microinstruction, the direct memory access unit 3 reads from the external address space all the data needed by the main operation module 5 into the neuron buffer unit 53 of the main operation module 5, the data including the input neurons and activation function derivative values from the previous forward operation as well as the input gradient vector.

At step S4, the controller unit 2 then reads the next IO instruction from the instruction cache unit; according to the decoded microinstruction, the direct memory access unit 3 reads from the external address space all the weight data and weight gradient data needed by the slave operation modules 6, and stores them respectively into the weight buffer unit 64 and the weight gradient buffer unit 65 of the corresponding slave operation module 6.

At step S5, the controller unit 2 then reads the next CONFIG instruction from the instruction cache unit; according to the parameters in the decoded microinstruction, the operation units configure the values of their internal registers, including the various constants needed by this layer's neural network computation, the precision setting of this layer's computation, the learning rate used when updating the weights, and so on.

At step S6, the controller unit 2 then reads the next COMPUTE instruction from the instruction cache unit; according to the decoded microinstruction, the main operation module 5 sends the input gradient vector and the input neurons of the forward operation to each slave operation module 6 through the H-tree module 4, and the input gradient vector and the forward-operation input neurons are stored into the neuron buffer unit 63 of each slave operation module 6.

At step S7, according to the microinstruction decoded from the COMPUTE instruction, the operation unit 61 of each slave operation module 6 reads the weight vector (i.e., the partial columns of the weight matrix stored by that slave operation module) from the weight buffer unit 64, completes the vector-times-scalar operation of the weight vector and the input gradient vector, and returns the output vector partial sum through the H-tree; at the same time, the slave operation module 6 multiplies the input gradient vector by the input neurons, and the resulting weight gradients are stored into the weight gradient buffer unit 65.

At step S8, in the H-tree module 4, the output gradient partial sums returned by each slave operation module 6 are added pairwise, stage by stage, to obtain the complete output gradient vector.

At step S9, the main operation module 5 obtains the value returned by the H-tree module 4; according to the microinstruction decoded from the COMPUTE instruction, it reads the activation function derivative values of the forward operation from the neuron buffer unit 53, multiplies the derivative values by the returned output vector to obtain the input gradient vector of the next layer of reverse training, and writes it back to the neuron buffer unit 53.

At step S10, the controller unit 2 then reads the next COMPUTE instruction from the instruction cache unit; according to the decoded microinstruction, the slave operation modules 6 read the weight w from the weight buffer unit 64, read the current weight gradient dw and the weight gradient dw' used by the previous weight update from the weight gradient buffer unit, and update the weight w.

At step S11, the controller unit then reads the next IO instruction from the instruction cache unit; according to the decoded microinstruction, the direct memory access unit 3 stores the output gradient vector in the neuron buffer unit 53 to the specified address in the external address space, and the operation ends.
For a multi-layer artificial neural network, the implementation process is similar to that of a single-layer neural network. After the previous layer of the artificial neural network finishes executing, the operation instructions of the next layer take the output gradient vector computed in the main operation module as the input gradient vector of the next layer's training and carry out the computation process above, and the weight addresses and weight gradient addresses in the instructions are likewise changed to the addresses corresponding to this layer.
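This layer chaining can be modeled end to end with a simple loop (a self-contained software sketch under the same assumptions as the earlier ones; storing per-layer data in Python lists is an illustration, not the on-chip layout):

import numpy as np

def multilayer_reverse(weights, inputs, derivs, top_gradient, lr):
    # weights[k], inputs[k], derivs[k]: the weight matrix, forward input
    # neurons, and activation-derivative values of layer k, saved during
    # the forward pass; layers are ordered from first to last.
    grad = top_gradient
    for k in reversed(range(len(weights))):
        in_gradient = grad * derivs[k]             # apply activation derivative
        out_gradient = weights[k].T @ in_gradient  # out_gradient = w * in_gradient
        weights[k] -= lr * np.outer(in_gradient, inputs[k])  # dw_ij = x_j * g_i
        grad = out_gradient                        # becomes the next layer's input gradient
    return weights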
By adopting the apparatus and instruction set for executing artificial neural network reverse training, the problems of insufficient CPU and GPU computational performance and high front-end decoding overhead are solved, and support for multi-layer artificial neural network operations is effectively improved.

By adopting dedicated on-chip caches for multi-layer artificial neural network reverse training, the reusability of input neuron and weight data is fully exploited, avoiding repeatedly reading these data from memory, reducing the memory access bandwidth required, and preventing memory bandwidth from becoming a performance bottleneck for multi-layer artificial neural network operations.

The processes or methods depicted in the preceding figures may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), firmware, software (e.g., software embodied on a non-transitory computer-readable medium), or a combination of the two. Although the processes or methods are described above in a certain order, it should be understood that certain of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.

In the foregoing specification, embodiments of the present invention have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made to each embodiment without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (10)

  1. An apparatus for executing artificial neural network reverse training, comprising an instruction cache unit, a controller unit, a direct memory access unit, an H-tree module, a main operation module, and a plurality of slave operation modules, wherein:
    the instruction cache unit is configured to cache instructions;
    the controller unit is configured to read instructions from the instruction cache unit and decode them into microinstructions that control the behavior of the H-tree module, the main operation module, and the slave operation modules;
    the direct memory access unit is configured to write data from the external address space into the corresponding data cache units of the main operation module and each slave operation module, or to read data from said data cache units to the external address space;
    the H-tree module is configured such that, at the stage where the reverse training of each layer of the neural network starts computation, the main operation module transmits this layer's input gradient vector to all the slave operation modules through the H-tree module, and after the computation of the slave operation modules is complete, the H-tree module adds the output gradient vector partial sums of the slave operation modules pairwise, stage by stage, to obtain this layer's output gradient vector;
    the main operation module is configured to complete subsequent calculations using this layer's output gradient vector during the computation of each layer; and
    each slave operation module computes the corresponding output gradient vector partial sum in parallel using the same input gradient vector and its own weight data.
  2. The apparatus of claim 1, wherein the plurality of slave operation modules compute the gradients of their respective weights in parallel using the same input gradient vector, and use the computed gradients of their respective weights to update their respective weights.
  3. The apparatus of claim 1, wherein the main operation module multiplies the output gradient vector of each layer element-wise by the activation function derivative values of the next layer, as the input gradient vector of the next layer.
  4. The apparatus of claim 1, wherein each slave operation module comprises an input neuron buffer unit configured to buffer input neuron data.
  5. The apparatus of claim 1, wherein the H-tree module forms the data path between the main operation module and said plurality of slave operation modules and has an H-tree structure, the H-tree being a binary tree path composed of multiple nodes, each node sending upstream data identically to its two downstream nodes, adding the data returned by its two downstream nodes, and returning the result to the upstream node.
  6. The apparatus of claim 1, wherein the main operation module comprises an operation unit, a data dependency determination unit, and a neuron buffer unit, wherein:
    the neuron buffer unit is configured to buffer the input data and output data used by the main operation module in the computation process;
    the operation unit performs the various operation functions of the main operation module;
    the data dependency determination unit is the port through which the operation unit reads and writes the neuron buffer unit, guarantees that there are no consistency conflicts in the reading and writing of data in the neuron buffer unit, and is responsible for reading the input gradient vector from the neuron buffer unit and sending it to the slave operation modules through the H-tree module; and
    the output gradient vectors from the H-tree module are sent to the operation unit.
  7. The apparatus of claim 1, wherein each slave operation module comprises an operation unit, a data dependency determination unit, a neuron buffer unit, a weight buffer unit, and a weight gradient buffer unit, wherein:
    the operation unit receives the microinstructions issued by the controller unit and performs arithmetic and logic operations;
    the data dependency determination unit is responsible for the read and write operations on the neuron buffer unit, the weight buffer unit, and the weight gradient buffer unit in the computation process, guaranteeing that there are no consistency conflicts in the reading and writing of the neuron buffer unit, the weight buffer unit, and the weight gradient buffer unit;
    the neuron buffer unit buffers the input gradient vector data and the output gradient vector partial sums computed by this slave operation module;
    the weight buffer unit buffers the weight vectors needed by this slave operation module in the computation process, and for each slave operation module, said weight vectors are the columns of the weight matrix corresponding to that slave operation module; and
    the weight gradient buffer unit buffers the weight gradient data needed by the corresponding slave operation module in the process of updating the weights, the weight gradient data stored by each slave operation module corresponding to the weight vectors it stores.
  8. The apparatus of claim 6 or 7, wherein the absence of consistency conflicts in reading and writing is guaranteed in the following way: determining whether there is a dependency between a microinstruction that has not yet been executed and the data of a microinstruction currently being executed; if there is none, the microinstruction is allowed to issue immediately; otherwise, the microinstruction is allowed to issue only after all the microinstructions on which it depends have completed.
  9. A method of executing single-layer artificial neural network reverse training using the apparatus of any one of claims 1-7, comprising:
    the direct memory access unit reads, from the external address space, all the artificial neural network operation instructions related to this single-layer artificial neural network reverse training, and caches them in the instruction cache unit;
    the direct memory access unit reads, from the external address space, all the data needed by the main operation module into the neuron buffer unit of the main operation module, the data including the input gradient vector, as well as the activation function derivative values and the input neurons from the previous forward operation;
    the direct memory access unit reads, from the external address space, all the weight data and weight gradient data needed by the slave operation modules, and stores them respectively into the weight buffer unit and the weight gradient buffer unit of the corresponding slave operation module;
    the operation units of the main operation module and of each slave operation module configure the values of their internal registers according to the parameters in the decoded microinstruction, the parameters including the various constants needed by this layer's neural network computation, the precision setting of this layer's computation, and the learning rate used when updating the weights;
    the main operation module sends the input gradient vector and the input neurons of the forward operation to each slave operation module through the H-tree module, and the input gradient vector and the forward-operation input neurons are stored into the neuron buffer unit of each slave operation module;
    the operation unit of each slave operation module reads the weight vector from the weight buffer unit, completes the vector-times-scalar operation of the weight vector and the input gradient vector, and returns the output vector partial sum through the H-tree module; at the same time, the slave operation module multiplies the input gradient vector by the input neurons, and the resulting weight gradients are stored into the weight gradient buffer unit, wherein the weight vector is the partial columns of the weight matrix stored by that slave operation module;
    in the H-tree module, the output gradient partial sums returned by each slave operation module are added pairwise, stage by stage, to obtain the complete output gradient vector;
    the main operation module obtains the value returned by the H-tree module, reads the activation function derivative values of the forward operation from the neuron buffer unit, multiplies the derivative values by the returned output gradient vector to obtain the input gradient vector of the next layer of reverse training, and writes it back to the neuron buffer unit;
    the slave operation module reads the weight w from the weight buffer unit, reads the current weight gradient dw and the weight gradient dw' used by the previous weight update from the weight gradient buffer unit, and updates the weight w; and
    the direct memory access unit stores the output gradient vector in the neuron buffer unit to the specified address in the external address space.
  10. A method of executing multi-layer artificial neural network reverse training, comprising:
    for each layer, executing the method of claim 9, wherein:
    after the previous layer of the artificial neural network finishes executing, the input gradient vector of the next layer's training computed in the main operation module is used, and the method of claim 9 is executed again for said next layer.
PCT/CN2016/078279 2016-01-20 2016-04-01 Apparatus and method for executing artificial neural network reverse training WO2017124641A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP16885905.6A EP3407268A4 (en) 2016-01-20 2016-04-01 DEVICE AND METHOD FOR REVERSE LEARNING OF A NETWORK OF ARTIFICIAL NEURONS
EP21184650.6A EP3940606A1 (en) 2016-01-20 2016-04-01 Device and method for executing reversal training of artificial neural network
KR1020187015433A KR102175044B1 (ko) 2016-01-20 2016-04-01 Apparatus and method for executing artificial neural network reverse training
US16/038,872 US10713567B2 (en) 2016-01-20 2018-07-18 Apparatus and method for executing reversal training of artificial neural network
US16/441,019 US10713568B2 (en) 2016-01-20 2019-06-14 Apparatus and method for executing reversal training of artificial neural network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610039032.1 2016-01-20
CN201610039032.1A CN106991478B (zh) 2016-01-20 2016-01-20 Apparatus and method for executing artificial neural network reverse training

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/038,872 Continuation-In-Part US10713567B2 (en) 2016-01-20 2018-07-18 Apparatus and method for executing reversal training of artificial neural network

Publications (1)

Publication Number Publication Date
WO2017124641A1 true WO2017124641A1 (zh) 2017-07-27

Family

ID=59361370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/078279 WO2017124641A1 (zh) 2016-01-20 2016-04-01 Apparatus and method for executing artificial neural network reverse training

Country Status (5)

Country Link
US (2) US10713567B2 (zh)
EP (2) EP3407268A4 (zh)
KR (1) KR102175044B1 (zh)
CN (3) CN111353588B (zh)
WO (1) WO2017124641A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367567A (zh) * 2018-12-25 2020-07-03 上海寒武纪信息科技有限公司 Neural network computing apparatus and method
CN111368986A (zh) * 2018-12-25 2020-07-03 上海寒武纪信息科技有限公司 Neural network computing apparatus and method

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353588B (zh) 2016-01-20 2024-03-05 中科寒武纪科技股份有限公司 Apparatus and method for executing artificial neural network reverse training
CN107341541B (zh) * 2016-04-29 2021-01-29 中科寒武纪科技股份有限公司 Apparatus and method for executing fully connected layer neural network training
CN109478144B (zh) 2017-07-05 2021-12-14 上海寒武纪信息科技有限公司 Data processing apparatus and method
US11086634B2 (en) 2017-07-05 2021-08-10 Shanghai Cambricon Information Technology Co., Ltd. Data processing apparatus and method
CN107578014B (zh) 2017-09-06 2020-11-03 上海寒武纪信息科技有限公司 Information processing apparatus and method
CN109583577B (zh) 2017-09-29 2021-04-23 上海寒武纪信息科技有限公司 Operation apparatus and method
KR102477404B1 (ko) 2017-08-31 2022-12-13 캠브리콘 테크놀로지스 코퍼레이션 리미티드 Chip apparatus and related products
US11437032B2 (en) 2017-09-29 2022-09-06 Shanghai Cambricon Information Technology Co., Ltd Image processing apparatus and method
CN107748914A (zh) * 2017-10-19 2018-03-02 珠海格力电器股份有限公司 Artificial neural network operation circuit
CN109117184A (zh) 2017-10-30 2019-01-01 上海寒武纪信息科技有限公司 Artificial intelligence processor and method of executing plane rotation instructions using the processor
WO2019114842A1 (zh) 2017-12-14 2019-06-20 北京中科寒武纪科技有限公司 Integrated circuit chip apparatus
CN109961138B (zh) * 2017-12-14 2020-04-14 中科寒武纪科技股份有限公司 Neural network training method and related products
CN109978148B (zh) * 2017-12-28 2020-06-23 中科寒武纪科技股份有限公司 Integrated circuit chip apparatus and related products
CN109993276B (zh) * 2017-12-29 2021-10-26 中科寒武纪科技股份有限公司 Apparatus and method for executing artificial neural network reverse training
US11373088B2 (en) * 2017-12-30 2022-06-28 Intel Corporation Machine learning accelerator mechanism
CN110163334B (zh) * 2018-02-11 2020-10-09 上海寒武纪信息科技有限公司 Integrated circuit chip apparatus and related products
US11740898B2 (en) 2018-02-13 2023-08-29 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
WO2019157812A1 (zh) * 2018-02-13 2019-08-22 上海寒武纪信息科技有限公司 Computing apparatus and method
US11630666B2 (en) 2018-02-13 2023-04-18 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
CN110162162B (zh) 2018-02-14 2023-08-18 上海寒武纪信息科技有限公司 Control apparatus, method, and device for a processor
CN111767998B (zh) * 2018-02-27 2024-05-14 上海寒武纪信息科技有限公司 Integrated circuit chip apparatus and related products
EP3624020A4 (en) 2018-05-18 2021-05-05 Shanghai Cambricon Information Technology Co., Ltd CALCULATION PROCEDURES AND RELATED PRODUCTS
WO2020001438A1 (zh) 2018-06-27 2020-01-02 上海寒武纪信息科技有限公司 On-chip code breakpoint debugging method, on-chip processor, and chip breakpoint debugging system
EP3757896B1 (en) * 2018-08-28 2023-01-11 Cambricon Technologies Corporation Limited Method and device for pre-processing data in a neural network
US11990137B2 (en) 2018-09-13 2024-05-21 Shanghai Cambricon Information Technology Co., Ltd. Image retouching method and terminal device
EP3859488A4 (en) 2018-09-28 2022-06-29 Shanghai Cambricon Information Technology Co., Ltd Signal processing device, signal processing method and related product
CN111045726B (zh) * 2018-10-12 2022-04-15 上海寒武纪信息科技有限公司 Deep learning processing apparatus and method supporting encoding and decoding
CN111368967B (zh) * 2018-12-25 2023-04-07 上海寒武纪信息科技有限公司 Neural network computing apparatus and method
CN111383638A (zh) 2018-12-28 2020-07-07 上海寒武纪信息科技有限公司 Signal processing apparatus, signal processing method, and related products
CN109919313B (zh) * 2019-01-31 2021-06-08 华为技术有限公司 Gradient transmission method and distributed training system
US11847554B2 (en) 2019-04-18 2023-12-19 Cambricon Technologies Corporation Limited Data processing method and related products
CN111832737B (zh) 2019-04-18 2024-01-09 中科寒武纪科技股份有限公司 Data processing method and related products
US11061819B2 (en) 2019-05-28 2021-07-13 Micron Technology, Inc. Distributed computing based on memory as a service
US11169930B2 (en) 2019-05-28 2021-11-09 Micron Technology, Inc. Fine grain data migration to or from borrowed memory
US11438414B2 (en) 2019-05-28 2022-09-06 Micron Technology, Inc. Inter operating system memory services over communication network connections
US11334387B2 (en) * 2019-05-28 2022-05-17 Micron Technology, Inc. Throttle memory as a service based on connectivity bandwidth
US11100007B2 (en) 2019-05-28 2021-08-24 Micron Technology, Inc. Memory management unit (MMU) for accessing borrowed memory
US11256624B2 (en) 2019-05-28 2022-02-22 Micron Technology, Inc. Intelligent content migration with borrowed memory
US11675676B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
US11676029B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
WO2021036904A1 (zh) 2019-08-23 2021-03-04 安徽寒武纪信息科技有限公司 Data processing method and apparatus, computer device, and storage medium
US11526761B2 (en) * 2019-08-24 2022-12-13 Microsoft Technology Licensing, Llc Neural network training with decreased memory consumption and processor utilization

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5517596A (en) * 1991-05-17 1996-05-14 International Business Machines Corporation Learning machine synapse processor system apparatus
CN101833691A (zh) * 2010-03-30 2010-09-15 西安理工大学 FPGA-based serial-structure implementation method for least-squares support vector machines
CN103996069A (zh) * 2013-02-20 2014-08-20 百度在线网络技术(北京)有限公司 Multi-GPU-based BPNN training method and apparatus
CN104899641A (zh) * 2015-05-25 2015-09-09 杭州朗和科技有限公司 Deep neural network learning method, processor, and deep neural network learning system
CN105095966A (zh) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural networks and spiking neural networks

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4912654A (en) * 1988-12-14 1990-03-27 Government Systems Corporation Gte Neural networks learning method
US5226092A (en) * 1991-06-28 1993-07-06 Digital Equipment Corporation Method and apparatus for learning in a neural network
CN1168328C (zh) * 2002-02-07 2004-09-22 大唐移动通信设备有限公司 Iterative operation structure and method suitable for software-defined radio implementation
US7747070B2 (en) * 2005-08-31 2010-06-29 Microsoft Corporation Training convolutional neural networks on graphics processing units
CN101930561A (zh) * 2010-05-21 2010-12-29 电子科技大学 Backward neural network spam filtering apparatus based on an N-Gram word segmentation model
CN102004446A (zh) * 2010-11-25 2011-04-06 福建师范大学 Adaptive method for BP neurons with a multi-layer structure
CN105095833B (zh) * 2014-05-08 2019-03-15 中国科学院声学研究所 Network construction method, recognition method, and system for face recognition
CN104036451B (zh) * 2014-06-20 2018-12-11 深圳市腾讯计算机系统有限公司 Model parallel processing method and apparatus based on multiple graphics processors
CN104463324A (zh) * 2014-11-21 2015-03-25 长沙马沙电子科技有限公司 Parallel processing method for convolutional neural networks based on large-scale high-performance clusters
CN104899561A (zh) * 2015-05-27 2015-09-09 华南理工大学 Parallelized human action recognition method
CN105095962B (zh) * 2015-07-27 2017-07-28 中国汽车工程研究院股份有限公司 Method for predicting dynamic mechanical properties of materials based on a BP artificial neural network
CN111353588B (zh) 2016-01-20 2024-03-05 中科寒武纪科技股份有限公司 Apparatus and method for executing artificial neural network reverse training
CN106991477B (zh) * 2016-01-20 2020-08-14 中科寒武纪科技股份有限公司 Artificial neural network compression coding apparatus and method
CN106991476B (zh) * 2016-01-20 2020-04-10 中科寒武纪科技股份有限公司 Apparatus and method for executing artificial neural network forward operations
CN106126481B (zh) * 2016-06-29 2019-04-12 华为技术有限公司 Computing system and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5517596A (en) * 1991-05-17 1996-05-14 International Business Machines Corporation Learning machine synapse processor system apparatus
CN101833691A (zh) * 2010-03-30 2010-09-15 西安理工大学 FPGA-based serial-structure implementation method for least-squares support vector machines
CN103996069A (zh) * 2013-02-20 2014-08-20 百度在线网络技术(北京)有限公司 Multi-GPU-based BPNN training method and apparatus
CN104899641A (zh) * 2015-05-25 2015-09-09 杭州朗和科技有限公司 Deep neural network learning method, processor, and deep neural network learning system
CN105095966A (zh) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural networks and spiking neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3407268A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367567A (zh) * 2018-12-25 2020-07-03 上海寒武纪信息科技有限公司 Neural network computing apparatus and method
CN111368986A (zh) * 2018-12-25 2020-07-03 上海寒武纪信息科技有限公司 Neural network computing apparatus and method
CN111367567B (zh) * 2018-12-25 2023-03-07 上海寒武纪信息科技有限公司 Neural network computing apparatus and method
CN111368986B (zh) * 2018-12-25 2023-03-10 上海寒武纪信息科技有限公司 Neural network computing apparatus and method

Also Published As

Publication number Publication date
CN110135581B (zh) 2020-11-06
KR20180102058A (ko) 2018-09-14
CN111353588B (zh) 2024-03-05
US20180322392A1 (en) 2018-11-08
EP3940606A1 (en) 2022-01-19
CN106991478B (zh) 2020-05-08
US20190294971A1 (en) 2019-09-26
US10713567B2 (en) 2020-07-14
EP3407268A4 (en) 2019-08-28
CN106991478A (zh) 2017-07-28
CN111353588A (zh) 2020-06-30
EP3407268A1 (en) 2018-11-28
US10713568B2 (en) 2020-07-14
CN110135581A (zh) 2019-08-16
KR102175044B1 (ko) 2020-11-05

Similar Documents

Publication Publication Date Title
WO2017124641A1 (zh) Apparatus and method for executing artificial neural network reverse training
WO2017124642A1 (zh) Apparatus and method for executing artificial neural network forward operations
CN109376861B (zh) Apparatus and method for executing fully connected layer neural network training
WO2017185347A1 (zh) Apparatus and method for executing recurrent neural network and LSTM operations
WO2017185387A1 (zh) Apparatus and method for executing forward operations of a fully connected layer neural network
WO2017185391A1 (zh) Apparatus and method for executing convolutional neural network training
CN110929863B (zh) Apparatus and method for executing LSTM operations
WO2017124644A1 (zh) Artificial neural network compression coding apparatus and method
WO2017124647A1 (zh) Matrix computation apparatus
WO2017177442A1 (zh) Artificial neural network forward operation apparatus and method supporting discrete data representation
WO2017185336A1 (zh) Apparatus and method for executing pooling operations
WO2017185248A1 (zh) Apparatus and method for executing artificial neural network self-learning operations
WO2018058452A1 (zh) Apparatus and method for executing artificial neural network operations
WO2017177446A1 (zh) Artificial neural network reverse training apparatus and method supporting discrete data representation
WO2017185335A1 (zh) Apparatus and method for executing batch normalization operations
WO2017181336A1 (zh) Maxout layer operation apparatus and method
CN109993276B (zh) Apparatus and method for executing artificial neural network reverse training
WO2017185413A1 (zh) Apparatus and method for executing the Hessian-Free training algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16885905

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20187015433

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE