CN102004446A - Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure - Google Patents

Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure

Info

Publication number
CN102004446A
CN102004446A · CN2010105648277A · CN201010564827A
Authority
CN
China
Prior art keywords
counter
weights
microcontroller
neuron
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010105648277A
Other languages
Chinese (zh)
Inventor
黄晞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Normal University
Original Assignee
Fujian Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Normal University filed Critical Fujian Normal University
Priority to CN2010105648277A priority Critical patent/CN102004446A/en
Publication of CN102004446A publication Critical patent/CN102004446A/en
Pending legal-status Critical Current

Links

Images

Abstract

The invention relates to the field of neural network hardware implementation, and in particular to an adaptive method by which a single neuron node in a multilayer BP neural network carries out the computations of its different operation stages and implements different learning algorithms. A neuron processor executes the computation of a single node on each layer of the BP neural network. By programming the control registers, a microcontroller can direct the arithmetic unit to execute, at the neuron node, the computations of three different BP neural network learning algorithms, namely the standard BP algorithm, the additional-momentum algorithm, and the adaptive learning-rate algorithm. Several neuron processors can be connected in series to realize pipelined computation, so the method is highly flexible and practical and is suitable for embedded hardware BP neural network applications.

Description

Adaptive method for a BP neuron with a multilayer structure
Technical field
The present invention relates to the field of neural network hardware implementation, and in particular to an adaptive method by which a single neuron node in a hardware BP neural network with a multilayer structure carries out the computations of different operation stages and implements different learning algorithms.
Background art
Artificial neural networks are widely used in fields such as intelligent control and pattern recognition, and among them the BP neural network is the most widely used; the BP network has several learning algorithms that serve different application requirements. However, traditional software implementations on general-purpose processors offer a low degree of parallelism, and in embedded applications in particular their computing speed cannot meet on-site real-time requirements. Hardware neural networks can satisfy the requirement of parallel computation, but existing hardware implementations lack flexibility. The present device realizes the computations of three typical BP learning algorithms on a neuron node by programming the control register group of the neuron processor. Several neuron processors can be connected in series to realize pipelined computation, which both meets the requirement of parallel computation and improves flexibility and applicability.
Summary of the invention
The problem to be solved by the present invention is how, in the embedded application field, a single neuron node in a hardware BP neural network can carry out the computations of different operation stages and implement different learning algorithms. The method is programmable, highly flexible, and widely applicable.
The programmable adaptive method for a hardware BP neuron with a multilayer structure provided by the invention has a hardware part composed of a local data bus 1, a special register group 2, a control register group 3, an arithmetic unit 4, a weight memory 5, an error-propagation-factor memory 6, a weight-count counter 7, a layer counter 8, a training-batch counter 9, a microcontroller 10, and an external control bus 11. The special register group 2, control register group 3, arithmetic unit 4, weight memory 5, and error-propagation-factor memory 6 are each connected to the local data bus 1. The microcontroller 10 is connected to the special register group 2, control register group 3, arithmetic unit 4, training-batch counter 9, layer counter 8, error-propagation-factor memory 6, weight memory 5, and weight-count counter 7 through microcontroller signal lines L1, L2, L3, L4, L5, L6, L7, and L8. The arithmetic unit 4 is connected to the weight memory 5 and to the error-propagation-factor memory 6 by independent buses.
The programmable hardware BP neuron processor 12 of the present invention operates under the control of an external master controller 13: it receives data from an external data memory 14, passes its computation results to an external activation function circuit 15, and the output of the activation function is stored in the external data memory 14.
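To help trace the data flow in the paragraphs that follow, the blocks listed above can be modelled as a simple record. This is a minimal Python sketch; the field names and types are chosen for illustration only, since the patent names the blocks and their reference numerals but specifies no software interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NeuronProcessorState:
    """Register and memory model of the neuron processor (12) -- illustrative only."""
    control_word: int = 0                                              # control word register (20)
    neuron_counts: List[int] = field(default_factory=lambda: [0] * 8)  # neuron-count registers (21-28)
    eta: float = 0.0        # learning rate register (15)
    alpha: float = 0.0      # momentum coefficient register (17)
    beta: float = 0.0       # learning-rate decrease factor register (18), 0 < beta < 1
    gamma: float = 0.0      # learning-rate increase factor register (19), gamma > 1
    weight_memory: List[float] = field(default_factory=list)           # weight memory (5)
    delta_memory: List[float] = field(default_factory=list)            # error-propagation-factor memory (6)
    weight_counter: int = 0                                            # weight-count counter (7)
    layer_counter: int = 0                                             # layer counter (8)
    batch_counter: int = 0                                             # training-batch counter (9)
```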
A typical BP neural network consists of an input layer, several hidden layers, and an output layer, each layer containing several neuron nodes. The arithmetic unit can perform the single-node computation of the input layer, each hidden layer, and the output layer, for networks of up to 8 layers in total. The computations comprise the net-input calculation net of each layer, the output-layer error calculation error_o, the hidden-layer error calculation error_h, the error-propagation-factor calculation δ of each layer, and the weight-adjustment calculation ΔW of each layer. The weight-adjustment calculation ΔW depends on the specific learning algorithm; the present invention can realize three typical learning algorithms: the standard BP algorithm, the additional-momentum algorithm, and the adaptive learning-rate algorithm.
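For orientation, the quantities listed above can be written out as plain formulas. The following Python sketch is illustrative only: it assumes a sigmoid activation and NumPy vectors, neither of which is specified by the patent, and the function names are ours.

```python
import numpy as np

def sigmoid(net):
    # Illustrative activation; in the patent this is an external circuit.
    return 1.0 / (1.0 + np.exp(-net))

def net_input(weights, inputs):
    # net: weighted sum of the inputs arriving at one node of one layer.
    return float(np.dot(weights, inputs))

def output_error(d, o):
    # error_o = d - o for an output-layer node (d: expected output, o: actual output).
    return d - o

def hidden_error(deltas_next, weights_to_next):
    # error_h = sum of delta * connecting weight over the next layer's nodes.
    return float(np.dot(deltas_next, weights_to_next))

def propagation_factor(error, net):
    # delta = error * f'(net); for the sigmoid, f'(net) = f(net) * (1 - f(net)).
    f = sigmoid(net)
    return error * f * (1.0 - f)
```

The three variants of the weight adjustment ΔW are sketched separately below, after the embodiment describes them.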
The weight memory stores the connection weights ω_ij from the input nodes of the layer currently being computed to this neuron node, as well as the expected outputs d_i of the output layer. The weights involved differ from layer to layer and can be loaded from external storage. During the weight-adjustment stage of each layer, the weight memory also holds the weight adjustment ΔW.
The error-propagation-factor memory stores the error-propagation factor δ produced when the arithmetic unit performs the error-propagation-factor calculation.
The control register group consists of one control word register and eight per-layer neuron-count registers. The control word register holds three classes of control information: a three-bit learning-algorithm type code, a four-bit BP network layer count, and a one-bit working-mode flag. Each neuron-count register stores the neuron count of one layer. The working-mode flag indicates whether the arithmetic unit operates in the normal operation state or in the training state. By programming the control register group, networks of different scales and different learning algorithms can be selected.
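As an illustration of how such a control word could be laid out, the sketch below packs the three fields into one byte; the bit positions and field order are our assumption, since the patent only states the field widths.

```python
def pack_control_word(algorithm_code, layer_count, training_mode):
    """Pack an 8-bit control word: bits 0-2 learning-algorithm type code,
    bits 3-6 BP network layer count, bit 7 working-mode flag
    (0 = normal operation, 1 = training)."""
    assert 0 <= algorithm_code <= 7 and 1 <= layer_count <= 8
    return (algorithm_code & 0x7) | ((layer_count & 0xF) << 3) | ((1 if training_mode else 0) << 7)

def unpack_control_word(word):
    return {
        "algorithm_code": word & 0x7,
        "layer_count": (word >> 3) & 0xF,
        "training_mode": bool((word >> 7) & 0x1),
    }

# Example: algorithm code 1 (assumed), 4-layer network, training mode.
word = pack_control_word(1, 4, True)
assert unpack_control_word(word) == {"algorithm_code": 1, "layer_count": 4, "training_mode": True}
```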
The special register group consists of four registers, each storing one parameter used in the computation: the learning rate η, the momentum factor α of the additional-momentum algorithm, and the learning-rate adjustment factors β (0 < β < 1) and γ (γ > 1) of the adaptive learning-rate algorithm.
The weight-count counter records the number of loop iterations performed by the arithmetic unit in each layer's computation. When the weight-count counter reaches zero, the computation of that stage is finished.
The layer counter records which layer's data the arithmetic unit is currently computing. When the layer counter reaches zero, the whole computation is finished.
The microcontroller generates the microcontroller signals that complete initialization and direct the arithmetic unit through the operations of each stage. After receiving the initialization control signal from the external master controller, the microcontroller loads data in turn from the external data memory, writes them into the control register group, the special register group, and the weight memory, and copies the input-layer neuron count held in the control register group into the weight-count counter. After initialization, the microcontroller enters either the normal operation state or the training state according to the working-mode flag of the control word register. In the normal operation state, the microcontroller directs the arithmetic unit to compute the neuron's net input only and outputs the result to the external activation function circuit. In the training state, the microcontroller directs the arithmetic unit to compute in turn the net input net of each layer's neuron, the output-layer error error_o, the error-propagation factor δ of each layer, and the hidden-layer error error_h, and, according to the learning-algorithm type code in the control word register, selects the corresponding learning algorithm to compute the weight adjustment ΔW of each layer. The weight-count counter counts down: after each calculation the microcontroller decrements it by 1. When the weight-count counter reaches zero, the microcontroller fetches the neuron count required for the next layer from the control register group, loads it into the weight-count counter, and begins the next stage of computation. The layer counter can count both up and down. During the feed-forward computation it is decremented by 1 after each layer is finished, and the whole computation ends when it reaches zero; during the error back-propagation computation it is incremented by 1 after each layer is finished, and when its value equals that of the first hidden layer the microcontroller sends an end-of-computation control signal to the external master controller and waits for a new operation control signal from the master controller.
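To make the counter-driven sequencing easier to follow, the sketch below models the layer counter counting down during the feed-forward pass and up during the back-propagation pass; the stage callbacks, the layer numbering, and the exact termination test for the backward pass are our interpretive assumptions, not the circuit itself.

```python
def training_sequencing(num_layers, forward_stage, backward_stage):
    """Layer-counter sequencing in the training state (an interpretive sketch).

    Layers are indexed 0..num_layers-1, with 0 the input layer and 1 the first
    hidden layer. forward_stage(layer) and backward_stage(layer) are placeholders
    for the per-layer arithmetic-unit operations (net input; errors, deltas,
    weight adjustments)."""
    # Feed-forward: the layer counter counts down; zero means every layer is computed.
    layer_counter = num_layers
    while layer_counter > 0:
        forward_stage(num_layers - layer_counter)
        layer_counter -= 1

    # Error back-propagation: the layer counter counts up, one step per layer,
    # walking from the output layer back toward the first hidden layer; the pass
    # ends once the first hidden layer has been processed.
    layer_counter = 0
    for layer in range(num_layers - 1, 0, -1):
        backward_stage(layer)
        layer_counter += 1
    # The microcontroller would now signal the external master controller and wait.
```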
Description of drawings
Fig. 1 is a structural schematic diagram of the programmable hardware BP neuron processor of the present invention.
Fig. 2 is a structural schematic diagram of the programmable hardware BP neuron processor of the present invention and its external components.
Fig. 3 is a schematic diagram of the special register group of the programmable hardware BP neuron processor of the present invention.
Fig. 4 is a schematic diagram of the control register group of the programmable hardware BP neuron processor of the present invention.
In Fig. 1, 1 is the local data bus, 2 the special register group, 3 the control register group, 4 the arithmetic unit, 5 the weight memory, 6 the error-propagation-factor memory, 7 the weight-count counter, 8 the layer counter, 9 the training-batch counter, 10 the microcontroller, and 11 the external control bus.
In Fig. 2, 12 is the programmable hardware BP neuron processor, 13 the external master controller, 14 the external data memory, and 15 the activation function circuit.
In Fig. 3, the numerals denote the top-down register numbering of the special register group.
In Fig. 4, the numerals denote the top-down register numbering of the control register group.
Embodiment
A programmable hardware BP neuron processor 12 provided by the invention is composed of a local data bus 1, a special register group 2, a control register group 3, an arithmetic unit 4, a weight memory 5, an error-propagation-factor memory 6, a weight-count counter 7, a layer counter 8, a training-batch counter 9, a microcontroller 10, and an external control bus 11.
The special register group 2, control register group 3, arithmetic unit 4, weight memory 5, and error-propagation-factor memory 6 are each connected to the local data bus 1.
The microcontroller 10 is connected to the special register group 2, the control register group 3, the arithmetic unit 4, the training-batch counter 9, the layer counter 8, the error-propagation-factor memory 6, the weight memory 5, and the weight-count counter 7.
The control register group 3 consists of a control word register 20 and eight per-layer neuron-count registers 21 to 28.
The special register group 2 consists of the learning rate register η 15, the momentum coefficient register α 17 of the additional-momentum algorithm, and the learning-rate adjustment factor registers β 18 (0 < β < 1) and γ 19 (γ > 1) of the adaptive learning-rate algorithm.
The programmable hardware BP neuron processor 12 operates in two states: the normal operation state and the training state. The microcontroller 10 loads from the external data memory 14 the control word composed of the learning-algorithm type code, the BP network layer count, and the working-mode flag and writes it into the control register group 3, loads the BP network layer count and the neuron count of each layer and writes them into the neuron-count registers 21 to 28, and determines the working state from the working-mode flag in the control word register 20. When the working-mode flag is 0, the processor enters the normal operation state; when it is 1, it enters the training state.
In the normal operation state, the microcontroller 10 loads the trained input-layer connection weights from the external data memory 14 into the weight memory 5, writes the value of the input-layer neuron-count register 21 into the weight-count counter 7, and reads the BP network layer count from the control word register 20 into the layer counter 8. After initialization it starts the arithmetic unit 4, which reads sample data in turn from the external data memory 14 and computes the net input net of the next-layer neuron. When the weight-count counter 7 reaches zero, the single-node computation of this layer is finished, the result is output to the external activation function circuit 15, and the layer counter 8 is decremented by 1. The microcontroller 10 then fetches the neuron count required by the next layer from the neuron-count register 22, loads it into the weight-count counter 7, loads the trained connection weights of the next layer from the external data memory 14 into the weight memory 5, reads in turn the results computed by the activation function circuit 15 for the previous layer from the external data memory 14, and begins the next layer's computation stage. When the value of the layer counter 8 reaches zero, the computation for the sample is finished; the microcontroller 10 sends an end-of-computation control signal to the external master controller 13 and waits for a new operation control signal from the master controller 13.
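The normal operation state therefore reduces, for each node, to a multiply-accumulate loop governed by the weight-count counter. The sketch below models that loop in Python under the assumption that the previous layer's activation outputs are already available in the external data memory; the helper names and the sigmoid example are ours.

```python
import math

def compute_node_output(weights, prev_outputs, activation):
    """One node of one layer in the normal operation state (a sketch).

    weights: connection weights loaded into the weight memory for this node.
    prev_outputs: previous-layer activation results read from the data memory.
    activation: stands in for the external activation function circuit.
    """
    assert len(weights) == len(prev_outputs)
    weight_counter = len(weights)      # value taken from the neuron-count register
    net = 0.0
    while weight_counter > 0:          # counter decremented once per multiply-accumulate
        i = weight_counter - 1
        net += weights[i] * prev_outputs[i]
        weight_counter -= 1
    return activation(net)             # result handed to the activation circuit

# Example: a node with three inputs and a sigmoid supplied by the caller.
out = compute_node_output([0.2, -0.5, 0.1], [1.0, 0.3, 0.7],
                          lambda n: 1.0 / (1.0 + math.exp(-n)))
```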
In the training state, after initialization and the net-input computation net are finished, the processor must additionally compute the output-layer error error_o, the error-propagation factor δ of each layer, the hidden-layer error error_h, and the weight adjustment ΔW of each layer.
The output-layer error is computed as error_o = d − o, where the expected output d is stored in the weight memory 5 and the actual output o is stored in the external data memory 14; the result is written to the external data memory 14.
The error-propagation factor of each layer is computed as δ = error_i × f′(net), where the layer's error error_i and the activation derivative f′(net) are stored in the external data memory 14; under the control of the microcontroller 10 they are read into the arithmetic unit 4 for computation, and the result is stored in the error-propagation-factor memory 6.
The hidden-layer error of each layer is computed as error_h = Σ δ × ω, where the error-propagation factor δ is stored in the error-propagation-factor memory 6 and the weights ω are stored in the weight memory 5; under the control of the microcontroller 10 they are read into the arithmetic unit 4 for computation, and the result is written to the external data memory 14.
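A tiny worked example shows how these three error quantities chain together for a single output node; the sigmoid activation and all numbers are assumed purely for illustration.

```python
import math

f = lambda net: 1.0 / (1.0 + math.exp(-net))   # assumed activation

# Values assumed to be already in the memories named above:
hidden_out = [0.6, 0.4]    # previous-layer activation outputs O (external data memory)
w = [0.5, -0.3]            # connection weights to the output node (weight memory)
d = 1.0                    # expected output (weight memory)

net = sum(wi * oi for wi, oi in zip(w, hidden_out))   # net input
o = f(net)                                            # actual output
error_o = d - o                                       # output-layer error: error_o = d - o
delta = error_o * o * (1.0 - o)                       # delta = error * f'(net), sigmoid derivative
error_h = [delta * wi for wi in w]                    # hidden-layer errors: sum of delta*w (one output node here)

print(round(net, 4), round(o, 4), round(error_o, 4), round(delta, 4), [round(e, 4) for e in error_h])
```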
The weight-adjustment calculation ΔW of each layer depends on the learning algorithm. The microcontroller 10 reads the three-bit learning-algorithm type code from the control word register 20; the code selects the standard BP algorithm, the additional-momentum algorithm, or the adaptive learning-rate algorithm.
The standard BP algorithm computes ΔW = η × δ × O, where the learning rate η is stored in the learning rate register η 15, δ is stored in the error-propagation-factor memory 6, and O is stored in the external data memory 14; under the control of the microcontroller 10 they are read into the arithmetic unit 4 for computation, and the result is written into the weight memory 5.
The additional-momentum algorithm computes ΔW(t) = η × δ × O + α × ΔW(t−1), where α is stored in the momentum coefficient register α 17 and the previous weight adjustment ΔW(t−1) is stored in the weight memory 5; under the control of the microcontroller 10 they are read into the arithmetic unit 4 for computation, and the result is written into the weight memory 5.
The adaptive learning-rate algorithm also computes ΔW = η × δ × O. In this learning algorithm, after the network has undergone t weight adjustments, if the total error has risen the adjustment is invalid: ΔW is not written into the weight memory 5, and η is revised as η = β × η (0 < β < 1); if the total error has not risen, the adjustment is valid: ΔW is written into the weight memory 5, and η is revised as η = γ × η (γ > 1). β is stored in the learning-rate adjustment factor register β 18 and γ in the learning-rate adjustment factor register γ 19. The training batch count t is read from the external data memory 14 into the training-batch counter 9 by the microcontroller 10 during initialization; after each weight adjustment the training-batch counter 9 is decremented by 1, and when it reaches zero the learning-rate revision is carried out.
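The three weight-update variants can be summarised side by side. The sketch below is ours: the batch bookkeeping of the training-batch counter 9 is reduced to a single error_rose flag, and NumPy arrays stand in for the weight memory.

```python
import numpy as np

def delta_w_standard(eta, delta, o_prev):
    # Standard BP: delta_W = eta * delta * O
    return eta * delta * np.asarray(o_prev)

def delta_w_momentum(eta, delta, o_prev, alpha, delta_w_prev):
    # Additional momentum: delta_W(t) = eta * delta * O + alpha * delta_W(t-1)
    return eta * delta * np.asarray(o_prev) + alpha * np.asarray(delta_w_prev)

def delta_w_adaptive(eta, delta, o_prev, error_rose, beta, gamma):
    """Adaptive learning rate: same update as standard BP, but after a batch of
    adjustments the update is discarded and eta shrunk (eta = beta*eta, 0 < beta < 1)
    if the total error rose, or kept and eta grown (eta = gamma*eta, gamma > 1)
    otherwise. Returns (delta_w or None, new eta)."""
    dw = eta * delta * np.asarray(o_prev)
    if error_rose:
        return None, beta * eta    # adjustment rejected, learning rate reduced
    return dw, gamma * eta         # adjustment accepted, learning rate increased
```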
The programmable hardware BP neuron processor provided by the invention can realize the computations of three typical BP neural network learning algorithms on a neuron node by programming the control registers. Several neuron processors can be connected in series to realize pipelined computation, which both meets the requirement of parallel computation and improves flexibility and applicability; the processor is suitable for embedded hardware BP neural network applications.

Claims (10)

1. An adaptive method for a BP neuron with a multilayer structure, whose hardware part is composed of a local data bus (1), a special register group (2), a control register group (3), an arithmetic unit (4), a weight memory (5), an error-propagation-factor memory (6), a weight-count counter (7), a layer counter (8), a training-batch counter (9), a microcontroller (10), and an external control bus (11), characterized in that the weight-count counter records the number of loop iterations performed by the arithmetic unit in each layer's computation; the layer counter records which layer's data the arithmetic unit is currently computing;
and the microcontroller generates the microcontroller signals that complete initialization and direct the arithmetic unit through the operations of each stage.
2. The adaptive method for a BP neuron with a multilayer structure according to claim 1, characterized in that when the weight-count counter reaches zero, the computation of that stage is finished.
3. The adaptive method for a BP neuron with a multilayer structure according to claim 1, characterized in that when the layer counter reaches zero, the whole computation is finished.
4. The adaptive method for a BP neuron with a multilayer structure according to claim 1, characterized in that when the microcontroller generates the microcontroller signals to complete initialization and direct the arithmetic unit, the microcontroller, after receiving the initialization control signal from the external master controller, loads data in turn from the external data memory, writes them into the control register group, the special register group, and the weight memory, and copies the input-layer neuron count held in the control register group into the weight-count counter.
5. The adaptive method for a BP neuron with a multilayer structure according to claim 1, characterized in that the microcontroller enters the normal operation state or the training state according to the working-mode flag of the control word register.
6. The adaptive method for a BP neuron with a multilayer structure according to claim 5, characterized in that, in the normal operation state, after the microcontroller has generated the microcontroller signals to complete initialization, the microcontroller directs the arithmetic unit to compute the neuron's net input only and outputs the result to the external activation function circuit.
7. The adaptive method for a BP neuron with a multilayer structure according to claim 5, characterized in that, in the training state, after the microcontroller has generated the microcontroller signals to complete initialization, the microcontroller directs the arithmetic unit to compute in turn the net input net of each layer's neuron, the output-layer error error_o, the error-propagation factor δ of each layer, and the hidden-layer error error_h, and selects the corresponding learning algorithm according to the learning-algorithm type code in the control word register to compute the weight adjustment ΔW of each layer.
8. The adaptive method for a BP neuron with a multilayer structure according to claim 5, characterized in that the weight-count counter counts down: after each calculation, the microcontroller decrements the weight-count counter by 1.
9. The adaptive method for a BP neuron with a multilayer structure according to claim 5, characterized in that when the value of the weight-count counter reaches zero, the microcontroller fetches the neuron count required for the next layer's computation from the control register group, loads it into the weight-count counter, and then begins the next stage of computation.
10. The adaptive method for a BP neuron with a multilayer structure according to claim 5, characterized in that the layer counter can count both up and down: during the feed-forward computation it is decremented by 1 after each layer is finished, and the whole computation ends when it reaches zero; during the error back-propagation computation it is incremented by 1 after each layer is finished, and when its value equals that of the first hidden layer, the microcontroller sends an end-of-computation control signal to the external master controller and waits for a new operation control signal from the master controller.
CN2010105648277A 2010-11-25 2010-11-25 Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure Pending CN102004446A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105648277A CN102004446A (en) 2010-11-25 2010-11-25 Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010105648277A CN102004446A (en) 2010-11-25 2010-11-25 Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure

Publications (1)

Publication Number Publication Date
CN102004446A true CN102004446A (en) 2011-04-06

Family

ID=43811872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105648277A Pending CN102004446A (en) 2010-11-25 2010-11-25 Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure

Country Status (1)

Country Link
CN (1) CN102004446A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102540886A (en) * 2011-12-26 2012-07-04 中国农业机械化科学研究院 Self-adaptive control method and system for operation power of pipe-laying trencher
CN102540886B (en) * 2011-12-26 2014-12-10 中国农业机械化科学研究院 Self-adaptive control method and system for operation power of pipe-laying trencher
CN111340200A (en) * 2016-01-20 2020-06-26 中科寒武纪科技股份有限公司 Apparatus and method for performing artificial neural network forward operations
CN111353588A (en) * 2016-01-20 2020-06-30 中科寒武纪科技股份有限公司 Apparatus and method for performing artificial neural network reverse training
CN111353588B (en) * 2016-01-20 2024-03-05 中科寒武纪科技股份有限公司 Apparatus and method for performing artificial neural network reverse training
CN111340200B (en) * 2016-01-20 2024-05-03 中科寒武纪科技股份有限公司 Apparatus and method for performing artificial neural network forward operations
CN108268939A (en) * 2016-12-30 2018-07-10 上海寒武纪信息科技有限公司 For performing the device of LSTM neural network computings and operation method
CN107273969A (en) * 2017-05-11 2017-10-20 西安交通大学 It is a kind of to parameterize the expansible full articulamentum multilayer interconnection structure of neutral net
CN107273969B (en) * 2017-05-11 2020-06-19 西安交通大学 Parameterized and extensible neural network full-connection layer multilayer interconnection structure
CN107390518A (en) * 2017-07-27 2017-11-24 青岛格莱瑞智能控制技术有限公司 A kind of neural self-adaptation control method increased and decreased certainly based on local weight study and member
WO2019080037A1 (en) * 2017-10-26 2019-05-02 Shenzhen Genorivision Technology Co. Ltd. Computing unit
WO2019200545A1 (en) * 2018-04-17 2019-10-24 深圳鲲云信息科技有限公司 Method for operation of network model and related product

Similar Documents

Publication Publication Date Title
CN201927073U (en) Programmable hardware BP (back propagation) neuron processor
CN102004446A (en) Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure
Li et al. Prediction for tourism flow based on LSTM neural network
CN110223517B (en) Short-term traffic flow prediction method based on space-time correlation
CN107578095B (en) Neural computing device and processor comprising the computing device
CN104145281A (en) Neural network computing apparatus and system, and method therefor
CN108764540A (en) Water supply network pressure prediction method based on parallel LSTM series connection DNN
CN109146156B (en) Method for predicting charging amount of charging pile system
CN104900063B (en) A kind of short distance running time Forecasting Methodology
CN111445010B (en) Distribution network voltage trend early warning method based on evidence theory fusion quantum network
WO2013170843A1 (en) Method for training an artificial neural network
CN106959937A (en) A kind of vectorization implementation method of warp product matrix towards GPDSP
CN109840628A (en) A kind of multizone speed prediction method and system in short-term
CN106068519A (en) For sharing the method and apparatus of the efficient realization of neuron models
CN107256423A (en) A kind of neural planar network architecture of augmentation and its training method, computer-readable recording medium
CN112633577A (en) Short-term household electrical load prediction method, system, storage medium and equipment
CN109635938A (en) A kind of autonomous learning impulsive neural networks weight quantization method
CN106067075B (en) Building energy load prediction model building and load prediction method and device
CN109412152B (en) Power grid loss calculation method based on deep learning and elastic network regularization
Varahrami Recognition of good prediction of gold price between MLFF and GMDH neural network
JPH02136034A (en) Optimal power load distribution system by neural network
Momoh et al. Artificial neural network based load forecasting
Lalis et al. Dynamic forecasting of electric load consumption using adaptive multilayer perceptron (AMLP)
Cheung et al. Adaptive rival penalized competitive learning and combined linear predictor with application to financial investment
Banik et al. Modeling chaotic behavior of Dhaka stock market index values using the neuro-fuzzy model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110406