CN201927073U - Programmable hardware BP (back propagation) neuron processor - Google Patents

Programmable hardware BP (back propagation) neuron processor

Info

Publication number
CN201927073U
CN201927073U CN2010206318443U CN201020631844U
Authority
CN
China
Prior art keywords
weights
memory
counter
neuron
register group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010206318443U
Other languages
Chinese (zh)
Inventor
黄晞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Normal University
Original Assignee
Fujian Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Normal University filed Critical Fujian Normal University
Priority to CN2010206318443U priority Critical patent/CN201927073U/en
Application granted granted Critical
Publication of CN201927073U publication Critical patent/CN201927073U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

The utility model relates to a neuron processor in a programmable hardware BP (back propagation) neural network, comprising an arithmetic unit, a weight memory, an error propagation factor memory, a special register group, a control register group, a microcontroller, a weight counter, a layer counter and a training batch counter. The neuron processor executes the operations of a single node in each layer of the BP neural network. By programming the control register group, the microcontroller can direct the arithmetic unit to execute, on the neuron node, any of three classical BP learning algorithms: the standard BP algorithm, the additional momentum term algorithm and the adaptive learning rate algorithm. Multiple neuron processors can be connected in series to realize pipelined operation. The neuron processor is highly flexible and practical and is suitable for embedded hardware BP neural network applications.

Description

A programmable hardware BP neuron processor
Technical field
The utility model relates to the field of hardware implementation of neural networks, in particular to the hardware implementation of BP neural network operations, and specifically to a neuron processor in a BP neural network.
Background technology
Artificial neural networks are widely used in fields such as intelligent control and pattern recognition, and among them the BP neural network is the most widely applied. The BP neural network has a variety of learning algorithms to satisfy different application demands. However, the traditional software implementation on a general-purpose processor suffers from a low degree of parallelism; in the embedded application field in particular, its computing speed cannot satisfy on-site real-time demands. Hardware neural network implementations satisfy the requirement of parallel computation, but existing hardware implementations lack flexibility. The device of the utility model realizes the operations of three typical BP learning algorithms on a neuron node by programming the control register group of the neuron processor. Multiple neuron processors can be connected in series to realize pipelined operation, which both satisfies the requirement of parallel computation and improves flexibility and applicability.
Summary of the invention
The problem to be solved by the utility model is how, in the embedded application field, a single neuron node of a hardware BP neural network can perform the computations of its different operation stages and realize different learning algorithms. The processor is programmable, highly flexible and widely applicable.
The programmable hardware BP neuron processor provided by the utility model is composed of a local data bus 1, a special register group 2, a control register group 3, an arithmetic unit 4, a weight memory 5, an error propagation factor memory 6, a weight counter 7, a layer counter 8, a training batch counter 9, a microcontroller 10 and an external control bus 11. The special register group 2, the control register group 3, the arithmetic unit 4, the weight memory 5 and the error propagation factor memory 6 are each connected to the local data bus 1. The microcontroller 10 is connected to the special register group 2, the control register group 3, the arithmetic unit 4, the training batch counter 9, the layer counter 8, the error propagation factor memory 6, the weight memory 5 and the weight counter 7 through microcontroller signal lines L1, L2, L3, L4, L5, L6, L7 and L8, respectively. The arithmetic unit 4 is connected to the weight memory 5 and to the error propagation factor memory 6 by independent buses.
In concrete use, the programmable hardware BP neuron processor 12 of the utility model operates under the control of an external master controller 13: it receives data from an external data memory 14 and passes its computation results to an external excitation function circuit 15, whose output is stored back into the external data memory 14.
A typical BP neural network consists of an input layer, several hidden layers and an output layer, each layer containing several neuron nodes. The arithmetic unit can execute the calculation of a single node in the input layer, each hidden layer and the output layer, for networks of up to 8 layers in total. The calculations comprise the net input calculation net of each layer, the output-layer error calculation error_o, the hidden-layer error calculation error_h, the error propagation factor calculation δ of each layer, and the weight adjustment calculation ΔW of each layer. The weight adjustment calculation ΔW depends on the particular learning algorithm; the utility model can realize three typical learning algorithms: the standard BP algorithm, the additional momentum term algorithm and the adaptive learning rate algorithm.
The weight memory stores the connection weights ω_ij from each input node to the node currently being computed in a layer, as well as the expected output values d_i of the output layer. The weights involved differ from layer to layer and can be loaded from external storage. During the weight adjustment stage, the weight memory also stores the weight adjustment amounts ΔW of each layer.
The error propagation factor memory stores the error propagation factors δ produced when the arithmetic unit performs the error propagation factor calculation.
The control register group consists of one control word register and eight per-layer neuron number registers. The control word register holds three classes of control information: a three-bit learning algorithm type code, a four-bit BP network layer count and a one-bit working mode flag. Each neuron number register stores the neuron count of one layer. The working mode flag indicates whether the arithmetic unit operates in the normal operation state or in the training state. By programming the control register group, networks of different scales and different learning algorithms can be selected.
The special register group consists of four registers, each storing one parameter used in the computation: the learning rate η, the additional momentum factor α of the momentum algorithm, and the learning rate adjustment factors β (0 < β < 1) and γ (γ > 1) of the adaptive learning rate algorithm.
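For concreteness, the following minimal Python sketch shows one way the control word and special register contents described above might be represented. The patent specifies only the field contents (a three-bit algorithm code, a four-bit layer count and a one-bit mode flag); the bit positions, the code values and the sample parameter values below are assumptions made purely for illustration.

ALGO_STANDARD_BP = 0b001   # hypothetical encodings; the patent does not
ALGO_MOMENTUM    = 0b010   # specify the actual code values
ALGO_ADAPTIVE_LR = 0b100

def pack_control_word(algo_code: int, num_layers: int, training: bool) -> int:
    """Pack the three control fields into one control word (assumed layout)."""
    assert algo_code < 8 and 1 <= num_layers <= 8
    return (algo_code << 5) | ((num_layers & 0xF) << 1) | int(training)

def unpack_control_word(word: int):
    """Recover (algorithm type code, layer count, working mode flag)."""
    return (word >> 5) & 0x7, (word >> 1) & 0xF, word & 0x1

# Special register group: one parameter per register (values are examples only).
special_registers = {
    "eta":   0.5,   # learning rate
    "alpha": 0.9,   # additional momentum factor
    "beta":  0.7,   # learning rate decrease factor, 0 < beta < 1
    "gamma": 1.05,  # learning rate increase factor, gamma > 1
}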
The weight counter counts the loop iterations of the arithmetic unit within the computation of each layer. When the weight counter reaches zero, the computation of the current stage is finished.
The layer counter records which layer the arithmetic unit is currently computing. When the layer counter reaches zero, the whole computation is finished.
The microcontroller generates the microcontroller signals that perform initialization and control the arithmetic unit through the operation stages. After receiving the initialization control signal from the external master controller, the microcontroller loads data in turn from the external data memory into the control register group, the special register group and the weight memory, and assigns the input-layer neuron count from the control register group to the weight counter. After initialization, the microcontroller enters the normal operation state or the training state according to the working mode flag of the control word register. In the normal operation state, the microcontroller directs the arithmetic unit to complete the neuron net input calculation and outputs the result to the external excitation function circuit. In the training state, the microcontroller directs the arithmetic unit to complete, in turn, the net input calculation net of each layer, the output-layer error calculation error_o, the error propagation factor calculation δ of each layer and the hidden-layer error calculation error_h, and selects the appropriate learning algorithm according to the learning algorithm type code in the control word register to complete the weight adjustment calculation ΔW of each layer. The weight counter is a down counter: after each calculation the microcontroller decrements it by 1, and when it reaches zero the microcontroller fetches the neuron count required for the next layer from the control register group, assigns it to the weight counter and begins the next calculation stage. The layer counter counts both up and down: during the feed-forward computation it is decremented by 1 after each finished layer, and when it reaches zero the whole computation is finished; during the error back-propagation computation it is incremented by 1 after each finished layer, and when its value equals that of the first hidden layer the microcontroller sends a computation-finished control signal to the external master controller and waits for a new operation control signal from the master controller.
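A behavioural sketch, in Python, of the counter-driven sequencing just described; the function and variable names are ours, and the multiply-accumulate step of the arithmetic unit is left abstract, so this is an illustration of the control flow under those assumptions rather than a definitive implementation.

def feedforward_sequencing(layer_neuron_counts, macc_step):
    """Sequence the per-layer loops the way the two counters do."""
    layer_counter = len(layer_neuron_counts)  # loaded at initialization
    for count in layer_neuron_counts:
        weight_counter = count                # reloaded from a neuron number register
        while weight_counter > 0:
            macc_step()                       # one loop of the arithmetic unit
            weight_counter -= 1               # minus-1 count after each calculation
        # weight counter at zero: this stage of the computation is finished
        layer_counter -= 1                    # feed-forward: minus 1 per finished layer
    # layer counter at zero: the whole computation is finished

During error back-propagation the same inner loop runs, but the layer counter is incremented per finished layer instead, and sequencing stops when it reaches the index of the first hidden layer.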
Description of drawings
Fig. 1 is a structural schematic diagram of the programmable hardware BP neuron processor of the utility model.
Fig. 2 is a structural schematic diagram of the programmable hardware BP neuron processor of the utility model together with its external components.
Fig. 3 is a schematic diagram of the special register group of the programmable hardware BP neuron processor of the utility model.
Fig. 4 is a schematic diagram of the control register group of the programmable hardware BP neuron processor of the utility model.
In Fig. 1, 1 is the local data bus, 2 the special register group, 3 the control register group, 4 the arithmetic unit, 5 the weight memory, 6 the error propagation factor memory, 7 the weight counter, 8 the layer counter, 9 the training batch counter, 10 the microcontroller and 11 the external control bus.
In Fig. 2, 12 is the programmable hardware BP neuron processor, 13 the external master controller, 14 the external data memory and 15 the excitation function circuit.
In Fig. 3, the numbers indicate the top-down coding of the registers in the special register group.
In Fig. 4, the numbers indicate the top-down coding of the registers in the control register group.
Embodiment
The programmable hardware BP neuron processor 12 provided by the utility model is composed of a local data bus 1, a special register group 2, a control register group 3, an arithmetic unit 4, a weight memory 5, an error propagation factor memory 6, a weight counter 7, a layer counter 8, a training batch counter 9, a microcontroller 10 and an external control bus 11.
The special register group 2, the control register group 3, the arithmetic unit 4, the weight memory 5 and the error propagation factor memory 6 are each connected to the local data bus 1.
The microcontroller 10 is connected to the special register group 2, the control register group 3, the arithmetic unit 4, the training batch counter 9, the layer counter 8, the error propagation factor memory 6, the weight memory 5 and the weight counter 7.
The control register group 3 consists of a control word register 20 and eight per-layer neuron number registers 21 to 28.
The special register group 2 consists of the learning rate register η 15, the additional momentum coefficient register α 17 of the momentum algorithm, and the learning rate adjustment factor registers β 18 (0 < β < 1) and γ 19 (γ > 1) of the adaptive learning rate algorithm.
The programmable hardware BP neuron processor 12 operates in one of two states: the normal operation state and the training state. The microcontroller 10 loads in turn from the external data memory 14 the control word, composed of the learning algorithm type code, the BP network layer count and the working mode flag, into the control word register of the control register group 3, and loads the neuron count of each layer of the BP network into the neuron number registers 21 to 28. It then determines the working state from the working mode flag in the control word register 20: when the flag is 0, the processor enters the normal operation state; when the flag is 1, it enters the training state.
In the normal operation state, the microcontroller 10 loads the trained input-layer connection weights from the external data memory 14 into the weight memory 5, writes the value of the input-layer neuron number register 21 into the weight counter 7, and writes the BP network layer count from the control word register 20 into the layer counter 8. After this initialization it starts the arithmetic unit 4, which reads sample data in turn from the external data memory 14 and performs the net input computation net of the next-layer neuron. When the weight counter 7 reaches zero, the calculation of the single node of this layer is finished: the result is output to the external excitation function circuit 15 and the layer counter 8 is decremented by 1. The microcontroller 10 then fetches the neuron count required for the next layer from the next neuron number register 22, assigns it to the weight counter 7, loads the trained connection weights of the next layer from the external data memory 14 into the weight memory 5, reads in turn the results computed by the excitation function circuit 15 for the previous layer from the external data memory 14, and begins the next layer's calculation stage. When the layer counter 8 reaches zero, the computation for the sample is finished; the microcontroller 10 sends a computation-finished control signal to the external master controller 13 and waits for a new operation control signal from the master controller 13.
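A minimal Python sketch of the single-node forward calculation that this flow repeats layer by layer. The activation function f stands in for the external excitation function circuit 15 (in the patent its output returns via the external data memory rather than directly), and the names and the sigmoid example are our assumptions.

import math

def node_forward(inputs, weights, f):
    """One node: net input over the trained weights, then the excitation function."""
    net = sum(w * x for w, x in zip(weights, inputs))  # loop paced by the weight counter
    return f(net)

# Example: the outputs of all nodes of one layer (each computed by its own
# neuron processor in the pipeline) become the inputs of the next layer.
sigmoid = lambda net: 1.0 / (1.0 + math.exp(-net))
hidden_out = node_forward([0.2, 0.7], [0.5, -0.3], sigmoid)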
In the training state, after the initialization and the net input calculation net are finished, the processor must also complete the output-layer error calculation error_o, the error propagation factor calculation δ of each layer, the hidden-layer error calculation error_h and the weight adjustment calculation ΔW of each layer.
The output-layer error calculation is error_o = d - o, where the desired output d is held in the weight memory 5 and the actual output value o in the external data memory 14; the result is written to the external data memory 14.
The error propagation factor calculation of each layer is δ = error_i × f′(net), where the layer's error value error_i and the derivative f′(net) of the excitation function at the net input are held in the external data memory 14. Under the control of the microcontroller 10 they are read into the arithmetic unit 4 for calculation, and the result is stored in the error propagation factor memory 6.
The hidden-layer error calculation is error_h = Σ δ × ω, where the error propagation factors δ are held in the error propagation factor memory 6 and the weights ω in the weight memory 5. Under the control of the microcontroller 10 they are read into the arithmetic unit 4 for calculation, and the result is written to the external data memory 14.
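The three training-state error quantities above, restated as a short Python sketch for clarity; the function names are ours, and f_prime_net stands for the derivative f′(net) that the patent keeps in the external data memory.

def output_error(d, o):
    """Output-layer error: error_o = d - o."""
    return d - o

def propagation_factor(error, f_prime_net):
    """Error propagation factor of one node: delta = error * f'(net)."""
    return error * f_prime_net

def hidden_error(deltas, weights):
    """Hidden-layer error: error_h = sum of delta * omega over the next layer."""
    return sum(d * w for d, w in zip(deltas, weights))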
The weight adjustment calculation ΔW of each layer depends on the learning algorithm. The microcontroller 10 reads the three-bit learning algorithm type code from the control word register 20; the three bits select among the standard BP algorithm, the additional momentum term algorithm and the adaptive learning rate algorithm.
The standard BP algorithm computes ΔW = η × δ × O, where the learning rate η is held in the learning rate register η 15, δ in the error propagation factor memory 6 and O in the external data memory 14. Under the control of the microcontroller 10 they are read into the arithmetic unit 4 for calculation, and the result is written to the weight memory 5.
The additional momentum term algorithm computes ΔW(t) = η × δ × O + α × ΔW(t-1), where α is held in the additional momentum coefficient register α 17 and the previous weight adjustment ΔW(t-1) in the weight memory 5. The operands are read into the arithmetic unit 4 under the control of the microcontroller 10, and the result is written to the weight memory 5.
The adaptive learning rate algorithm also computes ΔW = η × δ × O. After the network has undergone t weight adjustments, if the total error has risen, the adjustment is judged invalid: ΔW is not written to the weight memory 5 and the learning rate is reduced, η = β × η (0 < β < 1). If the total error has fallen, the adjustment is judged valid: ΔW is written to the weight memory 5 and the learning rate is increased, η = γ × η (γ > 1). Here β is held in the learning rate adjustment factor register β 18 and γ in the learning rate adjustment factor register γ 19. The training batch number t is read from the external data memory 14 into the training batch counter 9 by the microcontroller 10 during initialization; each weight adjustment decrements the training batch counter 9 by 1, and when it reaches zero the learning rate modification is performed.
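The three weight-adjustment rules selected by the learning algorithm type code, sketched in Python. The regs mapping stands in for the special register group, and the adaptive rule's total-error test and batch counting are compressed into a single boolean, so this illustrates the arithmetic of the three rules rather than the counter mechanics.

def delta_w_standard(regs, delta, O):
    """Standard BP: dW = eta * delta * O."""
    return regs["eta"] * delta * O

def delta_w_momentum(regs, delta, O, prev_dW):
    """Momentum: dW(t) = eta * delta * O + alpha * dW(t-1)."""
    return regs["eta"] * delta * O + regs["alpha"] * prev_dW

def delta_w_adaptive(regs, delta, O, total_error_rose):
    """Adaptive learning rate: an adjustment that raises the total error is
    discarded and eta shrinks; otherwise it is kept and eta grows."""
    dW = regs["eta"] * delta * O
    if total_error_rose:
        regs["eta"] *= regs["beta"]   # 0 < beta < 1; dW not written to weight memory
        return 0.0
    regs["eta"] *= regs["gamma"]      # gamma > 1
    return dW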
The programmable hardware BP neuron processor provided by the utility model can realize the operations of three typical BP learning algorithms on a neuron node by programming the control register group. Multiple neuron processors can be connected in series to realize pipelined operation, which both satisfies the requirement of parallel computation and improves flexibility and applicability, making the processor suitable for embedded hardware BP neural network applications.

Claims (4)

1. A programmable hardware BP neuron processor, characterized in that the processor is composed of a local data bus (1), a special register group (2), a control register group (3), an arithmetic unit (4), a weight memory (5), an error propagation factor memory (6), a weight counter (7), a layer counter (8), a training batch counter (9), a microcontroller (10) and an external control bus (11), wherein the special register group (2), the control register group (3), the arithmetic unit (4), the weight memory (5) and the error propagation factor memory (6) are each connected to the local data bus (1); the microcontroller (10) is connected to the special register group (2), the control register group (3), the arithmetic unit (4), the training batch counter (9), the layer counter (8), the error propagation factor memory (6), the weight memory (5) and the weight counter (7) through microcontroller signal lines L1, L2, L3, L4, L5, L6, L7 and L8, respectively; and the arithmetic unit (4) is connected to the weight memory (5) and to the error propagation factor memory (6) by independent buses.
2. The programmable hardware BP neuron processor according to claim 1, characterized in that the special register group (2) consists of four registers, each storing one parameter used in the computation: the learning rate η, the additional momentum factor α of the momentum algorithm, and the learning rate adjustment factors β (0 < β < 1) and γ (γ > 1) of the adaptive learning rate algorithm.
3. The programmable hardware BP neuron processor according to claim 1, characterized in that the control register group (3) consists of a control word register (20) and eight per-layer neuron number registers (21) to (28).
4. The programmable hardware BP neuron processor according to claim 1, characterized in that the arithmetic unit (4) is connected to the weight memory (5) and the error propagation factor memory (6).
CN2010206318443U 2010-11-25 2010-11-25 Programmable hardware BP (back propagation) neuron processor Expired - Fee Related CN201927073U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010206318443U CN201927073U (en) 2010-11-25 2010-11-25 Programmable hardware BP (back propagation) neuron processor


Publications (1)

Publication Number Publication Date
CN201927073U 2011-08-10

Family

ID=44430912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010206318443U Expired - Fee Related CN201927073U (en) 2010-11-25 2010-11-25 Programmable hardware BP (back propagation) neuron processor

Country Status (1)

Country Link
CN (1) CN201927073U (en)


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260776B (en) * 2015-09-10 2018-03-27 华为技术有限公司 Neural network processor and convolutional neural networks processor
CN105260776A (en) * 2015-09-10 2016-01-20 华为技术有限公司 Neural network processor and convolutional neural network processor
CN106485321A (en) * 2015-10-08 2017-03-08 上海兆芯集成电路有限公司 There is the processor of framework neutral net performance element
CN106485321B (en) * 2015-10-08 2019-02-12 上海兆芯集成电路有限公司 Processor with framework neural network execution unit
WO2017124642A1 (en) * 2016-01-20 2017-07-27 北京中科寒武纪科技有限公司 Device and method for executing forward calculation of artificial neural network
CN107766936A (en) * 2016-08-22 2018-03-06 耐能有限公司 Artificial neural networks, artificial neuron and the control method of artificial neuron
US10474586B2 (en) 2016-08-26 2019-11-12 Cambricon Technologies Corporation Limited TLB device supporting multiple data streams and updating method for TLB module
US11514291B2 (en) 2017-04-04 2022-11-29 Hailo Technologies Ltd. Neural network processing element incorporating compute and local memory elements
US11551028B2 (en) 2017-04-04 2023-01-10 Hailo Technologies Ltd. Structured weight based sparsity in an artificial neural network
US11675693B2 (en) 2017-04-04 2023-06-13 Hailo Technologies Ltd. Neural network processor incorporating inter-device connectivity
US11238334B2 (en) 2017-04-04 2022-02-01 Hailo Technologies Ltd. System and method of input alignment for efficient vector operations in an artificial neural network
US11238331B2 (en) 2017-04-04 2022-02-01 Hailo Technologies Ltd. System and method for augmenting an existing artificial neural network
US11615297B2 (en) 2017-04-04 2023-03-28 Hailo Technologies Ltd. Structured weight based sparsity in an artificial neural network compiler
US11216717B2 (en) 2017-04-04 2022-01-04 Hailo Technologies Ltd. Neural network processor incorporating multi-level hierarchical aggregated computing and memory elements
US11263512B2 (en) 2017-04-04 2022-03-01 Hailo Technologies Ltd. Neural network processor incorporating separate control and data fabric
US11354563B2 (en) 2017-04-04 2022-06-07 Hailo Technologies Ltd. Configurable and programmable sliding window based memory access in a neural network processor
US11461615B2 (en) 2017-04-04 2022-10-04 Hailo Technologies Ltd. System and method of memory access of multi-dimensional data
US11461614B2 (en) 2017-04-04 2022-10-04 Hailo Technologies Ltd. Data driven quantization optimization of weights and input data in an artificial neural network
US10387298B2 (en) 2017-04-04 2019-08-20 Hailo Technologies Ltd Artificial neural network incorporating emphasis and focus techniques
US11544545B2 (en) 2017-04-04 2023-01-03 Hailo Technologies Ltd. Structured activation based sparsity in an artificial neural network
US11263077B1 (en) 2020-09-29 2022-03-01 Hailo Technologies Ltd. Neural network intermediate results safety mechanism in an artificial neural network processor
US11237894B1 (en) 2020-09-29 2022-02-01 Hailo Technologies Ltd. Layer control unit instruction addressing safety mechanism in an artificial neural network processor
US11221929B1 (en) 2020-09-29 2022-01-11 Hailo Technologies Ltd. Data stream fault detection mechanism in an artificial neural network processor
US11811421B2 (en) 2020-09-29 2023-11-07 Hailo Technologies Ltd. Weights safety mechanism in an artificial neural network processor
US11874900B2 (en) 2020-09-29 2024-01-16 Hailo Technologies Ltd. Cluster interlayer safety mechanism in an artificial neural network processor

Similar Documents

Publication Publication Date Title
CN201927073U (en) Programmable hardware BP (back propagation) neuron processor
CN102004446A (en) Self-adaptation method for back-propagation (BP) nerve cell with multilayer structure
Li et al. Prediction for tourism flow based on LSTM neural network
WO2022135066A1 (en) Temporal difference-based hybrid flow-shop scheduling method
CN110223517B (en) Short-term traffic flow prediction method based on space-time correlation
Gotmare et al. Swarm and evolutionary computing algorithms for system identification and filter design: A comprehensive review
CN108764540B (en) Water supply network pressure prediction method based on parallel LSTM series DNN
CN104145281A (en) Neural network computing apparatus and system, and method therefor
CN104900063B (en) A kind of short distance running time Forecasting Methodology
CN106952161A (en) A kind of recent forward prediction method of stock based on shot and long term memory depth learning network
CN102682345A (en) Traffic flow prediction method based on quick learning neural network with double optimal learning rates
CN106802553A (en) A kind of railway locomotive operation control system hybrid tasks scheduling method based on intensified learning
CN106959937A (en) A kind of vectorization implementation method of warp product matrix towards GPDSP
CN108182490A (en) A kind of short-term load forecasting method under big data environment
CN112633577A (en) Short-term household electrical load prediction method, system, storage medium and equipment
CN109635938A (en) A kind of autonomous learning impulsive neural networks weight quantization method
CN115017817A (en) Method, system, terminal and medium for optimizing energy efficiency of refrigeration machine room
CN115940294A (en) Method, system, equipment and storage medium for adjusting real-time scheduling strategy of multi-stage power grid
CN107818380A (en) Information processing method and server
CN108459570B (en) Irrigation water distribution intelligent control system and method based on generation of confrontation network architecture
Li et al. Adaptive scheduling for smart shop floor based on deep Q-network
CN101334637A (en) Machine group loading forecast control method based on flow model
Cheung et al. Adaptive rival penalized competitive learning and combined linear predictor with application to financial investment
CN111563767A (en) Stock price prediction method and device
CN105512754A (en) Conjugate prior-based single-mode distribution estimation optimization method

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110810

Termination date: 20111125