CN108537335A - BP neural network algorithm with an adaptive learning rate - Google Patents

BP neural network algorithm with an adaptive learning rate

Info

Publication number
CN108537335A
CN108537335A
Authority
CN
China
Prior art keywords
learning rate
weights
error
neural network
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710127684.5A
Other languages
Chinese (zh)
Inventor
华雨
王晓鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2017-03-06
Publication date: 2018-09-14
Application filed by Nanjing University of Science and Technology
Priority to CN201710127684.5A
Publication of CN108537335A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a BP neural network algorithm with an adaptive learning rate, in which the back-propagation stage of the BP neural network algorithm is optimized: a different learning rate is dynamically adapted to the connection weight of each neuron, ensuring that the search along each weight's direction can reach the optimal solution in that direction, so that the efficiency of the weight adjustment is improved to the greatest extent and the convergence of the whole training process is accelerated. The first step: network initialization; the second step: sample input; the third step: forward propagation; the fourth step: result judgment; the fifth step: back-propagation. The program is simple and easy to implement, the improvement effect is good, the training time of the BP neural network is greatly shortened, the tendency of existing algorithms to become trapped in local minima is effectively overcome, the applicability is strong, and the algorithm has good practical application value.

Description

BP neural network algorithm with an adaptive learning rate
Technical field
The invention belongs to the field of artificial intelligence, and in particular to a BP neural network algorithm with an adaptive learning rate.
Background technology
Artificial neural networks are one of the research hotspots in the artificial-intelligence branch of modern science. By analyzing the network structure of the human nervous system and the working mechanism of the brain, an artificial neural network establishes a mathematical network model that imitates the working of the human brain, receiving and processing information in real time and storing it. The BP (back-propagation) neural network is the most commonly used algorithm among intelligent networks and one of the most mature network structures under current research. Because of its strong self-learning, self-organizing, adaptive, associative-memory and fault-tolerance abilities, it is very widely applied, covering almost every aspect of life, including information processing, automatic control, economics, medicine, transportation, science and technology, aerospace, psychology and so on.
However, the existing BP algorithm still has certain problems in application: the convergence time is long, and it easily becomes trapped in local minima. The improvements proposed for these defects are mainly heuristic methods and methods based on numerical optimization, but both have certain limitations. Heuristic improvements are simple to program and undemanding to implement, but their effect is modest; improvements based on numerical optimization work better, but their programs are complicated and their implementation requirements are high.
Invention content
The technical problem solved by the invention is to provide a BP neural network algorithm with an adaptive learning rate, so as to solve the problems that the existing BP algorithm has a long training time and easily becomes trapped in local minima. At the same time, the adaptive-learning-rate BP neural network algorithm of the invention keeps the complexity of the algorithm low and is easy to implement.
The technical solution that realizes the object of the invention is as follows:
A BP neural network algorithm with an adaptive learning rate, specifically comprising the following steps:
The first step: network initialization;
Initialize the weight matrix U from input layer I to hidden layer J and the weight matrix V from hidden layer J to output layer P; determine the number of neurons in each layer and the transfer function of the hidden layer; set the desired output E and the expected convergence precision Es;
The second step: sample input;
Input the sample data to be trained into the algorithm program in sequence;
The third step: forward propagation;
Calculate the error value E(n) between the actual output and the desired output;
The fourth step: result judgment;
Judge whether the error value E(n) meets the requirement, i.e. E(n) < Es; if it does, output the result and end the training; otherwise go to back-propagation;
The fifth step: back-propagation;
That is, find the optimal solution along the gradient direction computed in the current iteration by adjusting the learning rate. Taking an arbitrary weight w_i in an arbitrary weight matrix W as an example, the specific adjustment method is as follows. First judge the gradient ∂E(n)/∂w_i(n) of w_i: if the gradient is 0, i.e. ∂E(n)/∂w_i(n) = 0, then move on to the next weight w_{i+1}, whose adjustment method is the same as that of w_i; if it is not 0, adjust according to formulas (1) and (2):

Δw_i(n) = -η_i(n) · ∂E(n)/∂w_i(n)    (1)

w_i(n+1) = w_i(n) + Δw_i(n)    (2)

In the formulas, Δw_i(n) denotes the weight change of the adjustment, and η_i(n) is the learning rate of the weight at the n-th adjustment. The learning rate is then adapted according to whether the weight w_i(n+1) obtained after the adjustment, i.e. the weight of the (n+1)-th iteration, is closer to the optimal solution than the weight w_i(n) of the n-th iteration.
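As a minimal illustration of formulas (1) and (2), the update of a single weight can be sketched in Matlab as follows. The patent's own program appears only in the figures; the names w, dw, eta and grad_E here are illustrative placeholders, not taken from that program.

    % Sketch of the update rule (1)-(2) for one weight w(i).
    % grad_E(i) holds dE(n)/dw_i(n) from back-propagation;
    % eta(i) is the per-weight learning rate eta_i(n).
    if grad_E(i) == 0
        % Zero gradient: leave w(i) unchanged and adjust w(i+1) next.
    else
        dw(i) = -eta(i) * grad_E(i);   % formula (1): negative gradient step
        w(i)  = w(i) + dw(i);          % formula (2): apply the weight change
    end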
Compared with the prior art, the present invention has the following remarkable advantages:
(1) By automatically adapting a different learning rate to each neuron, the training time is greatly shortened.
(2) The algorithm program is relatively simple and easy to implement.
(3) The versatility is strong, and the algorithm can cope with complex network structures.
The present invention is described in further detail below in conjunction with the accompanying drawings.
Description of the drawings
Fig. 1 is the flow chart of the adaptive-learning-rate BP neural network algorithm.
Fig. 2 is the network structure of the adaptive-learning-rate BP neural network algorithm.
Fig. 3 is the initial parameter setting code.
Fig. 4 is the code that generates the initial network weights.
Fig. 5 is part of the back-propagation code.
Fig. 6 is the error convergence curve of the adaptive-learning-rate BP neural network algorithm.
Specific implementation mode
In the adaptive-learning-rate BP neural network algorithm of the present invention, during the back-propagation stage of neural network training a different learning rate is dynamically adapted to the connection weight of each neuron, so that the search along each direction can reach the optimal solution in that direction. This improves the efficiency of the weight adjustment to the greatest extent and accelerates the convergence of the whole training process; at the same time, the continuously adjusted learning rate ensures that the algorithm does not become trapped in local minima during training. Since multi-layer neural networks all work on the same principle, the present invention takes a three-layer BP neural network as an example to illustrate the adaptive-learning-rate BP neural network algorithm. With reference to Fig. 1 and Fig. 2, the method of the present invention specifically comprises the following steps:
The first step: network initialization;
Initialize the weight matrix U from input layer I to hidden layer J and the weight matrix V from hidden layer J to output layer P; determine the number of neurons in each layer and the transfer function of the hidden layer; set the desired output E and the expected convergence precision Es.
The second step: sample input;
Input the sample data to be trained into the algorithm program in sequence;
The third step: forward propagation;
Calculate the error value E(n) between the actual output and the desired output (this step is consistent with the existing standard BP algorithm);
The fourth step: result judgment;
Judge whether the error value E(n) meets the requirement, i.e. E(n) < Es; if it does, output the result and end the training; otherwise go to back-propagation;
The fifth step: back-propagation;
That is, find the optimal solution along the gradient direction computed in the current iteration by adjusting the learning rate. Taking an arbitrary weight w_i in an arbitrary weight matrix W as an example, the specific adjustment method is as follows. First judge the gradient ∂E(n)/∂w_i(n) of w_i: if the gradient is 0, i.e. ∂E(n)/∂w_i(n) = 0, then move on to the next weight w_{i+1}, whose adjustment method is the same as that of w_i; if it is not 0, adjust according to formulas (1) and (2):

Δw_i(n) = -η_i(n) · ∂E(n)/∂w_i(n)    (1)

w_i(n+1) = w_i(n) + Δw_i(n)    (2)

In the formulas, Δw_i(n) denotes the weight change of the adjustment, and η_i(n) is the learning rate of the weight at the n-th adjustment. According to whether the weight w_i(n+1) obtained after the adjustment, i.e. the weight of the (n+1)-th iteration, is closer to the optimal solution than the weight w_i(n) of the n-th iteration, two cases are distinguished:
5.1 If the weight w_i(n+1) obtained after the adjustment is closer to the optimal solution than w_i(n), i.e. the error decreases, then temporarily increase the learning rate by letting it double:

η_i^(1)(n) = 2 · η_i(n)    (3)

Along the negative gradient direction of Δw_i(n), recalculate the weight with the learning rate η_i^(1)(n) obtained after the first doubling, yielding a new weight w_i^(1)(n+1). If the error of w_i^(1)(n+1) after forward propagation is smaller than that of w_i(n+1), then continue to increase the learning rate according to formula (3) to obtain the learning rate η_i^(2)(n) after the second doubling, calculate the weight w_i^(2)(n+1) by the same method, and compare again. Continue until the weight w_i^(a)(n+1) obtained after the a-th calculation has a larger error than the weight w_i^(a-1)(n+1) of the previous calculation; then stop calculating and let:

η_i(n) = η_i^(a-1)(n)    (4)

Then jump to the third step and carry out the next iteration.
5.2 If the error after the adjustment is further from the optimal solution, i.e. the error increases instead, then temporarily reduce the learning rate; here we let the learning rate halve:

η_i^(1)(n) = η_i(n) / 2    (5)

Likewise, along the negative gradient direction of w_i(n), recalculate Δw_i(n) with the learning rate η_i^(1)(n) obtained after the first halving and obtain a new weight w_i^(1)(n+1). If the new weight is still worse than w_i(n), i.e. its error is larger than that of w_i(n), then continue to reduce the learning rate according to formula (6) to obtain the learning rate after the second halving:

η_i^(2)(n) = η_i^(1)(n) / 2    (6)

Continue the above calculation until the weight w_i^(q)(n+1) obtained after the q-th calculation is better than w_i(n); then stop calculating and let:

η_i(n) = η_i^(q)(n)    (7)

Then jump to the third step and carry out the next iteration.
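Cases 5.1 and 5.2 together amount to a simple per-weight line search along the negative gradient direction. The following Matlab sketch, written for this description rather than taken from the patent's program, shows one way to implement the whole fifth step for a single weight; forward_error is an assumed helper that runs forward propagation with a candidate weight value and returns the error E, and the limit of 50 halvings is an added safety guard not specified in the patent.

    function [w_new, eta] = adapt_weight(w, grad, eta, forward_error)
    % One adaptive-learning-rate update following steps 5.1/5.2.
    E_old = forward_error(w);
    w_new = w - eta * grad;                 % formulas (1)-(2)
    E_new = forward_error(w_new);
    if E_new < E_old
        % Case 5.1: the step reduced the error. Keep doubling the
        % learning rate (formula (3)) while each doubled step reduces
        % the error further, then keep the last improving rate (4).
        eta_try = 2 * eta;
        w_try   = w - eta_try * grad;
        E_try   = forward_error(w_try);
        while E_try < E_new
            eta = eta_try; w_new = w_try; E_new = E_try;
            eta_try = 2 * eta;
            w_try   = w - eta_try * grad;
            E_try   = forward_error(w_try);
        end
    else
        % Case 5.2: the step increased the error. Halve the learning
        % rate (formulas (5)-(6)) until the step improves on w (7).
        q = 0;
        while E_new >= E_old && q < 50      % guard against endless halving
            eta   = eta / 2;
            w_new = w - eta * grad;
            E_new = forward_error(w_new);
            q = q + 1;
        end
    end
    end

In the full algorithm such a routine would be applied to every element of the weight matrices U and V, each element carrying its own learning rate, after which control returns to the third step for the next iteration.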
Embodiment 1:
A simulation experiment based on the BP neural network algorithm was programmed in Matlab. A set of irregular data to be corrected was imported into a program implementing the improved BP algorithm proposed by the present invention and trained, and the correction results were compared with those obtained by existing neural network algorithms.
The structure of the neural network must be set before training starts, first of all the number of hidden layers and the number of neurons in each layer. The choice of hidden layers affects the training precision and the training time: choosing too many hidden layers makes the gradient of the error surface unstable and gives the network more local valleys, so that the network becomes trapped in local minima more easily during training; if too few hidden layers are chosen, then, since the error that each hidden layer can correct through its weight adjustment has a certain limit, the precision required by the experiment may be difficult to reach. According to the needs of this experiment, the middle part of the network is chosen as a two-hidden-layer structure. The number of neurons in the hidden layers is also important and generally depends on the number of training samples. The larger the sample, the larger the error produced in training, which requires enough neurons: if the number is insufficient, the network's ability to extract information from the samples is inadequate, it may become trapped in local minima during training, and sometimes it cannot be trained at all; if the number of neurons is too large, linear overlap appears during training, the result obtained is not necessarily better, and the convergence speed is reduced. Determining the optimal number of hidden-layer neurons is therefore itself a problem; here, according to the demands of this experiment, two hidden layers containing 40 and 35 neurons respectively are set.
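The experiment's actual setup code appears only as figure images in the original document. As a rough sketch of the structure just described, the same topology could be configured with Matlab's Neural Network Toolbox (assumed available here; feedforwardnet is not the patent's own code):

    % Two hidden layers with 40 and 35 neurons, as chosen for this experiment.
    net = feedforwardnet([40 35]);
    net.layers{1}.transferFcn = 'tansig';  % tangential Sigmoid, hidden layer 1
    net.layers{2}.transferFcn = 'tansig';  % tangential Sigmoid, hidden layer 2
    net.trainParam.goal   = 1e-4;          % expected convergence precision Es
    net.trainParam.epochs = 500;           % maximum number of training iterations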
Next, the transfer function must be chosen and the initial parameters set. The transfer function is the important function that controls the final output of the network, and the derivative of the error function is one of the important quantities used to compute the gradient during the backward adjustment of the error. The most commonly used transfer functions at present are the Sigmoid functions, which come in two forms, the logarithmic Sigmoid function and the tangential Sigmoid function, mapping any input into the interval (0, 1) and the interval (-1, +1) respectively. Their expressions are:

f(x) = 1 / (1 + e^(-λx))    (logarithmic Sigmoid, range (0, 1))

f(x) = (1 - e^(-λx)) / (1 + e^(-λx))    (tangential Sigmoid, range (-1, +1))
In the formulas, λ is a constant whose variation has a certain influence on network training: the larger λ is, the faster the convergence, but the more serious the network oscillation also becomes, and when the network is severely unstable it cannot be trained at all. Adjusting the value of λ in search of the best Sigmoid shape is therefore itself a research direction, but simply increasing λ to improve the convergence speed is not workable. Here the tangential Sigmoid function is chosen as the transfer function of the hidden layers. Regarding the choice of convergence precision and maximum number of iterations: the higher the required precision, the longer the convergence time; considering the experiment time, the convergence precision used for comparison is set to 0.0001 and the corresponding maximum number of training iterations to 500. The initial parameter setting code is shown in Fig. 3.
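A minimal sketch of the two Sigmoid forms with the shape constant λ, written as anonymous Matlab functions for this description (the value lambda = 1 is purely illustrative):

    lambda   = 1;                                   % illustrative shape constant
    logsig_l = @(x) 1 ./ (1 + exp(-lambda .* x));   % logarithmic form, range (0, 1)
    tansig_l = @(x) (1 - exp(-lambda .* x)) ./ (1 + exp(-lambda .* x));  % tangential form, range (-1, +1)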
Although the choice of the initial network weights does not affect the convergence precision of the network, it does affect the probability that the network becomes trapped in a local minimum during convergence. It is known from the prior art that, other conditions being equal, initial weights taken in the interval [-0.5, +0.5] work best, so in this experiment the initial weights are chosen randomly in this interval. Since the improved algorithm subsequently adjusts the learning rate in detail, the initial learning rates are likewise chosen randomly in the interval [0, 1]. The code that generates the initial weights is shown in Fig. 4.
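A sketch of this initialisation, under the assumption that the two hidden layers have 40 and 35 neurons as stated above; nIn and nOut are hypothetical placeholders for the input and output dimensions of the training data (the patent's actual generating code appears only in Fig. 4):

    U     = rand(40, nIn)  - 0.5;   % input layer I  -> hidden layer 1 (40 neurons)
    W2    = rand(35, 40)   - 0.5;   % hidden layer 1 -> hidden layer 2 (35 neurons)
    V     = rand(nOut, 35) - 0.5;   % hidden layer 2 -> output layer P
    etaU  = rand(size(U));          % one learning rate per weight, drawn from [0, 1]
    etaW2 = rand(size(W2));
    etaV  = rand(size(V));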
The program then carries out forward propagation and back-propagation, correcting the sample error by adjusting the learning rate of each neuron in each layer. Part of the code of this stage is shown in Fig. 5.
After the required precision is reached, the training result is output. The present invention takes the average of 10 correction calculations as the experimental result; the convergence times and iteration counts of the 10 training runs converging to an error precision of 0.0001 are shown in Table 1.
Table 1. Training time of the adaptive-learning-rate BP neural network algorithm
As can be seen from Table 1, the training time needed by the improved adaptive-learning-rate BP neural network algorithm to train to a precision of 0.0001 is basically stable at about 10 s, and the number of iterations is basically stable at about 50. The convergence speed is very fast, and throughout the whole simulation experiment the algorithm never stagnated or became trapped in a local minimum, which is sufficient to prove that the improved algorithm also overcomes this weakness well. The error convergence curve of the adaptive-learning-rate BP neural network correction algorithm is shown in Fig. 6.
As can be seen in Fig. 6, the adaptive-learning-rate BP neural network algorithm converges the error to about 0.01 after 5 iterations and to about 0.0001 after about 50 iterations; the whole training process is fast and steady, without obvious oscillation. If the error precision is set to 10^-10 and the maximum number of iterations to 5000, the adaptive-learning-rate BP neural network algorithm can still reach it; it merely needs a longer training time.
Next, the correction results of the adaptive-learning-rate BP algorithm are compared with those of the correction algorithms of other existing neural networks. After the experimental data were run through the respective programs, again taking the average of ten runs, the comparison is as shown in Table 2.
Table 2. Effect comparison of various BP correction algorithms
As can be seen from Table 2, the convergence of the existing standard BP algorithm is very slow: converging the error precision to 0.0009 takes about 51 seconds on average and a large number of iterations, and its defects have already been analyzed in detail above. The additional-momentum method improves slightly on the standard BP algorithm: at an error precision of 0.0009 the convergence time is 42 seconds and the number of iterations is also reduced, but the effect is clearly not pronounced, the convergence speed is still a problem, and the choice of the momentum coefficient in this algorithm always carries a certain uncertainty. The improvement of the traditional adaptive-step-size method is also not very obvious: at an error precision of 0.0001 the mean convergence time is 98 seconds; although this is a considerable change compared with the standard BP algorithm, and the programming is simple and easy to operate, the convergence speed still leaves room for improvement. The resilient BP algorithm is the best of the three heuristic improvements: it shortens the convergence time well, converging the error to 0.0001 in about 24 seconds, but its simulation requires a large amount of computation and is a test of the computer's performance. The three methods based on numerical optimization all improve the convergence time well, but their algorithmic complexity is always a problem. The improvement of the quasi-Newton method is very obvious: at an error precision of 0.0001 it needs only about 16 seconds of training time, better even than the resilient BP algorithm, the best of the heuristic improvements; but its defect is also clear: it needs to compute the Hessian matrix, its program is extremely complicated, and its practicality is not very high. The improvement of the conjugate gradient method is also very good, needing only about 12 seconds of convergence time at an error precision of 0.0001, but it needs to compute second-derivative information of the function and its program is also relatively complicated. It deserves particular attention that the LM algorithm shortens the convergence time best of all the BP neural network algorithms, needing only 2.18 seconds and 104 iterations on average to complete training to an error precision of 0.0001; but, likewise, it needs to compute the second derivative of the error function and produces many intermediate matrices during the calculation, occupying a large amount of memory. The LM algorithm is therefore the better choice when the BP network structure is relatively simple and the network parameters are few, while for more complex network structures its implementation is difficult. Finally, the improvement of the adaptive-learning-rate BP neural network algorithm proposed by the present invention is also very prominent: it converges the error function to 0.0001 in about 10 seconds, greatly shortening the convergence time compared with the standard BP algorithm. It should be pointed out that the principle of this algorithm remains the gradient descent of the original BP algorithm, the algorithm flow is relatively simple, and it does not need to compute the second derivative of the error function, yet its improvement is already better than the numerically improved quasi-Newton and conjugate gradient methods and second only to the LM algorithm. And since the program is simple and occupies little memory during calculation, it remains applicable to complex network structures and its applicability is better.

Claims (2)

1. A BP neural network algorithm with an adaptive learning rate, characterized in that it specifically comprises the following steps:
The first step: network initialization;
Initialize the weight matrix U from input layer I to hidden layer J and the weight matrix V from hidden layer J to output layer P; determine the number of neurons in each layer and the transfer function of the hidden layer; set the desired output E and the expected convergence precision Es;
The second step: sample input;
Input the sample data to be trained into the algorithm program in sequence;
The third step: forward propagation;
Calculate the error value E(n) between the actual output and the desired output;
The fourth step: result judgment;
Judge whether the error value E(n) meets the requirement, i.e. E(n) < Es; if it does, output the result and end the training; otherwise go to back-propagation;
The fifth step: back-propagation;
That is, find the optimal solution along the gradient direction computed in the current iteration by adjusting the learning rate. Taking an arbitrary weight w_i in an arbitrary weight matrix W as an example, the specific adjustment method is as follows. First judge the gradient ∂E(n)/∂w_i(n) of w_i: if the gradient is 0, i.e. ∂E(n)/∂w_i(n) = 0, then move on to the next weight w_{i+1}, whose adjustment method is the same as that of w_i; if it is not 0, adjust according to formulas (1) and (2):

Δw_i(n) = -η_i(n) · ∂E(n)/∂w_i(n)    (1)

w_i(n+1) = w_i(n) + Δw_i(n)    (2)

In the formulas, Δw_i(n) denotes the weight change of the adjustment, and η_i(n) is the learning rate of the weight at the n-th adjustment; the learning rate is adapted according to whether the weight w_i(n+1) obtained after the adjustment, i.e. the weight of the (n+1)-th iteration, is closer to the optimal solution than the weight w_i(n) of the n-th iteration.
2. The BP neural network algorithm with an adaptive learning rate according to claim 1, characterized in that the back-propagation of the fifth step is handled in two cases, specifically:
5.1 If the weight w_i(n+1) obtained after the adjustment is closer to the optimal solution than w_i(n), i.e. the error decreases, then temporarily increase the learning rate by letting it double:

η_i^(1)(n) = 2 · η_i(n)    (3)

Along the negative gradient direction of Δw_i(n), recalculate the weight with the learning rate η_i^(1)(n) obtained after the first doubling, yielding a new weight w_i^(1)(n+1); if the error of w_i^(1)(n+1) after forward propagation is smaller than that of w_i(n+1), then continue to increase the learning rate according to formula (3) to obtain the learning rate η_i^(2)(n) after the second doubling, calculate the weight w_i^(2)(n+1) by the same method, and compare again; continue until the weight w_i^(a)(n+1) obtained after the a-th calculation has a larger error than the weight w_i^(a-1)(n+1) of the previous calculation, then stop calculating and let:

η_i(n) = η_i^(a-1)(n)    (4)

Then jump to the third step and carry out the next iteration;
5.2 If the error after the adjustment is further from the optimal solution, i.e. the error increases instead, then temporarily reduce the learning rate; here we let the learning rate halve:

η_i^(1)(n) = η_i(n) / 2    (5)

Likewise, along the negative gradient direction of w_i(n), recalculate Δw_i(n) with the learning rate η_i^(1)(n) obtained after the first halving and obtain a new weight w_i^(1)(n+1); if the new weight is still worse than w_i(n), i.e. its error is larger than that of w_i(n), then continue to reduce the learning rate according to formula (6) to obtain the learning rate after the second halving:

η_i^(2)(n) = η_i^(1)(n) / 2    (6)

Continue the above calculation until the weight w_i^(q)(n+1) obtained after the q-th calculation is better than w_i(n), then stop calculating and let:

η_i(n) = η_i^(q)(n)    (7)

Then jump to the third step and carry out the next iteration.
CN201710127684.5A 2017-03-06 2017-03-06 BP neural network algorithm with an adaptive learning rate Pending CN108537335A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710127684.5A CN108537335A (en) 2017-03-06 2017-03-06 BP neural network algorithm with an adaptive learning rate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710127684.5A CN108537335A (en) 2017-03-06 2017-03-06 BP neural network algorithm with an adaptive learning rate

Publications (1)

Publication Number Publication Date
CN108537335A 2018-09-14

Family

ID=63489517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710127684.5A Pending CN108537335A (en) 2017-03-06 2017-03-06 BP neural network algorithm with an adaptive learning rate

Country Status (1)

Country Link
CN (1) CN108537335A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409501A (en) * 2018-09-25 2019-03-01 北京工业大学 A kind of Neural network optimization with characteristic of oblivion of imitative brain
CN109459609B (en) * 2018-10-17 2020-10-13 北京机械设备研究所 Distributed power supply frequency detection method based on artificial neural network
CN109459609A (en) * 2018-10-17 2019-03-12 北京机械设备研究所 A kind of distributed generation resource frequency detecting method based on artificial neural network
CN109829054A (en) * 2019-01-17 2019-05-31 齐鲁工业大学 A kind of file classification method and system
CN109818964A (en) * 2019-02-01 2019-05-28 长沙市智为信息技术有限公司 A kind of ddos attack detection method, device, equipment and storage medium
CN109818964B (en) * 2019-02-01 2021-12-07 长沙市智为信息技术有限公司 DDoS attack detection method, device, equipment and storage medium
CN110596199A (en) * 2019-09-02 2019-12-20 安徽康佳同创电器有限公司 Electronic nose, smell identification method and storage medium
CN111024776A (en) * 2019-12-19 2020-04-17 安徽康佳同创电器有限公司 Electronic nose, smell identification method and storage medium
CN111598460A (en) * 2020-05-18 2020-08-28 武汉轻工大学 Method, device and equipment for monitoring heavy metal content in soil and storage medium
CN111598460B (en) * 2020-05-18 2023-09-29 武汉轻工大学 Method, device, equipment and storage medium for monitoring heavy metal content of soil
CN111639762A (en) * 2020-05-22 2020-09-08 河北工业大学 Lower limb artificial limb gait recognition method based on self-organizing neural network
CN111797750A (en) * 2020-06-30 2020-10-20 江苏省特种设备安全监督检验研究院 Elevator car sill and well inner surface distance measuring method based on BP neural network
CN111814965A (en) * 2020-08-14 2020-10-23 Oppo广东移动通信有限公司 Hyper-parameter adjusting method, device, equipment and storage medium
CN112926727A (en) * 2021-02-10 2021-06-08 北京工业大学 Solving method for local minimum value of single hidden layer ReLU neural network
CN112926727B (en) * 2021-02-10 2024-02-27 北京工业大学 Solving method for local minimum value of single hidden layer ReLU neural network
CN112861991A (en) * 2021-03-09 2021-05-28 中山大学 Learning rate adjusting method for neural network asynchronous training
CN112861991B (en) * 2021-03-09 2023-04-14 中山大学 Learning rate adjusting method for neural network asynchronous training

Similar Documents

Publication Publication Date Title
CN108537335A (en) BP neural network algorithm with an adaptive learning rate
Islam et al. A constructive algorithm for training cooperative neural network ensembles
CN108596212B (en) Transformer fault diagnosis method based on improved cuckoo search optimization neural network
US10832123B2 (en) Compression of deep neural networks with proper use of mask
US10762426B2 (en) Multi-iteration compression for deep neural networks
US20190050734A1 (en) Compression method of deep neural networks
CN112101530B (en) Neural network training method, device, equipment and storage medium
CN107679617A (en) The deep neural network compression method of successive ignition
CN110110862A (en) A kind of hyperparameter optimization method based on adaptability model
EP3121767A1 (en) Methods and systems for implementing deep spiking neural networks
KR102037279B1 (en) Deep learning system and method for determining optimum learning model
CN111079899A (en) Neural network model compression method, system, device and medium
CN113065631A (en) Parameter optimization method based on improved competition group algorithm
CN110738362A (en) method for constructing prediction model based on improved multivariate cosmic algorithm
CN111625998A (en) Method for optimizing structure of laminated solar cell
CN115952456A (en) Method, system, program product and storage medium for determining fault diagnosis model
Jaddi et al. Taguchi-based parameter designing of genetic algorithm for artificial neural network training
CN107578101B (en) Data stream load prediction method
CN114880806A (en) New energy automobile sales prediction model parameter optimization method based on particle swarm optimization
CN106896724B (en) Tracking system and tracking method for sun tracker
CN117670586A (en) Power grid node carbon factor prediction method and system based on graph neural network
CN117808120A (en) Method and apparatus for reinforcement learning of large language models
CN115983366A (en) Model pruning method and system for federal learning
CN108090564A (en) Based on network weight is initial and the redundant weighting minimizing technology of end-state difference
Leoshchenko et al. Adaptive Mechanisms for Parallelization of the Genetic Method of Neural Network Synthesis

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180914)