CN108596078A - Sea noise signal recognition method based on a deep neural network - Google Patents
- Publication number: CN108596078A
- Application number: CN201810361731.7A
- Authority: CN (China)
- Prior art keywords: weights, sea noise, deep neural network, signal, layer
- Prior art date: 2018-04-20
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses a sea noise signal recognition method based on a deep neural network (DNN). The method builds a DNN model and continually trains and updates the weights of every layer of neurons through forward operation and back-propagation, obtaining classification weights that can distinguish different types of sea noise signals, thereby recognizing them. The method uses a deep belief network to pre-train the initial weights of the DNN; the resulting weights serve as the initial weights for deep neural network training, after which the data are trained, so that the different types of sea noise signals are recognized. Because the initial values are produced by deep neural network and deep belief network training rather than chosen at random, the test results are highly accurate and meet high-precision recognition requirements.
Description
Technical field
The invention belongs to the field of weak signal detection, and in particular relates to a sea noise signal recognition method based on a deep neural network.
Background art
In practical engineering applications, sea noise signals are present in large quantities in the surrounding marine environment, yet they have been studied very little. Past classification research has relied on classical methods such as SVM and stochastic resonance, and most of these methods filter out the bulk of the noise and then extract a specific useful signal. In reality, however, the complete signal is not necessarily without value, and its classification is of high value in certain fields.
Classification of different sea noise signals can be learned with a deep learning algorithm: a DNN deep neural network with five layers in total is established and trained on the different types of sea noise signals. By continually training and updating the weights of every layer of neurons through forward operation and back-propagation, classification weights are obtained that can distinguish the different types of sea noise signals, so that the corresponding signals can finally be told apart and recognized.
Previous deep learning algorithms, however, were all built on randomly chosen initial values. In practical training, such initial values often produce large errors, so that the final result has low similarity to the actual result. The low training precision of DNN deep learning networks under random initial values is therefore a problem in urgent need of a solution.
Summary of the invention
The purpose of the present invention is to overcome the defects of the prior art by providing a method that can effectively improve test accuracy and meet high-precision recognition requirements.
To achieve the above object, the present invention provides a sea noise signal recognition method based on a deep neural network. The method builds a DNN deep neural network model and continually trains and updates the weights of every layer of neurons through forward operation and back-propagation, obtaining classification weights that can distinguish different types of sea noise signals and thereby recognizing them. The method uses a deep belief network to pre-train the initial weights of the DNN; the resulting weights serve as the initial weights for deep neural network training, after which the data are trained, so that the different types of sea noise signals are recognized.
Further, the recognition method pre-trains the initial weights of the DNN deep neural network with a deep belief network and uses the resulting weights as the initial weights for deep neural network training. The nonlinear sigmoid excitation function then normalizes the function values; the error function between the actual output and the desired output is computed; gradient descent minimizes this error to obtain an error coefficient for the weights; and the weights are continually updated by summing each weight with this coefficient, finally yielding classification weights that can distinguish the different types of sea noise signals.
Further, the DNN deep neural network has five layers, comprising an input layer, an output layer, and three hidden layers. The input layer, which is the first layer, has 24 neurons; the first hidden layer has 20 neurons, the second hidden layer 16 neurons, and the third hidden layer 8 neurons; the output layer has 4 neurons.
Compared with the prior art, the present invention has the following advantages:
The present invention uses a deep neural network together with a deep belief network algorithm, overcoming the low result precision, the tendency to fall into local optima, and the low convergence efficiency that follow from iterating on random initial weights. Using initial values trained by the deep neural network and deep belief network makes the test results highly accurate and meets high-precision recognition requirements. The initial weights from belief-network training are carried into the deep neural network for training, and the final high-accuracy classification recognizes the different sea noise signals.
Description of the drawings
Fig. 1 is a flow chart of sea noise signal recognition using the present invention.
Detailed description of the embodiments
The present invention is described in detail below in conjunction with the accompanying drawings.
The sea noise signal recognition method of the present invention, based on a deep neural network, applies a deep learning algorithm to learn the classification of several different sea noise signals. A DNN deep neural network model comprising an input layer, three hidden layers, and a corresponding output layer is established and trained on the different types of sea noise signals. First, the corresponding initial weights are pre-trained with a deep belief network. Then, by continually training and updating the weights of every layer of neurons through forward operation and back-propagation, classification weights are obtained that can distinguish the different types of sea noise signals, so that the corresponding signals can finally be told apart and the different types of sea noise signals recognized. As shown in Fig. 1, the specific steps are as follows:
First step: establish a deep neural network with five layers in total, the hidden part being a three-layer structure. The input layer, which is the first layer, has 24 neurons; the first hidden layer has 20 neurons, the second hidden layer 16 neurons, and the third hidden layer 8 neurons; the output layer has 4 neurons. Taking the first neuron of each layer as an example, the forward operation can be expressed as:
Forward linear computation of the first neuron of the first hidden layer:
$net_{11} = \sum_j w_{11j}\,x_{1j}$ (1)
Forward linear computation of the first neuron of the second hidden layer:
$net_{21} = \sum_j w_{21j}\,x_{2j}$ (2)
Forward linear computation of the first neuron of the third hidden layer:
$net_{31} = \sum_j w_{31j}\,x_{3j}$ (3)
Forward linear computation of the first neuron of the fifth-layer output layer:
$net_{41} = \sum_j w_{41j}\,x_{4j}$ (4)
In formula (1), the $x_{1j}$ are the inputs of the input-layer neurons and the $w_{11j}$ the weights of the first neuron of the first hidden layer; the linear operation yields $net_{11}$. In formula (2), the $x_{2j}$ are the results of the first hidden layer's neurons after the linear computation; combining them with the weights $w_{21j}$ of the first neuron of the second hidden layer gives $net_{21}$. In formula (3), the $x_{3j}$ are the corresponding results of the second hidden layer; combining them with the weights $w_{31j}$ of the first neuron of the third hidden layer gives $net_{31}$. In formula (4), the $x_{4j}$ are the corresponding results of the third hidden layer; combining them with the weights $w_{41j}$ of the first neuron of the output layer gives $net_{41}$.
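For illustration, here is a minimal NumPy sketch of this linear forward step for the 24-20-16-8-4 architecture described above. The weight matrices and input vector are random stand-ins for exposition, not values from the patent (which obtains its initial weights from deep belief network pre-training).

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [24, 20, 16, 8, 4]   # input, three hidden layers, output

# One weight matrix per layer transition; random stand-ins for illustration.
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.standard_normal(24)        # one sea noise feature vector (assumed form)

# Linear step per layer: row k of W @ x is net_k for neuron k, so the first
# rows reproduce formulas (1)-(4) for the first neuron of each layer.
# (The sigmoid of the second step, applied between layers, is added in the
# next sketch.)
net = x
for W in weights:
    net = W @ net                  # pre-activation values of this layer
print(net)                         # net values of the 4 output neurons
```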
The first layer is the last layer of the back-propagation process; from the statistics fed back to the first layer it can be seen whether the fed-back results have reached the corresponding unchanging state.
Second step: the values of the actual forward computation need to be normalized. After the activation function f(x), values are limited to the range 0-1, which effectively prevents the model's forward operation from growing without bound. At the same time, it must be avoided that, after the activation function, the weight-change coefficients become so small that the computation stalls during back-propagation. Every layer of the model uses one and the same activation function; the activation function used in the simulations is the sigmoid function
$f(x) = \dfrac{1}{1 + e^{-x}}$
and the activation function makes the linear results of the forward operation nonlinear.
Nonlinearization of the linear result of the first neuron of the first hidden layer:
$o_{11} = f(net_{11})$ (5)
Nonlinearization of the linear result of the first neuron of the second hidden layer:
$o_{21} = f(net_{21})$ (6)
Nonlinearization of the linear result of the first neuron of the third hidden layer:
$o_{31} = f(net_{31})$ (7)
Nonlinearization of the linear result of the first neuron of the fifth layer:
$o_{41} = f(net_{41})$ (8)
Formulas (5) through (8) are the forward-propagation process that produces a result: the linear operation of forward propagation followed by the nonlinear operation of the activation function.
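Extending the previous sketch, a minimal full forward pass with the sigmoid of formulas (5) through (8) applied after each linear step, again with stand-in weights:

```python
import numpy as np

def f(net):
    """Sigmoid activation: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-net))

def forward(x, weights):
    """Forward propagation: linear step, then sigmoid, layer by layer.

    Returns the activated output o = f(net) of every layer; the
    back-propagation step reuses these values.
    """
    outputs = [x]
    for W in weights:
        outputs.append(f(W @ outputs[-1]))   # o = f(net) per layer
    return outputs

rng = np.random.default_rng(0)
layer_sizes = [24, 20, 16, 8, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
outs = forward(rng.standard_normal(24), weights)
print(outs[-1])   # activations of the 4 output neurons, each in (0, 1)
```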
Third step: after forward propagation, back-propagation obtains the weights $w_{ij}$ that minimize the error between the output result and the actual result. Solving the error function between the actual output $o_k$ and the desired output $d_k$ requires the partial derivative $\partial E / \partial w_{ij}$; when the number of layers increases and the neurons are many, solving by partial derivatives directly is infeasible, so the gradient descent algorithm is used. The amount by which a weight changes is obtained as $\Delta w_{ij} = o_j(1 - o_j)(d_j - o_j)\,o_i$, and this weight change realizes the update $w_{ij} = w_{ij} + \Delta w_{ij}$. The method repeats this cycle until the required number of training rounds is reached, and after training is completed, the test set is used to verify the result.
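A sketch of this delta-rule update for the output layer, reusing the forward pass above. The learning rate lr is an illustrative parameter (the formula as written in the text corresponds to lr = 1); the deltas for the hidden layers, which the text does not write out, are propagated backwards in the usual way, as in the sixth step's sketch below.

```python
import numpy as np

def output_layer_update(W, o_prev, o_out, d, lr=1.0):
    """Delta-rule weight update for the output layer.

    delta_j  = o_j * (1 - o_j) * (d_j - o_j)
    dW[j, i] = lr * delta_j * o_prev[i]      (the Δw_ij of the text)
    """
    delta = o_out * (1.0 - o_out) * (d - o_out)   # sigmoid slope times error
    W += lr * np.outer(delta, o_prev)             # w_ij = w_ij + Δw_ij
    return W
```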
Fourth step: carry out deep belief network weight pre-training. Each layer of the network is trained separately and without supervision, ensuring that when feature vectors are mapped into different feature spaces, as much feature information as possible is kept.
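The patent does not spell out the layer-wise pre-training procedure. One common realization, sketched here under that assumption, trains each layer as a restricted Boltzmann machine (RBM) with one step of contrastive divergence (CD-1); biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, rng, lr=0.01):
    """One CD-1 update of an RBM weight matrix W, shape (n_hidden, n_visible).

    v0: a batch of visible vectors, shape (batch, n_visible).
    """
    h0 = sigmoid(v0 @ W.T)                       # hidden probabilities given data
    h0_s = (rng.random(h0.shape) < h0) * 1.0     # sampled hidden states
    v1 = sigmoid(h0_s @ W)                       # reconstruction of the visible layer
    h1 = sigmoid(v1 @ W.T)                       # hidden probabilities given v1
    W += lr * (h0.T @ v0 - h1.T @ v1) / len(v0)  # positive minus negative phase
    return W
```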
Fifth step: a neural network is set at the last layer of the deep belief network; it receives the belief network's output feature vector as its input feature vector and trains an entity-relationship classifier with supervision. Each layer of the network can only guarantee that the weights within its own layer are optimal for that layer's feature-vector mapping, not that the feature-vector mapping of the entire deep belief network is optimal, so a back-propagation network also propagates the error information top-down to every layer, fine-tuning the whole deep belief network. The belief-network training process can be regarded as the initialization of the weight parameters of a deep neural network; using the deep belief network overcomes the disadvantages of a neural network whose weight parameters are randomly initialized, namely falling easily into local optima and requiring long training times.
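A sketch of the hand-off described here, reusing cd1_step and sigmoid from the previous sketch: each layer transition is pre-trained greedily as an RBM on the activations of the layer below, and the resulting weight matrices become the DNN's initial weights for the supervised fine-tuning of the sixth step. The epoch count is an illustrative choice.

```python
import numpy as np

def pretrain_dbn(X, layer_sizes, rng, epochs=50):
    """Greedy layer-wise pre-training; returns initial weights for the DNN.

    X: (batch, 24) training inputs; layer_sizes: e.g. [24, 20, 16, 8, 4].
    """
    weights, v = [], X
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.standard_normal((n_out, n_in)) * 0.01
        for _ in range(epochs):
            W = cd1_step(W, v, rng)   # from the fourth step's sketch
        weights.append(W)
        v = sigmoid(v @ W.T)          # feed activations up to the next RBM
    return weights                    # initial weights for back-propagation
```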
Sixth step: train and test with the initial weights. The deep neural network uses the BP algorithm, beginning with the forward-propagation operation. In general there are an input vector $X = (x_1, x_2, \ldots, x_i, \ldots, x_n)^T$, a hidden-layer output vector $Y = (y_1, y_2, \ldots, y_j, \ldots, y_m)^T$, an output-layer output vector $O = (o_1, o_2, \ldots, o_k, \ldots, o_l)^T$, a desired output vector $d = (d_1, d_2, \ldots, d_k, \ldots, d_l)^T$, and a weight vector $W = (w_{i_1 j_1}, w_{i_2 j_2}, \ldots, w_{i_{n-1} j_{n-1}}, w_{i_n j_n})^T$. The mathematical relationships of forward propagation are:
For the output layer:
$o_k = f(net_k), \quad net_k = \sum_j w_{jk}\,y_j$ (9)
For the hidden layer:
$y_j = f(net_j), \quad net_j = \sum_i w_{ij}\,x_i$ (10)
Formulas (9) and (10) are the mathematical model of forward propagation. When the network output does not equal the desired output, there is an output error E:
$E = \dfrac{1}{2}\sum_k (d_k - o_k)^2$ (11)
Formula (11) shows that the network error depends on the output values, so it is a function that includes the weights $w_{ij}$; adjusting the weights therefore changes the error E, and the gradient descent algorithm is introduced to minimize the error.
Gradient descent steadily reduces the error by adjusting the weights; the weight adjustment is proportional to the negative gradient of the error:
$\Delta w_{ij} = -\eta\,\dfrac{\partial E}{\partial w_{ij}}$ (12)
Formula (12) gives the change $\Delta w_{ij}$, with which $w_{ij}$ is updated, and the update cycle repeats. After each weight update, the forward operation is run again to obtain a new output value, the error against the desired output is computed, the weight change that minimizes the error is found, and the weights are updated for a new round of forward operation. This cycle continually reduces the error between the actual output and the desired output: the forward operation produces an output result, the gradient descent algorithm drives back-propagation towards the function minimum, the corresponding error keeps shrinking, and the continually improving algorithm model finally yields an ideal result.
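Putting the sixth step together, a minimal training-loop sketch under the same assumptions: forward pass per formulas (9) and (10), error per formula (11), and gradient-descent updates per formula (12). The learning rate eta, epoch count, and random data are illustrative, not values from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, D, weights, eta=0.5, epochs=100):
    """Back-propagation training of the whole stack under an MSE loss.

    X: (batch, 24) inputs; D: (batch, 4) desired outputs;
    weights: list of (n_out, n_in) matrices, e.g. from DBN pre-training.
    """
    for _ in range(epochs):
        for x, d in zip(X, D):
            # Forward: o = f(net) at every layer, formulas (9)-(10).
            outs = [x]
            for W in weights:
                outs.append(sigmoid(W @ outs[-1]))
            # Output error E = 1/2 * sum((d - o)^2), formula (11);
            # its gradient gives the output-layer delta.
            delta = (d - outs[-1]) * outs[-1] * (1 - outs[-1])
            # Backward: propagate delta and apply formula (12).
            for i in range(len(weights) - 1, -1, -1):
                grad = np.outer(delta, outs[i])
                if i > 0:   # delta for the layer below, using the old weights
                    delta = (weights[i].T @ delta) * outs[i] * (1 - outs[i])
                weights[i] += eta * grad
    return weights

rng = np.random.default_rng(0)
X = rng.random((32, 24))   # stand-in sea noise feature vectors
D = rng.random((32, 4))    # stand-in desired outputs
init = [rng.standard_normal((m, n)) * 0.1
        for n, m in zip([24, 20, 16, 8], [20, 16, 8, 4])]
print(train(X, D, init)[-1].shape)   # (4, 8): trained output-layer weights
```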
Claims (3)
1. A sea noise signal recognition method based on a deep neural network, the method building a DNN deep neural network model and, through continual training and updating of the weights of every layer of neurons by forward operation and back-propagation, obtaining classification weights that can distinguish different types of sea noise signals, thereby recognizing the different types of sea noise signals; characterized in that: the recognition method pre-trains the initial weights of the DNN deep neural network with a deep belief network, uses the resulting weights as the initial weights for deep neural network training, and then trains on the data, thereby recognizing the different types of sea noise signals.
2. The recognition method according to claim 1, characterized in that: the recognition method pre-trains the initial weights of the DNN deep neural network with a deep belief network and uses the resulting weights as the initial weights for deep neural network training; the nonlinear sigmoid excitation function then normalizes the function values; the error function between the actual output and the desired output is computed; gradient descent minimizes the error to obtain an error coefficient for the weights; and the weights are continually updated by summing each weight with this coefficient, finally yielding classification weights that can distinguish the different types of sea noise signals.
3. The recognition method according to claim 2, characterized in that: the DNN deep neural network has five layers, comprising an input layer, an output layer, and three hidden layers; the input layer is the first layer and has 24 neurons; the first hidden layer has 20 neurons, the second and third hidden layers have 16 and 8 neurons respectively, and the output layer has 4 neurons.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810361731.7A CN108596078A (en) | 2018-04-20 | 2018-04-20 | Sea noise signal recognition method based on a deep neural network
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810361731.7A CN108596078A (en) | 2018-04-20 | 2018-04-20 | Sea noise signal recognition method based on a deep neural network
Publications (1)
Publication Number | Publication Date |
---|---|
CN108596078A true CN108596078A (en) | 2018-09-28 |
Family
ID=63613718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810361731.7A Pending CN108596078A (en) | 2018-04-20 | 2018-04-20 | Sea noise signal recognition method based on a deep neural network
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108596078A (en) |
- 2018-04-20: application CN201810361731.7A filed in CN; published as CN108596078A (en); status: active, Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017158058A1 (en) * | 2016-03-15 | 2017-09-21 | Imra Europe Sas | Method for classification of unique/rare cases by reinforcement learning in neural networks |
CN106845525A (en) * | 2016-12-28 | 2017-06-13 | 上海电机学院 | Deep belief network image classification method based on low-level fusion features
CN106920544A (en) * | 2017-03-17 | 2017-07-04 | 深圳市唯特视科技有限公司 | Speech recognition method based on deep neural network feature training
Non-Patent Citations (2)
Title |
---|
YU PEI ET AL: "Classification of marine noise signals based on DNN (Deep Neural Networks) model", 2017 IEEE 13th International Conference on Electronic Measurement & Instruments *
金海 (Jin Hai): "Audio event detection based on deep neural networks", China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635945A (en) * | 2018-11-21 | 2019-04-16 | 华中科技大学 | Training method of a deep neural network for image classification |
CN109635945B (en) * | 2018-11-21 | 2022-12-02 | 华中科技大学 | Deep neural network training method for image classification |
CN111860273A (en) * | 2020-07-14 | 2020-10-30 | 吉林大学 | Magnetic resonance underground water detection noise suppression method based on convolutional neural network |
WO2022057305A1 (en) * | 2020-09-16 | 2022-03-24 | 南方科技大学 | Signal processing method and apparatus, terminal device and storage medium |
CN113365283A (en) * | 2020-11-16 | 2021-09-07 | 南京航空航天大学 | Unmanned aerial vehicle ad hoc network channel access control method based on flow prediction |
CN113762513A (en) * | 2021-09-09 | 2021-12-07 | 沈阳航空航天大学 | DNA neuron learning method based on DNA strand displacement |
CN113762513B (en) * | 2021-09-09 | 2023-09-29 | 沈阳航空航天大学 | DNA neuron learning method based on DNA strand displacement |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180928 |