CN109547374A - Deep residual network and system for underwater communication modulation recognition - Google Patents
Deep residual network and system for underwater communication modulation recognition
- Publication number
- CN109547374A CN109547374A CN201811403513.1A CN201811403513A CN109547374A CN 109547374 A CN109547374 A CN 109547374A CN 201811403513 A CN201811403513 A CN 201811403513A CN 109547374 A CN109547374 A CN 109547374A
- Authority
- CN
- China
- Prior art keywords
- residual network
- deep residual
- network layer
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L27/00—Modulated-carrier systems
- H04L27/0012—Modulated-carrier systems arrangements for identifying the type of modulation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B13/00—Transmission systems characterised by the medium used for transmission, not provided for in groups H04B3/00 - H04B11/00
- H04B13/02—Transmission systems in which the medium consists of the earth or a large mass of water thereon, e.g. earth telegraphy
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
Abstract
This application discloses a deep residual network and system for underwater communication modulation recognition, comprising: a data preprocessing layer comprising a first deep residual network layer; a data feature extraction layer comprising a second, a third and a fourth deep residual network layer; and a classification output layer comprising a fifth deep residual network layer. Because the deep residual network provided by the present application for underwater communication modulation recognition contains multiple deep residual network layers, and the discriminative capability of these layers over the data increases with depth, the network, when used for underwater communication, improves performance in practical underwater communication: modulation recognition is completed more conveniently and efficiently, and the accuracy of modulation-recognition decisions during underwater communication is improved.
Description
Technical field
This application relates to the field of deep learning, and in particular to a deep residual network and system for underwater communication modulation recognition.
Background art
Underwater wireless communication is considered one of the most challenging wireless communication settings: the characteristics of the underwater wireless channel (such as narrow bandwidth, long delay spread and severe inter-symbol interference) make the communication process exceptionally difficult. These characteristics seriously affect the stability of underwater communication systems and severely hinder high-speed underwater wireless communication.
Modulation recognition plays a decisive role in signal classification within a communication system: at the receiver, correct signal demodulation is built on correct identification of the modulation class. Because of the complexity and instability of underwater wireless communication systems, identifying the correct modulation scheme during practical underwater communication is difficult.
Among recognition methods for modulation schemes, machine learning approaches have stood out and become the most popular processing methods. Deep residual network models, however, require a large amount of training data before use. By learning the distribution of the data set from a large number of samples, a deep residual network can achieve good classification performance. During the training stage the amount of computation is hardly reduced; on the contrary, as the model grows more complex with changing requirements, training consumes more computing resources. Once a model is trained and put to use, however, processing is fast as long as the underlying hardware supports running the trained deep residual network: a trained model does not need to be re-fitted to the data in real time, so it can make decisions quickly using its fixed structure. In practical underwater scenarios, no large amount of data is available for training a deep residual network model; the shortage of data leads to poor training results, and the model may not even be deployable in a real scene.
As can be seen from the above, network depth is a key factor affecting model performance. As the number of layers of a deep network increases, the model can extract features of more complex data sets, and a deeper model should, in theory, achieve better classification. Yet when a deep residual network model is designed too deep for the available training, its performance degrades and it may even fail to train. The main reason is that, when learning the distribution of the data set, the gradients vanish or explode as the network deepens. Because of this gradient problem, when the network becomes too deep without improving its effect, it becomes increasingly difficult to train. In underwater wireless communication, environmental limits, especially the narrow bandwidth, mean that large amounts of data cannot be transmitted. These factors restrict the data sets available for training deep learning models for underwater wireless communication, which do not reach the scale needed for ideal model training. Accurately identifying the modulation scheme of underwater communication therefore becomes a challenging problem.
Summary of the invention
To solve the above technical problem, the present application is realized through the following technical solutions:
In a first aspect, an embodiment of the present application provides a deep residual network for underwater communication modulation recognition, comprising: a data preprocessing layer comprising a first deep residual network layer, the first deep residual network layer being used to preprocess the received data of different modulation schemes; a data feature extraction layer comprising a second deep residual network layer, a third deep residual network layer and a fourth deep residual network layer, whose ability to identify and extract data features increases layer by layer, the data preprocessed by the first deep residual network layer passing in turn through the second, third and fourth deep residual network layers for feature extraction; and a classification output layer comprising a fifth deep residual network layer, the fifth deep residual network layer being used to make a decision based on the feature data extracted by the fourth deep residual network layer and to output the finally identified modulation scheme.
With the above implementation, because the deep residual network provided by the present application for underwater communication modulation recognition contains multiple deep residual network layers whose discriminative capability over the data increases with depth, the network, when used for underwater communication, improves performance in practical underwater communication: modulation recognition is completed more conveniently and efficiently, and the accuracy of modulation-recognition decisions during underwater communication is improved.
With reference to the first aspect, in a first possible implementation of the first aspect, the first deep residual network layer changes the data format of the different-modulation data transmitted over the underwater channel, thereby preprocessing the multi-modulation data.
With reference to the first aspect, in a second possible implementation of the first aspect, the second deep residual network layer comprises: a first deep residual network unit, a second deep residual network unit, a third deep residual network unit and a fourth deep residual network unit. The first deep residual network unit comprises: a first convolutional layer, a first batch normalization layer and a first activation layer. The second deep residual network unit comprises: a first data processing path and a second data processing path, the outputs of which are aggregated. The third deep residual network unit comprises: a first addition layer and a second activation layer. The fourth deep residual network unit comprises a third data processing path and a fourth data processing path, either of which is directly connected to the classification output layer.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the first data processing path consists of a first structural unit, a second structural unit, a third structural unit and a fourth structural unit, wherein: the first structural unit comprises a third convolutional layer, a second batch normalization layer and a third activation layer; the second structural unit comprises a cardinality layer, the cardinality layer being formed by a plurality of parallel data paths each passing through a convolutional layer; the third structural unit comprises a third batch normalization layer and a fourth activation layer; and the fourth structural unit comprises a fourth convolutional layer and a fourth batch normalization layer.
With reference to the first aspect, in a fourth possible implementation of the first aspect, the third deep residual network layer comprises: a fifth deep residual network unit, a sixth deep residual network unit, a seventh deep residual network unit and an eighth deep residual network unit. The fifth deep residual network unit comprises a second addition layer and a fifth activation layer; the eighth deep residual network unit comprises a third addition layer and a sixth activation layer; and the sixth and seventh deep residual network units each comprise two data processing paths.
With reference to the first aspect, in a fifth possible implementation of the first aspect, the fifth deep residual network layer comprises: a fourth addition layer, a seventh activation layer and an output layer.
With reference to any of the second to fifth possible implementations of the first aspect, in a sixth possible implementation of the first aspect, the batch normalization layers are used to alleviate gradient vanishing during backpropagation and thereby mitigate the difficulty of training the deep residual network.
With reference to any of the second to fifth possible implementations of the first aspect, in a seventh possible implementation of the first aspect, the activation layers are used for normalization during data processing.
In a second aspect, an embodiment of the present application provides a system for underwater communication modulation recognition, comprising the deep residual network of the first aspect or of any possible implementation of the first aspect.
Brief description of the drawings
The application is further described with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a deep residual network for underwater communication modulation recognition provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of a specific structure of a deep residual network provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a deep residual network unit containing two data processing paths provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of another deep residual network unit containing two data processing paths provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of the training process of a deep residual network provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of the recognition performance of a deep residual network provided by an embodiment of the present application at a signal-to-noise ratio of -6 dB;
Fig. 7 is a schematic diagram of the recognition performance of a deep residual network provided by an embodiment of the present application at a signal-to-noise ratio of -2 dB;
Fig. 8 is a schematic structural diagram of a system for underwater communication modulation recognition provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
Specific embodiments
To clarify the technical features of the invention, the scheme is explained below with reference to the accompanying drawings and specific embodiments.
Modulation recognition at the receiving end is the premise of signal demodulation. The underwater wireless communication process is mainly affected by the special form of the underwater channel, chiefly Doppler shift, additive noise and multipath effects.
Its form of expression is substantially similar to the general communication model but differs from the basic form in detail; the basic form can be described as

$$y(t)=\int h(\varepsilon;t)\,x(t-\varepsilon)\,d\varepsilon+w(t)$$

where $y(t)$ is the finally received signal, $x(t)$ is the transmitted signal, $h(\varepsilon;t)$ is the response of the channel at time $t$ to an impulse applied at time $(t-\varepsilon)$, $\varepsilon$ is the delay, and $w(t)$ is additive noise, which may have a non-uniform distribution over its spectral components (so-called non-white noise).
The time-varying impulse response in the above formula can further be written as

$$h(\varepsilon;t)=\sum_{n=1}^{N}\beta_n(t)\,\delta(\varepsilon-\varepsilon_n)$$

where $\delta(\varepsilon-\varepsilon_n)$ is the impulse on the $n$-th path, $\varepsilon_n$ is the delay on the $n$-th path, and $\beta_n(t)$ is the possibly time-varying attenuation factor on the $n$-th propagation path ($n=1,2,\dots,N$).
Substituting this into the first formula, the received signal takes the form

$$y(t)=\sum_{n=1}^{N}\beta_n(t)\,x(t-\varepsilon_n)+w(t)$$

that is, the received signal consists of $N$ path components, where the component on each path is attenuated by $\beta_n(t)$ and delayed by $\varepsilon_n$. In wireless communication, to improve the carrying capability of the transmission and the efficiency of the system, M-ary QAM and PSK modulation methods are used. Both modulation modes, and their multi-input forms, can be realized by I/Q quadrature modulation (I stands for in-phase, Q for quadrature). The two signals to be transmitted, $g_1(t)$ and $g_2(t)$, are modulated by the cosine carrier $\cos(\omega_c t)$ and the sine carrier $\sin(\omega_c t)$ respectively, giving the transmitted signal

$$x(t)=g_1(t)\cos(\omega_c t)-g_2(t)\sin(\omega_c t)$$

where $g_1(t)$ is called the I-channel signal, $g_2(t)$ the Q-channel signal, and $\cos(\omega_c t)$ and $\sin(\omega_c t)$ are the carriers with carrier angular frequency $\omega_c$. Taking QPSK as a simple example, Table 1 illustrates how QPSK is realized with I/Q modulation; s1s0 denotes the first two bits of the input data.
Table 1. I/Q modulation for QPSK
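The I/Q modulation and multipath channel equations above can be sketched numerically. This is a minimal illustration, not the patent's implementation: the bit-to-level mapping is an assumed Gray-style mapping (the contents of Table 1 are not reproduced in this text), and the path gains and integer-sample delays are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk_iq(bits):
    """Map bit pairs s1s0 to I/Q levels (assumed mapping: 0 -> +1, 1 -> -1)."""
    s1, s0 = bits[0::2], bits[1::2]
    g1 = 1.0 - 2.0 * s1        # I-channel signal g1(t)
    g2 = 1.0 - 2.0 * s0        # Q-channel signal g2(t)
    return g1, g2

def modulate(g1, g2, wc, t):
    """x(t) = g1(t)cos(wc t) - g2(t)sin(wc t)."""
    return g1 * np.cos(wc * t) - g2 * np.sin(wc * t)

def multipath(x, betas, delays, noise_std=0.01):
    """y(t) = sum_n beta_n x(t - eps_n) + w(t), with integer-sample delays."""
    y = np.zeros_like(x)
    for beta, d in zip(betas, delays):
        y += beta * np.roll(x, d)
    return y + noise_std * rng.standard_normal(x.size)

bits = rng.integers(0, 2, 16)
g1, g2 = qpsk_iq(bits)
samples_per_symbol = 8
g1 = np.repeat(g1, samples_per_symbol)   # hold each symbol level
g2 = np.repeat(g2, samples_per_symbol)
t = np.arange(g1.size)
x = modulate(g1, g2, wc=2 * np.pi / 4, t=t)
y = multipath(x, betas=[1.0, 0.4], delays=[0, 3])
print(x.shape, y.shape)
```

A classifier such as the network described below would then be trained to recover the modulation scheme from received signals like `y`.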
Studies have found that deepening the layers of a deep residual network can improve model accuracy. However, when the number of network layers increases beyond a certain level, both the training accuracy on the training set and the test accuracy on the test set begin to decline, and the model may even fail to train. All of this shows that, as the layers deepen, the network becomes increasingly difficult to train.
As the network deepens, the root cause of the worsening performance is vanishing gradients. A common residual network structure consists of an input layer, some hidden layers (possibly only one) and an output layer, each layer having multiple neurons. According to the backpropagation principle of such a network, the result is first computed by forward propagation, and the error $E$ between the output and the original target value is computed, for example as

$$E=\frac{1}{2}\sum_{k}(d_k-o_k)^2$$

where $d_k$ is the target value and $o_k$ the output of the $k$-th output unit. From this error result, partial derivatives are obtained using the chain rule and the error is propagated backwards, yielding the gradients for weight adjustment. For the backpropagation from the output layer to a hidden layer, the chain rule gives

$$\frac{\partial E}{\partial Hid_1}=\frac{\partial E}{\partial Out_1}\cdot\frac{\partial Out_1}{\partial Hid_1}$$

where $Out_1$ is the first unit of the output layer and $Hid_1$ is the first unit of the hidden layer. Through subsequent iterations of forward and backward propagation, repeatedly matching and adjusting the parameter matrices, the error of the output becomes smaller and smaller, so that the output approaches the target. From this process it can be seen that a residual network must continuously improve the fit of the model through the gradients obtained in backpropagation. When the network is deepened for higher accuracy, the gradient transmitted during backpropagation fades away because of repeated multiplication: the more layers, the more backpropagation steps and the more the gradient decays, until the weights of the front network layers can no longer be adjusted effectively. After deepening the network to improve model accuracy, the vanishing-gradient problem therefore has to be solved before the model can be put to practical use.
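The decay described above can be made concrete with a toy chain of sigmoid layers (an illustration only, not the patent's network): the backpropagated derivative is a product of per-layer factors, each at most 0.25 for a sigmoid, so it shrinks geometrically with depth.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def chain_gradient(depth, w=1.0, z=0.0):
    """d(out)/d(in) for out = sigma(w*sigma(w*...sigma(w*z)...))."""
    grad, a = 1.0, z
    for _ in range(depth):
        pre = w * a
        a = sigmoid(pre)
        grad *= w * a * (1.0 - a)   # chain rule: sigma'(pre) = a*(1-a)
    return grad

g5, g20 = chain_gradient(5), chain_gradient(20)
print(g5, g20)   # the 20-layer gradient is many orders of magnitude smaller
```

Each extra layer multiplies in another factor below 0.25, which is exactly why the front layers of a very deep plain network stop receiving a usable training signal.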
Suppose a relatively shallow network model has reached the limit of its training accuracy and cannot improve further. Now append several identity-mapping layers (layers whose output equals their input) after this network model. Then, even after further increasing the depth of the network, the model at least will not produce additional errors merely because of the added layers. In other words, deepening the network should not increase the training error on the training set. Using such identity mappings to pass the output of an earlier network layer directly to a later network layer is the main idea behind the design of deep residual networks.
To realize accurate identification of underwater communication modulation schemes, the present application provides a deep residual network for underwater communication modulation recognition as shown in Fig. 1. Referring to Fig. 1, the deep residual network 10 provided by the present application comprises: a data preprocessing layer 101, a data feature extraction layer 102 and a classification output layer 103.
The data preprocessing layer 101 comprises the first deep residual network layer, which preprocesses the received data of different modulation schemes. The data feature extraction layer 102 comprises the second, third and fourth deep residual network layers, whose ability to identify and extract data features increases layer by layer; the data preprocessed by the first deep residual network layer passes in turn through the second, third and fourth deep residual network layers for feature extraction. The classification output layer 103 comprises the fifth deep residual network layer, which makes a decision based on the feature data extracted by the fourth deep residual network layer and outputs the finally identified modulation scheme.
The first deep residual network layer changes the data format of the different-modulation data transmitted over the underwater channel, thereby preprocessing the data of the different modulation schemes.
The second deep residual network layer comprises: a first, a second, a third and a fourth deep residual network unit. The first deep residual network unit comprises: a first convolutional layer, a first batch normalization layer and a first activation layer. The second deep residual network unit comprises a first and a second data processing path, whose outputs are aggregated. The third deep residual network unit comprises: a first addition layer and a second activation layer. The fourth deep residual network unit comprises a third and a fourth data processing path, either of which is directly connected to the classification output layer.
The first data processing path consists of a first, a second, a third and a fourth structural unit, wherein: the first structural unit comprises a third convolutional layer, a second batch normalization layer and a third activation layer; the second structural unit comprises a cardinality layer, formed by a plurality of parallel data paths each passing through a convolutional layer; the third structural unit comprises a third batch normalization layer and a fourth activation layer; and the fourth structural unit comprises a fourth convolutional layer and a fourth batch normalization layer.
The third deep residual network layer comprises: a fifth, a sixth, a seventh and an eighth deep residual network unit. The fifth deep residual network unit comprises a second addition layer and a fifth activation layer; the eighth deep residual network unit comprises a third addition layer and a sixth activation layer; and the sixth and seventh deep residual network units each comprise two data processing paths.
The fifth deep residual network layer comprises: a fourth addition layer, a seventh activation layer and an output layer. The main function of the fifth deep residual network layer is to make the final decision after the data has been processed by the preceding layers. The final decision is generated by a fully connected layer followed by a Softmax activation function. After transmission over the underwater channel, the network can thus efficiently identify which modulation scheme a received signal belongs to.
A single fully connected layer forms an effective, classifiable feature set, providing a good classification pre-stage for the final modulation decision. The modulation schemes currently classifiable for underwater communication are mainly MPSK (chiefly BPSK, QPSK, 8PSK, etc.) and MQAM (chiefly 16QAM, etc.), so that the model's application range can later be broadly extended. The fully connected layer in fact further processes the results output by each deep residual network layer, refining the feature data set produced by global average pooling and outputting the final modulation result.
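The classification head just described (global average pooling, one fully connected layer, then Softmax over the modulation classes) can be sketched as follows. The shapes, class list and random weights are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(features, W, b):
    pooled = features.mean(axis=1)          # global average pooling: (B, T, C) -> (B, C)
    return softmax(pooled @ W + b)          # fully connected layer + Softmax

classes = ["BPSK", "QPSK", "8PSK", "16QAM"]
feats = rng.standard_normal((2, 128, 32))   # batch of 2 feature maps from the last unit
W = rng.standard_normal((32, len(classes)))
b = np.zeros(len(classes))
probs = classify(feats, W, b)
print(probs.shape, [classes[i] for i in probs.argmax(axis=1)])
```

Each row of `probs` is a probability distribution over the candidate modulation schemes; the argmax gives the finally identified scheme.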
In the present embodiment, a convolutional layer contains neural units (i.e. data convolution units) in an M*N format, where M is the number of data points along the rows of the analyzed data matrix and N is the number of data points along its columns; the M and N values of the two layers are the same. The batch normalization layers alleviate gradient vanishing during backpropagation and thereby mitigate the difficulty of training the deep residual network, and the activation layers are used for normalization during data processing. Fig. 2 gives a schematic deep residual network of an embodiment of the present application.
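One residual unit of the conv / batch norm / activation / addition kind shown in Fig. 2 can be sketched in numpy. Kernel, channel count and signal length are illustrative assumptions; the point is the structure: a residual branch F(x) plus an identity shortcut, joined by an addition layer and a final activation.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv1d_same(x, kernel):
    """Same-length 1-D convolution applied to each channel (row) of x."""
    pad = len(kernel) // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    return np.stack([np.convolve(row, kernel, mode="valid") for row in xp])

def batch_norm(x, eps=1e-5):
    """Normalize activations to zero mean, unit variance (inference-style)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, kernel):
    f = relu(batch_norm(conv1d_same(x, kernel)))  # residual branch F(x)
    return relu(x + f)                            # addition layer + activation

x = rng.standard_normal((4, 64))                  # 4 channels, 64 samples
y = residual_unit(x, kernel=np.array([0.25, 0.5, 0.25]))
print(y.shape)   # shape preserved, so the shortcut addition is well defined
```

Because the unit preserves the input shape, such units can be stacked freely, which is what allows the layers of Fig. 2 to be chained.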
Although increasing the number of neurons in the deep residual network layers can extract more data features, too many neurons also easily cause overfitting, so that the trained model cannot be used in the real environment. In that case, batch normalization layers need to be added between the deep residual network layers to improve the trained model's generalization ability in practical use. At the same time, a reasonable choice of the activation function of the activation layers is an important parameter for improving the classification effect and preventing overfitting.
During data processing in a deep residual network, the optimization objective changes to approximating an identity mapping rather than a zero mapping. Learning the perturbation from the identity mapping is then easier than relearning a whole mapping function. Replacing the objective of approximating a zero mapping with that of optimizing an identity mapping through deep residual learning therefore improves the classification effect; at the same time, by learning the features of more data sets at greater depth, the recognition effect improves and the trained model generalizes better, so that it can be widely reused in similar tasks.
In the present application, the training data and the test data of the deep residual network model are input separately: first the training data are fed into the residual network, and after training the test data are fed in to measure the accuracy of the network model. This simulates the situation of actual use, improves performance in practical underwater communication, completes underwater communication modulation recognition more conveniently and efficiently, and improves the accuracy of the recognition decision. Because training and testing of the model are completed before actual use, the trained model needs no dynamic parameter adjustment, data processing or corresponding computation during online use: it can directly output a decision for the input data, with the advantages of low delay, good real-time processing and high efficiency.
In the multilayer structure of a deep residual network, each layer can be regarded as a classifier. It is hard to find a physical meaning that exactly explains what the classifier in each layer does; in fact, the various neurons in each layer objectively realize the function of a classifier: they take the feature vectors of the previous layer as input and map them into a new vector space.
In the deep residual network training process of the embodiment of the present application, suppose the input of a section of the residual network is $x$ and the expected output is $H(x)$, the desired result. Learning such a model directly can be difficult. Once the accuracy has saturated (or the error of the lower layers is found to be large), the next learning target becomes the identity mapping: the input $x$ should be close to the output $H(x)$, so that the accuracy of the subsequent levels does not decrease. The basic residual structure connects the input $x$ through a shortcut directly to the output. The output is $H(x)=F(x)+x$; when $F(x)=0$, then $H(x)=x$, which is exactly the identity property described above. The residual network thus changes the learning target: instead of learning the complete output, it learns the difference between the target value $H(x)$ and the input $x$, called the residual function $F(x)=H(x)-x$. The subsequent training objective is to drive this residual toward 0, so that accuracy does not decrease as the network deepens. The residual network does not directly address the induction problems that arise while learning deep models; however, a deeper and narrower network structure (one that does not expand in width) is a good way of transferring parameters and can resolve the gradient problem along the direction of variation.
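The identity $H(x)=F(x)+x$ can be checked numerically. The residual branch F below is an arbitrary small nonlinear map chosen for illustration; the point is that as F is driven toward zero, H reduces to the identity mapping, so extra residual layers cannot make the output worse than the input.

```python
import numpy as np

def H(x, W, scale):
    F = scale * np.tanh(W @ x)   # residual branch F(x)
    return F + x                 # shortcut adds the input directly

rng = np.random.default_rng(3)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)

full = H(x, W, scale=1.0)          # an ordinary residual layer
near_identity = H(x, W, scale=1e-6)  # residual branch driven toward zero
print(np.allclose(near_identity, x, atol=1e-5))   # F -> 0 gives H(x) -> x
```

This is why the training objective can safely be "push the residual toward 0": the worst case is the identity, not a degraded output.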
The residual network borrows the idea of skip-layer connections from the Highway Network architecture, but further improves this network structure. The residual term was originally weighted; here it is replaced by an unweighted identity shortcut. The deep residual learning framework is used to solve the accuracy degradation problem of deep neural networks. In fact, using a residual network (which only reparameterizes the weights) offers no direct advantage in model expressiveness; however, residual networks allow all models to be characterized layer by layer at depth, and they keep the feedforward/backpropagation algorithm very smooth, so that deeper residual network models can be optimized more easily. This residual skip structure breaks the convention of conventional networks that the output of a front layer can only serve as the input of the next layer: the output of a layer can skip several layers and serve directly as the input of a later layer. Its significance is to solve the problem of stacking many layers so that the error rate of the whole learning model does not rise but falls. The number of layers of a residual network can then exceed previous constraints, reaching dozens, hundreds or even thousands of layers, making high-level semantic feature extraction and classification feasible.
In the embodiment of the present application, the basic ResNeXt structure can be composed of two basic modules, as shown in Fig. 3 and Fig. 4 below. The dotted boxes in Fig. 3 and Fig. 4 are cardinality layers; Fig. 3 shows structure type 1 and Fig. 4 shows structure type 2. In the deep residual network involved in this application, these two structures are reused to compose a network with a higher recognition rate, as shown in Fig. 2. The activation function here refers to the activation function used together with the deep residual network.
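A minimal sketch of the aggregated ("cardinality") structure, with hypothetical branch shapes: the cardinality is the number of parallel low-dimensional branches whose outputs are summed before the identity shortcut. This is an illustration of the general ResNeXt idea, not the specific units of Fig. 3 and Fig. 4:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def resnext_block(x, branches):
    """Aggregated residual block: sum the outputs of all parallel
    branches (the cardinality), then add the identity shortcut."""
    f = sum(relu(x @ w1) @ w2 for w1, w2 in branches)
    return relu(x + f)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))
# cardinality 4: four parallel low-dimensional branches (8 -> 2 -> 8)
branches = [(rng.normal(scale=0.1, size=(8, 2)),
             rng.normal(scale=0.1, size=(2, 8))) for _ in range(4)]
y = resnext_block(x, branches)
print(y.shape)  # → (2, 8): the block preserves the feature dimension
```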
In the embodiment of the present application, one recurrent problem in deep learning is that a large-scale training set is needed to train a model that generalizes well. However, the computational cost of a large-scale training set is also high. Feeding the entire data set into the model for training at once usually exceeds the capability of current hardware; as the training set grows to billions of underwater communication samples, a single gradient computation may take a very long time.
To solve the problem that the data set cannot be fed into the model all at once, the data can only be input to the model in batches for training. The training effect of batch input should then be consistent with the result of training on all the data. During training, gradient descent is used to measure the training effect; its core is that the gradient is an average, which requires that the probability distribution of each batch be consistent with the probability distribution of all the data. This means that an estimate can be computed from a small-scale sample of the large-scale data, and the cost function in deep learning algorithms can usually be decomposed into a sum of per-sample cost functions.
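The claim that a small batch estimates the full-data gradient can be checked numerically. The linear model, data shapes and batch size below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 4))          # "large-scale" data set
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

def grad_mse(w, Xb, yb):
    # gradient of the mean squared error over a batch: the per-sample
    # cost decomposes into a sum, so the batch mean is an unbiased
    # estimate of the full-data gradient
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

w0 = np.zeros(4)
full = grad_mse(w0, X, y)                  # gradient over all samples
idx = rng.choice(len(X), size=256, replace=False)
mini = grad_mse(w0, X[idx], y[idx])        # gradient over a small batch
rel_err = np.linalg.norm(mini - full) / np.linalg.norm(full)
print(rel_err < 0.5)  # → True: a few hundred samples suffice
```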
In a specific embodiment, at each step of the algorithm a minibatch of samples is drawn uniformly from the training set. The minibatch size n′ is usually a relatively small number, chosen between one and a few hundred. Importantly, n′ typically stays fixed even as the training-set size n grows: when fitting billions of samples in the total underwater communication data set, each update computation may need only a few hundred samples. However, gradient descent is often considered slow or unreliable, and the optimization algorithm is not guaranteed to reach a local or global minimum within a reasonable time. What is needed is an algorithm that not only accelerates training but also converges toward the global minimum; that algorithm is Adam. Adam is an optimization algorithm that can replace the traditional stochastic gradient descent procedure and iteratively updates the residual network weights according to the training data.
The following procedure describes the Adam gradient descent method, with μ the parameter to be optimized, ζ(μ) the objective function, β1 and β2 the decay rates, and an initial learning rate η. Iterative optimization is then executed over epochs (an epoch is one pass of the complete data set forward through the model and then backward through the model); l denotes the number of epochs passed through the model. The process is as follows:

Compute the gradient of the objective function with respect to the current parameters, σ_l = ∇_μ ζ(μ).

Keep an exponentially decaying mean of past first-order moment estimates of the gradient, k_l = β1·k_{l−1} + (1 − β1)·σ_l, and store an exponentially decaying mean of the squared gradient as the second-order moment estimate, G_l = β2·G_{l−1} + (1 − β2)·σ_l².

Since k_l and G_l are initialized as zero vectors, they are biased toward 0; the bias-corrected estimates k̂_l = k_l/(1 − β1^l) and Ĝ_l = G_l/(1 − β2^l) are therefore computed to offset these biases. The gradient update rule is μ ← μ − η·k̂_l/(√Ĝ_l + ∈), where ∈ is a small positive constant that prevents the denominator from being 0.
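The update rules above can be sketched as follows; the quadratic objective and the hyperparameter values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def adam(grad, mu, steps=1000, eta=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    k = np.zeros_like(mu)                    # first-moment estimate k_l
    G = np.zeros_like(mu)                    # second-moment estimate G_l
    for l in range(1, steps + 1):
        sigma = grad(mu)                     # gradient of the objective
        k = beta1 * k + (1 - beta1) * sigma
        G = beta2 * G + (1 - beta2) * sigma ** 2
        k_hat = k / (1 - beta1 ** l)         # bias correction (k, G start at 0)
        G_hat = G / (1 - beta2 ** l)
        mu = mu - eta * k_hat / (np.sqrt(G_hat) + eps)
    return mu

# minimize zeta(mu) = ||mu - 3||^2, whose gradient is 2(mu - 3)
mu = adam(lambda m: 2.0 * (m - 3.0), np.array([0.0, 10.0]))
print(np.round(mu, 2))  # both components converge near the minimum at 3
```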
In specific embodiments of the present invention, it can be seen that Adam differs from common SGD (stochastic gradient descent). SGD maintains a single learning rate for all weight updates, and the learning rate does not change during training. Adam instead computes first- and second-order moment estimates of the gradient to design independent adaptive learning rates for different parameters, thereby improving the recognition of underwater communication signals. In particular, the sparse signal data obtained from underwater channels with sparse characteristics are well suited to the adaptive way Adam handles sparse data.
When the scheme of the present invention is implemented, the common deep-learning training procedure is to train the model on a training data set and, after training is complete, verify the trained model on a validation data set. In this process, measures must be taken to quantify the training result of the model and verify the validity of the model. The measure used here is the common cross-entropy function. Cross entropy is a cost function that describes the quality of the model through the predicted values on a data set and the true values of that data set. Suppose the output of a neuron used in the deep residual network takes the form γ = δ(y), where δ is the activation function; it takes the form of the sigmoid function, δ(y) = 1/(1 + e^(−y)), with values ranging from 0 to 1. The basic operation of the neuron is y = Σ_i λ_i·s_i + b, where i is the index of the neuron in the corresponding residual network layer and can take any value greater than or equal to 2, λ_i is the weight of the neuron, s_i is the input of the neuron, and b is the bias. The reason cross entropy is suitable as the cost function is that the value it computes is always positive, and the smaller the predicted result value, the more accurate the prediction, which makes it easy to test the final effect of the model.
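A numerical sketch of the sigmoid neuron and cross-entropy cost described above; the weights, inputs and bias are illustrative values, not parameters of the patented network:

```python
import numpy as np

def sigmoid(y):
    # activation delta(y) = 1 / (1 + e^(-y)), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-y))

def cross_entropy(gamma, target):
    # always positive; smaller means the prediction matches the label better
    return float(-np.mean(target * np.log(gamma)
                          + (1 - target) * np.log(1 - gamma)))

# neuron: y = sum_i(lambda_i * s_i) + b, then gamma = sigmoid(y)
lam = np.array([0.8, -0.3])      # weights lambda_i
s = np.array([1.0, 2.0])         # inputs s_i
b = 0.1                          # bias
gamma = sigmoid(lam @ s + b)     # prediction for a true label of 1

loose = cross_entropy(np.array([gamma]), np.array([1.0]))
sharp = cross_entropy(np.array([0.99]), np.array([1.0]))
print(loose > sharp > 0)  # → True: a sharper prediction gives a lower cost
```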
When the scheme is implemented, it can be seen from Fig. 5 that when val_error and training_loss.&.error converge to their minimum, the model achieves the desired result, completing training on the training data set and the validation data set. In the figure, the solid line val_error is the variation of the model verified on the validation data set, and the dotted line training_loss.&.error is the variation of model training on the training data set. The horizontal axis indicates the number of training epochs; the model converges after 15 epochs, showing that the proposed model is effective for modulation recognition in underwater communication. The vertical axis indicates the loss rate, computed with the cross-entropy formula. It can be seen from the figure that convergence is fast when the validation data first enter, and by the fifth epoch the loss has almost converged to the ideal value. Although fluctuation remains, the minimum convergence loss rate around which it oscillates is still highly stable. This further illustrates the efficiency of the model used: very few training epochs suffice to learn the probability distribution of the data in the underwater-communication modulated-signal data set.
In the application embodiment, in Fig. 6 and Fig. 7 the horizontal axis is the model's judgment of how the received signal is modulated, and the vertical axis indicates the true modulation of the data. At SNR = −6 dB (Fig. 6, left), BPSK and 8PSK have high recognition rates. QPSK is at this point easily mistaken for 8PSK, because the two modulation methods are relatively similar. 16QAM is easily mistaken for QPSK or 8PSK, but not for BPSK. In Fig. 7, when the SNR rises to −2 dB, the four modulation modes can be distinguished well. Although a portion of QPSK is still mistaken for 8PSK, the recognition rate of 8PSK is very high. The model performs well at low SNR.
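The confusion matrices of Fig. 6 and Fig. 7 follow the usual construction (rows: true modulation, vertical axis; columns: the model's judgment, horizontal axis). The labels below use this document's four schemes, but the counts are invented for illustration:

```python
import numpy as np

MODS = ["BPSK", "QPSK", "8PSK", "16QAM"]
true_cls = np.array([0, 1, 1, 2, 3, 3])   # true modulation (vertical axis)
pred_cls = np.array([0, 1, 2, 2, 1, 3])   # model judgment (horizontal axis)

cm = np.zeros((4, 4), dtype=int)
for t, p in zip(true_cls, pred_cls):
    cm[t, p] += 1                          # e.g. cm[1, 2]: QPSK judged as 8PSK

accuracy = np.trace(cm) / cm.sum()         # diagonal = correct judgments
print(round(accuracy, 3))  # → 0.667 (4 of 6 correct in this toy example)
```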
The fully connected judgment layer can output a judgment of which modulation scheme is present in the result. In other examples, the fully connected judgment layer can also output whether the modulation scheme requires further judgment; if it is a certain modulation scheme, it should preferably be classified as that modulation scheme.
The fully connected judgment layer can judge the final result by outputting probabilities. In addition, in some other examples, the fully connected judgment layer can also adopt various nonlinear or linear classifiers, such as random forests, decision trees or support vector machines. In some examples the fully connected judgment layer can even use simple numerical methods, such as taking the maximum value or the average value.
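A minimal sketch of probability-based judgment in the fully connected layer; the logit values are hypothetical stand-ins for the layer's raw outputs:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())    # subtract the max for numerical stability
    return e / e.sum()

MODS = ["BPSK", "QPSK", "8PSK", "16QAM"]
logits = np.array([0.2, 2.5, 1.1, -0.3])   # hypothetical layer outputs
probs = softmax(logits)                     # probabilities summing to 1
print(MODS[int(np.argmax(probs))])  # → QPSK (maximum-probability judgment)
```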
As can be seen from the above embodiments, the deep residual network for underwater communication modulation recognition provided in this embodiment includes multiple deep residual network layers, and as the depth of the layers increases, so does their capability to recognize data. Thus, when used for underwater communication, the deep residual network provided by the present application can improve the practical effect of seabed communication, complete modulation recognition for seabed communication more conveniently and efficiently, and improve the accuracy of modulation-recognition judgments in the underwater communication process.
Referring to Fig. 8, an embodiment of the present application further provides a system for underwater communication modulation recognition. The system 20 for underwater communication modulation recognition provided by the embodiment includes the deep residual network 10 provided by the above embodiment; the deep residual network 10 includes a data preprocessing layer 101, a data feature extraction layer 102 and a data classification result output layer 103.
The data preprocessing layer 101 includes a first deep residual network layer, which preprocesses the received data of different modulation schemes. The data feature extraction layer 102 includes a second deep residual network layer, a third deep residual network layer and a fourth deep residual network layer, whose capability to recognize and extract data features increases successively; the data preprocessed by the first deep residual network layer passes through the second, third and fourth deep residual network layers in turn for data feature extraction. The data classification result output layer 103 includes a fifth deep residual network layer, which judges the characteristic data extracted by the fourth deep residual network layer and outputs the finally recognized modulation scheme.
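The data flow through layers 101, 102 and 103 can be sketched generically; the stage functions below are toy stand-ins, not the actual residual network layers:

```python
def recognize(x, preprocess, extractors, classify):
    """Sketch of the layered flow in Fig. 8."""
    x = preprocess(x)            # layer 101: first deep residual network layer
    for extract in extractors:   # layer 102: second, third, fourth layers, in turn
        x = extract(x)
    return classify(x)           # layer 103: fifth layer outputs the scheme

# toy stand-ins: normalize, transform three times, then threshold into a label
result = recognize(
    5.0,
    preprocess=lambda v: v / 10.0,
    extractors=[lambda v: v * 2, lambda v: v + 1, lambda v: v * 3],
    classify=lambda v: "BPSK" if v < 5 else "QPSK",
)
print(result)  # → QPSK (5/10 = 0.5 → 1.0 → 2.0 → 6.0, which is >= 5)
```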
The present invention also provides a terminal. As shown in Fig. 9, the terminal 30 includes a processor 301, a memory 302 and a communication interface 303.
In Fig. 9, the processor 301, the memory 302 and the communication interface 303 can be interconnected by a bus; the bus can be divided into an address bus, a data bus, a control bus, etc. Only one thick line is drawn in Fig. 9 for ease of representation, which does not mean there is only one bus or only one type of bus.
The processor 301 usually controls the overall functions of the terminal 30, such as starting the terminal and, after start-up, training the deep residual network. In addition, the processor 301 can be a general-purpose processor, for example a central processing unit (CPU) or a network processor (NP), or a combination of a CPU and an NP. The processor may also be a microcontroller (MCU), and may further include a hardware chip. The hardware chip can be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof. The PLD can be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), etc.
The memory 302 is configured to store computer-executable instructions to support the operation of the terminal 30 on data. The memory 302 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
After the terminal 30 is started, the processor 301 and the memory 302 are powered on, and the processor 301 reads and executes the computer-executable instructions stored in the memory 302 to complete the training of the above deep residual network.
The communication interface 303 is used by the terminal 30 to transmit data, for example to realize data communication between underwater communication devices. The communication interface 303 includes a wired communication interface and can also include a wireless communication interface. The wired communication interface includes a USB interface or a Micro USB interface, and can also include an Ethernet interface. The wireless communication interface can be a WLAN interface, a cellular network communication interface, or a combination thereof.
In one exemplary embodiment, the terminal 30 provided by the embodiments of the present application further includes a power supply component, which provides power for the various components of the terminal 30. The power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the terminal 30.
A communication component is configured to facilitate wired or wireless communication between the terminal 30 and other devices. The terminal 30 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. The communication component receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. The communication component further includes a near-field communication (NFC) module to promote short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In one exemplary embodiment, the terminal 30 can be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, processors or other electronic components.
The same or similar parts between the embodiments in this specification can be referred to each other. In particular, for the system and terminal embodiments, since the deep residual network therein is substantially similar to the deep residual network embodiment, the description is relatively brief; for related details, refer to the explanation in the deep residual network embodiment.
It should be noted that in this document relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, without necessarily requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements not only includes those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
Of course, the above description is not limited to the examples given; technical features of the application not described here can be realized by or using the prior art and are not detailed further. The above embodiments and drawings are merely intended to illustrate the technical solutions of the application, not to limit it, and the application has been described in detail only with reference to preferred embodiments. Those skilled in the art should appreciate that variations, modifications, additions or substitutions made within the essential scope of the application, without departing from its purpose, shall also belong to the protection scope of the claims of this application.
Claims (9)
1. A deep residual network for underwater communication modulation recognition, characterized by comprising:
a data preprocessing layer, wherein the data preprocessing layer comprises a first deep residual network layer, and the first deep residual network layer is used to preprocess received data of different modulation schemes;
a data feature extraction layer, wherein the data feature extraction layer comprises a second deep residual network layer, a third deep residual network layer and a fourth deep residual network layer, the capability of the second, third and fourth deep residual network layers to recognize and extract data features increases successively, and the data preprocessed by the first deep residual network layer passes through the second, third and fourth deep residual network layers in turn for data feature extraction;
a data classification result output layer, wherein the data classification result output layer comprises a fifth deep residual network layer, and the fifth deep residual network layer is used to judge the characteristic data extracted by the fourth deep residual network layer and output the finally recognized modulation scheme.
2. The deep residual network for underwater communication modulation recognition according to claim 1, characterized in that the first deep residual network layer performs a data format change on the data of different modulation schemes transmitted over underwater communication, realizing the preprocessing of data of multiple modulation schemes.
3. The deep residual network for underwater communication modulation recognition according to claim 1, characterized in that the second deep residual network layer comprises: a first deep residual network unit, a second deep residual network unit, a third deep residual network unit and a fourth deep residual network unit; the first deep residual network unit comprises a first convolutional layer, a first batch normalization layer and a first activation layer; the second deep residual network unit comprises a first data processing path and a second data processing path, the outputs of the first data processing path and the second data processing path being aggregated; the third deep residual network unit comprises a first addition layer and a second activation layer; the fourth deep residual network unit comprises a third data processing path and a fourth data processing path, wherein either of the third data processing path and the fourth data processing path is directly connected to the data classification result output layer.
4. The deep residual network for underwater communication modulation recognition according to claim 3, characterized in that the first data processing path consists of a first structural unit, a second structural unit, a third structural unit and a fourth structural unit, wherein: the first structural unit comprises a third convolutional layer, a second batch normalization layer and a third activation layer; the second structural unit comprises a cardinality layer, the cardinality layer being generated by a plurality of parallel data paths each passing through a convolutional layer; the third structural unit comprises a third batch normalization layer and a fourth activation layer; the fourth structural unit comprises a fourth convolutional layer and a fourth batch normalization layer.
5. The deep residual network for underwater communication modulation recognition according to claim 1, characterized in that the third deep residual network layer comprises: a fifth deep residual network unit, a sixth deep residual network unit, a seventh deep residual network unit and an eighth deep residual network unit; the fifth deep residual network unit comprises a second addition layer and a fifth activation layer; the eighth deep residual network unit comprises a third addition layer and a sixth activation layer; the sixth deep residual network unit and the seventh deep residual network unit each comprise two data processing paths.
6. The deep residual network for underwater communication modulation recognition according to claim 1, characterized in that the fifth deep residual network layer comprises: a fourth addition layer, a seventh activation layer and an output layer.
7. The deep residual network for underwater communication modulation recognition according to any one of claims 3-6, characterized in that the batch normalization layers are used to alleviate gradient vanishing in back-propagation, thereby addressing the problem that the deep residual network is difficult to train.
8. The deep residual network for underwater communication modulation recognition according to any one of claims 3-6, characterized in that the activation layers are used for normalization processing in the data handling procedure.
9. A system for underwater communication modulation recognition, characterized by comprising the deep residual network according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811403513.1A CN109547374B (en) | 2018-11-23 | 2018-11-23 | Depth residual error network and system for underwater communication modulation recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811403513.1A CN109547374B (en) | 2018-11-23 | 2018-11-23 | Depth residual error network and system for underwater communication modulation recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109547374A true CN109547374A (en) | 2019-03-29 |
CN109547374B CN109547374B (en) | 2021-11-23 |
Family
ID=65849590
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811403513.1A Active CN109547374B (en) | 2018-11-23 | 2018-11-23 | Depth residual error network and system for underwater communication modulation recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109547374B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503185A (en) * | 2019-07-18 | 2019-11-26 | 电子科技大学 | A kind of improved depth modulation identification network model |
CN112307987A (en) * | 2020-11-03 | 2021-02-02 | 泰山学院 | Method for identifying communication signal based on deep hybrid routing network |
CN113285762A (en) * | 2021-02-25 | 2021-08-20 | 广西师范大学 | Modulation format identification method based on relative entropy calculation |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140122398A1 (en) * | 2012-10-25 | 2014-05-01 | Brain Corporation | Modulated plasticity apparatus and methods for spiking neuron network |
CN106529428A (en) * | 2016-10-31 | 2017-03-22 | 西北工业大学 | Underwater target recognition method based on deep learning |
WO2017087119A2 (en) * | 2015-11-20 | 2017-05-26 | U.S. Trading Partners, Inc. | Underwater positioning system for scuba divers |
CN107609488A (en) * | 2017-08-21 | 2018-01-19 | 哈尔滨工程大学 | A kind of ship noise method for identifying and classifying based on depth convolutional network |
US20180129906A1 (en) * | 2016-11-07 | 2018-05-10 | Qualcomm Incorporated | Deep cross-correlation learning for object tracking |
CN108038471A (en) * | 2017-12-27 | 2018-05-15 | 哈尔滨工程大学 | A kind of underwater sound communication signal type Identification method based on depth learning technology |
CN108616470A (en) * | 2018-03-26 | 2018-10-02 | 天津大学 | Modulation Signals Recognition method based on convolutional neural networks |
CN109299697A (en) * | 2018-09-30 | 2019-02-01 | 泰山学院 | Deep neural network system and method based on underwater sound communication Modulation Mode Recognition |
CN111970218A (en) * | 2020-08-27 | 2020-11-20 | 泰山学院 | Method for carrying out communication automatic modulation recognition based on deep multi-hop network |
- 2018-11-23: CN 201811403513.1A patent/CN109547374B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140122398A1 (en) * | 2012-10-25 | 2014-05-01 | Brain Corporation | Modulated plasticity apparatus and methods for spiking neuron network |
WO2017087119A2 (en) * | 2015-11-20 | 2017-05-26 | U.S. Trading Partners, Inc. | Underwater positioning system for scuba divers |
CN106529428A (en) * | 2016-10-31 | 2017-03-22 | 西北工业大学 | Underwater target recognition method based on deep learning |
US20180129906A1 (en) * | 2016-11-07 | 2018-05-10 | Qualcomm Incorporated | Deep cross-correlation learning for object tracking |
CN107609488A (en) * | 2017-08-21 | 2018-01-19 | 哈尔滨工程大学 | A kind of ship noise method for identifying and classifying based on depth convolutional network |
CN108038471A (en) * | 2017-12-27 | 2018-05-15 | 哈尔滨工程大学 | A kind of underwater sound communication signal type Identification method based on depth learning technology |
CN108616470A (en) * | 2018-03-26 | 2018-10-02 | 天津大学 | Modulation Signals Recognition method based on convolutional neural networks |
CN109299697A (en) * | 2018-09-30 | 2019-02-01 | 泰山学院 | Deep neural network system and method based on underwater sound communication Modulation Mode Recognition |
CN111970218A (en) * | 2020-08-27 | 2020-11-20 | 泰山学院 | Method for carrying out communication automatic modulation recognition based on deep multi-hop network |
Non-Patent Citations (2)
Title |
---|
WENHAN YANG: "Video super-resolution based on spatial-temporal recurrent residual networks", 《COMPUTER VISION AND IMAGE UNDERSTANDING》 * |
YUN LIN: "The application of deep learning in communication signal modulation recognition", 《IEEE》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503185A (en) * | 2019-07-18 | 2019-11-26 | 电子科技大学 | A kind of improved depth modulation identification network model |
CN110503185B (en) * | 2019-07-18 | 2023-04-07 | 电子科技大学 | Improved deep modulation recognition network model |
CN112307987A (en) * | 2020-11-03 | 2021-02-02 | 泰山学院 | Method for identifying communication signal based on deep hybrid routing network |
CN112307987B (en) * | 2020-11-03 | 2021-06-29 | 泰山学院 | Method for identifying communication signal based on deep hybrid routing network |
CN113285762A (en) * | 2021-02-25 | 2021-08-20 | 广西师范大学 | Modulation format identification method based on relative entropy calculation |
CN113285762B (en) * | 2021-02-25 | 2022-08-05 | 广西师范大学 | Modulation format identification method based on relative entropy calculation |
Also Published As
Publication number | Publication date |
---|---|
CN109547374B (en) | 2021-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230299951A1 (en) | Quantum neural network | |
Mannanuddin et al. | Confluence of Machine Learning with Edge Computing for IoT Accession | |
CN109547374A (en) | A kind of depth residual error network and system for subsurface communication Modulation Identification | |
CN109361635A (en) | Subsurface communication Modulation Mode Recognition method and system based on depth residual error network | |
EP3635637A1 (en) | Pre-training system for self-learning agent in virtualized environment | |
CN109274621A (en) | Communication protocol signals recognition methods based on depth residual error network | |
Campbell et al. | The explosion of artificial intelligence in antennas and propagation: How deep learning is advancing our state of the art | |
Lin et al. | Machine learning templates for QCD factorization in the search for physics beyond the standard model | |
CN108345904A (en) | A kind of Ensemble Learning Algorithms of the unbalanced data based on the sampling of random susceptibility | |
CN110309854A (en) | A kind of signal modulation mode recognition methods and device | |
Wu et al. | Automatic modulation classification based on deep learning for software‐defined radio | |
CN109120435A (en) | Network link quality prediction technique, device and readable storage medium storing program for executing | |
CN110414627A (en) | A kind of training method and relevant device of model | |
CN109462564A (en) | Subsurface communication Modulation Mode Recognition method and system based on deep neural network | |
CN109543818A (en) | A kind of link evaluation method and system based on deep learning model | |
CN111766635A (en) | Sand body communication degree analysis method and system | |
CN113114673A (en) | Network intrusion detection method and system based on generation countermeasure network | |
CN113139570A (en) | Dam safety monitoring data completion method based on optimal hybrid valuation | |
Li et al. | The use of nonlinear dynamic system and deep learning in production condition monitoring and product quality prediction | |
CN111522736A (en) | Software defect prediction method and device, electronic equipment and computer storage medium | |
Stylianopoulos et al. | Deep-learning-assisted configuration of reconfigurable intelligent surfaces in dynamic rich-scattering environments | |
CN103149878A (en) | Self-adaptive learning system of numerical control machine fault diagnosis system in multi-agent structure | |
CN104092503A (en) | Artificial neural network spectrum sensing method based on wolf pack optimization | |
Sun et al. | A compound structure for wind speed forecasting using MKLSSVM with feature selection and parameter optimization | |
CN107765259A (en) | A kind of transmission line of electricity laser ranging Signal denoising algorithm that threshold value is improved based on Lifting Wavelet |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Wang Yan; Xiao Jing; Chen Jun; Zhang Lian; Yang Hongfang; Cui Xiangxia
Inventor before: Wang Yan; Chen Jun; Cui Xiangxia
GR01 | Patent grant | ||
GR01 | Patent grant |