CN112733811B - Method for identifying underwater sound signal modulation modes based on improved dense neural network - Google Patents

Method for identifying underwater sound signal modulation modes based on improved dense neural network

Info

Publication number
CN112733811B
CN112733811B (application CN202110214167.8A)
Authority
CN
China
Prior art keywords
neural network
spectrum
underwater sound
entropy
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110214167.8A
Other languages
Chinese (zh)
Other versions
CN112733811A (en)
Inventor
王景景
黄子豪
张威龙
闫正强
杨星海
施威
刘世萱
李海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University of Science and Technology
Original Assignee
Qingdao University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University of Science and Technology
Publication of CN112733811A
Application granted granted Critical
Publication of CN112733811B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses an inter-class recognition method for underwater acoustic signal modulation modes based on an improved dense neural network. The method first receives the underwater acoustic modulated signal to be recognized and extracts its features; the features are then reduced in dimension and denoised by principal component analysis, and subsequently normalized and reshaped; a pooling layer is removed from the dense neural network to obtain an improved dense neural network, which is trained; finally, the processed features are input into the trained improved dense neural network to complete recognition among modulation modes. The invention achieves low-delay, high-accuracy recognition of underwater acoustic signal modulation modes, with strong anti-interference capability, low computational cost and high recognition accuracy.

Description

Method for identifying underwater sound signal modulation modes based on improved dense neural network
Technical Field
The invention belongs to the technical field of underwater acoustic communication, and particularly relates to an inter-class recognition method of an underwater acoustic signal modulation mode based on an improved dense neural network.
Background
Underwater wireless data transmission is a key technology for China's marine development. Thanks to its small propagation loss and long transmission distance, underwater acoustic communication has become the most widely used underwater communication mode. Adaptive modulation and coding (AMC), which selects the modulation mode according to channel conditions, is now widely used in underwater acoustic communication systems. AMC requires the two communicating parties to agree on a modulation mode through multiple handshakes, but the complex underwater channel environment may corrupt the handshake signal, so the receiving end adopts a mismatched demodulation mode and the demodulated data contain serious errors.
Intelligent recognition of the modulation mode lets the receiving end automatically identify the modulation mode of the received signal and ensures that the correct demodulation mode is used. Current intelligent modulation recognition methods fall into three groups: recognition based on maximum-likelihood-ratio hypothesis testing, recognition based on feature extraction, and recognition based on neural network models. Methods based on maximum-likelihood-ratio hypothesis testing require prior information about the signal and are computationally complex, so they are unsuitable for practical application. Traditional feature-extraction methods are conceptually simple and easy to engineer, but they depend heavily on the quality of the extracted features; in complex and changeable underwater channels the signal features are severely corrupted by noise and are difficult to use for modulation recognition. Among neural-network-based methods, the models with better recognition performance are mainly convolutional neural networks, yet most work adopts existing network models with relatively little targeted improvement of the model itself. In addition, training a deep neural network requires a large number of samples, while underwater acoustic communication data in China are scarce and costly to acquire, and the available amount of underwater acoustic data is insufficient to support such training.
Disclosure of Invention
Aiming at the technical problems of poor anti-interference capability, high computational cost and low recognition accuracy in existing methods for inter-class recognition of underwater acoustic signal modulation modes, the invention provides an inter-class recognition method for underwater acoustic signal modulation modes based on an improved dense neural network to solve these problems.
In order to achieve the aim of the invention, the invention is realized by adopting the following technical scheme:
an inter-class recognition method for an underwater sound signal modulation mode based on an improved dense neural network comprises the following steps:
s1, receiving an underwater sound modulation signal to be identified, and extracting the characteristic of the underwater sound modulation signal;
s2: performing dimension reduction and denoising on the underwater sound modulation signal features extracted in S1 by using principal component analysis (PCA);
s3: normalizing and dimension changing are carried out on the characteristic of the underwater sound modulation signal processed in the step S2;
s4: removing a pooling layer based on the dense neural network to obtain an improved dense neural network, and training the neural network;
s5: and (3) inputting the characteristic of the underwater acoustic modulation signal processed in the step (S3) into the improved dense neural network trained in the step (S4), and finally completing the recognition between modulation modes.
Further, the extracting the characteristic of the underwater sound modulation signal in the step S1 specifically includes:
s1-1: firstly, calculating a power spectrum, a singular spectrum, an envelope spectrum, a frequency spectrum and a phase spectrum of the underwater sound modulation signal;
s1-2: calculating spectral features and entropy features;
the spectral features are: maximum value gamma of Q parameter, power spectrum peak number, R parameter and zero center normalized instantaneous amplitude spectrum density max
The entropy features: the method comprises the steps of power spectrum shannon entropy, power spectrum index entropy, singular spectrum shannon entropy, singular spectrum index entropy, spectrum amplitude shannon entropy, spectrum amplitude index entropy, phase spectrum shannon entropy and phase spectrum index entropy.
Wherein, the physical meaning and specific formula of the Q parameter are:
Q=V/μ
where V and μ represent the variance and mean of the signal, respectively.
The physical meaning of the peak number of the power spectrum is the peak number of the signal power spectrum.
Wherein, the physical meaning and the calculation formula of the R parameter are as follows:
R = σ²/μ²
where σ² and μ denote the variance and the mean of the squared signal envelope, respectively; the R parameter reflects the degree of variation of the envelope spectrum.
The maximum value γmax of the zero-center normalized instantaneous amplitude spectral density is computed as: γmax = max{|DFT(a_cn(n))|²}/N
where N is the number of sampling points and a_cn(n) is the zero-center normalized instantaneous amplitude:
a_cn(n) = a_n(n) - 1
where a_n(n) = a(n)/m_a and m_a is the mean of the instantaneous amplitude a(n).
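For illustration only, the following Python sketch shows one way the Q parameter, R parameter and γmax could be computed for a sampled signal; here the variance and mean in the Q parameter are taken over the instantaneous amplitude, which is one reading of the definition above, and the helper names are illustrative, not part of the invention.

```python
import numpy as np

def q_parameter(x):
    """Q = V / mu: variance over mean, here taken of the instantaneous amplitude."""
    a = np.abs(x)
    return np.var(a) / np.mean(a)

def r_parameter(x):
    """R = sigma^2 / mu^2 of the squared envelope; reflects envelope variation."""
    env2 = np.abs(x) ** 2
    return np.var(env2) / np.mean(env2) ** 2

def gamma_max(x):
    """Maximum of the zero-center normalized instantaneous amplitude spectral density."""
    a = np.abs(x)
    a_n = a / np.mean(a)          # normalized instantaneous amplitude a_n(n)
    a_cn = a_n - 1.0              # zero-center normalization a_cn(n)
    N = len(x)
    return np.max(np.abs(np.fft.fft(a_cn)) ** 2) / N

# Example on a toy BPSK-like waveform
rng = np.random.default_rng(0)
x = np.sign(rng.standard_normal(1024)) + 0.1 * rng.standard_normal(1024)
print(q_parameter(x), r_parameter(x), gamma_max(x))
```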
The power spectrum Shannon entropy and power spectrum exponential entropy are calculated as:
Power spectrum Shannon entropy: H = -Σ p_i·ln(p_i) (i = 1, 2, ..., K)
Power spectrum exponential entropy: H = Σ p_i·e^(1-p_i) (i = 1, 2, ..., K)
where p_i is the normalized weight of the i-th point of the signal power spectrum and K is the number of points of the power spectrum.
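A minimal Python sketch of the two entropy measures is given below for reference; the same helper can also be applied to the singular value spectrum, the amplitude-frequency curve and the phase-frequency curve described later. The function name and the small stabilizing constant are assumptions, not part of the invention.

```python
import numpy as np

def spectrum_entropies(values, eps=1e-12):
    """Return (Shannon entropy, exponential entropy) of a non-negative spectrum."""
    v = np.asarray(values, dtype=float)
    p = v / (np.sum(v) + eps)                  # weight p_i of each spectral point
    shannon = -np.sum(p * np.log(p + eps))     # -sum p_i ln p_i
    exponential = np.sum(p * np.exp(1.0 - p))  # sum p_i e^(1 - p_i)
    return shannon, exponential

# Example: power spectrum of a noisy tone
rng = np.random.default_rng(1)
x = np.cos(2 * np.pi * 0.05 * np.arange(2048)) + 0.5 * rng.standard_normal(2048)
power_spectrum = np.abs(np.fft.rfft(x)) ** 2
print(spectrum_entropies(power_spectrum))
```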
The singular spectrum Shannon entropy and singular spectrum exponential entropy are calculated as follows:
Embedding the discrete underwater acoustic sampled signal with embedding dimension m and delay time n yields the reconstructed phase-space (trajectory) matrix A, whose i-th row is [x(i), x(i+n), ..., x(i+(m-1)n)].
Singular value decomposition of this matrix gives: A = U·Q·V^T
where the matrix Q is diagonal and the singular values on its diagonal form the singular value spectrum σ = {σ_1, σ_2, ..., σ_j}, j ≤ K. Defining the weight p_i as the normalized singular value, p_i = σ_i/(σ_1 + σ_2 + ... + σ_j), the singular spectrum Shannon entropy and exponential entropy are obtained as:
Singular spectrum Shannon entropy: H = -Σ p_i·ln(p_i) (i = 1, 2, ..., j)
Singular spectrum exponential entropy: H = Σ p_i·e^(1-p_i) (i = 1, 2, ..., j)
the calculation formula of the spectrum amplitude shannon entropy and the spectrum amplitude index entropy is as follows:
spectral amplitude shannon entropy:
Figure BDA0002953321270000034
spectral amplitude exponential entropy:
Figure BDA0002953321270000035
in p i The weight of each point in the signal amplitude-frequency response curve is K, which is the number of points of the amplitude-frequency response curve.
The phase spectrum Shannon entropy and phase spectrum exponential entropy are calculated as:
Phase spectrum Shannon entropy: H = -Σ p_i·ln(p_i) (i = 1, 2, ..., K)
Phase spectrum exponential entropy: H = Σ p_i·e^(1-p_i) (i = 1, 2, ..., K)
where p_i is the normalized weight of the i-th point of the signal phase-frequency response curve and K is the number of points of the phase-frequency response curve.
Further, the principal component analysis method in S2 specifically includes the following steps:
s2-1: Suppose the k-th feature of the underwater acoustic signal is X_k = (x_1, x_2, ..., x_n)^T. The covariance matrix of the K features is
S = (1/K)·Σ (X_k - X̄)(X_k - X̄)^T (k = 1, 2, ..., K)
where K is the total number of features and
X̄ = (1/K)·Σ X_k (k = 1, 2, ..., K)
is the mean of all feature vectors;
s2-2: Calculate the eigenvalues λ_1, λ_2, λ_3, ..., λ_n of the matrix S and the corresponding eigenvectors α_1, α_2, α_3, ..., α_n, and arrange the eigenvalues in descending order;
s2-3: Calculate the cumulative variance contribution rate of the first m principal components:
η = (λ_1 + λ_2 + ... + λ_m) / (λ_1 + λ_2 + ... + λ_n)
s2-4: Select the first m principal components of X_k whose cumulative variance contribution rate reaches 90%, and apply the linear transformation Y_k = A^T·X_k, where A is the matrix of eigenvectors corresponding to the first m eigenvalues and Y_k is the final feature vector after PCA processing.
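As a reference, a possible realization of this PCA step is sketched below using scikit-learn; the 90% cumulative variance contribution is passed directly as the n_components argument, and the array shapes are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(features, contribution=0.90):
    # A float n_components keeps the smallest number of components whose
    # cumulative explained-variance ratio reaches the given contribution (90%).
    pca = PCA(n_components=contribution)
    reduced = pca.fit_transform(features)
    return reduced, pca

rng = np.random.default_rng(3)
raw = rng.standard_normal((200, 12))          # e.g. 200 samples x 12 features
reduced, pca = pca_reduce(raw)
print(raw.shape, "->", reduced.shape, pca.explained_variance_ratio_.cumsum()[-1])
```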
Further, in the step S3, normalization and dimension change are performed on the extracted features;
the feature normalization formula is:
Figure BDA00029533212700000311
x is original characteristic data, X' is normalized data, and Max and Min are respectively the maximum value and the minimum value of the characteristic data;
the dimension change is specifically as follows:
the original n×m-dimensional feature data is converted into n×m×1-dimensional feature data suitable for a dense neural network, and the feature number of the feature data is not changed but is logically changed from two dimensions to three dimensions.
Further, in the step S4, the dense neural network training step includes:
s4-1: improving the dense neural network by removing the pooling layers used to simplify features from the original dense neural network, thereby avoiding the loss of important features caused by pooling;
s4-2: pre-training the neural network by using the existing sea test real data and simulation data and using a greedy algorithm;
s4-3: and performing migration learning on the pre-trained neural network in the target sea area to obtain a neural network model suitable for the target sea area.
Furthermore, the greedy algorithm can obtain a global optimal solution through a series of local optimal solutions, and the greedy algorithm is used for training the network as follows:
(1) Training the first layer neural network independently until a given accuracy is reached;
(2) The first layer network data is reserved, and the second layer network is independently trained until the given precision is achieved;
(3) Repeating the above process until the whole neural network training is completed.
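The greedy layer-wise training strategy can be sketched as follows; the toy network, data shapes and the fixed number of epochs (standing in for "until a given accuracy") are assumptions for illustration and not the patented model.

```python
import torch
import torch.nn as nn

def greedy_pretrain(blocks, data, labels, num_classes, epochs=30, lr=1e-3):
    """Train the blocks one at a time; earlier blocks stay frozen (greedy strategy)."""
    trained = []
    for block in blocks:
        backbone = nn.Sequential(*trained, block, nn.Flatten())
        with torch.no_grad():
            feat_dim = backbone(data[:1]).shape[1]     # flattened feature size
        model = nn.Sequential(backbone, nn.Linear(feat_dim, num_classes))
        for b in trained:                              # keep previously trained layers fixed
            for p in b.parameters():
                p.requires_grad = False
        opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):                        # "until a given accuracy", simplified
            opt.zero_grad()
            loss_fn(model(data), labels).backward()
            opt.step()
        trained.append(block)
    return nn.Sequential(*trained)

# Toy usage: two 1-D convolutional blocks on 7-dimensional feature vectors, 7 classes
blocks = [nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU()),
          nn.Sequential(nn.Conv1d(8, 16, 3, padding=1), nn.ReLU())]
x = torch.randn(64, 1, 7)
y = torch.randint(0, 7, (64,))
pretrained_backbone = greedy_pretrain(blocks, x, y, num_classes=7)
```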
Further, the method for migration learning comprises the following steps:
and (3) keeping the weights of the convolutional layers of the pre-trained improved dense neural network unchanged, deploying the network in the target sea area, and fine-tuning the fully connected layers of the neural network on the actual underwater acoustic signals.
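A possible realization of this transfer-learning step is sketched below: the convolutional weights are frozen and only the fully connected layers are updated on target-sea-area data. The model structure shown is a placeholder, not the patented network.

```python
import torch
import torch.nn as nn

def finetune_on_target(model, target_data, target_labels, epochs=20, lr=1e-4):
    """Freeze all convolutional layers; re-train only the fully connected part."""
    for module in model.modules():
        if isinstance(module, (nn.Conv1d, nn.Conv2d)):
            for p in module.parameters():
                p.requires_grad = False
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(target_data), target_labels).backward()
        opt.step()
    return model

# Toy usage with a small convolutional backbone and a fully connected classifier
model = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 7, 7))
x_target = torch.randn(32, 1, 7)
y_target = torch.randint(0, 7, (32,))
finetune_on_target(model, x_target, y_target)
```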
Compared with the prior art, the invention has the advantages and positive effects that:
according to the method for identifying the underwater sound signal modulation modes, firstly, the spectral features and the entropy features insensitive to noise are selected and extracted, so that the robustness of the features is improved; secondly, performing dimension reduction optimization on the features by using a principal component analysis method, and realizing certain signal-to-noise separation while reducing the calculation cost; finally, the problem of insufficient underwater acoustic signal training samples is solved to a great extent by using transfer learning, the accuracy of the neural network classifier is ensured, and finally, the recognition between underwater acoustic signal modulation modes with low delay and high accuracy is realized.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
FIG. 2 is a model diagram of an improved dense neural network in an embodiment of the invention.
FIG. 3 is a flow chart of a pre-training network using a greedy algorithm in an embodiment of the invention.
Fig. 4 is a flow chart of training a network using transfer learning in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples.
Example 1
In an underwater acoustic adaptive modulation and coding communication system, the transmitting end and the receiving end usually agree on a modulation mode through handshake signals. However, the underwater acoustic channel is complex and changeable, and handshake signals are prone to errors. With an intelligent modulation mode recognition method, the receiving end can automatically identify the modulation mode of the received signal and ensure that the data are demodulated correctly.
The embodiment is a rapid and accurate underwater sound signal modulation mode inter-class identification method based on an improved dense neural network, and after receiving an underwater sound modulation signal, the method comprises the following parts, as shown in fig. 1:
s1, extracting and processing the modulated signal characteristics, which comprises the following steps:
s11, obtaining a power spectrum, a singular spectrum, an envelope spectrum, a frequency spectrum and a phase spectrum of a modulation signal;
s12, calculating spectral features and entropy features of the signals;
in the embodiment, the spectral characteristics and the entropy characteristics with strong noise immunity are selected as the characteristics for identifying modulation modes; the spectral features and entropy features of the signal specifically include:
the spectral features are: maximum value gamma of Q parameter, power spectrum peak number, R parameter and zero center normalized instantaneous amplitude spectrum density max
The entropy features: the method comprises the steps of power spectrum shannon entropy, power spectrum index entropy, singular spectrum shannon entropy, singular spectrum index entropy, spectrum amplitude shannon entropy, spectrum amplitude index entropy, phase spectrum shannon entropy and phase spectrum index entropy.
Wherein, the physical meaning and specific formula of the Q parameter are:
Q=V/μ
where V and μ represent the variance and mean of the signal, respectively.
The physical meaning of the peak number of the power spectrum is the peak number of the signal power spectrum.
Wherein, the physical meaning and the calculation formula of the R parameter are as follows:
R = σ²/μ²
where σ² and μ denote the variance and the mean of the squared signal envelope, respectively; the R parameter reflects the degree of variation of the envelope spectrum.
The maximum value γmax of the zero-center normalized instantaneous amplitude spectral density is computed as: γmax = max{|DFT(a_cn(n))|²}/N
where N is the number of sampling points and a_cn(n) is the zero-center normalized instantaneous amplitude:
a_cn(n) = a_n(n) - 1
where a_n(n) = a(n)/m_a and m_a is the mean of the instantaneous amplitude a(n).
The power spectrum Shannon entropy and power spectrum exponential entropy are calculated as:
Power spectrum Shannon entropy: H = -Σ p_i·ln(p_i) (i = 1, 2, ..., K)
Power spectrum exponential entropy: H = Σ p_i·e^(1-p_i) (i = 1, 2, ..., K)
where p_i is the normalized weight of the i-th point of the signal power spectrum and K is the number of points of the power spectrum.
The singular spectrum entropies are calculated as follows:
Embedding the discrete underwater acoustic sampled signal with embedding dimension m and delay time n yields the reconstructed phase-space (trajectory) matrix A, whose i-th row is [x(i), x(i+n), ..., x(i+(m-1)n)].
Singular value decomposition of this matrix gives: A = U·Q·V^T
where the matrix Q is diagonal and the singular values on its diagonal form the singular value spectrum σ = {σ_1, σ_2, ..., σ_j}, j ≤ K. Defining the weight p_i as the normalized singular value, p_i = σ_i/(σ_1 + σ_2 + ... + σ_j), the singular spectrum Shannon entropy and exponential entropy are obtained as:
Singular spectrum Shannon entropy: H = -Σ p_i·ln(p_i) (i = 1, 2, ..., j)
Singular spectrum exponential entropy: H = Σ p_i·e^(1-p_i) (i = 1, 2, ..., j)
the calculation formula of the spectrum amplitude shannon entropy and the spectrum amplitude index entropy is as follows:
spectral amplitude shannon entropy:
Figure BDA0002953321270000064
spectral amplitude exponential entropy:
Figure BDA0002953321270000065
in p i The weight of each point in the signal amplitude-frequency response curve is K, which is the number of points of the amplitude-frequency response curve.
The phase spectrum Shannon entropy and phase spectrum exponential entropy are calculated as:
Phase spectrum Shannon entropy: H = -Σ p_i·ln(p_i) (i = 1, 2, ..., K)
Phase spectrum exponential entropy: H = Σ p_i·e^(1-p_i) (i = 1, 2, ..., K)
where p_i is the normalized weight of the i-th point of the signal phase-frequency response curve and K is the number of points of the phase-frequency response curve.
S13, performing dimension reduction and noise removal on the extracted features by using a principal component analysis method;
the principal component analysis method comprises the following specific steps:
s131: Suppose the k-th feature of the underwater acoustic signal is X_k = (x_1, x_2, ..., x_n)^T. The covariance matrix of the K features is
S = (1/K)·Σ (X_k - X̄)(X_k - X̄)^T (k = 1, 2, ..., K)
where K is the total number of features and
X̄ = (1/K)·Σ X_k (k = 1, 2, ..., K)
is the mean of all feature vectors;
s132: Calculate the eigenvalues λ_1, λ_2, λ_3, ..., λ_n of the matrix S and the corresponding eigenvectors α_1, α_2, α_3, ..., α_n, and arrange the eigenvalues in descending order;
s133: Calculate the cumulative variance contribution rate of the first m principal components:
η = (λ_1 + λ_2 + ... + λ_m) / (λ_1 + λ_2 + ... + λ_n)
s134: Select the first m principal components of X_k whose cumulative variance contribution rate reaches 90%, and apply the linear transformation Y_k = A^T·X_k, where A is the matrix of eigenvectors corresponding to the first m eigenvalues and Y_k is the final feature vector after PCA processing.
S2: a neural network training step comprising:
s21, pre-training a neural network by using the existing sea test data and simulation data and utilizing a greedy algorithm;
the neural network used in the embodiment is an improved dense neural network, as shown in fig. 2, and the network omits a pooling layer for simplifying features on the basis of the original dense neural network, so that the problem of important feature loss caused by pooling is avoided;
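For reference, a small densely connected network without any pooling layers, in the spirit of FIG. 2, could look like the following sketch; the channel sizes, growth rate and number of layers are assumptions and not the architecture of FIG. 2.

```python
import torch
import torch.nn as nn

class DenseBlock1d(nn.Module):
    """Each layer receives the concatenation of all previous feature maps (dense connectivity)."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(channels, growth_rate, kernel_size=3, padding=1)))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # concatenate feature maps; no pooling
        return x

class ImprovedDenseNet1d(nn.Module):
    def __init__(self, num_classes=7, in_channels=1, input_length=7):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, 16, kernel_size=3, padding=1)
        self.block = DenseBlock1d(16)
        # No pooling layer before the classifier; the full feature map is flattened.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(self.block.out_channels * input_length, num_classes))

    def forward(self, x):
        return self.classifier(self.block(self.stem(x)))

# 7-dimensional PCA features reshaped to (batch, 1, 7), 7 modulation classes
net = ImprovedDenseNet1d()
print(net(torch.randn(8, 1, 7)).shape)   # torch.Size([8, 7])
```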
the greedy algorithm can obtain a global optimal solution through a series of local optimal solutions, and as shown in fig. 3, the greedy algorithm training network comprises the following steps:
s211, training the first layer neural network independently until a given precision is achieved;
s212, reserving first-layer network data, and independently training a second-layer network until a given precision is achieved;
s213, repeating the above processes until the whole neural network training is completed;
s3, identifying the classes, including:
s31, normalizing and dimension changing are carried out on the extracted features;
further, the feature normalization formula is:
X' = (X - Min) / (Max - Min)
x is original characteristic data, X' is normalized data, and Max and Min are the maximum value and the minimum value of the characteristic data respectively.
The dimension change is specifically as follows:
the original n×m-dimensional feature data is converted into n×m×1-dimensional feature data suitable for a dense neural network, and the feature number of the feature data is not changed but is logically changed from two dimensions to three dimensions.
S32, inputting the processed characteristics into a trained neural network, and completing recognition among modulation modes.
S4, in order to verify the method, Matlab software is used in this embodiment to simulate modulated signals of different types; the modulation modes are: BPSK, QPSK, BFSK, QFSK, 16QAM, 64QAM and OFDM. The signal-to-noise ratio ranges from -30 dB to 30 dB in steps of 5 dB; each class of modulated signal has 100 groups of time-domain waveform data under the different signal-to-noise ratios; the data amount totals 5200 groups.
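The SNR sweep used to build such a simulation set can be illustrated as follows; only BPSK with additive white Gaussian noise is shown, whereas the embodiment uses Matlab and the full set of modulation types listed above, so this sketch is merely indicative.

```python
import numpy as np

def add_awgn(signal, snr_db, rng):
    """Add complex white Gaussian noise at the requested signal-to-noise ratio."""
    power = np.mean(np.abs(signal) ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(signal.shape)
                                        + 1j * rng.standard_normal(signal.shape))
    return signal + noise

rng = np.random.default_rng(5)
snrs = np.arange(-30, 31, 5)                     # -30 dB to 30 dB in 5 dB steps
bpsk = (2 * rng.integers(0, 2, 1000) - 1).astype(complex)   # +/-1 symbols
dataset = {int(snr): add_awgn(bpsk, snr, rng) for snr in snrs}
print(len(dataset), dataset[0].shape)
```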
S41, analyzing the power spectrum, singular spectrum, envelope spectrum, frequency spectrum and phase spectrum of the signal time-domain waveform data, and computing the Q parameter, the number of power spectrum peaks, the R parameter, the maximum value γmax of the zero-center normalized instantaneous amplitude spectral density, the power spectrum Shannon entropy, power spectrum exponential entropy, singular spectrum Shannon entropy, singular spectrum exponential entropy, spectrum amplitude Shannon entropy, spectrum amplitude exponential entropy, phase spectrum Shannon entropy and phase spectrum exponential entropy, forming a 12-dimensional feature vector.
S42, performing PCA processing on the extracted features to obtain 7-dimensional feature data after dimension reduction and denoising, and performing normalization and dimension change processing.
S43, the processed characteristic data under different signal to noise ratios are uniformly divided into a training set and a testing set according to different modulation categories and are used for training of the neural network, wherein the training set is 3900 groups of data, and the testing set is 1300 groups of data.
S44, training and improving the dense neural network by using a greedy algorithm, and finally obtaining an intelligent recognition model of the modulation mode under the mixed signal-to-noise ratio, wherein the recognition performance is shown in the table 1:
TABLE 1 recognition accuracy results of neural network models at different signal-to-noise ratios
The results show that the scheme reaches 89.23% overall recognition accuracy on the test set with signal-to-noise ratios from -30 dB to 30 dB, and nearly 100% recognition accuracy for signals with signal-to-noise ratios of 0 dB and above, demonstrating good recognition performance and robustness on feature data under different signal-to-noise ratios.
In this embodiment, spectral features and entropy features that are insensitive to noise are selected and extracted, and principal component analysis is used to reduce the feature dimension, which lowers the computational cost, achieves a degree of signal-noise separation, and removes the interference of underwater noise with modulation recognition. A dense neural network with faster convergence and higher accuracy is then selected; because the feature redundancy after PCA processing is low, the pooling layers are deliberately removed to avoid losing important features through pooling. For network training, transfer learning is introduced to train the deep neural network, which largely overcomes the shortage of underwater acoustic training samples and ensures the accuracy of the neural network classifier, finally achieving low-delay, high-accuracy recognition among underwater acoustic signal modulation modes.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be apparent to one skilled in the art that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of the technical features thereof; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (5)

1. An inter-class recognition method of underwater sound signal modulation modes based on an improved dense neural network is characterized by comprising the following steps:
s1, receiving an underwater sound modulation signal to be identified, and extracting the characteristic of the underwater sound modulation signal; the method comprises the following steps:
s1-1: firstly, calculating a power spectrum, a singular spectrum, an envelope spectrum, a frequency spectrum and a phase spectrum of the underwater sound modulation signal;
s1-2: calculating spectral features and entropy features;
the spectral features are: maximum value gamma of Q parameter, power spectrum peak number, R parameter and zero center normalized instantaneous amplitude spectrum density max
The entropy features: power spectrum shannon entropy, power spectrum exponential entropy, singular spectrum shannon entropy, singular spectrum exponential entropy, spectrum amplitude shannon entropy, spectrum amplitude exponential entropy, phase spectrum shannon entropy, phase spectrum exponential entropy;
s2: performing dimension reduction and noise reduction on the underwater sound modulation signal characteristics extracted in the step S1 by using a principal component analysis method;
s3: normalizing and dimension changing are carried out on the characteristic of the underwater sound modulation signal processed in the step S2;
s4: removing a pooling layer based on the dense neural network to obtain an improved dense neural network, and training the neural network;
s5: inputting the characteristic of the underwater sound modulation signal processed in the step S3 into the improved dense neural network trained in the step S4, and finally completing the recognition between modulation modes;
in the step S4, the dense neural network training step includes:
s4-1: the dense neural network is improved, a pooling layer for simplifying the features is omitted on the basis of the original dense neural network, and the problem of important feature loss caused by pooling is avoided;
s4-2: pre-training the neural network by using the existing sea test real data and simulation data and using a greedy algorithm;
s4-3: and performing migration learning on the pre-trained neural network in the target sea area to obtain a neural network model suitable for the target sea area.
2. The method for identifying the modulation modes of the underwater acoustic signal according to claim 1, wherein the principal component analysis method in S2 specifically comprises the steps of:
s2-1: Suppose the k-th feature of the underwater acoustic signal is X_k = (x_1, x_2, ..., x_n)^T. The covariance matrix of the K features is
S = (1/K)·Σ (X_k - X̄)(X_k - X̄)^T (k = 1, 2, ..., K)
where K is the total number of features and
X̄ = (1/K)·Σ X_k (k = 1, 2, ..., K)
is the mean of all feature vectors;
s2-2: Calculate the eigenvalues λ_1, λ_2, λ_3, ..., λ_n of the matrix S and the corresponding eigenvectors α_1, α_2, α_3, ..., α_n, and arrange the eigenvalues in descending order;
s2-3: Calculate the cumulative variance contribution rate of the first m principal components:
η = (λ_1 + λ_2 + ... + λ_m) / (λ_1 + λ_2 + ... + λ_n)
s2-4: Select the first m principal components of X_k whose cumulative variance contribution rate reaches 90%, and apply the linear transformation Y_k = A^T·X_k, where A is the matrix of eigenvectors corresponding to the first m eigenvalues and Y_k is the final feature vector after PCA processing.
3. The method for identifying underwater sound signal modulation modes according to claim 1, wherein in S3, normalization and dimension change are performed on extracted features;
the feature normalization formula is:
X' = (X - Min) / (Max - Min)
x is original characteristic data, X' is normalized data, and Max and Min are respectively the maximum value and the minimum value of the characteristic data;
the dimension change is specifically as follows:
the original n×m-dimensional feature data is converted into n×m×1-dimensional feature data suitable for a dense neural network, and the feature number of the feature data is not changed but is logically changed from two dimensions to three dimensions.
4. The method for identifying underwater sound signal modulation modes as claimed in claim 1, wherein the greedy algorithm can obtain a global optimal solution through a series of local optimal solutions, and the step of training the network by using the greedy algorithm is as follows:
(1) Training the first layer neural network independently until a given accuracy is reached;
(2) The first layer network data is reserved, and the second layer network is independently trained until the given precision is achieved;
(3) Repeating the above process until the whole neural network training is completed.
5. The method for identifying the modulation mode of the underwater acoustic signal according to claim 1, wherein the method of transfer learning is as follows: keeping the weights of the convolutional layers of the pre-trained improved dense neural network unchanged, deploying the network in the target sea area, and fine-tuning the fully connected layers of the neural network on the actual underwater acoustic signals.
CN202110214167.8A 2020-09-23 2021-02-26 Method for identifying underwater sound signal modulation modes based on improved dense neural network Active CN112733811B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011006476.8A CN112132027A (en) 2020-09-23 2020-09-23 Underwater sound signal modulation mode inter-class identification method based on improved dense neural network
CN2020110064768 2020-09-23

Publications (2)

Publication Number Publication Date
CN112733811A CN112733811A (en) 2021-04-30
CN112733811B true CN112733811B (en) 2023-06-13

Family

ID=73842578

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011006476.8A Withdrawn CN112132027A (en) 2020-09-23 2020-09-23 Underwater sound signal modulation mode inter-class identification method based on improved dense neural network
CN202110214167.8A Active CN112733811B (en) 2020-09-23 2021-02-26 Method for identifying underwater sound signal modulation modes based on improved dense neural network

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011006476.8A Withdrawn CN112132027A (en) 2020-09-23 2020-09-23 Underwater sound signal modulation mode inter-class identification method based on improved dense neural network

Country Status (1)

Country Link
CN (2) CN112132027A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861066B (en) * 2021-02-15 2022-05-17 青岛科技大学 Machine learning and FFT (fast Fourier transform) -based blind source separation information source number parallel estimation method
CN112887239B (en) * 2021-02-15 2022-04-26 青岛科技大学 Method for rapidly and accurately identifying underwater sound signal modulation mode based on deep hybrid neural network
CN113242197B (en) * 2021-03-24 2022-06-07 厦门大学 Underwater acoustic signal modulation identification method and system based on artificial intelligence
CN113259288B (en) * 2021-05-05 2023-08-08 青岛科技大学 Underwater sound modulation mode identification method based on feature fusion and lightweight hybrid model
CN113435247B (en) * 2021-05-18 2023-06-23 西安电子科技大学 Intelligent recognition method, system and terminal for communication interference

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691050A (en) * 2019-09-10 2020-01-14 中国人民解放军战略支援部队信息工程大学 C-E characteristic-based radiation source fingerprint extraction method and device and individual identification system
CN111444805A (en) * 2020-03-19 2020-07-24 哈尔滨工程大学 Improved multi-scale wavelet entropy digital signal modulation identification method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038471A (en) * 2017-12-27 2018-05-15 哈尔滨工程大学 A kind of underwater sound communication signal type Identification method based on depth learning technology
CN108805039A (en) * 2018-04-17 2018-11-13 哈尔滨工程大学 The Modulation Identification method of combination entropy and pre-training CNN extraction time-frequency image features
CN110909861B (en) * 2018-09-17 2023-05-30 北京市商汤科技开发有限公司 Neural network optimization method and device, electronic equipment and storage medium
CN109299697A (en) * 2018-09-30 2019-02-01 泰山学院 Deep neural network system and method based on underwater sound communication Modulation Mode Recognition
CN109802905B (en) * 2018-12-27 2022-01-14 西安电子科技大学 CNN convolutional neural network-based digital signal automatic modulation identification method
CN110163282B (en) * 2019-05-22 2022-12-06 西安电子科技大学 Modulation mode identification method based on deep learning
CN110738138A (en) * 2019-09-26 2020-01-31 哈尔滨工程大学 Underwater acoustic communication signal modulation mode identification method based on cyclic neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691050A (en) * 2019-09-10 2020-01-14 中国人民解放军战略支援部队信息工程大学 C-E characteristic-based radiation source fingerprint extraction method and device and individual identification system
CN111444805A (en) * 2020-03-19 2020-07-24 哈尔滨工程大学 Improved multi-scale wavelet entropy digital signal modulation identification method

Also Published As

Publication number Publication date
CN112132027A (en) 2020-12-25
CN112733811A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN112733811B (en) Method for identifying underwater sound signal modulation modes based on improved dense neural network
CN110163282B (en) Modulation mode identification method based on deep learning
CN111510402B (en) OFDM channel estimation method based on deep learning
CN113259288B (en) Underwater sound modulation mode identification method based on feature fusion and lightweight hybrid model
CN112464837A (en) Shallow sea underwater acoustic communication signal modulation identification method and system based on small data samples
CN112702294A (en) Modulation recognition method for multi-level feature extraction based on deep learning
CN114422311B (en) Signal modulation recognition method and system combining deep neural network and expert priori features
CN110138459A (en) Sparse underwater sound orthogonal frequency division multiplexing channel estimation methods and device based on base tracking denoising
CN111585922A (en) Modulation mode identification method based on convolutional neural network
CN112737992B (en) Underwater sound signal modulation mode self-adaptive in-class identification method
CN113225282A (en) Communication signal modulation identification method based on BP neural network
CN111865863A (en) RNN neural network-based OFDM signal detection method
CN112910812A (en) Modulation mode identification method for deep learning based on space-time feature extraction
CN107707497B (en) Communication signal identification method based on subtraction clustering and fuzzy clustering algorithm
CN116628566A (en) Communication signal modulation classification method based on aggregated residual transformation network
CN109274626B (en) Modulation identification method based on constellation diagram orthogonal scanning characteristics
CN112202696B (en) Underwater sound signal automatic modulation identification method based on fuzzy self-encoder
CN115913849A (en) Electromagnetic signal identification method based on one-dimensional complex value residual error network
Xie et al. Deep learning-based modulation detection for NOMA systems
CN113242201B (en) Wireless signal enhanced demodulation method and system based on generation classification network
CN113489545B (en) Light space pulse position modulation step-by-step classification detection method based on K-means clustering
CN114422310A (en) Digital orthogonal modulation signal identification method based on joint distribution matrix and multi-input neural network
CN114584441A (en) Digital signal modulation identification method based on deep learning
CN111917674A (en) Modulation identification method based on deep learning
CN111314255A (en) Low-complexity SISO and MIMO receiver generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant