CN112528775A - Underwater target classification method - Google Patents

Underwater target classification method

Info

Publication number
CN112528775A
CN112528775A (application CN202011362222.XA)
Authority
CN
China
Prior art keywords
classification
time
model
resnet
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011362222.XA
Other languages
Chinese (zh)
Inventor
姜喆
赵晨
王天星
杨舸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202011362222.XA priority Critical patent/CN112528775A/en
Publication of CN112528775A publication Critical patent/CN112528775A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing

Abstract

The invention provides a classification method for underwater targets comprising the steps of preprocessing the received signal, performing time-frequency processing, constructing a data set, building and training a deep learning classification model, and outputting the classification result through the deep learning model to obtain the final classification. To address the difficulty of underwater acoustic target classification, the invention proposes S-ResNet, a classification model combining a ResNet network with a SqueezeNet network. Compared with other classification models it achieves higher recognition accuracy with fewer network parameters and a smaller computational load, holds a clear advantage in training time, and can meet the requirement of real-time classification of underwater targets.

Description

Underwater target classification method
Technical Field
The invention relates to the technical field of neural networks, and in particular to a classification method based on a deep neural network model.
Background
In recent years, demand for marine resources has grown steadily, and the requirements of offshore defense and anti-submarine warfare have risen accordingly, making underwater target detection a research hotspot in the underwater acoustics communities of technologically advanced countries.
Currently, passive sonar has proven to be an effective tool for detecting and identifying self-radiating targets. In passive underwater target recognition, the most common information source is the target's radiated noise: noise from sources such as the vehicle's engine, propeller, and machinery reflects, to a certain extent, clustering characteristics such as the tonnage and type of a vessel, providing an important theoretical basis for target classification research. In the military domain, extracting these features and classifying them effectively realizes target recognition, distinguishing the type and size of an underwater vehicle's target, telling real targets from decoys, and enabling effective attack instructions. Similar systems are also needed in the civil domain for automatic detection and classification of sea traffic and for monitoring port traffic.
Heretofore, underwater target classification has relied primarily on target radiated noise received by sonar as the passive source. Given the characteristics of target radiated noise, traditional classification methods extract features of the ship radiated noise in the audio, time-frequency, and chaotic domains, then make the decision with a shallow classifier. However, on the one hand, owing to the complexity of the marine environment, the time-varying and space-varying nature of the underwater acoustic channel, and the diversity of underwater targets, the acoustic information acquired by sonar is complex and varied, so recognition accuracy cannot be guaranteed. On the other hand, with the development of noise reduction and vibration damping technologies, the level of target radiated noise keeps falling and is easily submerged in ambient ocean noise, so hand-crafted features from different domains are increasingly difficult for a classification decider to use. In summary, a target classification system with high classification accuracy and strong generalization capability is urgently needed.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a classification method for underwater targets. Traditional target classification methods extract features of the ship radiated noise in the frequency, time-frequency, and chaotic domains, then make the decision with a shallow classifier, while the real marine acoustic signal is typically non-stationary, non-linear, and non-Gaussian. Extracting features of real marine acoustic signals with traditional signal processing methods can suffer from poor processing quality, low accuracy, and weak noise resistance. A deep neural network integrates feature learning with classification and discrimination, and compared with a shallow model it offers superior feature learning capability and a general-purpose decision stage. The method applies deep learning to the problems of underwater target classification and helps reduce model complexity while preserving classification performance. It favors intelligent, real-time processing of underwater target classification and is of practical significance for the engineering application of deep learning to this task.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
the method comprises the following steps: preprocessing a received signal;
hydrophones deployed in a water area receive the underwater acoustic signal of the current area in real time; the received underwater acoustic signal is y(n), and DC removal and low-pass filtering of the received signal yield the preprocessed signal s(n);
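The step-one preprocessing can be sketched as follows; the 1 kHz cutoff, 4th filter order, and 16 kHz sampling rate are illustrative assumptions, not values given in the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(y, fs, cutoff_hz=1000.0, order=4):
    """Return s(n): y(n) with its DC offset removed, then low-pass filtered."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                              # DC removal
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, y)                      # zero-phase low-pass filtering

fs = 16000                                        # assumed sampling rate
t = np.arange(fs) / fs
y = 2.0 + np.sin(2 * np.pi * 100 * t)             # 100 Hz tone riding on a DC offset
s = preprocess(y, fs)                             # s(n) has zero mean
```

Zero-phase filtering (`filtfilt`) is used here so the filter does not shift the signal in time, which keeps later frame boundaries aligned.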
step two: time-frequency processing;
performing sliding window slicing processing on the signals s (n) obtained after the preprocessing, and then performing short-time Fourier transform to obtain a time-frequency characteristic diagram of each audio signal; labeling the time-frequency characteristic graph, and setting different labels for different target samples;
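A minimal sketch of step two under stated assumptions (1-second slices and a 256-point STFT; the patent does not specify these parameters):

```python
import numpy as np
from scipy.signal import stft

fs = 16000
s = np.sin(2 * np.pi * 500 * np.arange(5 * fs) / fs)   # 5 s stand-in for s(n)

slice_len = fs                                          # assumed 1-second slices
slices = [s[i:i + slice_len]
          for i in range(0, len(s) - slice_len + 1, slice_len)]

spectrograms = []
for seg in slices:
    f, t, Z = stft(seg, fs=fs, nperseg=256, noverlap=128)
    spectrograms.append(np.abs(Z))                      # magnitude time-frequency map

labels = [0] * len(spectrograms)                        # one label per target sample
```

Each element of `spectrograms` is one time-frequency feature map; in the patent these maps are labeled per target class before being assembled into the data set.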
step three: constructing a data set;
the time-frequency feature maps obtained in step two form the data set S, which is randomly divided into a training set S1 and a test set S2 in a preset proportion, with pixel values normalized; the resulting time-frequency map samples form the data set for the final target classification;
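The split and normalization of step three might look like this in outline; the 100-sample toy data set and the random seed are assumptions, and the 7:3 proportion follows the preferred ratio stated later in the document:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.integers(0, 256, size=(100, 224, 224)).astype(np.float32)  # toy "images"

S = S / 255.0                                   # pixel value normalization
idx = rng.permutation(len(S))                   # random split
n_train = int(0.7 * len(S))                     # preset 7:3 proportion
S1, S2 = S[idx[:n_train]], S[idx[n_train:]]     # training set S1, test set S2
```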
step four: building a deep learning classification model;
a classification model S-ResNet is constructed on the basis of the ResNet34 network; S-ResNet is an 8-layer residual neural network model in which layer 1 consists of 1 convolutional layer and 1 pooling layer, and layers 3 through 8 consist of 6 residual units; a Fire Module from SqueezeNet is added to each residual unit, the Fire Module replacing the original single convolution layer with two layers, Squeeze and Expand;
step five: training a newly built classification model;
the data set constructed in step three is fed into the classification model S-ResNet built in step four; the corresponding hyperparameters are set, namely the number of epochs, the learning rate (LR), and the number of samples selected per training pass (batch size); the model is trained with a cross-entropy loss function until convergence; the recognition accuracy of the improved residual neural network obtained after each training epoch is tested on the full test set; and the neural network model with the highest accuracy over all training epochs is saved as the optimal classification model;
step six: outputting a classification result to obtain a final classification result;
using the optimal classification model obtained in step five as the final classification and recognition model, the time-frequency feature maps of the different targets in the test set are input to it, and its outputs are the recognition results for the different targets.
The preset proportion is 7:3: the data set S is randomly divided into the training set S1 and the test set S2 in a 7:3 ratio.
In step three, the sizes of all data samples are uniformly set to match the input size of the S-ResNet classification model.
The residual network (ResNet) is formed by stacking residual units. Each residual unit is implemented as a skip connection: the unit's input is added directly to its output, and the sum then passes through a nonlinear activation function.
The SqueezeNet Fire module consists of two parts: a Squeeze layer followed by an Expand layer. The input of the Fire module has size H × W × M, where H, W, and M are the height, width, and channel count of the input sample data, and the output feature map has size H × W × (E1 + E3), where E1 and E3 are the numbers of 1×1 and 3×3 convolution kernels, respectively. The spatial resolution of the feature map is unchanged and only the dimension, i.e. the channel count, changes, which achieves the goal of reducing the weight parameters.
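The parameter saving can be checked with simple arithmetic: compare one plain 3×3 convolution mapping M channels to E1 + E3 channels against a Fire module with S1 squeeze kernels. The concrete channel counts below are illustrative assumptions, not values from the patent:

```python
# Back-of-the-envelope weight counts (biases ignored).
def conv3x3_params(m, n):
    return 9 * m * n                                  # 3x3 kernel, m in, n out

def fire_params(m, s1, e1, e3):
    squeeze = 1 * 1 * m * s1                          # S1 1x1 squeeze kernels
    expand = 1 * 1 * s1 * e1 + 3 * 3 * s1 * e3        # E1 1x1 + E3 3x3 expand kernels
    return squeeze + expand

M, S1, E1, E3 = 64, 16, 64, 64                        # assumed counts
plain = conv3x3_params(M, E1 + E3)                    # 9 * 64 * 128 = 73728
fire = fire_params(M, S1, E1, E3)                     # 1024 + 1024 + 9216 = 11264
```

With these assumed counts the Fire module uses 11,264 weights versus 73,728 for the plain convolution, roughly a 6.5× reduction, while the output channel count E1 + E3 is unchanged.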
The beneficial effect of the invention is that, to address the difficulty of underwater acoustic target classification, it provides S-ResNet, a classification model combining a ResNet network and a SqueezeNet network. Compared with other classification models, the proposed model achieves higher recognition accuracy with fewer network parameters and a smaller computational load, holds a certain advantage in training time, and can meet the requirement of real-time classification of underwater targets.
Drawings
Fig. 1 shows the residual block of the S-ResNet network of the present invention (with 4S1 = E1 = E3).
FIG. 2 is a block diagram of the classification and identification method of underwater targets of the present invention.
Fig. 3 shows ResNet residual block structures according to the present invention; Fig. 3(a) and Fig. 3(b) each show one variant.
FIG. 4 shows the Fire module of the present invention SqueezeNet.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
To address the problems of large model size, heavy computation, and slow convergence in deep-learning-based underwater target classification, a new classification model, S-ResNet, is proposed; like ResNet, it is formed by stacking residual blocks. The residual unit of S-ResNet is shown in Fig. 1 and the network structure in Table 1. The overall classification pipeline is shown in Fig. 2.
The method comprises the following steps: preprocessing a received signal;
the method comprises the following steps that hydrophones distributed in a water area receive underwater sound signals of the current water area in real time, the received underwater sound signals are y (n), and the received signals are subjected to direct current and low-pass filtering to obtain preprocessed signals s (n);
step two: time-frequency processing;
performing sliding window slicing processing on the signals s (n) obtained after the preprocessing, and then performing short-time Fourier transform to obtain a time-frequency characteristic diagram of each audio signal; labeling the time-frequency characteristic graph, and setting different labels for different target samples;
step three: constructing a data set;
the time-frequency feature maps obtained in step two form the data set S, which is randomly divided into a training set S1 and a test set S2 in the preset 7:3 proportion. Because the input size of the S-ResNet classification model is 224 × 224 pixels, all data samples are uniformly resized to 224 × 224 pixels and pixel values are normalized; the resulting time-frequency map samples form the data set for the final target classification;
step four: building a deep learning classification model;
the convolutional neural network extracts the features of different levels of the image, and the extracted features are richer along with the increase of the number of network layers. However, when the network reaches a certain depth, the network has a "degeneration" (Degradation) phenomenon in the training process. In order to solve the degradation problem caused by the over-deep network, a residual error neural network (ResNet) is proposed by He and the like, and the network depth is greatly improved while the higher accuracy is achieved. While the lightweight network SqueezeNet has fewer parameters, the classification effect is not ideal. The invention combines ResNet and SqueezeNet, and provides a new deep learning classification model-S-ResNet with small calculated amount and good classification effect by virtue of the advantages of ResNet and SqueezeNet.
The invention constructs an 8-layer residual neural network model on the basis of the ResNet34 network, in which layer 1 consists of 1 convolutional layer and 1 pooling layer, and layers 3 through 8 consist of 6 residual blocks. A SqueezeNet Fire Module is added to each residual block of the improved residual neural network; the Fire Module markedly reduces the parameter count by replacing the original single convolution layer with two layers, Squeeze and Expand.
The residual network (ResNet) is formed by stacking residual units (Fig. 3). Fig. 3(a) and Fig. 3(b) are the two common residual unit blocks; the chief advantage of the unit in Fig. 3(b) is that it has roughly 16.94 times fewer parameters than the unit in Fig. 3(a), reducing the computational load, which is why it is widely used in residual networks. A residual unit is implemented as a skip connection: the unit's input is added directly to its output, and the sum then passes through a nonlinear activation function. Residual networks are therefore easy to implement in mainstream automatic-differentiation deep learning frameworks, with parameters updated directly by backpropagation. The residual network largely solves the degradation problem of deep neural networks and converges faster at the same depth.
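The skip connection described above can be sketched in a few lines of numpy; the matrix multiply here is a stand-in for the unit's convolutional branch F(x), not the actual S-ResNet layers:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, weights):
    fx = weights @ x                 # stand-in for the residual branch F(x)
    return relu(fx + x)              # skip connection, then activation

x = np.array([1.0, -2.0, 3.0])
W0 = np.zeros((3, 3))                # with F == 0 the unit reduces to relu(x)
y = residual_unit(x, W0)
```

The zero-weight case illustrates why residual networks train easily: when a branch contributes nothing, the unit still passes its input through, so extra depth cannot make the mapping worse than the identity.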
The SqueezeNet network introduced a new building block, the Fire Module (Fig. 4). The Fire Module is a specially designed structure consisting of two parts: a Squeeze layer followed by an Expand layer. The module's input has size H × W × M, where H, W, and M are the height, width, and channel count of the input sample data, and the output feature map has size H × W × (E1 + E3), where E1 and E3 are the numbers of 1×1 and 3×3 convolution kernels, respectively. The spatial resolution of the feature map is thus unchanged and only the dimension, i.e. the channel count, changes, which achieves the goal of reducing the weight parameters (the relationship among S1, E1, and E3 in the figure is E1 = E3 = 4S1). This design balances detection accuracy against model complexity.
The structure of the S-ResNet residual unit after adding the Fire Module is shown in Fig. 1 (for simplicity, the BN layers and ReLU functions following all convolutional layers are omitted from the diagram). The residual unit block of Fig. 1 adapts the residual unit of the residual network with the design idea of the SqueezeNet Fire Module, yielding a new residual unit block. Stacking the constructed residual blocks gives the final S-ResNet classification model structure shown in Table 1:
Table 1: Structure of the S-ResNet model (reproduced as an image in the original publication)
Step five: training newly built classification model
The data set constructed in step three is fed into the classification model S-ResNet designed in step four; the corresponding hyperparameters are set, namely the number of epochs, the learning rate (LR), and the number of samples selected per training pass (batch size); the model is trained with a cross-entropy loss function until convergence; the recognition accuracy of the improved residual neural network obtained after each training epoch is tested on the full test set; and the neural network model with the highest accuracy over all training epochs is saved as the optimal classification model.
Step six: outputting the classification result to obtain the final classification result
And inputting the time-frequency characteristic graphs of different targets in the test set into a final classification recognition model by using the optimal classification model obtained in the step five, and outputting the final classification recognition model as recognition results of different targets.
The examples of the invention are as follows:
the method comprises the following steps: preprocessing a received signal;
hydrophones deployed in the water area receive the underwater acoustic signal of the current area in real time; the received underwater acoustic signal is y(n). The received signal is preprocessed, including DC removal, to obtain the preprocessed signal s(n).
Step two: framing and windowing;
and performing frame windowing on the preprocessed signals s (n). When processing non-stationary underwater acoustic signals, the whole signal needs to be subjected to frame processing, namely, the whole signal is divided into a plurality of segments, so that the subsequent processing is facilitated. This process is called framing. After framing, a discontinuity occurs at the beginning and end of each frame. The more frames that are divided, the greater the error from the original signal. If the time-frequency transformation is directly performed, the signal energy of a certain frequency can be diffused to an adjacent frequency point, and the frequency spectrum leakage phenomenon occurs. To reduce spectral leakage, the signal is typically windowed after sampling. The framed signal is made continuous and each frame exhibits the characteristics of a periodic function.
Step three: carrying out spectrum analysis;
after the processing in step two, each frame of the underwater acoustic signal is quasi-stationary and slowly varying, with no large abrupt changes. The basic idea of the short-time Fourier transform is to slide a time window along the signal and take the Fourier transform of the signal inside the window, yielding the signal's time-frequency spectrum. An initial time-frequency analysis of the target classes is performed with a spectrogram function from the time-frequency analysis toolbox to obtain each signal's spectrogram.
Step four: building a data set
Following step three, the short-time Fourier transform is applied to the groups of data received by the hydrophones for the different targets, and a time-frequency map is drawn for every 1 second of signal. The resulting target time-frequency maps form the data set S, which is randomly divided into a training set S1 and a test set S2 in the preset 7:3 proportion. Finally, all images are resized to 224 × 224 pixels, and after preprocessing the resulting time-frequency image samples form the data set for the final target classification.
Step five: building of deep learning classification model
A convolutional neural network can extract image features at different levels, and the extracted features become richer as the number of layers grows. However, once the network reaches a certain depth, a degradation phenomenon appears during training. To solve the degradation caused by overly deep networks, He et al. proposed the residual neural network (ResNet), which greatly increases feasible network depth while maintaining high accuracy. The lightweight SqueezeNet network has few parameters, but its performance is not ideal. The method combines ResNet and SqueezeNet and, drawing on the strengths of both, proposes S-ResNet, a new deep learning classification model with a small computational load and good classification performance.
The residual network (ResNet) is formed by stacking a series of residual units (Fig. 3). A residual unit is implemented as a skip connection: the unit's input is added directly to its output, and the sum then passes through the activation function. Residual networks are therefore easy to implement in mainstream automatic-differentiation deep learning frameworks, with parameters updated directly by backpropagation. The residual network largely solves the degradation problem of deep neural networks and converges faster at the same depth.
The SqueezeNet network introduced a new building block, the Fire Module (Fig. 4). The Fire Module is a specially designed structure consisting of two parts: a Squeeze layer followed by an Expand layer. The Squeeze layer consists of S1 convolution kernels of size 1 × 1; the Expand layer consists of E1 kernels of size 1 × 1 and E3 kernels of size 3 × 3, whose output feature maps are concatenated. The feature map input to the Fire module has size H × W × M, and the output has size H × W × (E1 + E3); the spatial resolution is unchanged and only the dimension, i.e. the channel count, changes, achieving the goal of reducing the weight parameters. This design balances detection accuracy against model complexity. The Fire Module is applied here to the residual network; the finally constructed residual block is shown in Fig. 1 (for simplicity, the BN layers and ReLU functions following all convolutional layers are omitted from the diagram). Stacking the constructed residual blocks yields the final classification model structure shown in Table 1.
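The shape behaviour described above, an H × W × M input mapping to an H × W × (E1 + E3) output at unchanged resolution, can be sketched in numpy. All sizes are illustrative assumptions; the 1×1 convolutions are einsums over the channel axis and the 3×3 convolution uses zero padding to preserve resolution:

```python
import numpy as np

def conv1x1(x, w):                      # x: (H, W, Cin), w: (Cin, Cout)
    return np.einsum("hwc,cd->hwd", x, w)

def conv3x3_same(x, w):                 # x: (H, W, Cin), w: (3, 3, Cin, Cout)
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))   # zero padding keeps H x W
    out = np.zeros((H, W, w.shape[3]))
    for i in range(3):
        for j in range(3):
            out += np.einsum("hwc,cd->hwd", xp[i:i + H, j:j + W], w[i, j])
    return out

def fire_module(x, ws, we1, we3):
    s = np.maximum(conv1x1(x, ws), 0.0)             # Squeeze (S1 1x1 kernels) + ReLU
    e1 = conv1x1(s, we1)                            # Expand, 1x1 branch (E1 kernels)
    e3 = conv3x3_same(s, we3)                       # Expand, 3x3 branch (E3 kernels)
    return np.maximum(np.concatenate([e1, e3], axis=-1), 0.0)

H, W, M, S1, E1, E3 = 8, 8, 6, 2, 4, 4              # assumed sizes
rng = np.random.default_rng(0)
y = fire_module(rng.standard_normal((H, W, M)),
                rng.standard_normal((M, S1)),
                rng.standard_normal((S1, E1)),
                rng.standard_normal((3, 3, S1, E3)))
```

The output has shape (H, W, E1 + E3): the squeeze stage compresses M channels down to S1 before the expensive 3×3 kernels run, which is exactly where the parameter saving comes from.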
Step six: training newly built classification model
And (4) sending the data set constructed in the step four into a classification model, and training to obtain the optimal classification model.
Step seven: outputting the classification result to obtain the final classification result
And (5) carrying out classification and identification according to the obtained underwater sound data by using the classification model constructed in the sixth step, and outputting a final classification result.

Claims (5)

1. A method of classifying underwater objects, comprising the steps of:
the method comprises the following steps: preprocessing a received signal;
the method comprises the following steps that hydrophones distributed in a water area receive underwater sound signals of the current water area in real time, the received underwater sound signals are y (n), and the received signals are subjected to direct current and low-pass filtering to obtain preprocessed signals s (n);
step two: time-frequency processing;
performing sliding window slicing processing on the signals s (n) obtained after the preprocessing, and then performing short-time Fourier transform to obtain a time-frequency characteristic diagram of each audio signal; labeling the time frequency characteristic graph, and setting different labels for different target samples;
step three: constructing a data set;
the time-frequency feature maps obtained in step two form the data set S, which is randomly divided into a training set S1 and a test set S2 in a preset proportion, with pixel values normalized; the resulting time-frequency map samples form the data set for the final target classification;
step four: building a deep learning classification model;
a classification model S-ResNet is constructed on the basis of the ResNet34 network; S-ResNet is an 8-layer residual neural network model in which layer 1 consists of 1 convolutional layer and 1 pooling layer, and layers 3 through 8 consist of 6 residual units; a Fire Module from SqueezeNet is added to each residual unit, the Fire Module replacing the original single convolution layer with two layers, Squeeze and Expand;
step five: training a newly built classification model;
sending the data set constructed in the third step into the classification model S-ResNet constructed in the fourth step, setting corresponding super-parameter rounds, learning rate and the number of samples selected by one-time training, training through a cross entropy loss function until convergence, testing and identifying accuracy of an improved residual error neural network model obtained by each round of training by using a total test set, and storing a neural network model with the maximum accuracy in all rounds of training as an optimal classification model;
step six: outputting a classification result to obtain a final classification result;
and inputting the time-frequency characteristic graphs of different targets in the test set into a final classification recognition model by using the optimal classification model obtained in the step five, and outputting the final classification recognition model as recognition results of different targets.
2. A method of classification of underwater objects according to claim 1, characterized in that:
the preset proportion is 7:3, and the data set S is randomly divided into the training set S1 and the test set S2 in a 7:3 ratio.
3. A method of classification of underwater objects according to claim 1, characterized in that:
in step three, the sizes of all data samples are uniformly set to match the input size of the S-ResNet classification model.
4. A method of classification of underwater objects according to claim 1, characterized in that:
the residual error network ResNet is formed by stacking residual error units, and the residual error units in the residual error network are realized in a layer jump connection mode, namely the input of the units is directly added with the output of the units, and then nonlinear mapping is carried out through an activation function.
5. A method of classification of underwater objects according to claim 1, characterized in that:
the SqueezeNet Fire module consists of two parts: a Squeeze layer followed by an Expand layer. The input of the Fire module has size H × W × M, where H, W, and M are the height, width, and channel count of the input sample data, and the output feature map has size H × W × (E1 + E3), where E1 and E3 are the numbers of 1×1 and 3×3 convolution kernels, respectively. The spatial resolution of the feature map is unchanged and only the dimension, i.e. the channel count, changes, achieving the goal of reducing the weight parameters.
CN202011362222.XA 2020-11-28 2020-11-28 Underwater target classification method Pending CN112528775A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011362222.XA CN112528775A (en) 2020-11-28 2020-11-28 Underwater target classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011362222.XA CN112528775A (en) 2020-11-28 2020-11-28 Underwater target classification method

Publications (1)

Publication Number Publication Date
CN112528775A true CN112528775A (en) 2021-03-19

Family

ID=74994416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011362222.XA Pending CN112528775A (en) 2020-11-28 2020-11-28 Underwater target classification method

Country Status (1)

Country Link
CN (1) CN112528775A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326737A (en) * 2021-05-06 2021-08-31 西北工业大学 Data enhancement method for underwater target
CN113435276A (en) * 2021-06-16 2021-09-24 中国电子科技集团公司第五十四研究所 Underwater sound target identification method based on antagonistic residual error network
CN116973901A (en) * 2023-09-14 2023-10-31 海底鹰深海科技股份有限公司 Algorithm application of time-frequency analysis in sonar signal processing
CN117198330A (en) * 2023-11-07 2023-12-08 国家海洋技术中心 Sound source identification method and system and electronic equipment

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103091679A (en) * 2013-02-04 2013-05-08 中国科学院声学研究所 Underwater moving target identification method
WO2019237567A1 (en) * 2018-06-14 2019-12-19 江南大学 Convolutional neural network based tumble detection method
CN111104954A (en) * 2018-10-26 2020-05-05 华为技术有限公司 Object classification method and device
CN110009052A (en) * 2019-04-11 2019-07-12 腾讯科技(深圳)有限公司 A kind of method of image recognition, the method and device of image recognition model training
CN110490230A (en) * 2019-07-16 2019-11-22 西北工业大学 Underwater acoustic target recognition method based on deep convolutional generative adversarial network
CN111325143A (en) * 2020-02-18 2020-06-23 西北工业大学 Underwater target identification method under unbalanced data set condition
CN111400540A (en) * 2020-03-11 2020-07-10 金陵科技学院 Singing voice detection method based on extrusion and excitation residual error network
CN111624585A (en) * 2020-05-21 2020-09-04 西北工业大学 Underwater target passive detection method based on convolutional neural network
CN111735525A (en) * 2020-05-28 2020-10-02 哈尔滨工程大学 DEMON spectral feature extraction method suitable for unmanned sonar

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
FORREST N. IANDOLA et al.: "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size", arXiv:1602.07360v4 *
LONG CHEN et al.: "Underwater object detection using Invert Multi-Class Adaboost with deep learning", arXiv:2005.11552v1 *
WILLIAM BOKUI SHEN et al.: "Exploration of the Effect of Residual Connection on top of SqueezeNet: A Combination Study of Inception Model and Bypass Layers", https://www.semanticscholar.org/paper/exploration-of-the-effect-of-residual-connection-on-shen-han/ae0aeeb9e73f9de1ae3cdbc4ac3be995eede564b *
孙若钒 et al.: "VansNet lightweight convolutional neural network", Journal of Guizhou University (Natural Sciences) *
王小宇 et al.: "Improved convolutional neural network for end-to-end automatic underwater target recognition", Journal of Signal Processing *
王鹏: "Research on underwater target recognition based on deep neural networks", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326737A (en) * 2021-05-06 2021-08-31 西北工业大学 Data enhancement method for underwater target
CN113435276A (en) * 2021-06-16 2021-09-24 中国电子科技集团公司第五十四研究所 Underwater sound target identification method based on antagonistic residual error network
CN116973901A (en) * 2023-09-14 2023-10-31 海底鹰深海科技股份有限公司 Algorithm application of time-frequency analysis in sonar signal processing
CN117198330A (en) * 2023-11-07 2023-12-08 国家海洋技术中心 Sound source identification method and system and electronic equipment
CN117198330B (en) * 2023-11-07 2024-01-30 国家海洋技术中心 Sound source identification method and system and electronic equipment

Similar Documents

Publication Publication Date Title
CN112528775A (en) Underwater target classification method
CN112802484B (en) Panda sound event detection method and system under mixed audio frequency
CN110532932B (en) Method for identifying multi-component radar signal intra-pulse modulation mode
Wang et al. ia-PNCC: Noise Processing Method for Underwater Target Recognition Convolutional Neural Network.
CN106772331A (en) Target identification method and Target Identification Unit
CN104568113B (en) 2014-10-27 2017-09-15 Model-based automatic interception method for blast waves in ocean acoustic propagation surveys
CN110929842A (en) Accurate intelligent detection method for burst time region of non-cooperative radio signal
CN111090089B (en) Space-time adaptive detection method based on two types of auxiliary data
CN112307926B (en) Acoustic passive ship target classification method based on generation countermeasure network
CN110444225B (en) Sound source target identification method based on feature fusion network
Vahidpour et al. An automated approach to passive sonar classification using binary image features
CN116106880A (en) Underwater sound source ranging method and device based on attention mechanism and multi-scale fusion
Guo et al. Underwater target detection and localization with feature map and CNN-based classification
Alouani et al. A spatio-temporal deep learning approach for underwater acoustic signals classification
CN115909040A (en) Underwater sound target identification method based on self-adaptive multi-feature fusion model
CN114219998A (en) Sonar image real-time detection method based on target detection neural network
CN115510898A (en) Ship acoustic wake flow detection method based on convolutional neural network
Zhang et al. Underwater acoustic source separation with deep Bi-LSTM networks
Zhou et al. A multi-feature compression and fusion strategy of vertical self-contained hydrophone array
CN113109795B (en) Deep sea direct sound zone target depth estimation method based on deep neural network
CN111624585A (en) Underwater target passive detection method based on convolutional neural network
Ye et al. A Gray Scale Correction Method for Side-Scan Sonar Images Based on GAN
CN116405127B (en) Compression method and device of underwater acoustic communication preamble signal detection model
CN116417011A (en) Underwater sound target identification method based on feature fusion and residual CNN
CN113792774B (en) Intelligent fusion sensing method for underwater targets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210319