CN114636995A - Underwater sound signal detection method and system based on deep learning - Google Patents
- Publication number
- CN114636995A CN114636995A CN202210257543.6A CN202210257543A CN114636995A CN 114636995 A CN114636995 A CN 114636995A CN 202210257543 A CN202210257543 A CN 202210257543A CN 114636995 A CN114636995 A CN 114636995A
- Authority
- CN
- China
- Prior art keywords
- signal
- spectrogram
- deep learning
- noise reduction
- frequency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/539—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The application provides an underwater acoustic signal detection method and system based on deep learning, comprising: step S1, signal noise reduction processing; step S2, signal enhancement processing; step S3, signal transformation processing, in which different spectrograms are generated from the enhanced signal, comprising one or more of a frequency spectrum, a cepstrum, a time-domain plot, and a time-frequency spectrogram; and step S4, inputting the spectrogram or spectrograms produced in step S3 into a trained deep learning model for recognition and outputting the recognition result. By considering several signal representations at once and using the proposed recognition model, the method improves the accuracy of underwater acoustic signal detection and recognition.
Description
Technical Field
The invention relates to the technical field of audio signal detection and recognition, and in particular to an underwater acoustic signal detection method and system based on deep learning.
Background
Research on the characteristics of underwater targets is a key technology for target recognition and a core research problem in the sonar field. Underwater target recognition has received attention both at home and abroad, and long-term, sustained research has been carried out in theory and in experiment. At the current level of development, sonar target recognition typically detects and identifies underwater targets by means of expert systems and template matching. The technical approaches adopted can be divided into: feature extraction based on physical models, feature extraction based on signal analysis, recognition based on fine features, and fusion of multi-sensor, multi-feature information.
In recent years, deep learning has become a hotspot in artificial intelligence: new algorithms appear continuously, and the methods are widely applied in fields such as speech and image processing. For the detection and recognition of underwater acoustic signals, many research teams at home and abroad have carried out applied research on deep learning methods. However, the models used are generally simple, in-depth research tailored to the characteristics of underwater acoustic signals has not been carried out, and the adopted models lack generality or are insufficiently accurate, so misjudgments occur.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides an underwater acoustic signal detection method based on deep learning, comprising the following steps:
step S1, signal noise reduction processing;
step S2, signal enhancement processing;
step S3, signal transformation processing, wherein different spectrograms are generated from the enhanced signal, the spectrograms comprising one or more of a frequency spectrum, a cepstrum, a time-domain plot, and a time-frequency spectrogram;
step S4, inputting the spectrogram or spectrograms produced in step S3 into a trained deep learning model for recognition, and outputting the recognition result.
Optionally, the noise reduction method in step S1 is one or more of: LMS adaptive filter noise reduction, LMS adaptive notch filter noise reduction, and Wiener filter noise reduction.
Optionally, the S2 includes: the method comprises the steps of obtaining a high-frequency signal and a low-frequency signal through high-pass filtering and low-pass filtering respectively, only performing enhancement processing on the obtained low-frequency signal to obtain an enhanced low-frequency signal, and superposing the high-frequency signal to the enhanced low-frequency signal to obtain the enhanced signal.
Optionally, S3 includes: the spectrograms may include, but are not limited to, one or more of mel-frequency cepstral coefficients (MFCC), gammatone-frequency cepstral coefficients (GFCC), linear prediction cepstral coefficients (LFCC), bark-frequency cepstral coefficients (BFCC), and power-normalized cepstral coefficients (PNCC).
Optionally, the S4 includes: the deep learning model comprises an input layer, one or more hidden layers and an output layer.
Correspondingly, the application also provides an underwater acoustic signal detection system based on deep learning, which comprises the following unit modules:
the signal noise reduction processing unit is used for finishing the noise reduction processing of the signal;
the signal enhancement processing unit is used for further enhancing the signal after noise reduction;
the signal transformation processing unit is used for generating different spectrograms from the enhanced signal, the spectrograms comprising one or more of a frequency spectrum, a cepstrum, a time-domain plot, and a time-frequency spectrogram;
and the recognition output unit is used for inputting the one or more spectrograms converted by the signal conversion processing unit into a deep learning model after training for recognition and outputting a recognition result.
Optionally, the noise reduction method in the signal noise reduction processing unit is one or more of: LMS adaptive filter noise reduction, LMS adaptive notch filter noise reduction, and Wiener filter noise reduction.
Optionally, the signal enhancement processing unit obtains a high-frequency signal and a low-frequency signal through high-pass filtering and low-pass filtering, only performs enhancement processing on the obtained low-frequency signal to obtain an enhanced low-frequency signal, and superimposes the high-frequency signal on the enhanced low-frequency signal to obtain the enhanced signal.
Optionally, the spectrograms in the signal transformation processing unit include, but are not limited to, one or more of mel-frequency cepstral coefficients (MFCC), gammatone-frequency cepstral coefficients (GFCC), linear prediction cepstral coefficients (LFCC), bark-frequency cepstral coefficients (BFCC), and power-normalized cepstral coefficients (PNCC).
Optionally, the deep learning model in the recognition output unit includes an input layer, one or more hidden layers, and an output layer.
The technical effects of this application are:
1. The underwater acoustic signal is converted into spectrogram representations, and detection and recognition are performed by means of deep learning.
2. Multiple kinds of spectrogram signals are used for training and learning, which improves the robustness and accuracy of the detection and recognition model.
3. The detection and recognition model used by the method is fast, allowing the target signal to be discovered ahead of the opposing side.
Drawings
FIG. 1 is the main logic flow diagram of the present invention.
Detailed Description
As shown in FIG. 1, to solve the above problems, the present invention provides an underwater acoustic signal detection method based on deep learning, comprising the following steps:
step S1, signal noise reduction processing;
step S2, signal enhancement processing;
step S3, signal transformation processing, wherein different spectrograms are generated from the enhanced signal, the spectrograms comprising one or more of a frequency spectrum, a cepstrum, a time-domain plot, and a time-frequency spectrogram;
step S4, inputting the spectrogram or spectrograms produced in step S3 into the trained deep learning model for recognition, and outputting the recognition result.
Optionally, the noise reduction method in step S1 is one or more of: LMS adaptive filter noise reduction, LMS adaptive notch filter noise reduction, and Wiener filter noise reduction.
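As an illustration of the first of these options, a minimal LMS adaptive filter can be sketched in a few lines of NumPy. This is a generic textbook LMS noise canceller, not the patent's specific implementation; the tap count, step size, and the availability of a noise reference correlated with the interference are assumptions made for the demo.

```python
import numpy as np

def lms_denoise(noisy, reference, n_taps=16, mu=0.01):
    """Adaptive noise cancellation: estimate the noise in `noisy` from a
    correlated noise `reference` with an LMS filter, then subtract it."""
    w = np.zeros(n_taps)
    out = np.zeros_like(noisy)
    for n in range(n_taps, len(noisy)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # current + past reference samples
        e = noisy[n] - w @ x                       # error = cleaned sample
        w += 2 * mu * e * x                        # LMS weight update
        out[n] = e
    return out

# toy demo: a sine buried in white noise, with the noise itself as reference
rng = np.random.default_rng(0)
t = np.arange(4000) / 8000.0
clean = np.sin(2 * np.pi * 200 * t)
noise = rng.standard_normal(t.size)
denoised = lms_denoise(clean + noise, noise)
```

After convergence the error signal tracks the clean sine, the usual behaviour of an adaptive canceller when the reference is well correlated with the interference.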
Optionally, the S2 includes: the method comprises the steps of obtaining a high-frequency signal and a low-frequency signal through high-pass filtering and low-pass filtering respectively, only performing enhancement processing on the obtained low-frequency signal to obtain an enhanced low-frequency signal, and superposing the high-frequency signal to the enhanced low-frequency signal to obtain the enhanced signal.
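The band-split enhancement described above can be sketched with SciPy's Butterworth filters. The cutoff frequency, filter order, and the plain gain used as the "enhancement" are illustrative assumptions; the source does not specify how the low band is enhanced.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_split_enhance(sig, fs, cutoff=500.0, gain=2.0):
    """Split `sig` at `cutoff` Hz, enhance only the low band (here: a simple
    gain as a stand-in), then superimpose the untouched high band."""
    b_lo, a_lo = butter(4, cutoff / (fs / 2), btype="low")
    b_hi, a_hi = butter(4, cutoff / (fs / 2), btype="high")
    low = filtfilt(b_lo, a_lo, sig)    # low-frequency signal
    high = filtfilt(b_hi, a_hi, sig)   # high-frequency signal
    return gain * low + high           # enhanced low band + original high band

fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
out = band_split_enhance(sig, fs)
```

In this sketch the 100 Hz component comes out roughly doubled while the 2000 Hz component passes through unchanged, mirroring the "enhance low band only, then superimpose" description.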
Optionally, S3 includes: the spectrograms may include, but are not limited to, one or more of mel-frequency cepstral coefficients (MFCC), gammatone-frequency cepstral coefficients (GFCC), linear prediction cepstral coefficients (LFCC), bark-frequency cepstral coefficients (BFCC), and power-normalized cepstral coefficients (PNCC).
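One of the representations the method mentions, the time-frequency spectrogram, can be produced directly with SciPy. The sampling rate, window length, and test tone below are assumptions for the sketch; a real pipeline would apply the same transform to the enhanced hydrophone signal.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000
t = np.arange(2 * fs) / fs
sig = np.sin(2 * np.pi * 1000 * t)   # stand-in for an enhanced underwater signal

# short-time power spectrogram: frequency bins x time frames
f, times, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=128)
log_spec = 10 * np.log10(Sxx + 1e-12)  # log power, a common CNN input scaling
```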
Optionally, the S4 includes: the deep learning model comprises an input layer, one or more hidden layers and an output layer.
The input layer is used for receiving one or more spectrograms after signal transformation processing;
as another embodiment corresponding to the above input layer, the input layer is configured to receive an original signal before signal noise reduction processing, a signal after signal enhancement processing, and one or more spectrograms after signal transformation processing, so as to prevent beneficial information in the original signal from being damaged during noise reduction and prevent unnecessary harmful information from being introduced during enhancement processing.
Optionally, the hidden layer comprises one or more convolutional layers, one or more pooling layers; the loss function adopted by the deep learning model is a cross entropy loss function.
Optionally, the pooling method is as follows:
xe=f(weφ(ue))
ue=(1-we)φ(xe-1);
wherein x_e represents the output of the current layer, u_e represents the input of the function φ, w_e represents the weight of the current layer, φ represents the cross-entropy loss function, and x_{e-1} represents the output of the previous layer.
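Taken literally, the layer recurrence above can be sketched as follows. The source leaves f and φ underspecified (φ is described as a cross-entropy loss function, which reads like a translation slip), so tanh and ReLU are used here as placeholder nonlinearities.

```python
import numpy as np

def layer_step(x_prev, w_e, phi=np.tanh, f=lambda z: np.maximum(z, 0.0)):
    """One step of the recurrence: u_e = (1 - w_e) * phi(x_{e-1}),
    x_e = f(w_e * phi(u_e)).  phi and f are placeholder choices."""
    u_e = (1.0 - w_e) * phi(x_prev)
    return f(w_e * phi(u_e))

x_e = layer_step(np.array([0.5, -1.0, 2.0]), w_e=0.3)
```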
In the loss function, N represents the size of the sample data set, i takes values from 1 to N, and y_i represents the label corresponding to sample x_i; Q_{y_i} represents the weight of sample x_i at its label y_i, M_{y_i} represents the deviation of sample x_i at its label y_i, and M_j represents the deviation at output node j; θ_{j,i} is the weighted angle between sample x_i and its corresponding label y_i.
The excitation function R is:
where N represents the size of the sample data set; y_i represents the label value corresponding to sample feature vector x_i; W_{y_i} represents the weight of sample feature vector x_i at its label y_i, and θ_{y_i} represents the angle between the vector of sample x_i and its corresponding label y_i.
And continuously training the deep learning model until a preset condition is met to obtain the trained deep learning model.
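The train-until-a-preset-condition loop can be illustrated with a minimal softmax/cross-entropy classifier in NumPy. The toy 2-D data, learning rate, and loss threshold are assumptions made for the sketch; the patent's actual model is a CNN over spectrograms, but the same loss and stopping logic apply.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy loss and softmax probabilities for one sample."""
    z = logits - logits.max()               # numerically stabilized softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label]), p

# two well-separated toy classes standing in for spectrogram feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W, lr = np.zeros((2, 2)), 0.1
for epoch in range(200):
    total = 0.0
    for xi, yi in zip(X, y):
        loss, p = softmax_cross_entropy(W @ xi, yi)
        W -= lr * np.outer(p - np.eye(2)[yi], xi)  # SGD step on cross-entropy
        total += loss
    if total / len(X) < 0.05:   # "preset condition": mean loss small enough
        break
```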
Correspondingly, the application also provides an underwater acoustic signal detection system based on deep learning, which comprises the following unit modules:
the signal noise reduction processing unit is used for finishing the noise reduction processing of the signal;
the signal enhancement processing unit is used for further enhancing the signal subjected to noise reduction;
a signal transformation processing unit, configured to generate different spectrograms from the enhanced signal, the spectrograms comprising one or more of a frequency spectrum, a cepstrum, a time-domain plot, and a time-frequency spectrogram;
and the recognition output unit is used for inputting the one or more spectrograms converted by the signal conversion processing unit into a deep learning model after training for recognition and outputting a recognition result.
Optionally, the noise reduction method in the signal noise reduction processing unit is one or more of: LMS adaptive filter noise reduction, LMS adaptive notch filter noise reduction, and Wiener filter noise reduction.
Optionally, the signal enhancement processing unit obtains a high-frequency signal and a low-frequency signal through high-pass filtering and low-pass filtering, only performs enhancement processing on the obtained low-frequency signal to obtain an enhanced low-frequency signal, and superimposes the high-frequency signal on the enhanced low-frequency signal to obtain the enhanced signal.
Optionally, the spectrograms in the signal transformation processing unit include, but are not limited to, one or more of mel-frequency cepstral coefficients (MFCC), gammatone-frequency cepstral coefficients (GFCC), linear prediction cepstral coefficients (LFCC), bark-frequency cepstral coefficients (BFCC), and power-normalized cepstral coefficients (PNCC).
Optionally, the deep learning model in the recognition output unit includes an input layer, one or more hidden layers, and an output layer.
The input layer is used for receiving one or more spectrograms after signal transformation processing;
as another embodiment corresponding to the above input layer, the input layer is configured to receive an original signal before signal noise reduction processing, a signal after signal enhancement processing, and one or more spectrograms after signal transformation processing, so as to prevent beneficial information in the original signal from being damaged during noise reduction and prevent unnecessary harmful information from being introduced during enhancement processing.
The hidden layer comprises one or more convolutional layers and one or more pooling layers; the loss function adopted by the deep learning model is a cross entropy loss function.
Optionally, the pooling method is as follows:
wherein x_e = f(w_e φ(u_e)) and u_e = (1 - w_e) φ(x_{e-1}), where x_e represents the output of the current layer, u_e represents the input of the function φ, w_e represents the weight of the current layer, φ represents the cross-entropy loss function, and x_{e-1} represents the output of the previous layer.
Optionally, N represents the size of the sample data set, i takes values from 1 to N, and y_i represents the label corresponding to sample x_i; Q_{y_i} represents the weight of sample x_i at its label y_i, M_{y_i} represents the deviation of sample x_i at its label y_i, and M_j represents the deviation at output node j; θ_{j,i} is the weighted angle between sample x_i and its corresponding label y_i.
The excitation function R is:
where N represents the size of the sample data set; y_i represents the label value corresponding to sample feature vector x_i; W_{y_i} represents the weight of sample feature vector x_i at its label y_i, and θ_{y_i} represents the angle between the vector of sample x_i and its corresponding label y_i.
And continuously training the deep learning model until a preset condition is met to obtain the trained deep learning model.
It should be noted that the above embodiments and further limitations may be combined wherever they do not conflict, and such combinations constitute the actual disclosure of the present invention. For reasons of space they are not listed exhaustively, but all such combinations fall within the scope of protection of this application.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, and the program may be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. An underwater acoustic signal detection method based on deep learning, comprising the following steps:
step S1, signal noise reduction processing;
step S2, signal enhancement processing;
step S3, signal transformation processing, wherein different spectrograms are generated from the enhanced signal, the spectrograms comprising one or more of a frequency spectrum, a cepstrum, a time-domain plot, and a time-frequency spectrogram;
step S4, inputting the spectrogram or spectrograms produced in step S3 into a trained deep learning model for recognition, and outputting the recognition result.
2. The underwater acoustic signal detection method based on deep learning of claim 1, wherein the noise reduction method in step S1 is one or more of: LMS adaptive filter noise reduction, LMS adaptive notch filter noise reduction, and Wiener filter noise reduction.
3. The underwater acoustic signal detection method based on deep learning of claim 1, comprising the following features: the S2 includes: the method comprises the steps of obtaining a high-frequency signal and a low-frequency signal through high-pass filtering and low-pass filtering respectively, only performing enhancement processing on the obtained low-frequency signal to obtain an enhanced low-frequency signal, and superposing the high-frequency signal to the enhanced low-frequency signal to obtain the enhanced signal.
4. The underwater acoustic signal detection method based on deep learning of claim 1, wherein S3 includes: the spectrograms may include, but are not limited to, one or more of mel-frequency cepstral coefficients (MFCC), gammatone-frequency cepstral coefficients (GFCC), linear prediction cepstral coefficients (LFCC), bark-frequency cepstral coefficients (BFCC), and power-normalized cepstral coefficients (PNCC).
5. The underwater acoustic signal detection method based on deep learning of claim 1, comprising the following features: the S4 includes: the deep learning model comprises an input layer, one or more hidden layers and an output layer.
6. An underwater acoustic signal detection system based on deep learning comprises the following characteristics: the system comprises the following unit modules:
the signal noise reduction processing unit is used for finishing the noise reduction processing of the signal;
the signal enhancement processing unit is used for further enhancing the signal after noise reduction;
the signal transformation processing unit is used for generating different spectrograms from the enhanced signal, the spectrograms comprising one or more of a frequency spectrum, a cepstrum, a time-domain plot, and a time-frequency spectrogram;
and the recognition output unit is used for inputting the one or more spectrograms converted by the signal conversion processing unit into a deep learning model after training for recognition and outputting a recognition result.
7. The deep learning based underwater acoustic signal detection system of claim 6, wherein the noise reduction method in the signal noise reduction processing unit is one or more of: LMS adaptive filter noise reduction, LMS adaptive notch filter noise reduction, and Wiener filter noise reduction.
8. The deep learning based underwater acoustic signal detection system according to claim 6, comprising the following features: the signal enhancement processing unit obtains a high-frequency signal and a low-frequency signal through high-pass filtering and low-pass filtering respectively, only carries out enhancement processing on the obtained low-frequency signal to obtain an enhanced low-frequency signal, and superposes the high-frequency signal on the enhanced low-frequency signal to obtain the enhanced signal.
9. The deep learning based underwater acoustic signal detection system of claim 6, wherein the spectrograms in the signal transformation processing unit include, but are not limited to, one or more of mel-frequency cepstral coefficients (MFCC), gammatone-frequency cepstral coefficients (GFCC), linear prediction cepstral coefficients (LFCC), bark-frequency cepstral coefficients (BFCC), and power-normalized cepstral coefficients (PNCC).
10. The deep learning based underwater acoustic signal detection system according to claim 6, comprising the following features: the deep learning model in the recognition output unit comprises an input layer, one or more hidden layers and an output layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210257543.6A CN114636995A (en) | 2022-03-16 | 2022-03-16 | Underwater sound signal detection method and system based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114636995A true CN114636995A (en) | 2022-06-17 |
Family
ID=81949849
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210257543.6A Pending CN114636995A (en) | 2022-03-16 | 2022-03-16 | Underwater sound signal detection method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114636995A (en) |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101470194A (en) * | 2007-12-26 | 2009-07-01 | 中国科学院声学研究所 | Torpedo target recognition method |
CN108229404A (en) * | 2018-01-09 | 2018-06-29 | 东南大学 | A kind of radar echo signal target identification method based on deep learning |
CN108229298A (en) * | 2017-09-30 | 2018-06-29 | 北京市商汤科技开发有限公司 | The training of neural network and face identification method and device, equipment, storage medium |
CN109100710A (en) * | 2018-06-26 | 2018-12-28 | 东南大学 | A kind of Underwater targets recognition based on convolutional neural networks |
CN109165566A (en) * | 2018-08-01 | 2019-01-08 | 中国计量大学 | A kind of recognition of face convolutional neural networks training method based on novel loss function |
CN109269547A (en) * | 2018-07-12 | 2019-01-25 | 哈尔滨工程大学 | Submarine target Ship Detection based on line spectrum |
CN109375186A (en) * | 2018-11-22 | 2019-02-22 | 中国人民解放军海军航空大学 | Radar target identification method based on the multiple dimensioned one-dimensional convolutional neural networks of depth residual error |
CN109460974A (en) * | 2018-10-29 | 2019-03-12 | 广州皓云原智信息科技有限公司 | A kind of attendance checking system based on gesture recognition |
CN109493847A (en) * | 2018-12-14 | 2019-03-19 | 广州玛网络科技有限公司 | Sound recognition system and voice recognition device |
CN110390282A (en) * | 2019-07-12 | 2019-10-29 | 西安格威西联科技有限公司 | A kind of finger vein identification method and system based on the loss of cosine center |
CN111627419A (en) * | 2020-05-09 | 2020-09-04 | 哈尔滨工程大学 | Sound generation method based on underwater target and environmental information characteristics |
CN112163461A (en) * | 2020-09-07 | 2021-01-01 | 中国海洋大学 | Underwater target identification method based on multi-mode fusion |
CN112257521A (en) * | 2020-09-30 | 2021-01-22 | 中国人民解放军军事科学院国防科技创新研究院 | CNN underwater acoustic signal target identification method based on data enhancement and time-frequency separation |
CN112329819A (en) * | 2020-10-20 | 2021-02-05 | 中国海洋大学 | Underwater target identification method based on multi-network fusion |
CN112364779A (en) * | 2020-11-12 | 2021-02-12 | 中国电子科技集团公司第五十四研究所 | Underwater sound target identification method based on signal processing and deep-shallow network multi-model fusion |
CN113077813A (en) * | 2021-03-22 | 2021-07-06 | 自然资源部第一海洋研究所 | Ship noise identification method based on holographic spectrum and deep learning |
CN113191178A (en) * | 2020-12-04 | 2021-07-30 | 中国船舶重工集团公司第七一五研究所 | Underwater sound target identification method based on auditory perception feature deep learning |
CN113537113A (en) * | 2021-07-26 | 2021-10-22 | 哈尔滨工程大学 | Underwater sound target identification method based on composite neural network |
CN113705647A (en) * | 2021-08-19 | 2021-11-26 | 电子科技大学 | Dynamic interval-based dual semantic feature extraction method |
CN113992153A (en) * | 2021-11-19 | 2022-01-28 | 珠海康晋电气股份有限公司 | Visual real-time monitoring distributed management system of photovoltaic power plant |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116973922A (en) * | 2023-08-29 | 2023-10-31 | Pearl River Fisheries Research Institute, Chinese Academy of Fishery Sciences | Underwater biological distribution characteristic analysis method based on underwater acoustic signal detection |
CN116973922B (en) * | 2023-08-29 | 2024-04-16 | Pearl River Fisheries Research Institute, Chinese Academy of Fishery Sciences | Underwater biological distribution characteristic analysis method based on underwater acoustic signal detection |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110600017B (en) | Training method of voice processing model, voice recognition method, system and device | |
CN108735199B (en) | Self-adaptive training method and system of acoustic model | |
Liu et al. | CP-GAN: Context pyramid generative adversarial network for speech enhancement | |
Wang et al. | ia-PNCC: Noise Processing Method for Underwater Target Recognition Convolutional Neural Network. | |
Yuliani et al. | Speech enhancement using deep learning methods: A review | |
CN113646833A (en) | Voice confrontation sample detection method, device, equipment and computer readable storage medium | |
CN111653270B (en) | Voice processing method and device, computer readable storage medium and electronic equipment | |
Zhang et al. | Birdsoundsdenoising: Deep visual audio denoising for bird sounds | |
Cao et al. | Underwater target classification at greater depths using deep neural network with joint multiple‐domain feature | |
CN111081223A (en) | Voice recognition method, device, equipment and storage medium | |
CN114636995A (en) | Underwater sound signal detection method and system based on deep learning | |
CN112183582A (en) | Multi-feature fusion underwater target identification method | |
CN113555038B (en) | Speaker-independent speech emotion recognition method and system based on unsupervised domain adversarial learning | |
CN112331232B (en) | Voice emotion recognition method combining CGAN spectrogram denoising and bilateral filtering spectrogram enhancement | |
Riviello et al. | Binary Speech Features for Keyword Spotting Tasks. | |
Guan et al. | Robust sensor fusion algorithms against voice command attacks in autonomous vehicles | |
CN113095381B (en) | Underwater sound target identification method and system based on improved DBN | |
CN115565548A (en) | Abnormal sound detection method, abnormal sound detection device, storage medium and electronic equipment | |
KR101862352B1 (en) | Front-end processor for speech recognition, and apparatus and method of speech recognition using the same | |
Song et al. | Underwater acoustic signal noise reduction based on fully convolutional time domain separation network | |
Afonja et al. | Generative extraction of audio classifiers for speaker identification | |
CN117854540B (en) | Underwater sound target identification method and system based on neural network and multidimensional feature fusion | |
Gul et al. | Single channel speech enhancement by colored spectrograms | |
Qiming et al. | Intelligent Speaker Recognition Algorithm Based on SE-Res2Net | |
CN113284486B (en) | Robust voice identification method for adversarial environments | |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220617 |