CN111401236A - Underwater sound signal denoising method based on self-coding neural network - Google Patents
- Publication number
- CN111401236A
- Authority
- CN
- China
- Prior art keywords: self, weight, layer, network, coding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention provides an underwater sound signal denoising method based on a self-coding neural network. The method trains a self-coding network to obtain an activation-function model, optimizes the values of the network parameters according to a loss function, and updates all weights by the gradient descent method so that the weight values fit the mapping from noisy samples to clean samples. Once the parameters of this mapping relation are obtained, the self-coding model trained with these parameters realizes the denoising of noisy samples. The invention avoids the poor system robustness caused by the independence assumptions on signal and noise made in traditional denoising algorithms, and thereby increases denoising robustness.
Description
Technical Field
The invention relates to the technical field of underwater acoustic denoising, and in particular to extracting useful underwater acoustic signals from a complex underwater environment.
Background
Underwater acoustic signal denoising is a key research topic in the field of signal processing and is mainly used for feature extraction from underwater acoustic signals. In water, the propagation of electromagnetic-wave signals is limited, so the main information carrier becomes the acoustic signal. Compared with electromagnetic waves, however, acoustic signals are more easily disturbed by the external environment and take on characteristics similar to noise signals, while the useful information is carried in the signal features. Research on underwater acoustic signal denoising is therefore key content in both the signal processing field and the underwater signal research field.
Existing underwater acoustic signal denoising mainly relies on traditional methods, the wavelet denoising method, and the adaptive filtering denoising method; machine-learning-based methods have so far seen only limited application to underwater acoustic denoising.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides an underwater sound signal denoising method based on a self-coding neural network. Traditional underwater acoustic denoising methods, such as the wavelet denoising algorithm and the ensemble empirical mode decomposition algorithm, impose certain assumed conditions on the samples before denoising, and these conditions cannot be fully satisfied in the actual underwater environment. The self-coding denoising network based on deep learning is a supervised training model: specific label information must be given in advance, but no assumptions are made about the environment or the samples. The training process of the self-coding network therefore does not depend on such assumed conditions, and the trained model has better robustness.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
Step 1: In the training stage of self-coding, assume the self-coding network has three elements in the input layer, two elements in the hidden layer, and three elements in the output layer. Define the variable $w_{ab}^{(c)}$ as a weight, where $a$ denotes the $a$-th element of the upper layer the weight connects, $b$ denotes the $b$-th element of the lower layer it connects, and $c$ takes 1 or 2: 1 denotes a weight from the input layer to the hidden layer, 2 a weight from the hidden layer to the output layer. Define the variable $b_p^{(q)}$ as a bias term, where $q$ takes 1 or 2 ($q=1$ denotes a bias of the hidden layer, $q=2$ a bias of the output layer) and $p$ denotes the $p$-th bias term in the layer. The input layer is denoted $x_i$, where $i$ denotes the $i$-th element of the input layer. For example, $w_{11}^{(1)}$ is the weight from the first input element to the first hidden element of the self-coding network, $w_{12}^{(1)}$ the weight from the first input element to the second hidden element, and $w_{11}^{(2)}$ the weight from the first hidden element to the first output element; $b_1^{(1)}$ is the bias of the first hidden element, $b_2^{(1)}$ the bias of the second hidden element, and $b_1^{(2)}$ the bias of the first output element. $h$ denotes the output of the hidden layer, with $h_1$ and $h_2$ the outputs of its first and second elements; $y$ is the output of the output layer, with $y_1$ its first output element. The forward-propagating self-coding network is therefore expressed by the relations:

$$h_j = f\Big(\sum_{i=1}^{3} w_{ij}^{(1)} x_i + b_j^{(1)}\Big),\quad j=1,2 \tag{1}$$

$$y_k = f\Big(\sum_{j=1}^{2} w_{jk}^{(2)} h_j + b_k^{(2)}\Big),\quad k=1,2,3 \tag{2}$$
Equation (1) is the encoding process of the self-coding network and equation (2) the decoding process, where $f$ is the activation function. Each layer of the self-coding network has one activation function; here the sigmoid function is selected, whose mathematical expression is:

$$f(z) = \frac{1}{1 + e^{-z}} \tag{3}$$
Step 2: The back-propagation algorithm is the core algorithm for training the neural network: it optimizes the values of the network parameters according to a defined loss function so that the loss of the neural network model on the training data set reaches a minimum. The basic loss is expressed as $C = [f(\tilde{x}, w, b) - x]^2$, where $x$ is the input matrix of the self-coding network model, $w$ the weight matrix, $b$ the bias matrix, $f(\tilde{x}, w, b)$ the output value of the self-coding output layer, and $\tilde{x}$ the input after adding noise. The loss function is expressed as:

$$C = \frac{1}{N}\sum_{i=1}^{N} (y_i - x_i)^2 + \frac{\lambda}{2}\big(\|w\|_F^2 + \|w'\|_F^2\big) + \beta \sum_{j} \mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) \tag{4}$$
where $N$ is the number of frames after the Fourier transform of the underwater acoustic signal, and $\lambda$, $\beta$, $\rho$ are model hyperparameters: $\lambda$ is the weight-decay parameter, controlling the relative importance of the weight-decay term in the formula, and $\beta$ is the sparsity-penalty parameter, controlling the weight of the sparsity penalty factor. $y_i$ is the output value of the self-coding network at the $i$-th neuron, $x_i$ the input value at the $i$-th neuron, $w$ the weight of the coding layer, $w'$ the weight of the decoding layer, and $\|\cdot\|_F$ the Frobenius norm taken over all elements of the weight matrices in the loss function. The sparsity-penalty term in equation (4) has the specific expression:

$$\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j} \tag{5}$$
where $\rho$ is the sparsity parameter, representing the target average activity of the hidden neurons, and $\hat{\rho}_j$ is the activation degree of hidden neuron $j$ of the self-coding neural network; equation (5) is the relative entropy between two Bernoulli random variables with means $\rho$ and $\hat{\rho}_j$, where $\rho$ is an empirical value, taken as 100.
The gradient-descent algorithm seeks the point where the derivative is 0, which is the minimum point of the function. Since the independent variables of the loss function are all the weights and biases, the derivative expressions of the loss function are:

$$\Delta w_{11}^{(1)} = \frac{\partial C}{\partial w_{11}^{(1)}},\quad \Delta w_{12}^{(1)} = \frac{\partial C}{\partial w_{12}^{(1)}},\quad \Delta b_{1}^{(1)} = \frac{\partial C}{\partial b_{1}^{(1)}},\quad \Delta b_{2}^{(1)} = \frac{\partial C}{\partial b_{2}^{(1)}} \tag{6}$$
where $\Delta w_{11}^{(1)}$ is the small variation of the weight from the first input element to the first hidden element of the self-coding network, $\Delta w_{12}^{(1)}$ that of the weight from the first input element to the second hidden element, $\Delta b_{1}^{(1)}$ that of the bias term of the first hidden element, and $\Delta b_{2}^{(1)}$ that of the bias term of the second hidden element. Taking these variations along the gradient direction, the gradient is expressed as:

$$\nabla C = \Big(\frac{\partial C}{\partial w},\ \frac{\partial C}{\partial b}\Big) \tag{7}$$
Each weight is updated according to equation (7), where $\eta$ is the learning rate and $\partial C / \partial w_{11}^{(1)}$ the variation with which the weight is updated; the update formula is:

$$w_{11}^{(1)\prime} = w_{11}^{(1)} - \eta \frac{\partial C}{\partial w_{11}^{(1)}} \tag{8}$$
where $w_{11}^{(1)\prime}$ is the updated weight: the updated weight is obtained by subtracting the weight increment from the original weight. The update of every other weight is analogous to equation (8), and each weight is updated correspondingly;
and updating all weights by using a gradient descent method, so that the values of the weights conform to the mapping from the noisy sample to a clean sample, and after the parameters of the mapping relation are obtained, the model with the parameters obtained after the self-coding model is trained realizes the denoising function of the noisy sample.
The beneficial effect of the method is that, by adopting a DNN denoising self-coding algorithm, it avoids the poor system robustness caused by the independence assumptions on signal and noise made in traditional denoising algorithms. Traditional underwater acoustic denoising algorithms assume in advance the noise type of the noisy underwater acoustic signal and the coherence between the acoustic signal and the noise signal; since some of these assumptions do not hold in practical applications, their denoising range is narrow. The proposed method removes this limitation.
Drawings
FIG. 1 is a diagram of a training architecture for a single self-encoding network of the present invention.
Fig. 2 is a flow chart of an implementation of the present invention.
FIG. 3 is a time-domain comparison before and after denoising according to the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The invention provides a denoising algorithm based on a self-coding network. The method trains a self-coding model so that it forms a mapping relation between the features of a noisy signal and the features of a clean sound signal; using this mapping relation, the noise of a noisy signal can be effectively removed, thereby recovering the original clean sound signal.
(1) In the training stage of self-coding, assume the self-coding network has three elements in the input layer, two elements in the hidden layer, and three elements in the output layer. Define the variable $w_{ab}^{(c)}$ as a weight, where $a$ denotes the $a$-th element of the upper layer the weight connects, $b$ denotes the $b$-th element of the lower layer it connects, and $c$ takes 1 or 2: 1 denotes a weight from the input layer to the hidden layer, 2 a weight from the hidden layer to the output layer. Define the variable $b_p^{(q)}$ as a bias term, where $q$ takes 1 or 2 ($q=1$ denotes a bias of the hidden layer, $q=2$ a bias of the output layer) and $p$ denotes the $p$-th bias term in the layer. The input layer is denoted $x_i$, where $i$ denotes the $i$-th element of the input layer. For example, $w_{11}^{(1)}$ is the weight from the first input element to the first hidden element of the self-coding network, $w_{12}^{(1)}$ the weight from the first input element to the second hidden element, and $w_{11}^{(2)}$ the weight from the first hidden element to the first output element; $b_1^{(1)}$ is the bias of the first hidden element, $b_2^{(1)}$ the bias of the second hidden element, and $b_1^{(2)}$ the bias of the first output element. $h$ denotes the output of the hidden layer, with $h_1$ and $h_2$ the outputs of its first and second elements; $y$ is the output of the output layer, with $y_1$ its first output element. The forward-propagating self-coding network is therefore expressed by the relations:

$$h_j = f\Big(\sum_{i=1}^{3} w_{ij}^{(1)} x_i + b_j^{(1)}\Big),\quad j=1,2 \tag{1}$$

$$y_k = f\Big(\sum_{j=1}^{2} w_{jk}^{(2)} h_j + b_k^{(2)}\Big),\quad k=1,2,3 \tag{2}$$
Equation (1) is the encoding process of the self-coding network and equation (2) the decoding process, where $f$ is the activation function. The activation function is an important anthropomorphic element of the neural network: just as a human neuron receiving an arbitrary excitation transmits it to the brain if it exceeds a certain threshold and automatically ignores it otherwise, the activation function introduces a non-linear effect. Each layer of the self-coding network has one activation function; here the sigmoid function is selected, whose mathematical expression is:

$$f(z) = \frac{1}{1 + e^{-z}} \tag{3}$$
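As an illustration of equations (1)–(3), the forward pass of the 3-2-3 self-coding network can be sketched in numpy as follows (the random initial weights and the sample input vector are illustrative assumptions, not values from the patent):

```python
import numpy as np

def sigmoid(z):
    # equation (3): f(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Forward pass of the 3-2-3 self-coding network.
    W1 (3x2) holds the input-to-hidden weights w^(1)_ab,
    W2 (2x3) the hidden-to-output weights w^(2)_ab."""
    h = sigmoid(x @ W1 + b1)  # equation (1): encoding, hidden outputs h1, h2
    y = sigmoid(h @ W2 + b2)  # equation (2): decoding, outputs y1, y2, y3
    return h, y

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 2)), np.zeros(2)
W2, b2 = rng.standard_normal((2, 3)), np.zeros(3)
h, y = forward(np.array([0.1, 0.2, 0.3]), W1, b1, W2, b2)
print(h.shape, y.shape)  # (2,) (3,)
```

Because the sigmoid maps every pre-activation into (0, 1), both the hidden outputs and the reconstructed outputs stay in that range.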
(2) The back-propagation algorithm is the core algorithm for training the neural network: it optimizes the values of the network parameters according to a defined loss function so that the loss of the neural network model on the training data set reaches a minimum. The basic loss is expressed as $C = [f(\tilde{x}, w, b) - x]^2$, where $x$ is the input matrix of the self-coding network model, $w$ the weight matrix, $b$ the bias matrix, $f(\tilde{x}, w, b)$ the output value of the self-coding output layer, and $\tilde{x}$ the input after adding noise. Taking into account the influence of weight decay on the loss function and the effect of the sparsity penalty in the self-coding model, the loss function is expressed as:

$$C = \frac{1}{N}\sum_{i=1}^{N} (y_i - x_i)^2 + \frac{\lambda}{2}\big(\|w\|_F^2 + \|w'\|_F^2\big) + \beta \sum_{j} \mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) \tag{4}$$
where $N$ is the number of frames after the Fourier transform of the underwater acoustic signal, and $\lambda$, $\beta$, $\rho$ are model hyperparameters: $\lambda$ is the weight-decay parameter, controlling the relative importance of the weight-decay term in the formula, and $\beta$ is the sparsity-penalty parameter, controlling the weight of the sparsity penalty factor. $y_i$ is the output value of the self-coding network at the $i$-th neuron, $x_i$ the input value at the $i$-th neuron, $w$ the weight of the coding layer, $w'$ the weight of the decoding layer, and $\|\cdot\|_F$ the Frobenius norm taken over all elements of the weight matrices in the loss function. The sparsity-penalty term in equation (4) effectively suppresses overfitting of the self-coding network during training; its specific expression is:

$$\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j} \tag{5}$$
where $\rho$ is the sparsity parameter, representing the target average activity of the hidden neurons, and $\hat{\rho}_j$ is the activation degree of hidden neuron $j$ of the self-coding neural network; equation (5) is the relative entropy between two Bernoulli random variables with means $\rho$ and $\hat{\rho}_j$, where $\rho$ is an empirical value, taken as 100.
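The sparsity penalty of equation (5) can be sketched as a short numpy function (the clipping constant and the example average activations are illustrative assumptions added for numerical safety and demonstration):

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    """Equation (5): relative entropy between Bernoulli(rho) and
    Bernoulli(rho_hat), summed over the hidden neurons j."""
    rho_hat = np.clip(rho_hat, 1e-8, 1.0 - 1e-8)  # keep logs finite (assumed guard)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat))))

# average activations rho_hat_j of two hidden neurons (illustrative values)
penalty = kl_sparsity(0.2, np.array([0.15, 0.25]))
print(penalty > 0.0)  # True: nonzero whenever any rho_hat_j deviates from rho
```

The penalty is zero exactly when every hidden neuron's average activation matches the target sparsity, and grows as they drift apart — which is what drives the hidden representation toward sparsity during training.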
The gradient-descent algorithm is one of the important methods for minimizing a loss function. Its main idea is that the point where the derivative is 0 is a minimum point of the function. Since the independent variables of the loss function are all the weights and biases, the derivative expressions of the loss function are:

$$\Delta w_{11}^{(1)} = \frac{\partial C}{\partial w_{11}^{(1)}},\quad \Delta w_{12}^{(1)} = \frac{\partial C}{\partial w_{12}^{(1)}},\quad \Delta b_{1}^{(1)} = \frac{\partial C}{\partial b_{1}^{(1)}},\quad \Delta b_{2}^{(1)} = \frac{\partial C}{\partial b_{2}^{(1)}} \tag{6}$$
where $\Delta w_{11}^{(1)}$ is the small variation of the weight from the first input element to the first hidden element of the self-coding network, $\Delta w_{12}^{(1)}$ that of the weight from the first input element to the second hidden element, $\Delta b_{1}^{(1)}$ that of the bias term of the first hidden element, and $\Delta b_{2}^{(1)}$ that of the bias term of the second hidden element. The direction of these small variations is the direction of fastest change: the loss function changes fastest along the gradient direction, so moving along it finds the minimum of the loss function most quickly. Taking these variations as the gradient, the expression is:

$$\nabla C = \Big(\frac{\partial C}{\partial w},\ \frac{\partial C}{\partial b}\Big) \tag{7}$$
the update of each weight is performed in accordance with equation (7), where η is the learning rate,the weight is updated with the variance, and the update formula is:
where $w_{11}^{(1)\prime}$ is the updated weight: the updated weight is obtained by subtracting the weight increment from the original weight. Every weight is updated in the same manner, and with each update the mapping relation of the self-coding network slowly approaches the true mapping relation.
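The weight update of equation (8) can be sketched as follows; the quadratic toy loss is an illustrative assumption used only to show that the update rule converges to a minimum:

```python
def gd_step(w, grad, eta=0.02):
    # equation (8): updated weight = original weight - learning rate * gradient
    return w - eta * grad

# minimise the toy loss C(w) = (w - 3)^2, whose derivative is 2(w - 3)
w = 0.0
for _ in range(500):
    w = gd_step(w, 2.0 * (w - 3.0), eta=0.1)
print(round(w, 4))  # 3.0 — the iterates converge to the point where dC/dw = 0
```

In the actual network the same step is applied to every weight and bias, with the gradients supplied by back-propagation.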
All weights are updated by the gradient-descent method so that their values conform to the mapping from noisy samples to clean samples. Once the parameters of this mapping relation are obtained, the self-coding model trained with these parameters realizes the denoising of noisy samples.
The present invention is further described with reference to FIG. 2, the flow chart of the denoising self-coding network.
Step 1: First, the sample is framed and a Hanning window of length 400 is applied, so that each frame of the sample spans a fixed 400 sample points, which are assumed to form a stationary process. A short-time Fourier transform is then applied to each frame separately. Because spectral leakage is severe at the two edge sections of the Hanning window, adjacent frames overlap by fifty percent; this overlap also keeps the recovered signal smooth. The phase information of the noisy sample is retained, and the feature value is then extracted with the characteristic function:
$$Y(d) = \log|Y(d)|^2 \tag{9}$$
y (d) is a signal of a sample after short-time Fourier transform, d is a frequency dimension, after a characteristic value is extracted, information is sent to a constructed nominal DNN model for training, the DNN denoising network model comprises 5 layers, an input layer, an output layer and 4 hidden layers, 11 frames of noisy samples are taken to extract one frame of useful information, so that the input neuron node of the input layer is 4400 points, the point number of a first neuron is 2048, the point number of a second neuron is 2048, the point number of a third neuron is 1024, the point number of a neuron of the output layer is 400, the sparse parameter of each layer is 0.2, and the activation function of each layer is a sigmoid function, which is shown in a formula (3).
Step 2: A section of underwater acoustic signal 80 minutes long is intercepted; the first 40 minutes are selected as the training set and the last 40 minutes as the test set. The learning rate is 0.02 and the sparsity value is 0.5.
Step 3: Following the method of step 1 and the self-coding neural network model shown in FIG. 1, noise is added to the training set, which is then fed into the DNN model for training. To prevent gradient vanishing, 64 groups of data are selected as one batch and the loop is iterated 20 times; after each training pass, a group of test-set data is fed in for testing, and the training-set error and test-set error of each pass are recorded.
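The training schedule of step 3 (batches of 64, 20 iterations, recording the training- and test-set error after each pass) can be sketched as follows; the `step`/`evaluate` callables and the one-weight linear toy model are illustrative assumptions standing in for the DNN, not part of the patent:

```python
import numpy as np

def train(step, evaluate, train_x, train_y, test_x, test_y, batch=64, epochs=20):
    """Shuffle each epoch, iterate in batches, and record the loss / val_loss
    curves (as plotted in FIG. 3)."""
    history = {"loss": [], "val_loss": []}
    n = len(train_x)
    for _ in range(epochs):
        perm = np.random.permutation(n)
        losses = []
        for s in range(0, n, batch):
            idx = perm[s:s + batch]
            losses.append(step(train_x[idx], train_y[idx]))
        history["loss"].append(float(np.mean(losses)))
        history["val_loss"].append(float(evaluate(test_x, test_y)))
    return history

# toy stand-in for the DNN: fit y = 2x with one weight by gradient descent
w = np.array([0.0])

def step(xb, yb):
    pred = xb * w[0]
    w[0] -= 0.1 * np.mean(2.0 * (pred - yb) * xb)  # one gradient update
    return float(np.mean((pred - yb) ** 2))

def evaluate(xb, yb):
    return float(np.mean((xb * w[0] - yb) ** 2))

rng = np.random.default_rng(1)
x = rng.random(256)
y = 2.0 * x
hist = train(step, evaluate, x, y, x[:64], y[:64])
print(hist["val_loss"][-1] < hist["val_loss"][0])  # True: error falls over epochs
```

Plotting `hist["loss"]` against `hist["val_loss"]` reproduces the kind of training/test error curves described for FIG. 3.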
Step 4: Back-propagation is computed using equations (7) and (8); the parameters are updated by solving the gradient, finely adjusting the mapping relation of the whole network to obtain the denoising self-coding network model.
Step 5: After all updating steps are completed, the denoising self-coding model is obtained; all of its parameters serve as the parameters of the mapping from noisy samples to clean samples.
Step 6: The test set is tested with the trained denoising DNN model. After the DNN model estimates the power spectrum of the clean sample, the sample is reconstructed; the reconstruction stage is shown in equation (10):

$$\hat{x}(d) = e^{\frac{1}{2}\hat{X}(d)}\, e^{j\angle Y(d)} \tag{10}$$

where $\hat{X}(d)$ is the log power spectrum of the clean sample estimated by the DNN and $\angle Y(d)$ is the phase information extracted from the noisy sample.
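The reconstruction stage can be sketched as follows; the amplitude recovery $|X| = e^{Y/2}$ follows from the log-power feature of equation (9), while the exact form of equation (10) is an assumption reconstructed here rather than quoted from the patent:

```python
import numpy as np

def reconstruct_frame(log_power_est, phase, frame_len=400):
    """Combine the DNN-estimated log power spectrum with the phase taken
    from the noisy sample, then inverse-FFT back to a time-domain frame."""
    amplitude = np.exp(0.5 * log_power_est)  # |X| = exp(Y/2) since Y = log|X|^2
    spec = amplitude * np.exp(1j * phase)    # re-attach the noisy phase angle
    return np.fft.irfft(spec, n=frame_len)

# round trip on a synthetic frame: exact log power + exact phase recovers it
frame = np.hanning(400) * np.sin(2 * np.pi * np.arange(400) / 40.0)
spec = np.fft.rfft(frame)
rec = reconstruct_frame(np.log(np.abs(spec) ** 2 + 1e-300), np.angle(spec))
print(np.max(np.abs(rec - frame)) < 1e-6)  # True
```

Overlap-adding the reconstructed frames with the same 50% overlap used during framing then yields the smooth denoised time-domain signal.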
The result is shown in FIG. 3, where loss is the error between the clean sample and the predicted sample after each pass over the training set, and val_loss is the corresponding error on the test set.
Claims (1)
1. An underwater sound signal denoising method based on a self-coding neural network is characterized by comprising the following steps:
Step 1: In the training stage of self-coding, assume the self-coding network has three elements in the input layer, two elements in the hidden layer, and three elements in the output layer. Define the variable $w_{ab}^{(c)}$ as a weight, where $a$ denotes the $a$-th element of the upper layer the weight connects, $b$ denotes the $b$-th element of the lower layer it connects, and $c$ takes 1 or 2: 1 denotes a weight from the input layer to the hidden layer, 2 a weight from the hidden layer to the output layer. Define the variable $b_p^{(q)}$ as a bias term, where $q$ takes 1 or 2 ($q=1$ denotes a bias of the hidden layer, $q=2$ a bias of the output layer) and $p$ denotes the $p$-th bias term in the layer. The input layer is denoted $x_i$, where $i$ denotes the $i$-th element of the input layer. For example, $w_{11}^{(1)}$ is the weight from the first input element to the first hidden element of the self-coding network, $w_{12}^{(1)}$ the weight from the first input element to the second hidden element, and $w_{11}^{(2)}$ the weight from the first hidden element to the first output element; $b_1^{(1)}$ is the bias of the first hidden element, $b_2^{(1)}$ the bias of the second hidden element, and $b_1^{(2)}$ the bias of the first output element. $h$ denotes the output of the hidden layer, with $h_1$ and $h_2$ the outputs of its first and second elements; $y$ is the output of the output layer, with $y_1$ its first output element. The forward-propagating self-coding network is therefore expressed by the relations:

$$h_j = f\Big(\sum_{i=1}^{3} w_{ij}^{(1)} x_i + b_j^{(1)}\Big),\quad j=1,2 \tag{1}$$

$$y_k = f\Big(\sum_{j=1}^{2} w_{jk}^{(2)} h_j + b_k^{(2)}\Big),\quad k=1,2,3 \tag{2}$$
Equation (1) is the encoding process of the self-coding network and equation (2) the decoding process, where $f$ is the activation function. Each layer of the self-coding network has one activation function; here the sigmoid function is selected, whose mathematical expression is:

$$f(z) = \frac{1}{1 + e^{-z}} \tag{3}$$
Step 2: The back-propagation algorithm is the core algorithm for training the neural network: it optimizes the values of the network parameters according to a defined loss function so that the loss of the neural network model on the training data set reaches a minimum. The basic loss is expressed as $C = [f(\tilde{x}, w, b) - x]^2$, where $x$ is the input matrix of the self-coding network model, $w$ the weight matrix, $b$ the bias matrix, $f(\tilde{x}, w, b)$ the output value of the self-coding output layer, and $\tilde{x}$ the input after adding noise. The loss function is expressed as:

$$C = \frac{1}{N}\sum_{i=1}^{N} (y_i - x_i)^2 + \frac{\lambda}{2}\big(\|w\|_F^2 + \|w'\|_F^2\big) + \beta \sum_{j} \mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) \tag{4}$$
where $N$ is the number of frames after the Fourier transform of the underwater acoustic signal, and $\lambda$, $\beta$, $\rho$ are model hyperparameters: $\lambda$ is the weight-decay parameter, controlling the relative importance of the weight-decay term in the formula, and $\beta$ is the sparsity-penalty parameter, controlling the weight of the sparsity penalty factor. $y_i$ is the output value of the self-coding network at the $i$-th neuron, $x_i$ the input value at the $i$-th neuron, $w$ the weight of the coding layer, $w'$ the weight of the decoding layer, and $\|\cdot\|_F$ the Frobenius norm taken over all elements of the weight matrices in the loss function. The sparsity-penalty term in equation (4) has the specific expression:

$$\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j} \tag{5}$$
where $\rho$ is the sparsity parameter, representing the target average activity of the hidden neurons, and $\hat{\rho}_j$ is the activation degree of hidden neuron $j$ of the self-coding neural network; equation (5) is the relative entropy between two Bernoulli random variables with means $\rho$ and $\hat{\rho}_j$, where $\rho$ is an empirical value, taken as 100;
The gradient-descent algorithm seeks the point where the derivative is 0, which is the minimum point of the function. Since the independent variables of the loss function are all the weights and biases, the derivative expressions of the loss function are:

$$\Delta w_{11}^{(1)} = \frac{\partial C}{\partial w_{11}^{(1)}},\quad \Delta w_{12}^{(1)} = \frac{\partial C}{\partial w_{12}^{(1)}},\quad \Delta b_{1}^{(1)} = \frac{\partial C}{\partial b_{1}^{(1)}},\quad \Delta b_{2}^{(1)} = \frac{\partial C}{\partial b_{2}^{(1)}} \tag{6}$$
where $\Delta w_{11}^{(1)}$ is the small variation of the weight from the first input element to the first hidden element of the self-coding network, $\Delta w_{12}^{(1)}$ that of the weight from the first input element to the second hidden element, $\Delta b_{1}^{(1)}$ that of the bias term of the first hidden element, and $\Delta b_{2}^{(1)}$ that of the bias term of the second hidden element. Taking these variations along the gradient direction, the gradient is expressed as:

$$\nabla C = \Big(\frac{\partial C}{\partial w},\ \frac{\partial C}{\partial b}\Big) \tag{7}$$
Each weight is updated according to equation (7), where $\eta$ is the learning rate and $\partial C / \partial w_{11}^{(1)}$ the variation with which the weight is updated; the update formula is:

$$w_{11}^{(1)\prime} = w_{11}^{(1)} - \eta \frac{\partial C}{\partial w_{11}^{(1)}} \tag{8}$$
where $w_{11}^{(1)\prime}$ is the updated weight: the updated weight is obtained by subtracting the weight increment from the original weight. The update of every other weight is analogous to equation (8), and each weight is updated correspondingly;
All weights are updated by the gradient-descent method so that their values conform to the mapping from noisy samples to clean samples. Once the parameters of this mapping relation are obtained, the self-coding model trained with these parameters realizes the denoising of noisy samples.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010180738.6A CN111401236A (en) | 2020-03-16 | 2020-03-16 | Underwater sound signal denoising method based on self-coding neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010180738.6A CN111401236A (en) | 2020-03-16 | 2020-03-16 | Underwater sound signal denoising method based on self-coding neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111401236A (en) | 2020-07-10 |
Family
ID=71428812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010180738.6A Pending CN111401236A (en) | 2020-03-16 | 2020-03-16 | Underwater sound signal denoising method based on self-coding neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111401236A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107123431A (en) * | 2017-05-02 | 2017-09-01 | 西北工业大学 | A kind of underwater sound signal noise-reduction method |
CN109446902A (en) * | 2018-09-22 | 2019-03-08 | 天津大学 | A kind of marine environment based on unmanned platform and the comprehensive cognitive method of target |
CN109919864A (en) * | 2019-02-20 | 2019-06-21 | 重庆邮电大学 | A kind of compression of images cognitive method based on sparse denoising autoencoder network |
CN110490816A (en) * | 2019-07-15 | 2019-11-22 | 哈尔滨工程大学 | A kind of underwater Heterogeneous Information data noise reduction |
CN110751044A (en) * | 2019-09-19 | 2020-02-04 | 杭州电子科技大学 | Urban noise identification method based on deep network migration characteristics and augmented self-coding |
Non-Patent Citations (4)
Title |
---|
ZHENXING LIU 等: "Research on Underwater Acoustic Channel Denoising Algorithm based on Auto-Encoder", 《2019 IEEE 3RD ADVANCED INFORMATION MANAGEMENT, COMMUNICATES, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (IMCEC)》 * |
姜楠 等: "基于稀疏自动编码网络的水声通信信号调制识别", 《信号处理》 * |
杨宏晖 等: "被动水下目标识别研究进展综述", 《无人系统技术》 * |
殷敬伟 等: "基于降噪自编码器的水声信号增强研究", 《通信学报》 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112215054A (en) * | 2020-07-27 | 2021-01-12 | 西北工业大学 | Depth generation countermeasure method for underwater acoustic signal denoising |
CN112215054B (en) * | 2020-07-27 | 2022-06-28 | 西北工业大学 | Depth generation countermeasure method for denoising underwater sound signal |
CN113094993A (en) * | 2021-04-12 | 2021-07-09 | 电子科技大学 | Modulation signal denoising method based on self-coding neural network |
CN113094993B (en) * | 2021-04-12 | 2022-03-29 | 电子科技大学 | Modulation signal denoising method based on self-coding neural network |
CN113205050A (en) * | 2021-05-09 | 2021-08-03 | 西北工业大学 | Ship radiation noise line spectrum extraction method based on GRU-AE network |
CN113780450A (en) * | 2021-09-16 | 2021-12-10 | 郑州云智信安安全技术有限公司 | Distributed storage method and system based on self-coding neural network |
CN117974736A (en) * | 2024-04-02 | 2024-05-03 | 西北工业大学 | Underwater sensor output signal noise reduction method and system based on machine learning |
CN117974736B (en) * | 2024-04-02 | 2024-06-07 | 西北工业大学 | Underwater sensor output signal noise reduction method and system based on machine learning |
CN118379982A (en) * | 2024-06-27 | 2024-07-23 | 武汉普惠海洋光电技术有限公司 | High-frequency array environment noise reduction method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111401236A (en) | Underwater sound signal denoising method based on self-coding neural network | |
CN108682418B (en) | Speech recognition method based on pre-training and bidirectional LSTM | |
CN110739002B (en) | Complex domain speech enhancement method, system and medium based on generation countermeasure network | |
US10672414B2 (en) | Systems, methods, and computer-readable media for improved real-time audio processing | |
CN109841226B (en) | Single-channel real-time noise reduction method based on convolution recurrent neural network | |
CN111564160B (en) | Voice noise reduction method based on AEWGAN | |
CN112735456B (en) | Speech enhancement method based on DNN-CLSTM network | |
CN112468326B (en) | Access flow prediction method based on time convolution neural network | |
CN107845389A (en) | A kind of sound enhancement method based on multiresolution sense of hearing cepstrum coefficient and depth convolutional neural networks | |
Venkateswarlu et al. | Speech intelligibility quality in telugu speech patterns using a wavelet-based hybrid threshold transform method | |
CN110550518A (en) | Elevator operation abnormity detection method based on sparse denoising self-coding | |
CN111860273A (en) | Magnetic resonance underground water detection noise suppression method based on convolutional neural network | |
KR20210043833A (en) | Apparatus and Method for Classifying Animal Species Noise Robust | |
JP2024519657A (en) | Diffusion models with improved accuracy and reduced computational resource consumption | |
CN112086100B (en) | Quantization error entropy based urban noise identification method of multilayer random neural network | |
CN114333773A (en) | Industrial scene abnormal sound detection and identification method based on self-encoder | |
CN116561515A (en) | Power frequency noise suppression method based on cyclic neural network magnetic resonance signals | |
CN112530449B (en) | Speech enhancement method based on bionic wavelet transform | |
CN116778945A (en) | Acoustic noise reduction method and device based on improved INMF | |
Cao et al. | Sparse representation of classified patches for CS-MRI reconstruction | |
CN117033986A (en) | Impact fault feature interpretable extraction method based on algorithm guide network | |
CN116705049A (en) | Underwater acoustic signal enhancement method and device, electronic equipment and storage medium | |
CN115017964A (en) | Magnetotelluric signal denoising method and system based on attention mechanism sparse representation | |
CN114141266A (en) | Speech enhancement method for estimating prior signal-to-noise ratio based on PESQ driven reinforcement learning | |
CN114189876B (en) | Flow prediction method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20200710 |