CN109993280A - An underwater sound source localization method based on deep learning - Google Patents

An underwater sound source localization method based on deep learning

Info

Publication number
CN109993280A
CN109993280A (application CN201910236715.XA; granted as CN109993280B)
Authority
CN
China
Prior art keywords
sound source
data
vector
source
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910236715.XA
Other languages
Chinese (zh)
Other versions
CN109993280B (en)
Inventor
吴志翔
姜龙玉
金睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910236715.XA priority Critical patent/CN109993280B/en
Publication of CN109993280A publication Critical patent/CN109993280A/en
Application granted granted Critical
Publication of CN109993280B publication Critical patent/CN109993280B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention discloses an underwater sound source localization method based on deep learning, comprising: normalizing the vector data simulated with the KRAKEN program and superimposing a zero-mean Gaussian random complex noise vector n to obtain the simulated sound field data p(f) at frequency f; constructing a normalized covariance matrix H from the simulated sound field data p(f), performing a Hermitian decomposition of H, and converting the complex matrix H into a real matrix that a convolutional neural network can process, thereby obtaining the network input data; and training a convolutional neural network with the input data to obtain an underwater sound source location prediction model, which then predicts the range and depth of the signal source from observed sound field data. For underwater source localization with single and multiple sources, the invention uses a LeNet-5 convolutional neural network and a 56-layer deep residual network, achieving an underwater sound source localization algorithm with high precision and accuracy while improving the real-time performance of underwater sound source localization.

Description

An underwater sound source localization method based on deep learning
Technical field
The present invention relates to an underwater sound source localization method, and more particularly to an underwater sound source localization method based on deep learning, belonging to the field of signal processing technology.
Background technique
Underwater sound source localization refers to the technique of determining the direction and range of an underwater sound source by means of underwater acoustic waves and electronic technology. Localization is performed by applying a certain data processing procedure to the underwater acoustic signals received by a receiving array formed from several hydrophones.
At present, matched field processing (MFP) is the classical technique for passive detection of underwater targets. It combines signal processing with underwater acoustic physics and exploits channel characteristics, narrow-band signals and related techniques to process the source data received by a hydrophone array; in terms of efficiency and detection performance it greatly outperforms conventional signal processing techniques. Hinich first performed matched field source localization with a vertical array and proposed a method for estimating source depth [Hinich M J. Maximum-likelihood signal processing for a vertical array [J]. J. Acoust. Soc. Am., 1973, 54(2): 499-503]; Bucker successfully applied matched field processing to the estimation of source range and depth [Bucker H P. Use of calculated sound fields and matched-field detection to locate sound sources in shallow water [J]. J. Acoust. Soc. Am., 1976, 59(2): 368-373]; Hamson successfully localized long-range sources of different frequencies in shallow water [Hamson R M, Heitmeyer R M. Environmental and system effects on source localization in shallow water by the matched-field processing of a vertical array [J]. J. Acoust. Soc. Am., 1989, 86(5): 1950-1959]. To date, matched field processing has achieved original results in related fields. However, because the predicted sound field in matched field processing is obtained by solving a propagation model under various environmental parameter conditions, its performance depends to a large extent on the accuracy of those environmental parameters (i.e., on whether the environment is modeled accurately). How to reduce the environment mismatch problem has therefore become a focus of researchers in this area.
To reduce environment mismatch, one approach is to use certain means to reduce the loss caused by model error; another is to perform underwater sound source localization with methods that require no modeling at all. Deep learning is a class of machine learning methods that performs representation learning on data. It is composed of multiple processing layers with complex structure, and through this highly complex structure it can process data hierarchically and abstract it step by step. Deep learning was created to perform higher-level abstraction on large-scale data, so applying deep learning to underwater sound source localization makes it possible to design algorithms with higher precision and accuracy than existing methods.
In the past, researchers have applied machine learning algorithms to source localization. In particular, Steinberg et al. applied a single-layer feedforward neural network (FNN) with an inversion learning algorithm to determine source positions in a homogeneous medium [Steinberg B Z, Beran M J, Chin S H, et al. A neural network approach to source localization [J]. J. Acoust. Soc. Am., 1991, 90(4): 2081-2090]; this method allows continuous estimation of the relevant parameter (i.e., depth). More recently, Niu et al. used three methods, a three-layer feedforward neural network, a support vector machine and a random forest, to perform range localization experiments on a single source in an ocean environment [Niu H, Reeves E, Gerstoft P. Source localization in an ocean waveguide using supervised machine learning [J]. J. Acoust. Soc. Am., 2017, 142(3): 1176-1188]. For the feedforward neural network, TensorFlow and the adaptive moment estimation optimizer it provides were used, and parameters such as the signal-to-noise ratio (SNR) and the number of hidden layers were adjusted to compare the final performance of the algorithm. For the support vector machine, a multi-class SVM with K classes was created by training K(K-1)/2 models; the class most frequently assigned to a point is taken as its class, and so on until all points have been assigned to one of the classes 1 through K. For the random forest, statistical sampling was used to make the algorithm more robust: in each draw the input data were randomly selected from the full training set, a new decision tree was fitted to each data subset, and each point was assigned to its most common class across all draws. The defects of these three methods are, however, also obvious: the feedforward network is a relatively simple neural network structure, and a more complex network structure is needed to improve the prediction; the support vector machine occupies a large amount of system memory and training time when the sample size is large, adding more data no longer improves the results once a certain level is reached, and it cannot readily perform multi-target detection; and random forests have been shown on many data sets to degrade markedly when a classification or regression problem contains large noise or when a variable has to be divided into many classes. Moreover, that work only predicted the range of a single source, which cannot meet the requirements of practical applications. It is therefore necessary to use a more complex and efficient neural network structure and to design an underwater sound source localization method that simultaneously predicts source range and depth with higher precision and accuracy in both single-source and multi-source scenarios.
Summary of the invention
Purpose of the invention: to overcome the shortcomings of the prior art, an underwater sound source localization method based on deep learning is provided which, while guaranteeing high underwater sound source localization accuracy, reduces hardware resource consumption and improves the real-time performance of underwater sound source detection.
Technical solution: to achieve the above object, the invention adopts the following technical solution:
An underwater sound source localization method based on deep learning, comprising the following steps:
(1) Normalize the vector data simulated with the KRAKEN program and superimpose a zero-mean Gaussian random complex noise vector n to obtain the simulated sound field data p(f) at frequency f;
(2) Construct a normalized covariance matrix H from the simulated sound field data p(f), perform a Hermitian decomposition of the matrix H, and convert the complex matrix H into a real matrix that a convolutional neural network can process, thereby obtaining the input data of the convolutional neural network;
(3) Train a convolutional neural network with the input data to obtain an underwater sound source location prediction model, and then predict the range and depth of the signal source from the observed sound field data.
Further, in step (1) the data produced by the KRAKEN program are noise-free raw data, so noise must be added manually. The data vector s received by the hydrophone array is first normalized: |s| = 1. A zero-mean Gaussian random complex noise vector n is then generated with the Box-Muller algorithm; its probability density function is as follows:
Here, γ is the nominal signal-to-noise ratio and N is the number of receivers in the hydrophone array.
The complex vector n is obtained by the following formula:
Here Xi, Yi are uniformly distributed on (0, 1]. The finally obtained vector is d = s + n; d is the simulated sound field data p(f) obtained in this step.
Further, in step (2), in order to make the processing independent of the complex source spectrum, the received simulated sound field data are converted into a normalized sample covariance matrix. A discrete Fourier transform is applied to the input data of the L sensors, and the sound field data at frequency f are expressed as p(f) = [p1(f), ..., pL(f)]T. The sound field is modeled as:
p(f) = S(f) g(f, r) + ε;
where ε is noise, S(f) is the source term and g is the Green's function. To reduce the influence of the sound field amplitude |S(f)|, the normalized covariance matrix of the complex sound field is formed, giving the conjugate-symmetric matrix:
where H denotes the conjugate transpose, so that C(f) = C^H(f), and Ns is the number of snapshots used.
Writing C(f) = A + iB, in order to convert the complex matrix into a real matrix while retaining the data information, the input is set to the corresponding real block matrix; the input of the convolutional neural network is then the matrix H.
Further, in step (3) the source positions are first mapped to classes; two methods are specifically included:
(a) Depth and range are each divided evenly, trained separately, and merged after training. Specifically:
Depth and range are divided into K1 and K2 parts respectively, with widths Δd and Δr. Each input vector xn, n = 1, ..., N, is labeled by gn and tn, with k1 = 1, ..., K1 and k2 = 1, ..., K2; the labels represent the true source position classes, i.e., the output of the underwater sound source location prediction model.
For the convolutional neural network (CNN), the position classes gn, tn are mapped into 1*K1 and 1*K2 vectors gn, tn, whose entries represent the expected output probabilities of the convolutional neural network, i.e., the probability that for input xn the source is at position dk, rk. These target vectors are used to train the CNN, and the prediction output is the region that attains the maximum value of the Softmax distribution over the prediction range.
(b) Depth and range are divided evenly and merged, each region is represented by the centroid of its cell, and training is performed directly. Specifically:
The plane formed by depth and range is divided into K parts a1, ..., aK, each cell having area Δa. Each input vector xn, n = 1, ..., N, is labeled by tn, tn ∈ ak, k = 1, ..., K; the label represents the true source position class, i.e., the output of the underwater sound source location prediction model.
For the CNN, the position class tn is mapped into a 1*K vector tn, where Sk denotes the rectangular region of area Δa centered on tk and tn = (tn,1, ..., tn,K) represents the expected output probabilities of the neural network, i.e., the probability that for input xn the source is at position ak. These target vectors are used to train the CNN, and the prediction output is the region that attains the maximum value of the Softmax distribution over the prediction range.
Further, a LeNet-5 convolutional neural network structure combined with the two source position mapping modes is used to train the underwater sound source location model for the single-source case. The training process is as follows:
1) Initialize the convolutional neural network weights;
2) Take the data set processed by data preprocessing and source position mapping as input data; the source position class labels are one-hot coded vectors. The input data pass through the forward propagation process composed of convolutional layers, pooling layers and fully connected layers to obtain the output value;
3) Compute the loss function between the output source position class and the target class;
4) Perform backpropagation, compute the error term of each layer in turn, and update the corresponding convolutional neural network weights with the gradients of the connection weights; if the termination condition is not met, repeat steps 2)-4), otherwise training ends.
Depending on the training effect, the single-source prediction in some scenarios is further optimized with a 56-layer deep residual network, which further deepens the network, resolves the vanishing and exploding gradient problems, and further improves model precision and training effect.
Further, a 56-layer deep residual network model together with source position mapping mode (b) is used to train the underwater sound source location model for the multi-source case. The training process is as follows:
1) Initialize the deep residual network weights;
2) Take the data set processed by data preprocessing and source position mapping as input data; the source position class label is a vector in which the entry of a class containing a source is set to 1 and all other entries are set to 0. The input data pass through the forward propagation process composed of convolutional layers, pooling layers and fully connected layers to obtain the output value;
3) The output source position classes are obtained by comparing the probability of a source being present in each class of the output vector with a threshold; the loss function between the output source position classes and the target classes is then computed;
4) Perform backpropagation, compute the error term of each layer in turn, and update the corresponding deep residual network weights with the gradients of the connection weights; if the termination condition is not met, repeat steps 2)-4), otherwise training ends.
Beneficial effects: compared with traditional matched field processing, the present invention requires no propagation model; the data are processed hierarchically and abstracted step by step, which reduces the influence of environment mismatch. Compared with related studies, the present invention combines deep learning, a method with promising prospects, with underwater sound source localization, predicts both source range and depth, and performs underwater sound source localization in both the single-source and multi-source cases with higher precision and accuracy. More importantly, the present invention needs only a small amount of computation time and memory to train the corresponding prediction model, which can then be used for prediction in real environments.
Brief description of the drawings
Fig. 1 is a flow diagram of the method of the present invention;
Fig. 2 shows the environmental parameters used by the KRAKEN program to generate the training data in the method of the present invention;
Fig. 3 shows the LeNet-5 convolutional neural network structure used in the single-source case in the method of the present invention.
Specific embodiments
The technical solution of the present invention is described in detail below with reference to the drawings and specific embodiments.
The inventive idea of the present invention is to address the environment mismatch problem present in existing matched field processing for marine acoustic source localization by using convolutional neural networks from deep learning to process the data hierarchically and abstract them step by step, so that high-precision and high-accuracy predictions of source range and depth can be made in single-source and multi-source environments.
The underwater sound source localization method based on deep learning of the present invention applies convolutional neural networks to underwater sound source localization, extracts and processes features from the data received by the hydrophone array, and fully considers and analyses the conditions that may affect the prediction accuracy and error of the convolutional neural network, so as to localize underwater sound sources more accurately. The flow of the method is shown in Fig. 1 and comprises the following steps:
Step 1: Normalize the vector data simulated with the KRAKEN program and superimpose a zero-mean Gaussian random complex noise vector n to obtain the simulated sound field data p(f) at frequency f.
The data used in the experiments were simulated with the KRAKEN program using the environmental parameters of Fig. 2: the source frequency is 250 Hz, in a 100 m waveguide (sound speed 1500 m/s) with a bottom sediment layer (sound speed 1590 m/s, density 1.2 g/cm3, attenuation coefficient 0.5 dB/λ). The vertical hydrophone array consists of 20 receivers spanning 0-100 m depth with a sensor spacing of 5 m. In the single-source case, Δd = 2 m and Δr = 10 m, i.e., the prediction resolution is 2 m in depth and 10 m in range; in the multi-source case, Δd = 1 m and Δr = 8 m, i.e., the prediction resolution is 1 m in depth and 8 m in range.
Since the data produced by the KRAKEN program are noise-free raw data, noise has to be added manually. The data vector s received by the hydrophone array is first normalized: |s| = 1. A zero-mean Gaussian random complex noise vector n is then generated with the Box-Muller algorithm; its probability density function is as follows:
Here, γ is the nominal signal-to-noise ratio and N is the number of receivers in the hydrophone array. Because the amplitudes of the signal vector differ across the array, the actual signal-to-noise ratio at an individual receiver may be higher or lower, but since the signal power used when computing the signal-to-noise ratio is summed over all receivers, the average signal-to-noise ratio is still γ.
The complex vector n is obtained by the following formula:
Here Xi, Yi are uniformly distributed on (0, 1]. The finally obtained vector is d = s + n; d is the simulated sound field data p(f) obtained in this step.
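As an illustration only (not part of the original disclosure), the following Python sketch shows one way the described noise superposition could be implemented; the scaling of the noise variance from the nominal SNR γ is an assumption, since the exact formula appears only in the original figures.

```python
import numpy as np

def add_gaussian_noise(s, snr_db, rng=np.random.default_rng()):
    """Superimpose zero-mean complex Gaussian noise on a normalized field vector s.

    s      : complex array of length N (one entry per hydrophone), assumed normalized (|s| = 1)
    snr_db : nominal signal-to-noise ratio gamma in dB
    The noise variance below is an assumed reading of the patent's figure:
    total noise power = |s|^2 / 10^(gamma/10), spread over the N receivers.
    """
    N = len(s)
    sigma2 = np.linalg.norm(s) ** 2 / (N * 10.0 ** (snr_db / 10.0))
    # Box-Muller: X, Y uniform on (0, 1]; sqrt(-2 ln X) cos(2 pi Y) is standard normal
    X = 1.0 - rng.random(N)          # shift so the samples lie in (0, 1]
    Y = 1.0 - rng.random(N)
    r = np.sqrt(-2.0 * np.log(X))
    n = np.sqrt(sigma2 / 2.0) * (r * np.cos(2.0 * np.pi * Y) + 1j * r * np.sin(2.0 * np.pi * Y))
    return s + n                     # d = s + n, the simulated field data p(f)

# Example: a 20-element normalized field vector at 0 dB nominal SNR
s = np.exp(1j * np.linspace(0.0, np.pi, 20))
s = s / np.linalg.norm(s)
d = add_gaussian_noise(s, snr_db=0.0)
```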
Step 2: Construct the normalized covariance matrix H from the simulated sound field data p(f), perform a Hermitian decomposition of the matrix H, and convert the complex matrix H into a real matrix that the convolutional neural network can process, obtaining the input data of the convolutional neural network.
In order to make the processing independent of the complex source spectrum, the received simulated sound field data are converted into a normalized sample covariance matrix. A discrete Fourier transform is applied to the input data of the L sensors, and the sound field data at frequency f are expressed as p(f) = [p1(f), ..., pL(f)]T. The sound field is modeled as:
p(f) = S(f) g(f, r) + ε (3);
where ε is noise, S(f) is the source term and g is the Green's function. To reduce the influence of the sound field amplitude |S(f)|, the normalized covariance matrix of the complex sound field is formed, giving the conjugate-symmetric matrix:
where H denotes the conjugate transpose, so that C(f) = C^H(f), and Ns is the number of snapshots used; in the present invention the number of snapshots is set to 10 by default.
Writing C(f) = A + iB, in order to convert the complex matrix into a real matrix while retaining the data information, the input is set to the corresponding real block matrix; the input of the neural network is then a 40*40 matrix.
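A minimal sketch of this preprocessing step follows, assuming the normalized sample covariance matrix C(f) = (1/Ns) Σ p~_s p~_s^H with p~_s = p_s/|p_s|, and the real block layout [[A, -B], [B, A]] for C = A + iB; the exact formulas appear only in the original figures, so both are assumptions.

```python
import numpy as np

def covariance_input(snapshots):
    """Build the real-valued CNN input from complex field snapshots.

    snapshots : complex array of shape (Ns, L), one row per snapshot p_s(f).
    Returns a (2L, 2L) real matrix; 40*40 for L = 20 hydrophones.
    Assumed form: C(f) = (1/Ns) sum_s p~_s p~_s^H with p~_s = p_s / |p_s|,
    split as C = A + iB and embedded in the real block matrix [[A, -B], [B, A]].
    """
    Ns, L = snapshots.shape
    C = np.zeros((L, L), dtype=complex)
    for p in snapshots:
        p_tilde = p / np.linalg.norm(p)          # remove the |S(f)| amplitude influence
        C += np.outer(p_tilde, p_tilde.conj())   # rank-one term p~ p~^H
    C /= Ns                                      # conjugate-symmetric: C = C^H
    A, B = C.real, C.imag
    return np.block([[A, -B], [B, A]])           # real matrix the CNN can process

# Example with Ns = 10 snapshots from a 20-element array
rng = np.random.default_rng(0)
snaps = rng.standard_normal((10, 20)) + 1j * rng.standard_normal((10, 20))
H = covariance_input(snaps)     # H.shape == (40, 40)
```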
Step 3: Train the convolutional neural network with the input data to obtain the underwater sound source location prediction model, which can then predict the range and depth of the signal source from the observed sound field data.
Convolutional neural networks are a well-known deep learning structure. They imitate the way the human brain classifies signals and thereby perform representation learning on data. Convolutional neural networks are mainly used for supervised learning: the features of the input sample vectors and the corresponding label vectors are extracted and sampled over many iterations, the predicted vectors obtained by computation are compared with the true vectors, and the weights of the convolutional layers, pooling layers, fully connected layers, etc. are adjusted by backpropagation so as to obtain the best prediction.
A convolutional neural network mainly consists of convolutional layers, pooling layers and fully connected layers, and is trained by computing the output with forward propagation and adjusting the weights and biases with backpropagation. The general form is: input - [[convolutional layer - activation function] * N - pooling layer] * M - [fully connected layer - activation function] * K - fully connected layer - output.
In this classification problem, the set of source positions is divided into classes.
(1) Depth and range are each divided evenly, trained separately, and merged after training.
Depth and range are divided into K1 and K2 parts respectively, with widths Δd and Δr. Each input vector xn, n = 1, ..., N, is labeled by gn and tn, with k1 = 1, ..., K1 and k2 = 1, ..., K2; the labels represent the true source position classes, i.e., the output of the model.
For the CNN, the position classes gn, tn are mapped into 1*K1 and 1*K2 vectors gn, tn, whose entries represent the expected output probabilities of the network, i.e., the probability that for input xn the source is at position dk, rk (respectively). These target vectors are used to train the CNN, and the prediction output is the region that attains the maximum value of the Softmax distribution over the prediction range.
(2) Depth and range are divided evenly and merged, each region is represented by the centroid of its cell, and training is performed directly.
The plane formed by depth and range is divided into K parts a1, ..., aK, each cell having area Δa. Each input vector xn, n = 1, ..., N, is labeled by tn, tn ∈ ak, k = 1, ..., K; the label represents the true source position class, i.e., the output of the model.
For the CNN, the position class tn is mapped into a 1*K vector tn, where Sk denotes the rectangular region of area Δa centered on tk and tn = (tn,1, ..., tn,K) represents the expected output probabilities of the network, i.e., the probability that for input xn the source is at position ak. These target vectors are used to train the CNN, and the prediction output is the region that attains the maximum value of the Softmax distribution over the prediction range.
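As a sketch only, and assuming one-hot targets over the depth/range grid described above (the grid extents and the cell assignment rule are illustrative assumptions; the patent fixes only the cell widths), the mapping of a source position to class labels could look like this:

```python
import numpy as np

def one_hot_labels_separate(depth, rng_m, d_max=100.0, r_max=3000.0,
                            delta_d=2.0, delta_r=10.0):
    """Mapping mode (1): separate one-hot vectors for depth and range.

    d_max and r_max are illustrative assumptions; the patent only fixes the
    cell widths (2 m in depth, 10 m in range for the single-source case).
    """
    K1 = int(d_max / delta_d)
    K2 = int(r_max / delta_r)
    g = np.zeros(K1)
    t = np.zeros(K2)
    g[min(int(depth / delta_d), K1 - 1)] = 1.0   # depth class k1
    t[min(int(rng_m / delta_r), K2 - 1)] = 1.0   # range class k2
    return g, t

def one_hot_label_joint(depth, rng_m, d_max=100.0, r_max=3000.0,
                        delta_d=1.0, delta_r=8.0):
    """Mapping mode (2): one class per depth-range cell; the cell centroid
    represents the region (multi-source cell widths: 1 m and 8 m)."""
    K1 = int(d_max / delta_d)
    K2 = int(r_max / delta_r)
    k1 = min(int(depth / delta_d), K1 - 1)
    k2 = min(int(rng_m / delta_r), K2 - 1)
    t = np.zeros(K1 * K2)
    t[k1 * K2 + k2] = 1.0                        # flattened cell index
    return t

# Example: a source at 35 m depth and 1200 m range
g, t = one_hot_labels_separate(35.0, 1200.0)
t_joint = one_hot_label_joint(35.0, 1200.0)
```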
In the single-source case, the accuracy is the number of correctly predicted samples divided by the total number of samples. In the multi-source case, let D be the multi-label data set with |D| samples (xi, Yi), i = 1, ..., |D|, let H be the multi-label classifier, and let Zi = H(xi) be the set of predicted labels; the accuracy is then computed from the agreement between Zi and Yi over all samples.
The mean absolute percentage error (MAPE) and the mean absolute deviation (MAD) are the most common error statistics. MAD is a good statistic when analysing the error of a single quantity, because MAD measures the size of the error at the estimated positions while the accuracy measures the frequency of correct estimates. MAPE, in contrast, is sensitive to scale and should not be used with small-volume data: because the true value appears in the denominator, MAPE is undefined when the true value is zero, and when the true value is non-zero but very small MAPE takes extreme values. This scale sensitivity makes MAPE nearly worthless as an error measure for low-volume data. In the present invention, the prediction accuracy and MAD are therefore used to quantify the performance of the location estimation method.
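Purely as an illustration, and assuming the standard example-based multi-label accuracy |Yi ∩ Zi| / |Yi ∪ Zi| averaged over the data set (the patent's exact formula is given only in its figures), these two statistics could be computed as follows:

```python
import numpy as np

def multilabel_accuracy(Y_true, Z_pred):
    """Assumed example-based accuracy for the multi-source case:
    mean over samples of |Y_i intersect Z_i| / |Y_i union Z_i|."""
    scores = []
    for y, z in zip(Y_true, Z_pred):
        y, z = set(y), set(z)
        scores.append(len(y & z) / len(y | z) if (y | z) else 1.0)
    return float(np.mean(scores))

def mean_absolute_deviation(y_true, y_pred):
    """MAD between predicted and true ranges (or depths)."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Example: two test samples, each with two sources (labels are grid-cell indices)
Y_true = [[3, 17], [5, 40]]
Z_pred = [[3, 18], [5, 40]]
print(multilabel_accuracy(Y_true, Z_pred))                     # 0.666...
print(mean_absolute_deviation([120.0, 80.0], [118.0, 86.0]))   # 4.0
```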
In the single-source case the initial learning rate is 0.001; max pooling, dropout (probability 0.5), the Adam optimizer and regularization are used, which effectively avoids overfitting. The LeNet-5 convolutional neural network of Fig. 3 is one of the earliest convolutional neural networks. It treats the feature distribution across the whole input rather than feeding each point separately into a large multi-layer neural network, so it can use a small number of parameters, focus on the spatial correlation of the data, and extract similar features at multiple positions.
In a network model, increasing the depth of the network improves its performance, but when the network becomes deep the gradients easily vanish during backpropagation. Srivastava et al. first gave a solution [Srivastava R K, Greff K, Schmidhuber J. Highway Networks [J]. Computer Science, 2015], the Highway Network, which changes the activation of each layer so that it not only applies a nonlinear transformation to the input but also passes the input through at a certain ratio, so that the input can travel as if on a highway and be transmitted directly to the next layer without transformation. He Kaiming et al. improved on this [He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition [J]. 2015: 770-778] and proposed shortcut connections: if the target to be learned is H(x), then with shortcut connections the layers only have to learn the residual with respect to the input x, and as long as this operation is used properly the gradient never vanishes; this is the deep residual network.
Here, a LeNet-5 convolutional neural network combined with the two source position mapping modes is used to train the underwater sound source location model for the single-source case. The LeNet-5 convolutional neural network consists of 2 convolutional layers, 2 pooling layers and 3 fully connected layers: the input data enter the first convolutional layer; the output of the first convolutional layer is connected to the input of the first pooling layer; the output of the first pooling layer is connected to the input of the second convolutional layer; the output of the second convolutional layer is connected to the input of the second pooling layer; the output of the second pooling layer is connected to the input of the first fully connected layer; the output of the first fully connected layer is connected to the input of the second fully connected layer; the output of the second fully connected layer is connected to the input of the third fully connected layer (the output layer); and the output layer outputs the result. The convolution kernel sizes are 7*7, 5*5 and 3*3 respectively, the ReLU function is used as the activation function, the pooling layers use max pooling, and the loss function is the softmax cross entropy. The training process is as follows (a code sketch of this architecture is given after the training steps):
1) Initialize the convolutional neural network weights;
2) Take the data set processed by data preprocessing and source position mapping as input data; the source position class labels are one-hot coded vectors. The input data pass through the forward propagation process composed of convolutional layers, pooling layers and fully connected layers to obtain the output value, where in forward propagation the input of every layer is the output of the previous layer.
2.1) Convolutional layer: multiple convolution operations are applied to the input data to extract its signal features in time and space; here the features of the position data are extracted from the time-space signal, and the output is the extracted position features.
The convolutional layer is computed as follows:
xc = fc(xc-1 * Wc + bc);
where xc-1 is the convolutional layer input, xc is the convolutional layer output, Wc, bc are the convolutional layer weights and bias, and fc is the activation function, for which the ReLU function is used in the present invention:
f(x) = max(0, x);
2.2) Pooling layer: used to reduce the amount of data after convolution, i.e., the mean or maximum of a small feature region is taken to represent that region; here the position features extracted by the preceding convolutional layer are compressed and the main position features are extracted.
The pooling layer is computed as follows:
xp = down(xp-1);
where xp-1 is the pooling layer input, xp is the pooling layer output, and down denotes the chosen pooling function; max pooling is used in the present invention.
2.3) Fully connected layer: the signal features learned by the convolutional layers are mapped onto the position classes to obtain the probability of each class, a larger probability indicating a position closer to the true source position; here the extracted position features are all connected.
The fully connected layer is computed as follows:
xf = ff(xf-1 Wf + bf);
where xf-1 is the fully connected layer input, xf is the fully connected layer output, Wf, bf are the fully connected layer weights and bias, and ff is the activation function. In the present invention the output layer uses the softmax function and the other fully connected layers use the ReLU function; the softmax function is:
p(yl = j | xl; θ) = exp(θj^T xl) / Σk exp(θk^T xl);
where xl is the input, yl is the predicted class label, θ is the model parameter, and p(yl = j | xl; θ) is the probability that the sample belongs to class j under the model parameter θ.
3) Compute the loss function between the output source position class and the target class.
The loss is the cross entropy between the network prediction y and the true result; the cross entropy measures the distance between the actual output and the desired output, and the smaller its value, the closer the two probability distributions.
4) Perform backpropagation, compute the error term of each layer in turn, and update the corresponding convolutional neural network weights with the gradients of the connection weights; if the set number of iterations or prediction accuracy has not been reached, repeat steps 2)-4), otherwise training ends.
The weights are updated by:
w' = w - α ∂E/∂w, b' = b - α ∂E/∂b;
where α is the learning rate, w, b are the weights and bias of the respective layer, w', b' are the updated weights and bias, and E is the loss.
4.1) Output layer
The loss E here is the cross entropy L above; taking partial derivatives gives the output-layer error term,
where the coefficient matrix has size n*m, xi, i = 1, ..., m, is the input vector and yj, j = 1, ..., n, is the output vector.
4.2) Fully connected layer
The error term δ of the fully connected layer in backpropagation is defined as the derivative of the loss E with respect to the total input x of a node. Since the bias enters the total input additively, the gradient of b equals this derivative, i.e., the gradient of b is identical to the derivative of the error E with respect to the total input x of the node.
The backpropagation formula in this process is therefore:
δl = ((wl+1)T δl+1) ⊙ f'(xl);
4.3) Pooling layer
In forward propagation, max pooling is used in this patent. In backpropagation the error of the region after max pooling is known, and the error of the preceding layer before max pooling must be derived from it; this process is called upsampling. The error of the preceding layer is derived from the error of the following layer as:
δl = Upsample(δl+1) ⊙ f'(xl)
4.4) Convolutional layer
The error term of the convolutional layer is obtained analogously, and from it the gradients of the convolutional layer weights W and bias b are computed.
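The following Keras sketch is illustrative only: the layer widths, optimizer settings other than those stated above, and the number of output classes are assumptions not fixed by the text. It shows a network of the kind described: two convolutional layers, two max-pooling layers, three fully connected layers, ReLU activations, dropout, the Adam optimizer with learning rate 0.001 and a softmax cross-entropy loss over the position classes.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lenet5_locator(num_classes, input_shape=(40, 40, 1)):
    """LeNet-5 style locator sketch: 2 conv + 2 max-pool + 3 fully connected layers.

    num_classes : number of source-position classes (e.g. K1 depth classes,
                  K2 range classes, or K1*K2 joint classes); an assumption here.
    """
    model = models.Sequential([
        layers.Input(shape=input_shape),                       # 40*40 real covariance input
        layers.Conv2D(16, (7, 7), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (5, 5), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                                   # dropout with probability 0.5
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),       # output layer: softmax over classes
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy",             # softmax cross-entropy
                  metrics=["accuracy"])
    return model

# Example training call on preprocessed inputs H of shape (N, 40, 40, 1)
# and one-hot labels Y of shape (N, K):
# model = build_lenet5_locator(num_classes=K)
# model.fit(H, Y, epochs=50, batch_size=64, validation_split=0.1)
```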
Depending on the training effect, the single-source prediction in some scenarios is further optimized with a 56-layer deep residual network. Compared with the LeNet-5 convolutional neural network, the training process of the deep residual network is similar, but a residual network structure is introduced which converts the learning target into the residual function F(x) = H(x) - x. Since the residual function is easier to fit, the network can be made deeper while the vanishing and exploding gradient problems are resolved, further improving model precision and training effect.
In the multi-source case the initial learning rate is 0.1, and max pooling, the momentum method and regularization are used. A 56-layer deep residual network model and the second source position mapping mode described above are used to train the underwater sound source location model for the multi-source case. The training process is as follows (a sketch of a residual block is given after the steps):
1) Initialize the deep residual network weights;
2) Take the data set processed by data preprocessing and source position mapping as input data; the source position class label is a vector in which the entry of a class containing a source is set to 1 and all other entries are set to 0. The input data pass through the forward propagation process composed of convolutional layers, pooling layers and fully connected layers to obtain the output value;
2.1) Convolutional layer: multiple convolution operations are applied to the input data to extract its signal features in time and space; here the features of the position data are extracted from the time-space signal, and the output is the extracted position features.
The convolutional layer is computed as follows:
xc = fc(xc-1 * Wc + bc);
where xc-1 is the convolutional layer input, xc is the convolutional layer output, Wc, bc are the convolutional layer weights and bias, and fc is the activation function, for which the ReLU function is used in the present invention:
f(x) = max(0, x)
2.2) Pooling layer: used to reduce the amount of data after convolution, i.e., the mean or maximum of a small feature region is taken to represent that region; here the position features extracted by the preceding convolutional layer are compressed and the main position features are extracted.
The pooling layer is computed as follows:
xp = down(xp-1);
where xp-1 is the pooling layer input, xp is the pooling layer output, and down denotes the chosen pooling function.
2.3) Fully connected layer: the signal features learned by the convolutional layers are mapped onto the position classes to obtain the probability of each class, a larger probability indicating a position closer to the true source position; here the extracted position features are all connected.
The fully connected layer is computed as follows:
xf = ff(xf-1 Wf + bf);
where xf-1 is the fully connected layer input, xf is the fully connected layer output, Wf, bf are the fully connected layer weights and bias, and ff is the activation function, for which the softmax function is used in the present invention:
p(yl = j | xl; θ) = exp(θj^T xl) / Σk exp(θk^T xl);
where xl is the input, yl is the predicted class label, θ is the model parameter, and p(yl = j | xl; θ) is the probability that the sample belongs to class j under the model parameter θ.
2.4) Residual block: residual mappings are added to certain layers to solve the vanishing and exploding gradient problems caused by deepening the network, so that a deeper network can be used to fit the model.
Suppose H(x) is the objective function that the neural network has to fit. If multiple nonlinear layers can gradually approximate a complex function, then equivalently they can gradually approximate the residual function H(x) - x. If the function to be optimized is close to an identity mapping rather than a zero mapping, it is easier for the learning to find the perturbation relative to the identity mapping (i.e., the residual) than to learn the mapping from scratch. That is, the output of the current layer group equals the sum of the normal network output and the network input, which is then passed through the activation function.
3) The output source position classes are obtained by comparing the probability of a source being present in each class of the output vector with a threshold; the loss function between the output source position classes and the target classes is then computed.
The loss is the cross entropy between the network prediction y and the true result; the cross entropy measures the distance between the actual output and the desired output, and the smaller its value, the closer the two probability distributions.
4) Perform backpropagation, compute the error term of each layer in turn, and update the corresponding deep residual network weights with the gradients of the connection weights; if the set number of iterations or prediction accuracy has not been reached, repeat steps 2)-4), otherwise training ends.
Backpropagation of the deep residual network through the convolutional, pooling and fully connected layers is similar to that of the convolutional neural network, but because of the residual blocks part of it differs: the shortcut adds an identity term to the gradient, so the gradient can flow through the skip connection unattenuated. Here E is the error term, W is a weight and xi is the output of layer i.
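A minimal sketch of one such residual block, in the same Keras style as before, is shown below; the filter count, kernel size and output head are assumptions (the 56-layer network of the patent stacks many such blocks, and the patent compares per-class output probabilities with a threshold).

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=16, kernel_size=(3, 3)):
    """Basic residual block sketch: output = activation(F(x) + x).

    The block learns the residual F(x) = H(x) - x; the shortcut adds the
    input x back so gradients can flow through the identity path.
    """
    shortcut = x
    y = layers.Conv2D(filters, kernel_size, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])        # F(x) + x
    return layers.Activation("relu")(y)

# Example: stacking residual blocks on the 40*40 covariance input
inputs = tf.keras.Input(shape=(40, 40, 1))
x = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(inputs)
for _ in range(3):                         # a 56-layer network would stack many more blocks
    x = residual_block(x)
x = layers.GlobalAveragePooling2D()(x)
# per-class source-presence probabilities; sigmoid and 10 classes are illustrative assumptions
outputs = layers.Dense(10, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
```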
After the model has been trained, the corresponding source position is obtained by feeding the observed sound field data into the model.
The algorithm runs on a computer with an Intel(R) Core(TM) i5-7500 CPU @ 3.40 GHz and an Nvidia GTX 980 GPU; the operating system is Ubuntu 16.04.4 LTS.
To verify the effect of the invention, the prediction of underwater source localization with single and multiple sources was tested with the test set. The prediction results for underwater source localization with a single source are shown in Table 1:
Table 1
The experimental results show that, under equal conditions, predicting depth and range separately is more robust than predicting depth and range jointly: the separate prediction of depth and range already achieves a good result at a signal-to-noise ratio of 0 dB, whereas the joint prediction of depth and range only performs well at a signal-to-noise ratio of 10 dB. Depth and range were then each tested with the 56-layer deep residual network under signal-to-noise ratios of 0, 5 and 10 dB; the results are shown in Table 2:
Table 2
As can be seen, the experiments on depth and range with the 56-layer deep residual network under signal-to-noise ratios of 0, 5 and 10 dB show improvements of varying degree over the LeNet-5 structure and achieve the desired effect.
The prediction results for underwater source localization with multiple sources are shown in Table 3; they were obtained by training the 56-layer deep residual network with two-source data:
Table 3
As can be seen from the table, although the precision and accuracy in the multi-source case are not as high as in the single-source case, the prediction performance is good under a certain signal-to-noise ratio and meets the desired effect.
The above experimental results show that the method of the invention guarantees high underwater sound source localization accuracy while successfully combining deep learning with underwater sound source localization: both source range and depth are predicted, underwater sound source localization is performed in the single-source and multi-source cases with high precision and accuracy, and the real-time performance of underwater sound source detection is improved.
Aiming at the environment mismatch problem of existing matched field processing methods based on ocean environment modeling, the present invention uses a typical deep learning method, the convolutional neural network (CNN), to perform representation learning and high-level abstraction on the experimental data, so that underwater sound source localization can be carried out without modeling the true marine environment, thereby reducing the loss caused by model error. For underwater source localization with single and multiple sources, the invention uses the LeNet-5 convolutional neural network and a 56-layer deep residual network, achieving an underwater sound source localization algorithm with high precision and accuracy while improving the real-time performance of underwater sound source localization. It can be widely used in scenarios such as detecting intruding enemy submarines, tracking marine life, salvaging seabed wrecks and surveying marine resources.

Claims (6)

1. An underwater sound source localization method based on deep learning, characterized by comprising the following steps:
(1) normalizing the vector data simulated with the KRAKEN program and superimposing a zero-mean Gaussian random complex noise vector n to obtain the simulated sound field data p(f) at frequency f;
(2) constructing a normalized covariance matrix H from the simulated sound field data p(f), performing a Hermitian decomposition of the matrix H, and converting the complex matrix H into a real matrix that a convolutional neural network can process, thereby obtaining the input data of the convolutional neural network;
(3) training a convolutional neural network with the input data to obtain an underwater sound source location prediction model, and then predicting the range and depth of the signal source from the observed sound field data.
2. The underwater sound source localization method based on deep learning according to claim 1, characterized in that: in step (1), the data produced by the KRAKEN program are noise-free raw data, so noise must be added manually; the data vector s received by the hydrophone array is first normalized: |s| = 1, and a zero-mean Gaussian random complex noise vector n is then generated with the Box-Muller algorithm, with the following probability density function:
where γ is the nominal signal-to-noise ratio and N is the number of receivers in the hydrophone array;
the complex vector n is obtained by the following formula:
where Xi, Yi are uniformly distributed on (0, 1]; the finally obtained vector is d = s + n, and d is the simulated sound field data p(f) obtained in this step.
3. The underwater sound source localization method based on deep learning according to claim 1, characterized in that: in step (2), in order to make the processing independent of the complex source spectrum, the received simulated sound field data are converted into a normalized sample covariance matrix; a discrete Fourier transform is applied to the input data of the L sensors, and the sound field data at frequency f are expressed as p(f) = [p1(f), ..., pL(f)]T; the sound field is modeled as:
p(f) = S(f) g(f, r) + ε;
where ε is noise, S(f) is the source term and g is the Green's function; to reduce the influence of the sound field amplitude |S(f)|, the normalized covariance matrix of the complex sound field is formed, giving the conjugate-symmetric matrix:
where H denotes the conjugate transpose, so that C(f) = C^H(f), and Ns is the number of snapshots used;
writing C(f) = A + iB, in order to convert the complex matrix into a real matrix while retaining the data information, the input is set to the corresponding real block matrix, and the input of the convolutional neural network is then the matrix H.
4. The underwater sound source localization method based on deep learning according to claim 1, characterized in that in step (3) the source positions are first mapped to classes, which specifically includes two methods:
(a) depth and range are each divided evenly, trained separately, and merged after training; specifically:
depth and range are divided into K1 and K2 parts respectively, with widths Δd and Δr; each input vector xn, n = 1, ..., N, is labeled by gn and tn, with k1 = 1, ..., K1 and k2 = 1, ..., K2; the labels represent the true source position classes, i.e., the output of the underwater sound source location prediction model;
for the convolutional neural network (CNN), the position classes gn, tn are mapped into 1*K1 and 1*K2 vectors gn, tn, whose entries represent the expected output probabilities of the convolutional neural network, i.e., the probability that for input xn the source is at position dk, rk; these target vectors are used to train the CNN, and the prediction output is the region that attains the maximum value of the Softmax distribution over the prediction range;
(b) depth and range are divided evenly and merged, each region is represented by the centroid of its cell, and training is performed directly; specifically:
the plane formed by depth and range is divided into K parts a1, ..., aK, each cell having area Δa; each input vector xn, n = 1, ..., N, is labeled by tn, tn ∈ ak, k = 1, ..., K; the label represents the true source position class, i.e., the output of the underwater sound source location prediction model;
for the CNN, the position class tn is mapped into a 1*K vector tn, where Sk denotes the rectangular region of area Δa centered on tk and tn = (tn,1, ..., tn,K) represents the expected output probabilities of the neural network, i.e., the probability that for input xn the source is at position ak; these target vectors are used to train the CNN, and the prediction output is the region that attains the maximum value of the Softmax distribution over the prediction range.
5. The underwater sound source localization method based on deep learning according to claim 4, characterized in that a LeNet-5 convolutional neural network structure combined with the two source position mapping modes is used to train the underwater sound source location model for the single-source case, with the following training process:
1) initializing the convolutional neural network weights;
2) taking the data set processed by data preprocessing and source position mapping as input data, the source position class labels being one-hot coded vectors, and passing the input data through the forward propagation process composed of convolutional layers, pooling layers and fully connected layers to obtain the output value;
3) computing the loss function between the output source position class and the target class;
4) performing backpropagation, computing the error term of each layer in turn, and updating the corresponding convolutional neural network weights with the gradients of the connection weights; if the termination condition is not met, repeating steps 2)-4), otherwise ending the training;
and, depending on the training effect, further optimizing the single-source prediction in some scenarios with a 56-layer deep residual network, which further deepens the network, resolves the vanishing and exploding gradient problems, and further improves model precision and training effect.
6. The underwater sound source localization method based on deep learning according to claim 4, characterized in that a 56-layer deep residual network model and source position mapping mode (b) are used to train the underwater sound source location model for the multi-source case, with the following training process:
1) initializing the deep residual network weights;
2) taking the data set processed by data preprocessing and source position mapping as input data, the source position class label being a vector in which the entry of a class containing a source is set to 1 and all other entries are set to 0, and passing the input data through the forward propagation process composed of convolutional layers, pooling layers and fully connected layers to obtain the output value;
3) obtaining the output source position classes by comparing the probability of a source being present in each class of the output vector with a threshold, and then computing the loss function between the output source position classes and the target classes;
4) performing backpropagation, computing the error term of each layer in turn, and updating the corresponding deep residual network weights with the gradients of the connection weights; if the termination condition is not met, repeating steps 2)-4), otherwise ending the training.
CN201910236715.XA 2019-03-27 2019-03-27 Underwater sound source positioning method based on deep learning Active CN109993280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910236715.XA CN109993280B (en) 2019-03-27 2019-03-27 Underwater sound source positioning method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910236715.XA CN109993280B (en) 2019-03-27 2019-03-27 Underwater sound source positioning method based on deep learning

Publications (2)

Publication Number Publication Date
CN109993280A true CN109993280A (en) 2019-07-09
CN109993280B CN109993280B (en) 2021-05-11

Family

ID=67131509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910236715.XA Active CN109993280B (en) 2019-03-27 2019-03-27 Underwater sound source positioning method based on deep learning

Country Status (1)

Country Link
CN (1) CN109993280B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110531313A (en) * 2019-08-30 2019-12-03 西安交通大学 A kind of near-field signals source localization method based on deep neural network regression model
CN110807365A (en) * 2019-09-29 2020-02-18 浙江大学 Underwater target identification method based on fusion of GRU and one-dimensional CNN neural network
CN111008674A (en) * 2019-12-24 2020-04-14 哈尔滨工程大学 Underwater target detection method based on rapid cycle unit
CN111596262A (en) * 2020-05-07 2020-08-28 武汉大学 Vector hydrophone and multi-target direction estimation method based on vector hydrophone
CN111680889A (en) * 2020-05-20 2020-09-18 中国地质大学(武汉) Offshore oil leakage source positioning method and device based on cross entropy
CN111898732A (en) * 2020-06-30 2020-11-06 江苏省特种设备安全监督检验研究院 Ultrasonic ranging compensation method based on deep convolutional neural network
CN111965601A (en) * 2020-08-05 2020-11-20 西南交通大学 Underwater sound source passive positioning method based on nuclear extreme learning machine
CN111983619A (en) * 2020-08-07 2020-11-24 西北工业大学 Underwater acoustic target forward scattering acoustic disturbance positioning method based on transfer learning
CN112231974A (en) * 2020-09-30 2021-01-15 山东大学 TBM rock breaking seismic source seismic wave field characteristic recovery method and system based on deep learning
CN112733447A (en) * 2021-01-07 2021-04-30 浙江大学 Underwater sound source positioning method and system based on domain adaptive network
CN113099356A (en) * 2021-03-24 2021-07-09 华中科技大学 Method and device for self-adaptive sound field regulation and control
CN113109794A (en) * 2020-01-13 2021-07-13 中国科学院声学研究所 Deep sea sound source depth setting method based on deep neural network in strong noise environment
CN113138365A (en) * 2020-01-17 2021-07-20 中国科学院声学研究所 Single-vector hydrophone direction estimation method based on deep learning
CN113286257A (en) * 2021-05-20 2021-08-20 南京邮电大学 Novel distributed non-ranging positioning method
CN113657416A (en) * 2020-05-12 2021-11-16 中国科学院声学研究所 Deep sea sound source ranging method and system based on improved deep neural network
CN113671473A (en) * 2021-09-09 2021-11-19 哈尔滨工程大学 Joint matching field positioning method and system based on environmental constraint and Riemann distance
CN113739905A (en) * 2020-05-27 2021-12-03 现代摩比斯株式会社 Apparatus and method for locating noise occurring in steering system
CN114241272A (en) * 2021-11-25 2022-03-25 电子科技大学 Heterogeneous information fusion positioning method based on deep learning
CN115201753A (en) * 2022-09-19 2022-10-18 泉州市音符算子科技有限公司 Low-power-consumption multi-spectral-resolution voice positioning method
CN116106880A (en) * 2023-04-13 2023-05-12 北京理工大学 Underwater sound source ranging method and device based on attention mechanism and multi-scale fusion
CN117151198A (en) * 2023-09-06 2023-12-01 广东海洋大学 Underwater sound passive positioning method and device based on self-organizing competitive neural network

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030072456A1 (en) * 2001-10-17 2003-04-17 David Graumann Acoustic source localization by phase signature
CN102736064A (en) * 2011-04-14 2012-10-17 东南大学 Compressed sensing-based sound source positioning method for hearing aid
WO2016089300A1 (en) * 2014-12-02 2016-06-09 Thales Solutions Asia Pte Ltd. Methods and systems for spectral analysis of sonar data
CN107703486A (en) * 2017-08-23 2018-02-16 南京邮电大学 A kind of sound source localization algorithm based on convolutional neural network CNN
CN107909082A (en) * 2017-10-30 2018-04-13 东南大学 Sonar image target identification method based on deep learning technology
CN108122559A (en) * 2017-12-21 2018-06-05 北京工业大学 A kind of binaural sound source localization method based on deep learning in a digital hearing aid
CN108802683A (en) * 2018-05-30 2018-11-13 东南大学 A kind of sound source localization method based on sparse Bayesian learning
CN109001679A (en) * 2018-06-14 2018-12-14 河北工业大学 A kind of indoor sound source area positioning method based on convolutional neural networks
CN109212512A (en) * 2018-10-15 2019-01-15 东南大学 A kind of underwater acoustic array ambient sea noise simulation method with spatial coherence

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Dmitry Suvorov et al.: "Deep residual network for sound source localization in the time domain", arXiv *
Eric L. Ferguson et al.: "Convolutional neural networks for passive monitoring of a shallow water environment using a single sensor", ICASSP 2017 *
Haiqiang Niu et al.: "Source localization in an ocean waveguide using supervised machine learning", The Journal of the Acoustical Society of America *
Juan Manuel Vera-Diaz et al.: "Towards End-to-End Acoustic Localization using Deep Learning: from Audio Signal to Source Position Coordinates", arXiv *
Zhaoqiong Huang et al.: "Source localization using deep neural networks in a shallow water environment", The Journal of the Acoustical Society of America *
陈迎春 et al.: "A matched-field localization method based on spatial sparse reconstruction", Electronic Design Engineering (电子设计工程) *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110531313A (en) * 2019-08-30 2019-12-03 西安交通大学 A kind of near-field signals source localization method based on deep neural network regression model
CN110531313B (en) * 2019-08-30 2021-05-28 西安交通大学 Near-field signal source positioning method based on deep neural network regression model
CN110807365A (en) * 2019-09-29 2020-02-18 浙江大学 Underwater target identification method based on fusion of GRU and one-dimensional CNN neural network
CN110807365B (en) * 2019-09-29 2022-02-11 浙江大学 Underwater target identification method based on fusion of GRU and one-dimensional CNN neural network
CN111008674A (en) * 2019-12-24 2020-04-14 哈尔滨工程大学 Underwater target detection method based on fast recurrent unit
CN111008674B (en) * 2019-12-24 2022-05-03 哈尔滨工程大学 Underwater target detection method based on fast recurrent unit
CN113109794A (en) * 2020-01-13 2021-07-13 中国科学院声学研究所 Deep sea sound source depth setting method based on deep neural network in strong noise environment
CN113109794B (en) * 2020-01-13 2022-12-06 中国科学院声学研究所 Deep sea sound source depth setting method based on deep neural network in strong noise environment
CN113138365B (en) * 2020-01-17 2022-12-06 中国科学院声学研究所 Single-vector hydrophone direction estimation method based on deep learning
CN113138365A (en) * 2020-01-17 2021-07-20 中国科学院声学研究所 Single-vector hydrophone direction estimation method based on deep learning
CN111596262B (en) * 2020-05-07 2023-03-10 武汉敏声新技术有限公司 Vector hydrophone and multi-target direction estimation method based on vector hydrophone
CN111596262A (en) * 2020-05-07 2020-08-28 武汉大学 Vector hydrophone and multi-target direction estimation method based on vector hydrophone
CN113657416B (en) * 2020-05-12 2023-07-18 中国科学院声学研究所 Deep sea sound source ranging method and system based on improved deep neural network
CN113657416A (en) * 2020-05-12 2021-11-16 中国科学院声学研究所 Deep sea sound source ranging method and system based on improved deep neural network
CN111680889B (en) * 2020-05-20 2023-08-18 中国地质大学(武汉) Cross entropy-based offshore oil leakage source positioning method and device
CN111680889A (en) * 2020-05-20 2020-09-18 中国地质大学(武汉) Offshore oil leakage source positioning method and device based on cross entropy
CN113739905A (en) * 2020-05-27 2021-12-03 现代摩比斯株式会社 Apparatus and method for locating noise occurring in steering system
US11945521B2 (en) 2020-05-27 2024-04-02 Hyundai Mobis Co., Ltd. Device for locating noise in steering system
CN111898732A (en) * 2020-06-30 2020-11-06 江苏省特种设备安全监督检验研究院 Ultrasonic ranging compensation method based on deep convolutional neural network
CN111898732B (en) * 2020-06-30 2023-06-20 江苏省特种设备安全监督检验研究院 Ultrasonic ranging compensation method based on deep convolutional neural network
CN111965601A (en) * 2020-08-05 2020-11-20 西南交通大学 Underwater sound source passive positioning method based on kernel extreme learning machine
CN111983619A (en) * 2020-08-07 2020-11-24 西北工业大学 Underwater acoustic target forward scattering acoustic disturbance positioning method based on transfer learning
CN112231974B (en) * 2020-09-30 2022-11-04 山东大学 Deep learning-based method and system for recovering seismic wave field characteristics of rock breaking seismic source of TBM (Tunnel boring machine)
CN112231974A (en) * 2020-09-30 2021-01-15 山东大学 TBM rock breaking seismic source seismic wave field characteristic recovery method and system based on deep learning
CN112733447B (en) * 2021-01-07 2022-04-29 浙江大学 Underwater sound source positioning method and system based on domain adaptive network
CN112733447A (en) * 2021-01-07 2021-04-30 浙江大学 Underwater sound source positioning method and system based on domain adaptive network
CN113099356A (en) * 2021-03-24 2021-07-09 华中科技大学 Method and device for self-adaptive sound field regulation and control
CN113099356B (en) * 2021-03-24 2021-11-19 华中科技大学 Method and device for self-adaptive sound field regulation and control
CN113286257A (en) * 2021-05-20 2021-08-20 南京邮电大学 Novel distributed non-ranging positioning method
CN113671473B (en) * 2021-09-09 2023-09-15 哈尔滨工程大学 Combined matching field positioning method and system based on environment constraint and Riemann distance
CN113671473A (en) * 2021-09-09 2021-11-19 哈尔滨工程大学 Joint matching field positioning method and system based on environmental constraint and Riemann distance
CN114241272A (en) * 2021-11-25 2022-03-25 电子科技大学 Heterogeneous information fusion positioning method based on deep learning
CN115201753A (en) * 2022-09-19 2022-10-18 泉州市音符算子科技有限公司 Low-power-consumption multi-spectral-resolution voice positioning method
CN116106880A (en) * 2023-04-13 2023-05-12 北京理工大学 Underwater sound source ranging method and device based on attention mechanism and multi-scale fusion
CN117151198A (en) * 2023-09-06 2023-12-01 广东海洋大学 Underwater sound passive positioning method and device based on self-organizing competitive neural network
CN117151198B (en) * 2023-09-06 2024-04-09 广东海洋大学 Underwater sound passive positioning method and device based on self-organizing competitive neural network

Also Published As

Publication number Publication date
CN109993280B (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN109993280A (en) A kind of underwater sound source localization method based on deep learning
CN110807365B (en) Underwater target identification method based on fusion of GRU and one-dimensional CNN neural network
CN107703486B (en) Sound source positioning method based on convolutional neural network CNN
CN108696331B (en) Signal reconstruction method based on generative adversarial network
CN111835444B (en) Wireless channel scene identification method and system
CN108169708B (en) Direct positioning method of modular neural network
CN109782231B (en) End-to-end sound source positioning method and system based on multi-task learning
CN112560079B (en) Hidden false data injection attack method based on deep belief network and transfer learning
CN114564982B (en) Automatic identification method for radar signal modulation type
CN114595732B (en) Radar radiation source sorting method based on deep clustering
CN112733447B (en) Underwater sound source positioning method and system based on domain adaptive network
CN113673312B (en) Deep learning-based radar signal intra-pulse modulation identification method
CN110276256A (en) Low signal-to-noise ratio communication signal modulation recognition method and device based on parameter-adjusted stochastic resonance
Hosseini Nejad Takhti et al. Classification of marine mammals using the trained multilayer perceptron neural network with the whale algorithm developed with the fuzzy system
CN110808932B (en) Multilayer perceptron rapid modulation identification method based on multi-distribution test data fusion
CN114814776B (en) PD radar target detection method based on graph attention network and transfer learning
CN106772223B (en) A kind of single-bit spatial spectrum estimation method based on logistic regression
CN114172770B (en) Modulation signal identification method of quantum root tree mechanism evolution extreme learning machine
CN116452963A (en) Multi-dimensional feature-based seabed substrate type classification method
CN112434716B (en) Underwater target data augmentation method and system based on conditional adversarial neural network
Wang et al. Modulation recognition method for underwater acoustic communication signal based on relation network under small sample set
CN112666528A (en) Multi-station radar system interference identification method based on convolutional neural network
Luo et al. A Cross-domain Radar Emitter Recognition Method with Few-shot Learning
CN108564171A (en) A kind of neural network sound source angle estimation method based on fast global K-means clustering
CN113359091B (en) Deep learning-based multi-kernel function aliasing radar radiation source identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant