CN111352075B - Underwater multi-sound-source positioning method and system based on deep learning - Google Patents


Publication number: CN111352075B
Authority: CN (China)
Prior art keywords: sound, sound source, signal, source, hydrophone
Legal status: Active (granted)
Application number: CN201811564007.0A
Original language: Chinese (zh)
Other versions: CN111352075A
Inventors: 徐及 (Xu Ji), 黄兆琼 (Huang Zhaoqiong), 颜永红 (Yan Yonghong)
Assignee: Institute of Acoustics CAS
Priority and filing date: 2018-12-20
Application filed by Institute of Acoustics CAS
Publication of CN111352075A: 2020-06-30
Application granted; publication of CN111352075B: 2022-01-25


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18: Position-fixing using ultrasonic, sonic, or infrasonic waves
    • G01S5/22: Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements


Abstract

The invention discloses an underwater multi-sound-source positioning method and system based on deep learning. The method comprises: receiving a signal to be detected through a hydrophone array and estimating the azimuths of the sound sources; forming sub-array beams toward the directions where sound sources may exist; computing a spatial correlation matrix of the signal to be detected to form a feature vector; and inputting the feature vector to a pre-trained time-delay neural network, which outputs the distance of each sound source. The method does not depend on prior knowledge of environmental parameters, and it distinguishes multiple sound sources at the feature level by sub-array beamforming, so that several underwater targets can be positioned simultaneously.

Description

Underwater multi-sound-source positioning method and system based on deep learning
Technical Field
The invention relates to the field of underwater positioning, in particular to an underwater multi-sound-source positioning method and system based on deep learning.
Background
Sound source positioning comprises single-source and multi-source positioning. Positioning technology indicates the spatial bearing of a sound source target and thus provides important spatial information for subsequent information acquisition and processing.
Traditional methods mainly rely on modern digital signal processing to estimate the position of an underwater sound source, obtaining it by grid-matching search or by analytical solution.
In recent years, a small number of studies have introduced neural networks into underwater sound source positioning. Previous work, however, has targeted the single-source case; multi-source positioning is considerably more complex because the sources interfere with one another, and it remains an open problem in real environments.
Disclosure of Invention
The invention aims to overcome the above technical defects and provides an underwater multi-sound-source positioning method based on deep learning.
In order to achieve the above object, an underwater multi-sound-source positioning method based on deep learning comprises:
receiving a signal to be detected through a hydrophone array and estimating the azimuths of the sound sources; forming sub-array beams toward the directions where sound sources may exist; computing a spatial correlation matrix of the signal to be detected to form a feature vector; inputting the feature vector to a pre-trained time-delay neural network; and outputting the distance of each sound source.
As an improvement of the above method, the training step of the time-delay neural network includes:
step 1) forming sub-array beams at each frequency in the signal bandwidth to focus the sound source signal;
step 2) computing, at each frequency in the signal bandwidth, a spatial correlation matrix of the signals focused by all sub-arrays on each sound source position, to form a feature vector;
step 3) taking the feature vector as input and the known sound source distance as the label, and training the time-delay neural network with the minimum mean square error criterion to obtain the trained time-delay neural network.
As an improvement of the above method, the step 1) is specifically:
dividing the hydrophone array into B sub-arrays {Ω_1, …, Ω_B} and performing beamforming toward the known sound source on each sub-array; the focused signal at the b-th sub-array is then expressed as

$$g_b(f_i)=\sum_{k\in\Omega_b} Y_k(f_i)\,e^{\,j2\pi f_i\tau_k},\qquad \tau_k=\frac{l_k\,\hat{\mathbf{e}}_k\cdot\mathbf{u}(\beta)}{c},$$

where τ_k denotes the delay of the sound source at the k-th hydrophone relative to the first hydrophone; l_k and ê_k denote the distance and the unit direction vector from the first hydrophone to the k-th; u(β) is the unit vector toward the azimuth β of the known sound source; Ω_b is the set of hydrophone indices contained in sub-array b; c is the sound speed; j is the imaginary unit; f_i is the frequency and i the frequency index; and Y_k(f_i) is the Fourier transform of the digitized sound signal received by the k-th hydrophone. Performing sub-array beamforming on all B sub-arrays gives

$$G(f_i)=[g_1(f_i),\ldots,g_B(f_i)]^{T}.$$
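For illustration, the focusing above amounts to phase-aligning each hydrophone spectrum toward the known azimuth and summing within each sub-array. The following Python sketch (numpy) shows one way to implement it; the uniform-line-array geometry, the delay model tau_k = l_k cos(beta)/c, and all array sizes are assumptions made for the example rather than details fixed by the patent.

    import numpy as np

    def subarray_beamform(Y, freqs, tau, subarrays):
        # Y         : (K, F) complex spectra, Y[k, i] = Y_k(f_i)
        # freqs     : (F,) analysis frequencies f_i
        # tau       : (K,) steering delays tau_k toward the assumed azimuth
        # subarrays : list of B index arrays, the partition {Omega_1, ..., Omega_B}
        # returns G : (B, F) focused signals, G[b, i] = g_b(f_i)
        steer = np.exp(1j * 2 * np.pi * np.outer(tau, freqs))  # e^{j 2 pi f_i tau_k}
        focused = Y * steer                                    # phase-align every channel
        return np.stack([focused[idx].sum(axis=0) for idx in subarrays])

    # Illustrative geometry: K hydrophones in a uniform line, spacing d metres,
    # sound speed c m/s, plane wave arriving from azimuth beta.
    K, B, d, c = 16, 4, 1.5, 1500.0
    beta = np.deg2rad(40.0)
    tau = np.arange(K) * d * np.cos(beta) / c        # tau_k = l_k cos(beta) / c
    subarrays = np.split(np.arange(K), B)            # four 4-element sub-arrays
    freqs = np.linspace(100.0, 200.0, 51)            # assumed effective band (Hz)
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((K, freqs.size)) + 1j * rng.standard_normal((K, freqs.size))
    G = subarray_beamform(Y, freqs, tau, subarrays)  # G(f_i) = [g_1(f_i), ..., g_B(f_i)]^T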
as an improvement of the above method, the step 2) is specifically:
computing a spatial correlation matrix R (f) of the sound sourcei):
Figure BDA0001914060190000024
Figure BDA0001914060190000025
The spatial correlation matrix R (f)i) The real and imaginary parts of each element of (a) are connected in series to form a feature vector.
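Continuing the sketch above, the per-frequency correlation features of step 2) can be assembled as follows; normalising the focused vector before the outer product is an assumption (it removes the dependence on source level) rather than something the patent states explicitly.

    def correlation_features(G):
        # G : (B, F) focused sub-array signals from subarray_beamform().
        # For each frequency f_i, form R(f_i) from the focused vector and
        # concatenate the real and imaginary parts of all its elements.
        feats = []
        for i in range(G.shape[1]):
            g = G[:, i:i + 1]                        # (B, 1) focused snapshot at f_i
            g = g / (np.linalg.norm(g) + 1e-12)      # normalisation (assumed)
            R = g @ g.conj().T                       # (B, B) correlation matrix R(f_i)
            feats.append(np.concatenate([R.real.ravel(), R.imag.ravel()]))
        return np.concatenate(feats)                 # length 2 * B^2 * F

    x = correlation_features(G)                      # feature vector for one training sample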
As an improvement of the above method, the step of training the time-delay neural network by using the minimum mean square error criterion is as follows:
$$E=\frac{1}{L}\sum_{l=1}^{L}\left(r_l-r'_l\right)^{2},$$

wherein r_l is the sound source distance value output by the time-delay neural network, r'_l is the known sound source distance value, and L is the number of samples; the cost function E is minimized by iterating with stochastic-gradient-descent back-propagation, yielding the weight matrices of the time-delay neural network.
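A minimal PyTorch sketch of this training step is given below. The patent does not specify the network topology, so the dilated-convolution layout, the choice to convolve over the frequency bins, and all hyper-parameters are assumptions; only the feature input, the known-range label, and the MSE/stochastic-gradient-descent criterion come from the text.

    import torch
    import torch.nn as nn

    class TDNN(nn.Module):
        # A small time-delay neural network: dilated 1-D convolutions over the
        # feature sequence, pooled into a single scalar range estimate r_l.
        def __init__(self, feat_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(feat_dim, hidden, kernel_size=5, dilation=1), nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2), nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=3, dilation=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.out = nn.Linear(hidden, 1)

        def forward(self, x):                   # x: (batch, feat_dim, steps)
            return self.out(self.net(x).squeeze(-1)).squeeze(-1)

    B = 4                                       # sub-arrays, as in the earlier sketch
    model = TDNN(feat_dim=2 * B * B)            # channels: Re/Im of one B x B matrix per bin
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    mse = nn.MSELoss()                          # the criterion E above

    def train_step(features, r_true):
        # features : (batch, 2*B*B, F) per-frequency correlation features
        # r_true   : (batch,) known source ranges r'_l (the labels)
        opt.zero_grad()
        loss = mse(model(features), r_true)     # E = mean squared range error
        loss.backward()                         # back-propagation of E
        opt.step()                              # one stochastic-gradient-descent step
        return loss.item()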
As an improvement of the above method, the estimating the azimuth of the sound source specifically includes:
step S1) calculates the signal Y (f) to be detectedi) Of the spatial correlation matrix E [ Y (f)i)YH(fi)]:
Figure BDA0001914060190000027
Wherein E (-) represents the expected average operation (-)HRepresents the transpose of the conjugate,
Figure BDA0001914060190000028
and
Figure BDA0001914060190000029
the eigenvalue and eigenvector matrices correspond to signal subspaces respectively,
Figure BDA00019140601900000210
and
Figure BDA00019140601900000211
respectively corresponding to the eigenvalue and the eigenvector matrix to a noise subspace;
step S2) of obtaining PMUSICMaximum value of function obtains theta as sound source direction estimated value alpha1,…,αD
Figure BDA0001914060190000031
Wherein, H (theta, f)i) Is the steering vector of the sound source, F is the number of frequency points, and D is the number of sound sources.
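The azimuth stage can be sketched as an incoherent broadband MUSIC scan, shown below; the snapshot layout, the candidate-angle grid, and the peak picking are assumptions of the example, while the signal/noise eigendecomposition and the summed P_MUSIC spectrum follow the formulas above.

    def music_spectrum(Y, steering, D):
        # Y        : (K, F, T) complex snapshots of the array signal over T frames
        # steering : (A, K, F) candidate steering vectors H(theta_a, f_i)
        # D        : assumed number of sources
        # returns  : (A,) broadband P_MUSIC evaluated on the angle grid
        A, K, F = steering.shape
        T = Y.shape[2]
        P = np.zeros(A)
        for i in range(F):
            R = Y[:, i, :] @ Y[:, i, :].conj().T / T   # sample estimate of E[Y Y^H]
            w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
            Un = V[:, :K - D]                          # noise-subspace eigenvectors U_N
            for a in range(A):
                h = steering[a, :, i]
                denom = np.real(h.conj() @ Un @ Un.conj().T @ h)
                P[a] += 1.0 / max(denom, 1e-12)        # accumulate over frequencies
        return P

    # The D largest peaks of P give the azimuth estimates {alpha_1, ..., alpha_D}:
    # angles = np.deg2rad(np.arange(0.0, 180.0, 0.5))
    # estimates = angles[np.argsort(P)[-D:]]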
As an improvement of the above method, the sub-array beamforming toward the directions where sound sources may exist is specifically:

dividing the hydrophone array into B sub-arrays {Ω_1, …, Ω_B} and forming, on each sub-array, a beam toward each sound source; the focused signal for the d-th sound source at the b-th sub-array is then expressed as

$$g_{b,d}(f_i)=\sum_{k\in\Omega_b} Y_k(f_i)\,e^{\,j2\pi f_i\tau_{k,d}},\qquad \tau_{k,d}=\frac{l_k\,\hat{\mathbf{e}}_k\cdot\mathbf{u}(\alpha_d)}{c},$$

wherein τ_{k,d} denotes the delay of the d-th sound source (1 ≤ d ≤ D) at the k-th hydrophone relative to the first hydrophone; l_k and ê_k denote the distance and the unit direction vector from the first hydrophone to the k-th; u(α_d) is the unit vector toward the estimated azimuth α_d of the d-th sound source; Ω_b is the set of hydrophone indices contained in sub-array b; c is the sound speed; j is the imaginary unit; f_i is the frequency and i the frequency index. Performing sub-array beamforming on all B sub-arrays for the d-th sound source gives

$$G_d(f_i)=[g_{1,d}(f_i),\ldots,g_{B,d}(f_i)]^{T}.$$
as an improvement of the above method, the calculating a spatial correlation matrix of the signal to be detected to form the eigenvector specifically includes:
spatial correlation matrix S of the d-th possible source of the signal to be detectedd(fi) Comprises the following steps:
Figure BDA0001914060190000036
Figure BDA0001914060190000037
the spatial correlation matrix Sd(fi) The real and imaginary parts of each element of (a) are concatenated to form a feature vector.
An underwater multi-sound source localization system based on deep learning, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the above method when executing the program.
The invention has the advantages that:
1. the underwater multi-sound-source positioning method uses a deep neural network and does not depend on prior knowledge of environmental parameters; multiple sound sources are distinguished at the feature level by sub-array beamforming, so that several underwater targets can be positioned simultaneously;
2. the method needs only single-source data in the training stage yet performs positioning in multi-source scenes, greatly reducing model complexity.
Drawings
Fig. 1 is a flow chart of an underwater multi-sound-source localization method based on deep learning according to the present invention.
Detailed Description
The invention will now be further described with reference to the accompanying drawings.
Referring to fig. 1, the invention provides an underwater multi-sound-source positioning method based on deep learning, which comprises the following steps:
step 1) converting the sound source signals received by the hydrophone array into digital sound signals, wherein the hydrophone array comprises K hydrophones;
step 2) performing a Fourier transform on the digital sound signals;
step 3) forming sub-array beams at each frequency in the signal bandwidth to focus the sound source signals arriving from different directions, as follows:

3-1) divide the hydrophone array into B sub-arrays {Ω_1, …, Ω_B} and form beams toward the sound sources on each sub-array. Assuming there are D sound sources, the focused signal for the d-th sound source at the b-th sub-array can be expressed as

$$g_{b,d}(f_i)=\sum_{k\in\Omega_b} Y_k(f_i)\,e^{\,j2\pi f_i\tau_{k,d}},\qquad \tau_{k,d}=\frac{l_k\,\hat{\mathbf{e}}_k\cdot\mathbf{u}(\alpha_d)}{c},$$

where τ_{k,d} is the delay of the d-th sound source at the k-th hydrophone relative to the first hydrophone, l_k and ê_k are the distance and the unit direction vector from the first hydrophone to the k-th, Ω_b is the set of hydrophone indices contained in sub-array b, c is the sound speed, j is the imaginary unit, f_i is the frequency, and i is the frequency index. Performing sub-array beamforming on all B sub-arrays for the d-th sound source yields

$$G_d(f_i)=[g_{1,d}(f_i),\ldots,g_{B,d}(f_i)]^{T};$$
step 4) at each frequency in the signal bandwidth, computing the spatial correlation matrix of the signals focused by all sub-arrays at each sound source position to obtain the feature vector, as follows:

compute the covariance matrix for each sound source; the spatial correlation matrix of the d-th sound source can be expressed as

$$S_d(f_i)=\tilde{G}_d(f_i)\,\tilde{G}_d^{H}(f_i),\qquad \tilde{G}_d(f_i)=\frac{G_d(f_i)}{\|G_d(f_i)\|};$$

the real and imaginary parts of S_d(f_i) over the effective frequency band are concatenated to serve as the input feature vector of the neural network;
step 5) in the training stage, learning from the training samples with a time-delay neural network to obtain a model of the mapping between feature vectors and sound source distance. The training criterion of the neural network is the minimum mean square error:

$$E=\frac{1}{L}\sum_{l=1}^{L}\left(r_l-r'_l\right)^{2},$$

where r_l is the estimated sound source distance, r'_l is the reference sound source distance, and L is the number of samples; the cost function E is minimized by a stochastic-gradient-descent back-propagation algorithm to obtain the weight matrices of the neural network;
step 6) in the testing stage, inputting a test sample, estimating the azimuths, performing sub-array beamforming toward the directions where sound sources may exist, then computing the spatial correlation matrices to obtain the feature vectors of the test signal, and inputting them to the trained model to obtain the distance estimate of each sound source, as follows:

step 6-1) estimate the azimuths of possible signals in the test sample. Based on the MUSIC (multiple signal classification) method, first compute the spatial correlation matrix of the observed signal, expressed as

$$E[Y(f_i)Y^{H}(f_i)]=U_S\Lambda_S U_S^{H}+U_N\Lambda_N U_N^{H},$$

where E(·) denotes the expectation operation and (·)^H the conjugate transpose; Λ_S and U_S are the eigenvalue and eigenvector matrices corresponding to the signal subspace, and Λ_N and U_N those corresponding to the noise subspace. The signal azimuths are obtained by maximizing the function

$$P_{\mathrm{MUSIC}}(\theta)=\sum_{i=1}^{F}\frac{1}{H^{H}(\theta,f_i)\,U_N U_N^{H}\,H(\theta,f_i)},$$

and the final target azimuths are estimated as Θ = {θ_1, …, θ_D};
step 6-2) for every direction in Θ where a sound source may exist, extract features according to steps 3-1) and 4), and input them to the model to obtain the distance information of each target.
The feature vectors extracted in steps 3) and 4) distinguish multiple sound sources at the feature level. The neural network model can therefore be trained on single-source signal data to learn the correspondence between features and target distance. At test time, the azimuth estimation module estimates the directions where sound sources may exist, features are extracted for each source, and feeding these features to the neural network model yields a distance estimate for each source, thereby positioning multiple sound sources, as illustrated in the sketch below.
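Putting the pieces together, the test stage can be sketched end-to-end as follows, reusing the helper functions from the earlier snippets; delays_for (a hypothetical helper returning the steering delays for a candidate azimuth), the snapshot averaging, and the reshape into the network's input layout are illustrative glue-code assumptions that the patent leaves unspecified.

    def localize_sources(Y, freqs, subarrays, steering, angle_grid, delays_for, model, D):
        B, F = len(subarrays), len(freqs)

        # step 6-1): candidate azimuths of the D possible sources via broadband MUSIC
        P = music_spectrum(Y, steering, D)
        azimuths = angle_grid[np.argsort(P)[-D:]]

        # step 6-2): focus on each azimuth, build features, query the trained TDNN
        Y_snap = Y.mean(axis=2)                 # (K, F) averaged snapshot (assumption)
        results = []
        for beta in azimuths:
            G = subarray_beamform(Y_snap, freqs, delays_for(beta), subarrays)
            feats = correlation_features(G).reshape(F, 2 * B * B).T   # (2*B*B, F)
            x = torch.tensor(feats[None], dtype=torch.float32)
            results.append((float(beta), model(x).item()))            # (azimuth, range)
        return results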
Finally, it should be noted that the above embodiments are intended to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. An underwater multi-sound-source positioning method based on deep learning comprises the following steps:
receiving a signal to be detected through a hydrophone array and estimating the azimuths of the sound sources; forming sub-array beams toward the directions where sound sources may exist; computing a spatial correlation matrix of the signal to be detected to form a feature vector; inputting the feature vector to a pre-trained time-delay neural network; and outputting the distance of each sound source;
wherein the sub-array beamforming toward the directions where sound sources may exist is specifically:

dividing the hydrophone array into B sub-arrays {Ω_1, …, Ω_B} and forming, on each sub-array, a beam toward each sound source; the focused signal for the d-th sound source at the b-th sub-array is then expressed as

$$g_{b,d}(f_i)=\sum_{k\in\Omega_b} Y_k(f_i)\,e^{\,j2\pi f_i\tau_{k,d}},\qquad \tau_{k,d}=\frac{l_k\,\hat{\mathbf{e}}_k\cdot\mathbf{u}(\alpha_d)}{c},$$

wherein τ_{k,d} denotes the delay of the d-th sound source (1 ≤ d ≤ D) at the k-th hydrophone relative to the first hydrophone; l_k and ê_k denote the distance and the unit direction vector from the first hydrophone to the k-th; Ω_b is the set of hydrophone indices contained in sub-array b; c is the sound speed; j is the imaginary unit; f_i is the frequency and i the frequency index; performing sub-array beamforming on all B sub-arrays for the d-th sound source gives

$$G_d(f_i)=[g_{1,d}(f_i),\ldots,g_{B,d}(f_i)]^{T};$$

and the computing of the spatial correlation matrix of the signal to be detected to form the feature vector is specifically:

the spatial correlation matrix S_d(f_i) of the d-th possible sound source of the signal to be detected is

$$\tilde{G}_d(f_i)=\frac{G_d(f_i)}{\|G_d(f_i)\|},\qquad S_d(f_i)=\tilde{G}_d(f_i)\,\tilde{G}_d^{H}(f_i);$$

the real and imaginary parts of each element of S_d(f_i) are concatenated to form the feature vector.
2. The deep-learning-based underwater multi-sound-source positioning method according to claim 1, wherein the training of the time-delay neural network comprises:
step 1) forming sub-array beams at each frequency in the signal bandwidth to focus the sound source signal;
step 2) computing, at each frequency in the signal bandwidth, a spatial correlation matrix of the signals focused by all sub-arrays on each sound source position, to form a feature vector;
step 3) taking the feature vector as input and the known sound source distance as the label, and training the time-delay neural network with the minimum mean square error criterion to obtain the trained time-delay neural network.
3. The deep-learning-based underwater multi-sound-source positioning method according to claim 2, wherein step 1) is specifically:

dividing the hydrophone array into B sub-arrays {Ω_1, …, Ω_B} and performing beamforming toward the known sound source on each sub-array; the focused signal at the b-th sub-array is then expressed as

$$g_b(f_i)=\sum_{k\in\Omega_b} Y_k(f_i)\,e^{\,j2\pi f_i\tau_k},\qquad \tau_k=\frac{l_k\,\hat{\mathbf{e}}_k\cdot\mathbf{u}(\beta)}{c},$$

wherein τ_k denotes the delay of the sound source at the k-th hydrophone relative to the first hydrophone; l_k and ê_k denote the distance and the unit direction vector from the first hydrophone to the k-th; Ω_b is the set of hydrophone indices contained in sub-array b; c is the sound speed; j is the imaginary unit; f_i is the frequency and i the frequency index; Y_k(f_i) is the Fourier transform of the digitized sound signal received by the k-th hydrophone; and β is the azimuth of the known sound source; performing sub-array beamforming on all B sub-arrays gives

$$G(f_i)=[g_1(f_i),\ldots,g_B(f_i)]^{T}.$$
4. The deep-learning-based underwater multi-sound-source positioning method according to claim 3, wherein step 2) is specifically:

computing the spatial correlation matrix R(f_i) of the sound source:

$$\tilde{G}(f_i)=\frac{G(f_i)}{\|G(f_i)\|},\qquad R(f_i)=\tilde{G}(f_i)\,\tilde{G}^{H}(f_i);$$

the real and imaginary parts of each element of the spatial correlation matrix R(f_i) are concatenated to form the feature vector.
5. The deep-learning-based underwater multi-sound-source positioning method according to claim 4, wherein the time-delay neural network is trained with the minimum mean square error criterion:

$$E=\frac{1}{L}\sum_{l=1}^{L}\left(r_l-r'_l\right)^{2},$$

wherein r_l is the sound source distance value output by the time-delay neural network, r'_l is the known sound source distance value, and L is the number of samples; the cost function E is minimized by iterating with stochastic-gradient-descent back-propagation to obtain the weight matrices of the time-delay neural network.
6. The deep-learning-based underwater multi-sound-source positioning method according to claim 1, wherein the estimating of the azimuths of the sound sources specifically comprises:

step S1) computing the spatial correlation matrix E[Y(f_i)Y^H(f_i)] of the signal to be detected Y(f_i):

$$E[Y(f_i)Y^{H}(f_i)]=U_S\Lambda_S U_S^{H}+U_N\Lambda_N U_N^{H},$$

wherein E(·) denotes the expectation operation and (·)^H the conjugate transpose; Λ_S and U_S are the eigenvalue and eigenvector matrices corresponding to the signal subspace, and Λ_N and U_N are the eigenvalue and eigenvector matrices corresponding to the noise subspace;

step S2) locating the maxima of the P_MUSIC function to obtain the sound source azimuth estimates Θ = {α_1, …, α_D}:

$$P_{\mathrm{MUSIC}}(\theta)=\sum_{i=1}^{F}\frac{1}{H^{H}(\theta,f_i)\,U_N U_N^{H}\,H(\theta,f_i)},$$

wherein H(θ, f_i) is the steering vector of the sound source, F is the number of frequency points, and D is the number of sound sources.
7. An underwater multi-sound source localization system based on deep learning, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, carries out the steps of the method according to one of claims 1 to 6.

Priority Applications (1)

CN201811564007.0A | priority/filing date 2018-12-20 | Underwater multi-sound-source positioning method and system based on deep learning


Publications (2)

CN111352075A | published 2020-06-30
CN111352075B | granted 2022-01-25

Family ID: 71195256


Families Citing this family (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN113419216B * | 2021-06-21 | 2023-10-31 | 南京信息工程大学 | Multi-sound source positioning method suitable for reverberant environment
CN115047408B * | 2022-06-13 | 2023-08-15 | 天津大学 | Underwater multi-sound-source positioning method based on single-layer large convolution kernel neural network


Family Cites Families (2)

Publication number | Priority date | Publication date | Assignee | Title
JP6463904B2 * | 2014-05-26 | 2019-02-06 | キヤノン株式会社 (Canon Inc.) | Signal processing apparatus, sound source separation method, and program
JP6567832B2 * | 2015-01-29 | 2019-08-28 | 日本電産株式会社 (Nidec Corporation) | Radar system, radar signal processing apparatus, vehicle travel control apparatus and method, and computer program

Patent Citations (5)

Publication number | Priority date | Publication date | Assignee | Title
US5621858A * | 1992-05-26 | 1997-04-15 | Ricoh Corporation | Neural network acoustic and visual speech recognition system training method and apparatus
WO2006107230A1 * | 2005-03-30 | 2006-10-12 | Intel Corporation | Multiple-input multiple-output multicarrier communication system with joint transmitter and receiver adaptive beamforming for enhanced signal-to-noise ratio
CN105005026A * | 2015-06-08 | 2015-10-28 | 中国船舶重工集团公司第七二六研究所 | Near-field target sound source three-dimensional passive positioning method
CN105609113A * | 2015-12-15 | 2016-05-25 | 中国科学院自动化研究所 | Bispectrum weighted spatial correlation matrix-based speech sound source localization method
CN108828566A * | 2018-06-08 | 2018-11-16 | 苏州桑泰海洋仪器研发有限责任公司 | Underwater pulse signal recognition method based on towed line array

Non-Patent Citations (3)

Zhaoqiong Huang et al., "Multiple Source Localization in a Shallow Water Waveguide Exploiting Subarray Beamforming and Deep Neural Networks", Sensors, 2019-11-02, pp. 1-22 *
宫先仪 et al., "Pattern Recognition Methods for Underwater Acoustic Signal Processing II: Experimental Study" (水声信号处理的模式识别方法 II 实验研究), 声学与电子工程 (Acoustics and Electronics Engineering), 1992-12-31, pp. 1-6 *
徐及 et al., "Advances in the Application of Deep Learning to Passive Underwater Target Recognition" (深度学习在水下目标被动识别中的应用进展), 信号处理 (Journal of Signal Processing), 2019-09-30, pp. 1460-1475 *



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant