CN114252844A - Passive positioning method for single sound source target - Google Patents
- Publication number
- CN114252844A (application CN202111599748.4A)
- Authority
- CN
- China
- Prior art keywords
- microphone
- sound source
- base station
- frequency
- positioning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/18—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
- G01S5/22—Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
Abstract
The invention relates to a passive positioning method for a single sound source target. Triaxial microphone sensors are arranged in space as an array with a defined geometry, and a high-capacity data acquisition module collects the sound field information of the target. Because the sound emitted by the source arrives at each microphone at a different time, the spatial position of the source is calculated from the acquired time differences of arrival combined with the known spatial positions of the microphone array. A bionic binaural ultra-micro-baseline station layout minimizes the sensor array, meets portability and wearability requirements, and suits complex combat scenarios. Group-wave phase measurement improves the precision and resolution of time-difference measurement under a small baseline, and thereby the sound source positioning accuracy. A dual-base-station DOA algorithm performs fast far-field wide-area scanning to locate the region containing the source, and SRP-PHAT performs fine near-field small-area scanning to achieve high-precision localization, maximizing sound source positioning efficiency.
Description
Technical Field
The invention belongs to the technical field of sound source localization, and in particular relates to a passive positioning method for a single sound source target, specifically a rapid passive positioning method for a sound source target based on an ultra-micro baseline.
Background
In combat, a sniper can efficiently shoot enemy personnel and destroy key equipment, inflicting heavy casualties and psychological panic on troops; this threat grows by the day.
To counter the threat posed by snipers, efficient anti-sniper detection systems must be developed. Among them, acoustic detection is a research hotspot in the sniper localization field owing to its all-weather operation, low cost, and strong interference resistance.
Sniper sound source localization has mainly borrowed the positioning methods of counter-UAV and smart-mine systems: acoustic sensors are deployed in a large, long-baseline detection array, and the sniper is located by measuring the position of the sound source produced when the sniper rifle fires. As combat scenes diversify and grow more complex, existing acoustic detection systems have the following problems:
(1) The acoustic detection system is bulky, with sensor spacing generally exceeding 70 cm; it is unsuitable for wearable individual-soldier detection and adapts poorly to confined spaces and urban street fighting.
(2) Positioning computation has poor real-time performance, which is inadequate for rapid situation awareness, battle-situation feedback, and similar conditions.
(3) Time-difference measurement precision is low, resulting in low sound source positioning accuracy.
Disclosure of Invention
The invention provides a passive positioning method for a single sound source target that solves the above problems.
In order to solve the technical problems, the invention provides a sound source positioning method based on ultramicro baseline double base stations, which is characterized by comprising the following steps:
s1, constructing an ultra-micro-baseline dual-base-station microphone array: the microphone detection array consists of 2 base stations; each base station uses 8 microphones symmetrically distributed above and below and fixed on a bracket, each at distance d from the bracket's center point; one microphone is placed above and one below the center on the Z axis, and the remaining six are evenly distributed around the Z axis in two groups of three, each at a 45° angle to the Z axis;
s2, acquiring sound source signals acquired by the double-base-station microphone array through the data synchronous acquisition system;
s3, applying variational modal decomposition to sparsify the spectrum of each sound source signal acquired in step S2, adaptively decomposing each signal into several modal frequency components, and taking the highest-frequency modal component μn of each microphone as the input for subsequent time-difference extraction;
s4, extracting high-precision time difference information inside each base station according to the highest-frequency modal component of each microphone;
s5, obtaining sound source direction information corresponding to each base station according to the high-precision time difference information extracted in S4 and the position coordinate relation of the microphone of the base station;
s6, according to the sound source direction information and the microphone position coordinates corresponding to each base station obtained in the S5, a cross joint positioning method is adopted to quickly obtain a sound source positioning area;
and S7, treating the region obtained in S6 as a spherical region, combining the two base stations' microphones into a single array in one coordinate system, finely scanning the spherical region, and reconstructing the corresponding sound field energy inside it with a joint steered-response power and phase transform (SRP-PHAT) algorithm; the point of maximum energy is the specific position of the sound source.
Beneficial effects: triaxial microphone sensors are arranged in space as an array with a defined geometry, and a high-capacity data acquisition module collects the sound field information of the target. Because the sound emitted by the source arrives at each microphone at a different time, the spatial position of the source is calculated from the acquired time differences of arrival combined with the known spatial positions of the microphone array. The method has the following advantages:
1. A bionic binaural ultra-micro-baseline station layout minimizes the sensor array, meets portability and wearability requirements, and suits complex combat scenarios;
2. Group-wave phase measurement improves the precision and resolution of time-difference measurement under a small baseline, improving sound source positioning accuracy;
3. A dual-base-station DOA algorithm performs fast far-field wide-area scanning to locate the region containing the sound source; SRP-PHAT then performs fine near-field small-area scanning for high-precision localization, maximizing sound source positioning efficiency.
Drawings
FIG. 1 a single base station microphone array;
FIG. 2 a dual base station microphone array model;
FIG. 3 is a two-base station fusion localization model.
Detailed Description
In order to make the objects, contents and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention is provided.
The invention provides a passive positioning method of a single sound source target, which specifically comprises the following steps:
s1, constructing an ultramicro baseline double-base-station microphone detection array;
the microphone detection array is composed of 2 base stations, each base station adopts 8 microphones which are symmetrically distributed, the distance from each microphone to a central point is d, d is set to be 5cm, and the direction angle and pitch angle information of the microphones are M1(0°,90°),M2(330°,45°),M3(90°,45°),M4(210°,45°), M5(30°,-45°),M6(210°,-45°),M7(270°,-45°),M8(0 °, -90 °). The azimuth angle is based on the positive direction of the X axis, the pitch angle is based on the XY plane, and the upward direction is positive. A microphone is respectively arranged up and down in the direction coaxial with the Z axis, and the other 3 microphones are uniformly distributed around the Z axis and form an included angle of 45 degrees with the Z axis;
the specific structure of a single base station is shown in fig. 1;
the human physiology-imitating binaural structure is distributed on 2 base stations, the center distance of the base stations is 15cm, and the base stations are jointly established under the same coordinate system, as shown in fig. 2:
taking the origin of coordinates as the center, the specific positions of the microphone array of the base station 1 are as follows:
M1(-7.5,0,5),
M2(-7.5+5·cos(330°)·cos(45°),5·sin(330°)·cos(45°),5·sin(45°)),
M3(-7.5+5·cos(90°)·cos(45°),5·sin(90°)·cos(45°),5·sin(45°)),
M4(-7.5+5·cos(210°)·cos(45°),5·sin(210°)·cos(45°),5·sin(45°)),
M5(-7.5+5·cos(30°)·cos(-45°),5·sin(30°)·cos(-45°),5·sin(-45°)),
M6(-7.5+5·cos(210°)·cos(-45°),5·sin(210°)·cos(-45°),5·sin(-45°)),
M7(-7.5+5·cos(270°)·cos(-45°),5·sin(270°)·cos(-45°),5·sin(-45°)),
M8(-7.5,0,-5);
the specific location of the base station 2 microphone array is as follows:
M9(7.5,0,5),
M10(7.5+5·cos(330°)·cos(45°),5·sin(330°)·cos(45°),5·sin(45°)),
M11(7.5+5·cos(90°)·cos(45°),5·sin(90°)·cos(45°),5·sin(45°)),
M12(7.5+5·cos(210°)·cos(45°),5·sin(210°)·cos(45°),5·sin(45°)),
M13(7.5+5·cos(30°)·cos(-45°),5·sin(30°)·cos(-45°),5·sin(-45°)),
M14(7.5+5·cos(210°)·cos(-45°),5·sin(210°)·cos(-45°),5·sin(-45°)),
M15(7.5+5·cos(270°)·cos(-45°),5·sin(270°)·cos(-45°),5·sin(-45°)),
M16(7.5,0,-5);
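The sixteen coordinates above all follow the same spherical-to-Cartesian pattern. A minimal sketch (not part of the disclosure; the helper name is hypothetical, units in cm) that generates both stations from the azimuth/pitch table with d = 5 cm and a ±7.5 cm station offset:

```python
import numpy as np

def mic_positions(center_x, d=5.0):
    """Build one base station's 8 microphone coordinates (cm) from the
    azimuth/pitch table in the description (hypothetical helper name)."""
    angles = [(0, 90), (330, 45), (90, 45), (210, 45),
              (30, -45), (210, -45), (270, -45), (0, -90)]
    pts = []
    for az, el in angles:
        a, e = np.radians(az), np.radians(el)
        # x offset places the station 7.5 cm left or right of the origin
        pts.append((center_x + d * np.cos(a) * np.cos(e),
                    d * np.sin(a) * np.cos(e),
                    d * np.sin(e)))
    return np.array(pts)

station1 = mic_positions(-7.5)   # M1..M8
station2 = mic_positions(+7.5)   # M9..M16
```

For example, the first row of `station1` reproduces M1(−7.5, 0, 5) and the last row M8(−7.5, 0, −5).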
s2, acquiring a sound source signal;
the double-base-station microphone array in the step 1 is connected with a 16-channel data synchronous acquisition system, the sampling rate is 1Mhz, the quantization bit number is 16 bits, and sound field information of a target is acquired.
S3, variational modal decomposition (VMD) is applied to sparsify the spectra of the 16 microphone acoustic signals vn (n = 1, 2, …, 16) acquired in step S2, adaptively decomposing each signal into several modal frequency components; the highest-frequency modal component μn of each microphone is taken as the input for subsequent time-difference extraction.
S4, extracting high-precision time difference information in the single base station;
s4.1, taking base station 1 as an example: from the highest-frequency modal components μ1~μ8 of microphone nodes M1~M8, the long/short time window method (STA/LTA) extracts the arrival times t1~t8 of the first arrival transmitted from the sound source to each microphone sensor node;
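The STA/LTA pick of S4.1 can be sketched as follows (illustrative only; the window lengths and trigger threshold are assumed values, not taken from the patent):

```python
import numpy as np

def sta_lta_pick(x, fs, sta_win=0.001, lta_win=0.01, thresh=3.0):
    """First-arrival pick via the short-term-average / long-term-average
    energy ratio. Window lengths (1 ms / 10 ms) and threshold are
    illustrative assumptions. Returns the trigger time in seconds."""
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    e = np.concatenate(([0.0], np.cumsum(x ** 2)))   # prefix sums of energy
    sta = (e[ns:] - e[:-ns]) / ns                    # trailing short-window mean
    lta = (e[nl:] - e[:-nl]) / nl                    # trailing long-window mean
    ratio = sta[nl - ns:] / (lta + 1e-12)            # align both windows' ends
    onset = int(np.argmax(ratio > thresh)) + nl - 1  # first sample over threshold
    return onset / fs
```

On a noisy trace with a burst onset, the ratio spikes within a few samples of the first arrival, giving the coarse times t1~t8.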
S4.2, identifying the frequencies of the highest-frequency modal components μ1~μ8 of all microphone nodes and calculating their common frequency f0;
S4.3, constructing a narrow-band filter and extracting from the modal components μ1~μ8 the frequency components μ′1~μ′8 corresponding to f0;
S4.4, calculating the single-period phase difference based on a correlation method;
the microphone M1Set as reference node, utilizing mu'1Mu's'nAnd (n ═ 2,3, …,8) autocorrelation and cross-correlation functions are calculated to obtain a reference node M1The phase difference between the microphone and other microphone nodes is as follows:
wherein,as a function of the autocorrelation of the microphone nodes,is a cross-correlation function of two microphone nodes.
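One way to realize the correlation-based phase difference of S4.4 is sketched below (a reconstruction, since the original formula image is not reproduced: for two narrow-band signals of the same frequency, the zero-lag cross-correlation normalized by the autocorrelations equals the cosine of their phase offset):

```python
import numpy as np

def phase_difference(ref, sig):
    """Single-period phase difference |dphi| in [0, pi] between two
    narrow-band signals of the same frequency, from zero-lag auto- and
    cross-correlations. The arccos form returns only the magnitude of
    the phase offset; the sign must come from other information."""
    rxx = np.dot(ref, ref)   # autocorrelation of reference at lag 0
    ryy = np.dot(sig, sig)   # autocorrelation of the other node at lag 0
    rxy = np.dot(ref, sig)   # cross-correlation at lag 0
    return np.arccos(np.clip(rxy / np.sqrt(rxx * ryy), -1.0, 1.0))
```

Summing over a whole number of periods makes the oscillatory terms cancel, so the estimate is exact for clean sinusoids.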
S4.5, using the first-arrival time information t1~t8 from S4.1 and the phase difference information Δφ1n′ (n′ = 2, 3, …, 8) extracted in S4.4, the phase ambiguity is removed.
Let microphone M1 have coordinates (x1, y1, z1) and the other microphones (xn′, yn′, zn′), with a sound propagation speed of 340 m/s. The coarse time difference between the reference node and each microphone node is:

τ1n′ = t1 − tn′ (2)

According to the geometric relation of the base station 1 microphone positions and wave-field propagation theory, the time difference Δτ1n′ corresponding to the corrected phase difference Δφ′1n′ satisfies:

Δτ1n′ = (N + Δφ′1n′/2π) / f0 (3)

where Δφ′1n′ is the phase difference between reference node M1 and each microphone node, f0 is the high-frequency signal frequency shared by the microphone nodes, N is the integer number of whole periods of the frequency component, and c is the speed of sound.
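The ambiguity removal of S4.5 can be sketched as follows: the coarse first-arrival time difference selects the integer cycle count N, and the fine phase difference refines the estimate (the rounding rule is an assumption consistent with the description):

```python
import numpy as np

def resolve_time_difference(dphi, tau_coarse, f0):
    """Combine a fine but 2*pi-ambiguous phase difference `dphi` with a
    coarse first-arrival time difference `tau_coarse`: choose the integer
    cycle count N so that (N + dphi/2pi)/f0 lands nearest tau_coarse.
    Valid when the coarse estimate errs by less than half a period."""
    n = np.round(tau_coarse * f0 - dphi / (2 * np.pi))
    return (n + dphi / (2 * np.pi)) / f0
```

For example, with f0 = 8 kHz a coarse estimate accurate to better than 62.5 µs is enough to fix N, after which the phase term supplies sub-microsecond resolution.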
Repeating S4 for base station 2 yields the time differences Δτ9n″ (n″ = 10, 11, …, 16) between reference node M9 and the other microphone nodes.
S5, the single base station calculates the direction information of the sound source;
From the high-precision time differences Δτ1n′ (n′ = 2, 3, …, 8) extracted in S4, combined with the position coordinates of the base station 1 microphones, the following system of equations is obtained:

c·Δτ1n′ = (xn′ − x1)·cos β1·cos α1 + (yn′ − y1)·cos β1·sin α1 + (zn′ − z1)·sin β1 (4)

where c = 340 m/s, microphone M1 has coordinates (x1, y1, z1), and microphone Mn′ has coordinates (xn′, yn′, zn′). Solving equation (4) by the least squares method yields the azimuth angle α1 and pitch angle β1 of the sound source relative to reference node M1 of base station 1.
Repeating S5 yields the azimuth angle α9 and pitch angle β9 of the sound source relative to reference node M9 of base station 2.
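Under the far-field assumption, the least-squares direction solve of formula (4) can be sketched for a single hypothetical base station centered at the origin (the array here reuses the azimuth/pitch table with d = 5 cm, in metres; the function name and demo values are illustrative):

```python
import numpy as np

def doa_least_squares(mics, dtau, c=340.0):
    """Solve c*dtau_1n = (M_n - M_1) . u for the unit direction vector u
    pointing from the array toward the source (far field), then convert
    to azimuth/pitch in degrees. mics in metres, reference mic first."""
    A = mics[1:] - mics[0]                   # 7 x 3 baseline matrix
    b = c * np.asarray(dtau)                 # range differences
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    u /= np.linalg.norm(u)
    return np.degrees(np.arctan2(u[1], u[0])), np.degrees(np.arcsin(u[2]))

# demo: synthetic far-field source at azimuth 40 deg, pitch 10 deg
d = 0.05
ang = [(0, 90), (330, 45), (90, 45), (210, 45),
       (30, -45), (210, -45), (270, -45), (0, -90)]
mics = np.array([[d*np.cos(np.radians(a))*np.cos(np.radians(e)),
                  d*np.sin(np.radians(a))*np.cos(np.radians(e)),
                  d*np.sin(np.radians(e))] for a, e in ang])
u_true = np.array([np.cos(np.radians(10))*np.cos(np.radians(40)),
                   np.cos(np.radians(10))*np.sin(np.radians(40)),
                   np.sin(np.radians(10))])
dtau = (mics[1:] - mics[0]) @ u_true / 340.0   # noiseless time differences
az, el = doa_least_squares(mics, dtau)
```

With noiseless time differences, the solve recovers the bearing exactly; measurement noise enters through `dtau` and is averaged down by the overdetermined system.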
S6, performing bionic double-ear direction finding cross joint positioning to quickly obtain a sound source positioning area;
as shown in FIG. 3, let the sound source be at P(x, y, z). Base station 1, with microphone M1 as reference point, measures the sound source at azimuth α1 and pitch β1; base station 2, with microphone M9 as reference point, measures azimuth α9 and pitch β9. Two straight lines L1 and L9 are thereby determined.
Owing to measurement errors, the two lines rarely intersect at a single point and in general are skew. With microphone M1 at (x1, y1, z1) and microphone M9 at (x9, y9, z9), the equations of lines L1 and L9 follow from the measured direction angles, e.g. for L1:

(x − x1)/(cos β1·cos α1) = (y − y1)/(cos β1·sin α1) = (z − z1)/sin β1 (5)

which converts to the parametric form

Li = ri + ti·vi (6)

where ri is the reference microphone position and vi is the unit bearing vector of line Li. Let the distance from the sound source point P to line L1 be D1 and to line L9 be D2. The position (x, y, z) of P is found by distance minimization, such that the sum D = D1 + D2 is minimal. This determines a spherical region centered at P(x, y, z) whose radius is set by the residual distance D;
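The crossed-bearing step reduces to the closest approach of the two skew lines Li = ri + ti·vi of Eq. (6). A minimal sketch using the standard closest-point formulas (the midpoint choice is an assumption consistent with minimizing D1 + D2; non-parallel bearings assumed):

```python
import numpy as np

def skew_line_midpoint(p1, v1, p2, v2):
    """Closest-approach midpoint P of two (generally skew) bearing lines
    L_i = p_i + t_i v_i, plus the gap D between them, which sizes the
    spherical search region handed to the fine scan. Assumes the lines
    are not parallel (denominator nonzero)."""
    v1, v2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)
    w = p1 - p2
    a, b, c_ = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = v1 @ w, v2 @ w
    den = a * c_ - b * b                  # zero only for parallel lines
    t1 = (b * e - c_ * d) / den           # parameter of closest point on L1
    t2 = (a * e - b * d) / den            # parameter of closest point on L2
    q1, q2 = p1 + t1 * v1, p2 + t2 * v2
    return (q1 + q2) / 2, np.linalg.norm(q1 - q2)
```

For two perpendicular skew lines separated by a unit gap, the midpoint sits halfway along the common perpendicular.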
s7, fine scanning and positioning are carried out based on the double base stations, and the specific position of the sound source is obtained;
assuming that the sound source obtained at S6 is a spherical region, its center is P (x, y, z) and its radius is P (x, y, z)And forming a microphone array under a coordinate system by the double-base-station microphone array, and performing fine scanning and positioning on the spherical area. And reconstructing sound field energy corresponding to the search area by using an SRP-PHAT (joint controllable response frequency and phase transformation) algorithm. Where the point of energy maximum is the specific location of the sound source.
Assume point P′(x′, y′, z′) is a scanned candidate source position in space; its corresponding energy is defined as:

E(x′, y′, z′) = Σ (q = 2, 3, …, 16) R1q(τ1q(x′, y′, z′)) (7)

where R1q(·) denotes the generalized cross-correlation curve of the sound source signal perceived by the 1st microphone and the q-th microphone (q = 2, 3, …, 16), and τ1q(x′, y′, z′) denotes the time difference between the 1st microphone (x1, y1, z1) and the q-th microphone (xq, yq, zq) for scanning position P′, given by equation 8:

τ1q(x′, y′, z′) = ( √((x′ − x1)² + (y′ − y1)² + (z′ − z1)²) − √((x′ − xq)² + (y′ − yq)² + (z′ − zq)²) ) / c (8)

where c = 340 m/s.
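The scan of equation 8's geometric delays against generalized cross-correlation curves can be sketched as follows (an illustrative Python sketch, not the patent's implementation: the GCC-PHAT weighting, the ±1-sample lag tolerance, and the 4-microphone demo geometry are assumptions):

```python
import numpy as np

def gcc_phat(x, ref):
    """GCC-PHAT correlation curve between a reference channel and channel
    x; returned index i corresponds to lag (i - len(x)) samples."""
    n = len(x) + len(ref)
    cs = np.fft.rfft(ref, n) * np.conj(np.fft.rfft(x, n))
    cs /= np.abs(cs) + 1e-12                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(cs, n)
    return np.concatenate((cc[-(n // 2):], cc[:n // 2]))

def srp_phat(signals, mics, grid, fs, c=340.0):
    """Steered-response scan: for each candidate point, sum every pair's
    GCC-PHAT value at the geometric delay tau_1q of equation 8; the point
    with maximum summed energy is the source estimate."""
    half = signals.shape[1]
    ccs = [gcc_phat(signals[q], signals[0]) for q in range(1, len(signals))]
    best, best_e = None, -np.inf
    for p in grid:
        dist = np.linalg.norm(mics - p, axis=1)
        e = 0.0
        for q, cc in enumerate(ccs, start=1):
            lag = int(round(fs * (dist[0] - dist[q]) / c)) + half
            e += cc[max(lag - 1, 0):lag + 2].max()   # +-1 sample tolerance
        if e > best_e:
            best, best_e = p, e
    return np.asarray(best)
```

In practice `grid` would be a dense sampling of the spherical region from S6; here any candidate whose predicted delays match all correlation peaks dominates the scan.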
The above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the invention, and such modifications and variations shall also fall within the protection scope of the invention.
Claims (7)
1. A single sound source target passive localization method is characterized by comprising the following steps:
s1, constructing an ultra-micro-baseline dual-base-station microphone array: the microphone detection array consists of 2 base stations; each base station uses 8 microphones symmetrically distributed above and below and fixed on a bracket, each at distance d from the bracket's center point; one microphone is placed above and one below the center on the Z axis, and the remaining six are evenly distributed around the Z axis in two groups of three, each at a 45° angle to the Z axis;
s2, acquiring sound source signals acquired by the double-base-station microphone array through the data synchronous acquisition system;
s3, applying variational modal decomposition to sparsify the spectrum of each sound source signal acquired in step S2, adaptively decomposing each signal into several modal frequency components, and taking the highest-frequency modal component μn of each microphone as the input for subsequent time-difference extraction;
s4, extracting high-precision time difference information inside each base station according to the highest-frequency modal component of each microphone;
s5, obtaining sound source direction information corresponding to each base station according to the high-precision time difference information extracted in S4 and the position coordinate relation of the microphone of the base station;
s6, according to the sound source direction information and the microphone position coordinates corresponding to each base station obtained in the S5, a cross joint positioning method is adopted to quickly obtain a sound source positioning area;
and S7, treating the region obtained in S6 as a spherical region, combining the two base stations' microphones into a single array in one coordinate system, finely scanning the spherical region, and reconstructing the corresponding sound field energy inside it with a joint steered-response power and phase transform (SRP-PHAT) algorithm; the point of maximum energy is the specific position of the sound source.
2. The passive single-sound-source target positioning method according to claim 1, characterized in that the 2 base stations adopt a bionic human-physiology binaural structure, are jointly established in the same coordinate system, and are arranged on the two shoulders.
3. The method for passively positioning a single sound source target according to claim 1, wherein the step S4 is as follows:
s4.1, for each base station: from the highest-frequency modal components μ1~μ8 of microphone nodes M1~M8, the long/short time window method extracts the arrival times t1~t8 of the first arrival transmitted from the sound source to each microphone sensor node;
S4.2, identifying the frequencies of the highest-frequency modal components μ1~μ8 of all microphone nodes and calculating their common frequency f0;
S4.3, constructing a narrow-band filter and extracting from the modal components μ1~μ8 the frequency components μ′1~μ′8 corresponding to f0;
S4.4, calculating the single-period phase difference by a correlation method: a microphone on the Z axis of each base station is taken as the reference node, and the phase differences of the other microphones in the base station relative to the reference node are calculated;
s4.5, using the first-arrival time information t1~t8 from S4.1 and the phase difference information extracted in S4.4, the phase ambiguity is removed and the time differences of the other microphones in each base station relative to the reference node are obtained.
4. The method of claim 3, wherein the single sound source target is passively located as follows:
let the reference-node microphone M1 of each base station have coordinates (x1, y1, z1) and the other microphones (xn′, yn′, zn′), with a sound propagation speed of 340 m/s; the coarse time difference between the reference node and each microphone node is:
τ1n′ = t1 − tn′ (2)
according to the geometric relation of the microphone positions in the base station and wave-field propagation theory, the time difference Δτ1n′ corresponding to the corrected phase difference Δφ′1n′ satisfies:
Δτ1n′ = (N + Δφ′1n′/2π) / f0 (3)
where Δφ′1n′ is the phase difference between reference node M1 and each microphone node, f0 is the high-frequency signal frequency shared by the microphone nodes, N is the integer number of whole periods of the frequency component, c is the speed of sound, and n′ indexes the microphones within a single base station.
5. The passive localization method of a single sound source target according to claim 4, wherein from the high-precision time differences Δτ1n′ (n′ = 2, 3, …, 8) extracted in S4, combined with the base station microphone position coordinates, the following system of equations is obtained:
c·Δτ1n′ = (xn′ − x1)·cos β1·cos α1 + (yn′ − y1)·cos β1·sin α1 + (zn′ − z1)·sin β1 (4)
where c = 340 m/s, microphone M1 has coordinates (x1, y1, z1), and microphone Mn′ has coordinates (xn′, yn′, zn′); solving equation (4) by the least squares method yields the azimuth angle α1 and pitch angle β1 of the sound source relative to reference node M1.
6. The method as claimed in claim 3, wherein in S6, the position of the sound source is P (x, y, z).
7. The method for passively positioning a single sound source target according to claim 5, wherein S7 specifically comprises:
assuming that point P′(x′, y′, z′) is a scanned candidate source position in space, its corresponding energy is defined as:
E(x′, y′, z′) = Σ (q = 2, 3, …, 16) R1q(τ1q(x′, y′, z′)) (7)
where R1q(·) denotes the generalized cross-correlation curve of the sound source signal perceived by the 1st microphone and the q-th microphone (q = 2, 3, …, 16), and τ1q(x′, y′, z′) denotes the time difference between the 1st microphone (x1, y1, z1) and the q-th microphone (xq, yq, zq) for scanning position P′, equation 8:
τ1q(x′, y′, z′) = ( √((x′ − x1)² + (y′ − y1)² + (z′ − z1)²) − √((x′ − xq)² + (y′ − yq)² + (z′ − zq)²) ) / c (8)
where c = 340 m/s.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111599748.4A CN114252844A (en) | 2021-12-24 | 2021-12-24 | Passive positioning method for single sound source target |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114252844A true CN114252844A (en) | 2022-03-29 |
Family
ID=80797442
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111599748.4A Pending CN114252844A (en) | 2021-12-24 | 2021-12-24 | Passive positioning method for single sound source target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114252844A (en) |
- 2021-12-24: application CN202111599748.4A filed (CN); publication CN114252844A; status: Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115171227A (en) * | 2022-09-05 | 2022-10-11 | 深圳市北科瑞声科技股份有限公司 | Living body detection method, living body detection device, electronic apparatus, and storage medium |
CN115171227B (en) * | 2022-09-05 | 2022-12-27 | 深圳市北科瑞声科技股份有限公司 | Living body detection method, living body detection device, electronic apparatus, and storage medium |
CN117406174A (en) * | 2023-12-15 | 2024-01-16 | 深圳市声菲特科技技术有限公司 | Method, device, equipment and storage medium for accurately positioning sound source |
CN117406174B (en) * | 2023-12-15 | 2024-03-15 | 深圳市声菲特科技技术有限公司 | Method, device, equipment and storage medium for accurately positioning sound source |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114252844A (en) | Passive positioning method for single sound source target | |
CN108680156B (en) | Robot positioning method for multi-sensor data fusion | |
CN107957574B (en) | Time division foundation MIMO landslide radar imaging method based on IFFT and hybrid matching pursuit | |
CN105682221A (en) | Passive positioning system based on ultra wide band (UWB) and positioning method | |
CN104035065A (en) | Sound source orienting device on basis of active rotation and method for applying sound source orienting device | |
CN108020812B (en) | Two-dimensional DOA estimation method based on special three-parallel line array structure | |
CN107589399A (en) | Based on the relatively prime array Wave arrival direction estimating method for sampling virtual signal singular values decomposition more | |
CN103796304B (en) | One kind is based on virtual training collection and markovian underground coal mine localization method | |
CN105223551A (en) | A kind of wearable auditory localization tracker and method | |
WO2022151511A1 (en) | Cross-correlation tensor-based three-dimensional coprime cubic array direction of arrival estimation method | |
CN103792513B (en) | A kind of thunder navigation system and method | |
CN106019215A (en) | Nested array direction-of-arrival angle estimation method based on fourth-order cumulants | |
CN105866777B (en) | The bistatic PS-InSAR three-dimensional deformations inversion method of the multi-period navigation satellite of multi-angle | |
CN109901112A (en) | It is positioned simultaneously based on the acoustics that multiple channel acousto obtains and builds drawing method | |
CN103472450A (en) | Non-uniform space configuration distributed SAR moving target three-dimensional imaging method based on compressed sensing | |
CN112684414A (en) | Unmanned aerial vehicle counter-braking method and device | |
CN111181673B (en) | 3D wireless channel modeling method based on double-mobile scene | |
CN111199281B (en) | Short wave single station direct positioning deviation compensation method based on geographical coordinate airspace position spectrum | |
CN104683949B (en) | It is a kind of to be applied to the mixing method for self-locating based on aerial array in Wireless Mesh network | |
CN106646421B (en) | MIMO radar waveform co-design method based on three-dimensional nonuniform noise | |
CN106019266A (en) | Gunfire distance determining and projectile velocity measuring method | |
CN101403791A (en) | Fast real-time space spectrum estimation ultra-resolution direction-finding device and method thereof | |
CN107505598A (en) | A kind of high burst localization method based on three basic matrixs | |
CN108594217A (en) | A kind of extraterrestrial target pitching and orientation two dimension angular closed loop tracking system | |
CN116299182A (en) | Sound source three-dimensional positioning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||