CN116184320A - Unmanned aerial vehicle acoustic positioning method and unmanned aerial vehicle acoustic positioning system - Google Patents


Info

Publication number
CN116184320A
CN116184320A (application CN202310467524.0A; granted as CN116184320B)
Authority
CN
China
Prior art keywords
acoustic, unmanned aerial vehicle, signal, array
Prior art date
Legal status (an assumption, not a legal conclusion): Granted
Application number
CN202310467524.0A
Other languages
Chinese (zh)
Other versions
CN116184320B (en)
Inventor
尹永刚
张劲
施钧辉
任丹阳
王钰琪
唐昆
靳伯骜
王少博
姚泽炜
陈渊源
高大
Current Assignee
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202310467524.0A
Publication of CN116184320A
Application granted
Publication of CN116184320B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 — Position-fixing by co-ordinating two or more direction or position line determinations; position-fixing by co-ordinating two or more distance determinations
    • G01S 5/18 — Position-fixing using ultrasonic, sonic, or infrasonic waves
    • G01S 5/22 — Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks

Abstract

The application relates to an unmanned aerial vehicle acoustic positioning method and an unmanned aerial vehicle acoustic positioning system, wherein the method comprises the following steps: acquiring a first acoustic signal based on a first sensing array comprising sensing devices, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal; acquiring a second acoustic signal based on a second sensing array formed from the sensing devices near the first position, and determining a second position of the target unmanned aerial vehicle based on the second acoustic signal. This scheme solves the problem in the related art that the position of a target unmanned aerial vehicle cannot be determined quickly, which creates safety hazards; it determines the position information of the target unmanned aerial vehicle quickly and at low computational cost, thereby improving safety protection.

Description

Unmanned aerial vehicle acoustic positioning method and unmanned aerial vehicle acoustic positioning system
Technical Field
The application relates to the technical field of acoustic detection, in particular to an unmanned aerial vehicle acoustic positioning method and an unmanned aerial vehicle acoustic positioning system.
Background
As the rotary-wing drone market continues to expand, safety hazards and accidents caused by such drones are frequently reported. Unauthorized ("black") flights can leak state secrets and personal privacy and threaten aviation and personal safety. Strengthening research on detecting and recognizing unauthorized drones, improving airspace early-warning and reconnaissance capability, and building a defense system against "low, slow, small" target intrusion are urgent tasks currently facing China.
Current means of detecting "low, slow, small" target drones mainly include radar, optical, radio, and acoustic detection technologies. Radar detection is insensitive to small drones and has high cost and large power consumption; the performance of optical imaging is easily degraded by ambient light, rain and fog, atmospheric turbulence, and similar factors; radio methods cannot perceive a drone in an electromagnetically silent state. Passive acoustic detection needs no active emission device, is compact, and conceals well; because sound is a mechanical wave, it consumes little power and is immune to electromagnetic interference; it can work around the clock at low cost and can be deployed in large-scale networks to achieve full-space, full-area monitoring.
However, current acoustic drone detection methods require a large amount of computation to determine the position of a target drone; they are time-consuming and cannot quickly determine whether the target drone is located in a sensitive area.
Disclosure of Invention
Based on the foregoing, it is necessary to provide an unmanned aerial vehicle acoustic positioning method, an unmanned aerial vehicle acoustic positioning system, and a computer-readable storage medium capable of quickly locating a target unmanned aerial vehicle.
In a first aspect, the present application provides an acoustic positioning method for an unmanned aerial vehicle. The method comprises the following steps:
acquiring a first acoustic signal based on a first sensing array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first sensing array comprises sensing devices; acquiring a second acoustic signal based on a second sensing array, and determining a second position of the target unmanned aerial vehicle based on the second acoustic signal, wherein the second sensing array comprises the sensing devices near the first position.
In one embodiment, acquiring a second acoustic signal based on a second sensing array, determining a second location of the target drone based on the second acoustic signal, includes: determining the signal intensity ratio of each second sound signal in a plurality of second sound signals; and correcting a third acoustic signal acquired by a third sensing array according to the signal intensity ratio of each second acoustic signal, and determining the second position of the target unmanned aerial vehicle based on the corrected third acoustic signal.
In one embodiment, before the acquiring the first acoustic signal based on the first sensing array, determining the first position of the target drone, the method further includes: and determining that the target unmanned aerial vehicle exists under the condition that the first sound signal comprises a first target signal and a second target signal, wherein the second target signal comprises a higher harmonic frequency signal of the first target signal.
In one embodiment, the higher order comprises the third order and above.
In one embodiment, the sensing means comprises at least one microphone; acquiring a first acoustic signal based on a first sensing array, determining a first position of a target unmanned aerial vehicle, comprising: and starting one microphone of each sensing device in the first sensing array, and acquiring the first sound signal based on the started microphone.
In one embodiment, the sensing device comprises a microphone and an acoustic resonator, wherein the microphone obtains an acoustic signal via the acoustic resonator, and the acoustic resonator is used for enhancing a first target signal in the acoustic signal.
In one embodiment, the first order resonant frequency of the acoustic resonator is equal to the fundamental frequency of the first target signal.
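If the acoustic resonator is a simple Helmholtz cavity — the patent does not specify its geometry, so this is only an illustrative assumption — the first-order resonance that should match the fundamental of the first target signal can be sketched with the classic Helmholtz formula; all dimensions below are hypothetical:

```python
import math

def helmholtz_resonance(c=343.0, neck_area=1e-4, cavity_volume=1e-5, neck_length=0.01):
    """First-order (Helmholtz) resonance frequency in Hz.

    c: speed of sound in air (m/s); neck_area (m^2); cavity_volume (m^3);
    neck_length (m), used here without the usual end correction."""
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (cavity_volume * neck_length))

# The cavity would be tuned so this matches the fundamental of the
# first target signal (the rotor's blade-passing tone).
f0 = helmholtz_resonance()
```

In practice the cavity dimensions would be chosen so that the resonance lands on the rotor tone to be enhanced; an end-corrected neck length would shift the result slightly.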
In a second aspect, the present application further provides an acoustic positioning system for a drone, comprising:
the unmanned aerial vehicle acoustic positioning system comprises an induction array and a controller, wherein the induction array comprises a first induction array and a second induction array, the first induction array comprises an induction device, the second induction array comprises an induction device, the first induction array is used for acquiring a first acoustic signal, the second induction array is used for acquiring a second acoustic signal, and the controller is used for executing an unmanned aerial vehicle acoustic positioning method based on the acquired acoustic signal.
In one embodiment, the sensing array further comprises a third sensing array for acquiring a third acoustic signal; the controller is further used for determining the signal intensity ratio of each second sound signal in a plurality of second sound signals; and correcting a third acoustic signal acquired by a third sensing array according to the signal intensity ratio of each second acoustic signal, and determining the second position of the target unmanned aerial vehicle based on the corrected third acoustic signal.
In one embodiment, the controller is further configured to: and determining that the target unmanned aerial vehicle exists under the condition that the first sound signal comprises a first target signal and a second target signal, wherein the second target signal comprises a higher harmonic frequency signal of the first target signal.
In one embodiment, the higher order comprises the third order and above.
In one embodiment, the sensing means comprises at least one microphone; the controller is used for starting one microphone of each sensing device in the first sensing array, and acquiring the first sound signal based on the started microphone.
In one embodiment, the sensing device comprises a microphone and an acoustic resonator, wherein the microphone obtains an acoustic signal via the acoustic resonator, and the acoustic resonator is used for enhancing a first target signal in the acoustic signal.
In one embodiment, the first order resonant frequency of the acoustic resonator is equal to the fundamental frequency of the first target signal.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a first acoustic signal based on a first sensing array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first sensing array comprises sensing devices; acquiring a second acoustic signal based on a second sensing array, and determining a second position of the target unmanned aerial vehicle based on the second acoustic signal, wherein the second sensing array comprises the sensing devices near the first position.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the following steps:
acquiring a first acoustic signal based on a first sensing array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first sensing array comprises sensing devices; acquiring a second acoustic signal based on a second sensing array, and determining a second position of the target unmanned aerial vehicle based on the second acoustic signal, wherein the second sensing array comprises the sensing devices near the first position.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a first acoustic signal based on a first sensing array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first sensing array comprises sensing devices; acquiring a second acoustic signal based on a second sensing array, and determining a second position of the target unmanned aerial vehicle based on the second acoustic signal, wherein the second sensing array comprises the sensing devices near the first position.
The unmanned aerial vehicle acoustic positioning method, the unmanned aerial vehicle acoustic positioning device, the computer equipment, the storage medium, and the computer program product described above acquire a first acoustic signal based on a first sensing array comprising sensing devices and determine a first position of a target unmanned aerial vehicle based on the first acoustic signal; they then acquire a second acoustic signal based on a second sensing array formed from the sensing devices near the first position and determine a second position of the target unmanned aerial vehicle based on the second acoustic signal. This scheme solves the problem in the related art that the position of a target unmanned aerial vehicle cannot be determined quickly, which creates safety hazards; it determines the position information of the target unmanned aerial vehicle quickly and at low computational cost, thereby improving safety protection.
Drawings
FIG. 1 is a diagram of an application environment for a positioning method in one embodiment;
FIG. 2 is a schematic diagram of a positioning system in one embodiment;
FIG. 3 is a flow chart of a positioning method in one embodiment;
FIG. 4 is an acoustic map and an acoustic gradient map in one embodiment;
FIG. 5 is a schematic diagram of a distributed acoustic array system in one embodiment;
FIG. 6 is an acoustic map and an acoustic gradient map in one embodiment;
FIG. 7 is a schematic diagram of a positioning system in one embodiment;
FIG. 8 is a schematic diagram illustrating a positional relationship between a second sensing array and a target unmanned aerial vehicle according to an embodiment;
FIG. 9 is a graph of recognition results based on a second sound signal in one embodiment;
FIG. 10 is a schematic diagram illustrating a positional relationship between a third sensing array and a target unmanned aerial vehicle according to an embodiment;
FIG. 11 is a diagram illustrating a recognition result corresponding to a third sensor array according to an embodiment;
FIG. 12 is a schematic diagram of a sensing array and a target unmanned aerial vehicle in one embodiment;
FIG. 13 is a graph of recognition results for a target drone in one embodiment;
FIG. 14 is a flow chart of a positioning method in one embodiment;
FIG. 15 is a schematic diagram of a distributed acoustic array system in one embodiment;
FIG. 16 is a schematic diagram of an induction device according to an embodiment;
FIG. 17 is a simulation result of the frequency response of a cavity enhancement microphone in one embodiment;
fig. 18 is an internal structural view of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The positioning method provided by the embodiment of the application can be applied to an application environment shown in fig. 1, wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, Internet-of-Things devices, and portable wearable devices, where the Internet-of-Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like, and the portable wearable devices may be smart watches, smart bracelets, headsets, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In another embodiment, please refer to fig. 2, fig. 2 is a schematic diagram of a positioning system according to the present embodiment.
As shown in fig. 2, the positioning system includes a sensing array 210 including a first sensing array 211 including sensing devices and a second sensing array 212 including sensing devices, and a controller 220.
The sensing array is formed from at least one sensing device. Some of the sensing devices in the array form the first sensing array and some form the second sensing array; the two arrays may share the same sensing devices or comprise entirely different ones, which this embodiment does not limit.
With continued reference to fig. 1, the sensing array and the sensing device shown in this embodiment may be the terminal shown in fig. 1; the controller may be a terminal as shown in fig. 1, or may be a server as shown in fig. 1, which is not limited to this embodiment, and may be configured according to practical situations.
In practical application, the first sensing array is used for acquiring a first sound signal, and the second sensing array is used for acquiring a second sound signal; the sensing array transmits the acquired acoustic signals to a controller, which performs a positioning method based on the acquired acoustic signals.
Referring to fig. 3, fig. 3 is a flowchart of a positioning method according to the present embodiment, and the controller shown in fig. 2 may perform the positioning method shown in fig. 3, where the method includes the following steps:
s301, acquiring a first acoustic signal based on the first induction array, and determining a first position of the target unmanned aerial vehicle based on the first acoustic signal.
The target unmanned aerial vehicle may be any unmanned aerial vehicle that generates the first acoustic signal, for example a rotary-wing drone.
The first position of the target drone may be a general position of the target drone, for example, may be a relative position with respect to the first sensing array, for example, may be: the target drone is located at the southeast corner of the first sensing array.
S302, a second sound signal is acquired based on a second sensing array, and a second position of the target unmanned aerial vehicle is determined based on the second sound signal.
Wherein the second sensing array comprises sensing means adjacent to the first location.
The second location of the target drone may be location information comprising a bearing and a distance, both relative to the second sensing array.
According to the above positioning method, a first acoustic signal is acquired based on a first sensing array comprising sensing devices, and a first position of the target unmanned aerial vehicle is determined based on the first acoustic signal; a second acoustic signal is then acquired based on a second sensing array formed from the sensing devices near the first position, and a second position of the target unmanned aerial vehicle is determined based on the second acoustic signal. This solves the problem in the related art that the position of a target unmanned aerial vehicle cannot be determined quickly, which creates safety hazards, and achieves low computational cost, fast determination of the target's position information, and improved safety protection.
It should be noted that the second sensing array comprises the sensing devices near the first position; the first sensing array may contain the second sensing array, the two may be the same array, or they may have no containment relationship at all, which this embodiment does not limit.
In another embodiment shown, in step 301 above, determining a first location of the target drone based on the first acoustic signal includes:
and generating an acoustic map and an acoustic gradient map based on the first acoustic signal, and judging a first position of the target unmanned aerial vehicle according to the acoustic map and the acoustic gradient map.
For example, referring to fig. 4, fig. 4 is an acoustic map and an acoustic gradient map shown in the present embodiment. As shown in fig. 4, 100 subarrays are uniformly distributed over the depicted region; the figure encodes the position of each subarray, the relative bearing of the target unmanned aerial vehicle with respect to each subarray, and the intensity of the acoustic signal generated by the target unmanned aerial vehicle as acquired by each subarray.
For example, the location of an arrow may represent a subarray; the direction the arrow points indicates the relative bearing of the target unmanned aerial vehicle with respect to that subarray; and the acoustic contour lines indicate the intensity of the acoustic signal acquired by the subarray.
Alternatively, the acoustic map and the acoustic gradient map may be replaced by an acoustic heat map, which is not limited to this embodiment.
It should be noted that each subarray in the figure may be a single sensing device, in which case the array shown is a sensing array formed from those devices. Alternatively, referring to fig. 5, a schematic diagram of a distributed acoustic array system in this embodiment, each subarray may instead represent a sensing array comprising a plurality of sensing devices; the array shown is then a larger-scale array formed from multiple sensing arrays.
By this method, since generating the acoustic map, acoustic gradient map, or acoustic heat map consumes few resources (the computation is simple and light), the relative position of the target unmanned aerial vehicle with respect to the sensing array or its subarrays can be determined quickly and at low cost.
For example, with continued reference to FIG. 4, since the arrows in FIG. 4 all point toward positions (0, 45)-(0, 55), the target drone is located to the left of the sensing array at a y-coordinate in the range of 45-55 m. It should be noted that this first position is only a rough bearing.
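One hypothetical way to turn the arrows of the acoustic map into a rough first position is a least-squares intersection of the bearing lines; the patent does not prescribe this computation, so the sketch below is only illustrative:

```python
import numpy as np

def rough_position(origins, bearings):
    """Least-squares intersection of 2D bearing lines.

    origins: (M, 2) subarray positions; bearings: angles in radians.
    Minimizes the summed squared perpendicular distance of a point p to
    every line o_i + t*d_i via the normal equations sum(P_i) p = sum(P_i o_i),
    where P_i projects onto the direction orthogonal to bearing d_i."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for o, th in zip(np.asarray(origins, dtype=float), bearings):
        d = np.array([np.cos(th), np.sin(th)])
        P = np.eye(2) - np.outer(d, d)   # projector orthogonal to the bearing
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Subarrays at (0, 0) and (100, 0), both bearing toward the point (0, 50):
pos = rough_position([(0.0, 0.0), (100.0, 0.0)],
                     [np.pi / 2, np.arctan2(50.0, -100.0)])
```

With noisy bearings the solution degrades gracefully, since it is a least-squares fit rather than an exact intersection.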
In another embodiment, please refer to fig. 6, which shows an acoustic map and acoustic gradient map for this embodiment. Taking fig. 4 as the acoustic map based on the acoustic signal received at 0 s, fig. 6 shows the acoustic map based on the acoustic signal received at 1 s.
From the rise in acoustic pressure in the acoustic map, it can be judged that the drone is approaching from the left; from the gradient map, it can be judged that the drone is moving along the gradient direction, from directly left toward a lower position.
Further, before executing step S302, the method further includes: determining the subarrays adjacent to the first position, estimating the spatial position of the drone with those adjacent subarrays to narrow the spatial search range for precise tracking, and thereby determining the second sensing array.
Firstly, determining the approximate position of a target unmanned aerial vehicle based on a first induction array with a larger range; and selecting sensing devices around the approximate position of the target unmanned aerial vehicle to form a second sensing array, and determining the accurate position of the target unmanned aerial vehicle based on the second sensing array.
On the one hand, the first sensing array coarsely estimates the position of the target unmanned aerial vehicle while the second sensing array precisely acquires its second position, which reduces computational cost and energy consumption; on the other hand, because the second array is formed from sensing devices near the first position, it has better angular sensitivity to the target unmanned aerial vehicle, making the second position easier to determine.
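A minimal sketch of this coarse-to-fine step, assuming each sensing device is described by planar coordinates and the selection is a simple radius filter around the first position (the radius and the data layout are hypothetical, not from the patent):

```python
def select_second_array(devices, first_pos, radius=30.0):
    """Form the second (fine-tracking) array from sensing devices whose
    distance to the rough first position is within `radius` metres."""
    fx, fy = first_pos
    return [dev for dev in devices
            if ((dev["x"] - fx) ** 2 + (dev["y"] - fy) ** 2) ** 0.5 <= radius]

# A 10 x 10 grid of sensing devices on a 10 m pitch, rough position (0, 50):
grid = [{"id": i, "x": 10.0 * (i % 10), "y": 10.0 * (i // 10)} for i in range(100)]
second_array = select_second_array(grid, (0.0, 50.0))
```

Only the selected devices then participate in the precise second-position search, which is what keeps the computational cost low.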
In another embodiment, please refer to fig. 7, fig. 7 is a schematic diagram of a positioning system according to the present embodiment. As shown in fig. 7, the system further includes: a third sensing array 213 comprising sensing means for acquiring a third acoustic signal.
The controller, after executing step 302, also executes: determining the signal intensity ratio of each second sound signal in a plurality of second sound signals; and correcting a third acoustic signal acquired by a third sensing array according to the signal intensity ratio of each second acoustic signal, and determining the second position of the target unmanned aerial vehicle based on the corrected third acoustic signal.
In practical application, since the second sensing array is closest to the target unmanned aerial vehicle, the signal intensity ratio of each second acoustic signal can be determined based on the second acoustic signal acquired by the second sensing array, and the signal intensity ratio is used for correcting the third acoustic signal.
A third acoustic signal is acquired based on the third sensing array and corrected with the signal intensity ratios, and the distributed array then computes a synthetic aperture to accurately track the target unmanned aerial vehicle.
The third sensing array is used to acquire as much of the effective acoustic signal generated by the target unmanned aerial vehicle as possible. The third sensing array may contain the second sensing array or, optionally, the second sensing array may contain the third; this embodiment does not limit this.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a positional relationship between the second sensing array and the target unmanned aerial vehicle according to the present embodiment, and as shown in fig. 8, the target unmanned aerial vehicle is located near the lower left corner of the second sensing array.
Referring to fig. 9, fig. 9 is a recognition-result map based on the second acoustic signals, where lighter color indicates closer proximity to the target unmanned aerial vehicle. The pixels in the recognition-result map are ranked by brightness: the higher the brightness, the larger the corresponding weight, and the lower the brightness, the smaller the weight, thereby obtaining the weights corresponding to the recognition result shown in fig. 9.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating a positional relationship between the third sensing array and the target unmanned aerial vehicle, as shown in fig. 10, the target unmanned aerial vehicle is located at the lower left corner of the third sensing array.
Referring to fig. 11, fig. 11 is a diagram of a recognition result corresponding to the third sensing array. And correcting the data in the identification result diagram corresponding to the third sensing array according to the obtained weight so as to obtain more accurate second position information.
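The intensity-ratio correction can be read as a weighted fusion: normalize the per-channel intensities of the second acoustic signals into ratios, then scale the per-channel recognition maps from the third array by those ratios. The RMS intensity measure and the fusion by weighted sum below are assumptions for illustration, not the patent's exact procedure:

```python
import numpy as np

def intensity_ratio_weights(second_signals):
    """Per-channel weights: RMS intensity of each second acoustic signal
    divided by the total, so channels nearer the drone weigh more."""
    rms = np.sqrt(np.mean(np.square(second_signals), axis=1))
    return rms / rms.sum()

def correct_third_map(third_maps, weights):
    """Weighted fusion of per-channel recognition maps computed from the
    third sensing array. third_maps: (C, H, W); weights: (C,)."""
    return np.tensordot(weights, third_maps, axes=1)

# Four second-array channels; channel 0 is loudest (closest to the drone):
rng = np.random.default_rng(0)
second = rng.normal(size=(4, 1024)) * np.array([[4.0], [2.0], [1.0], [1.0]])
w = intensity_ratio_weights(second)
fused = correct_third_map(np.ones((4, 8, 8)), w)
```

Because the weights sum to one, the fused map stays on the same scale as the individual recognition maps.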
Because the spatial resolution of images obtained from multiple independent arrays is limited, while running the synthetic aperture continuously in real time consumes excessive resources, the subarrays close to the drone are first identified, and precise, high-spatial-resolution tracking is then achieved with the synthetic aperture technique.
In another embodiment, please refer to fig. 12, which is a schematic diagram of a sensing array and a target unmanned aerial vehicle in this embodiment. As shown in fig. 12, the sensing array is composed of 4 subarrays.
The method for identifying the unmanned aerial vehicle with the synthetic-aperture array formed by the 4 subarrays shown in fig. 12 comprises the following steps:
step 1, calculating a spatial covariance matrix of the subarray/synthetic aperture array.
For the synthetic aperture, an additional inter-subarray cross-correlation synthesis is required before step 1. Let the acoustic signal vector acquired by the j-th subarray of the M-subarray distributed system be $\mathbf{x}_j(n)$. A synthetic-aperture covariance matrix whose diagonal blocks are the spatial covariance matrices of the individual subarrays can then be constructed:

$$\mathbf{R}_{\mathrm{SA}} = \operatorname{blkdiag}\big(\mathbf{R}_1, \mathbf{R}_2, \ldots, \mathbf{R}_M\big), \qquad \mathbf{R}_j = \frac{1}{N}\sum_{n=1}^{N} \mathbf{x}_j(n)\,\mathbf{x}_j^{H}(n),$$

where $N$ represents the number of snapshots contained in the acquired signal and $H$ is the conjugate transpose operator.
The off-diagonal part of the synthetic-aperture covariance matrix is then filled in by a matrix-completion algorithm, yielding the complete synthetic-aperture covariance matrix $\hat{\mathbf{R}}_{\mathrm{SA}}$.
Step 2: perform eigenvalue decomposition of the spatial covariance matrix, and determine the number of unmanned aerial vehicle sources from the resulting eigenvalues.
Step 3: using the source number, decompose the covariance matrix into a signal subspace $\mathbf{U}_s$ and a noise subspace $\mathbf{U}_n$, i.e.

$$\hat{\mathbf{R}}_{\mathrm{SA}} = \mathbf{U}_s \boldsymbol{\Lambda}_s \mathbf{U}_s^{H} + \mathbf{U}_n \boldsymbol{\Lambda}_n \mathbf{U}_n^{H}.$$
Step 4: from the acoustic propagation (steering) vector $\mathbf{a}(\theta)$, construct the angle-of-arrival function

$$P(\theta) = \frac{1}{\mathbf{a}^{H}(\theta)\,\mathbf{U}_n\mathbf{U}_n^{H}\,\mathbf{a}(\theta)}.$$

Step 5: search for the maximum point of the angle-of-arrival function; its location gives the second position of the target unmanned aerial vehicle.
For a single array, its covariance matrix is obtained directly by the subarray covariance calculation described above; the remaining steps are the same and are not repeated in this embodiment.
Referring to fig. 13, fig. 13 is another result diagram of a single target drone and multiple targets obtained by a single sub-array and a synthetic aperture array according to the present embodiment. Wherein the white portion of the figure indicates the second position of the target drone identified by the array.
In this embodiment, the array aperture is expanded through the distributed sensing array: an unambiguous angle estimate is obtained from each subarray, a high-precision but ambiguous angle estimate is obtained from the distributed synthetic array, and combining the two finally yields an unambiguous, high-precision angle estimate.
In another embodiment shown, please refer to fig. 14, fig. 14 is a flowchart of a positioning method shown in this embodiment, and the controller is configured to execute the method shown in fig. 14, where the method includes the following steps:
s1401, a first acoustic signal is acquired.
The subarrays are distributed in the area to be monitored, and environmental noise is acquired through the sensing device, so that a first sound signal is obtained.
S1402, processing the first acoustic signal to obtain time-frequency spectrum information.
The acquired acoustic signal is processed by a short-time Fourier transform to obtain its time-frequency spectrum.
S1403, judging whether the target unmanned aerial vehicle exists; if so, continuing to step S1404.
And determining that the target unmanned aerial vehicle exists under the condition that the first sound signal comprises a first target signal and a second target signal, wherein the second target signal comprises a higher harmonic frequency signal of the first target signal.
Specifically, an identification model may be established through the unmanned aerial vehicle noise database, where the identification model is used to determine whether the target unmanned aerial vehicle exists based on the acoustic signal, for example, if the target fundamental frequency signal and the higher harmonic frequency signal exist in the acoustic signal at the same time, it is determined that the target unmanned aerial vehicle exists.
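As an illustration of this recognition rule, the sketch below flags a drone when a fundamental in the 100-1000 Hz band and its third harmonic both rise above the noise floor. The thresholding rule, the 14 dB margin, and the function names are hypothetical choices, not the patent's recognition model.

```python
import numpy as np

def detect_drone(signal, fs, band=(100.0, 1000.0), margin_db=14.0):
    """Flag a drone if a fundamental and its 3rd harmonic are both present.

    Returns (detected, fundamental_hz). The threshold is a fixed margin
    above the median spectral magnitude (an assumed noise-floor estimate).
    """
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    thresh = np.median(spec) * 10.0 ** (margin_db / 20.0)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    for f0 in freqs[in_band & (spec > thresh)]:
        k3 = int(round(3.0 * f0 * len(signal) / fs))   # bin of 3rd harmonic
        if k3 < len(spec) and spec[k3] > thresh:
            return True, f0
    return False, None

# Usage: 1 s of a 200 Hz fundamental plus its 3rd harmonic in noise
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
sig = (np.sin(2 * np.pi * 200.0 * t)
       + 0.5 * np.sin(2 * np.pi * 600.0 * t)
       + 0.05 * rng.standard_normal(fs))
detected, f0 = detect_drone(sig, fs)
```

A signal containing only broadband noise would produce no candidate whose third harmonic also clears the threshold, so it would not be flagged.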
S1404 determining a first position of the target drone based on the first acoustic signal.
S1405, determining a signal intensity ratio of each of the second sound signals in the plurality of second sound signals.
And S1406, correcting a third sound signal acquired by a third sensing array according to the signal intensity ratio of each second sound signal, and determining the second position of the target unmanned aerial vehicle based on the corrected third sound signal.
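The embodiment does not spell out the exact form of the intensity ratio or the correction. As one hypothetical reading of steps S1405-S1406, each second acoustic signal's share of the total energy can serve as a per-subarray weight applied to the third acoustic signals; the energy-ratio definition and all names below are assumptions for illustration only.

```python
import numpy as np

def intensity_ratios(second_signals):
    """Hypothetical S1405: ratio of each second acoustic signal's mean
    energy to the total energy across subarrays."""
    e = np.array([np.mean(np.abs(s) ** 2) for s in second_signals])
    return e / e.sum()

def correct_third_signals(third_signals, ratios):
    """Hypothetical S1406: scale each subarray's third acoustic signal by
    its intensity ratio before the synthetic-aperture localization."""
    return [r * np.asarray(s) for r, s in zip(ratios, third_signals)]

# Usage: three subarrays, the first closest to the source (strongest signal)
rng = np.random.default_rng(1)
second = [2.0 * rng.standard_normal(1000),
          1.0 * rng.standard_normal(1000),
          0.5 * rng.standard_normal(1000)]
w = intensity_ratios(second)
corrected = correct_third_signals(second, w)   # reusing arrays as stand-ins
```

Subarrays nearer the first position receive larger weights, consistent with the idea of emphasizing the sensing devices close to the target.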
In another embodiment shown, the higher-order harmonic signals include the third and higher harmonics of the target fundamental frequency signal.
The specific implementation of steps 1404-1406 may be referred to the above embodiment, which is not described in detail.
In another embodiment, please refer to fig. 15, which is a schematic diagram of a distributed acoustic array system according to this embodiment, wherein the acoustic array system includes n subarrays (n is an integer greater than 1): a first subarray, a second subarray, ..., and an n-th subarray.
Preferably, the subarrays are distributed on building roofs in the area to be monitored, forming a large-area distributed acoustic detection system. The acoustic signal data received by the different subarrays are gathered at a terminal processing unit, and the array aperture is expanded by a distributed array synthesis technique, thereby improving the spatial angular resolution of the acoustic array.
In another embodiment, please refer to fig. 16, fig. 16 is a schematic structural diagram of an induction device according to the present embodiment. As shown in fig. 16, the sensing device comprises at least one microphone 3 and at least one acoustic resonator 1.
The acoustic resonant cavity 1 is a circular tube; the tube body is made of a metal material (such as aluminum alloy or stainless steel), and the inner wall of the tube is polished. The polishing reduces the thermoviscous damping of the air inside the tube and increases the amplification factor of the resonant cavity for sound waves. One end face of the tube points toward the sound source to be measured and receives the sound waves.
In another embodiment shown, with continued reference to fig. 16, the sensing device may further include a base 2 as shown in fig. 16. The base is of a disc-shaped structure and is used for fixing the microphone and the acoustic resonant cavity, and the upper surface of the base is connected with one end face of the acoustic resonant cavity. The microphone is embedded in the base, and the upper surface of the microphone is flush with the upper surface of the base. The sound receiving hole of the microphone faces the acoustic resonant cavity, and the axis of the sound receiving hole of the microphone coincides with the axis of the circular tube of the acoustic resonant cavity. The inner diameter of the acoustic resonator is larger than the diameter of the acoustic aperture of the microphone. Alternatively, one end face of the circular tube of the acoustic resonator is tightly connected with the base, and the connection mode can be welding or bonding, so that the leakage of acoustic energy from the edge of the end face is prevented.
The acoustic resonant cavity may be configured to resonantly amplify acoustic signals in a specific frequency range. Referring to fig. 17, fig. 17 shows the simulated frequency response of the cavity-enhanced microphone. Taking as an example a cavity with an inner diameter of 20 mm and a length of 100 mm, driven at a source sound pressure of 1 Pa (sound pressure level 94 dB), there are 2 resonance peaks in the 0-3000 Hz range: resonance peak 1 has a resonance frequency (the first-order resonance frequency) of 785 Hz and a peak value of 131 dB, i.e. the 1 Pa sound pressure is amplified about 70 times, with amplification between 304 and 1019 Hz; resonance peak 2 has a resonance frequency (the second-order resonance frequency) of 2334 Hz and a peak value of 108 dB, i.e. amplification of about 5 times, between 1620 and 3000 Hz.
If the length of the acoustic resonator 1 is much longer than its diameter, the first order resonant frequency can be estimated according to the following formula:
f1≈0.25c/(L+0.3D)
where c is the speed of sound in air, about 340m/s, L is the length of the resonant cavity, and D is the diameter of the resonant cavity.
For an L = 100 mm cavity, f1 ≈ 802 Hz, in good agreement with the 785 Hz simulation result of fig. 17, a difference of only about 2%.
The second order resonant frequency can be estimated as follows:
f2≈3f1
the calculated result is 2406 Hz, close to the simulation result of 2334 Hz, a difference of only about 3%.
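The two estimates above follow directly from the quarter-wavelength formula with the 0.3D end correction; a minimal helper, assuming only the closed-open tube model stated in the text (the function name is illustrative):

```python
def resonance_frequencies(length_m, diameter_m, c=340.0):
    """Quarter-wave estimate for a tube closed at the microphone end,
    with an end correction of 0.3*D at the open end.

    Returns (f1, f2) with f2 ~ 3*f1, since such a tube supports only
    odd harmonics.
    """
    f1 = 0.25 * c / (length_m + 0.3 * diameter_m)
    return f1, 3.0 * f1

# Usage: the L = 100 mm, D = 20 mm cavity of the fig. 17 simulation
f1, f2 = resonance_frequencies(0.100, 0.020)   # about 802 Hz and 2406 Hz
```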
The acoustic signal of the rotor unmanned aerial vehicle has strong frequency domain characteristics, and the frequency spectrum of the acoustic signal is mainly fundamental frequency plus secondary harmonic frequency, tertiary harmonic frequency and higher harmonic frequency.
The fundamental frequency of the acoustic signal of a rotorcraft is mainly determined by the number of rotor blades and the rotational speed, with a general fundamental frequency between 100 and 1000 Hz.
Conventional microphone arrays typically include only a microphone and a mount. According to the method, the acoustic resonant cavity is added to the front end of the microphone, so that the acoustic wave signals can be enhanced in a specific frequency band. The first-order resonant frequency of the acoustic resonant cavity is designed to be the fundamental frequency of the rotary wing unmanned aerial vehicle, the fundamental frequency signal of the rotary wing unmanned aerial vehicle can be enhanced, the acoustic contributions of other frequency bands are relatively restrained, and finally the signal-to-noise ratio of the microphone array can be improved.
Because the second-order resonant frequency of the acoustic resonant cavity is about 3 times of the first-order resonant frequency, if the fundamental frequency signal of the rotor unmanned aerial vehicle is enhanced by the first-order resonant peak of the acoustic resonant cavity, the third-order harmonic frequency signal of the unmanned aerial vehicle is also enhanced by the second-order resonant peak of the acoustic resonant cavity.
Although the time-frequency spectrum obtained with a conventional microphone can reveal the multiple harmonics of the unmanned aerial vehicle, the background noise contribution dominates the spectrum and the signal-to-noise ratio is poor. With the resonant microphone, the signals in the fundamental band and the third-harmonic band are greatly increased by the effect of the resonant cavity, improving the signal-to-noise ratio of the effective signal.

Therefore, whether the target unmanned aerial vehicle exists can be judged by checking whether the target fundamental frequency signal and the higher-order harmonic signal are present in the acoustic signal at the same time. Judging whether a rotor unmanned aerial vehicle exists by detecting whether the fundamental frequency and third-harmonic features coexist in the signal addresses the high false-alarm rate of acoustic unmanned aerial vehicle detection in noisy environments.
In another embodiment shown, where the sensing device includes at least one microphone and at least one acoustic resonant cavity, step S1401 described above may be:
and starting one microphone of each sensing device in the first sensing array, and acquiring the first sound signal based on the started microphone.
Because the first acoustic signal is acquired only according to one microphone on each sensing device in the first sensing array, the first position of the target unmanned aerial vehicle can be rapidly determined in the first sensing array with a larger range and with smaller calculation cost.
It should be noted that the first sensing array may include all sensing devices or some sensing devices.
The second sensing array in step S1405 may include sensing devices that turn on only one microphone, or all microphones.
The spatial angular resolution of sound source localization is related to the aperture of the microphone array, the larger the aperture the higher the resolution. The traditional acoustic unmanned aerial vehicle detection method generally only adopts one microphone array, and the angular resolution is not high due to the limited aperture of a single microphone array. The high-altitude unmanned aerial vehicle has very high altitude, compared with the low-altitude unmanned aerial vehicle, the angle change relative to the microphone array is very small under the same flight distance, so that the requirement on the aperture of the microphone array is very high.
When all microphones are turned on, the aperture of the sensing device is enlarged; the higher the resulting resolution of the sensing device for the target unmanned aerial vehicle, the more accurate the obtained second position.

The third sensing array in step S1406 may be a sensing array including all sensing devices with all microphones turned on.

When all sensing devices are turned on, the resolution for the target unmanned aerial vehicle is higher, and with the weight adjustment a second position of higher accuracy can be obtained.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
The various modules in the positioning system described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 18. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a positioning method.
It will be appreciated by those skilled in the art that the structure shown in fig. 18 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application is applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring a first acoustic signal based on a first induction array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first induction array comprises an induction device; a second sound signal is acquired based on a second sensing array, and a second location of the target drone is determined based on the second sound signal, wherein the second sensing array includes a sensing device proximate to the first location.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first acoustic signal based on a first induction array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first induction array comprises an induction device; a second sound signal is acquired based on a second sensing array, and a second location of the target drone is determined based on the second sound signal, wherein the second sensing array includes a sensing device proximate to the first location.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a first acoustic signal based on a first induction array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first induction array comprises an induction device; a second sound signal is acquired based on a second sensing array, and a second location of the target drone is determined based on the second sound signal, wherein the second sensing array includes a sensing device proximate to the first location.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take many forms, such as static Random access memory (Static Random Access Memory, SRAM) or Dynamic Random access memory (Dynamic Random Access Memory, DRAM), among others. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. An acoustic positioning method for an unmanned aerial vehicle, the method comprising:
acquiring a first acoustic signal based on a first induction array, and determining a first position of a target unmanned aerial vehicle based on the first acoustic signal, wherein the first induction array comprises an induction device;
a second sound signal is acquired based on a second sensing array, and a second location of the target drone is determined based on the second sound signal, wherein the second sensing array includes a sensing device proximate to the first location.
2. The drone acoustic positioning method of claim 1, wherein acquiring a second acoustic signal based on a second sensing array, determining a second location of the target drone based on the second acoustic signal, comprises:
determining the signal intensity ratio of each second sound signal in a plurality of second sound signals;
and correcting a third acoustic signal acquired by a third sensing array according to the signal intensity ratio of each second acoustic signal, and determining the second position of the target unmanned aerial vehicle based on the corrected third acoustic signal.
3. The method of acoustic positioning of a drone of claim 1, wherein prior to the acquiring the first acoustic signal based on the first sensing array, determining the first location of the target drone, the method further comprises:
and determining that the target unmanned aerial vehicle exists under the condition that the first sound signal comprises a first target signal and a second target signal, wherein the second target signal comprises a higher harmonic frequency signal of the first target signal.
4. The unmanned aerial vehicle acoustic positioning method of claim 3, wherein the higher order comprises three or more times.
5. The unmanned aerial vehicle acoustic positioning method of any of claims 1 to 4, wherein the sensing device comprises at least one microphone; acquiring a first acoustic signal based on a first sensing array, determining a first position of a target unmanned aerial vehicle, comprising:
and starting one microphone of each sensing device in the first sensing array, and acquiring the first sound signal based on the started microphone.
6. The drone acoustic positioning method of any one of claims 1 to 4, wherein the sensing device comprises a microphone and an acoustic resonator, wherein the microphone obtains acoustic signals via the acoustic resonator, the acoustic resonator being used to enhance a first target signal of the acoustic signals.
7. The drone acoustic positioning method of claim 6, wherein a first order resonant frequency of the acoustic resonant cavity is equal to a fundamental frequency of the first target signal.
8. The unmanned aerial vehicle acoustic positioning system is characterized by comprising an induction array and a controller, wherein the induction array comprises a first induction array, a second induction array and a third induction array, the first induction array, the second induction array and the third induction array respectively comprise induction devices, wherein,
the first sensing array is used for acquiring a first acoustic signal, the second sensing array is used for acquiring a second acoustic signal, the third sensing array is used for acquiring a third acoustic signal, and the controller is used for executing the unmanned aerial vehicle acoustic positioning method according to any one of claims 1 to 7 based on the acquired acoustic signals.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor when executing the computer program implements the steps of the unmanned aerial vehicle acoustic positioning method of any of claims 1 to 7.
10. A computer readable storage medium, having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the unmanned aerial vehicle acoustic positioning method of any of claims 1 to 7.
CN202310467524.0A 2023-04-27 2023-04-27 Unmanned aerial vehicle acoustic positioning method and unmanned aerial vehicle acoustic positioning system Active CN116184320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310467524.0A CN116184320B (en) 2023-04-27 2023-04-27 Unmanned aerial vehicle acoustic positioning method and unmanned aerial vehicle acoustic positioning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310467524.0A CN116184320B (en) 2023-04-27 2023-04-27 Unmanned aerial vehicle acoustic positioning method and unmanned aerial vehicle acoustic positioning system

Publications (2)

Publication Number Publication Date
CN116184320A true CN116184320A (en) 2023-05-30
CN116184320B CN116184320B (en) 2023-07-18

Family

ID=86444680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310467524.0A Active CN116184320B (en) 2023-04-27 2023-04-27 Unmanned aerial vehicle acoustic positioning method and unmanned aerial vehicle acoustic positioning system

Country Status (1)

Country Link
CN (1) CN116184320B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106772246A (en) * 2017-01-20 2017-05-31 浙江大学 Unmanned plane real-time detection and alignment system and method based on acoustic array
CN114636970A (en) * 2022-02-21 2022-06-17 中国人民解放军战略支援部队信息工程大学 Multi-unmanned aerial vehicle cooperative direct positioning method based on passive synthetic aperture
WO2022257499A1 (en) * 2021-06-11 2022-12-15 五邑大学 Sound source localization method and apparatus based on microphone array, and storage medium
CN115480214A (en) * 2022-10-09 2022-12-16 思必驰科技股份有限公司 Sound source positioning method, electronic device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106772246A (en) * 2017-01-20 2017-05-31 浙江大学 Unmanned plane real-time detection and alignment system and method based on acoustic array
WO2022257499A1 (en) * 2021-06-11 2022-12-15 五邑大学 Sound source localization method and apparatus based on microphone array, and storage medium
CN114636970A (en) * 2022-02-21 2022-06-17 中国人民解放军战略支援部队信息工程大学 Multi-unmanned aerial vehicle cooperative direct positioning method based on passive synthetic aperture
CN115480214A (en) * 2022-10-09 2022-12-16 思必驰科技股份有限公司 Sound source positioning method, electronic device and storage medium

Also Published As

Publication number Publication date
CN116184320B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
WO2020192721A1 (en) Voice awakening method and apparatus, and device and medium
Busset et al. Detection and tracking of drones using advanced acoustic cameras
US7123548B1 (en) System for detecting, tracking, and reconstructing signals in spectrally competitive environments
KR102503684B1 (en) Electronic apparatus and operating method thereof
CN107677992B (en) Movement detection method and device and monitoring equipment
US10739435B2 (en) System method for acoustic source localization with aerial drones
US11189298B2 (en) Acoustic zooming
US10027771B2 (en) System and method for measuring position
CN111580099A (en) Wall clutter suppression method of through-wall imaging radar based on joint entropy
CN107450882B (en) Method and device for adjusting sound loudness and storage medium
CN116184320B (en) Unmanned aerial vehicle acoustic positioning method and unmanned aerial vehicle acoustic positioning system
US11513004B2 (en) Terahertz spectroscopy and imaging in dynamic environments
Sinitsyn et al. Determination of aircraft current location on the basis of its acoustic noise
CN111431641B (en) Unmanned aerial vehicle DOA estimation method and device based on antenna array
Chen et al. A microphone position calibration method based on combination of acoustic energy decay model and TDOA for distributed microphone array
Saqib et al. A framework for spatial map generation using acoustic echoes for robotic platforms
US11099072B2 (en) Terahertz spectroscopy and imaging in dynamic environments with spectral response enhancements
CN111768797A (en) Speech enhancement processing method, speech enhancement processing device, computer equipment and storage medium
JP5648417B2 (en) Target management apparatus and target management method
US20210041376A1 (en) Terahertz spectroscopy and imaging in dynamic environments with performance enhancements using ambient sensors
Harvey et al. A harmonic spectral beamformer for the enhanced localization of propeller-driven aircraft
US20160216357A1 (en) Method and Apparatus for Determining the Direction of Arrival of a Sonic Boom
Chervoniak et al. Passive acoustic radar system for flying vehicle localization
CN115061124A (en) Radiation source positioning method, radiation source positioning device, computer equipment and storage medium
US20200339068A1 (en) Microphone-based vehicle passenger locator and identifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant