CN213754654U - Low-altitude unmanned aerial vehicle detection device based on sound and image fusion - Google Patents

Low-altitude unmanned aerial vehicle detection device based on sound and image fusion

Info

Publication number
CN213754654U
Authority
CN
China
Prior art keywords
microphone array
far
zooming
sound
camera
Prior art date
Legal status
Active
Application number
CN202022700370.XU
Other languages
Chinese (zh)
Inventor
Jiao Qingchun (焦庆春)
Wang Xiaolong (王小龙)
Wang Lijun (王利军)
Bai Huihui (白慧慧)
Current Assignee
Zhejiang Lover Health Science and Technology Development Co Ltd
Zhejiang University of Science and Technology ZUST
Original Assignee
Zhejiang Lover Health Science and Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Lover Health Science and Technology Development Co Ltd filed Critical Zhejiang Lover Health Science and Technology Development Co Ltd
Priority to CN202022700370.XU
Application granted
Publication of CN213754654U

Abstract

The utility model relates to a low-altitude unmanned aerial vehicle detection device based on sound and image fusion, comprising a support, a near-field microphone array, a far-field microphone array, a panoramic camera, a zoom infrared camera, a zoom binocular camera and a controller. The near-field microphone array is mounted equidistantly on the top, bottom, left and right parts of the support and monitors the near-field low-altitude area; the far-field microphone array is parallel to the near-field microphone array and is mounted at the outer end of the support to monitor the far-field low-altitude area; the panoramic camera, the zoom infrared camera and the zoom binocular camera are all mounted at the central part of the support to monitor and identify unmanned aerial vehicles; the near-field microphone array, the far-field microphone array, the panoramic camera, the zoom infrared camera and the zoom binocular camera are each in communication connection with the controller. The utility model achieves a high recognition accuracy, improves the overall working efficiency of the system, and effectively extends the system's effective monitoring range.

Description

Low-altitude unmanned aerial vehicle detection device based on sound and image fusion
Technical Field
The utility model belongs to the technical field of low-altitude security, and specifically relates to a low-altitude unmanned aerial vehicle detection device based on sound and image fusion.
Background
In recent years, the market for unmanned aerial vehicles has developed rapidly, and they are widely applied in logistics, entertainment, aerial photography, search and rescue and other fields. At the same time, the problems caused by the "black flight" (unauthorized flight) of unmanned aerial vehicles have multiplied, seriously affecting public safety and personal privacy. The monitoring and prevention of unmanned aerial vehicles has therefore attracted increasing attention from both academia and industry.
At present, the means of detecting unmanned aerial vehicles mainly comprise radar, audio, video and radio frequency.
Chinese patent CN 111121541 A proposes an anti-drone radar system with a radio-interference function, which uses radar arrays to detect drones. However, the radar cross-section of a low-altitude unmanned aerial vehicle is small, so radar detection accuracy is limited; in low-altitude airspace, and particularly in urban environments, most radars basically cannot detect low-altitude unmanned aerial vehicles accurately because of the interference of very strong ground clutter and ground-reflected waves. Chinese patent CN 111190140 A proposes a "black-flight" unmanned aerial vehicle detection system based on radio-frequency detection, which can monitor some unmanned aerial vehicles by detecting their radio-frequency signals; however, radio frequency mainly detects a drone's image-transmission and remote-control signals, cannot detect a drone flying on preset GPS navigation, and suffers severe interference in low-altitude urban environments.
Chinese patent CN 107884749 A proposes a low-altitude unmanned passive acoustic detection and positioning device that can identify and position an unmanned aerial vehicle by audio; however, its microphones are not equipped with sound-gathering hoods, so the detection distance is short, the microphone array has no switching coordination between near field and far field, and its noise filtering does not filter selectively according to the target distance. Chinese patent CN 109708659 A proposes a distributed intelligent photoelectric low-altitude protection system that can identify and track an unmanned aerial vehicle by video monitoring; however, using video equipment alone, it cannot accurately locate the position of the unmanned aerial vehicle, is easily disturbed by complex environments, and is easily occluded.
SUMMARY OF THE UTILITY MODEL
To overcome the deficiencies of the prior art, the utility model provides a low-altitude unmanned aerial vehicle detection device based on sound and image fusion which, through multi-sensor linkage tracking and multi-parameter fusion identification of the target, can efficiently and accurately identify and track an intruding low-altitude unmanned aerial vehicle.
The utility model adopts the following technical solution:
a low-altitude unmanned aerial vehicle detection device based on sound and image fusion comprises a support, a near-field microphone array, a far-field microphone array, a panoramic camera, a zoom infrared camera, a zoom binocular camera and a controller. The near-field microphone array is mounted equidistantly on the top, bottom, left and right parts of the support and monitors the near-field low-altitude area; the far-field microphone array is parallel to the near-field microphone array and is mounted at the outer end of the support to monitor the far-field low-altitude area; the panoramic camera, the zoom infrared camera and the zoom binocular camera are all mounted at the central part of the support to monitor and identify unmanned aerial vehicles; the near-field microphone array, the far-field microphone array, the panoramic camera, the zoom infrared camera and the zoom binocular camera are each in communication connection with the controller.
Preferably, the near-field microphone array comprises a plurality of near-field microphones, and the far-field microphone array comprises a plurality of far-field microphones, a plurality of audio-detection pan-tilts and a plurality of sound-gathering hoods; the pickup heads of the far-field microphones are respectively installed in the corresponding sound-gathering hoods, the far-field microphones are respectively fixed on the corresponding audio-detection pan-tilts, and the motion of the far-field microphones is controlled by the audio-detection pan-tilts.
Preferably, the zoom infrared camera and the zoom binocular camera are fixed together side by side and mounted on an infrared binocular pan-tilt.
Preferably, the near-field microphone array comprises four near-field microphones, and the far-field microphone array comprises four far-field microphones, four audio-detection pan-tilts and four sound-gathering hoods.
Preferably, the sound-gathering hood is a paraboloid of revolution formed by rotating a parabola about its axis.
Preferably, the pickup heads of the far-field microphones are respectively installed at the foci of the corresponding sound-gathering hoods, and the far-field microphones are respectively fixed on the audio-detection pan-tilts.
Preferably, the controller comprises five pan-tilt control channels, eight audio-stream channels, three video-stream channels, a CPU (central processing unit) mainly responsible for control, a GPU (graphics processing unit) mainly responsible for recognition, and a memory. The pan-tilt controls are connected with the audio-detection pan-tilts and the infrared binocular pan-tilt through corresponding pan-tilt control sensors, the audio streams are connected with the audio-detection pan-tilts through corresponding audio-stream sensors, and the video streams are connected with the panoramic camera, the zoom infrared camera and the zoom binocular camera through corresponding video-stream sensors; the CPU communicates with the GPU, and the CPU and the GPU each communicate with the memory. The pan-tilt controls output control signals to the high-speed pan-tilts and receive their attitude information; the audio streams receive the audio information of the microphone arrays; the video streams receive the image information of the panoramic camera, the zoom infrared camera and the zoom binocular camera.
Preferably, the near-field microphone array, the far-field microphone array, the panoramic camera, the zoom infrared camera and the zoom binocular camera are each in communication connection with the controller via any one or more of the following modes: an Ethernet interface, an optical interface, 4G/5G, WiFi.
The technical effects of the utility model are as follows:
(1) the utility model performs fusion recognition on sound, thermal imaging and video images, so the recognition accuracy is higher and the overall working efficiency is also improved;
(2) the multi-sensor linkage largely frees each microphone of the far-field microphone array from covering a fixed detection area, which effectively increases the detection distance of the microphone array and the effective monitoring range of the entire system.
Drawings
Fig. 1 is a schematic structural diagram of the present invention;
FIG. 2 is a process diagram of the low-altitude unmanned aerial vehicle detection method of the present invention;
FIG. 3 is a process diagram of the method for tracking an intrusion target by multi-sensor linkage according to the present invention;
FIG. 4 is a process diagram of the method for identifying the low altitude unmanned aerial vehicle by multi-parameter fusion of the present invention;
fig. 5 is a schematic block diagram of the present invention.
Detailed Description
The present invention will be further described with reference to the following specific examples, but the scope of the present invention is not limited thereto.
Referring to fig. 1, the low-altitude unmanned aerial vehicle detection device based on sound and image fusion comprises a support 1, a near-field microphone array, a far-field microphone array, a panoramic camera 4, a zoom infrared camera 5, a zoom binocular camera 6 and a controller 8. The near-field microphone array is mounted equidistantly on the top, bottom, left and right parts of the support 1 and monitors the near-field low-altitude area; the far-field microphone array is parallel to the near-field microphone array and is mounted at the outer end of the support 1 to monitor the far-field low-altitude area; the panoramic camera 4, the zoom infrared camera 5 and the zoom binocular camera 6 are all mounted at the central part of the support 1 to monitor and identify unmanned aerial vehicles; the near-field microphone array, the far-field microphone array, the panoramic camera 4, the zoom infrared camera 5 and the zoom binocular camera 6 are each in communication connection with the controller 8.
The near-field microphone array comprises near-field microphones 2A, 2B, 2C and 2D. The far-field microphone array comprises far-field microphones 3A, 3B, 3C and 3D, audio-detection pan-tilts 3E, 3F, 3G and 3H, and sound-gathering hoods 3I, 3J, 3K and 3L. Each sound-gathering hood is a paraboloid of revolution formed by rotating a parabola about its axis. The pickup heads of the far-field microphones 3A, 3B, 3C and 3D are respectively installed at the foci of the corresponding sound-gathering hoods 3I, 3J, 3K and 3L, and the far-field microphones 3A, 3B, 3C and 3D are respectively fixed on the audio-detection pan-tilts 3E, 3F, 3G and 3H, which control their motion. The zoom infrared camera 5 and the zoom binocular camera 6 are fixed together side by side and mounted on an infrared binocular pan-tilt 7.
Referring to fig. 5, the controller comprises five pan-tilt control channels, eight audio-stream channels, three video-stream channels, a CPU mainly responsible for control, a GPU mainly responsible for recognition, and a memory. The pan-tilt controls are connected with the audio-detection pan-tilts and the infrared binocular pan-tilt through corresponding pan-tilt control sensors, the audio streams are connected with the audio-detection pan-tilts through corresponding audio-stream sensors, and the video streams are connected with the panoramic camera 4, the zoom infrared camera 5 and the zoom binocular camera 6 through corresponding video-stream sensors. The CPU communicates with the GPU, the CPU and the GPU each communicate with the memory, and the memory provides temporary storage for CPU and GPU operations. The sensors may be electronic compasses, gyroscope sensors and the like. The pan-tilt controls output control signals to the high-speed pan-tilts and receive their attitude information; the audio streams receive the audio information of the microphone arrays; the video streams receive the image information of the panoramic camera 4, the zoom infrared camera 5 and the zoom binocular camera 6. The communication connections can be realized through wired and wireless modes such as an Ethernet interface, an optical interface, 4G/5G and WiFi. The panoramic camera, the zoom infrared camera, the zoom binocular camera, the near-field microphone array and the far-field microphone array are furthermore each connected with an external storage device, which can record events, store video and audio files, and allow the events to be reviewed.
Referring to figs. 2-4, the low-altitude unmanned aerial vehicle detection method of the device mainly comprises intrusion detection, target linkage tracking and multi-parameter fusion target identification. Intrusion detection detects in real time whether an unknown intruder is present in the monitored area; target linkage tracking tracks the intruding target accurately in real time; multi-parameter fusion target identification identifies and judges the intruding target. The specific implementation process is as follows:
Step one: the panoramic camera 4 monitors the video of the target area in real time and performs intrusion detection with an optical-flow method. Optical flow here refers to the apparent motion of gray values between pixels across consecutive video frames, obtained by dividing the displacement by the time difference. Within the monitored area the background image is essentially static while an intruding object is in motion, so intrusion detection can be realized from the difference in optical flow between the intruder and the background, as sketched below.
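The patent does not give an implementation of the optical-flow intrusion detector; the following Python sketch shows one plausible realization using OpenCV's dense Farneback optical flow. The motion threshold, morphology kernel and minimum contour area are illustrative assumptions, not values from the specification.

```python
import cv2
import numpy as np

def detect_intrusion(prev_gray, curr_gray, mag_thresh=2.0, min_area=50):
    """Flag moving regions via dense optical flow between two grayscale frames.

    Returns the pixel coordinates (x, y) of the largest moving region's
    center, or None if nothing exceeds the motion threshold.
    """
    # Farneback dense optical flow: one (dx, dy) vector per pixel
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # The background is essentially static, so only an intruding
    # object produces large flow magnitudes.
    moving = (mag > mag_thresh).astype(np.uint8)
    moving = cv2.morphologyEx(moving, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(moving, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)
    m = cv2.moments(target)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # target center (pixels)
```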
Step two: after the panoramic camera 4 detects the intrusion target, the panoramic camera 4, the zooming binocular camera 6 and the far-field microphone array track the intrusion target in a linkage manner:
(A1) a coordinate system is established with the midpoint of the panoramic image as the origin and the pixel as the unit length; the pixel coordinate $P_{pixel}(x_1, y_1)$ of the center point of the intrusion target is determined and sent to the controller 8;
(A2) the controller 8 controls the zoom binocular camera 6 to search for the intrusion target from near to far according to the control function (1-1); if the zoom binocular camera 6 cannot find the intrusion target, it requests the latest intrusion-target pixel coordinate from the panoramic camera 4 and searches again. After the corresponding target is found, the controller adjusts the focal length $f_2$ of the zoom binocular camera 6 so that the number of target pixels captured by the binocular camera is not lower than K (the minimum pixel count required by the recognition algorithm), and then obtains the target depth h with the binocular stereo algorithm;

$C_{PTZ\text{-}7}(\theta_1, \theta_2) = F_{PTZ\text{-}binocular}(x_1, y_1, f_1, f_2)$ (1-1)

where $\theta_1$: horizontal rotation angle of the infrared binocular pan-tilt 7; $\theta_2$: vertical rotation angle of the infrared binocular pan-tilt 7; $(x_1, y_1)$: pixel coordinates of the intrusion target; $f_1$: focal length of the panoramic camera 4; $f_2$: focal length of the zoom binocular camera 6.

The coordinate $P_{binocular}(x_s, y_s, z_s)$ of the intrusion target in the coordinate system of the zoom binocular camera 6 is then obtained through function (1-2):

$P_{binocular}(x_s, y_s, z_s) = F_{tra}(h, \theta_1, \theta_2)$ (1-2)

(A3) using the relative position of each microphone with respect to the zoom binocular camera 6, the controller converts the intrusion target's coordinate $P_{binocular}(x_s, y_s, z_s)$ into the coordinate $P_{mic\text{-}i}(x_i, y_i, z_i)$ in each far-field microphone's own coordinate system;
each far-field microphone is then steered to cover the target area through the pan-tilt control function (1-3), and the target's audio information is acquired once the target is covered;

$C_{PTZ}(\alpha, \beta) = F_{PTZ}(x, y, z)$ (1-3)

where $\alpha$: horizontal rotation angle of the audio-detection pan-tilt; $\beta$: vertical rotation angle of the audio-detection pan-tilt; $(x, y, z)$: coordinates of the intrusion target.

The spatial coordinate $P_{bracket}(x, y, z)$ of the intrusion target, in a coordinate system with the midpoint of the support 1 as the origin, is then obtained through formula (1-4):

$P_{bracket}(x, y, z) = TDOA(t_1, t_2, t_3, t_4)$ (1-4)
The TDOA algorithm is specifically as follows:

A spatial coordinate system is established with the center of the mounting bracket as the origin. The coordinates $P_i(x_i, y_i, z_i)$ of each microphone are known from the installation positions. Let the target coordinates be $Q(x, y, z)$, and let $R_i$ denote the distance from the target to microphone $i$:

$R_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}, \quad i = 1, 2, 3, 4$ (1-5)

$R_i^2 - R_1^2 = 2x(x_1 - x_i) + 2y(y_1 - y_i) + 2z(z_1 - z_i) + x_i^2 + y_i^2 + z_i^2 - x_1^2 - y_1^2 - z_1^2$ (1-6)

Let $x_{1,i} = x_1 - x_i$, $y_{1,i} = y_1 - y_i$, $z_{1,i} = z_1 - z_i$ and $K_i = x_i^2 + y_i^2 + z_i^2$, to obtain:

$x_{1,i}\,x + y_{1,i}\,y + z_{1,i}\,z = \tfrac{1}{2}\left(R_i^2 - R_1^2 - K_i + K_1\right)$ (1-7)

Substituting $i = 2, 3, 4$ into (1-7) gives the linear system:

$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x_{1,2} & y_{1,2} & z_{1,2} \\ x_{1,3} & y_{1,3} & z_{1,3} \\ x_{1,4} & y_{1,4} & z_{1,4} \end{bmatrix}^{-1} \cdot \tfrac{1}{2} \begin{bmatrix} R_2^2 - R_1^2 - K_2 + K_1 \\ R_3^2 - R_1^2 - K_3 + K_1 \\ R_4^2 - R_1^2 - K_4 + K_1 \end{bmatrix}$ (1-8)

and

$R_i^2 - R_1^2 = (R_i - R_1)^2 + 2R_1(R_i - R_1)$ (1-9)

$R_{i,1} = R_i - R_1 = c(t_i - t_1)$ (1-10)

where $c$ is the speed of sound and $t_i$ is the arrival time of the audio signal at microphone $i$. Substituting (1-9) and (1-10) into (1-8) expresses $x$, $y$, $z$ as linear functions of the single unknown $R_1$:

$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x_{1,2} & y_{1,2} & z_{1,2} \\ x_{1,3} & y_{1,3} & z_{1,3} \\ x_{1,4} & y_{1,4} & z_{1,4} \end{bmatrix}^{-1} \cdot \tfrac{1}{2} \begin{bmatrix} R_{2,1}^2 + 2R_1 R_{2,1} - K_2 + K_1 \\ R_{3,1}^2 + 2R_1 R_{3,1} - K_3 + K_1 \\ R_{4,1}^2 + 2R_1 R_{4,1} - K_4 + K_1 \end{bmatrix}$ (1-11)

From (1-5) we also have:

$R_1^2 = (x - x_1)^2 + (y - y_1)^2 + (z - z_1)^2$ (1-12)

Solving (1-11) and (1-12) jointly, and substituting the measured arrival times of the audio signals at each microphone, yields the target coordinates $Q(x, y, z)$;
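For illustration, the joint solution of (1-11) and (1-12) can be coded directly: express (x, y, z) as a linear function of R1, substitute into (1-12), and solve the resulting quadratic in R1. The Python sketch below assumes four microphone positions in the bracket frame and measured arrival times of the same acoustic event; the speed of sound and the choice of the smallest positive root are assumptions.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in air, m/s (assumed)

def tdoa_locate(mics, times):
    """Solve (1-11)/(1-12) for the source position Q(x, y, z).

    mics  : (4, 3) array of microphone coordinates P_i in the bracket frame
    times : (4,) arrival times t_i of the same acoustic event
    """
    mics = np.asarray(mics, float)
    t = np.asarray(times, float)
    p1 = mics[0]
    K = np.sum(mics**2, axis=1)              # K_i = x_i^2 + y_i^2 + z_i^2
    R_i1 = C_SOUND * (t[1:] - t[0])          # R_{i,1} = c (t_i - t_1)

    # (1-11): M @ [x, y, z]^T = a + b * R_1
    M = p1 - mics[1:]                        # rows (x_{1,i}, y_{1,i}, z_{1,i})
    a = 0.5 * (R_i1**2 - K[1:] + K[0])
    b = R_i1
    Minv = np.linalg.inv(M)
    u, v = Minv @ a, Minv @ b                # position = u + v * R_1

    # (1-12): |position - P_1|^2 = R_1^2  ->  quadratic in R_1
    d = u - p1
    qa = v @ v - 1.0
    qb = 2.0 * (d @ v)
    qc = d @ d
    roots = np.roots([qa, qb, qc])
    # keep the smallest positive real root (nearest physical solution)
    R1 = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return u + v * R1                        # Q(x, y, z)
```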
(A4) the spatial coordinate and the corresponding time value $PRE_{input}(x, y, z, t)$ are input into the trained trajectory prediction model to obtain the predicted coordinate $P_{pre}(x_p, y_p, z_p)$ of the intrusion target at the next moment;
(A5) the predicted coordinate $P_{pre}(x_p, y_p, z_p)$ is first converted into the coordinates $P^{pre}_{mic\text{-}i}(x^p_i, y^p_i, z^p_i)$ in the coordinate system of each microphone of the far-field microphone array and the coordinate $P^{pre}_{binocular}(x^p_s, y^p_s, z^p_s)$ in the coordinate system of the zoom binocular camera 6; the far-field microphone array, the zoom infrared camera 5 and the zoom binocular camera 6 are then driven through the pan-tilt control function (1-3) to track the intrusion target accurately in real time.
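The patent leaves the pan-tilt control function F_PTZ of (1-3) unspecified; a minimal plausible form, assuming the pan-tilt's zero pose looks along the +x axis of the device frame with z pointing up, converts a Cartesian target position into azimuth and elevation angles:

```python
import math

def pan_tilt_angles(x, y, z):
    """A possible F_PTZ for (1-3): Cartesian target coordinates ->
    (alpha, beta) pan-tilt angles in degrees, under the assumed
    zero pose (looking along +x, z up)."""
    alpha = math.degrees(math.atan2(y, x))                 # horizontal rotation
    beta = math.degrees(math.atan2(z, math.hypot(x, y)))   # vertical rotation
    return alpha, beta
```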
Step three: once the zoom infrared camera 5, the zoom binocular camera 6 and the far-field microphone array are stably tracking the intrusion target, thermal imaging, image information and sound information of the target are collected respectively, and multi-parameter fusion identification of the intrusion target is then carried out. The specific implementation process is as follows:
(B1) the far-field microphone array or the near-field microphone array acquires the audio signal $V$ of the intrusion target, the panoramic camera 4 acquires a video image $P_p$ of the target, the zoom infrared camera 5 acquires a thermal image $P_t$ of the target, and the zoom binocular camera 6 acquires a video image $P_b$ of the target;
(B2) the audio signal $V$ from (B1) is preprocessed to obtain $V_{pre}$; the preprocessing comprises filtering, pre-emphasis, windowing and framing, where the filtering first computes the distance range between the target and the microphone array from the obtained spatial coordinates of the target and then filters selectively according to that range. The frequency-domain feature $V_{eig}$ is then extracted by formula (2-1):

$V_{eig} = MFCC(V_{pre})$ (2-1)

The thermal image $P_t$ from (B1) is preprocessed to obtain $P_{t\text{-}pre}$, and the thermal-image feature $P_{t\text{-}eig}$ is extracted by formula (2-2):

$P_{t\text{-}eig} = PCA(P_{t\text{-}pre})$ (2-2)

The video image $P_p$ from (B1) is preprocessed to obtain $P_{p\text{-}pre}$, and the image feature $P_{p\text{-}eig}$ is extracted by formula (2-3):

$P_{p\text{-}eig} = HOG(P_{p\text{-}pre})$ (2-3)
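A hypothetical realization of the feature extractors (2-1)-(2-3) using common libraries (librosa for MFCC, scikit-learn for PCA, scikit-image for HOG); the sampling rate, MFCC count, the fitted PCA model and the HOG cell sizes are assumptions for illustration:

```python
import numpy as np
import librosa
from skimage.feature import hog

def extract_features(v_pre, sr, p_t_pre, p_p_pre, pca_model):
    """Compute V_eig, P_t-eig, P_p-eig per formulas (2-1)-(2-3)."""
    # (2-1) frequency-domain feature: mean MFCC vector over frames
    v_eig = librosa.feature.mfcc(y=v_pre, sr=sr, n_mfcc=20).mean(axis=1)

    # (2-2) thermal-image feature: projection onto principal components
    # (pca_model is a sklearn PCA fitted on flattened training thermal images
    #  of the same resolution)
    p_t_eig = pca_model.transform(p_t_pre.reshape(1, -1))[0]

    # (2-3) video-image feature: histogram of oriented gradients
    p_p_eig = hog(p_p_pre, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return v_eig, p_t_eig, p_p_eig
```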
(B3) the features $V_{eig}$, $P_{t\text{-}eig}$ and $P_{p\text{-}eig}$ obtained in step (B2) are used as input data of the first SVM classifier, whose classification recognition model recognizes whether the target is an unmanned aerial vehicle; the output of the first SVM classifier is "unmanned aerial vehicle" or "not an unmanned aerial vehicle". The classification recognition model of the first SVM classifier is:

$Y_1 = SVM_{uob}(V_{eig}, P_{t\text{-}eig}, P_{p\text{-}eig})$ (2-4)
(B4) when step (B3) identifies the intrusion target as an unmanned aerial vehicle, the video image $P_b$ from (B1) is preprocessed to obtain $P_{b\text{-}pre}$, and the image feature $P_{b\text{-}eig}$ is then extracted by formula (2-5):

$P_{b\text{-}eig} = HOG(P_{b\text{-}pre})$ (2-5)

(B5) the sound feature $V_{eig}$ obtained in step (B2) and the image feature $P_{b\text{-}eig}$ obtained in step (B4) are used as input data of the second SVM classifier, whose classification recognition model recognizes the specific type of the unmanned aerial vehicle; the output comprises gyroplane, glider, airship and hot-air balloon. The classification recognition model of the second SVM classifier is:

$Y_2 = SVM_{kou}(V_{eig}, P_{b\text{-}eig})$ (2-6)
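The two-layer cascade of (B3)-(B5) can be sketched as follows, with scikit-learn's SVC standing in for SVM_uob and SVM_kou; the label convention (1 = UAV) and the plain feature concatenation are assumptions, since the patent does not fix them:

```python
import numpy as np

def classify_target(svm_uob, svm_kou, v_eig, p_t_eig, p_p_eig, p_b_eig):
    """Two-layer cascade of (2-4) and (2-6).

    svm_uob : first classifier, binary (UAV / not UAV)        -> Y1
    svm_kou : second classifier, UAV type (gyroplane, ...)    -> Y2
    """
    # (B3) fuse sound + thermal + video features for the first SVM
    x1 = np.concatenate([v_eig, p_t_eig, p_p_eig]).reshape(1, -1)
    if svm_uob.predict(x1)[0] != 1:        # assumed label 1 = "UAV"
        return "not a UAV"

    # (B5) sound + binocular-image features decide the specific type
    x2 = np.concatenate([v_eig, p_b_eig]).reshape(1, -1)
    return svm_kou.predict(x2)[0]          # e.g. gyroplane / glider / ...
```

Because the second, finer-grained classifier only runs after a positive first-stage result, most non-UAV intrusions are rejected cheaply, which is the efficiency benefit the double-layer design claims.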
The classification recognition models of the first SVM classifier and the second SVM classifier are obtained through the following steps:
(C1) training and test data sets are established from the collected audio, thermal-imaging and video images of living creatures, gyroplanes, gliders, airships and hot-air balloons, after preprocessing, feature extraction and normalization;
(C2) with the optimized penalty factor $C$ and Gaussian-kernel width parameter $\sigma^2$, the SVM classifier is trained on the training data set using a cross-validation method; after training, a recognition model containing the optimal hyperplane is obtained, and the trained recognition model is saved;
(C3) the classification performance of the recognition model saved in step (C2) is tested on the test data set, and the test classification results are output;
(C4) if the test classification results meet the requirement, the model is used as the classification recognition model; if not, steps (C2) and (C3) are repeated.
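Steps (C1)-(C4) map naturally onto a cross-validated grid search. A sketch under assumed grid values and split ratio follows; note that for scikit-learn's RBF kernel the width parameter corresponds to gamma = 1/(2σ²):

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score

def train_svm(features, labels, min_accuracy=0.95):
    """(C1)-(C4): cross-validated training of one SVM classifier.

    features : (n_samples, n_features) normalized feature matrix
    labels   : (n_samples,) class labels
    """
    # (C1) split the collected samples into training and test sets
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)

    # (C2) cross-validated search over C and the Gaussian-kernel width
    # (gamma = 1 / (2 * sigma^2) for sklearn's RBF kernel)
    grid = GridSearchCV(SVC(kernel="rbf"),
                        param_grid={"C": [0.1, 1, 10, 100],
                                    "gamma": [1e-3, 1e-2, 1e-1, 1]},
                        cv=5)
    grid.fit(X_tr, y_tr)

    # (C3) test the classification effect of the best saved model
    acc = accuracy_score(y_te, grid.best_estimator_.predict(X_te))

    # (C4) accept the model only if the test result meets the requirement
    if acc < min_accuracy:
        raise RuntimeError(f"accuracy {acc:.3f} below target; widen the grid "
                           "or collect more data and retrain")
    return grid.best_estimator_
```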
Step four: when the intrusion target is identified as an unmanned aerial vehicle, the zoom infrared camera 5, the zoom binocular camera 6 and the far-field microphone array continue to track the intruding unmanned aerial vehicle;
Step five: when the unmanned aerial vehicle enters the range of the near-field microphone array, the near-field microphone array starts to work and the far-field microphone array goes on standby.
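The near-field/far-field hand-off of steps four and five reduces to a range check on the fused position estimate; in this sketch the hand-off distance is purely an assumed value, as the patent does not specify one:

```python
NEAR_FIELD_RANGE_M = 50.0  # assumed hand-off distance, not given in the patent

def select_microphone_array(target_xyz):
    """Steps four/five: activate the near-field array once the UAV is close,
    otherwise keep the far-field array tracking and the near-field idle."""
    x, y, z = target_xyz
    dist = (x * x + y * y + z * z) ** 0.5
    return "near-field" if dist <= NEAR_FIELD_RANGE_M else "far-field"
```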
The utility model performs intrusion detection with a panoramic camera, then tracks the intrusion target in real time through the multi-sensor linkage tracking method, and finally carries out multi-parameter fusion identification of the intrusion target with the double-layer SVM classifier. The utility model achieves a high accuracy rate; in addition, the double-layer SVM design greatly improves the overall working efficiency of the system. At the same time, the multi-sensor linkage tracking largely frees each microphone of the far-field microphone array from covering a fixed detection area, which effectively increases the detection distance of the microphone array and the effective monitoring range of the entire system.
The utility model is suitable for low-altitude unmanned aerial vehicle monitoring of important areas, to ensure the low-altitude safety of those areas. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and any modifications and variations of the invention are within the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A low-altitude unmanned aerial vehicle detection device based on sound and image fusion, characterized in that: the device comprises a support (1), a near-field microphone array, a far-field microphone array, a panoramic camera (4), a zoom infrared camera (5), a zoom binocular camera (6) and a controller (8); the near-field microphone array is mounted equidistantly on the top, bottom, left and right parts of the support (1); the far-field microphone array is parallel to the near-field microphone array and is mounted on the support (1) at its outer end; the panoramic camera (4), the zoom infrared camera (5) and the zoom binocular camera (6) are all mounted at the central part of the support (1); the near-field microphone array, the far-field microphone array, the panoramic camera (4), the zoom infrared camera (5) and the zoom binocular camera (6) are each in communication connection with the controller (8).
2. The low-altitude unmanned aerial vehicle detection device based on sound and image fusion according to claim 1, characterized in that: the near-field microphone array comprises a plurality of near-field microphones; the far-field microphone array comprises a plurality of far-field microphones, a plurality of audio-detection pan-tilts and a plurality of sound-gathering hoods; the pickup heads of the far-field microphones are respectively installed in the corresponding sound-gathering hoods, the far-field microphones are respectively fixed on the corresponding audio-detection pan-tilts, and their motion is controlled by the audio-detection pan-tilts.
3. The low-altitude unmanned aerial vehicle detection device based on sound and image fusion according to claim 1 or 2, characterized in that: the zoom infrared camera (5) and the zoom binocular camera (6) are fixed together side by side and mounted on the infrared binocular pan-tilt (7).
4. The low-altitude unmanned aerial vehicle detection device based on sound and image fusion according to claim 2, characterized in that: the near-field microphone array comprises four near-field microphones (2A, 2B, 2C, 2D), and the far-field microphone array comprises four far-field microphones (3A, 3B, 3C, 3D), four audio-detection pan-tilts (3E, 3F, 3G, 3H) and four sound-gathering hoods (3I, 3J, 3K, 3L).
5. The low-altitude unmanned aerial vehicle detection device based on sound and image fusion according to claim 4, characterized in that: the sound-gathering hood is a paraboloid of revolution formed by rotating a parabola about its axis.
6. The low-altitude unmanned aerial vehicle detection device based on sound and image fusion according to claim 5, characterized in that: the pickup heads of the far-field microphones (3A, 3B, 3C, 3D) are respectively installed at the foci of the corresponding sound-gathering hoods (3I, 3J, 3K, 3L), and the far-field microphones (3A, 3B, 3C, 3D) are respectively fixed on the audio-detection pan-tilts (3E, 3F, 3G, 3H).
7. The low-altitude unmanned aerial vehicle detection device based on sound and image fusion according to claim 4, characterized in that: the controller comprises five pan-tilt control channels, eight audio-stream channels, three video-stream channels, a CPU, a GPU and a memory; the pan-tilt controls are connected with the audio-detection pan-tilts and the infrared binocular pan-tilt through corresponding pan-tilt control sensors, the audio streams are connected with the audio-detection pan-tilts through corresponding audio-stream sensors, and the video streams are connected with the panoramic camera (4), the zoom infrared camera (5) and the zoom binocular camera (6) through corresponding video-stream sensors; the CPU communicates with the GPU, and the CPU and the GPU each communicate with the memory.
8. The low-altitude unmanned aerial vehicle detection device based on sound and image fusion according to claim 1, characterized in that: the near-field microphone array, the far-field microphone array, the panoramic camera (4), the zoom infrared camera (5) and the zoom binocular camera (6) are each in communication connection with the controller (8) via any one or more of the following modes: an Ethernet interface, an optical interface, 4G/5G, WiFi.
CN202022700370.XU 2020-11-20 2020-11-20 Low-altitude unmanned aerial vehicle detection device based on sound and image fusion Active CN213754654U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202022700370.XU CN213754654U (en) 2020-11-20 2020-11-20 Low-altitude unmanned aerial vehicle detection device based on sound and image fusion


Publications (1)

Publication Number Publication Date
CN213754654U 2021-07-20

Family

ID=76826607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202022700370.XU Active CN213754654U (en) 2020-11-20 2020-11-20 Low-altitude unmanned aerial vehicle detection device based on sound and image fusion

Country Status (1)

Country Link
CN (1) CN213754654U (en)

Similar Documents

Publication Publication Date Title
CN112270680B (en) Low altitude unmanned detection method based on sound and image fusion
US11915502B2 (en) Systems and methods for depth map sampling
CN109444911B (en) Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion
CN108052097B (en) Method for training heterogeneous sensing system and heterogeneous sensing system
CN108226951B (en) Laser sensor based real-time tracking method for fast moving obstacle
CN107352032B (en) Method for monitoring people flow data and unmanned aerial vehicle
US9429650B2 (en) Fusion of obstacle detection using radar and camera
US10671068B1 (en) Shared sensor data across sensor processing pipelines
CN108646739A (en) A kind of sensor information fusion method
US11061122B2 (en) High-definition map acquisition system
CN106774363B (en) Unmanned aerial vehicle flight control system and method
CN108162858B (en) Vehicle-mounted monitoring device and method thereof
KR102266996B1 (en) Method and apparatus for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor
KR102472075B1 (en) System and method for supporting automatic detection service based on real-time road image and radar signal analysis results
RU2755603C2 (en) System and method for detecting and countering unmanned aerial vehicles
CN111652067B (en) Unmanned aerial vehicle identification method based on image detection
Amin et al. Quality of obstacle distance measurement using ultrasonic sensor and precision of two computer vision-based obstacle detection approaches
CN110619276A (en) Anomaly and violence detection system and method based on unmanned aerial vehicle mobile monitoring
CN111127837A (en) Alarm method, camera and alarm system
CN115291219A (en) Method and device for realizing dynamic obstacle avoidance of unmanned aerial vehicle by using monocular camera and unmanned aerial vehicle
CN112183330A (en) Target detection method based on point cloud
CN115034324A (en) Multi-sensor fusion perception efficiency enhancement method
CN213754654U (en) Low-altitude unmanned aerial vehicle detection device based on sound and image fusion
JP2022537557A (en) Method and apparatus for determining drivable area information
CN110827257B (en) Visual navigation positioning method for embedded airborne infrared image

Legal Events

Date Code Title Description
GR01 Patent grant