CN116740334B - Unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO
- Publication number
- CN116740334B (application CN202310743710.2A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- target
- camera
- cameras
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to the technical field of unmanned aerial vehicle (UAV) positioning, in particular to a UAV intrusion detection and positioning method based on binocular vision and an improved YOLO model. Two cameras are arranged in parallel, left and right, over the region to be monitored, and calibration yields the intrinsic and extrinsic parameters of both cameras. The right camera image is read, the UAV target in it is extracted with the improved model, the target's pixel coordinates and size in the right camera are recorded, and a mask image of the target is produced. The mask image is then used to locate the UAV target in the left camera, and its pixel coordinates there are recorded. From the pixel coordinates of the UAV in the left and right cameras, the imaging principle of the binocular camera and the intrinsic and extrinsic parameters of the two cameras yield the spatial three-dimensional coordinates of the UAV target. The improved YOLO model effectively reduces the parameter count and model size, which facilitates running on edge devices, while also alleviating the problem of low detection accuracy for small targets.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle positioning, in particular to an unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO.
Background
The rapid development of unmanned aerial vehicle (UAV) technology has led to the wide use of UAVs in military, civil and other fields. However, as the number of UAVs grows, UAV intrusion has become an increasingly serious problem that brings potential safety hazards to society. For example, drones may be used for spying, monitoring sensitive areas or attacking targets, posing a threat to national security and personal privacy; in cities, a drone's flight altitude and path may bring it into collision with buildings or vehicles, causing accidents. It is therefore highly necessary to detect and locate intruding UAVs.
The common UAV intrusion detection and positioning methods currently rely on radar, infrared sensors and other equipment for monitoring. However, such equipment is costly and susceptible to environmental influences, which limits its effectiveness in practical applications. Image-based UAV intrusion detection has therefore drawn increasing attention. One-stage detection algorithms train a neural network with the data and labels in a data set and omit the candidate-region generation step, placing feature extraction, target classification and target regression all inside the network, which greatly improves detection speed. However, existing target detection models have large parameter counts and high hardware requirements, making them difficult to deploy on edge devices, and their accuracy on small targets remains unsatisfactory.
Therefore, there is a need to provide an unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO, which solves the above technical problems.
Disclosure of Invention
In order to solve the technical problems, the invention provides an unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO.
The invention provides an unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO, which comprises the following steps:
S1, training an improved YOLOv5 model on a public real-world training set and a self-built data set;
s2, arranging two cameras in parallel on the left and right sides of a region to be monitored, calibrating to obtain internal and external parameters of the two cameras, and setting a warning distance from the unmanned aerial vehicle to the binocular system;
S3, reading the right camera image, extracting the unmanned aerial vehicle target in it with the model obtained in S1, recording the pixel coordinates and size of the target in the right camera, and producing a mask image of the target;
s4, positioning the unmanned aerial vehicle target in the left camera by using the mask image obtained in the S3, and recording the pixel coordinates of the unmanned aerial vehicle target in the left camera;
s5, according to the pixel coordinates of the unmanned aerial vehicle in the left camera and the right camera obtained in the S3 and the S4, the imaging principle of the binocular camera and the internal and external parameters in the S2 are utilized to obtain the space three-dimensional coordinates of the unmanned aerial vehicle target.
Preferably, in the improved YOLOv5 model in S1, the SPD module replaces the strided convolution blocks used for downsampling in the Backbone part, ShuffleBlock replaces the C3 module, and the SPPF module is removed; in the Head part, the dynamic convolution block ODConv replaces the C3 module. The detection model is trained with the training set until the test requirements are met, yielding the final detection model.
It should be noted that: the self-built data set comes from self-shot unmanned aerial vehicle video, from which frames are sampled, 300 frames are randomly extracted, and the frames are annotated with the LabelImg tool. The data set used for model training consists of 51,746 colour images of 640 x 480, each containing zero to three unmanned aerial vehicle targets. Adding the self-built data introduces small FPV racing-drone targets that do not appear in the original data set, enriching it.
Preferably, in the step S2, two cameras are arranged in parallel on the left and right sides of the area to be monitored, and the specific steps of obtaining the internal and external parameters of the two cameras and setting the warning distance are as follows:
S21, arranging two cameras in the area to be monitored; the two cameras are either two monocular cameras placed in parallel, left and right, or an integrated binocular camera.
S22, performing stereo calibration on the camera module to obtain the intrinsic and extrinsic parameters of the two cameras.
S23, determining the warning distance between the unmanned aerial vehicle and the binocular system.
It should be noted that: the intrinsic parameters need to include the focal length, principal point, camera resolution, radial distortion coefficients, tangential distortion coefficients and reprojection error; the extrinsic parameters need to include a rotation matrix and a translation vector. The focal length must be expressed in pixels, and the translation between the two cameras must contain an offset along the X axis only.
Preferably, the step of recording the pixel coordinates and the size of the unmanned aerial vehicle target in the right camera and making the mask image of the target in S3 includes:
S31, target detection is treated as a regression problem: the input picture is divided into an S x S grid, and if the centre of a detected target falls within a cell, that cell is responsible for predicting the target. Each cell generates B bounding boxes, each containing the offset of the object centre relative to the cell position, the width and height of the box, and the target confidence. Non-maximum suppression is applied to the bounding boxes to obtain the top-left and bottom-right pixel coordinates of the unmanned aerial vehicle target.
S32, the corresponding region is cropped from the original image according to these two coordinates, generating the mask image of the unmanned aerial vehicle target.
Preferably, in S4, the specific steps of locating the unmanned aerial vehicle target in the left camera using the mask image obtained in S3 and recording its pixel coordinates are:
S41, feature extraction: squared-difference features are extracted from the mask image obtained in S3.
S42, feature comparison: a sliding window is built from the size of the mask image and its similarity score is computed across the target image.
S43, best-match positioning: after the scores are computed, the highest-scoring region in the target image is found and the pixel coordinates of its upper-left and lower-right corners are recorded.
Preferably, the specific steps of obtaining the spatial three-dimensional coordinates of the unmanned aerial vehicle target in S5 are:
S51, the pixel coordinates in binocular vision are converted into spatial coordinates by triangulation.
S52, the pixel coordinates of the same object in the images acquired by the left and right cameras are denoted (u_l, v_l) and (u_r, v_r) respectively, and the spatial coordinates (X, Y, Z) of the object are calculated with the parallel-binocular triangulation relations:

Z = f · T_x / (u_l - u_r)
X = (u_l - c_x) · Z / f
Y = (v_l - c_y) · Z / f

wherein T_x represents the distance (baseline) between the two cameras, (c_x, c_y) is the optical-centre coordinate of the camera, f is the pixel focal length, and (X_0, Y_0, Z_0) is the position coordinate of the camera, which is added to express the result in the world frame. Substituting the intrinsic and extrinsic parameter matrices obtained by the calibration in S2 together with the placement coordinates of the cameras gives the spatial coordinates of the unmanned aerial vehicle, and the distance between the unmanned aerial vehicle and the restricted area is then obtained with the distance formula between three-dimensional coordinates.
Compared with the related art, the unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO has the following beneficial effects:
1. The invention is based on computer vision, places few demands on the environment and is little affected by environmental interference.
2. The detection method provided by the invention uses low-cost equipment that is easy to maintain, keeping daily running costs low.
3. The invention uses an improved YOLO model that effectively reduces the parameter count and model size, which facilitates running on edge devices, while alleviating the problem of low detection accuracy for small targets.
Drawings
Fig. 1 is a schematic flow chart of the unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO;
Fig. 2 is a structural diagram of the improved YOLO model;
Fig. 3 is a schematic diagram of the basic structure of ODConv;
Fig. 4 is a schematic diagram of the basic structure of the SPD module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
A specific implementation of the unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO is described below in connection with specific embodiments.
The invention provides an unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO, which comprises the following steps:
S1, training an improved YOLOv5 model on a public real-world training set and a self-built data set;
s2, arranging two cameras in parallel on the left and right sides of a region to be monitored, calibrating to obtain internal and external parameters of the two cameras, and setting a warning distance from the unmanned aerial vehicle to the binocular system;
S3, reading the right camera image, extracting the unmanned aerial vehicle target in it with the model obtained in S1, recording the pixel coordinates and size of the target in the right camera, and producing a mask image of the target;
s4, positioning the unmanned aerial vehicle target in the left camera by using the mask image obtained in the S3, and recording the pixel coordinates of the unmanned aerial vehicle target in the left camera;
s5, according to the pixel coordinates of the unmanned aerial vehicle in the left camera and the right camera obtained in the S3 and the S4, the imaging principle of the binocular camera and the internal and external parameters in the S2 are utilized to obtain the space three-dimensional coordinates of the unmanned aerial vehicle target.
Preferably, in the improved YOLOv5 model in S1, the SPD module replaces the strided convolution blocks used for downsampling in the Backbone part, ShuffleBlock replaces the C3 module, and the SPPF module is removed; in the Head part, the dynamic convolution block ODConv replaces the C3 module. The detection model is trained with the training set until the test requirements are met, yielding the final detection model.
It should be noted that: the self-built data set comes from self-shot unmanned aerial vehicle video and contains photos of drones of various models and from various angles to ensure recognition accuracy. Frames are sampled from the video, 300 frames are randomly extracted, and they are annotated with the LabelImg tool. The data set used for model training consists of 51,746 colour images of 640 x 480, each containing zero to three unmanned aerial vehicle targets. Adding the self-built data introduces small FPV racing-drone targets that do not appear in the original data set, enriching it.
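To make the frame-sampling step concrete, the following minimal Python sketch pulls 300 random frames from a self-shot drone video and saves them at the 640 x 480 training resolution; the video path, output directory and file names are illustrative assumptions, not details fixed by the invention.

```python
import random
import cv2

def sample_frames(video_path: str, out_dir: str, n_frames: int = 300) -> None:
    """Randomly extract n_frames frames from a video for annotation."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Randomly choose which frame indices to keep (the 300-frame extraction).
    keep = sorted(random.sample(range(total), min(n_frames, total)))
    for i, idx in enumerate(keep):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        # Resize to the 640 x 480 resolution used by the training set.
        frame = cv2.resize(frame, (640, 480))
        cv2.imwrite(f"{out_dir}/drone_{i:04d}.jpg", frame)
    cap.release()

sample_frames("self_shot_drone.mp4", "dataset/images")  # assumed paths
```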
ODConv is a plug-and-play attention module whose structure is shown in Fig. 3; it uses a multi-dimensional attention mechanism with a parallel strategy to learn complementary attention along four dimensions of the kernel space. As a plug-and-play operation it can easily be embedded into existing CNNs, and experimental results show that it improves the performance of both large and lightweight models.
The SPD module was developed by Raja Sunkara's team at Missouri University of Science and Technology; its structure is shown in Fig. 4. It consists of a space-to-depth (SPD) layer and a non-strided (stride 1) convolution layer, so that learnable information is not lost during downsampling, which improves accuracy on small targets.
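A minimal PyTorch sketch of such an SPD downsampling block is given below; the channel sizes, the 3 x 3 kernel and the stride-2 scale are illustrative assumptions rather than the exact configuration of the improved model.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth downsampling followed by a non-strided convolution."""

    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # Non-strided (stride 1) convolution after the rearrangement, so no
        # learnable information is thrown away during downsampling.
        self.conv = nn.Conv2d(in_ch * scale * scale, out_ch, 3, stride=1, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.scale
        b, c, h, w = x.shape
        # Space-to-depth: move each s x s spatial block into the channel axis.
        x = x.view(b, c, h // s, s, w // s, s)
        x = x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * s * s, h // s, w // s)
        return self.conv(x)

feat = torch.randn(1, 64, 160, 160)
print(SPDConv(64, 128)(feat).shape)  # -> torch.Size([1, 128, 80, 80])
```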
In the embodiment of the present invention, in the step S2, two cameras are arranged in parallel on the left and right sides of the area to be monitored, and the specific steps of obtaining the internal and external parameters of the two cameras and setting the warning distance are:
S21, arranging two cameras in the area to be monitored; the two cameras are either two monocular cameras placed in parallel, left and right, or an integrated binocular camera.
S22, performing stereo calibration on the camera module to obtain the intrinsic and extrinsic parameters of the two cameras.
S23, determining the warning distance between the unmanned aerial vehicle and the binocular system.
It should be noted that: the intrinsic parameters need to include the focal length, principal point, camera resolution, radial distortion coefficients, tangential distortion coefficients and reprojection error; the extrinsic parameters need to include a rotation matrix and a translation vector. The focal length must be expressed in pixels, and the translation between the two cameras must contain an offset along the X axis only.
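A hedged sketch of this calibration step (S21-S22) with OpenCV follows; the chessboard pattern size, square size and image paths are assumptions for illustration. It returns the intrinsics (pixel focal length, principal point, distortion coefficients), the reprojection error, and the extrinsics (rotation matrix R and translation vector T, whose X component is the baseline).

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)   # assumed inner-corner grid of the chessboard
square = 25.0      # assumed square size in millimetres
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("calib/left_*.png")),
                  sorted(glob.glob("calib/right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

# Calibrate each camera, then solve for the rotation R and translation T
# between them; rms is the reprojection error mentioned above.
_, Kl, Dl, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, Kr, Dr, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)
rms, Kl, Dl, Kr, Dr, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, Kl, Dl, Kr, Dr, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
print("baseline Tx (mm):", T[0, 0], "reprojection error:", rms)
```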
In the embodiment of the present invention, the specific steps of recording the pixel coordinates and the size of the unmanned aerial vehicle target in the right camera and making the mask image of the target in S3 are as follows:
S31, target detection is treated as a regression problem: the input picture is divided into an S x S grid, and if the centre of a detected target falls within a cell, that cell is responsible for predicting the target. Each cell generates B bounding boxes, each containing the offset of the object centre relative to the cell position, the width and height of the box, and the target confidence. Non-maximum suppression is applied to the bounding boxes to obtain the top-left and bottom-right pixel coordinates of the unmanned aerial vehicle target.
S32, the corresponding region is cropped from the original image according to these two coordinates, generating the mask image of the unmanned aerial vehicle target.
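The sketch below illustrates S31-S32 under the assumption that the improved detector is exported in the standard YOLOv5 weight format and loaded through torch.hub; the weights path improved_yolov5.pt and the choice of the highest-confidence detection are illustrative.

```python
import torch

# Assumed deployment: the improved model saved as standard YOLOv5 weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="improved_yolov5.pt")

def extract_template(right_img):
    """Detect the drone in the right image and crop it as the template."""
    results = model(right_img[:, :, ::-1])  # BGR -> RGB for the hub model
    det = results.xyxy[0]                   # rows: x1, y1, x2, y2, conf, cls (after NMS)
    if det.shape[0] == 0:
        return None, None
    best = det[det[:, 4].argmax()]          # highest-confidence target
    x1, y1, x2, y2 = (int(v) for v in best[:4].tolist())
    # Crop the detected region from the original image as the mask/template.
    return (x1, y1, x2, y2), right_img[y1:y2, x1:x2]
```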
In the embodiment of the present invention, the specific steps in S4 of locating the unmanned aerial vehicle target in the left camera using the mask image obtained in S3 and recording its pixel coordinates are as follows:
S41, feature extraction: squared-difference features are extracted from the mask image obtained in S3.
S42, feature comparison: a sliding window is built from the size of the mask image and its similarity score is computed across the target image.
S43, best-match positioning: after the scores are computed, the highest-scoring region in the target image is found and the pixel coordinates of its upper-left and lower-right corners are recorded.
It should be noted that: template matching is performed with the mask image using the matchTemplate function in OpenCV, with normalized correlation coefficient matching as the matching mode; the degree of correlation between two variables is calculated in the following steps:
1. For two variables X and Y, compute their means x̄ and ȳ.
2. Compute their standard deviations s_x and s_y.
3. Compute their covariance: cov(X, Y) = (1/n) Σ (x_i − x̄)(y_i − ȳ).
4. Compute the normalized correlation coefficient: r = cov(X, Y) / (s_x · s_y).
Here n denotes the sample size. The normalized correlation coefficient takes values in [−1, 1] and describes the strength and direction of the linear relationship between the two variables: r = 1 indicates a complete positive correlation, r = −1 a complete negative correlation, and r = 0 no linear correlation. The unmanned aerial vehicle target coordinates in the other view are derived in this way.
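A minimal OpenCV sketch of this matching step (S41-S43) follows; cv2.TM_CCOEFF_NORMED evaluates the normalized correlation coefficient r described above at every window position.

```python
import cv2

def locate_in_left(left_img, template):
    """Slide the template over the left image and return the best match."""
    scores = cv2.matchTemplate(left_img, template, cv2.TM_CCOEFF_NORMED)
    # minMaxLoc returns the extrema; max_loc is the top-left corner of the
    # highest-scoring region.
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    h, w = template.shape[:2]
    top_left = max_loc
    bottom_right = (max_loc[0] + w, max_loc[1] + h)
    return top_left, bottom_right, max_val
```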
In the embodiment of the present invention, the specific steps in S5 of obtaining the spatial three-dimensional coordinates of the unmanned aerial vehicle target are:
S51, the pixel coordinates in binocular vision are converted into spatial coordinates by triangulation.
S52, the pixel coordinates of the same object in the images acquired by the left and right cameras are denoted (u_l, v_l) and (u_r, v_r) respectively, and the spatial coordinates (X, Y, Z) of the object are calculated with the parallel-binocular triangulation relations:

Z = f · T_x / (u_l - u_r)
X = (u_l - c_x) · Z / f
Y = (v_l - c_y) · Z / f

wherein T_x represents the distance (baseline) between the two cameras, (c_x, c_y) is the optical-centre coordinate of the camera, f is the pixel focal length, and (X_0, Y_0, Z_0) is the position coordinate of the camera, which is added to express the result in the world frame. Substituting the intrinsic and extrinsic parameter matrices obtained by the calibration in S2 together with the placement coordinates of the cameras gives the spatial coordinates of the unmanned aerial vehicle, and the distance between the unmanned aerial vehicle and the restricted area is then obtained with the distance formula between three-dimensional coordinates.
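A short Python sketch of S51-S52 under the parallel-binocular assumption is given below, together with the warning-distance check from S23; all variable names and values are placeholders.

```python
import math

def triangulate(ul, vl, ur, vr, f, cx, cy, Tx, cam_pos=(0.0, 0.0, 0.0)):
    """Recover (X, Y, Z) from left/right pixel coordinates of the same target.

    vr is unused here: under ideal rectification vl == vr.
    """
    d = ul - ur                    # horizontal disparity between the two views
    if d <= 0:
        raise ValueError("non-positive disparity: views not rectified?")
    Z = f * Tx / d                 # depth from similar triangles
    X = (ul - cx) * Z / f
    Y = (vl - cy) * Z / f
    X0, Y0, Z0 = cam_pos           # shift into the world frame of the camera rig
    return X + X0, Y + Y0, Z + Z0

def intrusion_alert(target_xyz, zone_xyz, warning_dist):
    """Distance formula between three-dimensional coordinates vs. the alert radius."""
    dist = math.dist(target_xyz, zone_xyz)
    return dist < warning_dist, dist
```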
The circuits and control involved in the present invention are all of the prior art, and are not described in detail herein.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present invention.
Claims (5)
1. The unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO is characterized by comprising the following steps of:
S1, in the Backbone part of a YOLOv5 model, an SPD module replaces the strided convolution block used for downsampling, ShuffleBlock replaces the C3 module, and the SPPF module is removed; in the Head part, a dynamic convolution block ODConv replaces the C3 module; the improved YOLOv5 model is trained with a training set until the test requirements are met, yielding the final detection model;
s2, arranging two cameras in parallel on the left and right sides of a region to be monitored, calibrating to obtain internal and external parameters of the two cameras, and setting a warning distance between the unmanned aerial vehicle and a binocular system;
S3, reading the right camera image and extracting the unmanned aerial vehicle target in it with the model obtained in S1: the input picture is divided into an S x S grid, and if the centre of the detected target falls within a cell, that cell predicts the target; each cell generates B bounding boxes, each containing the offset of the object centre relative to the cell position, the width and height of the box, and the target confidence; non-maximum suppression is applied to the bounding boxes to obtain the top-left and bottom-right pixel coordinates of the unmanned aerial vehicle target, the corresponding region is cropped from the original image according to these two coordinates, and a mask image of the unmanned aerial vehicle target is generated;
s4, positioning the unmanned aerial vehicle target in the left camera by using the mask image obtained in the S3, and recording the pixel coordinates of the unmanned aerial vehicle target in the left camera;
s5, according to the pixel coordinates of the unmanned aerial vehicle in the left camera and the right camera obtained in the S3 and the S4, the imaging principle of the binocular camera and the internal and external parameters in the S2 are utilized to obtain the space three-dimensional coordinates of the unmanned aerial vehicle target.
2. The unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO according to claim 1, wherein in S2 the specific steps of arranging two cameras in parallel on the left and right of the region to be monitored, obtaining the internal and external parameters of the two cameras and setting the warning distance are:
S21, arranging two cameras in the region to be monitored, the two cameras being either two monocular cameras placed in parallel, left and right, or an integrated binocular camera;
S22, performing stereo calibration on the camera module to obtain the intrinsic and extrinsic parameters of the two cameras;
s23, determining the warning distance between the unmanned aerial vehicle and the binocular system.
3. The unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO according to claim 1, wherein the specific steps of using the mask image obtained in S3 to position the unmanned aerial vehicle object in the left camera and recording the pixel coordinates of the unmanned aerial vehicle object in the left camera are as follows:
S41, feature extraction: squared-difference features are extracted from the mask image obtained in S3;
S42, feature comparison: a sliding window is built from the size of the mask image and its similarity score is computed across the target image;
S43, best-match positioning: after the scores are computed, the highest-scoring region in the target image is found and the pixel coordinates of its upper-left and lower-right corners are recorded.
4. The unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO according to claim 3, wherein the specific step of obtaining the spatial three-dimensional coordinates of the unmanned aerial vehicle target in S5 is as follows:
S51, the pixel coordinates in binocular vision are converted into spatial coordinates by triangulation;
S52, the pixel coordinates of the same object in the images acquired by the left and right cameras are denoted (u_l, v_l) and (u_r, v_r) respectively, and the spatial coordinates (X, Y, Z) of the object are calculated with the parallel-binocular triangulation relations:

Z = f · T_x / (u_l - u_r)
X = (u_l - c_x) · Z / f
Y = (v_l - c_y) · Z / f

wherein T_x represents the distance (baseline) between the two cameras, (c_x, c_y) is the optical-centre coordinate of the camera, f is the pixel focal length, and (X_0, Y_0, Z_0) is the position coordinate of the camera.
5. A terminal device comprising two cameras placed in parallel and a computer that reads the cameras and executes image processing instructions, wherein the image processing instructions are loaded and executed by the computer to implement the detection positioning method according to any one of claims 1 to 4.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310743710.2A CN116740334B (en) | 2023-06-23 | 2023-06-23 | Unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN116740334A | 2023-09-12 |
| CN116740334B | 2024-02-06 |
Family
- ID=87909526

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310743710.2A (CN116740334B, Active) | Unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO | 2023-06-23 | 2023-06-23 |
Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN116740334B (en) |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118549971A * | 2024-07-29 | 2024-08-27 | 北京领云时代科技有限公司 | Unmanned aerial vehicle swarm attack intention recognition system and method based on HMM model |
Citations (11)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2013129060A1 * | 2012-03-02 | 2013-09-06 | 株式会社ブイ・テクノロジー | Manufacturing device and manufacturing method for three-dimensional liquid crystal display device |
| CN108256504A * | 2018-02-11 | 2018-07-06 | 苏州笛卡测试技术有限公司 | Three-dimensional dynamic gesture recognition method based on deep learning |
| CN111260788A * | 2020-01-14 | 2020-06-09 | 华南理工大学 | Power distribution cabinet switch state identification method based on binocular vision |
| CN111563415A * | 2020-04-08 | 2020-08-21 | 华南理工大学 | Binocular vision-based three-dimensional target detection system and method |
| CN111721259A * | 2020-06-24 | 2020-09-29 | 江苏科技大学 | Underwater robot recovery positioning method based on binocular vision |
| CN113256778A * | 2021-07-05 | 2021-08-13 | 爱保科技有限公司 | Method, device, medium and server for generating vehicle appearance part identification samples |
| CN114565900A * | 2022-01-18 | 2022-05-31 | 广州软件应用技术研究院 | Target detection method based on improved YOLOv5 and binocular stereo vision |
| WO2022116478A1 * | 2020-12-04 | 2022-06-09 | 南京大学 | Three-dimensional reconstruction apparatus and method for flame spectrum |
| CN115471542A * | 2022-05-05 | 2022-12-13 | 济南大学 | Packaged object binocular recognition and positioning method based on YOLO v5 |
| CN115601437A * | 2021-07-27 | 2023-01-13 | 苏州星航综测科技有限公司 | Dynamic convergence binocular stereo vision system based on target recognition |
| CN116051658A * | 2023-03-27 | 2023-05-02 | 北京科技大学 | Camera hand-eye calibration method and device for target detection based on binocular vision |
Non-Patent Citations (5)

| Title |
|---|
| Liu H et al., "Research on a Binocular Fish Dimension Measurement Method Based on Instance Segmentation and Fish Tracking," 2022 34th Chinese Control and Decision Conference (CCDC), pp. 2791-2796 * |
| Wang Honglun et al., "Three-dimensional path planning for unmanned aerial vehicle based on interfered fluid dynamical system," Chinese Journal of Aeronautics, no. 1, pp. 229-239 * |
| Wang Yan et al., "Research on real-time tracking of respiratory motion based on binocular vision" (基于双目视觉的呼吸运动实时跟踪方法研究), 《生物医学工程学报》, vol. 37, no. 1, pp. 72-78 * |
| Zhang Hu et al., "Research on extracting three-dimensional workpiece coordinates based on binocular vision" (基于双目视觉的工件三维坐标提取方法研究), 《西安文理学院学报(自然科学版)》, no. 4, pp. 63-67 * |
| Zhao Jie et al., "Three-dimensional spatial positioning algorithm for materials based on binocular vision" (基于双目视觉的物料三维空间定位算法), 《科学技术与工程》, vol. 23, no. 18, pp. 7861-7867 * |
Also Published As

| Publication number | Publication date |
|---|---|
| CN116740334A (en) | 2023-09-12 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |