CN108711172B - Unmanned aerial vehicle identification and positioning method based on fine-grained classification - Google Patents


Info

Publication number
CN108711172B
CN108711172B CN201810371993.1A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
fine
information
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810371993.1A
Other languages
Chinese (zh)
Other versions
CN108711172A (en
Inventor
刘昊
魏志强
殷波
曲方超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN201810371993.1A priority Critical patent/CN108711172B/en
Publication of CN108711172A publication Critical patent/CN108711172A/en
Application granted granted Critical
Publication of CN108711172B publication Critical patent/CN108711172B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle identification and positioning method based on fine-grained classification. After coarse-grained object detection, fine-grained classification identifies the model of the unmanned aerial vehicle; the identified model is then looked up in an unmanned aerial vehicle model library to obtain its specific external structure information. Combined with the internal parameters of a camera, the two-dimensional coordinates of the unmanned aerial vehicle are mapped to three-dimensional coordinates to determine its position in three-dimensional space, and its track information can be obtained from the continuous three-dimensional coordinate information of successive frame pictures. The method solves the problem of inaccurate identification and positioning of unmanned aerial vehicles in the prior art.

Description

Unmanned aerial vehicle identification and positioning method based on fine-grained classification
Technical Field
The invention belongs to the technical field of unmanned aerial vehicles, and particularly relates to an unmanned aerial vehicle identification and positioning method based on fine-grained classification.
Background
In recent years, unmanned aerial vehicle technology has developed rapidly and is widely applied in many fields, which places higher demands on unmanned aerial vehicle detection, identification and positioning technology. At present, there are multiple schemes for detecting, identifying and positioning unmanned aerial vehicles, including satellite positioning and combinations of radar and cameras, but their detection performance is poor and they are easily disturbed by external signals.
To this end, those skilled in the art have made improvements. For example, the invention patent with application number 2016111441100 discloses an unmanned aerial vehicle identity recognition system for unmanned aerial vehicle control based on electronic information, comprising a remote sensing aircraft, a satellite, a ground signal transceiving base station and a remote sensing controller, and supporting both GPS and BeiDou positioning. The invention patent with application number 2017109352832 discloses a passive positioning and recognition system for civil unmanned aerial vehicles, which uses passive radar to detect, position and track unmanned aerial vehicles, receives their remote-control and telemetry signals, and completes recognition by analysing those signals. It can avoid the influence of bad weather, but it is limited to detecting whether an object is an unmanned aerial vehicle at all, and its accuracy in matching and identifying the identity of a specific unmanned aerial vehicle is low.
Prior-art unmanned aerial vehicle identification and detection can only accurately determine whether an object is an unmanned aerial vehicle and give its position and size in a two-dimensional image; it cannot accurately judge which model the unmanned aerial vehicle is or determine its external structural dimensions, let alone obtain its three-dimensional coordinates in the actual scene space. The present solution is designed to address this problem.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an unmanned aerial vehicle identification and positioning method based on fine-grained classification. Based on fine-grained classification after object detection, the identified unmanned aerial vehicle model is looked up in an unmanned aerial vehicle model library to obtain the relevant external structure information; combined with the internal parameters of a camera, the two-dimensional coordinates of the unmanned aerial vehicle are mapped to three-dimensional coordinates to determine its position in three-dimensional space, solving the problem of inaccurate identification and positioning of unmanned aerial vehicles.
In order to solve the technical problems, the invention adopts the technical scheme that: the unmanned aerial vehicle identification and positioning method based on fine-grained classification comprises the following steps:
(a) establishing an unmanned aerial vehicle database: establishing a computer vision database meeting fine-grained classification by acquiring the model and parameter information of the unmanned aerial vehicle;
(b) coarse-grained detection: end-to-end real-time object recognition and detection are carried out on the video shot by the camera through a YOLO network structure, accurately judging whether an object is an unmanned aerial vehicle;
(c) fine-grained classification: adopting a strong supervision learning method to complete deep learning on the basis of the data set in the unmanned aerial vehicle data set database in the step (a) so as to complete fine-grained classification, and accurately obtaining the detected type and model of the unmanned aerial vehicle after the fine-grained classification is completed;
(d) matching retrieval model number: after the specific model of the unmanned aerial vehicle is obtained, matching and retrieving specific information of the unmanned aerial vehicle of the model in an unmanned aerial vehicle database to obtain all information of the unmanned aerial vehicle, including material information and structure information, and obtaining the external visual characteristics of the unmanned aerial vehicle;
(e) calibrating a camera: obtaining internal parameters of a camera;
(f) space positioning: and combining the external visual features of the unmanned aerial vehicle with the internal parameters of the camera to obtain the position of the unmanned aerial vehicle in the three-dimensional space.
Further, in step (b), the YOLO network structure is optimized: each grid cell predicts four bounding boxes and the network outputs a 7 × 23 vector, realizing object detection on streaming video and detecting whether an object in the lens is an unmanned aerial vehicle at 60-70 FPS, thereby completing coarse-grained detection.
Further, in step (c), the prominent distinguishing features among unmanned aerial vehicles of different models are used as the local tiny features for fine-grained division; tiny-feature classifiers are obtained through training on the basis of detection, and the specific model of the unmanned aerial vehicle is finally obtained from the detection results of the several tiny-feature classifiers over regionalized frame pictures.
Further, in step (c), the local tiny features are identified and detected by down-sampling the unmanned aerial vehicle image with two-fold pixel amplification, as shown in formula (1):

X_i = YS(S_i) × [x_11 … x_1n; … ; x_n1 … x_nn]    (1)

where X_i represents the down-sampled image, S_i represents the original image, YS(S_i) represents a pixel position of the original image, and x_11, …, x_nn represent pixel values. The YS(S_i) function maps the position coordinates of a pixel of the original image to other positions, and the multiplication with the matrix amplifies the feature-pixel intensity at the tiny positions and weakens the pixel intensity of other areas, increasing the resolution and amplifying the visual-perception information of the local tiny features. The amplified local tiny features are then put into a deep network that serves as the classifier for fine-grained classification.
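The amplify-and-attenuate idea behind formula (1) can be sketched as follows. The function name, the box representation, and the exact weighting scheme are our illustrative assumptions, not the patent's exact operator:

```python
import numpy as np

def amplify_local_feature(image, box, scale=2.0, attenuation=0.5):
    """Boost pixel intensity inside a tiny-feature region (e.g. a rotor)
    and weaken it elsewhere, mimicking the matrix multiplication in
    formula (1). `image` is a 2-D float array; `box` is
    (row0, row1, col0, col1) in pixel coordinates."""
    r0, r1, c0, c1 = box
    weights = np.full(image.shape, attenuation)  # weaken other areas
    weights[r0:r1, c0:c1] = scale                # amplify the tiny feature
    return image * weights                       # element-wise weighting
```

The weighted crop would then be fed to the fine-grained classification network in place of the raw frame.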
Further, the YOLO network structure comprises 6 convolutional layers and a fully connected layer and outputs a 7 × 23 vector giving the predicted target position and category information. The output picture is sampled in the form of formula (1) and then sent to a fine-grained classification deep network comprising 4 convolutional layers and 1 fully connected layer; the fully connected layer outputs a 7 × 14 vector giving the fine-grained detected model of the unmanned aerial vehicle.
Further, by calibrating a camera and translating and rotating the matrix, converting two-dimensional coordinates in the image into three-dimensional coordinates, and establishing mapping from a planar picture to a three-dimensional space; and obtaining the two-dimensional coordinate position of the unmanned aerial vehicle through the YOLO network structure, and further obtaining the three-dimensional coordinate of the unmanned aerial vehicle in the three-dimensional space according to the mapping relation.
Further, track information of the unmanned aerial vehicle is obtained in the three-dimensional space through continuous three-dimensional coordinate information of the frame pictures.
Further, the extrinsic visual characteristics include structural information of the unmanned aerial vehicle, such as the length of the body, the width of the body, and the height of the body.
Further, the internal parameters of the camera include lens parameters, sensor parameters and pixel size.
Compared with the prior art, the invention has the advantages that:
(1) According to the method, a computer vision database meeting fine-grained classification is established by collecting numerous unmanned aerial vehicle models and parameter information; compared with existing databases, the dynamic visualization of the data set can be seen and detailed model information is labeled.
(2) By optimizing the YOLO network structure, the problem that the existing network structure has low accuracy in detecting short-distance objects is solved, and the problem of objects close to the ground truth being removed by non-maximum suppression (NMS) is reduced. The new network structure not only realizes object detection on streaming video but also solves the original network structure's problem with closely clustered groups of unmanned aerial vehicles, so that detection of unmanned aerial vehicles is realized and it can be accurately judged whether an object is an unmanned aerial vehicle, thereby completing coarse-grained detection.
(3) Based on fine-grained classification after object detection, a strongly supervised learning method completes fine-grained classification on the basis of a large data set, so the classification of the unmanned aerial vehicle is obtained and the model of the detected unmanned aerial vehicle is determined more accurately. The identified model is looked up in the unmanned aerial vehicle model library to retrieve the relevant external structure information, and, combined with the internal parameters of a camera, the two-dimensional coordinates of the unmanned aerial vehicle are mapped to three-dimensional coordinates to determine its position in three-dimensional space. This solves the inaccurate identification and positioning of the prior art, avoids the susceptibility of technical means such as radar and satellite to interference from the external environment, and makes the positioning accurate.
(4) The fine-grained classification detection method does not need to process edge marking points and eliminates the background-blurring step, so the speed of fine-grained classification detection can be greatly increased, fully meeting real-time fine-grained classification detection.
(5) The invention can also obtain the track information of the unmanned aerial vehicle in the three-dimensional space through the continuous three-dimensional coordinate information of the frame pictures.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention;
FIG. 2 is a schematic diagram illustrating the recognition and detection of local micro features according to the present invention;
FIG. 3 is a schematic flow chart of the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
As shown in fig. 1, in the unmanned aerial vehicle identification and positioning method based on fine-grained classification of the present invention, an unmanned aerial vehicle data set library is first established in order to implement fine-grained classification; at present no existing data set library meets the requirements of fine-grained classification detection, only coarse-grained classification. The invention collects, through experiments, a large data set covering various unmanned aerial vehicle models and parameter information, establishes a computer vision database meeting fine-grained classification, and labels the unmanned aerial vehicles for fine-grained classification. Compared with existing data set databases, this computer vision database offers dynamic visualization of the data set and labels detailed model information.
Coarse-grained detection is performed before fine-grained classification. The YOLO network structure is optimized: each grid cell predicts four bounding boxes and the output is a 7 × 23 vector, which solves the low accuracy of the prior-art YOLO network structure in detecting short-distance objects. Since each grid cell predicts four bounding boxes, the problem of NMS removing objects close to the ground truth is greatly reduced.
Through the optimized YOLO network structure, end-to-end real-time object recognition and detection are carried out on the video shot by the camera; whether an object in the lens is an unmanned aerial vehicle can be detected on a Titan X at 60-70 FPS, completing coarse-grained detection. The detection speed is high and fully supports object detection on streaming video, leaving sufficient time for fine-grained classification, so detection of unmanned aerial vehicles is realized and it can be accurately judged whether an object is an unmanned aerial vehicle. That is, the optimized network structure not only realizes object detection on streaming video but also solves the prior-art network structure's problem with closely clustered unmanned aerial vehicles.
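Decoding one cell of the coarse-detection output can be sketched as below. The split of the 23 values into 4 boxes of (x, y, w, h, confidence) plus 3 class scores is our reading of the 7 × 23 shape, not a decomposition the patent states:

```python
import numpy as np

B, BOX_DIMS = 4, 5          # assumed: 4 boxes x (x, y, w, h, confidence)
NUM_CLASSES = 3             # assumed: 23 = 4*5 + 3 class scores

def decode_cell(cell_vec, conf_thresh=0.5):
    """Decode one 23-dim grid-cell vector of the coarse YOLO output:
    keep the bounding boxes whose confidence clears the threshold and
    return the index of the highest-scoring class."""
    boxes = np.asarray(cell_vec)[:B * BOX_DIMS].reshape(B, BOX_DIMS)
    class_scores = np.asarray(cell_vec)[B * BOX_DIMS:]
    keep = boxes[boxes[:, 4] >= conf_thresh]   # drop low-confidence boxes
    return keep, int(np.argmax(class_scores))
```

Running this over all cells, followed by NMS, would yield the coarse-grained "is it a drone, and where" result the text describes.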
Fine-grained classification: deep learning is completed by adopting a strong supervision learning method on the basis of a data set in an unmanned aerial vehicle data set database established by the invention, fine-grained classification is further completed, and the detected type and model of the unmanned aerial vehicle are accurately obtained after the fine-grained classification is completed.
According to the invention, the prominent distinguishing features among unmanned aerial vehicles of different models are used as the local tiny features for fine-grained division; on the basis of detection, the number of rotors, for example, can be used as one tiny feature for fine-grained classification. Tiny-feature classifiers are obtained through training, frame pictures are detected region by region, and the specific model of the unmanned aerial vehicle is finally obtained from the detection results of the several tiny-feature classifiers.
For fine-grained classification, the identification and detection of local tiny features is a critical link. Unlike existing approaches that mark points on the locally detected parts, the unmanned aerial vehicle image is down-sampled with two-fold pixel amplification, as shown in formula (1):

X_i = YS(S_i) × [x_11 … x_1n; … ; x_n1 … x_nn]    (1)

where X_i represents the down-sampled image, S_i represents the original image, YS(S_i) represents a pixel position of the original image, and x_11, …, x_nn represent pixel values. The YS(S_i) function maps the position coordinates of a pixel of the original image to other positions, and the multiplication with the matrix strongly amplifies the feature pixels at the tiny positions and weakens the pixel intensity of other areas, increasing the resolution and amplifying the visual-perception information of the local tiny features, as shown in fig. 2: the tiny features in the rectangular frame area are obviously enlarged, which provides powerful visual-perception information for distinguishing models by tiny features. The amplified local tiny features in the rectangular frame are put into a deep network and used as the classifier for fine-grained classification. Compared with existing fine-grained classification detection methods, the method of the invention does not process edge marking points and eliminates the background-blurring step, so the speed of fine-grained classification detection can be greatly increased, fully meeting real-time fine-grained classification detection.
The process of the invention is shown in fig. 3. First, the original picture is resized to a suitable size and put into the coarse-grained detection YOLO network structure, which comprises 6 convolutional layers and a fully connected layer and outputs a 7 × 23 vector giving the predicted target position and category information. The output picture is sampled in the form of formula (1) and then sent to the fine-grained classification deep network, which comprises 4 convolutional layers and 1 fully connected layer; the fully connected layer outputs a 7 × 14 vector giving the fine-grained detected unmanned aerial vehicle model.
After the specific model of the unmanned aerial vehicle is obtained, the model is matched and retrieved: the specific information of that model is matched and retrieved in the unmanned aerial vehicle database, obtaining all information of the unmanned aerial vehicle, including material information such as the fuselage material and structural information such as the length, width and height of the fuselage, and thus the external visual characteristics of the unmanned aerial vehicle.
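The matching-retrieval step is essentially a keyed lookup; a minimal sketch follows. The entries, field names and numbers are hypothetical placeholders, not values from the patent's actual model library:

```python
# Hypothetical model library: models, field names and dimensions are
# illustrative assumptions, not data from the patent.
DRONE_DB = {
    "Phantom 4": {"material": "plastic/magnesium", "length_m": 0.35,
                  "width_m": 0.35, "height_m": 0.19},
    "Mavic Pro": {"material": "plastic", "length_m": 0.32,
                  "width_m": 0.24, "height_m": 0.08},
}

def retrieve_model_info(model_name):
    """Match the fine-grained classification result against the database
    and return the drone's material and structural (external visual)
    information."""
    info = DRONE_DB.get(model_name)
    if info is None:
        raise KeyError(f"model {model_name!r} not in database")
    return info
```

The returned structural dimensions are what the spatial-positioning step combines with the camera's internal parameters.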
Before spatial positioning of the unmanned aerial vehicle, the internal parameters of the camera, including lens parameters, sensor parameters and pixel size, are obtained through camera calibration. Finally, the external visual characteristics of the unmanned aerial vehicle, such as the length and height of the fuselage, are combined with the internal parameters of the camera to obtain the position of the unmanned aerial vehicle in three-dimensional space. Specifically, with the calibrated internal parameters of the camera and the translation and rotation matrices, two-dimensional coordinates in the image are converted into three-dimensional coordinates, establishing a mapping from the planar picture to three-dimensional space; the two-dimensional coordinate position of the unmanned aerial vehicle is obtained through the YOLO network structure, and its three-dimensional coordinates in three-dimensional space are then obtained from the mapping relation.
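One standard way to realize this 2-D to 3-D mapping is the pinhole-camera model: the known body length from the model library fixes the depth, and the intrinsics back-project the box centre. This simplifies to the camera frame and omits the patent's rotation/translation to a world frame; the function name and parameters are our assumptions:

```python
def locate_drone(bbox_px, real_length_m, fx, fy, cx, cy):
    """Estimate the drone's 3-D position in the camera frame from its 2-D
    bounding box and its known body length. bbox_px is
    (u_min, v_min, u_max, v_max) in pixels; fx, fy, cx, cy are the
    calibrated intrinsics (focal lengths and principal point)."""
    u_min, v_min, u_max, v_max = bbox_px
    pixel_length = u_max - u_min
    z = fx * real_length_m / pixel_length      # similar triangles: depth
    u_c = (u_min + u_max) / 2                  # box centre in the image
    v_c = (v_min + v_max) / 2
    x = (u_c - cx) * z / fx                    # back-project the centre
    y = (v_c - cy) * z / fy
    return (x, y, z)
```

Applying the calibrated rotation and translation matrices to this camera-frame point would then give world coordinates, as the text describes.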
The invention can also obtain the track information of the unmanned aerial vehicle in the three-dimensional space through the continuous three-dimensional coordinate information of the frame pictures.
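Accumulating the per-frame 3-D coordinates into a trajectory is straightforward; a sketch follows, with an assumed frame rate and an illustrative per-step speed estimate that the patent does not specify:

```python
def build_track(frame_coords, fps=30.0):
    """Collect consecutive per-frame 3-D coordinates into a trajectory
    and estimate the speed (m/s) between successive frames. fps is an
    assumed camera frame rate."""
    track = list(frame_coords)
    speeds = []
    for (x0, y0, z0), (x1, y1, z1) in zip(track, track[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        speeds.append(dist * fps)              # metres per frame -> m/s
    return track, speeds
```

Feeding each frame's output of the positioning step into `build_track` yields the three-dimensional track information described above.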
In conclusion, the invention realizes positioning of the unmanned aerial vehicle based on fine-grained classification while ensuring its identification and detection, obtains the position coordinates and trajectory information of the unmanned aerial vehicle in three-dimensional space, and provides powerful help for other applications.
It is understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art should understand that they can make various changes, modifications, additions and substitutions within the spirit and scope of the present invention.

Claims (6)

1. An unmanned aerial vehicle identification and positioning method based on fine-grained classification is characterized by comprising the following steps:
(a) establishing an unmanned aerial vehicle database: establishing a computer vision database meeting fine-grained classification by acquiring the model and parameter information of the unmanned aerial vehicle;
(b) coarse-grained detection: end-to-end real-time object recognition and detection are carried out on the video shot by the camera through a YOLO network structure, accurately judging whether an object is an unmanned aerial vehicle;
(c) fine-grained classification: adopting a strong supervision learning method to complete deep learning on the basis of the data set in the unmanned aerial vehicle data set database in the step (a) so as to complete fine-grained classification, and accurately obtaining the detected type and model of the unmanned aerial vehicle after the fine-grained classification is completed;
(d) matching retrieval model number: after the specific model of the unmanned aerial vehicle is obtained, matching and retrieving specific information of the unmanned aerial vehicle of the model in an unmanned aerial vehicle database to obtain all information of the unmanned aerial vehicle, including material information and structure information, and obtaining the external visual characteristics of the unmanned aerial vehicle;
(e) calibrating a camera: obtaining internal parameters of a camera;
(f) space positioning: combining the external visual features of the unmanned aerial vehicle with the internal parameters of the camera to obtain the position of the unmanned aerial vehicle in the three-dimensional space;
in step (b), the YOLO network structure is optimized: each grid cell predicts four bounding boxes and the network outputs a 7 × 23 vector, realizing object detection on streaming video and detecting whether an object in the lens is an unmanned aerial vehicle at 60-70 FPS, thereby completing coarse-grained detection;
in the step (c), the prominent obvious features among the unmanned aerial vehicles of different models are used as local tiny features divided in a fine granularity mode, tiny feature classifiers are obtained through training on the basis of detection, and the specific models of the unmanned aerial vehicles are finally obtained through regional detection frame pictures and detection results of a plurality of tiny feature classifiers;
in step (c), the local tiny features are identified and detected by down-sampling the unmanned aerial vehicle image with two-fold pixel amplification, as shown in formula (1):

X_i = YS(S_i) × [x_11 … x_1n; … ; x_n1 … x_nn]    (1)

where X_i represents the down-sampled image, S_i represents the original image, YS(S_i) represents a pixel position of the original image, and x_11, …, x_nn represent pixel values; the YS(S_i) function maps the position coordinates of a pixel of the original image to other positions, and the multiplication with the matrix amplifies the feature-pixel intensity at the tiny positions, weakens the pixel intensity of other areas, increases the resolution and amplifies the visual-perception information of the local tiny features; and the amplified local tiny features are put into a deep network to serve as the classifier for fine-grained classification.
2. The fine-grained classification-based unmanned aerial vehicle identification and positioning method according to claim 1, characterized in that: the YOLO network structure comprises 6 convolutional layers and a fully connected layer and outputs a 7 × 23 vector giving the predicted target position and category information; the output picture is sampled in the form of formula (1) and then sent to a fine-grained classification deep network comprising 4 convolutional layers and 1 fully connected layer, and the fully connected layer outputs a 7 × 14 vector for outputting the unmanned aerial vehicle model after fine-grained detection.
3. The fine-grained classification-based unmanned aerial vehicle identification and positioning method according to any one of claims 1-2, characterized in that: through camera calibration and translation and rotation matrixes, converting two-dimensional coordinates in an image into three-dimensional coordinates, and establishing mapping from a plane picture to a three-dimensional space; and obtaining the two-dimensional coordinate position of the unmanned aerial vehicle through the YOLO network structure, and further obtaining the three-dimensional coordinate of the unmanned aerial vehicle in the three-dimensional space according to the mapping relation.
4. The fine-grained classification-based unmanned aerial vehicle identification and positioning method according to claim 3, characterized in that: and obtaining the track information of the unmanned aerial vehicle in the three-dimensional space through the continuous three-dimensional coordinate information of the frame pictures.
5. The fine-grained classification-based unmanned aerial vehicle identification and positioning method according to claim 1, characterized in that: the external visual characteristics comprise structural information of the unmanned aerial vehicle, such as the length of the unmanned aerial vehicle body, the width of the unmanned aerial vehicle body and the height of the unmanned aerial vehicle body.
6. The fine-grained classification-based unmanned aerial vehicle identification and positioning method according to claim 1, characterized in that: the intrinsic parameters of the camera include lens parameters, sensor parameters, and pixel size.
CN201810371993.1A 2018-04-24 2018-04-24 Unmanned aerial vehicle identification and positioning method based on fine-grained classification Active CN108711172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810371993.1A CN108711172B (en) 2018-04-24 2018-04-24 Unmanned aerial vehicle identification and positioning method based on fine-grained classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810371993.1A CN108711172B (en) 2018-04-24 2018-04-24 Unmanned aerial vehicle identification and positioning method based on fine-grained classification

Publications (2)

Publication Number Publication Date
CN108711172A CN108711172A (en) 2018-10-26
CN108711172B 2020-07-03

Family

ID=63866935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810371993.1A Active CN108711172B (en) 2018-04-24 2018-04-24 Unmanned aerial vehicle identification and positioning method based on fine-grained classification

Country Status (1)

Country Link
CN (1) CN108711172B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109270954A (en) * 2018-10-30 2019-01-25 西南科技大学 A kind of unmanned plane interactive system and its control method based on gesture recognition
CN109918988A (en) * 2018-12-30 2019-06-21 中国科学院软件研究所 A kind of transplantable unmanned plane detection system of combination imaging emulation technology
CN111695397A (en) * 2019-12-20 2020-09-22 珠海大横琴科技发展有限公司 Ship identification method based on YOLO and electronic equipment
CN112101222A (en) * 2020-09-16 2020-12-18 中国海洋大学 Sea surface three-dimensional target detection method based on unmanned ship multi-mode sensor
CN112509384B (en) * 2021-02-03 2021-07-30 深圳协鑫智慧能源有限公司 Intelligent street lamp-based aircraft control method and intelligent street lamp
CN113901944B (en) * 2021-10-25 2024-04-09 大连理工大学 Marine organism target detection method based on improved YOLO algorithm

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827607A (en) * 2016-03-31 2016-08-03 赵文洁 Unmanned aerial vehicle identification system
CN107134144A (en) * 2017-04-27 2017-09-05 武汉理工大学 A vehicle detection method for traffic monitoring
CN107256262A (en) * 2017-06-13 2017-10-17 西安电子科技大学 An image retrieval method based on object detection
CN107291811A (en) * 2017-05-18 2017-10-24 浙江大学 A perception-cognition enhanced robot system based on cloud knowledge fusion
CN107678023A (en) * 2017-10-10 2018-02-09 芜湖华创光电科技有限公司 A passive localization and identification system for civilian unmanned aerial vehicles
CN107862694A (en) * 2017-12-19 2018-03-30 济南大象信息技术有限公司 A hand-foot-and-mouth disease detection system based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10915673B2 (en) * 2014-07-01 2021-02-09 Scanifly, LLC Device, method, apparatus, and computer-readable medium for solar site assessment

Also Published As

Publication number Publication date
CN108711172A (en) 2018-10-26

Similar Documents

Publication Publication Date Title
CN108711172B (en) Unmanned aerial vehicle identification and positioning method based on fine-grained classification
CN109685066B (en) Mine target detection and identification method based on deep convolutional neural network
CN105930819B (en) Real-time city traffic lamp identifying system based on monocular vision and GPS integrated navigation system
Zhao et al. Detection, tracking, and geolocation of moving vehicle from UAV using monocular camera
CN110222626B (en) Unmanned scene point cloud target labeling method based on deep learning algorithm
CN111080693A (en) Robot autonomous classification grabbing method based on YOLOv3
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN115943439A (en) Multi-target vehicle detection and re-identification method based on radar vision fusion
CN111241988B (en) Method for detecting and identifying moving target in large scene by combining positioning information
CN103413444A (en) Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video
CN115049700A (en) Target detection method and device
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN110110618B (en) SAR target detection method based on PCA and global contrast
Ma et al. An intelligent object detection and measurement system based on trinocular vision
CN115331100A (en) Spatial distribution monitoring method and system for cultivated land planting attributes
CN113298781B (en) Mars surface three-dimensional terrain detection method based on image and point cloud fusion
Kukolj et al. Road edge detection based on combined deep learning and spatial statistics of LiDAR data
CN113850195A (en) AI intelligent object identification method based on 3D vision
CN113985435A (en) Mapping method and system fusing multiple laser radars
CN108765444A (en) Ground T-shaped moving object detection and localization method based on monocular vision
Liu et al. A lightweight lidar-camera sensing method of obstacles detection and classification for autonomous rail rapid transit
CN114708538B (en) Unmanned aerial vehicle-based detection and positioning method for red imported fire ant (Solenopsis invicta) nests
CN107193965B (en) BoVW algorithm-based rapid indoor positioning method
CN115761265A (en) Method and device for extracting substation equipment in laser radar point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant