CN109255286A - Unmanned aerial vehicle (UAV) optical rapid detection and recognition method based on the YOLO deep learning network framework - Google Patents

Unmanned aerial vehicle (UAV) optical rapid detection and recognition method based on the YOLO deep learning network framework

Info

Publication number
CN109255286A
CN109255286A
Authority
CN
China
Prior art keywords
yolo
layer
unmanned aerial vehicle (UAV)
network framework
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810807503.8A
Other languages
Chinese (zh)
Other versions
CN109255286B (en)
Inventor
智喜洋 (Zhi Xiyang)
俞利健 (Yu Lijian)
巩晋南 (Gong Jinnan)
江世凯 (Jiang Shikai)
陈文彬 (Chen Wenbin)
胡建明 (Hu Jianming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201810807503.8A
Publication of CN109255286A
Application granted
Publication of CN109255286B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses an unmanned aerial vehicle (UAV) optical rapid detection and recognition method based on the YOLO deep learning network framework. The method comprises the following steps. Step 1: conduct flight tests on five mainstream commercial UAV types to acquire optical imaging experiment data, and process the acquired data into the standard VOC data format. Step 2: build the YOLO network framework, improve it with residual network modules, and train the improved network framework to obtain a detection and recognition model. Step 3: select real-shot flight optical imaging data containing the five UAV types and perform detection and recognition with the model obtained in step 2. The invention avoids the complexity and poor applicability of manually modelling UAV and complex-background features, and can greatly improve the speed and accuracy of moving-target detection and recognition against complex backgrounds.

Description

Unmanned aerial vehicle optical rapid detection and recognition method based on the YOLO deep learning network framework
Technical field
The invention belongs to the technical field of image processing and relates to a UAV detection and recognition method based on optical imaging, in particular to a UAV optical rapid detection and recognition method based on the YOLO (You Only Look Once) deep learning network framework.
Background art
In recent years, UAV technology has developed rapidly and is widely used in aerial photography, agricultural plant protection, traffic monitoring, disaster-area surveying, power-line inspection, rapid reconnaissance, surprise attack and many other military and civil fields. However, the unchecked growth of the UAV market also brings various safety hazards; in particular, UAVs that ignore regulatory requirements and fly beyond the permitted airspace pose a serious threat to the safety of airports and airliners and to the security of the surroundings of key areas. Therefore, aerial UAV monitoring and security-protection measures need to be developed for key areas such as the surroundings of critical national facilities, crowded public places, and the surroundings of important military zones.
At present, aerial surveillance and defence against UAVs are mainly based on radar or GPS signals. For example, companies such as Thales SA (France), DroneShield (USA) and JAMMER (Mexico) monitor UAVs with radar, and the portable UAV jamming device invented by a Shanghai electronics technology company forces a UAV to land or return home by suppressing its remote-control and GPS positioning signals. However, radar- and GPS-based methods are mainly used for long-range rapid discovery, capture and tracking of UAVs, and efficient, high-accuracy recognition of UAV type is still difficult to achieve with them. Detecting and recognising UAVs with optical instruments has the following advantages: 1) compared with radar, optical imaging provides richer target details such as grey level, texture and structure, and is therefore better suited to high-accuracy recognition of UAV type; 2) compared with GPS, optical instruments operate passively and do not require the target to actively receive signals. UAV detection and recognition based on optical imaging has therefore become an important trend. Traditional optical detection and recognition methods first construct features manually and then select a suitable classifier; they perform well when the background is simple and the target's invariant features are relatively stable. But a UAV in flight constantly changes its relative position and attitude, so image features satisfying scale invariance and rotation invariance are hard to find, and the background during flight is also moving and likely to be complex and changeable.
Summary of the invention
To address the difficulty of achieving high-accuracy recognition of UAV class with means such as radar and GPS, and the cumbersome manual construction and weak generalisation ability of traditional optical features, the present invention proposes a UAV optical rapid detection and recognition method based on the YOLO deep learning network framework. The method trains the network on actual flight-test data and outputs recognition results directly. Compared with traditional detection and recognition pipelines, it avoids the complexity and poor applicability of manually modelling UAV and complex-background features, and can greatly improve the speed and accuracy of moving-target detection and recognition against complex backgrounds.
The purpose of the present invention is achieved through the following technical solutions:
A UAV optical rapid detection and recognition method based on the YOLO deep learning network framework comprises the following steps:
Step 1: conduct flight tests on five mainstream commercial UAV types to acquire optical imaging experiment data, and process the acquired optical imaging experiment data into the standard VOC data format;
Step 2: build the YOLO network framework, improve it with residual network modules, and train the improved YOLO network framework to obtain a detection and recognition model;
Step 3: select real-shot flight optical imaging experiment data containing the five UAV types and perform detection and recognition with the detection and recognition model obtained in step 2.
Compared with the prior art, the present invention has the following advantages:
(1) Target detection and recognition methods based on deep learning integrate feature construction, fusion and classification into one whole: the input is raw data and the classification result is output directly, without manually constructed features, which makes them better suited to the automatic detection and recognition of moving targets against complex dynamic backgrounds. On this basis, the invention proposes a rapid, autonomous UAV optical detection and recognition method based on the YOLO deep learning network framework.
(2) The method is suitable for rapid discovery, high-accuracy detection and recognition of intruding UAVs in various complex application scenes such as airports, stadiums, concerts and the surroundings of important military zones, and provides support for effective aerial supervision and defence against UAVs.
(3) A UAV in flight constantly changes its relative position and attitude, image features satisfying scale invariance and rotation invariance are hard to find, and the background during flight is usually complex and changeable. To address this, the invention builds on deep learning theory and uses the functions of different network layers (convolutional layers, pooling layers, regression layers, etc.) together with a specially designed network structure to achieve automatic, abstract representation of target features in complex situations and to complete target classification automatically. Compared with traditional target detection and recognition pipelines, the method can greatly improve the speed and accuracy of moving-target detection and recognition against complex backgrounds.
(4) To achieve rapid UAV detection and recognition, the invention selects the YOLO network framework, which treats the detection task as a regression problem and obtains the bounding-box coordinates, confidence and class probabilities of objects directly from all pixels of the whole image; its detection speed is significantly better than that of deep learning frameworks such as R-CNN and Fast R-CNN. The YOLO network is further improved with residual network modules, which reduces the possibility of gradient explosion or vanishing gradients during training and thus effectively raises the probability of obtaining a usable trained model. This is of great significance for the practical application of the UAV rapid detection and recognition method based on the YOLO deep learning network framework.
Brief description of the drawings
Fig. 1 is a flow chart of the UAV optical rapid detection and recognition method based on the YOLO deep learning network framework according to the present invention;
Fig. 2 is an example of the standard VOC data format;
Fig. 3 shows the network structure;
Fig. 4 shows sample images in the database;
Fig. 5 shows real-image test results.
Specific embodiment
The technical solution of the present invention is further described below with reference to the accompanying drawings, but the invention is not limited thereto: any modification or equivalent replacement of the technical solution of the present invention that does not depart from its spirit and scope shall fall within the protection scope of the present invention.
The present invention provides a UAV optical rapid detection and recognition method based on the YOLO deep learning network framework. As shown in Fig. 1, the specific steps are as follows:
Step 1: conduct flight tests on five mainstream commercial UAV types to acquire optical imaging experiment data, and process the acquired optical imaging experiment data into the standard VOC data format. The specific steps are as follows:
The DJI M100, Phantom-3, Inspire-1, an agricultural plant-protection UAV and a police UAV are selected for flight tests to acquire experimental data; the experimental data are labelled and otherwise processed so that they conform to the VOC data format, the specific format of which is shown in Fig. 2.
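As an illustration of the data preparation in step 1, the following Python sketch writes one labelled detection into a Pascal VOC-style XML annotation of the kind shown in Fig. 2. It is only an assumed example: the file names, image size, box coordinates and the use of Python's xml.etree.ElementTree are not specified in the patent.

# Hypothetical sketch: converting one labelled detection into a Pascal VOC-style
# XML annotation, as used in step 1. File names, sizes and the class label are
# illustrative only.
import xml.etree.ElementTree as ET

def write_voc_annotation(xml_path, image_name, width, height, objects):
    """objects: list of (class_name, xmin, ymin, xmax, ymax) in pixel coordinates."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    for name, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bbox = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), (xmin, ymin, xmax, ymax)):
            ET.SubElement(bbox, tag).text = str(val)
    ET.ElementTree(root).write(xml_path)

# Example: one DJI M100 bounded at (120, 80)-(360, 240) in a 1280x720 frame.
write_voc_annotation("000001.xml", "000001.jpg", 1280, 720, [("M100", 120, 80, 360, 240)])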
Step 2: build the YOLO network framework, improve it with residual network modules, and train the improved YOLO network framework to obtain a detection and recognition model. The specific steps are as follows:
YOLO is a single-stage detection method: the input image is divided into an S × S grid, and each grid cell is responsible for detecting the objects that "fall into" it. If the centre coordinates of an object fall into a grid cell, that cell is responsible for detecting the object. The output of each grid cell consists of two parts: B bounding boxes describing the rectangular regions that contain objects, and C probabilities that the object belongs to each class.
Each bounding box contains five values: x, y, w, h and confidence. Here x and y are the offsets of the centre of the bounding box predicted by the current grid cell relative to the position of that cell, normalised to [0, 1]; w and h are the width and height of the bounding box, normalised to [0, 1] by the width and height of the image.
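To make the normalisation above concrete, the sketch below encodes a pixel-space box into grid-relative (x, y, w, h) values following the usual YOLO convention; the function name and the exact arithmetic are assumptions, not taken from the patent.

# Hypothetical sketch of the YOLO box encoding described above: (x, y) are the box
# centre offsets within the responsible grid cell, (w, h) are normalised by the
# image size, so all four values lie in [0, 1].
def encode_box(xmin, ymin, xmax, ymax, img_w, img_h, S=13):
    cx = (xmin + xmax) / 2.0 / img_w          # centre in [0, 1] image coordinates
    cy = (ymin + ymax) / 2.0 / img_h
    col = int(cx * S)                         # grid cell responsible for the object
    row = int(cy * S)
    x = cx * S - col                          # centre offset inside that cell
    y = cy * S - row
    w = (xmax - xmin) / img_w                 # width/height normalised by image size
    h = (ymax - ymin) / img_h
    return row, col, (x, y, w, h)

print(encode_box(120, 80, 360, 240, 1280, 720))   # responsible cell and encoded box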
The confidence reflects whether the current bounding box contains an object and how accurate the object's position is; it is computed as follows:
Confidence = P(object) × IOU.
Here P(object) = 1 if the bounding box contains an object (the target) and P(object) = 0 otherwise; IOU (intersection over union) is the ratio of the overlap area between the predicted bounding box and the object's ground-truth region to the area of their union, computed in pixels, so the result is normalised to the interval [0, 1].
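A minimal sketch of the IOU term in the confidence definition, computed as the usual intersection-over-union of two axis-aligned boxes; the patent does not spell out the computation, so this is an assumed illustration.

# Hypothetical IOU sketch for the confidence term: overlap area of two axis-aligned
# boxes divided by the area of their union, giving a value in [0, 1].
def iou(box_a, box_b):
    """Boxes are (xmin, ymin, xmax, ymax) in pixels."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0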
The YOLO network is built (the network structure is shown in Fig. 3(a)). The network contains 24 convolutional layers and 2 fully connected layers; the convolutional layers extract image features, and the fully connected layers predict positions and class probabilities. The loss function of the network is defined as follows:
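A standard YOLO-style sum-of-squared-errors loss of the following form is assumed here (written in LaTeX and reconstructed so that it matches the five terms and the parameter definitions described below; the original formula image is not reproduced in this text):

loss = \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} I_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]
     + \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} I_{ij}^{obj} \left[ (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right]
     + \sum_{i=0}^{S^2} \sum_{j=0}^{B} I_{ij}^{obj} (C_i - \hat{C}_i)^2
     + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} I_{ij}^{noobj} (C_i - \hat{C}_i)^2
     + \sum_{i=0}^{S^2} I_i^{obj} \sum_{c \in classes} \left( p_i(c) - \hat{p}_i(c) \right)^2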
The parameters in the above formula have the following meanings: S² is the number of grid cells into which the image is divided, set to 13 × 13; B is the number of anchor boxes in each grid cell, set to 5; x, y, w, h are the centre coordinates and width/height of the predicted bounding box; x̂, ŷ, ŵ, ĥ are the centre coordinates and width/height of the ground-truth bounding box; C is the predicted confidence that the box contains a target; Ĉ is the ground-truth confidence, given by the IOU with the ground-truth target bounding box; P(c) is the predicted probability of belonging to a class, and P̂(c) is the corresponding ground-truth probability (1 if the object belongs to the class, 0 otherwise); I_obj indicates whether the anchor box contains an object (1 if it does, 0 otherwise); λ_coord is the weight of the position-prediction loss, set to 3; λ_noobj is the confidence weight for boxes that contain no target, set to 0.7.
The first two terms of the loss function are the coordinate-prediction loss, the third term is the confidence-prediction loss for boxes containing an object, the fourth term is the confidence-prediction loss for boxes containing no object, and the fifth term is the class-prediction loss.
The activation function used in the network is defined as follows:
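The activation formula does not survive in this text; a leaky rectified linear unit with a negative-side slope of 0.1, as conventionally used in YOLO, is assumed here:

\phi(x) = \begin{cases} x, & x > 0 \\ 0.1\,x, & \text{otherwise} \end{cases}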
To avoid gradient explosion or vanishing gradients during training, the present invention improves this network with residual modules; the improved network structure is shown in Fig. 3(b). The specific locations are as follows: the output of the 3rd pooling layer is routed and merged with the output of the 9th convolutional layer; the output of the 12th convolutional layer is routed and merged with the output of the 15th convolutional layer; the output of the 4th pooling layer is routed and merged with the output of the 18th convolutional layer; and the output of the 19th convolutional layer is routed and merged with the output of the 22nd convolutional layer. Adding shortcut connections at these 4 locations forms residual units, which effectively alleviate gradient explosion or vanishing gradients during training. The gradient propagation through the 4 residual units is as follows:
Loss=F (xi,Wi)
In formula, loss indicates loss function, xi、WiFor i-th layer of network input and i-th layer of weight, loss function is expressed as defeated Enter the function F (x with weighti,Wi), xLIndicate the output of residual error module shunting layer, xlIt indicates to export at the merging of residual error module, first A factorIndicate that loss function reaches L layers of gradient, 1 in round bracket shows that short-circuit mechanism can be propagated nondestructively Gradient, and an other residual error gradient then needs the layer by having weight.
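As an architectural illustration of one of the four shortcut connections described above, the following Python (PyTorch) sketch routes an early feature map around a small stack of convolutions and merges it back by element-wise addition. The element-wise merge, the 1×1 projection used to match channel counts, and all layer sizes are assumptions; the patent only states where the branches split and merge.

# Hypothetical PyTorch sketch of one shortcut connection forming a residual unit.
# The addition-based merge and the 1x1 projection are assumptions, not the patent's
# exact construction.
import torch
import torch.nn as nn

class ResidualShortcut(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(                      # stand-in for the convolutional
            nn.Conv2d(in_ch, out_ch, 3, padding=1),     # layers between split and merge
            nn.LeakyReLU(0.1),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.LeakyReLU(0.1),
        )
        self.project = nn.Conv2d(in_ch, out_ch, 1)      # matches channels for the merge

    def forward(self, x):
        return self.body(x) + self.project(x)           # shortcut merge (residual add)

y = ResidualShortcut(64, 128)(torch.randn(1, 64, 52, 52))
print(y.shape)  # torch.Size([1, 128, 52, 52])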
The network hyper-parameters are then set: the initial learning rate is set to 0.0003, and the network is trained with stochastic gradient descent (each update uses a sample size of 1, with 50,000 updates in total) to obtain the detection and recognition model.
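A minimal sketch of the training configuration described above (stochastic gradient descent, initial learning rate 0.0003, one sample per update, 50,000 updates). The model, loss and dataset below are throw-away placeholders so the loop runs; they do not reproduce the patent's network.

# Hypothetical training-loop sketch matching the stated hyper-parameters.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 50, 1)                        # dummy stand-in for the YOLO network
def yolo_loss(pred, target):                       # dummy stand-in for the YOLO loss
    return ((pred - target) ** 2).mean()
dataset = [(torch.randn(3, 416, 416), torch.randn(50, 416, 416)) for _ in range(8)]

optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)   # initial learning rate 0.0003
for step in range(50_000):                                  # 50,000 updates in total
    image, target = dataset[step % len(dataset)]
    optimizer.zero_grad()
    loss = yolo_loss(model(image.unsqueeze(0)), target.unsqueeze(0))   # sample size 1
    loss.backward()
    optimizer.step()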
Step 3: select real-shot flight optical imaging experiment data containing the five UAV classes and apply the detection and recognition model obtained in step 2. The test image is input, features are extracted by the convolutional layers, the pooling layers reduce the image size, and finally the fully connected layers output the target position predictions and the target class probability predictions; the class with the highest probability is the recognition result. The detection and recognition accuracy statistics for each class, shown in Fig. 5, are: M100 91.96%, Inspire-1 91.74%, Phantom-3 89.78%, agricultural UAV 94.13%, and police UAV 89.84%. A high detection and recognition accuracy is thus achieved, and with GPU acceleration the per-frame processing time reaches the millisecond level, realising rapid detection and recognition.
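Finally, a sketch of how the network output described in step 3 could be decoded into detections: for each of the 13 × 13 grid cells the most confident of the 5 boxes is taken, and the class with the highest probability is the recognition result. The tensor layout, the 0.5 confidence threshold, and the use of NumPy are assumptions.

# Hypothetical decoding sketch: per grid cell, keep the most confident box above a
# threshold and the most probable class. S=13, B=5, C=5 follow the text.
import numpy as np

def decode(output, S=13, B=5, C=5, conf_thresh=0.5):
    """output: array of shape (S, S, B*5 + C) with boxes first, class scores last."""
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            boxes = cell[:B * 5].reshape(B, 5)          # (x, y, w, h, confidence) per box
            classes = cell[B * 5:]
            j = int(np.argmax(boxes[:, 4]))             # most confident box in the cell
            x, y, w, h, conf = boxes[j]
            if conf < conf_thresh:
                continue
            cls = int(np.argmax(classes))               # predicted UAV type
            cx, cy = (col + x) / S, (row + y) / S       # back to image-relative coords
            detections.append((cls, conf * classes[cls], cx, cy, w, h))
    return detections

dets = decode(np.random.rand(13, 13, 30))               # toy example on random output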

Claims (6)

1. A UAV optical rapid detection and recognition method based on the YOLO deep learning network framework, characterised in that the method comprises the following steps:
Step 1: conduct flight tests on five mainstream commercial UAV types to acquire optical imaging experiment data, and process the acquired optical imaging experiment data into the standard VOC data format;
Step 2: build the YOLO network framework, improve it with residual network modules, and train the improved YOLO network framework to obtain a detection and recognition model;
Step 3: select real-shot flight optical imaging experiment data containing the five UAV types and perform detection and recognition with the detection and recognition model obtained in step 2.
2. The UAV optical rapid detection and recognition method based on the YOLO deep learning network framework according to claim 1, characterised in that the five UAV types are the DJI M100, Phantom-3, Inspire-1, an agricultural plant-protection UAV, and a police UAV.
3. The UAV optical rapid detection and recognition method based on the YOLO deep learning network framework according to claim 1, characterised in that the specific steps of step 2 are as follows:
(1) YOLO divides the input image into a 13 × 13 grid, and each grid cell is responsible for detecting the objects that "fall into" it; the output of each grid cell consists of two parts: 5 bounding boxes describing the rectangular regions containing objects, and C probabilities that the object belongs to each class;
(2) the YOLO network is built; the network contains 24 convolutional layers and 2 fully connected layers, the convolutional layers extract image features, and the fully connected layers predict positions and class probabilities;
(3) the YOLO network is improved with residual modules: shortcut connections are added at 4 locations to form residual units, and the gradient propagation through the 4 residual units is as follows:
x_L = x_l + Σ_{i=l}^{L−1} F(x_i, W_i)
∂loss/∂x_l = (∂loss/∂x_L) · (1 + ∂/∂x_l Σ_{i=l}^{L−1} F(x_i, W_i))
where loss denotes the loss function, x_i and W_i are the input and the weights of the i-th layer, F(x_i, W_i) is the residual function of input and weights, x_l is the feature map at the point where the shortcut branches off, x_L is the feature map at the merge point, and the first factor ∂loss/∂x_L is the gradient of the loss propagated back to layer L;
(4) hyper-parameters are set for the YOLO network improved in step (3): the initial learning rate is set to 0.0003, and the network is trained with stochastic gradient descent to obtain the detection and recognition model.
4. The UAV optical rapid detection and recognition method based on the YOLO deep learning network framework according to claim 3, characterised in that in step (3) the specific locations of the 4 shortcut connections are as follows: the output of the 3rd pooling layer is routed and merged with the output of the 9th convolutional layer, the output of the 12th convolutional layer is routed and merged with the output of the 15th convolutional layer, the output of the 4th pooling layer is routed and merged with the output of the 18th convolutional layer, and the output of the 19th convolutional layer is routed and merged with the output of the 22nd convolutional layer.
5. The UAV optical rapid detection and recognition method based on the YOLO deep learning network framework according to claim 3, characterised in that in step (4) the total number of updates in training the network with stochastic gradient descent is 50,000.
6. The UAV optical rapid detection and recognition method based on the YOLO deep learning network framework according to claim 1, characterised in that in step 3 detection and recognition with the detection and recognition model obtained in step 2 is performed as follows: the test image is input, features are extracted by the convolutional layers, the pooling layers reduce the image size, and finally the fully connected layers output the target position predictions and the target class probability predictions; the class with the highest probability is the recognition result.
CN201810807503.8A 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework Active CN109255286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810807503.8A CN109255286B (en) 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810807503.8A CN109255286B (en) 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework

Publications (2)

Publication Number Publication Date
CN109255286A true CN109255286A (en) 2019-01-22
CN109255286B CN109255286B (en) 2021-08-24

Family

ID=65049063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810807503.8A Active CN109255286B (en) 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework

Country Status (1)

Country Link
CN (1) CN109255286B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017077348A1 (en) * 2015-11-06 2017-05-11 Squarehead Technology As Uav detection
CN107862705A (en) * 2017-11-21 2018-03-30 重庆邮电大学 A kind of unmanned plane small target detecting method based on motion feature and deep learning feature

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIE, B. et al.: "Research on UAV target recognition algorithm based on transfer learning SAE", Infrared and Laser Engineering *
WANG Jingyu et al.: "Research on low-altitude weak and small UAV target detection based on deep neural networks", Journal of Northwestern Polytechnical University *
JIANG Zhaojun et al.: "Research on UAV recognition algorithm based on deep learning", Application of Electronic Technique *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977840A (en) * 2019-03-20 2019-07-05 四川川大智胜软件股份有限公司 A kind of airport scene monitoring method based on deep learning
CN110490155B (en) * 2019-08-23 2022-05-17 电子科技大学 Method for detecting unmanned aerial vehicle in no-fly airspace
CN110490155A (en) * 2019-08-23 2019-11-22 电子科技大学 A kind of no-fly airspace unmanned plane detection method
CN110850897A (en) * 2019-11-13 2020-02-28 中国人民解放军空军工程大学 Small unmanned aerial vehicle pose data acquisition method facing deep neural network
CN110850897B (en) * 2019-11-13 2023-06-13 中国人民解放军空军工程大学 Deep neural network-oriented small unmanned aerial vehicle pose data acquisition method
CN111611918A (en) * 2020-05-20 2020-09-01 重庆大学 Traffic flow data set acquisition and construction method based on aerial photography data and deep learning
CN111611918B (en) * 2020-05-20 2023-07-21 重庆大学 Traffic flow data set acquisition and construction method based on aerial data and deep learning
CN111723690A (en) * 2020-06-03 2020-09-29 北京全路通信信号研究设计院集团有限公司 Circuit equipment state monitoring method and system
CN111723690B (en) * 2020-06-03 2023-10-20 北京全路通信信号研究设计院集团有限公司 Method and system for monitoring state of circuit equipment
CN111797940A (en) * 2020-07-20 2020-10-20 中国科学院长春光学精密机械与物理研究所 Image identification method based on ocean search and rescue and related device
CN112329768A (en) * 2020-10-23 2021-02-05 上善智城(苏州)信息科技有限公司 Improved YOLO-based method for identifying fuel-discharging stop sign of gas station
CN112668445A (en) * 2020-12-24 2021-04-16 南京泓图人工智能技术研究院有限公司 Vegetable type detection and identification method based on yolov5
CN112699810A (en) * 2020-12-31 2021-04-23 中国电子科技集团公司信息科学研究院 Method and device for improving figure identification precision of indoor monitoring system
CN112699810B (en) * 2020-12-31 2024-04-09 中国电子科技集团公司信息科学研究院 Method and device for improving character recognition precision of indoor monitoring system
CN113822372A (en) * 2021-10-20 2021-12-21 中国民航大学 Unmanned aerial vehicle detection method based on YOLOv5 neural network
CN113822375A (en) * 2021-11-08 2021-12-21 北京工业大学 Improved traffic image target detection method
CN113822375B (en) * 2021-11-08 2024-04-26 北京工业大学 Improved traffic image target detection method

Also Published As

Publication number Publication date
CN109255286B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN109255286A (en) A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN113359810B (en) Unmanned aerial vehicle landing area identification method based on multiple sensors
CN106356757B (en) A kind of power circuit unmanned plane method for inspecting based on human-eye visual characteristic
CN103679674B (en) Method and system for splicing images of unmanned aircrafts in real time
CN110866887A (en) Target situation fusion sensing method and system based on multiple sensors
CN112068111A (en) Unmanned aerial vehicle target detection method based on multi-sensor information fusion
CN106529538A (en) Method and device for positioning aircraft
CN108710126A (en) Automation detection expulsion goal approach and its system
CN108875754B (en) Vehicle re-identification method based on multi-depth feature fusion network
Shi et al. Objects detection of UAV for anti-UAV based on YOLOv4
CN107742276A (en) One kind is based on the quick processing system of the airborne integration of unmanned aerial vehicle remote sensing image and method
Golovko et al. Development of solar panels detector
Ye et al. CT-Net: An efficient network for low-altitude object detection based on convolution and transformer
CN111046756A (en) Convolutional neural network detection method for high-resolution remote sensing image target scale features
Shangzheng A traffic sign image recognition and classification approach based on convolutional neural network
CN116109950A (en) Low-airspace anti-unmanned aerial vehicle visual detection, identification and tracking method
CN116846059A (en) Edge detection system for power grid inspection and monitoring
Tong et al. UAV target detection based on RetinaNet
Dousai et al. Detecting humans in search and rescue operations based on ensemble learning
Ihekoronye et al. Aerial supervision of drones and other flying objects using convolutional neural networks
Cheng et al. Moving Target Detection Technology Based on UAV Vision
Ghosh et al. AirTrack: Onboard deep learning framework for long-range aircraft detection and tracking
Niu et al. UAV Detection based on improved YOLOv4 object detection model
Delleji et al. An upgraded-YOLO with object augmentation: mini-UAV detection under low-visibility conditions by improving deep neural networks
Ding et al. Research on UAV detection technology of Gm-APD Lidar based on YOLO model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Zhi Xiyang

Inventor after: Yu Lijian

Inventor after: Hu Jianming

Inventor after: Gong Jinnan

Inventor after: Jiang Shikai

Inventor after: Chen Wenbin

Inventor before: Zhi Xiyang

Inventor before: Yu Lijian

Inventor before: Gong Jinnan

Inventor before: Jiang Shikai

Inventor before: Chen Wenbin

Inventor before: Hu Jianming

GR01 Patent grant
GR01 Patent grant