CN112287896A - Unmanned aerial vehicle aerial image target detection method and system based on deep learning - Google Patents

Unmanned aerial vehicle aerial image target detection method and system based on deep learning

Info

Publication number
CN112287896A
CN112287896A
Authority
CN
China
Prior art keywords
target
image
detection
feature map
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011344704.2A
Other languages
Chinese (zh)
Inventor
谷永辉
刘昌军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jiexun Communication Technology Co ltd
Original Assignee
Shandong Jiexun Communication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jiexun Communication Technology Co ltd filed Critical Shandong Jiexun Communication Technology Co ltd
Priority to CN202011344704.2A priority Critical patent/CN112287896A/en
Publication of CN112287896A publication Critical patent/CN112287896A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an unmanned aerial vehicle aerial image target detection method and system based on deep learning, comprising the following steps: acquiring target images of various categories and images shot by an unmanned aerial vehicle; preprocessing the target images, and dividing the preprocessed target images into a training set and a verification set; inputting the target images of the training set into a target detection network for training to obtain a trained target detection model; inputting the target images of the verification set into the trained target detection model for verification to obtain detection target recognition models for a plurality of categories; and inputting the images shot by the unmanned aerial vehicle into the plurality of category detection target recognition models to obtain detection results. By recognizing the features of the various target categories, the method realizes recognition and inspection of various detection targets and alleviates the problem that the amount of detection target data is insufficient for some categories.

Description

Unmanned aerial vehicle aerial image target detection method and system based on deep learning
Technical Field
The present disclosure relates to the fields of computer vision and image processing, and in particular to an unmanned aerial vehicle aerial image target detection method based on deep learning.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Unmanned aerial vehicle aerial photography provides a convenient and effective management means in many application fields, and target detection, i.e. detecting image targets with certain characteristics in aerial photographs, is its most widespread application. However, most detection methods currently in use rely on manual interpretation, in which the aerial images are examined by hand. Although such manual inspection reduces the workload of on-site patrols, interpreting massive volumes of unmanned aerial vehicle aerial pictures or videos entirely by hand remains inefficient.
With the continuing maturation of deep learning technology, target detection based on deep learning has been applied in many fields. Applying deep learning-based target detection to unmanned aerial vehicle aerial images therefore offers clear technical advantages.
Disclosure of Invention
To overcome the deficiencies of the prior art, the present disclosure provides a flexible, efficient and accurate unmanned aerial vehicle aerial image target detection method capable of recognizing multiple categories of targets in aerial images.
In a first aspect, the present disclosure provides a deep learning-based method for detecting targets in unmanned aerial vehicle aerial images, including:
acquiring target images of various categories and shot images of an unmanned aerial vehicle;
preprocessing a target image, and dividing the preprocessed target image into a training set and a verification set;
establishing a target detection network, wherein the target detection network is used for extracting a feature map of a target image, fusing the feature map to obtain a fused feature map, and performing model training according to the fused feature map;
inputting the target images of the training set into a target detection network for training to obtain a trained target detection model; inputting the target images of the verification set into a trained target detection model for verification to obtain a plurality of class detection target identification models;
and inputting the shot image of the unmanned aerial vehicle into a plurality of category detection target recognition models to obtain a detection result.
In a second aspect, the present disclosure provides an unmanned aerial vehicle aerial image target detection system based on deep learning, including:
a data acquisition module: acquiring target images of various categories and shot images of an unmanned aerial vehicle;
a preprocessing module: preprocessing a target image, and dividing the preprocessed target image into a training set and a verification set;
the target detection network establishment module: establishing a target detection network, wherein the target detection network is used for extracting a feature map of a target image, fusing the feature map to obtain a fused feature map, and performing model training according to the fused feature map;
a training module: inputting the target images of the training set into a target detection network for training to obtain a trained target detection model; inputting the target images of the verification set into a trained target detection model for verification to obtain a plurality of class detection target identification models;
a detection result acquisition module: and inputting the shot image of the unmanned aerial vehicle into a plurality of category detection target recognition models to obtain a detection result.
In a third aspect, the present disclosure provides a computer-readable storage medium for storing computer instructions, which when executed by a processor, perform the method for detecting the target in the aerial image of the drone according to the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, implement the method for detecting the target in the aerial image of the unmanned aerial vehicle according to the first aspect.
Compared with the prior art, the present disclosure has the following beneficial effects:
1. The target images of the training set are input into a target detection network for training to obtain a trained target detection model; the target images of the verification set are input into the trained target detection model for verification to obtain detection target recognition models for a plurality of categories; and the images shot by the unmanned aerial vehicle are input into the plurality of category detection target recognition models to obtain detection results. By adopting computer vision technology and recognizing the features of the various target categories, recognition and inspection of various detection targets are realized, which solves the low efficiency of interpreting massive unmanned aerial vehicle aerial pictures or videos entirely by manual interpretation.
2. To address the small-sample problem, the existing data are preprocessed using the labeled coordinates of the unmanned aerial vehicle images, and a generative adversarial network is used to perform data enhancement on the preprocessed data (a generator-sampling sketch is given after this list), which alleviates the problem that the amount of detection target data is insufficient for some categories.
3. The target detection model is further improved: the target detection network extracts feature maps of the target image, fuses them to obtain a fused feature map, and performs model training on the fused feature map, which improves the detection accuracy of small targets and addresses how to apply deep learning target detection technology to target detection in unmanned aerial vehicle aerial images.
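Beneficial effect 2 relies on data enhancement with a generative adversarial network. The following is a minimal sketch, under the assumption that a generator network G (not specified in this disclosure) has already been trained on the scarce category; it only shows how synthetic target images would be sampled to enlarge the training set.

```python
# Minimal GAN-based augmentation sketch; the generator G, its latent dimension
# and the use of PyTorch are assumptions made for illustration.
import torch

def augment_with_gan(G, n_samples, latent_dim=100, device="cpu"):
    """Sample n_samples synthetic target images from a trained generator G."""
    G.eval()
    with torch.no_grad():
        z = torch.randn(n_samples, latent_dim, device=device)  # latent noise vectors
        fake_images = G(z)                                      # synthesised images
    return fake_images  # to be appended to the scarce category's training set
```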
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flow chart of detection target recognition in an embodiment of the present disclosure;
fig. 2 is a training process flow diagram of an embodiment of the disclosure.
Detailed description of embodiments:
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; it should further be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Explanation of terms:
region proposal network (RPN), referred to herein as the regional recommendation network;
non-maximum suppression (NMS);
feature map;
softmax (logistic regression classification model);
proposal boxes (candidate recommendation boxes);
RoI Pooling (region of interest pooling);
proposal feature map;
epoch (one training pass over the training set).
example 1
As shown in fig. 1, the present disclosure provides a method for detecting an object in an aerial image of an unmanned aerial vehicle based on deep learning, including: acquiring target images of various categories and shot images of an unmanned aerial vehicle;
preprocessing a target image, and dividing the preprocessed target image into a training set and a verification set;
establishing a target detection network, wherein the target detection network is used for extracting a feature map of a target image, fusing the feature map to obtain a fused feature map, and performing model training according to the fused feature map;
inputting the target images of the training set into a target detection network for training to obtain a trained target detection model;
inputting the target images of the verification set into a trained target detection model for verification to obtain a plurality of class detection target identification models;
and inputting the shot image of the unmanned aerial vehicle into a plurality of category detection target recognition models to obtain a detection result.
Further, the target images of the various categories are images obtained by aerially photographing the various categories of detection targets with an unmanned aerial vehicle.
Further, preprocessing the target image includes:
performing image denoising on the target image;
performing data enhancement on the target image with a generative adversarial network;
and labeling the various detection targets in the target image.
Further, dividing the preprocessed target images into a training set and a verification set includes dividing the labeled detection target images into the training set and the verification set according to a set ratio.
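As a concrete illustration of the denoising and splitting steps above, the following sketch assumes the labeled target images are ordinary image files on disk, uses OpenCV non-local-means denoising, and adopts an 8:2 training/verification ratio; these choices are assumptions rather than requirements of the disclosure.

```python
# Denoise each target image and split the results into training and
# verification sets at a set ratio (assumed 8:2 here).
import os
import random
import cv2  # OpenCV, assumed available for image I/O and denoising

def preprocess_and_split(image_dir, train_dir, val_dir, split_ratio=0.8, seed=0):
    os.makedirs(train_dir, exist_ok=True)
    os.makedirs(val_dir, exist_ok=True)
    names = [n for n in os.listdir(image_dir) if n.lower().endswith((".jpg", ".png"))]
    random.Random(seed).shuffle(names)
    cut = int(len(names) * split_ratio)
    for i, name in enumerate(names):
        img = cv2.imread(os.path.join(image_dir, name))
        if img is None:
            continue
        img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)  # denoise
        out_dir = train_dir if i < cut else val_dir
        cv2.imwrite(os.path.join(out_dir, name), img)
```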
Further, the step of inputting the target images of the training set into the target detection network for training comprises,
inputting the images in the training set into a target detection network, extracting N layers of feature maps, fusing the N layers of feature maps to obtain a fused feature map, and inputting the fused feature map into a regional recommendation network;
the regional recommendation network performs cluster analysis on the bounding boxes of the target images in the training set using a clustering method to obtain anchor frame scales and ratios, generates anchor frames from the fused feature map and the clustered anchor frame scales and ratios, and filters and labels the generated anchor frames (a clustering sketch is given after this step list);
classifying the anchor frames with a logistic regression (softmax) model, performing non-maximum suppression by comparing intersection-over-union values, and finally generating a plurality of candidate recommendation frames to produce proposal boxes of different layers;
and transmitting the proposal boxes generated by the regional recommendation network and the fused feature map to a region-of-interest pooling layer, and normalizing to obtain proposal feature maps.
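The clustering step referenced above is not specified further in the disclosure; a common choice is k-means over the labeled box widths and heights with 1 − IoU as the distance, sketched below purely as an assumption.

```python
# k-means style clustering of training-set bounding boxes (w, h) to obtain
# anchor frame scales and ratios; the 1 - IoU distance and k = 9 are assumptions.
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes (N, 2) and anchors (K, 2), both given as (w, h)."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def cluster_anchors(boxes_wh, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)  # nearest anchor by IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(boxes_wh[assign == j], axis=0)
    return anchors  # each (w, h) pair yields an anchor scale and aspect ratio
```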
Specifically, the step of inputting the target images of the training set into the target detection network for training includes:
inputting the images in the training set into a feature extraction network, storing the extracted feature maps of the N layers, fusing the feature maps of the N layers to obtain a new feature map, and inputting the new feature map into the RPN;
the RPN performs cluster analysis on the bounding boxes in the training set using a clustering method to obtain anchor frame scales and ratios better suited to the detection target data set; anchor frames are generated from the fused feature map and the clustered anchor frame scales and ratios, and the generated anchor frames are filtered and labeled: an anchor frame is judged to be foreground when its overlap with a labeled object is greater than a threshold, and background when the overlap is smaller than the threshold;
classifying the anchor frames with softmax (a logistic regression model), performing NMS non-maximum suppression by comparing intersection-over-union values, and finally generating a plurality of candidate recommendation frames to produce proposal boxes of different layers;
transmitting the proposal boxes generated by the RPN and the fused feature map into the RoI Pooling layer, which normalizes them into proposal feature maps;
inputting the target images of the verification set into a trained target detection model for verification to obtain a plurality of class detection target recognition models,
transmitting the proposal feature maps to a fully connected layer, calculating the specific category, position and probability information of each proposal through the fully connected layer and softmax, obtaining the position offset of each proposal for bounding box regression, applying the NMS method again (a generic sketch is given below) to retain the best prediction boxes, comparing the prediction results with the verification set, and adjusting the weight parameters of the network through back propagation;
after multiple rounds of training, the model that performs best on the verification set is saved: hyperparameters such as the learning rate are adjusted over multiple training runs, the model is saved once every certain number of epochs, and the saved model with the best combined accuracy and recall on the verification set is retained.
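The intersection-over-union comparison used for non-maximum suppression in the steps above can be sketched as follows; this is a generic NMS given as an assumption, with boxes in (x1, y1, x2, y2) form, not the specific implementation of the disclosure.

```python
# Generic IoU-based non-maximum suppression over scored prediction boxes.
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,). Returns kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]  # drop boxes overlapping the kept box too much
    return keep
```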
The step of inputting the images shot by the unmanned aerial vehicle into the plurality of category detection target recognition models to obtain detection results comprises:
carrying out aerial photography with an unmanned aerial vehicle over the airspace containing the detection targets, inputting the images shot by the unmanned aerial vehicle into the plurality of category detection target recognition models, and obtaining the detection results; the detection results are the various recognized detection targets, such as dam-shaped detection targets, tubular detection targets and the like, and the image information containing the recognized detection targets is output.
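The following sketch shows one way the shot image could be passed through the plurality of category detection target recognition models; the model interface (a predict method returning boxes and scores) and the 0.5 threshold are assumptions made for illustration.

```python
# Run one drone photograph through a dictionary of per-category detection models
# and collect all detections above a probability threshold.
def detect_all_categories(image, category_models, prob_thresh=0.5):
    results = []
    for category, model in category_models.items():
        boxes, scores = model.predict(image)  # assumed per-category model API
        for box, score in zip(boxes, scores):
            if score >= prob_thresh:
                results.append({"category": category, "box": box, "score": float(score)})
    return results
```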
Further, the RoI Pooling layer refers to: a pooling layer that processes the incoming information and produces proposal feature maps of uniform size.
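As an illustration of this normalization, the sketch below uses torchvision's roi_pool as a stand-in RoI Pooling layer; the 7×7 output size, 256-channel fused feature map and stride-16 spatial scale are assumptions.

```python
# Pool arbitrarily sized proposal boxes from the fused feature map into
# uniform 7x7 proposal feature maps.
import torch
from torchvision.ops import roi_pool

fused_feature_map = torch.randn(1, 256, 64, 64)                  # (N, C, H, W), stride 16
proposal_boxes = [torch.tensor([[64.0, 64.0, 320.0, 256.0],
                                [160.0, 256.0, 960.0, 960.0]])]  # per-image (x1, y1, x2, y2)
proposal_feature_maps = roi_pool(fused_feature_map, proposal_boxes,
                                 output_size=(7, 7), spatial_scale=1.0 / 16)
print(proposal_feature_maps.shape)  # torch.Size([2, 256, 7, 7])
```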
Further, the most effective model is the model whose sum of accuracy and recall on the verification set is highest, on the basis that both meet the set value requirement.
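A minimal sketch of this selection rule follows; the checkpoint record format and the 0.5 minimum values are assumptions.

```python
# Keep the saved model whose accuracy + recall on the verification set is
# highest, subject to both meeting the set minimum requirement.
def select_most_effective_model(checkpoints, min_accuracy=0.5, min_recall=0.5):
    best, best_score = None, float("-inf")
    for ckpt in checkpoints:  # each: {"path": ..., "accuracy": ..., "recall": ...}
        if ckpt["accuracy"] < min_accuracy or ckpt["recall"] < min_recall:
            continue  # fails the set value requirement
        score = ckpt["accuracy"] + ckpt["recall"]
        if score > best_score:
            best, best_score = ckpt, score
    return best  # the most effective model's checkpoint record
```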
Further, the image shot by the unmanned aerial vehicle is an image of the region to be detected, obtained by aerially photographing the detection targets with the unmanned aerial vehicle; the image retains information such as the longitude, latitude and time of shooting.
Furthermore, sliding-window detection means detecting the large unmanned aerial vehicle image in multiple passes by moving a window over the image to be detected at a certain stride from the top left to the bottom right, horizontally first and then vertically.
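The traversal order described above (horizontal first, then vertical) can be sketched as a window generator; the 1024-pixel window and 768-pixel stride are assumed values.

```python
# Yield sliding windows over a large aerial image, left-to-right first and
# then top-to-bottom, covering the right and bottom edges as well.
def sliding_windows(image, window=1024, stride=768):
    h, w = image.shape[:2]
    tops = list(range(0, max(h - window, 0) + 1, stride))
    lefts = list(range(0, max(w - window, 0) + 1, stride))
    if tops[-1] + window < h:
        tops.append(h - window)    # cover the bottom edge
    if lefts[-1] + window < w:
        lefts.append(w - window)   # cover the right edge
    for top in tops:               # then move down (vertically)
        for left in lefts:         # sweep across (horizontally) first
            yield left, top, image[top:top + window, left:left + window]
```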
Further, the identifying of the various types of detection targets includes: inputting an image to be detected into a detection target recognition model, setting a probability threshold value, and outputting an image with a detection target prediction frame and the probability that the prediction frame belongs to a specific detection target category.
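A sketch of producing the output image follows: prediction frames whose probability exceeds the set threshold are drawn on the image to be detected together with the category and probability. The OpenCV drawing calls and the detection record format are assumptions.

```python
# Draw above-threshold prediction frames and their category/probability text
# onto the image to be detected.
import cv2

def draw_detections(image, detections, prob_thresh=0.5):
    for det in detections:  # det: {"box": (x1, y1, x2, y2), "category": str, "score": float}
        if det["score"] < prob_thresh:
            continue
        x1, y1, x2, y2 = (int(v) for v in det["box"])
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        label = "{} {:.2f}".format(det["category"], det["score"])
        cv2.putText(image, label, (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return image
```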
Further, the target detection network is obtained through the following improvements:
Step (4-1-1): the feature extraction network of the target detection network is improved: several layers or all layers of the feature extraction network are used to extract feature maps of the image, and the low-layer feature maps are fused with the high-layer feature maps to obtain a new feature map (a fusion sketch is given after step (4-1-4)).
Step (4-1-2): the anchor frame scales and ratios selected by the regional recommendation network (RPN) are improved: cluster analysis is performed on the bounding boxes in the training set using a clustering method so that the anchor frame scales and ratios are better suited to the detection target data set and the prediction frames generated by the model overlap more closely with the ground-truth frames. The RPN generates detection frames for the input feature map, judges whether each detection frame belongs to the foreground or the background, and, after bounding box regression, generates candidate recommendation frames of different layers through non-maximum suppression (NMS);
step (4-1-3): inputting the candidate recommendation frames prophase boxes obtained by the RPN in the step (4-1-2) and the feature map subjected to feature fusion in the step (4-1-1) into a Roi Pooling layer to obtain prophase feature maps;
step (4-1-4): transmitting the generic feature maps to the full connection layer, and calculating and judging the category of a detection target of each generic feature map through the full connection layer and the softmax classifier;
obtaining position offset by comparing the category of the detection target with the marked actual situation, performing frame regression according to the position offset, and then retaining a prediction frame with the best effect by using a non-maximum suppression (NMS) method;
through border regression performed in the RPN and border regression performed in the full connection layer, the precision of the prediction frame is improved through two-time border regression, and through comparison of the prediction result and the verification set, the weight parameters in the network are adjusted through back propagation.
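Step (4-1-1) leaves the fusion operation open; one widely used possibility, shown here purely as an assumed sketch, is an FPN-style fusion in which the low-layer and high-layer feature maps are projected to a common channel width, the high-layer map is upsampled, and the two are added.

```python
# FPN-style fusion of a low-layer and a high-layer feature map into the new
# feature map fed to the regional recommendation network; channel counts and
# strides are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    def __init__(self, low_channels, high_channels, out_channels=256):
        super().__init__()
        self.lateral_low = nn.Conv2d(low_channels, out_channels, kernel_size=1)
        self.lateral_high = nn.Conv2d(high_channels, out_channels, kernel_size=1)
        self.smooth = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, low_fmap, high_fmap):
        low = self.lateral_low(low_fmap)
        high = self.lateral_high(high_fmap)
        high = F.interpolate(high, size=low.shape[-2:], mode="nearest")  # upsample
        return self.smooth(low + high)  # fused feature map

# e.g. fuse a stride-8 map (512 channels) with a stride-16 map (1024 channels)
fused = FeatureFusion(512, 1024)(torch.randn(1, 512, 64, 64),
                                 torch.randn(1, 1024, 32, 32))
```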
Example 2
An unmanned aerial vehicle aerial image target detection system based on deep learning includes:
a data acquisition module configured to: acquiring target images of various categories and shot images of an unmanned aerial vehicle;
a pre-processing module configured to: preprocessing a target image, and dividing the preprocessed target image into a training set and a verification set;
a target detection network establishment module configured to: establishing a target detection network, wherein the target detection network is used for extracting a feature map of a target image, fusing the feature map to obtain a fused feature map, and performing model training according to the fused feature map;
a training module configured to: inputting the target images of the training set into a target detection network for training to obtain a trained target detection model; inputting the target images of the verification set into a trained target detection model for verification to obtain a plurality of class detection target identification models;
a detection result acquisition module configured to: and inputting the shot image of the unmanned aerial vehicle into a plurality of category detection target recognition models to obtain a detection result.
Further, the specific configuration modes of the data acquisition module, the preprocessing module, the target detection network establishing module, the training module and the detection result obtaining module respectively correspond to the specific steps of the unmanned aerial vehicle aerial image target detection method in the above embodiment.
In other embodiments, the present disclosure also provides:
a computer readable storage medium storing computer instructions which, when executed by a processor, perform the drone aerial image target detection method of embodiment 1.
An electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the method for detecting the target of the aerial image of the unmanned aerial vehicle according to embodiment 1.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (10)

1. An unmanned aerial vehicle aerial image target detection method based on deep learning is characterized by comprising the following steps:
acquiring target images of various categories and shot images of an unmanned aerial vehicle;
preprocessing a target image, and dividing the preprocessed target image into a training set and a verification set;
establishing a target detection network, wherein the target detection network is used for extracting a feature map of a target image, fusing the feature map to obtain a fused feature map, and performing model training according to the fused feature map;
inputting the target images of the training set into a target detection network for training to obtain a trained target detection model; inputting the target images of the verification set into a trained target detection model for verification to obtain a plurality of class detection target identification models;
and inputting the shot image of the unmanned aerial vehicle into a plurality of category detection target recognition models to obtain a detection result.
2. The unmanned aerial vehicle aerial image target detection method of claim 1, wherein said pre-processing the target image comprises,
carrying out image denoising processing on the target image;
performing data enhancement on the target image with a generative adversarial network;
and labeling various detection targets in the target image.
3. The method of claim 1, wherein the step of inputting the target images of the training set into the target detection network for training comprises,
inputting the images in the training set into a target detection network, extracting feature maps and fusing them to obtain a fused feature map;
the regional recommendation network performs cluster analysis on the target images and generates anchor frames from the fused feature map and the anchor frame scales and ratios obtained by the cluster analysis;
classifying the anchor frames and generating proposal boxes of different layers by a non-maximum suppression method;
and normalizing the proposal boxes and the fused feature map to obtain proposal feature maps.
4. The method for detecting the target of the aerial image of the unmanned aerial vehicle as claimed in claim 3, wherein classifying the anchor frames and generating the proposal boxes of different layers by non-maximum suppression comprises the following steps: classifying the anchor frames with a logistic regression model, performing non-maximum suppression by comparing intersection-over-union values, generating a plurality of candidate recommendation frames, and producing proposal boxes of different layers.
5. The unmanned aerial vehicle aerial image target detection method of claim 3, wherein the step of normalizing the proposal boxes and the fused feature map to obtain proposal feature maps comprises: transmitting the proposal boxes generated by the regional recommendation network and the fused feature map to a region-of-interest pooling layer, and normalizing to obtain proposal feature maps.
6. The method for detecting the target of the aerial image of the unmanned aerial vehicle according to claim 3, wherein the step of inputting the target images of the verification set into the trained target detection model for verification to obtain the detection target recognition models for the plurality of categories includes transmitting the proposal feature maps to the fully connected layer, calculating the specific category, position and probability information of each proposal and obtaining the position offset of each proposal through the fully connected layer and the logistic regression model, using the NMS method to retain the best prediction boxes, and adjusting the weight parameters of the network through back propagation by comparing the prediction results with the verification set.
7. The target detection method for the aerial image of the unmanned aerial vehicle as claimed in claim 1, wherein a plurality of layers or all layers of the feature extraction network of the target detection network are used for extracting the feature map of the image, and the feature map of the lower layer is fused with the feature map of the higher layer to obtain a fused feature map.
8. An unmanned aerial vehicle aerial image target detection system based on deep learning is characterized by comprising:
a data acquisition module: acquiring target images of various categories and shot images of an unmanned aerial vehicle;
a preprocessing module: preprocessing a target image, and dividing the preprocessed target image into a training set and a verification set;
the target detection network establishment module: establishing a target detection network, wherein the target detection network is used for extracting a feature map of a target image, fusing the feature map to obtain a fused feature map, and performing model training according to the fused feature map;
a training module: inputting the target images of the training set into a target detection network for training to obtain a trained target detection model; inputting the target images of the verification set into a trained target detection model for verification to obtain a plurality of class detection target identification models;
a detection result acquisition module: and inputting the shot image of the unmanned aerial vehicle into a plurality of category detection target recognition models to obtain a detection result.
9. A computer readable storage medium storing computer instructions which, when executed by a processor, perform the drone aerial image target detection method of any one of claims 1-7.
10. An electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the drone aerial image target detection method of any one of claims 1-7.
CN202011344704.2A 2020-11-26 2020-11-26 Unmanned aerial vehicle aerial image target detection method and system based on deep learning Pending CN112287896A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011344704.2A CN112287896A (en) 2020-11-26 2020-11-26 Unmanned aerial vehicle aerial image target detection method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011344704.2A CN112287896A (en) 2020-11-26 2020-11-26 Unmanned aerial vehicle aerial image target detection method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN112287896A true CN112287896A (en) 2021-01-29

Family

ID=74426437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011344704.2A Pending CN112287896A (en) 2020-11-26 2020-11-26 Unmanned aerial vehicle aerial image target detection method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112287896A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284767A (en) * 2018-07-23 2019-01-29 天津师范大学 A kind of pedestrian retrieval method based on augmented sample and multithread layer
CN109918523A (en) * 2019-02-14 2019-06-21 广东工业大学 A kind of circuit board element detection method based on YOLO9000 algorithm
CN110852241A (en) * 2019-11-06 2020-02-28 西安交通大学 Small target detection method applied to nursing robot

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884055A (en) * 2021-03-03 2021-06-01 歌尔股份有限公司 Target labeling method and target labeling device
CN112884055B (en) * 2021-03-03 2023-02-03 歌尔股份有限公司 Target labeling method and target labeling device
CN112987765A (en) * 2021-03-05 2021-06-18 北京航空航天大学 Precise autonomous take-off and landing method of unmanned aerial vehicle/boat simulating attention distribution of prey birds
CN113312987A (en) * 2021-05-11 2021-08-27 东南大学 Unmanned aerial vehicle aerial photographing based pavement crack image identification method
CN113312987B (en) * 2021-05-11 2024-04-16 东南大学 Recognition method based on unmanned aerial vehicle road surface crack image
CN113469114A (en) * 2021-07-19 2021-10-01 国网陕西省电力公司电力科学研究院 Method, device and equipment for identifying environmental water conservation disturbed soil range of power transmission line
CN113327461A (en) * 2021-08-03 2021-08-31 杭州海康威视数字技术股份有限公司 Cooperative unmanned aerial vehicle detection method, device and equipment
CN113327461B (en) * 2021-08-03 2021-11-23 杭州海康威视数字技术股份有限公司 Cooperative unmanned aerial vehicle detection method, device and equipment
CN114677859A (en) * 2022-03-23 2022-06-28 广州中科云图智能科技有限公司 Unmanned aerial vehicle air route automatic correction method and device
CN115908897A (en) * 2022-10-28 2023-04-04 西南交通大学 High-resolution remote sensing image-based intelligent identification method for high-speed railway power supply facilities
CN115908897B (en) * 2022-10-28 2023-09-12 西南交通大学 Intelligent identification method for high-speed railway power supply facilities based on high-resolution remote sensing images

Similar Documents

Publication Publication Date Title
CN112287896A (en) Unmanned aerial vehicle aerial image target detection method and system based on deep learning
CN111027481B (en) Behavior analysis method and device based on human body key point detection
CN110569754A (en) Image target detection method, device, storage medium and equipment
CN110222686B (en) Object detection method, object detection device, computer equipment and storage medium
CN115797736B (en) Training method, device, equipment and medium for target detection model and target detection method, device, equipment and medium
CN117011563B (en) Road damage inspection cross-domain detection method and system based on semi-supervised federal learning
CN113129335A (en) Visual tracking algorithm and multi-template updating strategy based on twin network
CN110659601A (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
CN113870254B (en) Target object detection method and device, electronic equipment and storage medium
CN111160100A (en) Lightweight depth model aerial photography vehicle detection method based on sample generation
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
CN112347967A (en) Pedestrian detection method fusing motion information in complex scene
CN117152513A (en) Vehicle boundary positioning method for night scene
CN116758421A (en) Remote sensing image directed target detection method based on weak supervised learning
CN116824333A (en) Nasopharyngeal carcinoma detecting system based on deep learning model
CN116363532A (en) Unmanned aerial vehicle image traffic target detection method based on attention mechanism and re-parameterization
CN113780462B (en) Vehicle detection network establishment method based on unmanned aerial vehicle aerial image and application thereof
CN114694090A (en) Campus abnormal behavior detection method based on improved PBAS algorithm and YOLOv5
CN107403192A (en) A kind of fast target detection method and system based on multi-categorizer
CN114548376A (en) Intelligent transportation system-oriented vehicle rapid detection network and method
CN114463628A (en) Deep learning remote sensing image ship target identification method based on threshold value constraint
Bharathi et al. A Conceptual Real-Time Deep Learning Approach for Object Detection, Tracking and Monitoring Social Distance using Yolov5
Xiaozhou et al. Recognition method of outdoor design scene based on support vector machine and feature fusion
CN115937657A (en) Image processing method, device, equipment and storage medium
CN115984304A (en) Instance partitioning method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination