CN110751106A - Unmanned aerial vehicle target detection method and system - Google Patents
- Publication number
- CN110751106A (application CN201911010556.8A)
- Authority
- CN
- China
- Prior art keywords
- target position
- detection result
- aerial vehicle
- unmanned aerial
- position detection
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an unmanned aerial vehicle target detection method and system. The method first collects unmanned aerial vehicle video in real time; second, determines the k-th frame from the video; third, inputs the k-th frame into a detection model for unmanned aerial vehicle position detection to obtain a first target position detection result corresponding to the k-th frame; then performs unmanned aerial vehicle position tracking with a kernelized correlation filter (KCF) to obtain a second target position detection result corresponding to the k-th frame; and finally determines the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle tracking, and in particular to an unmanned aerial vehicle target detection method and system.
Background
Because an unmanned aerial vehicle is small and usually made largely of plastic, detectors such as radar often fail to discover it in time, which makes detecting unmanned aerial vehicles in no-fly zones difficult. Detection and early warning of unmanned aerial vehicles based on optical equipment such as cameras is therefore a very effective solution.
For a fast-moving unmanned aerial vehicle, detection faces two challenges: rapid relative motion between the vehicle and the camera easily produces imaging motion blur, so that the vehicle's appearance features are changed or lost; and the appearance features of a small unmanned aerial vehicle, or one far from the camera, are indistinct and easily confused with birds and other categories, so the target detection of the unmanned aerial vehicle is inaccurate.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle target detection method and system to improve the target tracking accuracy of an unmanned aerial vehicle.
In order to achieve the above object, the present invention provides a method for detecting an unmanned aerial vehicle target, the method comprising:
step S1: collecting videos of the unmanned aerial vehicle in real time;
step S2: determining a kth frame picture according to the unmanned aerial vehicle video, wherein k is a positive integer greater than or equal to 1;
step S3: inputting the k-th frame into a detection model to perform unmanned aerial vehicle position detection, and obtaining a first target position detection result corresponding to the k-th frame;
step S4: carrying out unmanned aerial vehicle position tracking by using KCF to obtain a second target position detection result corresponding to the k-th frame;
step S5: determining the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame.
Optionally, determining the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame specifically includes:
step S51: judging whether the first target position detection result contains a detection; if it does, performing step S52; if it does not, taking the second target position detection result corresponding to the k-th frame as the candidate unmanned aerial vehicle target position;
step S52: determining an intersection-over-union (IoU) ratio from the first target position detection result and the second target position detection result;
step S53: judging whether the IoU ratio is greater than or equal to a set value; if so, selecting the detection result with the larger area among the first and second target position detection results as the candidate unmanned aerial vehicle target position; if the IoU ratio is smaller than the set value, taking the first target position detection result as the candidate unmanned aerial vehicle target position;
step S54: initializing KCF according to the candidate unmanned aerial vehicle target position;
step S55: judging whether k is greater than or equal to the total number of frames; if so, outputting the candidate unmanned aerial vehicle target position as the unmanned aerial vehicle position; if k is less than the total number of frames, setting k = k + 1 and returning to step S2.
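The intersection ratio in steps S52–S53 is the standard intersection-over-union (IoU) of the detection box and the tracking box. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates (the box format and function name are our assumptions; the patent does not fix them):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

With this convention, identical boxes give an IoU of 1 and disjoint boxes give 0; the "set value" of step S53 is compared against this quantity.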
Optionally, inputting the k-th frame into the detection model for unmanned aerial vehicle position detection to obtain the first target position detection result corresponding to the k-th frame specifically includes:
labeling the unmanned aerial vehicle in multiple frames to obtain a labeled data set;
taking a set number of samples from the labeled data set as the training set;
inputting the training set into a RetinaNet network for training to obtain the detection model;
and inputting the k-th frame into the detection model for unmanned aerial vehicle position detection to obtain the first target position detection result corresponding to the k-th frame.
Optionally, the labeled data set is in Pascal VOC format.
The invention also provides an unmanned aerial vehicle target detection system, the system comprising:
an acquisition module for collecting unmanned aerial vehicle video in real time;
a k-th frame determining module for determining the k-th frame from the unmanned aerial vehicle video, wherein k is a positive integer greater than or equal to 1;
a first target position detection result determining module for inputting the k-th frame into a detection model for unmanned aerial vehicle position detection to obtain a first target position detection result corresponding to the k-th frame;
a second target position detection result determining module for tracking the unmanned aerial vehicle position with KCF to obtain a second target position detection result corresponding to the k-th frame;
and an unmanned aerial vehicle target position determining module for determining the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame.
Optionally, the unmanned aerial vehicle target position determining module specifically includes:
a first judging unit for judging whether the first target position detection result contains a detection; if it does, executing the IoU determining unit; if it does not, taking the second target position detection result corresponding to the k-th frame as the candidate unmanned aerial vehicle target position;
an IoU determining unit for determining an intersection-over-union (IoU) ratio from the first and second target position detection results;
a second judging unit for judging whether the IoU ratio is greater than or equal to a set value; if so, selecting the detection result with the larger area among the first and second target position detection results as the candidate unmanned aerial vehicle target position; otherwise, taking the first target position detection result as the candidate unmanned aerial vehicle target position;
an initialization unit for initializing KCF according to the candidate unmanned aerial vehicle target position;
and a third judging unit for judging whether k is greater than or equal to the total number of frames; if so, outputting the candidate unmanned aerial vehicle target position as the unmanned aerial vehicle position; if k is less than the total number of frames, setting k = k + 1 and returning to the k-th frame determining module.
Optionally, the first target position detection result determining module specifically includes:
a data set labeling unit for labeling the unmanned aerial vehicle in multiple frames to obtain a labeled data set;
an assignment unit for taking a set number of samples from the labeled data set as the training set and the remainder as the test set;
a training unit for inputting the training set into a RetinaNet network for training to obtain the detection model;
and a first target position detection result determining unit for inputting the k-th frame into the detection model for unmanned aerial vehicle position detection to obtain the first target position detection result corresponding to the k-th frame.
Optionally, the labeled data set is in Pascal VOC format.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a method and a system for detecting an unmanned aerial vehicle target, wherein the method comprises the steps of firstly, acquiring an unmanned aerial vehicle video in real time; secondly, determining a kth frame picture according to the unmanned aerial vehicle video; inputting the k frame of picture into a detection model again for unmanned aerial vehicle position detection, and obtaining a first target position detection result corresponding to the k frame of picture; then, carrying out unmanned aerial vehicle position tracking by using KCF to obtain a second target position detection result corresponding to the kth frame of picture; and finally, determining the target position of the unmanned aerial vehicle according to the first target position detection result and the second target position detection result corresponding to the kth frame of picture.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a target detection method for an unmanned aerial vehicle according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of rapid movement of an unmanned aerial vehicle according to an embodiment of the invention;
fig. 3 is a structural diagram of an unmanned aerial vehicle target detection system in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide an unmanned aerial vehicle target detection method and system to improve the target tracking accuracy of an unmanned aerial vehicle.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
KCF, as used in the following embodiments, stands for kernelized correlation filter.
Fig. 1 is a flowchart of an unmanned aerial vehicle target detection method according to an embodiment of the present invention, and as shown in fig. 1, the present invention discloses an unmanned aerial vehicle target detection method, which includes:
step S1: collecting videos of the unmanned aerial vehicle in real time; the unmanned aerial vehicle video covers the various scenes that may be encountered;
step S2: determining a kth frame picture according to the unmanned aerial vehicle video, wherein k is a positive integer greater than or equal to 1;
step S3: inputting the k-th frame into a detection model to perform unmanned aerial vehicle position detection, and obtaining a first target position detection result corresponding to the k-th frame;
step S4: carrying out unmanned aerial vehicle position tracking by using the kernelized correlation filter KCF to obtain a second target position detection result corresponding to the k-th frame;
step S5: determining the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame.
The individual steps are discussed in detail below:
Step S1: collecting videos of the unmanned aerial vehicle in real time. The unmanned aerial vehicle video comprises fast-motion video of a small unmanned aerial vehicle, motion video of a distant unmanned aerial vehicle, or fast-motion video of an unmanned aerial vehicle. In the present invention, a "small target" means that the unmanned aerial vehicle occupies fewer than 30x30 pixels in the picture; that is, smallness can arise from either the vehicle's size or its distance. With the distance unchanged, any unmanned aerial vehicle occupying fewer than 30x30 pixels in the video is called a small unmanned aerial vehicle; with the vehicle unchanged, any distance at which it occupies fewer than 30x30 pixels in the captured picture is called a long distance. As shown in fig. 2, the small black frame is the position frame of the unmanned aerial vehicle detected in the m-th frame, and the large black frame is the enlargement frame, obtained by enlarging the position frame 2.5 times. If the unmanned aerial vehicle's motion exceeds the enlargement frame in the (m+1)-th frame, the motion is defined as fast motion, where m is a positive integer greater than or equal to 1.
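The 2.5x enlargement frame and the fast-motion criterion can be sketched as follows (a minimal illustration; the helper names and the (center-x, center-y, width, height) box format are our assumptions, not from the patent):

```python
def enlarge(box, factor=2.5):
    """Scale a (cx, cy, w, h) position frame about its center."""
    cx, cy, w, h = box
    return (cx, cy, w * factor, h * factor)

def is_fast_motion(prev_box, next_box, factor=2.5):
    """Fast motion: the (m+1)-th frame's position frame is no longer
    fully contained in the 2.5x enlargement frame of the m-th frame."""
    cx, cy, w, h = enlarge(prev_box, factor)
    x1, y1, x2, y2 = cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    ncx, ncy, nw, nh = next_box
    nx1, ny1 = ncx - nw / 2, ncy - nh / 2
    nx2, ny2 = ncx + nw / 2, ncy + nh / 2
    inside = x1 <= nx1 and y1 <= ny1 and nx2 <= x2 and ny2 <= y2
    return not inside
```

For a 20x20 position frame, the enlargement frame is 50x50 around the same center; a small drift stays inside it, while a large displacement between consecutive frames triggers the fast-motion label.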
Step S3: inputting the k-th frame into a detection model for unmanned aerial vehicle position detection and obtaining a first target position detection result corresponding to the k-th frame specifically includes the following steps:
step S31: labeling the unmanned aerial vehicle in multiple frames to obtain a labeled data set; 3500 frames of unmanned aerial vehicle data were collected in this embodiment;
step S32: taking a set number of samples from the labeled data set as the training set;
step S33: inputting the training set into a RetinaNet network for training to obtain the detection model;
step S34: inputting the k-th frame into the detection model for unmanned aerial vehicle position detection and obtaining the first target position detection result corresponding to the k-th frame.
Step S33: inputting the training set into a RetinaNet network for training to obtain the detection model specifically comprises: inputting the training set into the RetinaNet network and training with the following loss function (the focal loss) to obtain the detection model:

loss = -α(1 - p_t)^γ · log(p_t)

where p_t denotes the corrected probability, α is the positive/negative sample weighting parameter matrix, and γ adjusts the contribution of hard and easy samples to the loss. The label is y = 1 for the positive class and y = -1 for the negative class. Let x denote the predicted probability that the target is an unmanned aerial vehicle, produced by the RetinaNet activation function, with x ∈ (0, 1); then p_t = x when y = 1 and p_t = 1 - x when y = -1. If the prediction is positive and the label is 1, then p_t > 0.5, and as the predicted value increases, p_t approaches 1; likewise, for a negative-class prediction with label -1, p_t > 0.5, and the smaller the predicted value, the closer p_t is to 1. Larger predictions for the positive class and smaller predictions for the negative class are better; both drive p_t toward 1 for all samples. In this system, α = 1 for positive samples and α = 0.25 for negative samples: because negative samples in a one-stage object detection algorithm greatly outnumber positive samples, this weighting gives positive samples a larger influence on the loss function and handles the positive/negative imbalance. γ adjusts the contribution of hard samples: for a hard sample the model classifies poorly, p_t is small, (1 - p_t)^γ is large, and the loss is relatively large; for an easy sample the model classifies well, p_t is large, (1 - p_t)^γ is small, and the loss is small, so model training places more emphasis on hard samples. In this embodiment γ is taken to be 2.
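Numerically, the focal loss behaves as described: hard samples (small p_t) incur a much larger loss than easy samples (large p_t). A minimal single-sample sketch using the patent's values α = 1 (positive), α = 0.25 (negative), and γ = 2 (the function name and scalar form are ours):

```python
import numpy as np

def focal_loss(x, y, alpha_pos=1.0, alpha_neg=0.25, gamma=2.0):
    """loss = -alpha * (1 - p_t)^gamma * log(p_t),
    with p_t = x for label y = 1 and p_t = 1 - x for y = -1."""
    p_t = x if y == 1 else 1.0 - x
    alpha = alpha_pos if y == 1 else alpha_neg
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)
```

For example, a poorly classified positive sample (x = 0.1) is penalized far more heavily than a well classified one (x = 0.9), which is what focuses training on hard samples.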
As an alternative embodiment, the labeled data set of the invention is in Pascal VOC format.
Step S4: tracking the position of the unmanned aerial vehicle using the kernelized correlation filter KCF to obtain the second target position detection result corresponding to the k-th frame.
After KCF is initialized, the position frame containing the unmanned aerial vehicle is enlarged 2.5 times, and different circulant matrices are generated for the 2.5x enlarged region, so that the unmanned aerial vehicle appears at different positions within the enlargement frame. When tracking the k-th frame, a sample is taken at the position corresponding to the enlargement frame of the (k-1)-th frame and correlated with the circulant matrices to obtain the maximum response point; this point is the second target position detection result corresponding to the k-th frame.
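The circulant-matrix correlation at the heart of KCF can be illustrated with a frequency-domain cross-correlation, which evaluates the template against every cyclic shift of the sample in a single FFT product. This is a simplified linear-kernel sketch, not the patent's full KCF (which trains a kernelized ridge regression); the function name is ours:

```python
import numpy as np

def max_response_shift(template, sample):
    """Circular cross-correlation via FFT; the argmax of the response
    map is the cyclic shift that best realigns template with sample."""
    resp = np.real(np.fft.ifft2(np.conj(np.fft.fft2(template)) * np.fft.fft2(sample)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

The argmax of the response map plays the role of the maximum response point used as the second target position detection result.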
Step S5: determining the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame specifically includes:
step S51: judging whether the first target position detection result contains a detection; if it does, performing step S52; if it does not, taking the second target position detection result corresponding to the k-th frame as the candidate unmanned aerial vehicle target position;
step S52: determining an intersection-over-union (IoU) ratio from the first target position detection result and the second target position detection result;
step S53: judging whether the IoU ratio is greater than or equal to a set value; if so, selecting the detection result with the larger area among the first and second target position detection results as the candidate unmanned aerial vehicle target position; otherwise, taking the first target position detection result as the candidate unmanned aerial vehicle target position;
step S54: initializing KCF according to the candidate unmanned aerial vehicle target position, which specifically includes:
at the start, initializing KCF with the first target detection result corresponding to the first frame; subsequently, initializing KCF with the candidate unmanned aerial vehicle target position.
Step S55: judging whether k is greater than or equal to the total number of frames; if so, outputting the candidate unmanned aerial vehicle target position as the unmanned aerial vehicle position; if k is less than the total number of frames, setting k = k + 1 and returning to step S2.
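Putting steps S51–S55 together, the per-frame fusion rule can be sketched as follows (a minimal illustration; the 0.5 threshold, the (x1, y1, x2, y2) box format, and the helper names are our assumptions — the patent only specifies "a set value"):

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def fuse(det_box, track_box, iou_thresh=0.5):
    """Choose the candidate UAV position from the detector's box and
    the KCF tracker's box for the current frame (det_box may be None)."""
    if det_box is None:                        # S51: no detection -> trust the tracker
        return track_box
    if iou(det_box, track_box) >= iou_thresh:  # S53: boxes agree -> larger area wins
        def area(b):
            return (b[2] - b[0]) * (b[3] - b[1])
        return max((det_box, track_box), key=area)
    return det_box                             # S53: boxes disagree -> trust the detector
```

The chosen candidate then re-initializes KCF (step S54) before the loop advances to the next frame.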
Fig. 3 is a structural diagram of an unmanned aerial vehicle target detection system according to an embodiment of the present invention. As shown in fig. 3, the present invention further provides an unmanned aerial vehicle target detection system, the system comprising:
an acquisition module 1 for collecting unmanned aerial vehicle video in real time;
a k-th frame determining module 2 for determining the k-th frame from the unmanned aerial vehicle video, wherein k is a positive integer greater than or equal to 1;
a first target position detection result determining module 3 for inputting the k-th frame into a detection model for unmanned aerial vehicle position detection to obtain a first target position detection result corresponding to the k-th frame;
a second target position detection result determining module 4 for tracking the unmanned aerial vehicle position with KCF to obtain a second target position detection result corresponding to the k-th frame;
and an unmanned aerial vehicle target position determining module 5 for determining the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame.
The various modules are discussed below:
the unmanned aerial vehicle target position determining module 5 specifically includes:
the first judging unit is used for judging whether the detection result of the first target position exists or not; if the first target position detection result has a detection result, executing an intersection ratio determining unit; if the first target position detection result does not have a detection result, the second target position detection result corresponding to the kth frame of picture is the target position of the unmanned aerial vehicle to be selected;
the intersection ratio determining unit is used for determining an intersection ratio according to the first target position detection result and the second target position detection result;
a second judgment unit for judging whether the intersection ratio is greater than or equal to a set value; if the intersection ratio is larger than or equal to a set value, selecting a detection result with the largest area in the first target position detection result and the second target position detection result as a target position of the unmanned aerial vehicle to be selected; if the intersection ratio is smaller than a set value, the first target position detection result is the target position of the unmanned aerial vehicle to be selected;
the initialization unit is used for initializing KCF according to the target position of the unmanned aerial vehicle to be selected;
a third judging unit for judging whether k is greater than or equal to the total frame number; if k is greater than or equal to the total frame number, outputting the target position of the unmanned aerial vehicle to be selected as the position of the unmanned aerial vehicle; if k is less than the total frame number, let k be k +1, and return to "kth frame picture determination module".
The first target position detection result determining module 3 specifically includes:
a data set labeling unit for labeling the unmanned aerial vehicle in multiple frames to obtain a labeled data set;
an assignment unit for taking a set number of samples from the labeled data set as the training set and the remainder as the test set;
a training unit for inputting the training set into a RetinaNet network for training to obtain the detection model;
and a first target position detection result determining unit for inputting the k-th frame into the detection model for unmanned aerial vehicle position detection to obtain the first target position detection result corresponding to the k-th frame.
As an alternative embodiment, the labeled data set of the invention is in Pascal VOC format.
The method introduces the correct target to initialize the KCF tracker in time, thereby improving tracking accuracy and avoiding the problem that an erroneous track produced at some moment renders all subsequent tracking inaccurate.
In addition, the invention combines detection and tracking to determine the candidate target position of the unmanned aerial vehicle and initializes KCF with that candidate position, so that the unmanned aerial vehicle can still be tracked even if the detected target is lost. This improves tracking reliability and greatly reduces the impact of subsequent tracking errors caused by initialization errors.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.
Claims (8)
1. An unmanned aerial vehicle target detection method, the method comprising:
step S1: collecting videos of the unmanned aerial vehicle in real time;
step S2: determining a kth frame picture according to the unmanned aerial vehicle video, wherein k is a positive integer greater than or equal to 1;
step S3: inputting the k-th frame into a detection model to perform unmanned aerial vehicle position detection, and obtaining a first target position detection result corresponding to the k-th frame;
step S4: carrying out unmanned aerial vehicle position tracking by using a kernelized correlation filter KCF to obtain a second target position detection result corresponding to the k-th frame;
step S5: determining the target position of the unmanned aerial vehicle according to the first and second target position detection results corresponding to the k-th frame.
2. The target detection method according to claim 1, wherein the determining the target position of the unmanned aerial vehicle according to the first target position detection result and the second target position detection result corresponding to the k-th frame picture specifically comprises:
step S51: judging whether the first target position detection result contains a detection result; if the first target position detection result contains a detection result, performing step S52; if the first target position detection result contains no detection result, taking the second target position detection result corresponding to the k-th frame picture as the candidate target position of the unmanned aerial vehicle;
step S52: determining an intersection ratio according to the first target position detection result and the second target position detection result;
step S53: judging whether the intersection ratio is greater than or equal to a set value; if the intersection ratio is greater than or equal to the set value, selecting the detection result with the larger area between the first target position detection result and the second target position detection result as the candidate target position of the unmanned aerial vehicle; if the intersection ratio is smaller than the set value, taking the first target position detection result as the candidate target position of the unmanned aerial vehicle;
step S54: initializing the KCF tracker according to the candidate target position of the unmanned aerial vehicle;
step S55: judging whether k is greater than or equal to the total number of frames; if k is greater than or equal to the total number of frames, outputting the candidate target position as the position of the unmanned aerial vehicle; if k is less than the total number of frames, letting k = k + 1 and returning to step S2.
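The decision logic of steps S51 through S53 can be sketched as follows. This is a minimal illustration, not the patented implementation: the 0.5 threshold stands in for the claim's unspecified "set value", boxes are assumed to be (x1, y1, x2, y2) tuples, and `None` denotes an empty detection result.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(first, second, threshold=0.5):
    """Steps S51-S53: choose the candidate drone position from the
    detector result (first) and the KCF tracker result (second)."""
    if first is None:                        # S51: no detection -> trust the tracker
        return second
    if iou(first, second) >= threshold:      # S53: boxes overlap -> keep the larger one
        return max(first, second,
                   key=lambda r: (r[2] - r[0]) * (r[3] - r[1]))
    return first                             # boxes disjoint -> trust the detector
```

The selected box would then be used to re-initialize the KCF tracker (step S54) before advancing to the next frame.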
3. The target detection method according to claim 1, wherein the step of inputting the k-th frame picture into a detection model for unmanned aerial vehicle position detection to obtain a first target position detection result corresponding to the k-th frame picture specifically comprises:
labeling the unmanned aerial vehicle in multiple frames of pictures to obtain a labeled data set;
taking a set number of samples from the labeled data set as a training set, and taking the remaining samples as a test set;
inputting the training set into a RetinaNet network for training to obtain the detection model;
inputting the k-th frame picture into the detection model to perform unmanned aerial vehicle position detection, and obtaining the first target position detection result corresponding to the k-th frame picture.
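The train/test split described above can be sketched as a short helper; the 0.8 ratio and the fixed shuffle seed are assumptions, since the claim only specifies "a set number". The returned training set would then be fed to a RetinaNet implementation (not shown).

```python
import random

def split_dataset(labeled_samples, train_fraction=0.8, seed=0):
    """Shuffle the labeled frames and split them into a training set
    and a test set (the 0.8 fraction is an assumed 'set number')."""
    items = list(labeled_samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]
```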
4. The method of claim 3, wherein the labeled data set is in VOC format.
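A minimal example of what a VOC-format annotation for one labeled frame might look like, together with a parser. The file name, the class name `drone`, and the box coordinates are hypothetical; real VOC files carry additional fields (folder, pose, truncated, difficult) omitted here.

```python
import xml.etree.ElementTree as ET

# Hypothetical VOC-style annotation for a single labeled video frame.
VOC_SAMPLE = """<annotation>
  <filename>frame_0001.jpg</filename>
  <size><width>1920</width><height>1080</height><depth>3</depth></size>
  <object>
    <name>drone</name>
    <bndbox><xmin>830</xmin><ymin>412</ymin><xmax>902</xmax><ymax>468</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return (class name, (xmin, ymin, xmax, ymax)) for each labeled object."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        box = tuple(int(bb.find(t).text)
                    for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((obj.find("name").text, box))
    return boxes
```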
5. An unmanned aerial vehicle target detection system, the system comprising:
an acquisition module, configured to acquire video of the unmanned aerial vehicle in real time;
a k-th frame picture determining module, configured to determine the k-th frame picture from the unmanned aerial vehicle video, wherein k is a positive integer greater than or equal to 1;
a first target position detection result determining module, configured to input the k-th frame picture into a detection model to perform unmanned aerial vehicle position detection and obtain a first target position detection result corresponding to the k-th frame picture;
a second target position detection result determining module, configured to track the position of the unmanned aerial vehicle by using KCF and obtain a second target position detection result corresponding to the k-th frame picture;
an unmanned aerial vehicle target position determining module, configured to determine the target position of the unmanned aerial vehicle according to the first target position detection result and the second target position detection result corresponding to the k-th frame picture.
6. The object detection system of claim 5, wherein the drone object position determination module specifically includes:
a first judging unit, configured to judge whether the first target position detection result contains a detection result; if the first target position detection result contains a detection result, executing the intersection ratio determining unit; if the first target position detection result contains no detection result, taking the second target position detection result corresponding to the k-th frame picture as the candidate target position of the unmanned aerial vehicle;
an intersection ratio determining unit, configured to determine an intersection ratio according to the first target position detection result and the second target position detection result;
a second judging unit, configured to judge whether the intersection ratio is greater than or equal to a set value; if the intersection ratio is greater than or equal to the set value, selecting the detection result with the larger area between the first target position detection result and the second target position detection result as the candidate target position of the unmanned aerial vehicle; if the intersection ratio is smaller than the set value, taking the first target position detection result as the candidate target position of the unmanned aerial vehicle;
an initialization unit, configured to initialize the KCF tracker according to the candidate target position of the unmanned aerial vehicle;
a third judging unit, configured to judge whether k is greater than or equal to the total number of frames; if k is greater than or equal to the total number of frames, outputting the candidate target position as the position of the unmanned aerial vehicle; if k is less than the total number of frames, letting k = k + 1 and returning to the k-th frame picture determining module.
7. The object detection system of claim 5, wherein the first object position detection result determining module specifically includes:
a data set labeling unit, configured to label the unmanned aerial vehicle in multiple frames of pictures to obtain a labeled data set;
an assignment unit, configured to take a set number of samples from the labeled data set as a training set;
a training unit, configured to input the training set into a RetinaNet network for training to obtain the detection model;
a first target position detection result determining unit, configured to input the k-th frame picture into the detection model to perform unmanned aerial vehicle position detection and obtain the first target position detection result corresponding to the k-th frame picture.
8. The object detection system of claim 7, wherein the labeled data set is in VOC format.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911010556.8A CN110751106B (en) | 2019-10-23 | 2019-10-23 | Unmanned aerial vehicle target detection method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110751106A true CN110751106A (en) | 2020-02-04 |
CN110751106B CN110751106B (en) | 2022-04-26 |
Family
ID=69279545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911010556.8A Active CN110751106B (en) | 2019-10-23 | 2019-10-23 | Unmanned aerial vehicle target detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110751106B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106960446A (en) * | 2017-04-01 | 2017-07-18 | 广东华中科技大学工业技术研究院 | A kind of waterborne target detecting and tracking integral method applied towards unmanned boat |
CN108320306A (en) * | 2018-03-06 | 2018-07-24 | 河北新途科技有限公司 | Merge the video target tracking method of TLD and KCF |
Non-Patent Citations (3)
Title |
---|
TSUNG-YI LIN ET AL.: "Focal Loss for Dense Object Detection", arXiv *
ZHOU JIQIANG: "Research on Multi-Class Object Detection and Multi-Object Tracking Algorithms in Surveillance Video", China Master's Theses Full-Text Database, Information Science and Technology *
GUO YAXIANG: "Research and Implementation of Key Technologies of a Multi-Object Tracking System", China Master's Theses Full-Text Database, Information Science and Technology *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541608A (en) * | 2020-02-19 | 2021-03-23 | 深圳中科保泰科技有限公司 | Unmanned aerial vehicle takeoff point prediction method and device |
CN112541608B (en) * | 2020-02-19 | 2023-10-20 | 深圳中科保泰空天技术有限公司 | Unmanned aerial vehicle departure point prediction method and device |
CN111583339A (en) * | 2020-04-27 | 2020-08-25 | 中国人民解放军军事科学院国防科技创新研究院 | Method, device, electronic equipment and medium for acquiring target position |
CN111881982A (en) * | 2020-07-30 | 2020-11-03 | 北京环境特性研究所 | Unmanned aerial vehicle target identification method |
CN112597795A (en) * | 2020-10-28 | 2021-04-02 | 丰颂教育科技(江苏)有限公司 | Visual tracking and positioning method for motion-blurred object in real-time video stream |
CN113723311A (en) * | 2021-08-31 | 2021-11-30 | 浙江大华技术股份有限公司 | Target tracking method |
Also Published As
Publication number | Publication date |
---|---|
CN110751106B (en) | 2022-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110751106B (en) | Unmanned aerial vehicle target detection method and system | |
Kim et al. | High-speed drone detection based on yolo-v8 | |
CN102982336B (en) | Model of cognition generates method and system | |
CN109063549B (en) | High-resolution aerial video moving target detection method based on deep neural network | |
CN109919007B (en) | Method for generating infrared image annotation information | |
KR102002812B1 (en) | Image Analysis Method and Server Apparatus for Detecting Object | |
CN113096159B (en) | Target detection and track tracking method, model and electronic equipment thereof | |
US10319095B2 (en) | Method, an apparatus and a computer program product for video object segmentation | |
CN111985365A (en) | Straw burning monitoring method and system based on target detection technology | |
CN103679186A (en) | Target detecting and tracking method and device | |
CN112967341A (en) | Indoor visual positioning method, system, equipment and storage medium based on live-action image | |
CN111856445B (en) | Target detection method, device, equipment and system | |
CN111784737A (en) | Automatic target tracking method and system based on unmanned aerial vehicle platform | |
CN111160365A (en) | Unmanned aerial vehicle target tracking method based on combination of detector and tracker | |
WO2022205329A1 (en) | Object detection method, object detection apparatus, and object detection system | |
CN111222397A (en) | Drawing book identification method and device and robot | |
CN110310305A (en) | A kind of method for tracking target and device based on BSSD detection and Kalman filtering | |
CN111260687A (en) | Aerial video target tracking method based on semantic perception network and related filtering | |
CN116740334B (en) | Unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO | |
de Sa Lowande et al. | Analysis of post-disaster damage detection using aerial footage from uwf campus after hurricane sally | |
CN116310482A (en) | Target detection and recognition method and system based on domestic chip multi-target real-time tracking | |
CN111160156B (en) | Method and device for identifying moving object | |
CN114241306A (en) | Infrared unmanned aerial vehicle target tracking method based on twin neural network | |
CN112015231A (en) | Method and system for processing surveillance video partition | |
CN113283279B (en) | Multi-target tracking method and device in video based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||