CN114973033B - Unmanned aerial vehicle automatic detection target and tracking method - Google Patents

Unmanned aerial vehicle automatic detection target and tracking method

Info

Publication number
CN114973033B
CN114973033B (application CN202210597472.4A)
Authority
CN
China
Prior art keywords
image
target
stage
feature
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210597472.4A
Other languages
Chinese (zh)
Other versions
CN114973033A (en)
Inventor
刘明华 (Liu Minghua)
邵洪波 (Shao Hongbo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University of Science and Technology
Original Assignee
Qingdao University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University of Science and Technology filed Critical Qingdao University of Science and Technology
Priority to CN202210597472.4A priority Critical patent/CN114973033B/en
Publication of CN114973033A publication Critical patent/CN114973033A/en
Application granted granted Critical
Publication of CN114973033B publication Critical patent/CN114973033B/en
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic target detection and tracking method for an unmanned aerial vehicle, which relates to the technical field of automatic detection and tracking of unmanned aerial vehicles.

Description

Unmanned aerial vehicle automatic detection target and tracking method
Technical Field
The invention relates to the technical field of automatic detection and tracking of unmanned aerial vehicles, in particular to an automatic detection target and tracking method of an unmanned aerial vehicle.
Background
Detection and tracking of targets is an important component of image processing technology and comprises two subtasks: target detection and target tracking. Target detection is the process of detecting and classifying target objects in an image. Target tracking takes a certain frame of a video sequence as its starting point, obtains the initial target by manual selection or by a detector, and then continuously acquires the motion state of the target in subsequent frames.
Used alone, a detection method can reliably obtain the positions of all targets and label their categories, but its processing speed is slow. Used alone, a tracking method requires the initial position of the tracked target to be given manually and cannot handle newly appearing targets; although fast, it cannot cope with real scenes. A method that combines detection and tracking is therefore needed, so that the advantages of both can be brought to bear on complex tasks.
Existing detection and tracking techniques only track the size of a detection frame built around a basic geometric shape and do not account for the rotation of a polyhedral target while it moves. Because the unmanned aerial vehicle's viewing angle differs between consecutive frames, the size of the detection frame changes with the viewing angle even at the same position, which easily disturbs the unmanned aerial vehicle's tracking judgment.
Disclosure of Invention
To address these defects of the prior art, the invention provides an unmanned aerial vehicle automatic detection target and tracking method, which comprises the following steps:
step 1: establishing a target three-dimensional model, acquiring a feature vector of an image feature information total set of the target three-dimensional model, and classifying and naming the feature vector;
step 2: after the unmanned aerial vehicle acquires the nth (n is more than or equal to 2) frame image of the target, determining corresponding image characteristic information in the image where the target is positioned, and obtaining a rear detection frame area where the target is positioned in the nth frame image by adopting a Two-Stage algorithm; a Two-Stage algorithm is adopted to obtain a front detection frame area where the target in the n-1 frame image is located;
step 3: and comparing the image characteristic information of the rear detection frame area with that of the front detection frame area, acquiring the moving speed of the target in space, and adjusting the moving speed of the unmanned aerial vehicle to keep the moving speed consistent with the target.
Preferably, after the target stereoscopic model is established, target multi-angle image data is acquired based on the stereoscopic model, and an image feature area obtained by detecting each angle image data is recorded to form an image feature information aggregate.
Preferably, in step 1, the image feature information aggregate acquisition process specifically includes the following steps:
step 11: acquiring a target image data acquisition angle by adopting a loop subdivision algorithm, acquiring a plurality of acquired images according to the acquisition angle to form an image total set, and marking the image total set according to the acquisition angle;
step 12: normalize the image total set using the average pixel height as the unit to obtain the relative height; normalize the image total set using the average pixel width as the unit to obtain the relative width; and normalize the mean pixel value of each image into a regular range to obtain the relative duty ratio;
step 13: input the processed image total set into a Transformer network for feature extraction and information understanding, finally obtaining feature vectors;
step 14: pass the feature vectors through a fully connected layer to obtain the final classification dimension and classification result of the image feature information, and record these as the image feature information total set.
Preferably, in step 3, when the image feature information of the rear detection frame area is compared with that of the front detection frame area to obtain the speed at which the target moves in space, the method specifically includes the following steps:
step 31: the nth frame of image is marked as a later-stage image to be detected, the n-1th frame of image is marked as a former-stage image to be detected, the later-stage image to be detected is input into a characteristic point detection network to obtain a later-stage characteristic result to be detected, the former-stage image to be detected is input into the characteristic point detection network to obtain a former-stage characteristic result to be detected, and whether the target is lost is judged;
step 32: and under the condition that the target is not lost, comparing the characteristic result to be detected at the later stage with the characteristic result to be detected at the earlier stage to obtain the target moving speed.
Preferably, in step 31, the feature point detection network is composed of a feature extraction module, a prior region generation module and an attention mechanism module. The feature extraction module extracts edge features, texture features and semantic features of the acquired image according to the image feature total set; the prior region generation module generates prior frames of fixed size on the acquired image, with the prior frame regions corresponding to a plurality of regions of a single acquired image, which reduces the difficulty of extracting the image feature regions; the attention mechanism module makes the feature point detection network pay more attention to the image feature areas.
Preferably, in step 32, when the target moving speed is obtained by comparing the later-stage characteristic result to be detected with the former-stage characteristic result to be detected, the method specifically includes the following steps:
step 321: obtaining a rear-stage acquisition angle of an nth frame image and a front-stage acquisition angle of an n-1 th frame image according to the rear-stage feature result to be detected and the front-stage feature result to be detected;
step 322: acquiring a front-stage acquisition image corresponding to the front-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a front-stage standard detection frame, and calculating the front-stage relative duty ratio of the front-stage detection frame and the front-stage standard detection frame;
step 323: acquiring a rear-stage acquisition image corresponding to the rear-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a rear-stage standard detection frame, and calculating the rear-stage relative duty ratio of the rear-stage detection frame and the rear-stage standard detection frame;
step 324: and obtaining the moving speed of the target through the difference value of the front-stage relative duty ratio and the rear-stage relative duty ratio.
Preferably, in step 31, when determining whether the target is lost, the method specifically includes the following steps:
step 311: accumulating the confidence coefficient of the target in the first frame image to the nth frame image to obtain a parameter confidence coefficient;
step 312: and judging whether the parameter confidence coefficient is smaller than a first preset value, if so, determining that the target is lost, and if not, determining that the target is not lost.
Preferably, in step 2, when a Two-Stage algorithm is adopted to obtain a detection frame area where a target is located in an image, feature extraction is performed by using an HrNet18 network as a backbone network thereof, so as to screen out a target image with quality less than expected.
Preferably, when the target image with the quality less than expected is screened out through the HrNet18 network, the method specifically comprises the following steps:
s21: performing data preprocessing, including data size change and normalization of image data, wherein the normalization of the image data includes image rotation and image overturn;
s22: inputting the enhanced image data into an HrNet18 network, extracting features to finally obtain feature vectors, and passing the feature vectors through a full connection layer to obtain the dimension of final classification;
s23: marking the photo, dividing the photo into a normal quality image and an abnormal quality image, and deleting the abnormal quality image.
Preferably, in step 3, adjusting the moving speed of the unmanned aerial vehicle to keep it consistent with the target specifically means aligning the center of the shooting field of view of the unmanned aerial vehicle's camera with the geometric center of the rear-stage detection frame and adjusting the unmanned aerial vehicle's moving speed accordingly.
The beneficial effects of the invention are as follows:
according to the invention, images of the detection target in multi-angle acquisition can be obtained according to the three-dimensional modeling and loop algorithm, and then the characteristic extraction processing is carried out on the acquired images to obtain image characteristic data under different acquisition angles, so that the real-time acquired data are compared and calculated to obtain the movement speed of the target in the image of the adjacent frame, and the shooting angle and the movement speed of the unmanned aerial vehicle are regulated according to the movement speed of the target, thereby greatly improving the detection tracking effect of the unmanned aerial vehicle on the target.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
Fig. 1 is a flowchart of an automatic target detection and tracking method for an unmanned aerial vehicle provided by the invention;
fig. 2 is a flow chart of an image feature information collection acquisition process of an unmanned aerial vehicle automatic detection target and tracking method.
Detailed Description
Embodiments of the technical scheme of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and thus are merely examples, and are not intended to limit the scope of the present invention.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention pertains.
As shown in fig. 1, an automatic target detection and tracking method for an unmanned aerial vehicle includes the following steps:
step 1: establishing a target three-dimensional model, acquiring a feature vector of an image feature information total set of the target three-dimensional model, and classifying and naming the feature vector;
step 2: after an unmanned aerial vehicle acquires an nth (n is more than or equal to 2) frame image of a target, determining corresponding image characteristic information in the image where the target is positioned, and obtaining a rear detection frame area where the target is positioned in the nth frame image by adopting a Two-Stage algorithm; a Two-Stage algorithm is adopted to obtain a front detection frame area where a target in an n-1 frame image is located;
step 3: and comparing the image characteristic information of the rear detection frame area with that of the front detection frame area, acquiring the moving speed of the target in space, and adjusting the moving speed of the unmanned aerial vehicle to keep the moving speed consistent with the target.
More specifically, after a target three-dimensional model is established, target multi-angle image data is acquired based on the three-dimensional model, and an image characteristic area obtained by detecting each angle image data is recorded to form an image characteristic information total set.
The stereoscopic model may be built here using C4D modeling software.
As shown in fig. 2, more specifically, the image feature information aggregate acquisition process specifically includes the following steps:
step 11: acquiring a target image data acquisition angle by adopting a loop subdivision algorithm, acquiring a plurality of acquired images according to the acquisition angle to form an image total set, and marking the image total set according to the acquisition angle;
step 12: normalize the image total set using the average pixel height as the unit to obtain the relative height; normalize the image total set using the average pixel width as the unit to obtain the relative width; and normalize the mean pixel value of each image into a regular range to obtain the relative duty ratio (see the sketch after this list);
step 13: input the processed image total set into a Transformer network for feature extraction and information understanding, finally obtaining feature vectors;
step 14: pass the feature vectors through a fully connected layer to obtain the final classification dimension and classification result of the image feature information, and record these as the image feature information total set.
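As a concrete illustration of the step 12 normalization (the patent does not fix the exact units, so the set-wide average pixel height and width and a [0, 1] pixel-value range are assumptions; all names below are illustrative):

```python
import numpy as np

def normalize_image_set(images, target_sizes):
    """Relative height, width and duty ratio for each acquired image.

    images: list of H x W grayscale arrays with pixel values 0-255.
    target_sizes: matching (height, width), in pixels, of the detected
    target region in each image.
    """
    avg_h = np.mean([h for h, _ in target_sizes])  # unit: avg pixel height
    avg_w = np.mean([w for _, w in target_sizes])  # unit: avg pixel width
    records = []
    for img, (h, w) in zip(images, target_sizes):
        records.append({
            "relative_height": h / avg_h,
            "relative_width": w / avg_w,
            # mean pixel value normalized into the regular range [0, 1]
            "relative_duty_ratio": float(np.mean(img)) / 255.0,
        })
    return records
```

The resulting records, together with the images themselves, would then feed the Transformer network of step 13.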
The basic idea of Loop subdivision is to split each triangle into four triangles and adjust the positions of the new and old vertices so that the model surface becomes smoother. When the acquisition angles of the target image data are obtained with the Loop subdivision algorithm, the geometric center of the three-dimensional model is first obtained from the target's three-dimensional model, and a regular icosahedron is built with this geometric center as the origin. The regular icosahedron is then subdivided several times with the Loop subdivision algorithm to obtain a set of vertices; at this point the figure formed by connecting the vertices approximates a sphere. Taking each individual vertex as one acquisition angle yields a set of acquired images that together form the image total set, as the sketch below illustrates.
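A minimal Python sketch of this acquisition-angle generation, under stated assumptions: classic Loop subdivision also smooths the old vertex positions, whereas for sampling viewpoints on a sphere the common shortcut used here is 1-to-4 midpoint subdivision with re-projection onto the unit sphere, which produces the same near-uniform vertex layout; the two subdivision rounds and the use of scipy to recover the icosahedron's faces are illustrative choices, not prescribed by the patent.

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def icosahedron_vertices():
    # the 12 vertices of a regular icosahedron, projected onto the unit sphere
    phi = (1 + 5 ** 0.5) / 2
    v = np.array([p for a, b in itertools.product((-1.0, 1.0), (-phi, phi))
                  for p in ((0.0, a, b), (a, b, 0.0), (b, 0.0, a))])
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def subdivide(verts, faces):
    """One 1-to-4 subdivision round; new midpoints are pushed to the sphere."""
    verts = [tuple(v) for v in verts]
    midpoint_index = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_index:
            m = (np.asarray(verts[i]) + np.asarray(verts[j])) / 2.0
            verts.append(tuple(m / np.linalg.norm(m)))
            midpoint_index[key] = len(verts) - 1
        return midpoint_index[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.asarray(verts), new_faces

verts = icosahedron_vertices()
faces = [tuple(f) for f in ConvexHull(verts).simplices]  # the 20 triangles
for _ in range(2):                 # two rounds: 12 -> 42 -> 162 vertices
    verts, faces = subdivide(verts, faces)
# each unit vector is one acquisition angle, looking at the model's center
print(len(verts), "acquisition angles")
```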
More specifically, in the step 3, the image feature information of the rear detection frame area is compared with the image feature information of the front detection frame area, and when the speed of the target moving in the space is obtained, the method specifically includes the following steps:
step 31: the nth frame of image is marked as a later-stage image to be detected, the nth-1 frame of image is marked as a former-stage image to be detected, the later-stage image to be detected is input into a characteristic point detection network to obtain a later-stage characteristic result to be detected, the former-stage image to be detected is input into the characteristic point detection network to obtain a former-stage characteristic result to be detected, and whether a target is lost is judged;
step 32: and under the condition that the target is not lost, comparing the characteristic result to be detected at the later stage with the characteristic result to be detected at the earlier stage to obtain the target moving speed.
More specifically, the feature point detection network is composed of a feature extraction module, a prior region generation module and an attention mechanism module. The feature extraction module extracts edge features, texture features and semantic features of the acquired image according to the image feature total set; the prior region generation module generates prior frames of fixed size on the acquired image, with the prior frame regions corresponding to a plurality of regions of a single acquired image, which reduces the difficulty of extracting the image feature regions; the attention mechanism module makes the feature point detection network pay more attention to the image feature areas.
The feature extraction module comprises a convolution layer and a pooling layer, and the main function of the module is to extract the features of the acquired image according to the image feature total set; the prior region generation module can enable the network to be transferred from the global detection feature points to the local detection, so that the difficulty in extracting the feature points from the whole image is reduced; by introducing the attention mechanism module, the network can put more resources into the area near the feature points and filter out some irrelevant information.
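The module layout just described might be wired together as in the rough PyTorch sketch below. This is not the patent's actual network: the layer widths, the number of prior frames per cell, and the SE-style channel attention are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FeaturePointNet(nn.Module):
    """Sketch of the three-module feature point detection network."""

    def __init__(self, num_priors=9):
        super().__init__()
        # feature extraction module: convolution and pooling layers
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # attention mechanism module: SE-style channel attention, so the
        # network spends more of its capacity on the image feature regions
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(64, 64, 1), nn.Sigmoid(),
        )
        # prior region generation module: one score per fixed-size prior
        # frame per cell, shifting detection from global to local regions
        self.prior_head = nn.Conv2d(64, num_priors, 1)

    def forward(self, x):
        f = self.features(x)
        f = f * self.attention(f)    # re-weight channels by attention
        return self.prior_head(f)    # score each prior frame per cell

scores = FeaturePointNet()(torch.randn(1, 3, 224, 224))
print(scores.shape)                  # torch.Size([1, 9, 56, 56])
```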
More specifically, in step 32, when the target moving speed is obtained by comparing the later-stage characteristic result to be detected with the former-stage characteristic result to be detected, the method specifically includes the following steps:
step 321: obtaining a rear-stage acquisition angle of an nth frame image and a front-stage acquisition angle of an n-1 th frame image according to the rear-stage feature result to be detected and the front-stage feature result to be detected;
step 322: acquiring a front-stage acquisition image corresponding to the front-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a front-stage standard detection frame, and calculating the front-stage relative duty ratio of the front-stage detection frame and the front-stage standard detection frame;
step 323: acquiring a rear-stage acquisition image corresponding to the rear-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a rear-stage standard detection frame, and calculating the rear-stage relative duty ratio of the rear-stage detection frame and the rear-stage standard detection frame;
step 324: and obtaining the moving speed of the target through the difference value of the front-stage relative duty ratio and the rear-stage relative duty ratio.
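To make step 324 concrete: the patent leaves the conversion from a duty-ratio difference to a physical speed open, so the sketch below (all names illustrative) divides the difference by the frame interval and applies an assumed calibration factor that would come from the camera geometry.

```python
def target_moving_speed(front_ratio, rear_ratio, frame_interval_s, scale=1.0):
    """Moving speed of the target from the relative duty ratios of the
    (n-1)th and nth frames (step 324).

    front_ratio, rear_ratio: area of the detected frame divided by the
    area of the matching standard detection frame for frames n-1 and n.
    scale: assumed calibration factor (from camera geometry) mapping a
    duty-ratio change per second to a physical speed; 1.0 is a placeholder.
    """
    return scale * (rear_ratio - front_ratio) / frame_interval_s

# e.g. 30 fps video, duty ratio grew from 0.42 to 0.45 between frames:
print(target_moving_speed(0.42, 0.45, 1 / 30))  # positive: target looms larger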
More specifically, in the step 31, when determining whether the target is lost, the method specifically includes the following steps:
step 311: accumulating the confidence coefficient of the target in the first frame image to the nth frame image to obtain a parameter confidence coefficient;
step 312: and judging whether the parameter confidence coefficient is smaller than a first preset value, if so, determining that the target is lost, and if not, determining that the target is not lost.
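Steps 311-312 translate almost directly into code. The first preset value is left as a parameter because the patent does not fix it, and whether "accumulating" means a running sum or a running mean is not specified; a plain sum is assumed below.

```python
def is_target_lost(confidences, first_preset):
    """Judge target loss per steps 311-312.

    confidences: per-frame detection confidence of the target for
    frames 1..n. Their accumulated value is the parameter confidence;
    the target counts as lost when it falls below the first preset value.
    """
    parameter_confidence = sum(confidences)      # step 311: accumulate
    return parameter_confidence < first_preset   # step 312: compare

# three confident detections against an assumed preset of 1.5:
print(is_target_lost([0.9, 0.8, 0.85], first_preset=1.5))  # False, not lost
```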
More specifically, in step 2, when a Two-Stage algorithm is adopted to obtain a detection frame area where a target is located in an image, feature extraction is performed by using an HrNet18 network as a backbone network thereof, so as to screen out a target image with quality less than expected.
More specifically, when the target image with the quality less than expected is screened out through the HrNet18 network, the method specifically comprises the following steps:
s21: performing data preprocessing, including data size change and normalization of image data, wherein the normalization of the image data includes image rotation and image overturn;
s22: inputting the enhanced image data into an HrNet18 network, extracting features to finally obtain feature vectors, and passing the feature vectors through a full connection layer to obtain the dimension of final classification;
s23: marking the photo, dividing the photo into a normal quality image and an abnormal quality image, and deleting the abnormal quality image.
More specifically, in step 3, adjusting the moving speed of the unmanned aerial vehicle to keep it consistent with the target specifically means aligning the center of the shooting field of view of the unmanned aerial vehicle's camera with the geometric center of the rear-stage detection frame and adjusting the unmanned aerial vehicle's moving speed accordingly.
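As a rough illustration of this alignment-and-speed adjustment (the patent does not specify a control law, so the proportional gains and the command interface below are assumptions; a real UAV would be driven through its flight-control SDK):

```python
def tracking_command(frame_w, frame_h, rear_box, uav_speed, target_speed,
                     k_yaw=0.002, k_pitch=0.002, k_speed=0.5):
    """Return (yaw_rate, pitch_rate, speed_delta) steering commands that
    center the camera on the rear-stage detection frame and match speed."""
    x, y, w, h = rear_box                  # rear-stage detection frame
    cx, cy = x + w / 2, y + h / 2          # its geometric center
    err_x = cx - frame_w / 2               # pixel error to the image center
    err_y = cy - frame_h / 2
    # proportional control: align the shooting field of view, match speed
    return k_yaw * err_x, k_pitch * err_y, k_speed * (target_speed - uav_speed)

# frame 1920x1080, target box right of center, UAV slower than the target:
print(tracking_command(1920, 1080, (1000, 500, 200, 120), 4.0, 5.0))
```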
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention, and are intended to be included within the scope of the appended claims and description.

Claims (4)

1. The unmanned aerial vehicle automatic detection target and tracking method is characterized by comprising the following steps:
step 1: establishing a target three-dimensional model, acquiring feature vectors of an image feature information total set of the target three-dimensional model, classifying and naming, acquiring target multi-angle image data based on the three-dimensional model after establishing the target three-dimensional model, and recording an image feature area obtained by detecting each angle image data to form the image feature information total set;
step 2: after an unmanned aerial vehicle acquires an nth (n is more than or equal to 2) frame image of a target, determining corresponding image characteristic information in the image where the target is positioned, and obtaining a rear detection frame area where the target is positioned in the nth frame image by adopting a Two-Stage algorithm; a Two-Stage algorithm is adopted to obtain a front detection frame area where a target in an n-1 frame image is located;
step 3: comparing the image characteristic information of the rear detection frame area with that of the front detection frame area, obtaining the moving speed of the target in space, and adjusting the moving speed of the unmanned aerial vehicle to keep the moving speed consistent with the target;
in the step 3, the image feature information of the rear detection frame area is compared with the image feature information of the front detection frame area, and when the speed of the target moving in the space is obtained, the method specifically comprises the following steps:
step 31: the nth frame of image is marked as a later-stage image to be detected, the nth-1 frame of image is marked as a former-stage image to be detected, the later-stage image to be detected is input into a characteristic point detection network to obtain a later-stage characteristic result to be detected, the former-stage image to be detected is input into the characteristic point detection network to obtain a former-stage characteristic result to be detected, and whether a target is lost is judged;
step 32: under the condition that the target is not lost, comparing the characteristic result to be detected at the later stage with the characteristic result to be detected at the earlier stage to obtain the target moving speed;
in the step 32, the comparison is performed according to the feature result to be measured at the later stage and the feature result to be measured at the earlier stage, and when the target moving speed is obtained, the method specifically comprises the following steps:
step 321: obtaining a rear-stage acquisition angle of an nth frame image and a front-stage acquisition angle of an n-1 th frame image according to the rear-stage feature result to be detected and the front-stage feature result to be detected;
step 322: acquiring a front-stage acquisition image corresponding to the front-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a front-stage standard detection frame, and calculating the front-stage relative duty ratio of the front-stage detection frame and the front-stage standard detection frame; the method comprises the steps of acquiring a target image data acquisition angle by adopting a loop subdivision algorithm, and acquiring a plurality of image characteristic areas according to the acquisition angle to form an image aggregate, wherein the method specifically comprises the following steps of: when acquiring the acquisition angle of target image data by adopting a loop subdivision algorithm, firstly obtaining a geometric center of a three-dimensional model according to the three-dimensional model of the target, establishing a regular icosahedron by taking the geometric center as an origin, carrying out repeated subdivision on the regular icosahedron by adopting the loop subdivision algorithm to obtain a plurality of vertexes, and taking a single vertex as an acquisition angle, thereby obtaining a plurality of acquired images and forming an image total set;
step 323: acquiring a rear-stage acquisition image corresponding to the rear-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a rear-stage standard detection frame, and calculating the rear-stage relative duty ratio of the rear-stage detection frame and the rear-stage standard detection frame;
step 324: and obtaining the moving speed of the target through the difference value of the front-stage relative duty ratio and the rear-stage relative duty ratio.
2. The unmanned aerial vehicle automatic detection target and tracking method according to claim 1, wherein in step 31, the feature point detection network is composed of a feature extraction module, a prior region generation module and an attention mechanism module; the feature extraction module extracts edge features, texture features and semantic features of the acquired image according to the image feature total set; the prior region generation module generates prior frames of fixed size on the acquired image, with the prior frame regions corresponding to a plurality of regions of a single acquired image, reducing the difficulty of extracting the image feature regions; the attention mechanism module makes the feature point detection network pay more attention to the image feature areas.
3. The method for automatically detecting and tracking an object by an unmanned aerial vehicle according to claim 1, wherein in step 31, when determining whether the object is lost, the method specifically comprises the steps of:
step 311: accumulating the confidence coefficient of the target in the first frame image to the nth frame image to obtain a parameter confidence coefficient;
step 312: and judging whether the parameter confidence coefficient is smaller than a first preset value, if so, determining that the target is lost, and if not, determining that the target is not lost.
4. The method for automatically detecting and tracking the target by the unmanned aerial vehicle according to claim 1, wherein in the step 3, when the moving speed of the unmanned aerial vehicle is adjusted to be consistent with the target, specifically, the center position of the photographing visual field of the photographing device in the unmanned aerial vehicle is aligned with the geometric center of the rear-stage detection frame, and the moving speed of the unmanned aerial vehicle is adjusted.
CN202210597472.4A (priority and filing date 2022-05-30): Unmanned aerial vehicle automatic detection target and tracking method. Status: Active. Granted as CN114973033B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210597472.4A | 2022-05-30 | 2022-05-30 | Unmanned aerial vehicle automatic detection target and tracking method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210597472.4A | 2022-05-30 | 2022-05-30 | Unmanned aerial vehicle automatic detection target and tracking method

Publications (2)

Publication Number | Publication Date
CN114973033A (en) | 2022-08-30
CN114973033B (en) | 2024-03-01

Family

ID=82957483

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210597472.4A (Active; CN114973033B) | Unmanned aerial vehicle automatic detection target and tracking method | 2022-05-30 | 2022-05-30

Country Status (1)

Country Link
CN (1) CN114973033B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9443320B1 (en) * 2015-05-18 2016-09-13 Xerox Corporation Multi-object tracking with generic object proposals
CN107292297A (en) * 2017-08-09 2017-10-24 电子科技大学 A kind of video car flow quantity measuring method tracked based on deep learning and Duplication
CN109001484A (en) * 2018-04-18 2018-12-14 广州视源电子科技股份有限公司 The detection method and device of rotation speed
CN110401799A (en) * 2019-08-02 2019-11-01 睿魔智能科技(深圳)有限公司 A kind of auto-tracking shooting method and system
CN110458868A (en) * 2019-08-15 2019-11-15 湖北经济学院 Multiple target tracking based on SORT identifies display systems
CN111798482A (en) * 2020-06-16 2020-10-20 浙江大华技术股份有限公司 Target tracking method and device
CN112184760A (en) * 2020-10-13 2021-01-05 中国科学院自动化研究所 High-speed moving target detection tracking method based on dynamic vision sensor
CN112907634A (en) * 2021-03-18 2021-06-04 沈阳理工大学 Vehicle tracking method based on unmanned aerial vehicle
WO2021189507A1 (en) * 2020-03-24 2021-09-30 南京新一代人工智能研究院有限公司 Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10460600B2 (en) * 2016-01-11 2019-10-29 NetraDyne, Inc. Driver behavior monitoring
CN106228112B (en) * 2016-07-08 2019-10-29 深圳市优必选科技有限公司 Face datection tracking and robot head method for controlling rotation and robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fast Visual Object Tracking with Rotated Bounding Boxes; Bao Xin Chen et al.; arXiv:1907.03892v5; full text *
Research on algorithms for passenger flow detection in image processing; Wang Xiao et al.; Periodical of Ocean University of China (中国海洋大学学报); full text *
Social-force-optimized multi-pedestrian tracking from the first-person perspective; Yang Tingzhao et al.; Journal of Image and Graphics (中国图象图形学报); full text *

Also Published As

Publication number | Publication date
CN114973033A (en) | 2022-08-30

Similar Documents

Publication Publication Date Title
CN108985169B (en) Shop cross-door operation detection method based on deep learning target detection and dynamic background modeling
CN106845364B (en) Rapid automatic target detection method
CN111768388A (en) Product surface defect detection method and system based on positive sample reference
CN111222396A (en) All-weather multispectral pedestrian detection method
CN108804992B (en) Crowd counting method based on deep learning
Najiya et al. UAV video processing for traffic surveillence with enhanced vehicle detection
CN112734761B (en) Industrial product image boundary contour extraction method
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN113223044A (en) Infrared video target detection method combining feature aggregation and attention mechanism
CN115131325A (en) Breaker fault operation and maintenance monitoring method and system based on image recognition and analysis
CN116664565A (en) Hidden crack detection method and system for photovoltaic solar cell
CN115908354A (en) Photovoltaic panel defect detection method based on double-scale strategy and improved YOLOV5 network
CN115861799A (en) Light-weight air-to-ground target detection method based on attention gradient
Prakoso et al. Vehicle detection using background subtraction and clustering algorithms
CN112884795A (en) Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
CN114973033B (en) Unmanned aerial vehicle automatic detection target and tracking method
CN117036404A (en) Monocular thermal imaging simultaneous positioning and mapping method and system
CN113112479A (en) Progressive target detection method and device based on key block extraction
Cheng et al. A fast mosaic approach for remote sensing images
CN107564029B (en) Moving target detection method based on Gaussian extreme value filtering and group sparse RPCA
CN113780462B (en) Vehicle detection network establishment method based on unmanned aerial vehicle aerial image and application thereof
CN116189037A (en) Flame detection identification method and device and terminal equipment
Chuang et al. Moving object segmentation and tracking using active contour and color classification models

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant