CN112819856B - Target tracking method and self-positioning method applied to unmanned aerial vehicle - Google Patents
Target tracking method and self-positioning method applied to unmanned aerial vehicle Download PDFInfo
- Publication number
- CN112819856B CN112819856B CN202110086693.0A CN202110086693A CN112819856B CN 112819856 B CN112819856 B CN 112819856B CN 202110086693 A CN202110086693 A CN 202110086693A CN 112819856 B CN112819856 B CN 112819856B
- Authority
- CN
- China
- Prior art keywords
- frame image
- target
- aerial vehicle
- unmanned aerial
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/14—Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
- G06F17/141—Discrete Fourier transforms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Discrete Mathematics (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a target tracking method and a self-positioning method applied to an unmanned aerial vehicle, wherein the method is based on relevant filtering, the interframe change rate of a response image is smoothly detected by constraining the second-order difference of response in the tracking process, the capability of a tracker for adapting to the target apparent change is enhanced, the weight distribution of each characteristic channel is iteratively optimized by introducing a channel weight regular term and using an alternative direction multiplier method in the training process, the self-adaptive distribution of channel weight is realized, the tracker focuses on the characteristic channel with higher reliability, and the judgment force of the tracker is enhanced; on the basis of the target tracking method, the invention provides the unmanned aerial vehicle self-positioning system which has better feasibility and universality.
Description
Technical Field
The invention relates to the technical field of visual target tracking and self-positioning, relates to a target tracking method and a self-positioning method applied to an unmanned aerial vehicle, and particularly relates to an unmanned aerial vehicle target tracking and self-positioning method based on multi-regularization correlation filtering.
Background
Visual target tracking is an important research direction in the field of computer vision. The process of target tracking is essentially a process of dynamic information extraction and analysis of the target according to the initial information of the target. With the rapid development of computer vision technology and image processing technology, the target tracking technology has great progress, and the applications in the aspects of automatic driving, man-machine interaction, intelligent monitoring systems and the like are developed.
Unmanned aerial vehicle is because of its detection range is wide, the flexibility is strong, the deployment is fast, characteristics such as with low costs, no matter in military field still civilian field, all receives favour. The development of the target tracking technology undoubtedly brings huge opportunity to the multi-field application of the unmanned aerial vehicle, and the applications such as autonomous landing, intelligent inspection, traffic management, video shooting, intelligent monitoring and the like are developed. However, as the tracked target and the background are usually in a dynamic change process, the target tracking task faces many unpredictable visual uncertainties, such as target deformation, scale change, occlusion, and the like, which makes the target tracking task extremely challenging. Due to the particularity of the carrier of the drone, the application of visual target tracking on the drone platform faces unique challenges: (1) Due to the high visual angle and high speed of the unmanned aerial vehicle, challenging scenes such as visual angle change, rapid lens movement, motion blur and the like frequently occur in the target tracking process of the unmanned aerial vehicle, so that the target appearance change is caused, further the model training is interfered, and even the tracking failure is caused; (2) In consideration of cost and cruising ability of the unmanned aerial vehicle, the computational power level of an onboard computer carried by the unmanned aerial vehicle is limited, but the tracking task of the unmanned aerial vehicle usually has strict real-time performance requirements, so that the tracking algorithm facing the unmanned aerial vehicle has to well balance the computational complexity and the tracking performance.
The existing target tracking algorithms with excellent performance can be divided into two types: a target tracking algorithm based on correlation filtering and a target tracking algorithm based on deep learning. The method based on deep learning exhibits excellent performance depending on strong discrimination brought by deep semantic features, but the deployment of the method depends on an expensive high-performance GPU, so that the method is particularly difficult to apply to unmanned planes which are generally only provided with a single CPU. In recent years, a target tracking algorithm based on correlation filtering has received wide attention in the field of unmanned aerial vehicle target tracking due to high calculation efficiency and good tracking performance. Henriques et al propose a nuclear Correlation filter tracker in the document High-Speed Tracking with Kernelized Correlation Filters, exhibiting fast and excellent Tracking performance. In the document called "Learning Background-Aware Correlation Filters for Visual Tracking", galoogahi et al introduces a binary clipping matrix into the relevant filtering framework, so as to alleviate the boundary effect and further improve the Tracking performance. However, these trackers typically only utilize training samples in the current frame to train the filter, which results in a lack of historical information. Therefore, they are prone to tracking drift when faced with scenes such as similar objects, target scale changes, target/drone motion, etc. For this reason, some studies have attempted to introduce temporal information into the correlation filtering. Li et al, in the document left Spatial-Temporal regulated Correlation Filters for Visual Tracking, propose to limit the variation of successive inter-frame Filters. The document Visual Tracking Visual Adaptive mapping-regulated Correlation Filters considers the temporal continuity of the spatial regularization term to cope with sudden appearance changes. Huang et al, in the document Learing Abstract reconstructed Correlation Filters for Real-Time UAV Tracking, propose to suppress the variation of the continuous interframe response map to suppress the possible Tracking result abnormality. These methods are commonly implemented by limiting the difference between components in a current frame and a previous frame, which to some extent enhances robustness. However, in the case of sharp changes in the appearance of the target, such as motion blur, fast motion, the above approach forces the components to be constrained rather than tolerating reasonable changes to accommodate changes in the appearance of the target, resulting in tracking failure. In addition, the existing correlation filtering tracking method generally considers all characteristic channels equally, and some channels with redundant information often cannot help to locate an object, which limits further improvement of tracking robustness.
The vision-based self-positioning method is a basic subtask in many unmanned aerial vehicle related applications, and the existing vision-based unmanned aerial vehicle self-positioning method generally utilizes local features to perform 2D-3D matching. Faessler et al propose an Infrared LED based Monocular positioning System in the document a singular position System based on infra red LEDs, but their application relies on the deployment of Infrared cameras.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide a target tracking method and a self-positioning method applied to an unmanned aerial vehicle.
The purpose of the invention can be realized by the following technical scheme:
a target tracking method applied to an unmanned aerial vehicle comprises the following steps:
reading a t frame image acquired by an unmanned aerial vehicle, inputting the position, width and height of a tracking target in the t frame image, confirming a training area of the target in the t frame image, extracting the characteristics of the training area of the t frame image, updating an appearance model of the t frame image according to the characteristics of the training area of the t frame image, and training a filter model and channel weight distribution of the t frame image;
taking a training area of a target in a t frame image as a search area of a t +1 frame image, extracting the search area characteristics of the t +1 frame image, obtaining a detection response image of the t +1 frame image according to the search area characteristics of the t +1 frame image and a filter model and channel weight distribution of the t frame image, updating the position, width and height of the target in the t +1 frame image, judging whether a video frame acquired by an unmanned aerial vehicle is input subsequently, if so, repeating the steps to track the target of a next frame image, and otherwise, ending the tracking process.
Preferably, the method comprises the steps of:
a1: reading a t frame image acquired by the unmanned aerial vehicle, and inputting the position, the width w and the height h of a tracking target in the t frame image;
a2: according to the position of the target in the t frame image, extracting the position of the target as the center and the side length as the side length from the t frame imageThe square training area of (1), wherein alpha is a predefined training area proportion, and HOG characteristics, CN characteristics and gray-scale characteristics of the training area in the t frame image are extracted;
a3: updating an appearance model of the t frame image according to the HOG characteristic, the CN characteristic and the gray-scale characteristic of the training area in the t frame image;
a4: training a filter model and channel weight distribution of the t frame image according to the apparent model of the t frame image;
a5: the position of the target in the t frame image is taken as the center, and the side length isThe square of the image is used as a search area of the t +1 th frame image, HOG characteristics, CN characteristics and gray-scale characteristics of the search area of the t +1 th frame image are extracted, and search area characteristics of the t +1 th frame image are obtained;
a6: acquiring a detection response diagram of the t +1 frame image according to the search region characteristics of the t +1 frame image, the filter model of the t frame image and channel weight distribution;
a7: updating the position, the width w and the height h of the target in the t +1 frame image according to the detection response image of the t +1 frame image;
a8: and judging whether video frames acquired by the unmanned aerial vehicle are input subsequently, if so, letting t = t +1, repeating the steps A2-A8 to track the target of the next frame of image, and otherwise, ending the tracking process.
Preferably, the step A3 includes:
a3-1: the HOG characteristics, the CN characteristics and the gray-scale characteristics of the training area in the t frame image are fused to obtain the training area characteristics of the t frame image with D channels
A3-2: for the characteristics of the training areaPerforming discrete Fourier transform to obtain Fourier domain of training region characteristics
A3-3: judging whether the t frame image is the 1 st frame image collected by the unmanned aerial vehicle, if so, updating the appearance model of the t frame imageWhereinIs an apparent model of the image of the t-th frame,expressing a Fourier domain, otherwise, updating an appearance model of the t frame image by adopting a linear interpolation formula based on a preset learning rate eta
preferably, the specific step of step A4 includes:
apparent model based on the t frame imageDetection response graph R of preset Gaussian training label y, t-1 frame image and t-2 frame image t-1 、R t-2 And training a filter model and channel weight distribution of the t-th frame image by minimizing a preset multi-regularization objective function.
Preferably, the multi-regularization objective function is:
wherein h is t Is a filter model for the image of the t-th frame,filter model, β, for the d-th eigenchannel of the t-th frame image t The channel weight distribution for the image of the t-th frame, D is the number of characteristic channels, symbol ^ represents the circular convolution, is apparent from the t-th frame imageThe characteristics of the d-th characteristic channel of the model,is a diagonal matrix composed of the weights of the D eigen channels,the weight of the d characteristic channel of the t frame image is defined, P is a binary cutting matrix, the last three items in the multi-regularization target function are respectively a filter regularization item, a channel weight regularization item and a response difference regularization item to form a plurality of regularization items, kappa, gamma and lambda are respectively coefficients of the filter regularization item, the channel weight regularization item and the response difference regularization item,respectively representing the first difference of the d characteristic channel detection response graphs of the t frame image and the t-1 frame image,
wherein the content of the first and second substances,representing a shift operator, the effect of which is to shift the matrix maxima to a central position, and
preferably, in the step A4, an alternating direction multiplier method is used to minimize the multi-regularization objective function, and a filter model and channel weight distribution of the t-th frame are obtained.
Preferably, in the step A5, the HOG feature, the CN feature and the gray-scale feature of the search region in the t +1 th frame image are fused to obtain the search region feature of the t +1 th frame image with D channels
Preferably, the step A6 is based on the characteristics of the search region in the t +1 th frame imageFilter model h of the t-th frame image t Channel weight assignment β of the t-th frame image t Obtaining a detection response image R of the t +1 th frame image by a detection formula t+1 。
Preferably, the detection formula is:
wherein the content of the first and second substances,representing the inverse discrete fourier transform and,is the weight of the d characteristic channel of the t frame image,the search region feature of the d-th feature channel of the t + 1-th frame image, which is representative of the fourier domain,p is a binary clipping matrix for the filter model of the d-th eigen channel of the t-th frame image.
A self-positioning method applied to an unmanned aerial vehicle comprises the following steps:
b1: reading a t frame image of a self-positioning video sequence acquired by an unmanned aerial vehicle, and acquiring the position, width w and height h of a mark point in the t frame image, wherein the t frame image is provided with 4 or more mark points;
b2: the target tracking method applied to the unmanned aerial vehicle is operated in parallel, and the positions of the mark points in the subsequent image frames are tracked respectively;
b3: converting the coordinate position of the mark point in the image into a world coordinate system;
b4: and outputting the coordinate position of the unmanned aerial vehicle in the world coordinate system after iteratively optimizing the reconstruction error.
Preferably, B4 specifically includes: and (3) iteratively optimizing the reprojection error of the image coordinate system-world coordinate system of the mark point by using a nonlinear least square method, and outputting the coordinate position of the unmanned aerial vehicle in the world coordinate system.
Compared with the prior art, the invention has the beneficial effects that:
(1) According to the target tracking method applied to the unmanned aerial vehicle, the time regular term based on the response second-order difference is designed, and the interframe change rate of the response image is smoothly detected by reasonably introducing the historical response image information, so that the capability of the tracking method for adapting to the target apparent change is enhanced;
(2) The invention designs a channel weight regular term, and realizes the self-adaptive distribution of channel weights by iteratively optimizing the weight distribution of each characteristic channel by using an alternating direction multiplier method in the training process, so that the tracking method focuses on the characteristic channel with higher reliability and enhances the discrimination of the tracking method;
(3) Based on the response difference regular term, the channel weight regular term and the filter regular term, the invention constructs a multi-regularization correlation filtering unmanned aerial vehicle target tracking method, and greatly improves the tracking robustness;
(4) The invention designs a self-positioning method applied to the unmanned aerial vehicle, and the target tracking algorithm can aim at any target, so the self-positioning method can be applied to a wide range of complex scenes, and provides a new solution for the self-positioning task of the unmanned aerial vehicle.
Drawings
Fig. 1 is a flowchart of a target tracking method applied to an unmanned aerial vehicle according to the present invention;
fig. 2 is an overall framework diagram of a target tracking method applied to an unmanned aerial vehicle according to the present invention;
FIG. 3 is a qualitative comparison of a target tracking method applied to an unmanned aerial vehicle according to the present invention with an existing tracking method;
FIG. 4 is a visualization of a channel weight regularization term applied to a target tracking method for an unmanned aerial vehicle according to the present invention;
fig. 5 is a comparison of the performance of the target tracking method applied to the drone and the UAVDT data set of the superior tracking method of the present invention;
fig. 6 is a flow chart of a self-positioning method applied to a drone;
FIG. 7 is a diagram illustrating a scenario in which a self-positioning method applied to an unmanned aerial vehicle is applied to an unmanned aerial vehicle-self-guided vehicle cooperative work scenario;
fig. 8 is a schematic diagram of the marking points on the self-guiding carriage.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. Note that the following description of the embodiment is merely a substantial example, and the present invention is not intended to be limited to the application or the use thereof, and is not limited to the following embodiment.
Examples
A target tracking method applied to an unmanned aerial vehicle comprises the following steps:
reading a t frame image acquired by an unmanned aerial vehicle, inputting the position, width and height of a tracking target in the t frame image, confirming a training area of the target in the t frame image, extracting the characteristics of the training area of the t frame image, updating an appearance model of the t frame image according to the characteristics of the training area of the t frame image, and training a filter model and channel weight distribution of the t frame image;
the method comprises the steps of taking a training area of a target in a t +1 th frame image as a search area of the t +1 th frame image, extracting search area characteristics of the t +1 th frame image, obtaining a detection response image of the t +1 th frame image according to the search area characteristics of the t +1 th frame image and a filter model and channel weight distribution of the t +1 th frame image, updating the position, width and height of the target in the t +1 th frame image, judging whether a video frame input collected by an unmanned aerial vehicle exists subsequently, if yes, repeating the steps to track the target of a next frame image, and if not, ending the tracking process.
As shown in fig. 1, fig. 2, and fig. 4, specifically, the method includes the following steps:
a1: and reading a t frame image acquired by the unmanned aerial vehicle, and inputting the position, the width w and the height h of the tracking target in the t frame image.
In A1, t =1,2,3 \8230.
A2: according to the position of the target in the t frame image, extracting the position of the target as the center and the side length as the side length from the t frame imageWherein α is a predefined training region proportion, and extracting HOG features, CN features and gray-scale features of the training region in the t-th frame image.
A3: and updating the appearance model of the image of the t frame according to the HOG characteristic, the CN characteristic and the gray-scale characteristic of the training area in the image of the t frame.
The step A3 specifically comprises the following steps:
a3-1: the HOG characteristics, the CN characteristics and the gray-scale characteristics of the training area in the t frame image are fused to obtain the training area characteristics of the t frame image with D channels
A3-2: for the characteristics of the training areaPerforming discrete Fourier transform to obtain Fourier domain of training region characteristics
A3-3: judging whether the t frame image is the 1 st frame image collected by the unmanned aerial vehicle, if so, updating the apparent model of the t frame imageWhereinIs an apparent model of the image of the t-th frame,expressing a Fourier domain, otherwise, updating an appearance model of the t frame image by adopting a linear interpolation formula based on a preset learning rate etaIn this embodiment, the linear interpolation formula is:
a4: and training a filter model and channel weight distribution of the image of the t frame according to the apparent model of the image of the t frame.
The step A4 specifically comprises the following steps:
apparent model based on the t frame imageDetection response graph R of preset Gaussian training label y, t-1 frame image and t-2 frame image t-1 、R t-2 And training a filter model and channel weight distribution of the t-th frame image by minimizing a preset multi-regularization objective function.
The multi-regularization objective function is as follows:
wherein h is t Is a filter model for the image of the t-th frame,filter for d characteristic channel of t frame imageModel, beta t The channel weight distribution for the image of the t-th frame, D is the number of characteristic channels, symbol ^ represents the circular convolution, is the feature of the d-th feature channel of the appearance model of the t-th frame image,is a diagonal matrix composed of the weights of the D eigen-channels,the weight of the d characteristic channel of the t frame image is defined, P is a binary cutting matrix, the last three items in the multi-regularization target function are respectively a filter regularization item, a channel weight regularization item and a response difference regularization item to form a plurality of regularization items, kappa, gamma and lambda are respectively coefficients of the filter regularization item, the channel weight regularization item and the response difference regularization item,respectively representing the first difference of the d characteristic channel detection response graphs of the t frame image and the t-1 frame image,
wherein, the first and the second end of the pipe are connected with each other,representing a shift operator, the effect of which is to shift the matrix maxima to a central position, and
in this embodiment, in step A4, an alternating direction multiplier method is used to minimize the multi-regularization objective function, and a filter model and channel weight distribution of the t-th frame are obtained.
A5: the position of the target in the t frame image is taken as the center, and the width isThe square of the image is used as a search area of the t +1 th frame image, the HOG characteristic, the CN characteristic and the gray-scale characteristic of the search area of the t +1 th frame image are extracted, and the search area characteristic of the t +1 th frame image is obtained.
Similar to A3, in step A5, the HOG feature, the CN feature and the gray-scale feature of the search region in the t +1 th frame image are fused to obtain the search region feature of the t +1 th frame image with D channels
A6: and acquiring a detection response diagram of the t +1 frame image according to the search region characteristics of the t +1 frame image, the filter model of the t frame image and channel weight distribution. In the step A6, the characteristics of the search area in the t +1 th frame image are basedFilter model h of the t frame image t Channel weight assignment β of the t-th frame image t Obtaining a detection response graph R of the t +1 th frame image by a detection formula t+1 The detection formula is as follows:
wherein the content of the first and second substances,which represents the inverse of the discrete fourier transform,is the weight of the d characteristic channel of the t frame image,the search region feature of the d-th feature channel of the t + 1-th frame image, which is representative of the fourier domain,p is a binary clipping matrix for the filter model of the d-th eigen channel of the t-th frame image.
A7: updating the position, the width w and the height h of the target in the t +1 frame image according to the detection response image of the t +1 frame image;
a8: and D, judging whether video frames acquired by the unmanned aerial vehicle are input subsequently, if so, enabling t = t +1, repeating the steps A2-A8 to track the target of the next frame of image, and otherwise, ending the tracking process.
As shown in fig. 3, in this embodiment, the target is tracked by using the method of the present invention and similar methods, a second order difference change curve of the inter-frame detection response graph of the method of the present invention and similar methods is drawn in the graph, the change rate of the inter-frame detection response graph is reflected, the inter-frame detection response graph of similar methods changes drastically, the method of the present invention effectively smoothes the change of the inter-frame detection response graph, and the robustness of the tracking algorithm under the condition that the target appearance model changes rapidly is enhanced.
As shown in FIG. 5, the method of the invention and other 35 excellent similar methods at present are evaluated on a UAVDT unmanned aerial vehicle target tracking data set, the method of the invention shows higher precision and success rate, and is very suitable for unmanned aerial vehicle target tracking tasks when the running speed of a single CPU reaches 50.5 frames per second.
A self-positioning method applied to an unmanned aerial vehicle comprises the following steps:
b1: reading a t frame image of a self-positioning video sequence acquired by an unmanned aerial vehicle, and acquiring the position, width w and height h of a mark point in the t frame image, wherein the t frame image is provided with 4 or more mark points;
b2: in parallel with the target tracking method applied to the unmanned aerial vehicle, the positions of the mark points in the subsequent image frames are tracked respectively;
b3: converting the coordinate position of the mark point in the image into a world coordinate system;
b4: and outputting the coordinate position of the unmanned aerial vehicle in the world coordinate system after iteratively optimizing the reconstruction error.
In this embodiment, B4 specifically includes: and (3) iteratively optimizing the reprojection error of the image coordinate system-world coordinate system of the mark point by using a nonlinear least square method, and outputting the coordinate position of the unmanned aerial vehicle in the world coordinate system.
In one embodiment of the present invention, as shown in fig. 7 and 8, the video sequence of the AGV is acquired by the unmanned aerial vehicle, and the AGV is provided with 4 mark points.
The above embodiments are merely examples and do not limit the scope of the present invention. These embodiments may be implemented in other various manners, and various omissions, substitutions, and changes may be made without departing from the technical spirit of the present invention.
Claims (5)
1. A target tracking method applied to an unmanned aerial vehicle is characterized by comprising the following steps:
reading a t frame image acquired by an unmanned aerial vehicle, inputting the position, width and height of a tracking target in the t frame image, confirming a training area of the target in the t frame image, extracting the characteristics of the training area of the t frame image, updating an appearance model of the t frame image according to the characteristics of the training area of the t frame image, and training a filter model and channel weight distribution of the t frame image;
taking a training area of a target in a t frame image as a search area of a t +1 frame image, extracting the search area characteristics of the t +1 frame image, acquiring a detection response image of the t +1 frame image according to the search area characteristics of the t +1 frame image and a filter model and channel weight distribution of the t frame image, updating the position, width and height of the target in the t +1 frame image, judging whether a video frame acquired by an unmanned aerial vehicle is input subsequently, if so, repeating the steps to track the target of a next frame image, and otherwise, ending the tracking process;
the method comprises the following steps:
a1: reading a t frame image acquired by the unmanned aerial vehicle, and inputting the position, the width w and the height h of a tracking target in the t frame image;
a2: according to the position of the target in the t frame image, extracting the position of the target as the center and the side length as the side length from the t frame imageWherein alpha is a predefined training area proportion, and extracting HOG characteristics, CN characteristics and gray-scale characteristics of the training area in the t-th frame image;
a3: updating an appearance model of the t frame image according to the HOG characteristics, the CN characteristics and the gray-scale characteristics of the training area in the t frame image;
a4: training a filter model and channel weight distribution of the t frame image according to the apparent model of the t frame image;
a5: the position of the target in the t frame image is taken as the center, and the side length isThe square of the image is used as a search area of the t +1 th frame image, HOG characteristics, CN characteristics and gray-scale characteristics of the search area of the t +1 th frame image are extracted, and the search area characteristics of the t +1 th frame image are obtained;
a6: acquiring a detection response diagram of the t +1 frame image according to the search region characteristics of the t +1 frame image, the filter model of the t frame image and channel weight distribution;
a7: updating the position, the width w and the height h of the target in the t +1 frame image according to the detection response image of the t +1 frame image;
a8: judging whether video frames acquired by the unmanned aerial vehicle are input subsequently, if so, letting t = t +1, repeating the steps A2-A8 to track the target of the next frame of image, and otherwise, ending the tracking process;
the step A4 specifically comprises the following steps:
apparent model based on the t frame imageDetection response graph R of preset Gaussian training label y, t-1 frame image and t-2 frame image t-1 、R t-2 Training a filter model and channel weight distribution of the t-th frame image by minimizing a preset multi-regularization target function;
the multi-regularization objective function is as follows:
wherein h is t Is a filter model for the image of the t-th frame,filter model for the d characteristic channel of the t frame image, beta t Channel weight distribution for the t-th frame image, D is the number of characteristic channels, symbol ≧ represents the circular convolution,is the feature of the d-th feature channel of the appearance model of the t-th frame image,is a diagonal matrix composed of the weights of the D eigen-channels,the weight of the d characteristic channel of the t frame image is defined, P is a binary cutting matrix, the last three items in the multi-regularization target function are respectively a filter regularization item, a channel weight regularization item and a response difference regularization item to form a plurality of regularization items, kappa, gamma and lambda are respectively coefficients of the filter regularization item, the channel weight regularization item and the response difference regularization item,respectively representing the first difference of the d characteristic channel detection response graphs of the t frame image and the t-1 frame image,
wherein the content of the first and second substances,representing a shift operator, the effect of which is to shift the matrix maxima to the central position, an
The step A3 comprises the following steps:
a3-1: the HOG characteristics, the CN characteristics and the gray-scale characteristics of the training area in the t frame image are fused to obtain the training area characteristics of the t frame image with D channels
A3-2: for the characteristics of the training areaPerforming discrete Fourier transform to obtain Fourier domain of training region characteristics
A3-3: judging whether the t frame image is the 1 st frame image collected by the unmanned aerial vehicle, if so, updating the appearance model of the t frame image WhereinIs an apparent model of the image of the t-th frame,expressing a Fourier domain, otherwise, updating an appearance model of the t frame image by adopting a linear interpolation formula based on a preset learning rate eta
3. the method according to claim 1, wherein the multi-regularization objective function is minimized in step A4 by using an alternating direction multiplier method, and a filter model and a channel weight assignment of the t-th frame are obtained.
4. The method of claim 1, wherein the step A6 is based on the characteristics of the search area in the t +1 th frame imageFilter model h of the t-th frame image t Channel weight assignment β of the t-th frame image t Obtaining a detection response image R of the t +1 th frame image by a detection formula t+1 ;
The detection formula is as follows:
wherein, the first and the second end of the pipe are connected with each other,representing the inverse discrete fourier transform and,is the weight of the d characteristic channel of the t frame image,the search region feature of the d-th feature channel of the t + 1-th frame image, which is representative of the fourier domain,p is a binary clipping matrix for the filter model of the d-th eigen channel of the t-th frame image.
5. A self-positioning method applied to an unmanned aerial vehicle, based on any one of claims 1 to 4, and a target tracking method applied to the unmanned aerial vehicle, characterized by comprising the following steps:
b1: reading a t frame image of a self-positioning video sequence acquired by an unmanned aerial vehicle, and acquiring the position, width w and height h of a mark point in the t frame image, wherein the t frame image is provided with 4 or more mark points;
b2: the method for tracking the target applied to the unmanned aerial vehicle, which is disclosed by any one of claims 1 to 4, is operated in parallel, and the positions of the mark points in the subsequent image frames are tracked respectively;
b3: converting the coordinate position of the mark point in the image into a world coordinate system;
b4: and outputting the coordinate position of the unmanned aerial vehicle in the world coordinate system after iteratively optimizing the reconstruction error.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110086693.0A CN112819856B (en) | 2021-01-22 | 2021-01-22 | Target tracking method and self-positioning method applied to unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110086693.0A CN112819856B (en) | 2021-01-22 | 2021-01-22 | Target tracking method and self-positioning method applied to unmanned aerial vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112819856A CN112819856A (en) | 2021-05-18 |
CN112819856B true CN112819856B (en) | 2022-10-25 |
Family
ID=75858752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110086693.0A Active CN112819856B (en) | 2021-01-22 | 2021-01-22 | Target tracking method and self-positioning method applied to unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112819856B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113470075B (en) * | 2021-07-09 | 2022-09-23 | 郑州轻工业大学 | Target tracking method based on interference suppression appearance modeling |
CN113379804B (en) * | 2021-07-12 | 2023-05-09 | 闽南师范大学 | Unmanned aerial vehicle target tracking method, terminal equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110211157A (en) * | 2019-06-04 | 2019-09-06 | 重庆邮电大学 | A kind of target long time-tracking method based on correlation filtering |
CN110349190A (en) * | 2019-06-10 | 2019-10-18 | 广州视源电子科技股份有限公司 | Method for tracking target, device, equipment and the readable storage medium storing program for executing of adaptive learning |
CN111951298A (en) * | 2020-06-25 | 2020-11-17 | 湖南大学 | Target tracking method fusing time series information |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108776975B (en) * | 2018-05-29 | 2021-11-05 | 安徽大学 | Visual tracking method based on semi-supervised feature and filter joint learning |
CN108986140B (en) * | 2018-06-26 | 2022-02-01 | 南京信息工程大学 | Target scale self-adaptive tracking method based on correlation filtering and color detection |
CN109741366B (en) * | 2018-11-27 | 2022-10-18 | 昆明理工大学 | Related filtering target tracking method fusing multilayer convolution characteristics |
CN110675423A (en) * | 2019-08-29 | 2020-01-10 | 电子科技大学 | Unmanned aerial vehicle tracking method based on twin neural network and attention model |
-
2021
- 2021-01-22 CN CN202110086693.0A patent/CN112819856B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110211157A (en) * | 2019-06-04 | 2019-09-06 | 重庆邮电大学 | A kind of target long time-tracking method based on correlation filtering |
CN110349190A (en) * | 2019-06-10 | 2019-10-18 | 广州视源电子科技股份有限公司 | Method for tracking target, device, equipment and the readable storage medium storing program for executing of adaptive learning |
CN111951298A (en) * | 2020-06-25 | 2020-11-17 | 湖南大学 | Target tracking method fusing time series information |
Also Published As
Publication number | Publication date |
---|---|
CN112819856A (en) | 2021-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI750498B (en) | Method and device for processing video stream | |
Chen et al. | Augmented ship tracking under occlusion conditions from maritime surveillance videos | |
US10719940B2 (en) | Target tracking method and device oriented to airborne-based monitoring scenarios | |
CN107679491B (en) | 3D convolutional neural network sign language recognition method fusing multimodal data | |
CN112819856B (en) | Target tracking method and self-positioning method applied to unmanned aerial vehicle | |
CN113807187A (en) | Unmanned aerial vehicle video multi-target tracking method based on attention feature fusion | |
CN107590432A (en) | A kind of gesture identification method based on circulating three-dimensional convolutional neural networks | |
CN113591968A (en) | Infrared weak and small target detection method based on asymmetric attention feature fusion | |
CN112215074A (en) | Real-time target identification and detection tracking system and method based on unmanned aerial vehicle vision | |
CN111027377A (en) | Double-flow neural network time sequence action positioning method | |
CN108345835B (en) | Target identification method based on compound eye imitation perception | |
Li et al. | UAV object tracking by background cues and aberrances response suppression mechanism | |
CN103198491A (en) | Indoor visual positioning method | |
CN112785626A (en) | Twin network small target tracking method based on multi-scale feature fusion | |
CN114029941B (en) | Robot grabbing method and device, electronic equipment and computer medium | |
CN113743357B (en) | Video characterization self-supervision contrast learning method and device | |
CN112861808B (en) | Dynamic gesture recognition method, device, computer equipment and readable storage medium | |
CN110889460A (en) | Mechanical arm specified object grabbing method based on cooperative attention mechanism | |
Jiang et al. | Object detection and counting with low quality videos | |
CN116110095A (en) | Training method of face filtering model, face recognition method and device | |
CN115035397A (en) | Underwater moving target identification method and device | |
Liu et al. | Online multi-object tracking under moving unmanned aerial vehicle platform based on object detection and feature extraction network | |
Yang et al. | Locator slope calculation via deep representations based on monocular vision | |
Murthi et al. | A semi-automated system for smart harvesting of tea leaves | |
Angus et al. | Real-time video anonymization in smart city intersections |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |