CN112419369A - Anti-occlusion real-time target tracking method - Google Patents
- Publication number
- CN112419369A (application CN202011453408.6A)
- Authority
- CN
- China
- Prior art keywords
- hog
- target
- frame
- spsr
- tracker
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06F18/253: Pattern recognition; fusion techniques of extracted features
- G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]
- G06V10/56: Extraction of image or video features relating to colour
Abstract
An anti-occlusion real-time target tracking method belongs to the technical field of target tracking. The method addresses the low tracking precision that existing, non-universally-applicable methods exhibit in some scenes. It improves the kernel correlation filtering algorithm: introducing a 'protection scale pool' and 'multi-tracker fusion' secures both high real-time performance and high robustness of the target tracking algorithm, and introducing an adaptive link allows the method to be applied to target tracking and target scale estimation in any scene, overcoming the low tracking precision that the fixed parameters of existing methods cause in some scenes. The invention can be applied to the tracking of targets.
Description
Technical Field
The invention belongs to the technical field of target tracking, and particularly relates to an anti-occlusion real-time target tracking algorithm based on a protection scale pool and multi-tracker fusion.
Background
The kernel correlation filtering algorithm was proposed by Henriques et al. in 2015. Based on the fast Fourier transform, it maps expensive time-domain computation into the frequency domain, converts the convolution problem into a correlation problem, and uses the property that a circulant matrix can be diagonalized in the frequency domain, which greatly simplifies the computation and makes fast calculation over a large number of samples possible; it achieved a good effect at the time. However, some inherent defects of the kernel correlation algorithm remain unsolved, such as the scale estimation problem and the target occlusion problem. In recent years researchers have combined the kernel correlation filtering algorithm with scale pool algorithms, Kalman filtering and the like, improving the tracking effect to a certain extent, but these methods realize accurate tracking only in specific scenes and lack universal adaptability. Therefore, how to improve the robustness and adaptability of the kernel correlation filtering algorithm while preserving its real-time performance has become a key difficulty of research in target tracking based on online learning.
Disclosure of Invention
The invention aims to solve the problem that existing methods, lacking universal applicability, deliver low tracking precision in some scenes, and provides an anti-occlusion real-time target tracking method.
The technical scheme adopted by the invention for solving the technical problems is as follows: an anti-occlusion real-time target tracking method specifically comprises the following steps:
the method comprises the steps of firstly, preprocessing an input first frame image by utilizing a Laplacian operator to obtain a preprocessed first frame image; framing a target in the preprocessed first frame image to obtain an image in a target frame;
extracting HOG features and CN features of the image in the target frame, obtaining a tracker template based on the HOG features from the extracted HOG features, obtaining a tracker template based on the CN features from the extracted CN features, and initializing the fusion coefficients β′_HOG and β′_CN;
Preprocessing the input second frame image by using a Laplacian operator to obtain a preprocessed second frame image;
determining the position of a candidate region in the preprocessed second frame image, extracting HOG characteristics and CN characteristics from the candidate region image, performing correlation operation on the HOG characteristics of the candidate region image and a tracker template based on the HOG characteristics to obtain a tracker response value based on the HOG characteristics, and performing correlation operation on the CN characteristics of the candidate region image and the tracker template based on the CN characteristics to obtain a tracker response value based on the CN characteristics;
step three, using the improved peak intensity sidelobe ratio SPSR_HOG of the HOG features of the candidate region image and the improved peak intensity sidelobe ratio SPSR_CN of the CN features of the candidate region image, fine-tuning β′_HOG and β′_CN to obtain the fine-tuned fusion coefficients β_HOG and β_CN;
Fusing the tracker response value based on the HOG characteristic and the tracker response value based on the CN characteristic according to the fusion coefficient after fine tuning to obtain a fused response value; taking the position of the maximum response value in the fused response values as a target position in the second frame image;
step four, according to SPSR_HOG and SPSR_CN, combined with the fine-tuned fusion coefficients, calculating the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image;
estimating the size of the target in the second frame of image by adopting a protection scale pool algorithm; updating the tracker template based on the HOG characteristic and the tracker template based on the CN characteristic according to the target position and size in the second frame image;
setting a threshold T, wherein the value of the threshold T is one fourth of the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image;
step seven, calculating the total improved peak intensity sidelobe ratio SPSR corresponding to the input third frame image, and if the total improved peak intensity sidelobe ratio SPSR corresponding to the third frame image is greater than or equal to the threshold value T, estimating the target size in the third frame image by adopting a protection scale pool algorithm;
otherwise, if the total improved peak intensity sidelobe ratio SPSR corresponding to the third frame image is smaller than the threshold T, continuing by calculating the total improved SPSR corresponding to the fourth frame image; if the total improved SPSR corresponding to the fourth frame image is greater than or equal to the threshold T, estimating the target size in the fourth frame image with the protection scale pool algorithm; otherwise, if the total improved SPSR of the fourth frame image is still smaller than the threshold T, continuing with the total improved SPSR corresponding to the fifth frame image;
if the total improved peak intensity sidelobe ratio SPSR stays smaller than the threshold T for 5 consecutive frames, the target is considered to have disappeared due to complete occlusion, and for the 5th of these frames the target position is re-detected over the full image range;
if the maximum value among the fused response values of the 5th of the 5 consecutive frames over the full image range is greater than or equal to a preset value T1, the total improved peak intensity sidelobe ratio SPSR of that frame's full-image detection is checked for verification; if it is also greater than or equal to the threshold T, the target is considered re-detected, and the target size in the 5th of the 5 consecutive frames is estimated;
otherwise, if the maximum value among the fused response values of the 5th frame over the full image range is smaller than the preset value T1, or if it is greater than or equal to T1 but the total improved peak intensity sidelobe ratio of the 5th frame's full-image detection is smaller than the threshold T, the full-image re-detection of the target position continues with the 6th frame image, and so on until the target is re-detected, whereupon the size of the re-detected target is estimated;
step eight, updating the tracker template based on the HOG characteristics and the tracker template based on the CN characteristics by using the detected target characteristics;
and step nine, repeatedly executing the processes from the step seven to the step eight on the subsequently input images so as to realize real-time tracking of the target.
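The occlusion-judgment logic of steps seven to nine can be summarized in the following sketch. The callbacks preprocess, step and update are assumed stand-ins for the operations of the earlier steps, not names from the patent; this is a minimal outline, not a full implementation:

```python
def run_tracking(frames, preprocess, step, update, T, T1=0.20):
    """Occlusion-handling loop of steps seven to nine (sketch).

    preprocess(frame)      -> Laplacian-sharpened image (step one)
    step(img, full_search) -> (max_fused_response, position, total_spsr),
                              wrapping the tracking of steps two to four
    update(img, position)  -> scale estimation and template update (steps five/eight)
    """
    low_count = 0        # consecutive frames with total SPSR below threshold T
    occluded = False     # set after 5 consecutive low-SPSR frames
    for frame in frames:
        img = preprocess(frame)
        resp, pos, spsr = step(img, full_search=occluded)
        if occluded:
            # re-detection requires BOTH the fused response peak >= T1
            # and the full-image SPSR >= T
            if resp >= T1 and spsr >= T:
                occluded, low_count = False, 0
                update(img, pos)
        elif spsr >= T:
            low_count = 0
            update(img, pos)
        else:
            low_count += 1
            if low_count >= 5:   # target considered fully occluded:
                occluded = True  # switch to full-image re-detection
        yield pos
```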
The beneficial effects of the invention are as follows: the invention provides an anti-occlusion real-time target tracking method that improves the kernel correlation filtering algorithm. Introducing a 'protection scale pool' and 'multi-tracker fusion' secures both the high real-time performance and the high robustness of the target tracking algorithm, and introducing an adaptive link allows the method to be applied to target tracking and target scale estimation in any scene, overcoming the low tracking precision that the fixed parameters of existing methods cause in some scenes.
With an Intel Core i7-6500U low-voltage CPU as the main processor, the method of the invention runs at over 80 FPS on 640×480 images and still maintains 40 FPS on 1080×720 images, satisfying the real-time requirement well.
Drawings
FIG. 1 is a flow chart of a guard scale pool algorithm of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3a) is an original image;
FIG. 3b) is a graph of the effect of Laplace sharpening;
FIG. 4 is a measured plot of the SPSR;
FIG. 5a) is scale estimation effect graph one;
fig. 5b) is scale estimation effect graph two.
Detailed Description
The first embodiment is as follows: this embodiment is described with reference to fig. 2. The anti-occlusion real-time target tracking method of this embodiment specifically comprises the following steps:
the method comprises the steps of firstly, preprocessing an input first frame image by utilizing a Laplacian operator to obtain a preprocessed first frame image; framing a target in the preprocessed first frame image to obtain an image in a target frame;
extracting HOG features and CN features of the image in the target frame, obtaining a tracker template based on the HOG features from the extracted HOG features, obtaining a tracker template based on the CN features from the extracted CN features, and initializing the fusion coefficients β′_HOG and β′_CN;
Preprocessing the input second frame image by using a Laplacian operator to obtain a preprocessed second frame image;
determining the position of a candidate region in the preprocessed second frame image (in a continuous video sequence the acquisition interval between two adjacent frames is very short, so the relative position of the target within the whole image cannot change greatly between adjacent frames; based on this idea, the centre of the previous frame's tracking frame is taken as the centre point, and a rectangular range whose length and width are 2.5 times the size of the previous frame's tracking frame is taken as the candidate region of the target in the following frame), and extracting the HOG features and CN features of the candidate region image; performing a correlation operation between the HOG features of the candidate region image and the HOG-based tracker template (a correlation operation is a mathematical operation comparing the similarity of two sequences; in the kernel correlation filtering algorithm it is used to compute the similarity between two different image blocks, and together with the kernel function it forms the main part of the algorithm) to obtain the HOG-based tracker response value, and performing a correlation operation between the CN features of the candidate region image and the CN-based tracker template to obtain the CN-based tracker response value;
step three, using the improved peak intensity sidelobe ratio SPSR_HOG of the HOG features of the candidate region image and the improved peak intensity sidelobe ratio SPSR_CN of the CN features of the candidate region image, fine-tuning β′_HOG and β′_CN to obtain the fine-tuned fusion coefficients β_HOG and β_CN;
Fusing the tracker response value based on the HOG characteristic and the tracker response value based on the CN characteristic according to the fusion coefficient after fine tuning to obtain a fused response value; taking the position of the maximum response value in the fused response values as a target position in the second frame image;
step four, according to SPSR_HOG and SPSR_CN, combined with the fine-tuned fusion coefficients, calculating the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image;
estimating the size of the target in the second frame of image by adopting a protection scale pool algorithm; updating the tracker template based on the HOG characteristic and the tracker template based on the CN characteristic according to the target position and size in the second frame image;
setting a threshold T, wherein the value of the threshold T is one fourth of the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image (the value of the threshold T is invariable throughout the whole detection process);
step seven, calculating the total improved peak intensity sidelobe ratio SPSR corresponding to the input third frame image (the calculation uses the templates and fusion coefficients updated at the second frame image); if the total improved SPSR corresponding to the third frame image is greater than or equal to the threshold T, estimating the target size in the third frame image with the protection scale pool algorithm;
when the total improved peak intensity sidelobe ratio corresponding to the input third frame image is calculated, the calculation is carried out based on the updated template and the fusion coefficient;
otherwise, if the total improved peak intensity sidelobe ratio SPSR corresponding to the third frame image is smaller than the threshold T, continuing by calculating the total improved SPSR corresponding to the fourth frame image; if the total improved SPSR corresponding to the fourth frame image is greater than or equal to the threshold T, estimating the target size in the fourth frame image with the protection scale pool algorithm; otherwise, if the total improved SPSR of the fourth frame image is still smaller than the threshold T, continuing with the total improved SPSR corresponding to the fifth frame image;
if the total improved peak intensity sidelobe ratio SPSR stays smaller than the threshold T for 5 consecutive frames, the target is considered to have disappeared due to complete occlusion, and for the 5th of these frames the target position is re-detected over the full image range;
if the maximum value among the fused response values of the 5th of the 5 consecutive frames over the full image range is greater than or equal to a preset value T1 (the preset critical value T1 is taken as 0.20 in the invention), the total improved peak intensity sidelobe ratio SPSR of that frame's full-image detection is checked for verification; if it is also greater than or equal to the threshold T, the target is considered re-detected, and the target size in the 5th of the 5 consecutive frames is estimated;
otherwise, if the maximum value among the fused response values of the 5th frame over the full image range is smaller than the preset value T1, or if it is greater than or equal to T1 but the total improved peak intensity sidelobe ratio of the 5th frame's full-image detection is smaller than the threshold T, the full-image re-detection of the target position continues with the 6th frame image, and so on until the target is re-detected, whereupon the size of the re-detected target is estimated;
when the target position is redetected in the full image range, the processes of the second step and the third step need to be repeated, and the only difference is that the range of the candidate area needs to be set to be the full image range;
step eight, updating the tracker template based on the HOG characteristics and the tracker template based on the CN characteristics by using the detected target characteristics (after the position and the size of the target are detected, the HOG characteristics and the CN characteristics of the target can be extracted);
and step nine, repeatedly executing the processes of steps seven and eight on each subsequently input image (at each new iteration the next frame is processed with the updated templates, the detected target size and target position, and the fusion coefficients obtained for the previous frame are fine-tuned according to the new frame's improved peak intensity sidelobe ratios SPSR_HOG and SPSR_CN), thereby realizing real-time tracking of the target.
In this way the tracking algorithm adapts, during tracking, to test data sets in which the target's CN features are not salient (such as black-and-white or night data sets) or in which the target's HOG features are not salient (such as data sets where background and foreground are mixed): the weight of the feature type that cannot effectively characterize the target is reduced appropriately in the tracker fusion, and the proportion of the feature type that can effectively characterize the target is increased, so the tracking algorithm shows better adaptability to different tracking conditions. In addition, the separability of target and background differs between test data sets, which causes large differences in the value of the total improved peak intensity sidelobe ratio SPSR during tracking; adaptively adopting a different occlusion-judgment threshold T for each data set therefore also helps to judge the complete-occlusion condition more effectively.
The second embodiment is as follows: the first difference between the present embodiment and the specific embodiment is: in the first step, the laplacian operator is used to preprocess the input first frame image, and the specific process is as follows:
g(x,y)=5f(x,y)-[f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)]
where f (x, y) is a pixel value of a point (x, y) in the acquired first frame image, f (x +1, y) is a pixel value of a point (x +1, y) in the acquired first frame image, f (x-1, y) is a pixel value of a point (x-1, y) in the acquired first frame image, f (x, y +1) is a pixel value of a point (x, y +1) in the acquired first frame image, f (x, y-1) is a pixel value of a point (x, y-1) in the acquired first frame image, and g (x, y) is a pixel value of a point (x, y) in the preprocessed first frame image.
The image is preprocessed based on the Laplace image sharpening, so that the image contrast can be enhanced, and the situations such as image blurring can be better dealt with.
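The sharpening formula above is equivalent to a single convolution with a cross-shaped kernel (centre weight 5, four-neighbour weights -1); a minimal sketch using OpenCV, in which the function and variable names are illustrative rather than taken from the patent:

```python
import cv2
import numpy as np

# g(x,y) = 5*f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)]
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float32)

def laplacian_sharpen(image: np.ndarray) -> np.ndarray:
    """Apply the Laplacian sharpening preprocessing to an 8-bit image."""
    sharpened = cv2.filter2D(image.astype(np.float32), -1, SHARPEN_KERNEL)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```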
The third concrete implementation mode: this embodiment differs from embodiment one in that: in step one, a tracker template based on the HOG features is obtained from the extracted HOG features, and a tracker template based on the CN features is obtained from the extracted CN features; the specific process comprises the following steps:
and training the single-feature tracker based on the HOG features by using the extracted HOG features to obtain a tracker template based on the HOG features, and training the single-feature tracker based on the CN features by using the extracted CN features to obtain a tracker template based on the CN features.
The HOG feature, the histogram of oriented gradients, describes local texture information of an image and is often used in target tracking, target detection and similar tasks. Using a sliding-window strategy, the gradient values of a local image region are computed, then classified and counted by direction to form a gradient histogram; these histograms represent the direction, magnitude and other information of the image gradient in the region and constitute the HOG feature.
CN features were proposed by m.danelljan et al in 2014, and the core idea is to map RGB color attributes to 11 predefined colors (black, blue, brown, gray, green, orange, pink, purple, red, white, and yellow), and the mapped color features will be more robust than the initial RGB features in characterizing the color information of the target, and belong to moderately significant features.
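The CN mapping is conventionally realized with a precomputed lookup table; the sketch below assumes Van de Weijer's 32768×11 "w2c" table, and the channel quantization follows the common public implementation rather than anything spelled out in the patent:

```python
import numpy as np

def rgb_to_cn(image_bgr: np.ndarray, w2c: np.ndarray) -> np.ndarray:
    """Map an HxWx3 uint8 image to HxWx11 color-name probabilities.

    w2c is an assumed (32768, 11) lookup table loaded from file; each row
    holds the probabilities of the 11 color names for one quantized RGB bin.
    """
    b, g, r = image_bgr[..., 0], image_bgr[..., 1], image_bgr[..., 2]
    # quantize each channel to 32 levels and combine into one table index
    index = (r // 8).astype(np.int32) * 32 * 32 \
          + (g // 8).astype(np.int32) * 32 \
          + (b // 8).astype(np.int32)
    return w2c[index]            # shape (H, W, 11)
```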
The HOG feature adapts well to geometric and photometric changes of the target, while the CN feature adapts well to geometric deformation, rotation changes and the like. To this extent the two are complementary.
It is not uncommon for an algorithm to fuse HOG features and CN features to improve KCF. However, most documents fuse by concatenating the HOG feature matrix and the CN feature matrix end to end into a comprehensive feature matrix that is then fed into a correlation filter for solution. The feature fusion method adopted by the invention instead trains a kernel correlation tracker based on the HOG features and a kernel correlation filtering tracker based on the CN features separately; the two do not interfere with each other, follow the same tracking principle, and are updated simultaneously.
The multi-feature fusion performed by previous improvements of the kernel correlation filtering algorithm concatenates the features into one large feature matrix and trains the tracker on this new comprehensive feature; the operation is simple, but the feature proportions become unbalanced. The method therefore first trains several independent single-feature trackers on different features, then selects suitable proportions according to the actual effect, combines the response maps obtained by the single-feature trackers into an overall response map, and selects the maximum-response position in the overall response map as the target position of the next frame.
In addition to the two features of this embodiment, the three features HOG + CN + LAB may be selected: trackers are trained on the three features and fused, and their weighting coefficients are adjusted appropriately. When computer performance allows, this can improve the tracking effect relative to this embodiment, although the running speed drops slightly. Suppose the kernel correlation filter response values obtained for a candidate block are respectively f_HOG(z), f_CN(z) and f_LAB(z); then the total response is:

f(z) = β_HOG·f_HOG(z) + β_CN·f_CN(z) + β_LAB·f_LAB(z)
the fourth concrete implementation mode: the third difference between the present embodiment and the specific embodiment is that: in the third step, according to the fusion coefficient after fine adjustment, fusing the tracker response value based on the HOG characteristic and the tracker response value based on the CN characteristic to obtain a fused response value; the specific process comprises the following steps:
f(z) = β_HOG·f_1(z) + β_CN·f_2(z)

where f(z) is the fused response value, f_1(z) is the HOG-based tracker response value, f_2(z) is the CN-based tracker response value, β_HOG is the fine-tuned fusion coefficient for f_1(z), and β_CN is the fine-tuned fusion coefficient for f_2(z), satisfying β_HOG + β_CN = 1.
The fifth concrete implementation mode: this embodiment differs from embodiment four in that: the fine-tuned fusion coefficients β_HOG and β_CN are calculated as follows:

wherein: β′_HOG is the initialized fusion coefficient for f_1(z), β′_CN is the initialized fusion coefficient for f_2(z), SPSR_HOG is the improved peak intensity sidelobe ratio of the HOG features of the candidate region image, and SPSR_CN is the improved peak intensity sidelobe ratio of the CN features of the candidate region image.
Taking the fusion of the HOG feature and the CN feature as an example: a large number of experiments show that tracking achieves a better effect when β_HOG is 0.7 to 0.8, so at initialization β′_HOG is defined as 0.75 and β′_CN as 0.25. The improved peak intensity sidelobe ratios SPSR of the two trackers are then solved in real time during tracking; when the SPSR of one tracker is large and that of the other is small, the target features are considered salient for the former tracker and weak for the latter, and the fusion weight of the tracker with the large SPSR is increased appropriately, although the final value must still fall within the preset interval.
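The fine-tuning equation itself appears only as an image in the source; the sketch below is one plausible reading under the constraints the text does state: weights start at 0.75/0.25, shift toward the tracker with the larger SPSR, stay within the preset interval, and always sum to 1. The normalization and the interval bounds are assumptions:

```python
import numpy as np

def fine_tune_weights(spsr_hog, spsr_cn, beta0_hog=0.75, lo=0.7, hi=0.8):
    """Nudge the HOG weight toward the tracker with the larger SPSR."""
    beta_hog = beta0_hog * spsr_hog / (beta0_hog * spsr_hog
                                       + (1.0 - beta0_hog) * spsr_cn)
    beta_hog = float(np.clip(beta_hog, lo, hi))   # keep within preset interval
    return beta_hog, 1.0 - beta_hog               # beta_HOG + beta_CN = 1

def fuse_responses(f_hog, f_cn, beta_hog, beta_cn):
    # f(z) = beta_HOG * f1(z) + beta_CN * f2(z)
    return beta_hog * f_hog + beta_cn * f_cn
```

With equal SPSR values the weights stay at 0.75/0.25; a strongly dominant SPSR pushes the weight to the edge of the preset interval.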
The sixth specific implementation mode: this embodiment differs from embodiment five in that: the improved peak intensity sidelobe ratio SPSR_HOG of the HOG features of the candidate region image and the improved peak intensity sidelobe ratio SPSR_CN of the CN features of the candidate region image are calculated as follows:

the improved peak intensity sidelobe ratio SPSR_HOG of the HOG features of the candidate region image:

where g_max denotes the maximum response value among the response values of the HOG-based tracker; taking the maximum response value g_max as the centre, a circumscribed rectangle centred on g_max is formed by extending a set fraction of the width of the HOG-based tracker template to the left and right and a set fraction of the length of the template up and down; μ denotes the mean of all response values contained in the circumscribed rectangle, and σ denotes the variance of all response values contained in the circumscribed rectangle;

similarly, the value of the improved peak intensity sidelobe ratio SPSR_CN of the CN features of the candidate region image is calculated.
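The SPSR equation is likewise given as an image in the source; this sketch assumes the conventional peak-to-sidelobe form (g_max - μ) / σ over the rectangle around the peak, uses the standard deviation where the text says variance, and takes 1/4 as the extension fraction. All three choices are assumptions:

```python
import numpy as np

def spsr(response: np.ndarray, tpl_w: int, tpl_h: int,
         frac: float = 0.25) -> float:
    """Improved peak intensity sidelobe ratio over a response map (sketch)."""
    py, px = np.unravel_index(np.argmax(response), response.shape)
    g_max = response[py, px]
    # rectangle centred on the peak, extended by a fraction of template size
    hw, hh = int(tpl_w * frac), int(tpl_h * frac)
    y0, y1 = max(0, py - hh), min(response.shape[0], py + hh + 1)
    x0, x1 = max(0, px - hw), min(response.shape[1], px + hw + 1)
    window = response[y0:y1, x0:x1]
    mu, sigma = window.mean(), window.std()
    return float((g_max - mu) / (sigma + 1e-12))  # guard against zero spread
```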
The seventh embodiment: this embodiment differs from embodiment six in that: in step four, according to SPSR_HOG and SPSR_CN, combined with the fine-tuned fusion coefficients, the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image is calculated as:

SPSR = β_HOG·SPSR_HOG + β_CN·SPSR_CN
the invention adopts a structure of a seven-scale pool, and the scale coefficients in the scale pool are respectively 1.15, 1.10, 1.05, 1, 1/1.05, 1/1.10 and 1/1.15. According to the principle of a scale pool method, a scale coefficient is multiplied by the size of an original tracking frame during each detection to separate out 7 tracking frames with different sizes to track a target, a tracker calculates target responses under the current size on the basis of the 7 scales respectively (the calculation method is the same as that in the step two), and then the response value obtained by each scale is regarded as the size of the tracking frame corresponding to the maximum response value as the new target size. In an actual experiment, although the method is simple and effective, the problem that the size of a tracking frame frequently jumps under a complex tracking environment, namely the problem that the scale of the tracking frame is unstable, is solved, the concept of a 'protection scale pool' is provided, and the operation block diagram of the protection scale pool algorithm is shown in fig. 1.
The specific implementation mode is eight: this embodiment is described with reference to figs. 5a) and 5b), and differs from embodiment seven in that: in step five, a protection scale pool algorithm is adopted to estimate the target size in the second frame image; the specific process is as follows:
when the target size is estimated, a seven-scale pool structure is adopted to obtain seven tracking frames of different scales; the sizes of the seven tracking frames are respectively 1.15, 1.10, 1.05, 1, 1/1.05, 1/1.10 and 1/1.15 times the size of the target frame from step one;

at the next iteration, the target size obtained in step six is used as the original target-frame size, and the sizes of the seven tracking frames are respectively 1.15, 1.10, 1.05, 1, 1/1.05, 1/1.10 and 1/1.15 times the target size obtained in step six;

after the maximum response value of each tracking-frame size is calculated, the maximum response value of the tracking frame 1.15 times the target-frame size is multiplied by the protection coefficient 1/1.10, that of the frame 1.10 times the size by 1/1.05, that of the frame 1.05 times the size by 1, that of the frame 1/1.05 times the size by 1, that of the frame 1/1.10 times the size by 1/1.05, and that of the frame 1/1.15 times the size by 1/1.10;
after multiplication by the protection coefficients, the maximum response value of each tracking-frame size in the protection scale pool is obtained, and the tracking-frame size corresponding to the largest response value in the protection scale pool is taken as the target size.
In this embodiment a protection coefficient is introduced when the target responses of the individual scales are compared: the coefficient is at most 1, and the more a tracking-frame size in the scale pool differs from 1, the smaller its protection coefficient; each size's target response value is multiplied by its coefficient before the comparison. The core idea is that the scale is updated only when the target response at a new scale is much larger than at the current scale, i.e. when a scale change really exists; when the new scale's response is close to the current one, it is treated as clutter caused by background interference or motion blur, and the current scale is kept. At the same time, the larger the size difference between a new scale and the current scale, the more cautious the update. This scale protection mechanism preserves the stability of the tracking frame to the greatest extent and effectively avoids introducing scale estimation errors.
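A sketch of the protection scale pool follows. The guard coefficients for the shrinking scales mirror those of the growing scales, per the symmetric rule stated above; response_at_scale is an assumed callback wrapping the response calculation of step two:

```python
def guard_scale_pool(box, response_at_scale):
    """Return the tracking frame rescaled by the winning guarded scale.

    box = (x, y, w, h); response_at_scale(s) is an assumed callback that
    returns the tracker's maximum response on the frame rescaled by s.
    """
    scales = [1.15, 1.10, 1.05, 1.0, 1/1.05, 1/1.10, 1/1.15]
    # guard coefficients shrink as a scale moves away from 1, so a new
    # scale only wins when its response is clearly larger than the current
    guards = [1/1.10, 1/1.05, 1.0, 1.0, 1.0, 1/1.05, 1/1.10]
    scores = [response_at_scale(s) * g for s, g in zip(scales, guards)]
    best = scales[scores.index(max(scores))]
    x, y, w, h = box
    return (x, y, w * best, h * best)
```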
The specific implementation method nine: this embodiment differs from embodiment eight in that: in step five, the HOG-based tracker template and the CN-based tracker template are updated according to the target position and size in the second frame image; the specific process is as follows:
for the HOG feature based tracker template:
α = j·z + (1 - j)·α_0

where α denotes the updated HOG-based tracker template, j is a fixed coefficient, z denotes the HOG features of the target in the second frame image, and α_0 denotes the HOG-based tracker template obtained in step one;

as tracking proceeds, α_0 denotes the most recently updated HOG-based tracker template, and z denotes the HOG features of the target extracted from the current frame;
and similarly, updating the tracker template based on the CN characteristics to obtain the updated tracker template based on the CN characteristics.
The detailed implementation mode is ten: this embodiment differs from embodiment nine in that: the value of the fixed coefficient j is determined as follows:
if the total improved peak intensity sidelobe ratio SPSR corresponding to the current frame image is greater than or equal to the threshold T, the target size of the current frame image is estimated and j is calculated from the current frame's total improved SPSR; when the target is completely occluded, j is set to 0, i.e. template updating stops.
Template updating is an important part of the KCF algorithm. If the template is not updated in real time as tracking proceeds, the tracking system cannot adapt to changes in the target's geometry and the like, the robustness of the algorithm deteriorates during long-term tracking, and tracking eventually fails; conversely, if at every update the template is completely replaced by the current frame's target, tracking performance also degrades.
In the invention, the value of j is adjusted according to the real-time tracking condition. When the tracking condition is ideal and the target is clear, the confidence in the template change is high and the target features can indeed be considered to have changed slightly, so the update coefficient should be increased appropriately to facilitate subsequent tracking. Conversely, when blur, occlusion or similar conditions occur, the target features become indistinct and the extracted target template is likely to contain unwanted information such as background clutter and occluders; updating the template in the original way would then easily cause template drift, so the update coefficient should be reduced appropriately.
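How j is derived from the current SPSR is given as an equation image in the source; the linear ramp below is an assumption matching the qualitative behaviour described, where a larger SPSR yields a faster update and full occlusion freezes the template. The base rate j_max is also an assumption:

```python
def update_template(alpha_prev, z, spsr, T, occluded, j_max=0.02):
    """Adaptive template update alpha = j*z + (1-j)*alpha_0 (sketch)."""
    if occluded:
        return alpha_prev                     # full occlusion: j = 0, no update
    # SPSR equals 4*T at initialization (T is one quarter of the initial
    # SPSR), so the ramp reaches j_max under ideal tracking conditions
    j = min(j_max, j_max * spsr / (4.0 * T))
    return j * z + (1.0 - j) * alpha_prev
```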
Examples
Step 1: processing a first frame image;
step 1.1: preprocessing the image by utilizing a Laplacian operator;
the image is differentiated in the x direction and the y direction by using a Laplace operator calculation formula, a sudden change region in the Laplace operator image is very sensitive, and tends to ignore a region with gentle change in the image, so that the region with gentle change in the image can be restored while maintaining a Laplace sharpening result only by superposing an original image and an output image subjected to Laplace operation, and the original image and the Laplace sharpening effect graph are respectively shown in FIGS. 3a) and 3 b).
Step 1.2: an operator frames a target through a human-computer interaction interface;
step 1.3: initializing tracker fusion weight, extracting HOG and CN characteristics of the image of the framed part and respectively carrying out tracker template training;
step 2: subsequent image processing;
step 2.1: preprocessing the image by utilizing a Laplacian operator;
step 2.2: determining the position of a candidate region, extracting HOG and CN characteristics of the candidate region, and respectively carrying out correlation operation with templates of respective characteristics to obtain a target response value;
After the HOG and CN features are extracted, principal component analysis is applied to reduce the 11-dimensional CN feature to 2 dimensions, and the fusion weighting coefficient of the trackers is adjusted accordingly. This greatly increases the running speed of the algorithm while the CN tracker still reaches about 80% of the performance of the first embodiment; this variant enhances the real-time performance of the algorithm while keeping a good tracking effect when computer performance is insufficient.
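A sketch of this optional PCA reduction, implemented here with an SVD of the mean-centred feature matrix (scikit-learn's PCA would serve equally well):

```python
import numpy as np

def reduce_cn(cn_feat: np.ndarray, n_dims: int = 2) -> np.ndarray:
    """Reduce an HxWx11 CN feature map to HxWxn_dims via PCA."""
    h, w, c = cn_feat.shape                 # c == 11
    flat = cn_feat.reshape(-1, c)
    centred = flat - flat.mean(axis=0)
    # principal directions = right singular vectors of the centred data
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    reduced = centred @ vt[:n_dims].T
    return reduced.reshape(h, w, n_dims)
```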
Step 2.5: according to the weight coefficient, the response values of the HOG tracker and the CN tracker are fused to obtain a final response value;
step 2.6: obtaining the target position of the current frame according to the response value, and respectively calculating the SPSR of the two trackers and the total SPSR;
step 2.7: fine-tuning the weighting coefficient beta according to the value of the SPSR;
step 2.8: performing occlusion judgment and target re-detection based on the total SPSR;
The SPSR, based on the peak intensity sidelobe ratio, is provided for occlusion judgment: once complete occlusion is judged, model updating stops immediately and the target is re-detected within a certain image range; when the response value of some location in the image exceeds the preset value and the corresponding total improved peak intensity sidelobe ratio is greater than the threshold T, the target is determined to have reappeared, and tracking of it resumes.
Step 2.9: adding a protection scale pool algorithm to carry out scale estimation;
During tracking the scale of the target may change as it approaches or recedes, while in the original correlation filtering algorithms the size of the tracking frame is fixed. A scale estimation strategy based on the protection scale pool designed here is therefore added to estimate the target scale in real time and adjust the tracking-frame size to the actual tracking situation, improving the adaptability of the tracking algorithm.
Step 2.10: updating the template according to the SPSR and outputting a target position;
and step 3: testing and evaluating;
step 3.1: testing the real-time performance of the algorithm;
For a target tracking algorithm, real-time operation is of central importance: if the algorithm runs too slowly, the inter-frame target displacement becomes too large in actual operation, easily causing target loss and harming robustness. The real-time performance of the algorithm therefore needs to be evaluated first; an algorithm is generally considered real-time when it runs above 25 FPS.
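Measuring the frame rate is straightforward; a minimal sketch, where tracker_step is an assumed per-frame callback:

```python
import time

def measure_fps(tracker_step, frames):
    """Return the average frames per second over a sequence."""
    t0 = time.perf_counter()
    for frame in frames:
        tracker_step(frame)
    return len(frames) / (time.perf_counter() - t0)
```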
Step 3.2: testing the robustness of the algorithm;
In this step, two indices of the tracking algorithm, tracking precision and tracking success rate, are measured against the ground-truth target positions provided with the data set; the higher both are, the better the robustness of the algorithm is considered.
Fig. 4 shows the measured variation of the improved peak intensity sidelobe ratio (denoted SPSR) on one OTB50 sequence; in this video sequence the target is occluded around the 100th frame, and the SPSR value can be seen to drop markedly there.
Many current improved algorithms focus only on raising tracking precision and robustness, adding a large number of operations that leave the algorithm unable to meet the real-time requirement and forfeiting the inherent advantage of the kernel correlation filtering algorithm. The invention pays close attention to computation speed in its design and adds a frame-skipping mechanism where necessary, so its real-time performance is good: it runs at over 80 FPS on 640×480 images and maintains 40 FPS on 1080×720 images, satisfying the real-time requirement well.
According to the practical application scenario of the invention, representative data sets covering various interferences were selected from the OTB and VOT data sets and tested against the original algorithm. Tables 1 and 2 give the test results on the OTB data sets and the VOT data sets, respectively:
table 1 test results (tracking success rate/tracking accuracy) for 11 typical data sets selected in OTB data set
Table 2 test results (tracking success rate/tracking accuracy) for 5 typical datasets selected from the VOT-boat dataset
Algorithm \ data set | boat-1 | boat-3 | boat-8 | boat-13 | boat-14 | Average |
KCF | 0.649/0.561 | 0.216/0.606 | 0.458/0.124 | 0.571/0.618 | 0.140/0.285 | 0.407/0.539 |
Improved KCF | 0.726/0.741 | 0.773/0.929 | 0.930/0.680 | 0.733/0.818 | 0.797/0.889 | 0.791/0.811 |
Judging from the experimental results on the OTB and VOT data sets, the proposed algorithm improves tracking robustness and precision very markedly, and its effect is outstanding compared with current mainstream non-deep-learning improved algorithms (on the OTB data sets the improved tracking precision and tracking success rate are generally around 0.85).
The above calculation examples of the invention merely explain its calculation model and calculation flow in detail and are not intended to limit its embodiments. Other variations and modifications can be made by those skilled in the art on the basis of the above description, and it is neither possible nor necessary to exhaust all embodiments here; all obvious variations and modifications derived therefrom remain within the protection scope of the invention.
Claims (10)
1. An anti-occlusion real-time target tracking method is characterized by comprising the following steps:
the method comprises the steps of firstly, preprocessing an input first frame image by utilizing a Laplacian operator to obtain a preprocessed first frame image; framing a target in the preprocessed first frame image to obtain an image in a target frame;
extracting HOG features and CN features of the image in the target frame, obtaining a tracker template based on the HOG features from the extracted HOG features, obtaining a tracker template based on the CN features from the extracted CN features, and initializing the fusion coefficients β′_HOG and β′_CN;
Preprocessing the input second frame image by using a Laplacian operator to obtain a preprocessed second frame image;
determining the position of a candidate region in the preprocessed second frame image, extracting HOG characteristics and CN characteristics from the candidate region image, performing correlation operation on the HOG characteristics of the candidate region image and a tracker template based on the HOG characteristics to obtain a tracker response value based on the HOG characteristics, and performing correlation operation on the CN characteristics of the candidate region image and the tracker template based on the CN characteristics to obtain a tracker response value based on the CN characteristics;
step three, using the improved peak intensity sidelobe ratio SPSR_HOG of the HOG features of the candidate region image and the improved peak intensity sidelobe ratio SPSR_CN of the CN features of the candidate region image, fine-tuning β′_HOG and β′_CN to obtain the fine-tuned fusion coefficients β_HOG and β_CN;
Fusing the tracker response value based on the HOG characteristic and the tracker response value based on the CN characteristic according to the fusion coefficient after fine tuning to obtain a fused response value; taking the position of the maximum response value in the fused response values as a target position in the second frame image;
step four, according to SPSR_HOG and SPSR_CN, combined with the fine-tuned fusion coefficients, calculating the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image;
estimating the size of the target in the second frame of image by adopting a protection scale pool algorithm; updating the tracker template based on the HOG characteristic and the tracker template based on the CN characteristic according to the target position and size in the second frame image;
setting a threshold T, wherein the value of the threshold T is one fourth of the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image;
step seven, calculating the total improved peak intensity sidelobe ratio SPSR corresponding to the input third frame image, and if the total improved peak intensity sidelobe ratio SPSR corresponding to the third frame image is greater than or equal to the threshold value T, estimating the target size in the third frame image by adopting a protection scale pool algorithm;
otherwise, if the total improved peak intensity sidelobe ratio SPSR corresponding to the third frame image is smaller than the threshold T, continuing by calculating the total improved SPSR corresponding to the fourth frame image; if the total improved SPSR corresponding to the fourth frame image is greater than or equal to the threshold T, estimating the target size in the fourth frame image with the protection scale pool algorithm; otherwise, if the total improved SPSR of the fourth frame image is still smaller than the threshold T, continuing with the total improved SPSR corresponding to the fifth frame image;
if the total improved peak intensity sidelobe ratio SPSR stays smaller than the threshold T for 5 consecutive frames, the target is considered to have disappeared due to complete occlusion, and for the 5th of these frames the target position is re-detected over the full image range;
if the maximum value among the fused response values of the 5th of the 5 consecutive frames over the full image range is greater than or equal to a preset value T1, the total improved peak intensity sidelobe ratio SPSR of that frame's full-image detection is checked for verification; if it is also greater than or equal to the threshold T, the target is considered re-detected, and the target size in the 5th of the 5 consecutive frames is estimated;
otherwise, if the maximum value among the fused response values of the 5th frame over the full image range is smaller than the preset value T1, or if it is greater than or equal to T1 but the total improved peak intensity sidelobe ratio of the 5th frame's full-image detection is smaller than the threshold T, the full-image re-detection of the target position continues with the 6th frame image, and so on until the target is re-detected, whereupon the size of the re-detected target is estimated;
step eight, updating the tracker template based on the HOG characteristics and the tracker template based on the CN characteristics by using the detected target characteristics;
and step nine, repeatedly executing the processes from the step seven to the step eight on the subsequently input images so as to realize real-time tracking of the target.
2. The method according to claim 1, wherein in the first step, the input first frame image is preprocessed by using a laplacian, and the specific process is as follows:
g(x,y)=5f(x,y)-[f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)]
where f (x, y) is a pixel value of a point (x, y) in the acquired first frame image, f (x +1, y) is a pixel value of a point (x +1, y) in the acquired first frame image, f (x-1, y) is a pixel value of a point (x-1, y) in the acquired first frame image, f (x, y +1) is a pixel value of a point (x, y +1) in the acquired first frame image, f (x, y-1) is a pixel value of a point (x, y-1) in the acquired first frame image, and g (x, y) is a pixel value of a point (x, y) in the preprocessed first frame image.
3. The anti-occlusion real-time target tracking method according to claim 2, wherein in the first step, a tracker template based on the HOG features is obtained according to the extracted HOG features, and a tracker template based on the CN features is obtained according to the extracted CN features, and the specific process is as follows:
and training the single-feature tracker based on the HOG features by using the extracted HOG features to obtain a tracker template based on the HOG features, and training the single-feature tracker based on the CN features by using the extracted CN features to obtain a tracker template based on the CN features.
4. The anti-occlusion real-time target tracking method according to claim 3, wherein in the third step, according to the fine-tuned fusion coefficient, the response value of the tracker based on the HOG feature and the response value of the tracker based on the CN feature are fused to obtain a fused response value; the specific process comprises the following steps:
f(z) = β_HOG·f_1(z) + β_CN·f_2(z)

where f(z) is the fused response value, f_1(z) is the HOG-based tracker response value, f_2(z) is the CN-based tracker response value, β_HOG is the fine-tuned fusion coefficient for f_1(z), and β_CN is the fine-tuned fusion coefficient for f_2(z), satisfying β_HOG + β_CN = 1.
5. The anti-occlusion real-time target tracking method according to claim 4, wherein the fine-tuned fusion coefficients β_HOG and β_CN are calculated as follows:

wherein: β′_HOG is the initialized fusion coefficient for f_1(z), β′_CN is the initialized fusion coefficient for f_2(z), SPSR_HOG is the improved peak intensity sidelobe ratio of the HOG features of the candidate region image, and SPSR_CN is the improved peak intensity sidelobe ratio of the CN features of the candidate region image.
6. The anti-occlusion real-time target tracking method according to claim 5, wherein the improved peak intensity sidelobe ratio SPSR_HOG of the HOG features of the candidate region image and the improved peak intensity sidelobe ratio SPSR_CN of the CN features of the candidate region image are calculated as follows:

for the improved peak intensity sidelobe ratio SPSR_HOG of the HOG features of the candidate region image:

g_max denotes the maximum value among the response values of the tracker based on the HOG features; taking g_max as the center, a circumscribed rectangle centered on g_max is formed by extending a predetermined fraction of the width of the HOG-feature-based tracker template to the left and to the right, and a predetermined fraction of the length of the template upward and downward; μ denotes the mean of all response values contained in the circumscribed rectangle, and σ denotes the variance of all response values contained in the circumscribed rectangle;

the improved peak intensity sidelobe ratio SPSR_CN of the CN features of the candidate region image is calculated in the same way.
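The SPSR formula is likewise omitted from this publication text; the sketch below assumes the conventional peak-to-sidelobe form (g_max - μ)/σ evaluated over the rectangle described above, with the rectangle's half-extent fractions frac_w and frac_h left as parameters because the patent's proportions are not given here:

```python
import numpy as np

def spsr(response, frac_w=0.1, frac_h=0.1):
    """Assumed improved peak-to-sidelobe ratio: (g_max - mu) / sigma over a
    rectangle centred on the response peak; frac_w / frac_h are hypothetical
    half-extents as fractions of the response map's width / height."""
    h, w = response.shape
    py, px = np.unravel_index(np.argmax(response), response.shape)
    g_max = response[py, px]
    dy, dx = max(1, int(frac_h * h)), max(1, int(frac_w * w))
    rect = response[max(0, py - dy):py + dy + 1, max(0, px - dx):px + dx + 1]
    mu, sigma = rect.mean(), rect.std()
    return (g_max - mu) / (sigma + 1e-8)   # epsilon guards a flat rectangle
```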
7. The anti-occlusion real-time target tracking method according to claim 6, wherein in the fourth step, the total improved peak intensity sidelobe ratio SPSR corresponding to the second frame image is calculated from SPSR_HOG and SPSR_CN combined with the fine-tuned fusion coefficients; the specific process is as follows:

SPSR = β_HOG·SPSR_HOG + β_CN·SPSR_CN.
8. The anti-occlusion real-time target tracking method according to claim 7, wherein in the fifth step, a protection scale pool algorithm is adopted to estimate the size of the target in the second frame image, and the specific process is as follows:

when the target size is estimated, a seven-scale pool structure is adopted to obtain seven tracking frames of different scales, whose sizes are respectively 1.15, 1.10, 1.05, 1, 1/1.05, 1/1.10 and 1/1.15 times the size of the target frame in step one;

after the maximum response value of each tracking frame is calculated, the maximum response value of the 1.15x frame is multiplied by a protection coefficient of 1/1.10, that of the 1.10x frame by 1/1.05, that of the 1.05x frame by 1, that of the 1/1.10x frame by 1/1.05, and that of the 1/1.15x frame by 1/1.05;

the values obtained after multiplication by the protection coefficients serve as the maximum response values of the tracking frames in the protection scale pool, and the size of the tracking frame corresponding to the largest of these values is taken as the target size.
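A compact sketch of this protection-scale-pool selection follows. The published text does not state protection coefficients for the 1x and 1/1.05x frames, so a neutral coefficient of 1 is assumed for both (an assumption, not a value from the patent):

```python
# (scale factor, protection coefficient); the coefficients for scales
# 1 and 1/1.05 are assumed to be 1 -- the published text omits them.
SCALE_POOL = [
    (1.15, 1 / 1.10), (1.10, 1 / 1.05), (1.05, 1.0), (1.00, 1.0),
    (1 / 1.05, 1.0), (1 / 1.10, 1 / 1.05), (1 / 1.15, 1 / 1.05),
]

def estimate_scale(max_response_at):
    """max_response_at(scale) -> peak response of the tracking frame resized
    by `scale`; returns the scale whose protected response is largest."""
    protected = [(coef * max_response_at(s), s) for s, coef in SCALE_POOL]
    return max(protected)[1]
```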
9. The anti-occlusion real-time target tracking method according to claim 8, wherein in the fifth step, the HOG-feature-based tracker template and the CN-feature-based tracker template are updated according to the position and size of the target in the second frame image, and the specific process is as follows:

for the HOG-feature-based tracker template:

α = j·z + (1 - j)·α0

where α denotes the updated HOG-feature-based tracker template, j is a constant value (a learning rate), z denotes the HOG features of the target in the second frame image, and α0 denotes the HOG-feature-based tracker template obtained in step one;

the CN-feature-based tracker template is updated in the same way to obtain the updated CN-feature-based tracker template.
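This is a standard linear-interpolation template update; a one-line sketch, with j = 0.02 as an assumed learning rate (the patent only states that j is a constant):

```python
def update_template(alpha0, z, j=0.02):
    """Claim 9 update: alpha = j*z + (1 - j)*alpha0, blending the new
    feature observation z into the existing template alpha0."""
    return j * z + (1 - j) * alpha0
```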
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011453408.6A CN112419369A (en) | 2020-12-11 | 2020-12-11 | Anti-occlusion real-time target tracking method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112419369A (en) | 2021-02-26 |
Family
ID=74776507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011453408.6A Pending CN112419369A (en) | 2020-12-11 | 2020-12-11 | Anti-occlusion real-time target tracking method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112419369A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109993052A (en) * | 2018-12-26 | 2019-07-09 | Shanghai Aerospace Control Technology Institute | Scale-adaptive target tracking method and system in complex scenes |
CN110599519A (en) * | 2019-08-27 | 2019-12-20 | Shanghai Jiao Tong University | Anti-occlusion correlation filter tracking method based on a domain search strategy |
Non-Patent Citations (1)
Title |
---|
ZHANG, Wei; WEN, Xianbin: "Kernel correlation filter tracking algorithm based on multi-feature and scale estimation", Journal of Tianjin University of Technology, no. 03 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113807250A (en) * | 2021-09-17 | 2021-12-17 | Shenyang Aerospace University | Anti-occlusion, scale-adaptive tracking method for flying targets in low-altitude airspace |
CN113807250B (en) * | 2021-09-17 | 2024-02-02 | Shenyang Aerospace University | Anti-occlusion, scale-adaptive tracking method for flying targets in low-altitude airspace |
CN113808171A (en) * | 2021-09-27 | 2021-12-17 | Shandong Technology and Business University | Unmanned aerial vehicle visual tracking method based on dynamic feature selection with a feature weight pool |
CN114663977A (en) * | 2022-03-24 | 2022-06-24 | Longgang Tianyu Information Technology Co., Ltd. | Accurate pedestrian tracking method for long-time-span surveillance video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |