CN115631216A - Holder target tracking system and method based on multi-feature filter fusion - Google Patents

Holder target tracking system and method based on multi-feature filter fusion

Info

Publication number
CN115631216A
CN115631216A (application CN202211644874.1A)
Authority
CN
China
Prior art keywords
target
filter
response
image
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211644874.1A
Other languages
Chinese (zh)
Other versions
CN115631216B (en)
Inventor
陈玮
蔡旭阳
尹彦卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avic Jincheng Unmanned System Co ltd
Jincheng Group Co ltd
Original Assignee
Avic Jincheng Unmanned System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avic Jincheng Unmanned System Co ltd filed Critical Avic Jincheng Unmanned System Co ltd
Priority to CN202211644874.1A
Publication of CN115631216A
Application granted
Publication of CN115631216B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a pan-tilt target tracking system and a pan-tilt target tracking method. Tracking filters are constructed separately from different features, a weight is adaptively assigned to each filter, and the filters are fused by weighting to obtain the target position. The feature filter with the largest weight is then used to predict the target scale over different scale spaces, so accurate scale information of the target is obtained and the problem of scale change during target motion is solved; by designing a template filter and a trajectory filter, the occluded or tracking-lost state of the target is reported in time. In the invention, the filters are not updated with a fixed step size; instead, the update rate of the filters is determined by thresholds on the peak and the peak-to-sidelobe ratio of the fused response map. The multi-filter fusion tracking system and method realize pan-tilt target tracking, increase the accuracy and robustness of target tracking, and output the state of the target, including correct tracking, occlusion and loss, in time.

Description

Holder target tracking system and method based on multi-feature filter fusion
Technical Field
The invention relates to a pan-tilt target tracking system and method, in particular to a pan-tilt target tracking system and method based on multi-feature filter fusion, and belongs to the technical field of image target tracking.
Background
Pan-tilt target tracking is a key technology in pan-tilt applications. Existing target tracking algorithms are mainly based on correlation filtering; although they meet the processing constraints of pan-tilt chips, they still suffer from easy tracking loss and drift and cannot handle target occlusion. For example, most kernelized correlation filter trackers use a single hand-crafted feature, whose representation capability is weak, or simply add up the correlation responses of several hand-crafted features to fuse them, which cannot fully exploit the representation capability of each feature, so the tracking is not robust and the target is easily lost.
With the development of the technology, some correlation filtering methods train a separate correlation filter for each channel of the different hand-crafted features and adaptively assign a weight to each filter. Although this strengthens the feature representation, the computation multiplies with the number of feature channels (for example, the histogram-of-gradient feature has 32 channels, so 32 filters must be trained), so real-time performance cannot be achieved on a low-end pan-tilt chip. Other methods combine features learned by deep learning with a correlation filter to improve tracking precision, such as the SiamFC family, or train the target tracking model end to end with the Transformer framework. However, these models are computationally heavy, run slowly, and are unsuitable for common pan-tilt chips.
In addition, when handling target scale change, most correlation filtering trackers either apply the filter at multiple scales and select the scale with the largest response peak to update the target state, or train a separate filter for scale prediction as in DSST. These methods scale the length and the width of the target by the same ratio, whereas during target motion the length and the width commonly change with different trends as the appearance changes and do not strictly follow this rule. Such trackers therefore cannot predict the target scale accurately, so background noise is continuously introduced into the model or the model focuses too much on local information of the target, the filter model degrades, and tracking eventually fails.
Besides the above drawbacks, most current correlation filtering trackers update the model at a fixed learning rate, so background and other noise is inevitably introduced continuously and the discriminative power of the model is progressively damaged. Moreover, most correlation filtering trackers describe the target state only by the maximum of the final response map and declare occlusion or tracking failure when this maximum falls below a threshold. Because the discriminative power of the tracking model keeps declining, the tracking state cannot be described reliably by the extreme value of the response map alone, and transient occlusion of the target cannot be handled effectively.
For the above reasons, it is necessary to provide a pan-tilt target tracking algorithm that optimizes and solves the above problems and drawbacks.
Disclosure of Invention
The invention aims to provide, for the problems in the prior art, a pan-tilt target tracking system and method based on multi-feature filter fusion: pan-tilt target tracking is realized through a multi-feature filter fusion decision, the accuracy and robustness of target tracking are improved, and the state information of the target can be output in time.
In order to achieve the above object, the present invention adopts the following technical solutions:
the invention firstly discloses a pan-tilt target tracking system based on multi-feature filter fusion, which comprises:
a correlation response map fusion module: training a plurality of feature correlation filters with a kernel correlation filter based on the previous frame image and target information, setting adaptive weights, fusing the responses to generate the final correlation response map, and predicting the target center position of the current frame;
a target scale prediction module: predicting the target scale of the current frame in a scale change space according to the feature correlation filter with the largest weight;
a correlation response map check module: checking the fused correlation response map, calculating its peak and peak-to-sidelobe ratio, selecting whether the template filter needs to be introduced to judge the target state, and selecting whether all correlation filters are updated;
a template filter module: when the correlation response map check module judges the predicted target position to be unreliable, inputting the current predicted target result into the template filter module and judging the target state; if the target is occluded, updating the target position with the trajectory prediction module, and if the target is occluded for a long time, judging the target lost and stopping tracking;
a trajectory prediction module: when the target is occluded, training a trajectory predictor with the target center points of the preceding frames, predicting the target center point of the current frame, and updating the target position;
a model update module: based on the result of the correlation response map check module, analyzing the peak and the peak-to-sidelobe ratio and defining whether the model is updated and the update frequency; if so, updating all correlation filters.
The invention also discloses a pan-tilt target tracking method based on multi-feature filter fusion using the above tracking system, which comprises the following steps:
s1, extracting a plurality of manual features based on an image of an initial frame, a target and background information around the target given by a man-made or detector, training a correlation filter with corresponding features based on a kernel correlation filter target tracking method (kcf), respectively calculating feature maps corresponding to the current frame image, and obtaining a final response map by utilizing self-adaptive weight fusion
Figure 161503DEST_PATH_IMAGE001
Obtaining the position of the center point of the target
Figure 669845DEST_PATH_IMAGE002
S2, utilizing the correlation filter with the largest weight in step S1 to predict, in a scale set space in which the length and width of the target can change relatively independently, the newly changed scale of the target $s^{\ast}$;
S3, verifying the fused correlation filtering response graph in the step S1 to obtain the peak value sidelobe ratio of the response graph
$PSR$, analyzing the peak and the value distribution of the response map; a higher $PSR$ value indicates a more reliable prediction result of the multi-feature filters;
s4, extracting target features based on the image and the target information of the initial frame, training a template filter, after the peak value sidelobe ratio in the step S3 is evaluated, inputting the target information predicted in the steps S1 and S2 into the template filter to calculate the similarity between a prediction result and the template image so as to predict the shielding state of the target, and if the target is continuously shielded for a long time, judging that the target is lost;
s5, training a track predictor by utilizing the position information of a plurality of frames of targets before the current frame, and predicting the position information of the targets when the targets are shielded; in the step, the situation that the output target in the step S4 is shielded is processed through a track prediction module, and a track filter is realized based on linear Kalman filtering;
and S6, combining the response map peak of step S1 with the result of the check module of step S3, adaptively defining whether and how often each filter is updated, and accordingly updating the several correlation filters of step S1 and the template filter of step S4.
It should be noted that, in the above method of the present invention, the filters are not updated with a fixed step size; instead, the update rate of the filters is determined jointly by thresholds on the peak and the peak-to-sidelobe ratio of the fused response map. This avoids the tracking failures caused in prior-art correlation filtering trackers by fixed-step model updates that continuously introduce background noise.
Preferably, the aforementioned manual features include: grayscale features, histogram of oriented gradient features (HOG features), color naming features (CN features), and color features (RGB features).
Preferably, in the foregoing step S1, the final response map obtained by adaptive weight fusion is expressed as:

$R(z) = \sum_{i=1}^{N} w_i \, R_i(z)$

where $w_i$ denotes the weight of the $i$-th feature correlation filter for image block $z$, $R_i(z)$ is the response map of the $i$-th feature of image block $z$, and $N$ denotes the number of correlation filters; $R_i(z) = f_i(z) \ast h_i$, where $f_i(z)$ denotes the $i$-th feature of image block $z$, $h_i$ denotes the $i$-th feature correlation filter, and $\ast$ denotes the convolution calculation.
More preferably, in the foregoing step S1, the adaptive weight $w_i$ is the ratio of the peak of each feature filter's response map to the sum of the peaks of all response maps:

$w_i = \dfrac{P_i}{\sum_{j=1}^{N} P_j}$

where $P_i$ denotes the response map peak of the $i$-th feature filter and $P_j$ denotes the response map peak of the $j$-th feature filter, namely $P_i = \max\big(R_i(z)\big)$ and $P_j = \max\big(R_j(z)\big)$, with $\max(\cdot)$ denoting the maximum value.

The coordinate point $(x, y)$ corresponding to the peak $\max\big(R(z)\big)$ of the fused response map is the target center position, i.e. $pos = (x, y)$.
More preferably, in step S2, the target scale is predicted with the feature filter $h_m$ that has the largest weight, through the following specific steps:

First, a scale change pool $A = \{a_0, a_1, a_2\}$ is applied to the length and the width of the target $(W, H)$ respectively, forming the final scale change space set $B = \{(a_{W,i}\,W,\ a_{H,j}\,H) \mid i, j \in \{0, 1, 2\}\}$, where the subscripts $W$ and $H$ indicate that the scale change acts on the width and the length of the target bounding box respectively, and the subscripts 0, 1 and 2 denote different scale change rates;

Then, using the scale space set $B$ and the predicted target center position $pos$, the image regions $z_k$ under the different scale spaces are obtained, where $k = 1, \dots, 9$; the $m$-th feature $f_m(z_k)$ is extracted from each region and combined with the feature filter $h_m$ to calculate:

$R_k = f_m(z_k) \ast h_m$

where $R_k$ is the response map corresponding to the $k$-th scale;

Finally, by comparing the response map peaks $\max(R_k)$ under the 9 scales of the scale space $B$, the scale corresponding to the largest peak, $s^{\ast} = \arg\max_{k} \max(R_k)$, is taken as the newly changed scale of the target.
more preferably, in the aforementioned step S3, the peak-to-side lobe ratio
Figure 246745DEST_PATH_IMAGE048
Figure 345151DEST_PATH_IMAGE049
A higher value indicates a more reliable prediction result, wherein,
Figure 705725DEST_PATH_IMAGE050
represents the final response graph obtained by the adaptive weight fusion in step S1,
Figure 358423DEST_PATH_IMAGE052
represents the proportion of the side lobe region centered on the peak value to the size of the whole response map region,
Figure 524962DEST_PATH_IMAGE053
representing the mean of the remaining part of the response map after removal of the sidelobe regions,
Figure 415558DEST_PATH_IMAGE054
the standard deviation of the remaining part of the response plot after removal of the sidelobe region is shown.
Further preferably, in step S4, based on the target information given in the initial frame, the image region of the target bounding box, without expansion and without background information, is taken as the template image $T$; grayscale image features are extracted and the template filter $h_T$ is trained with the kernel correlation filtering technique. After target tracking starts, the predicted target center position $pos$ and the predicted target scale $s^{\ast}$ are used to obtain the target prediction result image $z_p$, and the similarity between the target prediction result image and the template image is calculated through $R_T = f(z_p) \ast h_T$ to measure the state of the target and judge whether the target is occluded or lost.
Further preferably, when the $PSR$ value obtained by the check module in step S3 is evaluated: if it is less than the threshold $\tau_1$, the prediction reliability of the tracking result is low, so grayscale features are extracted from the image of the predicted target region and correlated with the template filter; if the peak of the template filter response map is less than the threshold $\tau_2$, the target is occluded, subsequent processing is required, the target state information is output, and the occlusion count of the target is accumulated; when the occlusion count of the target is greater than the threshold $\tau_3$, the target is lost and tracking exits; if the peak of the template filter response map is greater than the threshold $\tau_2$, the target state is normal, the target state information is output, and the occlusion count of the target is cleared.
Still more preferably, in step S5, the trajectory predictor module handles the case where step S4 reports an occluded target, and the trajectory filter is implemented based on linear Kalman filtering. Specifically, the horizontal-axis and vertical-axis coordinates of the target center position are used as a two-dimensional input, the trajectory filter is trained with the target center coordinates of the $M$ frames preceding the current frame, the target center position of the current frame is predicted, and the target center obtained in step S1 is updated accordingly.
It should be further explained that the criterion for model update is: when the peak $\max\big(R(z)\big)$ of the fused response map of step S1 is greater than a threshold $\tau_4$ and the peak-to-sidelobe ratio $PSR$ calculated by the check module in step S3 is greater than a threshold $\tau_5$, each feature correlation filter and the template tracking filter are updated with the tracking result; otherwise no update is performed.

The model update formulas are:

$h_i^{t} = (1 - \eta)\, h_i^{t-1} + \eta\, \hat{h}_i^{t}$

where $\eta$ denotes the learning rate of the feature filter update, $h_i^{t-1}$ is the $i$-th feature correlation filter of the previous frame, $h_i^{t}$ is the updated $i$-th feature filter of the current frame, and $\hat{h}_i^{t}$ is the $i$-th feature correlation filter retrained on the current frame target information;

$h_T^{t} = (1 - \eta_T)\, h_T^{t-1} + \eta_T\, \hat{h}_T^{t}$

where $\eta_T$ denotes the learning rate of the template filter update, $h_T^{t-1}$ is the template filter of the previous frame, $h_T^{t}$ is the updated template filter of the current frame, and $\hat{h}_T^{t}$ is the template filter retrained on the current frame target information.
The invention has the advantages that:
(1) The pan-tilt tracking system and method realize pan-tilt target tracking through a multi-feature filter fusion decision. By designing scale spaces in which the length and the width change in different proportions, they handle both ordinary target scale change and special deformation of the target appearance, increase the accuracy and robustness of target tracking, and output the state of the target, including correct tracking, occlusion and loss, in time.
(2) The method can predict the position and scale of a target with given initial information in the subsequent image frames. It uses adaptive weight fusion of several feature correlation filters (at least grayscale, histogram of oriented gradient, RGB and color naming features), adaptively assigns a weight to each filter and fuses them by weighting to obtain the target position, so the representation capability of the different features is fully exploited, the accuracy of the tracking method is improved, and the computation of the tracking core module is kept small enough to run on a pan-tilt platform.
(3) During tracking, the feature filter with the largest weight predicts the target scale over scale spaces that include different changes of the target length and width, so accurate scale information is obtained and the problem of scale change during target motion is solved. Meanwhile, a template filter that takes the target image as reference and contains no background information is trained; the target state is predicted by combining the weighted response map of the several feature filters with the response map of the template filter, and occlusion or tracking-loss states are reported in time. A target trajectory predictor is also introduced: when the target is occluded, the current target position is predicted from the trajectory of the preceding frames, so occlusion is handled better.
(4) Furthermore, the filters are not updated with a fixed step size; the update rate of the filters is determined by thresholds on the peak and the peak-to-sidelobe ratio of the fused response map. Through the organic combination of the check module with the other modules, the update rate and rhythm of each model are effectively controlled, which further improves the robustness of the method.
Drawings
FIG. 1 is a block diagram of a pan-tilt target tracking system of the present invention;
FIG. 2 is a logic block diagram of a pan-tilt target tracking method of the present invention;
FIG. 3 is a logic diagram of the operation of the template filter of the present invention;
FIG. 4 is a logic diagram of the operation of the trajectory predictor of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the embodiments.
Example 1
Referring to FIG. 1, this embodiment discloses a pan-tilt target tracking system based on multi-feature filter fusion, which comprises the following six functional modules:
(1) A correlation response map fusion module: trains several feature correlation filters with a kernel correlation filter based on the previous frame image and target information, sets adaptive weights, fuses the responses to generate the final correlation response map, and predicts the target center position of the current frame;
(2) A target scale prediction module: predicts the target scale of the current frame in a scale change space using the feature correlation filter with the largest weight;
(3) A correlation response map check module: checks the fused correlation response map, calculates its peak and peak-to-sidelobe ratio, decides whether the template filter needs to be introduced to judge the target state, and decides whether all correlation filters are updated;
(4) A template filter module: when the correlation response map check module judges the predicted target position to be unreliable, the current predicted target result is fed into the template filter module to judge the target state; if the target is occluded, the target position is updated with the trajectory prediction module, and if the target is occluded for a long time, the target is judged lost and tracking stops;
(5) A trajectory prediction module: when the target is occluded, trains a trajectory predictor with the target center points of the preceding frames, predicts the target center point of the current frame, and updates the target position;
(6) A model update module: based on the result of the correlation response map check module, analyzes the peak and the peak-to-sidelobe ratio and decides whether and how often the model is updated; if so, all correlation filters are updated.
Using the image data acquired by the pan-tilt, the tracking system of this embodiment predicts the position and scale of a target with given initial information in the subsequent image frames. First, tracking filters are constructed from different features, namely grayscale features, histogram of oriented gradient features, RGB features and color naming features; a weight is adaptively assigned to each filter and the filters are fused by weighting to obtain the target position, so the representation capability of the different features is fully used while the computation is kept within a controlled range.
Then, the feature filter with the largest weight predicts the target scale in different scale spaces that include different changes of the target length and width, yielding accurate scale information and solving the problem of scale change during target motion. Meanwhile, a template filter that takes the target image as reference and contains no background information is trained; the target state is predicted by combining the weighted response map of the several feature filters with the response map of the template filter, and occlusion or tracking-loss states are reported in time. In addition, a target trajectory predictor is introduced: when the target is occluded, the current target position is predicted from the trajectory of the preceding frames, so occlusion is handled better.
Moreover, the filters are not updated with a fixed step size; condition thresholds are set and combined with the peak-to-sidelobe ratio of the fused correlation filter response map to determine the update rate of the filters. The multi-filter fusion tracking system of the invention realizes pan-tilt target tracking, increases the accuracy and robustness of target tracking, and outputs the state of the target, including correct tracking, occlusion and loss, in time.
Example 2
This embodiment discloses a multi-feature filter fusion tracking method implemented with the tracking system of Embodiment 1; with reference to FIG. 2, the method specifically comprises the following steps:
s1, extracting a plurality of manual features based on an image of an initial frame, a target and background information around the target, wherein the image, the target and the background information are given by a human or a detector, and the manual features at least comprise: a grayscale feature, a histogram gradient feature (HOG feature), a color namespace feature (color-name, CN feature), and a color feature (RGB feature).
Then, a correlation filter is trained for each feature based on the kernel correlation filter (KCF) target tracking method, the feature maps corresponding to the current frame image are calculated respectively, and the final response map $R(z)$ is obtained by adaptive weight fusion, giving the target center position $pos$.
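Since each feature correlation filter is trained with the kernel correlation filter (KCF) method, a minimal single-channel KCF sketch in Python/NumPy is given below for orientation; the Gaussian kernel width, the regularization value and all function names are illustrative assumptions rather than parameters fixed by the patent.

```python
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation k^{xz} of two single-channel patches,
    computed with circular correlation in the Fourier domain."""
    c = np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))).real
    d = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * c) / x.size
    return np.exp(-np.maximum(d, 0.0) / (sigma ** 2))

def train_kcf(x, y, lam=1e-4):
    """Ridge regression in the Fourier domain: alpha_hat = F(y) / (F(k^{xx}) + lam)."""
    k = gaussian_kernel_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect_kcf(alpha_hat, x, z):
    """Response map for a search patch z, given the training patch x."""
    k = gaussian_kernel_correlation(x, z)
    return np.fft.ifft2(np.fft.fft2(k) * alpha_hat).real

# toy usage: train on a random patch with a Gaussian label centered at (32, 32)
rng = np.random.default_rng(0)
x = rng.random((64, 64))
gy, gx = np.mgrid[0:64, 0:64]
y = np.exp(-((gy - 32) ** 2 + (gx - 32) ** 2) / (2 * 2.0 ** 2))
alpha_hat = train_kcf(x, y)
resp = detect_kcf(alpha_hat, x, x)                      # detect on the training patch
print(np.unravel_index(resp.argmax(), resp.shape))      # peak near the label center
```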
Specifically, the final response map obtained by adaptive weight fusion is expressed as:

$R(z) = \sum_{i=1}^{N} w_i \, R_i(z)$

where $N$ denotes the number of correlation filters and $R_i(z)$ is the response map of the $i$-th feature of image block $z$:

$R_i(z) = f_i(z) \ast h_i$

where $f_i(z)$ denotes the $i$-th feature of image block $z$, $h_i$ denotes the $i$-th feature correlation filter, and $\ast$ denotes the convolution calculation.

$w_i$ denotes the weight of the $i$-th feature correlation filter for image block $z$, specifically the ratio of the peak of each feature filter's response map to the sum of the peaks of all response maps, that is:

$w_i = \dfrac{P_i}{\sum_{j=1}^{N} P_j}$

where $P_i$ denotes the response map peak of the $i$-th feature filter and $P_j$ denotes the response map peak of the $j$-th feature filter, namely $P_i = \max\big(R_i(z)\big)$ and $P_j = \max\big(R_j(z)\big)$, with $\max(\cdot)$ denoting the maximum value.

Finally, the coordinate point $(x, y)$ corresponding to the peak $\max\big(R(z)\big)$ of the fused response map is the target center position, i.e. $pos = (x, y)$.
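As an illustration of this fusion step, the following minimal NumPy sketch computes the peak-proportional weights, the fused response map and the predicted center; it assumes the per-feature response maps $R_i(z)$ have already been produced by the individual correlation filters, and all function and variable names are illustrative rather than taken from the patent.

```python
import numpy as np

def fuse_response_maps(response_maps):
    """Fuse per-feature response maps with peak-proportional adaptive weights.

    response_maps: list of 2-D arrays R_i(z), one per feature correlation filter.
    Returns the fused map R(z), the weights w_i and the predicted center (row, col).
    """
    peaks = np.array([r.max() for r in response_maps])          # P_i = max(R_i(z))
    weights = peaks / peaks.sum()                               # w_i = P_i / sum_j P_j
    fused = sum(w * r for w, r in zip(weights, response_maps))  # R(z) = sum_i w_i * R_i(z)
    center = np.unravel_index(np.argmax(fused), fused.shape)    # pos = argmax R(z)
    return fused, weights, center

# toy usage with three random "response maps"
rng = np.random.default_rng(0)
maps = [rng.random((50, 50)) for _ in range(3)]
fused, w, pos = fuse_response_maps(maps)
print(w, pos)
```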
S2, using the correlation filter with the largest weight from step S1, predicting in a scale set space in which the length and width of the target change relatively independently, to obtain the newly changed scale of the target $s^{\ast}$.

The target scale is predicted with the feature filter $h_m$ that has the largest weight, through the following specific steps:

First, a scale change pool $A = \{a_0, a_1, a_2\}$ is applied to the length and the width of the target $(W, H)$ respectively, forming the final scale change space set $B = \{(a_{W,i}\,W,\ a_{H,j}\,H) \mid i, j \in \{0, 1, 2\}\}$, where the subscripts $W$ and $H$ indicate that the scale change acts on the width and the length of the target bounding box respectively, and the subscripts 0, 1 and 2 denote different scale change rates;

Then, using the scale space set $B$ and the predicted target center position $pos$, the image regions $z_k$ under the different scale spaces are obtained, where $k = 1, \dots, 9$; the $m$-th feature $f_m(z_k)$ is extracted from each region and combined with the feature filter $h_m$ to calculate:

$R_k = f_m(z_k) \ast h_m$

where $R_k$ is the response map corresponding to the $k$-th scale;

Finally, by comparing the response map peaks $\max(R_k)$ under the 9 scales of the scale space $B$, the scale corresponding to the largest peak, $s^{\ast} = \arg\max_{k} \max(R_k)$, is taken as the newly changed scale of the target.
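A small sketch of this scale search is given below: it enumerates the nine candidate sizes produced by applying the scale pool independently to width and height and keeps the candidate whose response peak is largest. The pool values and the response_peak_fn callable are assumptions supplied by the caller, not values fixed by the patent.

```python
import numpy as np
from itertools import product

def predict_scale(width, height, response_peak_fn, pool=(0.95, 1.0, 1.05)):
    """Try the 9 candidate sizes (independent width/height factors) and return
    the one whose response map peak is largest.

    response_peak_fn(w, h) is expected to crop the region of size (w, h) around
    the predicted center, extract the largest-weight feature, correlate it with
    the corresponding filter and return the peak of the resulting response map.
    """
    candidates = [(width * aw, height * ah) for aw, ah in product(pool, repeat=2)]
    peaks = [response_peak_fn(w, h) for (w, h) in candidates]
    return candidates[int(np.argmax(peaks))]

# toy usage with a dummy scorer that prefers a slightly wider, shorter box
best = predict_scale(100, 60, lambda w, h: -abs(w - 105) - abs(h - 57))
print(best)
```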
s3, verifying the fused correlation filtering response graph in the step S1 to obtain the peak value sidelobe ratio of the response graph
$PSR$, and analyzing the peak and the value distribution of the response map; a higher $PSR$ value indicates a more reliable prediction result of the multi-feature filters.

The peak-to-sidelobe ratio is

$PSR = \dfrac{\max\big(R(z)\big) - \mu}{\sigma}$

where $R(z)$ is the final response map obtained by adaptive weight fusion in step S1, $\rho$ denotes the proportion of the sidelobe region centered on the peak relative to the size of the whole response map, $\mu$ denotes the mean of the remaining part of the response map after the sidelobe region is removed, and $\sigma$ denotes the standard deviation of the remaining part of the response map after the sidelobe region is removed.
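For concreteness, a minimal NumPy sketch of this PSR computation follows; the size of the excluded peak-centered window (the sidelobe region) is an assumed parameter, since its proportion is not fixed here.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=11):
    """PSR = (max(R) - mu) / sigma, with mu and sigma computed on the response
    map after removing a small window (the sidelobe region) around the peak."""
    peak = response.max()
    r, c = np.unravel_index(np.argmax(response), response.shape)
    keep = np.ones(response.shape, dtype=bool)
    half = exclude // 2
    keep[max(0, r - half):r + half + 1, max(0, c - half):c + half + 1] = False
    rest = response[keep]
    return (peak - rest.mean()) / (rest.std() + 1e-12)

# toy usage: a flat noisy map with one strong peak gives a large PSR
rng = np.random.default_rng(1)
resp = rng.normal(0.0, 0.05, (60, 60))
resp[30, 30] = 1.0
print(peak_to_sidelobe_ratio(resp))
```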
S4, extracting target features based on the image and target information of the initial frame and training a template filter, as shown in FIG. 3; after the peak-to-sidelobe ratio of step S3 has been evaluated, the target information predicted in steps S1 and S2 is input into the template filter to calculate the similarity between the prediction result and the template image so as to predict the occlusion state of the target; if the target remains occluded for a long time, the target is judged lost.
Based on the target information given in the initial frame, the image region of the target bounding box, without expansion and without background information, is taken as the template image $T$; grayscale image features are extracted and the template filter $h_T$ is trained with the kernel correlation filtering technique. After target tracking starts, the predicted target center position $pos$ and the predicted target scale $s^{\ast}$ are used to obtain the target prediction result image $z_p$, and the similarity between the target prediction result image and the template image is calculated through $R_T = f(z_p) \ast h_T$ to measure the state of the target and judge whether the target is occluded or lost.
Specifically, when the $PSR$ value obtained by the check module in step S3 is evaluated: if it is less than the threshold $\tau_1$, the prediction reliability of the tracking result is low, so grayscale features are extracted from the image of the predicted target region and correlated with the template filter; if the peak of the template filter response map is less than the threshold $\tau_2$, the target is occluded, subsequent processing is required, the target state information is output, and the occlusion count of the target is accumulated; when the occlusion count of the target is greater than the threshold $\tau_3$, the target is lost and tracking exits; if the peak of the template filter response map is greater than the threshold $\tau_2$, the target state is normal, the target state information is output, and the occlusion count of the target is cleared. The thresholds are generally chosen from empirical values by those skilled in the art; in this embodiment, $\tau_1$ takes the value 0.3, $\tau_2$ takes the value 0.4, and $\tau_3$ takes the value 15.
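The decision order of this check can be summarized in the short sketch below, using the embodiment's threshold values 0.3 / 0.4 / 15; the function name and the returned state labels are illustrative assumptions.

```python
def update_target_state(psr, template_peak, occlusion_count,
                        tau1=0.3, tau2=0.4, tau3=15):
    """Decide the target state for one frame: PSR check first, then the
    template-filter response peak; returns (state, new_occlusion_count)."""
    if psr >= tau1:                        # fused prediction considered reliable
        return "tracking", 0
    if template_peak > tau2:               # template filter confirms the target
        return "tracking", 0
    occlusion_count += 1                   # target judged occluded this frame
    if occlusion_count > tau3:
        return "lost", occlusion_count     # occluded for too long: stop tracking
    return "occluded", occlusion_count

# toy usage: a low-confidence frame pushes the occlusion count to 15
print(update_target_state(psr=0.2, template_peak=0.35, occlusion_count=14))
```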
S5, training a trajectory predictor with the target positions of several frames preceding the current frame and predicting the target position when the target is occluded, as shown in FIG. 4; in this step, the case where step S4 reports an occluded target is handled by the trajectory prediction module, and the trajectory filter is implemented based on linear Kalman filtering.
Specifically, the horizontal-axis and vertical-axis coordinates of the target center position are used as a two-dimensional input, the trajectory filter is trained with the target center coordinates of the $M$ frames preceding the current frame, the target center position of the current frame is predicted, and the target center obtained in step S1 is updated accordingly.
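A minimal constant-velocity Kalman filter over the target center is sketched below as one possible form of this trajectory filter; the state layout (x, y, vx, vy) and the noise levels are assumptions, since the text only specifies a linear filter taking the center coordinates as a two-dimensional input.

```python
import numpy as np

class TrajectoryPredictor:
    """Constant-velocity linear Kalman filter on the target center (x, y)."""

    def __init__(self, q=1e-2, r=1.0):
        self.F = np.array([[1., 0., 1., 0.],   # state transition: x += vx, y += vy
                           [0., 1., 0., 1.],
                           [0., 0., 1., 0.],
                           [0., 0., 0., 1.]])
        self.H = np.array([[1., 0., 0., 0.],   # only the center (x, y) is observed
                           [0., 1., 0., 0.]])
        self.Q, self.R = q * np.eye(4), r * np.eye(2)
        self.x, self.P = np.zeros(4), 10.0 * np.eye(4)

    def update(self, cx, cy):
        """Feed one observed center point (called for the preceding M frames)."""
        self.x = self.F @ self.x                              # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        z = np.array([cx, cy])                                # correct
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def predict_next(self):
        """Predicted center for the current frame when the target is occluded."""
        return (self.F @ self.x)[:2]

# toy usage: a target moving linearly by (5, 3) pixels per frame
tp = TrajectoryPredictor()
for t in range(10):
    tp.update(5.0 * t, 3.0 * t)
print(tp.predict_next())
```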
It should be further explained that the criterion for model update is: when the peak $\max\big(R(z)\big)$ of the fused response map of step S1 is greater than a threshold $\tau_4$ and the peak-to-sidelobe ratio $PSR$ calculated by the check module in step S3 is greater than a threshold $\tau_5$, each feature correlation filter and the template tracking filter are updated with the tracking result; otherwise no update is performed. These thresholds are likewise chosen from empirical values by those skilled in the art; in this embodiment, $\tau_5$ takes the value 0.7.
The model update formulas are:

$h_i^{t} = (1 - \eta)\, h_i^{t-1} + \eta\, \hat{h}_i^{t}$

where $\eta$ denotes the learning rate of the feature filter update (a value of 0.012 is suggested in this embodiment), $h_i^{t-1}$ is the $i$-th feature correlation filter of the previous frame, $h_i^{t}$ is the updated $i$-th feature filter of the current frame, and $\hat{h}_i^{t}$ is the $i$-th feature correlation filter retrained on the current frame target information;

$h_T^{t} = (1 - \eta_T)\, h_T^{t-1} + \eta_T\, \hat{h}_T^{t}$

where $\eta_T$ denotes the learning rate of the template filter update (a value of 0.025 is suggested in this embodiment), $h_T^{t-1}$ is the template filter of the previous frame, $h_T^{t}$ is the updated template filter of the current frame, and $\hat{h}_T^{t}$ is the template filter retrained on the current frame target information.
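The conditional interpolation update can be sketched as follows; the fused-peak threshold value is an assumption (only $\tau_5 = 0.7$ and the learning rates are given here), and the filters are represented as plain arrays for simplicity.

```python
import numpy as np

def update_filters(old_filters, retrained_filters, fused_peak, psr,
                   peak_thresh=0.5, psr_thresh=0.7, lr=0.012):
    """Update each feature correlation filter only when both the fused response
    peak and the PSR exceed their thresholds; otherwise keep the old model.

    h_t = (1 - lr) * h_{t-1} + lr * h_retrained
    """
    if fused_peak <= peak_thresh or psr <= psr_thresh:
        return old_filters                       # check failed: skip this update
    return [(1.0 - lr) * h_old + lr * h_new
            for h_old, h_new in zip(old_filters, retrained_filters)]

# toy usage with two 8x8 "filters"; the same rule applies to the template
# filter with lr = 0.025
old = [np.ones((8, 8)), np.zeros((8, 8))]
new = [np.zeros((8, 8)), np.ones((8, 8))]
updated = update_filters(old, new, fused_peak=0.9, psr=0.8)
print(updated[0][0, 0], updated[1][0, 0])   # 0.988 and 0.012
```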
S6, combining the response map peak of step S1 with the result of the check module of step S3, whether and how often each filter is updated is decided adaptively, and the several correlation filters of step S1 and the template filter of step S4 are updated accordingly. That is, in the method of the invention, the filters are not updated with a fixed step size; the update rate of the filters is determined jointly by thresholds on the peak and the peak-to-sidelobe ratio of the fused response map. Through the organic combination of the check module with the other modules, the update rate and rhythm of each model are effectively controlled, which further improves the robustness of the method.
In summary, the pan-tilt tracking system and tracking method of the invention realize pan-tilt target tracking through a multi-feature filter fusion decision. By designing scale spaces based on different proportional changes of length and width, they handle ordinary target scale change as well as special deformation of the target appearance, increase the accuracy and robustness of target tracking, and output the state of the target, including correct tracking, occlusion and loss, in time. The representation capability of the different features is thus fully exploited, the accuracy of the tracking method is improved, and the computation of the tracking core module is controlled so that the method can be applied on a pan-tilt platform.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.

Claims (10)

1. A pan-tilt target tracking system based on multi-feature filter fusion, characterized by comprising:
a correlation response map fusion module: training a plurality of feature correlation filters with a kernel correlation filter based on the previous frame image and target information, setting adaptive weights, fusing the responses to generate the final correlation response map, and predicting the target center position of the current frame;
a target scale prediction module: predicting the target scale of the current frame in a scale change space according to the feature correlation filter with the largest weight;
a correlation response map check module: checking the fused correlation response map, calculating its peak and peak-to-sidelobe ratio, selecting whether the template filter needs to be introduced to judge the target state, and selecting whether all correlation filters are updated;
a template filter module: when the correlation response map check module judges the predicted target position to be unreliable, inputting the current predicted target result into the template filter module and judging the target state; if the target is occluded, updating the target position with the trajectory prediction module, and if the target is occluded for a long time, judging the target lost and stopping tracking;
a trajectory prediction module: when the target is occluded, training a trajectory predictor with the target center points of the preceding frames, predicting the target center point of the current frame, and updating the target position;
a model update module: based on the result of the correlation response map check module, analyzing the peak and the peak-to-sidelobe ratio and defining whether the model is updated and the update frequency; if so, updating all correlation filters.
2. A pan-tilt target tracking method based on multi-feature filter fusion, characterized by comprising the following steps:
S1, extracting a plurality of manual features based on the image of the initial frame and the target and surrounding background information, training correlation filters for the different features, respectively calculating the feature maps corresponding to the current frame image, and obtaining the final response map $R(z)$ by adaptive weight fusion, thereby obtaining the target center position;
S2, using the correlation filter with the largest weight in step S1 to predict, in a scale set space in which the length and width of the target change relatively independently, the newly changed scale of the target $s^{\ast}$;
S3, checking the fused correlation filtering response map from step S1 to obtain its peak-to-sidelobe ratio $PSR$, and analyzing the peak and the value distribution of the response map;
S4, extracting target features based on the image and target information of the initial frame and training a template filter; after the peak-to-sidelobe ratio of step S3 has been evaluated, inputting the target information predicted in steps S1 and S2 into the template filter and calculating the similarity between the prediction result and the template image so as to predict the occlusion state of the target; if the target remains occluded for a long time, judging that the target is lost;
S5, training a trajectory predictor with the target positions of several frames preceding the current frame, and predicting the target position when the target is occluded;
and S6, combining the response map peak of step S1 with the result of the check module of step S3, adaptively defining whether and how often each filter is updated, and respectively updating the plurality of filters of step S1 and the template filter of step S4.
3. The pan-tilt target tracking method based on multi-feature filter fusion as claimed in claim 2, wherein in the step S1,

$R(z) = \sum_{i=1}^{N} w_i \, R_i(z)$

where $w_i$ denotes the weight of the $i$-th feature correlation filter for image block $z$, $R_i(z)$ is the response map of the $i$-th feature of image block $z$, and $N$ denotes the number of correlation filters; $R_i(z) = f_i(z) \ast h_i$, where $f_i(z)$ denotes the $i$-th feature of image block $z$, $h_i$ denotes the $i$-th feature correlation filter, and $\ast$ denotes the convolution calculation.
4. The pan-tilt target tracking method based on multi-feature filter fusion as claimed in claim 3, wherein in the step S1, the adaptive weight $w_i$ is the ratio of the peak of each feature filter's response map to the sum of the peaks of all response maps:

$w_i = \dfrac{P_i}{\sum_{j=1}^{N} P_j}$

where $P_i$ denotes the response map peak of the $i$-th feature filter and $P_j$ denotes the response map peak of the $j$-th feature filter, namely $P_i = \max\big(R_i(z)\big)$ and $P_j = \max\big(R_j(z)\big)$;

the coordinate point $(x, y)$ corresponding to the peak $\max\big(R(z)\big)$ of the fused response map is the target center position, i.e. $pos = (x, y)$.
5. The pan-tilt target tracking method based on multi-feature filter fusion as claimed in claim 2, wherein in the step S2, the target scale is predicted with the feature filter $h_m$ that has the largest weight:

first, a scale change pool $A = \{a_0, a_1, a_2\}$ is applied to the length and the width of the target $(W, H)$ respectively, forming the final scale change space set $B = \{(a_{W,i}\,W,\ a_{H,j}\,H) \mid i, j \in \{0, 1, 2\}\}$, where the subscripts $W$ and $H$ indicate that the scale change acts on the width and the length of the target bounding box respectively, and the subscripts 0, 1 and 2 denote different scale change rates;

then, using the scale space set $B$ and the predicted target center position $pos$, the image regions $z_k$ under the different scale spaces are obtained, where $k = 1, \dots, 9$; the $m$-th feature $f_m(z_k)$ is extracted from each region and combined with the feature filter $h_m$ to calculate $R_k = f_m(z_k) \ast h_m$, where $R_k$ is the response map corresponding to the $k$-th scale;

finally, by comparing the response map peaks $\max(R_k)$ under the 9 scales of the scale space $B$, the scale corresponding to the largest peak, $s^{\ast} = \arg\max_{k} \max(R_k)$, is taken as the newly changed scale of the target.
6. the pan-tilt-zoom target tracking method based on multi-feature filter fusion as claimed in claim 2, wherein in the step S3, the peak-to-side lobe ratio
Figure DEST_PATH_IMAGE046
wherein ,
Figure 743483DEST_PATH_IMAGE047
represents the final response graph obtained by the adaptive weight fusion in step S1,
Figure DEST_PATH_IMAGE048
represents the proportion of the side lobe region centered on the peak value to the size of the whole response map region,
Figure 558992DEST_PATH_IMAGE049
representing the mean of the remaining part of the response map after removal of the sidelobe regions,
Figure DEST_PATH_IMAGE050
showing the standard deviation of the remaining portion of the response plot after removal of the sidelobe regions,
Figure 768256DEST_PATH_IMAGE051
higher values indicate more reliable prediction results.
7. The pan-tilt target tracking method based on multi-feature filter fusion as claimed in claim 2, wherein in step S4, based on the target information given in the initial frame, the image region is obtained and used as the template image $T$, grayscale image features are extracted, and the template filter $h_T$ is trained with the kernel correlation filtering technique, no background information and no Hanning window processing being introduced during training; after target tracking starts, the predicted target center position $pos$ and the predicted target scale $s^{\ast}$ are used to obtain the target prediction result image $z_p$, and the similarity between the target prediction result image and the template image is calculated through $R_T = f(z_p) \ast h_T$ to measure the state of the target and judge whether the target is occluded or lost.
8. The pan-tilt target tracking method based on multi-feature filter fusion as claimed in claim 6, wherein when the $PSR$ value obtained by the check module in step S3 is evaluated: if it is less than the threshold $\tau_1$, the prediction reliability of the tracking result is low, so grayscale features are extracted from the image of the predicted target region and correlated with the template filter; if the peak of the template filter response map is less than the threshold $\tau_2$, the target is occluded, subsequent processing is required, the target state information is output, and the occlusion count of the target is accumulated; when the occlusion count of the target is greater than the threshold $\tau_3$, the target is lost and tracking exits; if the peak of the template filter response map is greater than the threshold $\tau_2$, the target state is normal, the target state information is output, and the occlusion count of the target is cleared.
9. The pan-tilt target tracking method based on multi-feature filter fusion as claimed in claim 2, wherein in step S5 the trajectory predictor module handles the occluded-target case output by S4, and the trajectory filter is implemented with a linear Kalman filter; the horizontal and vertical coordinates of the target center point are used as the two-dimensional input, the trajectory filter is trained on the target center-point coordinates of the $N$ frames preceding the current frame, the center-point position of the current frame is predicted, and the target center point calculated in S1 is updated with this prediction.
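A minimal constant-velocity linear Kalman filter over the target center point, as one plausible realization of the trajectory filter in claim 9; the noise parameters and the per-frame update scheme (rather than batch training over the last N frames) are assumptions for illustration.

```python
import numpy as np

class CenterKalman:
    """Constant-velocity Kalman filter over the (x, y) centre point."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)  # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0                            # state covariance
        self.F = np.array([[1, 0, dt, 0],                    # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                     # only (x, y) is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                               # process noise
        self.R = np.eye(2) * r                               # measurement noise

    def predict(self):
        """Propagate the state one frame and return the predicted centre point."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, cx, cy):
        """Correct the state with the tracker's measured centre point."""
        z = np.array([cx, cy], dtype=float)
        y = z - self.H @ self.x                              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```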
10. The pan-tilt target tracking method based on multi-feature filter fusion according to any one of claims 2 to 9, wherein in step S6 the criterion for model update is as follows: when the peak value of the fused response map from step S1 is greater than a threshold $\tau_{\mathrm{peak}}$ and the peak-to-sidelobe ratio computed by the verification module in step S3 is greater than a threshold $\tau_{\mathrm{psr}}$, each feature correlation filter and the template tracking filter are updated with the tracking result; otherwise no update is performed.

The model update formula is

$h_t^{(i)} = (1-\eta)\,h_{t-1}^{(i)} + \eta\,\tilde{h}_t^{(i)}$

where $\eta$ denotes the learning rate of each feature filter update, $h_{t-1}^{(i)}$ is the $i$-th feature correlation filter of the previous frame, $h_t^{(i)}$ is the updated $i$-th feature filter of the current frame, and $\tilde{h}_t^{(i)}$ is the $i$-th feature correlation filter retrained on the current-frame target information;

$h_{T,t} = (1-\eta_T)\,h_{T,t-1} + \eta_T\,\tilde{h}_{T,t}$

where $\eta_T$ denotes the learning rate of the template filter update, $h_{T,t-1}$ is the template filter of the previous frame, $h_{T,t}$ is the updated template filter of the current frame, and $\tilde{h}_{T,t}$ is a template filter retrained on the current-frame target information.
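A sketch of the gated linear-interpolation update in claim 10, applicable to either a feature correlation filter or the template filter; the learning rate and thresholds are placeholders, not the patent's values.

```python
def update_filter(prev_filter, retrained_filter, peak, psr,
                  lr=0.02, peak_thresh=0.3, psr_thresh=6.0):
    """Blend the previous filter with the retrained one when the frame is reliable.

    prev_filter / retrained_filter: filter coefficients (e.g. NumPy arrays in the
    Fourier domain) from the previous frame and from retraining on the current frame.
    """
    if peak > peak_thresh and psr > psr_thresh:
        return (1.0 - lr) * prev_filter + lr * retrained_filter  # blend old and new
    return prev_filter                                           # unreliable frame: keep old model
```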
CN202211644874.1A 2022-12-21 2022-12-21 Multi-feature filter fusion-based holder target tracking system and method Active CN115631216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211644874.1A CN115631216B (en) 2022-12-21 2022-12-21 Multi-feature filter fusion-based holder target tracking system and method


Publications (2)

Publication Number Publication Date
CN115631216A true CN115631216A (en) 2023-01-20
CN115631216B CN115631216B (en) 2023-05-12

Family

ID=84909630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211644874.1A Active CN115631216B (en) 2022-12-21 2022-12-21 Multi-feature filter fusion-based holder target tracking system and method

Country Status (1)

Country Link
CN (1) CN115631216B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009060A (en) * 2019-04-17 2019-07-12 东北大学 A kind of robustness long-term follow method based on correlation filtering and target detection
CN110276785A (en) * 2019-06-24 2019-09-24 电子科技大学 One kind is anti-to block infrared object tracking method
CN112164093A (en) * 2020-08-27 2021-01-01 同济大学 Automatic person tracking method based on edge features and related filtering
CN113989331A (en) * 2021-11-12 2022-01-28 山西大学 Long-term target tracking method based on context multi-clue information and adaptive response
CN114612508A (en) * 2022-02-28 2022-06-10 桂林电子科技大学 Anti-occlusion related filtering target tracking method for multi-feature online learning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228817A (en) * 2023-03-10 2023-06-06 东南大学 Real-time anti-occlusion anti-jitter single target tracking method based on correlation filtering
CN116228817B (en) * 2023-03-10 2023-10-03 东南大学 Real-time anti-occlusion anti-jitter single target tracking method based on correlation filtering

Also Published As

Publication number Publication date
CN115631216B (en) 2023-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
  Effective date of registration: 20230419
  Address after: No. 518 Zhongshan East Road, Qinhuai District, Nanjing City, Jiangsu Province, 210000
  Applicant after: Jincheng Group Co.,Ltd.
  Applicant after: AVIC Jincheng Unmanned System Co.,Ltd.
  Address before: 210000 no.216, Longpan Middle Road, Qinhuai District, Nanjing City, Jiangsu Province
  Applicant before: AVIC Jincheng Unmanned System Co.,Ltd.
GR01 Patent grant