CN106898015A - A multi-cue visual tracking method based on adaptive sub-block screening - Google Patents

A multi-cue visual tracking method based on adaptive sub-block screening

Info

Publication number
CN106898015A
Authority
CN
China
Prior art keywords
sub-block
target
candidate sub-block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710035427.9A
Other languages
Chinese (zh)
Other versions
CN106898015B (en)
Inventor
Sun Weiping (孙伟平)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201710035427.9A
Publication of CN106898015A
Application granted
Publication of CN106898015B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details

Abstract

The invention belongs to the field of visual tracking, and in particular relates to a multi-cue visual tracking method based on adaptive sub-block screening, comprising the following steps: (1) perform saliency detection on the target region and, combined with uniform partitioning, obtain a candidate sub-block set; (2) perform multi-scale sampling on the candidate sub-blocks, determine the sub-blocks with larger frequency-domain responses and their corresponding scales, and update the candidate sub-block set; (3) perform motion estimation on the sub-blocks in the candidate sub-block set and determine the center position of the tracked target by fusing multiple cues of the sub-blocks; (4) using the current position of the target, update the Gaussian kernel corresponding to each sub-block position, and reinitialize the sub-blocks that do not satisfy the requirements. The method of the present invention removes background interference and makes full use of the visual constraints of mid-level features and high-level semantic priors, so that the target is localized more accurately; it also has the advantages of simple steps and a small computational cost, and is suitable for visual target tracking in the presence of occlusion.

Description

A multi-cue visual tracking method based on adaptive sub-block screening
Technical field
The invention belongs to the field of visual tracking, and more particularly relates to a multi-cue visual tracking method based on adaptive sub-block screening.
Background technology
In practical application scenarios, the tracked object often undergoes partial appearance changes due to occlusion and other causes, and the appearance model of the target needs to reflect these changes. Therefore, local features (such as sub-blocks) are commonly used in target appearance modeling.
The Chinese invention patent "A novel video tracking method based on adaptive partitioning" discloses an adaptive partitioning method that automatically adjusts the partitioning strategy according to the content of the video and the features of the tracked target region, improving video tracking performance. The Chinese invention patent "A novel video tracking method based on a partitioning strategy" discloses a sub-block partitioning strategy that judges whether a block is occluded by comparing the similarity between the color histogram of the block and those of its surrounding blocks; the weight of a block detected as occluded is reduced when computing the inter-block relationship coefficients in the subsequent particle filter algorithm, so that the influence of occlusion on tracking accuracy is reduced. The Chinese invention patent "A real-time video tracking method" compresses image features by segmenting the tracked target into sub-blocks, builds multi-scale candidate regions to adapt to scale changes and fast motion of the target, and finally uses the KCF (kernelized correlation filter) algorithm to compute the correlation between feature vectors to achieve video tracking.
In these methods, local features are used for appearance modeling to handle occlusion and similar situations; the target is divided into sub-blocks or small blocks by uniform partitioning, and the number and spatial positions of the sub-blocks are usually fixed. Uniform partitioning produces a large number of sub-blocks, and in the subsequent tracking pipeline, whether a particle filter framework or a KCF framework is used, every sub-block must be processed, so the computational cost of the algorithm is proportional to the number of sub-blocks. Moreover, these sub-blocks contain a large amount of redundant information and introduce background information that interferes with the tracking result, which can cause drift during visual tracking and degrade tracking accuracy.
Due to the above drawbacks and deficiencies, there is an urgent need in the art for further improvement: a visual target tracking method that overcomes the defects of fixed partitioning of the tracked target region in the prior art and meets the needs of target tracking in complex tracking scenes.
Summary of the invention
In view of the above drawbacks or improvement needs of the prior art, the present invention provides a multi-cue visual tracking method based on adaptive sub-block screening. The method is a target appearance representation and target localization method that improves visual tracking performance in complex scenes such as occlusion: sub-blocks are divided and screened according to prior knowledge, the target is represented by sub-blocks with higher visual saliency and higher frequency-domain response, and appearance, spatial and temporal motion information are fused to estimate the target motion, thereby improving the accuracy of visual tracking and its adaptability to different scenes. The method can be used to automatically monitor target persons or vehicles in systems such as traffic flow or public security surveillance.
To achieve the above object, according to one aspect of the present invention, there is provided a multi-cue visual tracking method based on adaptive sub-block screening, characterized by comprising the following steps:
S1. Acquire image information, divide the image to obtain the target region, uniformly partition the target region, perform saliency detection on the target region image, and obtain a candidate sub-block set according to the degree of visual saliency;
S2. Perform multi-scale sampling on the candidate sub-blocks in the candidate sub-block set obtained in step S1, apply kernelized correlation filtering to the samples, retain the sub-blocks with larger frequency-domain responses in the candidate sub-block set, determine the scale of these sub-blocks, and update the candidate sub-block set;
S3. Take the sub-blocks in the candidate sub-block set as tracking targets, perform motion estimation on the sub-blocks, compute the appearance cue ac_i, spatial distribution cue sc_i and motion trajectory cue mc_i of each sub-block, and thereby compute the current position of the tracked target;
S4. Using the current position of the tracked target obtained in step S3, update the Gaussian kernel corresponding to each sub-block position in the current target region, and reinitialize the sub-blocks that do not satisfy spatial cohesion or motion consistency.
Further preferably, in step S1, the process of dividing the target region into sub-blocks to obtain the candidate sub-block set specifically includes the following steps:
S1.1 Acquire a frame of image, obtain the target region, and uniformly partition the target region;
S1.2 Take the target region image as the input of a saliency detection algorithm to obtain the saliency map of the target region;
S1.3 Apply Gaussian smoothing to the saliency map obtained in step S1.2 to remove noise points;
S1.4 Take the position (x, y) of the maximum point of the saliency map, determine the sub-block containing (x, y) according to the partitioning of step S1.1, and add it to the candidate sub-block set;
S1.5 Update the saliency map by removing the sub-block containing the maximum point found in step S1.4, repeat step S1.4 to obtain a new sub-block, compute its overlap with the sub-blocks in the candidate sub-block set, and, if all overlaps are below a threshold τ, add the new sub-block to the candidate sub-block set;
Preferably, in step S1.2, the saliency detection algorithm is the FT (frequency-tuned) algorithm.
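To make the partitioning procedure concrete, the following is a minimal Python sketch of steps S1.1 to S1.5. It assumes OpenCV and NumPy are available, uses the standard frequency-tuned (FT) saliency formulation, and treats the grid size, the number of candidate blocks and the overlap threshold τ as illustrative parameters rather than values fixed by the invention.

import cv2
import numpy as np

def ft_saliency(bgr):
    """Frequency-tuned saliency: distance of each pixel's Lab colour from the
    mean Lab colour of a slightly blurred image."""
    lab = cv2.cvtColor(cv2.GaussianBlur(bgr, (5, 5), 0), cv2.COLOR_BGR2LAB).astype(np.float32)
    mean = lab.reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(lab - mean, axis=2)

def overlap_ratio(a, b):
    """Intersection-over-union of two blocks given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def select_candidate_blocks(target_roi, grid=(4, 4), n_blocks=6, tau=0.3):
    """Steps S1.1-S1.5: uniform partitioning plus saliency-guided screening.
    May return fewer than n_blocks blocks if overlaps exceed tau."""
    h, w = target_roi.shape[:2]
    bh, bw = h // grid[0], w // grid[1]
    sal = cv2.GaussianBlur(ft_saliency(target_roi), (5, 5), 0)   # S1.2-S1.3
    candidates = []
    for _ in range(n_blocks):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)       # S1.4: maximum point
        blk = ((x // bw) * bw, (y // bh) * bh, bw, bh)           # enclosing uniform block
        sal[blk[1]:blk[1] + bh, blk[0]:blk[0] + bw] = -np.inf    # S1.5: update saliency map
        if all(overlap_ratio(blk, c) < tau for c in candidates):
            candidates.append(blk)
    return candidates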
Preferably, scale screening of the sampled candidate sub-blocks specifically includes the following steps:
S2.1 Determine the search region range and the search region template size of each sub-block in the candidate sub-block set;
S2.2 Initialize a Gaussian kernel according to the search region range and the search region template size;
S2.3 Scale the region samples of each sub-block to the search region template size, extract HOG features, and apply the FFT to obtain the multi-scale samples of the sub-block;
S2.4 Perform correlation filtering between the multi-scale samples of the sub-block and the corresponding Gaussian kernel to obtain the frequency-domain response, remove the sub-blocks with lower frequency-domain responses, and update the candidate sub-block set;
S2.5 Compute the PSR value of each sample according to the following formula, and sum the PSR values of all sub-blocks at each scale:
PSR = (g_max - μ_s) / σ_s
where g_max is the peak of the response, s is the sidelobe region of the response, and μ_s and σ_s are respectively the mean and standard deviation of the sidelobe region;
S2.6 Compare the summed PSR values and take the scale of the sub-blocks with the largest summed PSR value as the best scale.
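The PSR computation and scale selection of steps S2.5 and S2.6 can be sketched in Python (NumPy only) as follows; the size of the window excluded around the peak when estimating the sidelobe statistics is an assumption, since the text does not fix it.

import numpy as np

def psr(response, exclude=5):
    """Peak-to-Sidelobe Ratio of a correlation response map. The sidelobe is
    the response with a small window around the peak excluded."""
    peak = response.max()
    py, px = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def best_scale(responses_per_scale):
    """S2.5-S2.6: responses_per_scale maps each scale factor to the list of
    response maps of the surviving sub-blocks at that scale; the scale whose
    summed PSR is largest is taken as the current target scale."""
    sums = {s: sum(psr(r) for r in maps) for s, maps in responses_per_scale.items()}
    return max(sums, key=sums.get)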
Preferably, in step S3, the specific steps of performing motion estimation on the sub-blocks in the candidate sub-block set are as follows:
S3.1 Judge the motion of each sub-block, and reject from the candidate sub-block set the sub-blocks whose displacement between adjacent image frames is too large or whose motion direction deviates too much from the mean displacement direction between adjacent image frames;
S3.2 Compute the appearance cue ac_i, spatial distribution cue sc_i and motion trajectory cue mc_i of each sub-block, with the appearance cue given by
ac_i = PSR_i
where PSR_i is the normalized PSR value of the i-th block, M denotes the number of sub-blocks remaining after step S3.1, (x_i, y_i) is the coordinate of the i-th block, (x_s, y_s) is the center of all blocks, σ_s is the distance standard deviation, (Δx_i, Δy_i) is the motion vector of the i-th block, (Δx_m, Δy_m) is the mean motion vector of all blocks, and σ_m is the motion standard deviation;
S3.3 Compute w_i = ac_i·w_r + sc_i·w_s + mc_i·w_m, where (w_r, w_s, w_m) are the weights of the three cues;
S3.4 Combine the sub-block positions weighted by w_i to obtain the current position of the tracked target.
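The cue computation and fusion of steps S3.1 to S3.4 can be sketched as follows. The exact formulas for sc_i and mc_i are not reproduced above, so Gaussian penalties on the distance from the block center (scaled by σ_s) and on the deviation from the mean motion vector (scaled by σ_m) are assumed here, the final position is taken as the w_i-weighted average of the sub-block centers, and the gating thresholds of S3.1 are likewise illustrative.

import numpy as np

def gate_motion(blocks, max_disp=20.0, max_angle=np.pi / 3):
    """S3.1: drop blocks whose displacement is too large or whose direction
    deviates too far from the mean motion direction (thresholds assumed)."""
    mot = np.array([b['motion'] for b in blocks], dtype=float)
    mean_dir = mot.mean(axis=0)
    keep = []
    for b, m in zip(blocks, mot):
        disp = np.linalg.norm(m)
        cos = np.dot(m, mean_dir) / (disp * np.linalg.norm(mean_dir) + 1e-12)
        if disp <= max_disp and cos >= np.cos(max_angle):
            keep.append(b)
    return keep or blocks

def fuse_cues(blocks, weights=(0.8, 0.1, 0.1)):
    """S3.2-S3.4: each block is a dict with keys 'psr', 'pos' (x, y) and
    'motion' (dx, dy). Gaussian forms for the spatial and motion cues and the
    weighted-average localisation are assumptions consistent with the
    definitions of sigma_s and sigma_m."""
    psr = np.array([b['psr'] for b in blocks], dtype=float)
    pos = np.array([b['pos'] for b in blocks], dtype=float)
    mot = np.array([b['motion'] for b in blocks], dtype=float)

    ac = psr / (psr.sum() + 1e-12)                        # appearance cue: normalised PSR
    center, sigma_s = pos.mean(axis=0), pos.std() + 1e-12
    sc = np.exp(-np.sum((pos - center) ** 2, axis=1) / (2 * sigma_s ** 2))    # spatial cohesion
    mean_mot, sigma_m = mot.mean(axis=0), mot.std() + 1e-12
    mc = np.exp(-np.sum((mot - mean_mot) ** 2, axis=1) / (2 * sigma_m ** 2))  # motion consistency

    wr, ws, wm = weights
    w = wr * ac + ws * sc + wm * mc                       # S3.3
    return (w[:, None] * pos).sum(axis=0) / w.sum()       # S3.4: weighted target center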
Specifically, the visual target tracking pipeline is generally divided into several stages: appearance modeling, target motion estimation and localization, and model updating. The technical scheme adopted by the present invention in each stage of visual tracking is as follows:
Target appearance modeling stage. To make the mid-level features (i.e. sub-blocks) that represent the target appearance more expressive, saliency detection is first performed in the 1st video frame on the manually obtained target region, and the resulting saliency map is used as prior knowledge for sub-block partitioning. The saliency of an image reflects the importance of the visual features in its different parts; visually more prominent parts are easier to capture and more beneficial to target localization. Any existing saliency detection algorithm can be used; the present invention is illustrated with the FT saliency detection algorithm as an example. The saliency map obtained by the FT algorithm may contain outliers, so Gaussian smoothing is applied to remove noise points. The target region is quickly partitioned into uniform blocks, and the sub-block with the largest saliency value after Gaussian smoothing is selected as the 1st candidate sub-block. The above operation is repeated on the saliency map with the region of the 1st candidate sub-block removed, yielding the 2nd sub-block. To ensure the diversity of the sub-blocks and prevent them from concentrating in a small local region of the target, the overlap between the 2nd sub-block and the 1st candidate sub-block is computed, and the 2nd sub-block is selected as a candidate sub-block only if this overlap is below a certain threshold. The above steps are repeated to obtain the candidate sub-block set.
To adapt to the scale changes that moving objects often undergo in practical applications (moving closer or farther away), multi-scale sampling is performed on each candidate sub-block using an image pyramid. The frequency-domain response is then used to determine the sub-block scale best suited to the current frame. For the multi-scale samples of a sub-block, HOG features are extracted from the image and the FFT is applied to obtain frequency-domain samples. Each frequency-domain image sample is correlation-filtered with the Gaussian kernel at the corresponding position, the responses are used to further screen the sub-blocks, and the PSR value of each block is computed. The PSR values of all blocks at the current scale are summed, and the scale with the largest sum is taken as the scale of the current target.
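The frequency-domain correlation step can be illustrated with a simplified single-channel sketch in the style of a MOSSE filter; the actual method uses kernelized correlation filtering over HOG feature channels, so the version below (which also omits the usual cosine window and feature extraction) is only a reduced illustration of how the Gaussian-shaped target response, the FFT and the correlation response fit together.

import numpy as np

def gaussian_target(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch center."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-(((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2 * sigma ** 2)))

def train_filter(patch, target, lam=1e-2):
    """Single-channel correlation filter learned in the frequency domain
    (a MOSSE-style simplification of the kernelized filter used by the method)."""
    F, G = np.fft.fft2(patch), np.fft.fft2(target)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def correlate(H_conj, patch):
    """Response map of a new patch; its peak and PSR are used for screening."""
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))

def multiscale_responses(sample_at_scale, H_conj, scales=(0.95, 1.0, 1.05)):
    """Evaluate the filter on pyramid-resampled patches (steps S2.3-S2.4);
    sample_at_scale(s) is assumed to return the sub-block patch resampled
    to the filter's template size at scale factor s."""
    return {s: correlate(H_conj, sample_at_scale(s)) for s in scales}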
The sub-blocks obtained by the screening of this stage highlight the visual saliency of the target and adapt to its current scale change, and their number is significantly reduced compared with uniform partitioning.
Target motion estimation and localization stage. Different types of cues capture different attributes of the object. Visual tracking based on a single cue is often limited in its adaptability to the tracking scene, and this is all the more true for visual tracking based on mid-level sub-block features. To match the sub-block screening method of the target appearance modeling stage, the present invention combines the appearance cue of each sub-block, the spatial distribution cue of the sub-blocks and the motion trajectory cue of the sub-blocks to estimate the target motion. The appearance cue of a sub-block indicates its visual reliability, the spatial distribution of the sub-blocks reflects their cohesion around the target center, and the motion trajectory cue constrains the motion consistency of the sub-blocks. Among the three cues, visual reliability is primary and the spatial distribution and motion constraints are supplementary, and the final localization of the target is carried out by multi-cue weighting.
Model updating stage. Model updating in the present invention includes two aspects. First, after the current position of the target is obtained by motion estimation, the Gaussian kernel corresponding to each sub-block position of the current target is updated. Second, sub-blocks that do not satisfy spatial cohesion or motion consistency are reinitialized.
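The two update operations can be sketched as follows, assuming the running-average update that is standard for correlation-filter trackers; the learning rate and the deviation test used to decide when a sub-block must be reinitialized are illustrative assumptions.

import numpy as np

def update_model(H_old, H_new, lr=0.02):
    """Running-average update of a sub-block's frequency-domain model after
    the target position has been estimated (learning rate assumed)."""
    return (1.0 - lr) * H_old + lr * H_new

def needs_reinit(block, target_center, sigma_s, mean_motion, sigma_m, k=3.0):
    """Flag a sub-block whose position or motion deviates from the ensemble by
    more than k standard deviations (threshold k assumed); such blocks violate
    spatial cohesion or motion consistency and are reinitialized."""
    pos_dev = np.linalg.norm(np.asarray(block['pos']) - np.asarray(target_center))
    mot_dev = np.linalg.norm(np.asarray(block['motion']) - np.asarray(mean_motion))
    return pos_dev > k * sigma_s or mot_dev > k * sigma_m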
In general, compared with the prior art, the technical scheme conceived above by the present invention has the following advantages and beneficial effects:
(1) The sub-block partitioning and screening method provided by the present invention can find mid-level features that better represent the target and remove background interference; the multi-cue target motion estimation framework can make full use of the visual constraints of the mid-level features and the high-level semantic priors, so that the target is localized more accurately.
(2) By screening the multi-scale samples, the method of the present invention eliminates a large number of invalid sub-blocks, reduces information redundancy, lowers the computational cost of the algorithm, and improves the accuracy and speed of the result.
(3) The method of the present invention has simple steps, a small computational cost and accurate tracking results, and is suitable for visual target tracking in the presence of occlusion.
Brief description of the drawings
Fig. 1 is a schematic diagram of the sub-block partitioning of the invention;
Fig. 2 is a schematic diagram of sub-block and scale screening using an image pyramid and KCF tracking, following the partitioning of Fig. 1;
Fig. 3 is a schematic diagram of multi-cue target motion estimation and localization;
Fig. 4 is a flow chart of the steps of the multi-cue visual tracking method of the invention.
Specific embodiment
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the invention described below can be combined with each other as long as they do not conflict.
Fig. 4 shows the flow chart of the steps of the multi-cue visual tracking method, in which steps 1 and 2 are carried out in the target modeling stage and step 3 is carried out in the target motion estimation and localization stage.
Step 1: sub-block partitioning, as shown in Fig. 1.
Step 1.1 Acquire the 1st frame image and obtain the target region manually. Uniformly partition the target region.
Step 1.2 Take the target region image as the input of a saliency detection algorithm (e.g. the FT algorithm) to obtain the saliency map of the target image.
Step 1.3 Apply Gaussian smoothing to the saliency map to remove noise points.
Step 1.4 Take the position (x, y) of the maximum point of the saliency map, determine the block containing (x, y), and add it to the candidate sub-block set. Update the saliency map.
Step 1.5 Repeat step 1.4 to obtain a new sub-block. Compute its overlap with the sub-blocks in the candidate sub-block set; if all overlaps are below the threshold τ, add the new sub-block to the candidate sub-block set.
Step 2: sub-block scale screening, as shown in Fig. 2.
Step 2.1 Determine the search region range and the search region template size of each sub-block.
Step 2.2 Initialize a Gaussian kernel according to the search region range and the search region template size.
Step 2.3 Scale the region samples of each sub-block to the search region template size, extract HOG features, and apply the FFT to obtain the multi-scale samples of the sub-block.
Step 2.4 Perform correlation filtering between the multi-scale samples of the sub-block and the corresponding Gaussian kernel to obtain the frequency-domain response, retain the sub-blocks with larger frequency-domain responses, and update the candidate sub-block set.
Step 2.5 Compute the PSR (Peak-to-Sidelobe Ratio) value of each sample according to the formula PSR = (g_max - μ_s) / σ_s, and sum the PSR values of all sub-blocks at each scale, where g_max is the peak of the response, s is the sidelobe region of the response, and μ_s and σ_s are respectively the mean and standard deviation of the sidelobe region.
Step 2.6 Determine the best scale, i.e. the scale with the largest summed PSR value.
Step 3: multi-cue motion estimation and target localization, as shown in Fig. 3.
Step 3.1 Judge the motion of each sub-block. Reject from the candidate sub-block set the sub-blocks whose displacement between adjacent frames is too large or whose motion direction deviates too much from the mean displacement direction between adjacent frames.
Step 3.2 Compute the appearance cue ac_i, spatial distribution cue sc_i and motion trajectory cue mc_i of each sub-block, with the appearance cue given by ac_i = PSR_i, where PSR_i is the normalized PSR value of the i-th block, M denotes the number of sub-blocks remaining after step 3.1, (x_i, y_i) is the coordinate of the i-th block, (x_s, y_s) is the center of all blocks, σ_s is the distance standard deviation, (Δx_i, Δy_i) is the motion vector of the i-th block, (Δx_m, Δy_m) is the mean motion vector of all blocks, and σ_m is the motion standard deviation.
Step 3.3 Compute w_i = ac_i·w_r + sc_i·w_s + mc_i·w_m, where (w_r, w_s, w_m) are the weights of the three cues, taken as (0.8, 0.1, 0.1) in this example.
Step 3.4 Combine the sub-block positions weighted by w_i to obtain the target center position.
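As a small numerical illustration of steps 3.3 and 3.4 (the cue values and block centers below are made up, and reading step 3.4 as a weighted average of the block centers is an assumption):

import numpy as np

# three surviving blocks with illustrative cue values
ac = np.array([0.5, 0.3, 0.2])                    # appearance cues (normalised PSR)
sc = np.array([0.9, 0.7, 0.4])                    # spatial distribution cues
mc = np.array([0.8, 0.9, 0.5])                    # motion trajectory cues
pos = np.array([[40.0, 52.0], [58.0, 50.0], [49.0, 70.0]])   # block centers (x, y)

wr, ws, wm = 0.8, 0.1, 0.1                        # cue weights used in this example
w = wr * ac + ws * sc + wm * mc                   # step 3.3: per-block weights [0.57, 0.40, 0.25]
center = (w[:, None] * pos).sum(axis=0) / w.sum() # step 3.4: weighted target center
print(center)                                     # approximately [47.7, 55.0] for these numbers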
Step 4: model updating.
Step 4.1 Update the Gaussian kernel corresponding to each sub-block position according to the current-frame target position obtained in step 3.4.
Step 4.2 Reinitialize the sub-blocks that do not satisfy spatial cohesion or motion consistency.
It will be readily understood by those skilled in the art that the foregoing is only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent substitution and improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (5)

1. A multi-cue visual tracking method based on adaptive sub-block screening, characterized by comprising the following steps:
S1. acquiring image information, dividing the image to obtain a target region, uniformly partitioning the target region, performing saliency detection on the target region image, and obtaining a candidate sub-block set according to the degree of visual saliency;
S2. performing multi-scale sampling on the candidate sub-blocks in the candidate sub-block set obtained in step S1, applying kernelized correlation filtering to the samples, retaining the sub-blocks with larger frequency-domain responses in the candidate sub-block set, determining the scale of these sub-blocks, and updating the candidate sub-block set;
S3. performing motion estimation on the sub-blocks in the candidate sub-block set, computing the appearance cue ac_i, spatial distribution cue sc_i and motion trajectory cue mc_i of each sub-block, and thereby computing the current position of the tracked target;
S4. using the current position of the tracked target obtained in step S3, updating the Gaussian kernel corresponding to each sub-block position in the current target region, and reinitializing the sub-blocks that do not satisfy spatial cohesion or motion consistency.
2. The method of claim 1, characterized in that in step S1, the process of dividing the target region into sub-blocks to obtain the candidate sub-block set specifically comprises the following steps:
S1.1 acquiring a frame of image, obtaining the target region, and uniformly partitioning the target region;
S1.2 taking the target region image as the input of a saliency detection algorithm to obtain the saliency map of the target region;
S1.3 applying Gaussian smoothing to the saliency map obtained in step S1.2 to remove noise points;
S1.4 taking the position (x, y) of the maximum point of the saliency map, determining the sub-block containing (x, y) according to the partitioning of step S1.1, and adding this sub-block to the candidate sub-block set;
S1.5 updating the saliency map by removing the sub-block containing the maximum point found in step S1.4, repeating step S1.4 to obtain a new sub-block, computing its overlap with the sub-blocks in the candidate sub-block set, and, if all overlaps are below a threshold τ, adding the new sub-block to the candidate sub-block set.
3. The method of claim 1 or 2, characterized in that in step S1.2, the saliency detection algorithm is the FT algorithm.
4. The method of claim 3, characterized in that in step S2, performing scale screening on the sampled candidate sub-blocks specifically comprises the following steps:
S2.1 determining the search region range and the search region template size of each sub-block in the candidate sub-block set;
S2.2 initializing a Gaussian kernel according to the search region range and the search region template size;
S2.3 scaling the region samples of each sub-block to the search region template size, extracting HOG features, and applying the FFT to obtain the multi-scale samples of the sub-block;
S2.4 performing correlation filtering between the multi-scale samples of the sub-block and the corresponding Gaussian kernel to obtain the frequency-domain response, removing the sub-blocks with lower frequency-domain responses, and updating the candidate sub-block set;
S2.5 computing the PSR value of each sample according to the formula PSR = (g_max - μ_s) / σ_s, and summing the PSR values of all sub-blocks at each scale, where g_max is the peak of the response, s is the sidelobe region of the response, and μ_s and σ_s are respectively the mean and standard deviation of the sidelobe region;
S2.6 comparing the summed PSR values and taking the scale of the sub-blocks with the largest summed PSR value as the best scale.
5. The method of claim 4, characterized in that in step S3, the specific steps of performing motion estimation on the sub-blocks in the candidate sub-block set are as follows:
S3.1 judging the motion of each sub-block, and rejecting from the candidate sub-block set the sub-blocks whose displacement between adjacent image frames is too large or whose motion direction deviates too much from the mean displacement direction between adjacent image frames;
S3.2 computing the appearance cue ac_i, spatial distribution cue sc_i and motion trajectory cue mc_i of each sub-block, with the appearance cue given by ac_i = PSR_i, where PSR_i is the normalized PSR value of the i-th block, M denotes the number of sub-blocks remaining after step S3.1, (x_i, y_i) is the coordinate of the i-th block, (x_s, y_s) is the center of all blocks, σ_s is the distance standard deviation, (Δx_i, Δy_i) is the motion vector of the i-th block, (Δx_m, Δy_m) is the mean motion vector of all blocks, and σ_m is the motion standard deviation;
S3.3 computing w_i = ac_i·w_r + sc_i·w_s + mc_i·w_m, where (w_r, w_s, w_m) are the weights of the three cues;
S3.4 combining the sub-block positions weighted by w_i to obtain the current position of the tracked target.
CN201710035427.9A 2017-01-17 2017-01-17 A multi-cue visual tracking method based on adaptive sub-block screening Expired - Fee Related CN106898015B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710035427.9A CN106898015B (en) 2017-01-17 2017-01-17 A multi-cue visual tracking method based on adaptive sub-block screening

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710035427.9A CN106898015B (en) 2017-01-17 2017-01-17 A multi-cue visual tracking method based on adaptive sub-block screening

Publications (2)

Publication Number Publication Date
CN106898015A true CN106898015A (en) 2017-06-27
CN106898015B CN106898015B (en) 2019-09-24

Family

ID=59197898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710035427.9A Expired - Fee Related CN106898015B (en) 2017-01-17 2017-01-17 A multi-cue visual tracking method based on adaptive sub-block screening

Country Status (1)

Country Link
CN (1) CN106898015B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118071A (en) * 2015-08-04 2015-12-02 山东大学 Novel video tracking method based on self-adaptive partitioning
CN105654139A (en) * 2015-12-31 2016-06-08 北京理工大学 Real-time online multi-target tracking method adopting temporal dynamic appearance model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHENG ZHU et al.: "STD: A Stereo Tracking Dataset for Evaluating Binocular Tracking Algorithms", Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics *
PAN Zhenfu et al.: "Kernelized correlation filter target tracking method with multi-scale estimation", Laser & Optoelectronics Progress *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767401B (en) * 2017-10-16 2020-01-14 武汉沃德自动化技术有限公司 Infrared target real-time tracking method and device based on nuclear correlation filtering
CN107767401A (en) * 2017-10-16 2018-03-06 武汉沃德自动化技术有限公司 Infrared target method for real time tracking and device based on core correlation filtering
CN107833240A (en) * 2017-11-09 2018-03-23 华南农业大学 The target trajectory extraction of multi-track clue guiding and analysis method
CN107833240B (en) * 2017-11-09 2020-04-17 华南农业大学 Target motion trajectory extraction and analysis method guided by multiple tracking clues
CN108198209A (en) * 2017-12-22 2018-06-22 天津理工大学 It is blocking and dimensional variation pedestrian tracking algorithm
CN108198209B (en) * 2017-12-22 2020-05-01 天津理工大学 People tracking method under the condition of shielding and scale change
CN108053425A (en) * 2017-12-25 2018-05-18 北京航空航天大学 A kind of high speed correlation filtering method for tracking target based on multi-channel feature
CN108985153A (en) * 2018-06-05 2018-12-11 成都通甲优博科技有限责任公司 A kind of face recognition method and device
CN109146918A (en) * 2018-06-11 2019-01-04 西安电子科技大学 A kind of adaptive related objective localization method based on piecemeal
CN108961226A (en) * 2018-06-21 2018-12-07 安徽工业大学 A kind of method of insulator target following in transmission line-oriented inspection video
CN109711431A (en) * 2018-11-27 2019-05-03 哈尔滨工业大学(深圳) The method for tracking target of local block convolution, system and storage medium at one
CN109864806A (en) * 2018-12-19 2019-06-11 江苏集萃智能制造技术研究所有限公司 The Needle-driven Robot navigation system of dynamic compensation function based on binocular vision
CN109685831B (en) * 2018-12-20 2020-08-25 山东大学 Target tracking method and system based on residual layered attention and correlation filter
CN109685831A (en) * 2018-12-20 2019-04-26 山东大学 Method for tracking target and system based on residual error layering attention and correlation filter
CN110246155A (en) * 2019-05-17 2019-09-17 华中科技大学 One kind being based on the alternate anti-shelter target tracking of model and system
CN110246155B (en) * 2019-05-17 2021-05-18 华中科技大学 Anti-occlusion target tracking method and system based on model alternation
CN110322473A (en) * 2019-07-09 2019-10-11 四川大学 Target based on significant position is anti-to block tracking
CN111860189A (en) * 2020-06-24 2020-10-30 北京环境特性研究所 Target tracking method and device
CN111860189B (en) * 2020-06-24 2024-01-19 北京环境特性研究所 Target tracking method and device
CN112614154A (en) * 2020-12-08 2021-04-06 深圳市优必选科技股份有限公司 Target tracking track obtaining method and device and computer equipment
CN112614154B (en) * 2020-12-08 2024-01-19 深圳市优必选科技股份有限公司 Target tracking track acquisition method and device and computer equipment

Also Published As

Publication number Publication date
CN106898015B (en) 2019-09-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190924

Termination date: 20210117