CN107146238B - Moving target tracking method based on preferred feature blocks - Google Patents

Moving target tracking method based on preferred feature blocks

Info

Publication number
CN107146238B
Authority
CN
China
Legal status
Active
Application number
CN201710269627.0A
Other languages
Chinese (zh)
Other versions
CN107146238A (en)
Inventor
刘贵喜
秦耀龙
高美
高海玲
冯煜
Current Assignee
Xian University of Electronic Science and Technology
Original Assignee
Xian University of Electronic Science and Technology
Priority date: 2017-04-24
Filing date: 2017-04-24
Publication date: 2019-10-11
Application filed by Xian University of Electronic Science and Technology
Priority to CN201710269627.0A
Publication of CN107146238A
Application granted
Publication of CN107146238B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning


Abstract

The present invention discloses a visual target tracking method based on preferred feature blocks, comprising: 1. reading the video to be tracked; 2. initializing; 3. partitioning into blocks and detecting feature points; 4. screening feature points; 5. screening sub-blocks; 6. computing the target area response map; 7. updating the target area state; 8. updating the target area classifier; 9. outputting the target area information; 10. judging whether all frames of the video to be tracked have been processed; 11. ending the tracking of the moving target. The present invention offers good real-time performance and high precision during moving target tracking, and can track a moving target stably under occlusion and severe scale variation.

Description

Moving target tracking method based on preferred feature blocks
Technical field
The invention belongs to the field of computer technology, and further relates to a moving target tracking method based on preferred feature blocks in the field of computer vision tracking. The present invention can be used to track moving targets such as vehicles, tanks, and pedestrians in a video scene in real time.
Background art
Moving target tracking methods are broadly divided into single-target tracking and multi-target tracking. Single-target tracking continuously tracks a single target of interest in a video scene and obtains the target's position in each video frame, whereas multi-target tracking tracks multiple targets in the scene. In recent years, the tracking methods with relatively high performance have mainly been STRUCK, KCF, and STC. STRUCK converts the tracking problem into a classification problem and trains samples online to update the object model; however, the training samples are not necessarily consistent with the real target, which degrades classification. The KCF tracker uses kernelized correlation filtering, completing sample training with circulant matrices and the Fourier transform to obtain an efficient object classifier; however, this method cannot cope with target occlusion, and tracking easily fails when the target is occluded. The STC tracking algorithm models the spatial relationship between the target and its context and copes well with occlusion. Visual target tracking generally faces complex scene illumination, target occlusion, scale variation, and target deformation, which limit the precision and robustness of tracking algorithms.
In its patent application "High-speed automatic multi-target tracking based on kernelized correlation filtering" (application number 201410418797.7, publication number CN104200237 A), Zhejiang Shenghui Lighting Co., Ltd. discloses a high-speed automatic multi-target tracking method based on kernelized correlation filtering. The method uses a ridge-regression training scheme and trains target samples with an accelerated Fourier transform to obtain a classifier, completing fast training over a large number of samples and yielding a high-performance object classifier. Its remaining shortcoming is that it does not consider occlusion of the target across successive frames; when the target is occluded, tracking easily fails, so the target cannot be tracked over the long term.
In its patent application "Novel video tracking method based on a partition strategy" (application number 201510471102.6, publication number CN105139418 A), Shandong University discloses a novel partition-based video tracking method. The method divides the target area into several small blocks and compares the color histogram of each block with those of the surrounding blocks to judge whether the target is occluded; when occlusion occurs, the weight of the occluded block is reduced in the subsequent particle filter, reducing the influence of occlusion on tracking. Its remaining shortcoming is that color histograms are not robust: when illumination changes, the target is easily lost and tracking fails.
Summary of the invention
The object of the present invention is to overcome the above shortcomings of the prior art and propose a moving target tracking method based on preferred feature blocks.
The idea of the invention is as follows. Feature point pairs are obtained from detected feature points using optical flow tracking, and the points are screened according to two feature-point criteria to obtain reliable blocks. Kernelized correlation filtering (KCF) yields the response map, the peak-to-sidelobe ratio (PSR), and the object classifier; the response map serves as the feature-point weights, from which the translation change and scale change are computed. The PSR is used to judge the occlusion state of the target, and a different strategy is applied depending on that state, achieving continuous, robust tracking of the target. This improves the algorithm's ability to cope with occlusion and scale variation and enhances its robustness and precision.
To achieve the above goals, the specific steps of the present invention are as follows (a compact code sketch of the overall loop follows step (11) below):
(1) Read the moving-target video stream to be tracked from the camera;
(2) Initialize:
(2a) read the first frame image from the video stream to be tracked;
(2b) outline the moving target to be tracked with the mouse on the first frame image;
(2c) partition the moving target area to be tracked into blocks using the partition method, obtaining the sub-block regions of the target area;
(2d) using the FAST feature detector, detect feature points in each sub-block of the target area of the first frame image, and take all detected feature points as the initial feature points;
(2e) using the training method of the kernelized correlation filtering (KCF) tracker, train the target area sample to obtain the classifier of the target area on the first frame image, and take it as the initial target area classifier;
(3) Detect feature points:
(3a) read a frame image from the video stream to be tracked;
(3b) partition the target area obtained on the read frame image into blocks using the partition method, obtaining the sub-block regions of the target area on that frame;
(3c) using the FAST feature detector, detect each sub-block region of the target area, obtaining the feature points of each sub-block region;
(4) Screen feature points:
(4a) using forward optical flow tracking, for each feature point on the currently read image frame find the one-to-one corresponding feature point on the previously read image frame to form a feature point pair; all pairs form a feature point set stored with the currently read image frame;
(4b) compute the forward-backward error of each feature point pair in the feature point set according to the following formula:

FB(P_l^k) = \| \hat{P}_l^k - P_l^k \|_2

where FB(P_l^k) denotes the forward-backward error of the k-th feature point pair of the l-th sub-block region in the currently read image frame, \hat{P}_l^k denotes the k-th feature point of the l-th sub-block in the currently read image frame c, P_l^k denotes the corresponding k-th feature point of the l-th sub-block in the previously read image frame b, \hat{P}_l^k and P_l^k form a feature point pair, l = 1, 2, ..., M, k = 1, 2, ..., Q_l, c denotes the current frame, b denotes the previous frame, M denotes the total number of sub-blocks into which the target area is divided, and Q_l denotes the total number of feature points in the l-th sub-block region;
(4c) compute the normalized correlation of each feature point pair in the feature point set according to the following formula:

NCC(P_l^k) = \frac{(I_b(P_l^k) - \mu_b(l))(I_c(\hat{P}_l^k) - \mu_c(l))}{\delta_b(l)\,\delta_c(l)}

where NCC(P_l^k) denotes the normalized correlation of the k-th feature point pair of the l-th sub-block region on the currently read image frame, I_b(.) and I_c(.) denote pixel values in frames b and c, Q_l denotes the total number of feature points of the l-th sub-block region, μ_b(l) and δ_b(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the previously read image frame b, and μ_c(l) and δ_c(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the currently read image frame c;
(4d) compute the mean normalized correlation of all feature points according to the following formula:

\mu = \frac{1}{\sum_{l=1}^{M} Q'_l} \sum_{l=1}^{M} \sum_{k=1}^{Q'_l} NCC(P_l^k)

where μ denotes the mean normalized correlation of all feature points, M denotes the number of sub-block regions into which the target area is divided, and Q'_l denotes the number of tracked point pairs in the l-th sub-block;
(4e) select from the feature points all points that satisfy the screening condition and place them in the reliable feature point set;
(5) Obtain the preferred feature blocks:
(5a) select from the sub-block regions all sub-blocks that satisfy the precision condition and place them in the preferred block set;
(5b) merge all feature points in the preferred block set to obtain the preferred feature point set;
(6) Compute the response map and peak-to-sidelobe ratio of the target area:
(6a) detect the target area with the target area classifier to obtain the response map of the target area, and take the response map values as the feature point weights;
(6b) compute the peak-to-sidelobe ratio of the response map of the target area according to the following formula:

PSR(o) = \frac{\max(R(o)) - \mu(R(o))}{\sigma(R(o))}

where PSR(o) denotes the peak-to-sidelobe ratio of the response map of target area o, max(R(o)) denotes the maximum value of response map R(o), μ(R(o)) denotes the mean of R(o), and σ(R(o)) denotes the standard deviation of R(o);
(7) Update the target area state:
(7a) compute the scale change of the target area according to the following formula:

S(I_c) = \frac{\sum_{i \neq j} w_i w_j \, \| \hat{P}_i^c - \hat{P}_j^c \| / \| \hat{P}_i^b - \hat{P}_j^b \|}{\sum_{i \neq j} w_i w_j}

where S(I_c) denotes the scale change of the target area on the currently read image frame, \hat{P}_i^c and \hat{P}_j^c denote feature points of the preferred feature point set on the currently read image frame, \hat{P}_i^b and \hat{P}_j^b denote the corresponding feature points of the preferred feature point set on the previously read image frame, and w_i, w_j denote the weights of the feature points of the preferred feature point set;
(7b) compute the size of the target area according to the following formulas:

w_c = w_b \cdot S(I_c)
h_c = h_b \cdot S(I_c)

where w_c and h_c denote the width and height of the target area on the currently read image frame, and w_b and h_b denote the width and height of the target area on the previously read image frame;
(7c) compute the translation of the target area according to the following formula:

D(I_c) = \eta \, D(I_b) + (1 - \eta) \, \frac{\sum_i w_i \left( (u_i^c, v_i^c) - (u_i^b, v_i^b) \right)}{\sum_i w_i}

where D(I_c) denotes the translation of the target area on the currently read image frame, η denotes the transformation factor, taken as 0.35, D(I_b) denotes the translation of the target area on the previously read image frame, w_i denotes the weight of the i-th reliable feature point, (u_i^c, v_i^c) denotes the coordinate of the i-th reliable feature point on the currently read image frame, u being the abscissa and v the ordinate of the feature point, and (u_i^b, v_i^b) denotes the coordinate of the i-th reliable feature point on the previously read image frame;
(7d) compute the target area position according to the following formula:

C_c = C_b + D(I_c)

where C_c denotes the position of the target on the currently read image frame and C_b denotes the position of the target on the previously read image frame;
(7e) update the target area state on the currently read image frame with the target area size and position;
(8) Update the target area classifier:
(8a) using the KCF training method, train the target area sample to obtain the classifier of the moving target area on the currently read image frame;
(8b) judge whether the target area classifier satisfies the update condition; if so, execute (8c); otherwise execute (8d);
(8c) the target area is not occluded: update the target area classifier, replacing the classifier obtained on the previously read image frame with the classifier of the target area on the currently read image frame obtained in step (8a);
(8d) the target area is occluded: do not update the object classifier, keeping the classifier obtained on the previously read image frame unchanged;
(9) Output the target area state information onto the read frame image in the form of a rectangular box;
(10) Judge whether all frames of the video to be tracked have been processed; if so, execute step (11); otherwise execute step (3);
(11) End the tracking of the moving target.
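Viewed as a whole, steps (1) through (11) form the loop sketched below in C++/OpenCV, the environment used in the simulation experiments of the invention. The helper functions are hypothetical stand-ins named after the steps; sketches of several of their bodies accompany the detailed description further down.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helpers standing in for steps (2)-(9); sketches of several
// of their bodies are given alongside the detailed description below.
std::vector<cv::Rect>    partitionIntoSubBlocks(const cv::Rect& target, int side);
std::vector<cv::Point2f> detectFastPerBlock(const cv::Mat& gray,
                                            const std::vector<cv::Rect>& blocks,
                                            int threshold);
void trackForward(const cv::Mat& prevGray, const cv::Mat& currGray,
                  const std::vector<cv::Point2f>& prevPts,
                  std::vector<cv::Point2f>& currPts, std::vector<uchar>& status);

void trackVideo(cv::VideoCapture& cap, cv::Rect box) {
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts, currPts;
    cap >> frame;                                          // steps (1)-(2): first frame
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);
    std::vector<cv::Rect> blocks = partitionIntoSubBlocks(box, 10);
    prevPts = detectFastPerBlock(prevGray, blocks, 20);    // step (2d): initial points
    // step (2e): train the initial KCF classifier on `box` (omitted here)
    while (cap.read(frame)) {                              // step (10): loop over frames
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        blocks = partitionIntoSubBlocks(box, 10);          // step (3): re-partition
        std::vector<uchar> status;
        trackForward(prevGray, gray, prevPts, currPts, status);  // step (4a)
        // steps (4b)-(5): screen the pairs, keep the preferred blocks/points
        // steps (6)-(7): response map, PSR, scale and translation, update `box`
        // step (8): retrain the classifier only when PSR(o) >= T
        cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);     // step (9): output
        prevGray = gray.clone();
        prevPts = detectFastPerBlock(gray, blocks, 20);    // fresh points for next frame
    }
}                                                          // step (11): tracking ends
```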
Compared with the prior art, the present invention has the following advantages.
First, because the invention detects feature points with the FAST feature detector, it overcomes the weak descriptive power and slow detection speed of feature points in the prior art, so the invention has good real-time performance during moving target tracking.
Second, because the invention partitions the target area into blocks, it overcomes the prior art's inability to keep tracking under target occlusion, so the invention can track a moving target well while it is occluded.
Third, because the invention obtains the feature point weights from the image detection response of the target area classifier, it overcomes the inaccurate description of feature point weights in the prior art, so the invention can obtain a high-precision target size.
Fourth, because the invention uses forward optical flow tracking, it overcomes the low feature point tracking precision of the prior art, so the invention can estimate the moving target's displacement with high precision.
Description of the drawings
Fig. 1 is the flowchart of the invention;
Fig. 2 is a comparison of the center location error curves of tracking the moving target in the davidce3 video sequence in the simulation experiments of the invention;
Fig. 3 is a comparison of the overlap rate curves of tracking the moving target in the davidce3 video sequence in the simulation experiments of the invention;
Fig. 4 is a comparison of the center location error curves of tracking the moving target in the Jogging video sequence in the simulation experiments of the invention;
Fig. 5 is a comparison of the overlap rate curves of tracking the moving target in the Jogging video sequence in the simulation experiments of the invention;
Fig. 6 is a comparison of the center location error curves of tracking the moving target in the CarScale video sequence in the simulation experiments of the invention;
Fig. 7 is a comparison of the overlap rate curves of tracking the moving target in the CarScale video sequence in the simulation experiments of the invention.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1, the specific steps of the invention are described in more detail.
Step 1: read the moving-target video stream to be tracked from the camera.
Step 2: initialize.
Read the first frame image from the video stream to be tracked.
Outline the moving target to be tracked with the mouse on the first frame image.
Using the partition method, partition the moving target area to be tracked into blocks, obtaining the sub-block regions of the target area.
First, following the rule that the target area must divide into an integer number of sub-block regions, each with side length greater than or equal to 10 pixels, extend the four boundaries of the moving target area to be tracked simultaneously to obtain the expanded target area.
Second, partition the expanded target area to obtain the sub-block regions of the target area; each sub-block region is a square with side length greater than or equal to 10 pixels, and all sub-block regions have the same size.
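A minimal C++/OpenCV sketch of this partition rule follows. The roughly symmetric split of the boundary expansion is an assumption, since the text does not specify how the expansion is distributed across the four sides.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Grow the target box to the nearest multiple of the sub-block side
// (>= 10 px), expanding all four boundaries, then emit the square
// sub-block regions.
std::vector<cv::Rect> partitionIntoSubBlocks(const cv::Rect& target, int side = 10) {
    int w = ((target.width  + side - 1) / side) * side;   // round width up
    int h = ((target.height + side - 1) / side) * side;   // round height up
    int x = target.x - (w - target.width) / 2;            // assumed symmetric split
    int y = target.y - (h - target.height) / 2;
    std::vector<cv::Rect> blocks;
    for (int by = 0; by < h / side; ++by)
        for (int bx = 0; bx < w / side; ++bx)
            blocks.push_back(cv::Rect(x + bx * side, y + by * side, side, side));
    return blocks;
}
```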
Using the FAST feature detector, detect feature points in each sub-block of the target area of the first frame image, and take all detected feature points as the initial feature points.
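The per-sub-block FAST detection can be sketched as below; the FAST threshold of 20 is an assumed value, not one given in the text.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Run FAST inside each sub-block and collect the keypoints in image
// coordinates; these serve as the initial feature points.
std::vector<cv::Point2f> detectFastPerBlock(const cv::Mat& gray,
                                            const std::vector<cv::Rect>& blocks,
                                            int threshold = 20) {
    std::vector<cv::Point2f> points;
    for (size_t i = 0; i < blocks.size(); ++i) {
        cv::Rect roi = blocks[i] & cv::Rect(0, 0, gray.cols, gray.rows); // clip to image
        if (roi.area() == 0) continue;
        std::vector<cv::KeyPoint> kps;
        cv::FAST(gray(roi), kps, threshold, true);        // non-max suppression on
        for (size_t k = 0; k < kps.size(); ++k)
            points.push_back(cv::Point2f(kps[k].pt.x + roi.x, kps[k].pt.y + roi.y));
    }
    return points;
}
```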
Using the method for coring correlation filtering tracker KCF training sample, training objective area sample obtains first frame figure As the classifier of upper motion target area, using target area classifier as initialization motion target area classifier.
Step 3: detect feature points.
Read a frame image from the video stream to be tracked.
Using the partition method, partition the target area obtained on the read frame image into blocks, obtaining the sub-block regions of the target area on that frame.
First, following the rule that the target area must divide into an integer number of sub-block regions, each with side length greater than or equal to 10 pixels, extend the four boundaries of the moving target area to be tracked simultaneously to obtain the expanded target area.
Second, partition the expanded target area to obtain the sub-block regions of the target area; each sub-block region is a square with side length greater than or equal to 10 pixels, and all sub-block regions have the same size.
Using the FAST feature detector, detect each sub-block region of the target area, obtaining the feature points of each sub-block region.
Step 4: screen feature points.
Using forward optical flow tracking, for each feature point on the currently read image frame find the one-to-one corresponding feature point on the previously read image frame to form a feature point pair; all pairs form a feature point set stored with the currently read image frame.
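A sketch of the forward tracking step with OpenCV's pyramidal Lucas-Kanade optical flow; the window size and pyramid depth are assumed defaults, not values given in the text.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Track last frame's feature points into the current frame; pairs with
// status == 0 failed to track and form no feature point pair.
void trackForward(const cv::Mat& prevGray, const cv::Mat& currGray,
                  const std::vector<cv::Point2f>& prevPts,
                  std::vector<cv::Point2f>& currPts,
                  std::vector<uchar>& status) {
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err,
                             cv::Size(21, 21), 3);
}
```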
Compute the forward-backward error of each feature point pair in the feature point set according to the following formula:

FB(P_l^k) = \| \hat{P}_l^k - P_l^k \|_2

where FB(P_l^k) denotes the forward-backward error of the k-th feature point pair of the l-th sub-block region in the currently read image frame, \hat{P}_l^k denotes the k-th feature point of the l-th sub-block in the currently read image frame c, P_l^k denotes the corresponding k-th feature point of the l-th sub-block in the previously read image frame b, \hat{P}_l^k and P_l^k form a feature point pair, l = 1, 2, ..., M, k = 1, 2, ..., Q_l, c denotes the current frame, b denotes the previous frame, M denotes the total number of sub-blocks into which the target area is divided, and Q_l denotes the total number of feature points in the l-th sub-block region.
Compute the normalized correlation of each feature point pair in the feature point set according to the following formula:

NCC(P_l^k) = \frac{(I_b(P_l^k) - \mu_b(l))(I_c(\hat{P}_l^k) - \mu_c(l))}{\delta_b(l)\,\delta_c(l)}

where NCC(P_l^k) denotes the normalized correlation of the k-th feature point pair of the l-th sub-block region on the currently read image frame, I_b(.) and I_c(.) denote pixel values in frames b and c, Q_l denotes the total number of feature points of the l-th sub-block region, μ_b(l) and δ_b(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the previously read image frame b, and μ_c(l) and δ_c(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the currently read image frame c.
Compute the mean normalized correlation of all feature points according to the following formula:

\mu = \frac{1}{\sum_{l=1}^{M} Q'_l} \sum_{l=1}^{M} \sum_{k=1}^{Q'_l} NCC(P_l^k)

where μ denotes the mean normalized correlation of all feature points, M denotes the number of sub-block regions into which the target area is divided, and Q'_l denotes the number of tracked point pairs in the l-th sub-block.
Select from the feature points all points that satisfy the screening condition and place them in the reliable feature point set. The screening condition is:

NCC(P_l^k) > μ and FB(P_l^k) < Q_FB

where Q_FB denotes the forward-backward error threshold of the feature points, chosen according to the target's movement speed or the camera's movement speed as an integer in the range 8 to 12: the smaller the movement speed, the smaller the value, and vice versa.
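The whole of step 4 can be sketched as follows: the FB error is taken as the displacement between the paired points and the NCC uses the sub-block's feature-pixel statistics, per the formulas above; the default threshold qFB = 10 is an assumed mid-range value, and the images are assumed 8-bit grayscale.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Keep a pair when its NCC exceeds the mean mu and its FB error stays
// below qFB. blockId[i] is the sub-block index of pair i.
std::vector<int> screenPairs(const cv::Mat& prevGray, const cv::Mat& currGray,
                             const std::vector<cv::Point2f>& prevPts,
                             const std::vector<cv::Point2f>& currPts,
                             const std::vector<int>& blockId, int numBlocks,
                             double qFB = 10.0) {
    const int n = (int)prevPts.size();
    std::vector<double> sumB(numBlocks, 0), sumC(numBlocks, 0);
    std::vector<double> sqB(numBlocks, 0), sqC(numBlocks, 0);
    std::vector<int> cnt(numBlocks, 0);
    for (int i = 0; i < n; ++i) {   // per-block mean/stddev of feature pixels
        double pb = prevGray.at<uchar>(cvRound(prevPts[i].y), cvRound(prevPts[i].x));
        double pc = currGray.at<uchar>(cvRound(currPts[i].y), cvRound(currPts[i].x));
        int l = blockId[i];
        sumB[l] += pb; sqB[l] += pb * pb;
        sumC[l] += pc; sqC[l] += pc * pc;
        ++cnt[l];
    }
    std::vector<double> ncc(n, 0.0);
    double mu = 0.0;
    for (int i = 0; i < n; ++i) {
        int l = blockId[i];
        double mB = sumB[l] / cnt[l], mC = sumC[l] / cnt[l];
        double dB = std::sqrt(std::max(sqB[l] / cnt[l] - mB * mB, 1e-9));
        double dC = std::sqrt(std::max(sqC[l] / cnt[l] - mC * mC, 1e-9));
        double pb = prevGray.at<uchar>(cvRound(prevPts[i].y), cvRound(prevPts[i].x));
        double pc = currGray.at<uchar>(cvRound(currPts[i].y), cvRound(currPts[i].x));
        ncc[i] = (pb - mB) * (pc - mC) / (dB * dC);
        mu += ncc[i];
    }
    if (n > 0) mu /= n;
    std::vector<int> reliable;      // indices of pairs passing both tests
    for (int i = 0; i < n; ++i) {
        double fb = std::hypot(currPts[i].x - prevPts[i].x,
                               currPts[i].y - prevPts[i].y);
        if (ncc[i] > mu && fb < qFB) reliable.push_back(i);
    }
    return reliable;
}
```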
Step 5: obtain the preferred feature blocks.
Select from the sub-block regions all sub-blocks that satisfy the precision condition and place them in the preferred block set.
Merge all feature points in the preferred block set to obtain the preferred feature point set.
Step 6: compute the response map and peak-to-sidelobe ratio of the target area.
Detect the target area with the target area classifier to obtain the response map of the target area, and take the response map values as the feature point weights.
Compute the peak-to-sidelobe ratio of the response map of the target area according to the following formula:

PSR(o) = \frac{\max(R(o)) - \mu(R(o))}{\sigma(R(o))}

where PSR(o) denotes the peak-to-sidelobe ratio of the response map of target area o, max(R(o)) denotes the maximum value of response map R(o), μ(R(o)) denotes the mean of R(o), and σ(R(o)) denotes the standard deviation of R(o).
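A sketch of the PSR computation with OpenCV; the small epsilon guarding the division is an added safeguard.

```cpp
#include <opencv2/opencv.hpp>

// PSR = (max - mean) / stddev of the response map, per the formula above.
double peakToSidelobeRatio(const cv::Mat& response) {
    double maxVal;
    cv::minMaxLoc(response, 0, &maxVal);
    cv::Scalar mean, stddev;
    cv::meanStdDev(response, mean, stddev);
    return (maxVal - mean[0]) / (stddev[0] + 1e-12);
}
```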
Step 7: update the target area state.
Compute the scale change of the target area according to the following formula:

S(I_c) = \frac{\sum_{i \neq j} w_i w_j \, \| \hat{P}_i^c - \hat{P}_j^c \| / \| \hat{P}_i^b - \hat{P}_j^b \|}{\sum_{i \neq j} w_i w_j}

where S(I_c) denotes the scale change of the target area on the currently read image frame, \hat{P}_i^c and \hat{P}_j^c denote feature points of the preferred feature point set on the currently read image frame, \hat{P}_i^b and \hat{P}_j^b denote the corresponding feature points of the preferred feature point set on the previously read image frame, and w_i, w_j denote the weights of the feature points of the preferred feature point set.
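A sketch of the scale estimate as a weighted mean of pairwise distance ratios, following the formula above; skipping degenerate pairs is an added safeguard, and the weight vector is assumed aligned with the point vectors.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Scale change S between the current and previous positions of the
// preferred feature points.
double scaleChange(const std::vector<cv::Point2f>& prevPts,
                   const std::vector<cv::Point2f>& currPts,
                   const std::vector<double>& w) {
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < currPts.size(); ++i)
        for (size_t j = i + 1; j < currPts.size(); ++j) {
            double dPrev = std::hypot(prevPts[i].x - prevPts[j].x,
                                      prevPts[i].y - prevPts[j].y);
            if (dPrev < 1e-6) continue;              // degenerate pair
            double dCurr = std::hypot(currPts[i].x - currPts[j].x,
                                      currPts[i].y - currPts[j].y);
            num += w[i] * w[j] * (dCurr / dPrev);
            den += w[i] * w[j];
        }
    return den > 0.0 ? num / den : 1.0;              // 1.0 means no scale change
}
```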
Compute the size of the target area according to the following formulas:

w_c = w_b \cdot S(I_c)
h_c = h_b \cdot S(I_c)

where w_c and h_c denote the width and height of the target area on the currently read image frame, and w_b and h_b denote the width and height of the target area on the previously read image frame.
Compute the translation of the target area according to the following formula:

D(I_c) = \eta \, D(I_b) + (1 - \eta) \, \frac{\sum_i w_i \left( (u_i^c, v_i^c) - (u_i^b, v_i^b) \right)}{\sum_i w_i}

where D(I_c) denotes the translation of the target area on the currently read image frame, η denotes the transformation factor, taken as 0.35, D(I_b) denotes the translation of the target area on the previously read image frame, w_i denotes the weight of the i-th reliable feature point, (u_i^c, v_i^c) denotes the coordinate of the i-th reliable feature point on the currently read image frame, u being the abscissa and v the ordinate of the feature point, and (u_i^b, v_i^b) denotes the coordinate of the i-th reliable feature point on the previously read image frame.
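A sketch of the translation estimate, blending the weighted mean displacement of the reliable points with the previous translation by eta = 0.35, following the formula above.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Translation D of the target area from the reliable feature point pairs.
cv::Point2f translationChange(const std::vector<cv::Point2f>& prevPts,
                              const std::vector<cv::Point2f>& currPts,
                              const std::vector<double>& w,
                              cv::Point2f prevD, float eta = 0.35f) {
    cv::Point2f mean(0.f, 0.f);
    double wsum = 0.0;
    for (size_t i = 0; i < currPts.size(); ++i) {
        mean += (float)w[i] * (currPts[i] - prevPts[i]);   // weighted displacement
        wsum += w[i];
    }
    if (wsum > 0.0) mean *= (float)(1.0 / wsum);
    return eta * prevD + (1.f - eta) * mean;               // blend with previous D
}
```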
Compute the target area position according to the following formula:

C_c = C_b + D(I_c)

where C_c denotes the position of the target on the currently read image frame and C_b denotes the position of the target on the previously read image frame.
Update the target area state on the currently read image frame with the target area size and position.
Step 8: update the target area classifier.
Using the KCF training method, train the target area sample to obtain the classifier of the moving target area on the currently read image frame.
(8b) Judge whether the target area classifier satisfies the update condition; if so, execute (8c); otherwise execute (8d). The update condition is:

PSR(o) ≥ T

where T denotes the update threshold, determined by the target's size and its movement speed in the camera's field of view, taken as an integer in the range 20 to 30: the smaller the target, the smaller the value; the larger the target, the larger the value; the faster the target moves, the larger the value; the slower it moves, the smaller the value.
(8c) The target area is not occluded: update the target area classifier, replacing the classifier obtained on the previously read image frame with the classifier of the target area on the currently read image frame.
(8d) The target area is occluded: do not update the object classifier, keeping the classifier obtained on the previously read image frame unchanged.
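The occlusion-gated update can be sketched against a hypothetical KcfClassifier interface; OpenCV 2.4.4 itself ships no KCF tracker, so both the interface and the default threshold T = 25 (a mid-range value of the 20 to 30 interval above) are assumptions.

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical stand-in for the KCF model used by the method.
struct KcfClassifier {
    void trainFrom(const cv::Mat& frame, const cv::Rect& target);
};

// Step 8: retrain only when the PSR says the target is unoccluded.
void maybeUpdateClassifier(KcfClassifier& clf, const cv::Mat& frame,
                           const cv::Rect& target, double psr, double T = 25.0) {
    if (psr >= T)
        clf.trainFrom(frame, target);   // unoccluded: replace the classifier
    // occluded: keep the previous classifier unchanged
}
```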
Step 9: output the target area state information.
Output the target area state information onto the currently read frame image in the form of a box.
Step 10: judge whether all frames have been processed; if so, execute step 11; otherwise execute step 3.
Step 11: moving target tracking ends.
The effect of the present invention is further described below with reference to simulation experiments.
1. Simulation conditions:
The simulation experiments were carried out with VS2010 and OpenCV 2.4.4 on a computer with an AMD A10-6700 processor and 4 GB of memory. The videos used in the experiments come from the standard test set Visual Tracker Benchmark Dataset 2013.
2. Simulation content:
The tracker of the invention (OUR) and three prior-art trackers were simulated; the three prior-art target tracking algorithms are the KCF, STRUCK and STC methods. Three groups of simulations were carried out.
In the first group of simulation experiments, the present invention and the three prior arts were used to simulate the center location error and overlap rate on the davidce3 video; the four target center location error curves are shown in Fig. 2, and the four target area overlap rate curves are shown in Fig. 3.
In the second group, the present invention and the three prior arts were used to simulate the center location error and overlap rate on the Jogging video; the four target center location error curves are shown in Fig. 4, and the four target area overlap rate curves are shown in Fig. 5.
In the third group, the present invention and the three prior arts were used to simulate the center location error and overlap rate on the CarScale video; the four target center location error curves are shown in Fig. 6, and the four target area overlap rate curves are shown in Fig. 7.
3. Analysis of simulation results:
In Figs. 2 through 7, the curves marked "+++" show the precision of the STRUCK tracking results and the curves marked "-" show the precision of the STC tracking results; the remaining two curves (their markers are shown in the figures) show the precision of the KCF tracking results and of the tracking results of the invention. The abscissa of Figs. 2, 4 and 6 is the frame number and the ordinate is the center location error; the abscissa of Figs. 3, 5 and 7 is the frame number and the ordinate is the target area overlap rate.
As can be seen from Fig. 2, the present invention and the KCF method maintain a low target center location error throughout; the error of the STRUCK method rises and then falls, and the error of the STC method increases steadily. As seen from Fig. 3, the target area overlap rates of the present invention, STRUCK and KCF each dip and then recover, while the overlap rate of STC keeps decreasing. The moving target tracked by the invention and the three prior arts is the pedestrian in the video; because the pedestrian is occluded by a utility pole from frame 100 to frame 160, the STRUCK method fails outright but gradually recovers tracking later, while the STC method fails without recovering. Overall, the tracking precision of the invention is higher and it copes better with target occlusion.
As can be seen from Fig. 4, the present invention and the KCF method keep tracking with a low center location error, while the errors of the STC and STRUCK methods increase steadily. As seen from Fig. 5, the target area overlap rates of the present invention and KCF stay at a high level, although KCF's dips and then recovers; the overlap rates of STC and STRUCK decrease steadily, down to 0. The moving target tracked by the invention and the three prior arts is the person in the white short-sleeved shirt; because the pedestrian is fully occluded by a utility pole around frame 48, the STC and STRUCK methods quickly fail and never recover, and the precision of the KCF method also drops temporarily, while the invention keeps tracking with a low target center location error and a high target overlap rate. Overall, the invention's resistance to target occlusion is the strongest, and it copes best with tracking under occlusion.
As seen from Fig. 6, the target center location errors of the invention and of the three prior arts all increase gradually, but the invention's error is always the smallest. As seen from Fig. 7, the target area overlap rates of all four decrease gradually, but the invention's decreases most slowly, maintaining the highest overlap rate of the four curves. The moving target tracked by the invention and the three prior arts is a vehicle driving from far to near, so the scale change is pronounced; as the camera gets closer to the vehicle, the target keeps growing. After frame 150, the center location errors of all four grow and the overlap rates decrease, so all four trackers lose some precision, but the invention still performs best among the four algorithms and copes better with tracking under severe target scale variation.

Claims (5)

1. A moving target tracking method based on preferred feature blocks, comprising the following steps:
(1) reading the moving-target video stream to be tracked from the camera;
(2) initializing:
(2a) reading the first frame image from the video stream to be tracked;
(2b) outlining the moving target to be tracked with the mouse on the first frame image;
(2c) partitioning the moving target area to be tracked into blocks using the partition method, obtaining the sub-block regions of the target area;
(2d) using the FAST feature detector, detecting feature points in each sub-block of the target area of the first frame image, and taking all detected feature points as the initial feature points;
(2e) using the training method of the kernelized correlation filtering (KCF) tracker, training the target area sample to obtain the classifier of the target area on the first frame image, and taking it as the initial target area classifier;
(3) detecting feature points:
(3a) reading a frame image from the video stream to be tracked;
(3b) partitioning the target area obtained on the read frame image into blocks using the partition method, obtaining the sub-block regions of the target area on that frame;
(3c) using the FAST feature detector, detecting each sub-block region of the target area, obtaining the feature points of each sub-block region;
(4) screening feature points:
(4a) using forward optical flow tracking, finding for each feature point on the currently read image frame the one-to-one corresponding feature point on the previously read image frame to form a feature point pair, all pairs forming a feature point set stored with the currently read image frame;
(4b) computing the forward-backward error of each feature point pair in the feature point set according to the following formula:

FB(P_l^k) = \| \hat{P}_l^k - P_l^k \|_2

wherein FB(P_l^k) denotes the forward-backward error of the k-th feature point pair of the l-th sub-block region in the currently read image frame, \hat{P}_l^k denotes the k-th feature point of the l-th sub-block in the currently read image frame c, P_l^k denotes the corresponding k-th feature point of the l-th sub-block in the previously read image frame b, \hat{P}_l^k and P_l^k form a feature point pair, l = 1, 2, ..., M, k = 1, 2, ..., Q_l, c denotes the current frame, b denotes the previous frame, M denotes the total number of sub-blocks into which the target area is divided, and Q_l denotes the total number of feature points in the l-th sub-block region;
(4c) computing the normalized correlation of each feature point pair in the feature point set according to the following formula:

NCC(P_l^k) = \frac{(I_b(P_l^k) - \mu_b(l))(I_c(\hat{P}_l^k) - \mu_c(l))}{\delta_b(l)\,\delta_c(l)}

wherein NCC(P_l^k) denotes the normalized correlation of the k-th feature point pair of the l-th sub-block region on the currently read image frame, I_b(.) and I_c(.) denote pixel values in frames b and c, Q_l denotes the total number of feature points of the l-th sub-block region, μ_b(l) and δ_b(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the previously read image frame b, and μ_c(l) and δ_c(l) denote the mean and standard deviation of the pixels of all feature points in the l-th sub-block region on the currently read image frame c;
(4d) computing the mean normalized correlation of all feature points according to the following formula:

\mu = \frac{1}{\sum_{l=1}^{M} Q'_l} \sum_{l=1}^{M} \sum_{k=1}^{Q'_l} NCC(P_l^k)

wherein μ denotes the mean normalized correlation of all feature points, M denotes the number of sub-block regions into which the target area is divided, and Q'_l denotes the number of tracked point pairs in the l-th sub-block;
(4e) selecting from the feature points all points that satisfy the screening condition and placing them in the reliable feature point set;
(5) obtaining the preferred feature blocks:
(5a) selecting from the sub-block regions all sub-blocks that satisfy the precision condition and placing them in the preferred block set;
(5b) merging all feature points in the preferred block set to obtain the preferred feature point set;
(6) computing the response map and peak-to-sidelobe ratio of the target area:
(6a) detecting the target area with the target area classifier to obtain the response map of the target area, taking the response map values as the feature point weights;
(6b) computing the peak-to-sidelobe ratio of the response map of the target area according to the following formula:

PSR(o) = \frac{\max(R(o)) - \mu(R(o))}{\sigma(R(o))}

wherein PSR(o) denotes the peak-to-sidelobe ratio of the response map of target area o, max(R(o)) denotes the maximum value of response map R(o), μ(R(o)) denotes the mean of R(o), and σ(R(o)) denotes the standard deviation of R(o);
(7) updating the target area state:
(7a) computing the scale change of the target area according to the following formula:

S(I_c) = \frac{\sum_{i \neq j} w_i w_j \, \| \hat{P}_i^c - \hat{P}_j^c \| / \| \hat{P}_i^b - \hat{P}_j^b \|}{\sum_{i \neq j} w_i w_j}

wherein S(I_c) denotes the scale change of the target area on the currently read image frame, \hat{P}_i^c and \hat{P}_j^c denote feature points of the preferred feature point set on the currently read image frame, \hat{P}_i^b and \hat{P}_j^b denote the corresponding feature points of the preferred feature point set on the previously read image frame, and w_i, w_j denote the weights of the feature points of the preferred feature point set;
(7b) computing the size of the target area according to the following formulas:

w_c = w_b \cdot S(I_c)
h_c = h_b \cdot S(I_c)

wherein w_c and h_c denote the width and height of the target area on the currently read image frame, and w_b and h_b denote the width and height of the target area on the previously read image frame;
(7c) computing the translation of the target area according to the following formula:

D(I_c) = \eta \, D(I_b) + (1 - \eta) \, \frac{\sum_i w_i \left( (u_i^c, v_i^c) - (u_i^b, v_i^b) \right)}{\sum_i w_i}

wherein D(I_c) denotes the translation of the target area on the currently read image frame, η denotes the transformation factor, taken as 0.35, D(I_b) denotes the translation of the target area on the previously read image frame, w_i denotes the weight of the i-th reliable feature point, (u_i^c, v_i^c) denotes the coordinate of the i-th reliable feature point on the currently read image frame, u being the abscissa and v the ordinate of the feature point, and (u_i^b, v_i^b) denotes the coordinate of the i-th reliable feature point on the previously read image frame;
(7d) computing the target area position according to the following formula:

C_c = C_b + D(I_c)

wherein C_c denotes the position of the target on the currently read image frame and C_b denotes the position of the target on the previously read image frame;
(7e) updating the target area state on the currently read image frame with the target area size and position;
(8) updating the target area classifier:
(8a) using the KCF training method, training the target area sample to obtain the classifier of the moving target area on the currently read image frame;
(8b) judging whether the target area classifier satisfies the update condition; if so, executing (8c); otherwise executing (8d);
(8c) the target area not being occluded, updating the target area classifier by replacing the classifier obtained on the previously read image frame with the classifier of the target area on the currently read image frame;
(8d) the target area being occluded, not updating the object classifier and keeping the classifier obtained on the previously read image frame unchanged;
(9) outputting the target area state information onto the read frame image in the form of a rectangular box;
(10) judging whether all frames of the video to be tracked have been processed: if so, executing step (11); otherwise executing step (3);
(11) ending the tracking of the moving target.
2. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the partition method in step (2c) and step (3b) comprises the following specific steps:
first, following the rule that the target area must divide into an integer number of sub-block regions, each with side length greater than or equal to 10 pixels, extending the four boundaries of the moving target area to be tracked simultaneously to obtain the expanded target area;
second, partitioning the expanded target area to obtain the sub-block regions of the target area, each sub-block region being a square with side length greater than or equal to 10 pixels, all sub-block regions having the same size.
3. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the screening condition in step (4e) is as follows:

NCC(P_l^k) > μ and FB(P_l^k) < Q_FB

wherein Q_FB denotes the forward-backward error threshold of the feature points.
4. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the precision condition in step (5a) is as follows:
[precision condition formula not reproduced in the source text]
5. The moving target tracking method based on preferred feature blocks according to claim 1, characterized in that the update condition in step (8b) is as follows:

PSR(o) ≥ T

wherein T denotes the update threshold.
CN201710269627.0A 2017-04-24 2017-04-24 Moving target tracking method based on preferred feature blocks Active CN107146238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710269627.0A CN107146238B (en) 2017-04-24 2017-04-24 Moving target tracking method based on preferred feature blocks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710269627.0A CN107146238B (en) 2017-04-24 2017-04-24 Moving target tracking method based on preferred feature blocks

Publications (2)

Publication Number Publication Date
CN107146238A CN107146238A (en) 2017-09-08
CN107146238B true CN107146238B (en) 2019-10-11

Family

ID=59773981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710269627.0A Active CN107146238B (en) 2017-04-24 2017-04-24 Moving target tracking method based on preferred feature blocks

Country Status (1)

Country Link
CN (1) CN107146238B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091348A (en) * 2014-05-19 2014-10-08 南京工程学院 Multi-target tracking method integrating obvious characteristics and block division templates
US9582718B1 (en) * 2015-06-30 2017-02-28 Disney Enterprises, Inc. Method and device for multi-target tracking by coupling multiple detection sources
CN106485732A (en) * 2016-09-09 2017-03-08 南京航空航天大学 A kind of method for tracking target of video sequence

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
João F. Henriques et al., "High-Speed Tracking with Kernelized Correlation Filters," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, vol. 37, no. 3, pp. 583-596. *
Xing Yunlong et al., "Moving target tracking algorithm with improved kernelized correlation filtering," Infrared and Laser Engineering, 31 May 2016, vol. 45, no. S1, pp. S126004-1 to S126004-8. *
Yu Liyang et al., "Improved kernelized correlation filter target tracking algorithm," Journal of Computer Applications, 10 December 2015, vol. 35, no. 12, pp. 3550-3554. *

Also Published As

Publication number Publication date
CN107146238A (en) 2017-09-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant