CN101610412A - Visual tracking method based on multi-cue fusion - Google Patents

Visual tracking method based on multi-cue fusion Download PDF

Info

Publication number
CN101610412A
CN101610412A CNA2009100888784A CN200910088878A
Authority
CN
China
Prior art keywords
probability distribution
frame
distribution graph
var
sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009100888784A
Other languages
Chinese (zh)
Other versions
CN101610412B (en)
Inventor
杨戈
刘宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN2009100888784A priority Critical patent/CN101610412B/en
Publication of CN101610412A publication Critical patent/CN101610412A/en
Application granted granted Critical
Publication of CN101610412B publication Critical patent/CN101610412B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a visual tracking method based on multi-cue fusion, belonging to the field of information technology. The method comprises: a) determining, in the first frame of a video sequence, a tracking window comprising a target area and a background area; b) for each frame from the second frame on, obtaining, from the previous frame, the color feature probability distribution map, the position feature probability distribution map and the motion continuity feature probability distribution map; c) summing the three probability distribution maps with weights to obtain a total probability distribution map; d) obtaining the center coordinates of the tracking window of the current frame from the total probability distribution map by the CAMSHIFT algorithm. The method can be applied in fields such as human-computer interaction, intelligent visual surveillance, intelligent robots, virtual reality, model-based image coding, and content retrieval of streaming media.

Description

Visual tracking method based on multi-cue fusion
Technical field
The present invention relates to visual tracking, and in particular to a visual tracking method that fuses multiple cues, belonging to the field of information technology.
Background art
With the rapid development of information technology and intelligence science, computer vision, which uses computers to realize the human visual function, has become one of the most popular research directions in the computer field. Visual tracking is one of the key problems of computer vision: finding the position of a moving target of interest in each frame of an image sequence. Studying it is both very important and very urgent.
Hong Liu et al. published the paper "Collaborative mean shift tracking based on multi-cue integration and auxiliary objects" in Proceedings of the 14th IEEE International Conference on Image Processing (ICIP 2007). That paper combines color, position and prediction feature cues, dynamically updates the weight of each cue according to the background, and uses the Mean Shift technique together with auxiliary objects to realize visual tracking. However, it assumes that the background obeys a single Gaussian model and requires prior training on a video sequence without moving objects to obtain the initial background model, which limits its application. Moreover, in its cue evaluation function the region of interest is represented by a rectangle slightly larger than the target, and the zone between this rectangle and the tracking window is defined as the background area; the size of the background area directly influences the value of the reliability evaluation function of a cue, i.e. the larger the tracking window, the smaller the reliability evaluation value, so the function lacks generality.
Summary of the invention
The objective of the present invention is to overcome the deficiencies of the prior art and to provide a visual tracking method that fuses multiple cues, especially applicable to visual tracking of human motion, allowing a computer to track a target (such as a human body) visually while it moves and satisfying the requirements of accuracy and real-time performance.
The present invention realizes visual tracking by combining multiple cues of the video image (color feature, position feature and motion continuity feature) by means of the CAMSHIFT (Continuously Adaptive Mean Shift) method, as shown in Figure 1. The color feature preferably comprises hue and saturation features, a red channel feature, a green channel feature and a blue channel feature, which achieves good robustness against occlusion and pose changes; the position feature is realized with the frame-difference technique; the motion continuity feature is derived from inter-frame continuity.
The present invention adopts a tracking window of fixed size. Although this limits the handling of appearance changes and occlusions, it avoids taking background regions that resemble the target for parts of the target, and the same tracking effect can still be achieved.
The present invention is achieved through the following technical solution, comprising the steps of:
a) determining a tracking window in the first frame of a video sequence, the tracking window comprising a target area and a background area, and the target area containing the tracked object; preferably, the tracking window is a rectangle divided into three parts, the middle part being the target area and each of the two side parts being the background area, as shown in Figure 2;
b) for each frame from the second frame on, obtaining, from the previous frame, the color feature probability distribution map, the position feature probability distribution map and the motion continuity feature probability distribution map;
c) summing the three probability distribution maps with weights to obtain a total probability distribution map;
d) obtaining the center coordinates of the tracking window of the current frame from the total probability distribution map by the CAMSHIFT algorithm. A code sketch of this loop is given below.
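By way of illustration, steps a) to d) can be sketched as the following loop; a minimal sketch assuming OpenCV's CamShift implementation, in which color_prob_map, position_prob_map and motion_prob_map are hypothetical stand-ins for the cue computations detailed below, and the weights are those of the embodiment described later:

```python
import cv2
import numpy as np

def track(video_path, init_window, weights=(3/7, 2/7, 2/7)):
    """Fused-cue tracking loop: one CAMSHIFT step per frame (steps a-d)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()                 # frame 1: window chosen by the operator
    track_window = init_window            # (x, y, w, h); the size stays fixed
    # stop when the center moves less than 2 pixels or after 15 iterations
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 15, 2)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        m1 = color_prob_map(frame, track_window)             # step b: color cue
        m2 = position_prob_map(prev, frame)                  # step b: position cue
        m3 = motion_prob_map(frame.shape[:2], track_window)  # step b: motion cue
        total = weights[0]*m1 + weights[1]*m2 + weights[2]*m3   # step c: fusion
        _, track_window = cv2.CamShift(total.astype(np.uint8),
                                       track_window, criteria)  # step d
        prev = frame
    cap.release()
    return track_window
```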
The cues involved in the present invention and their fusion are described in detail below.
Color feature
The color feature preferably includes the hue (Hue) and saturation (Saturation) features, the R (Red) channel feature, the G (Green) channel feature and the B (Blue) channel feature of the image, which achieves good robustness against occlusion and pose changes.
Suppose a histogram of m bins is used and the image has n pixels, whose positions and corresponding histogram values are $\{x_i\}_{i=1,\dots,n}$ and $\{q_u\}_{u=1,\dots,m}$ (R channel, G channel and B channel features) or $\{q_{u(v)}\}_{u,v=1,\dots,m}$ (hue and saturation feature). Define the function $b: R^2 \to \{1,\dots,m\}$, which maps each pixel to the discrete bin corresponding to its color information. The value corresponding to a bin of the histogram can then be expressed as formulas (1) and (2) or formulas (1') and (2') (a code sketch follows the formulas):
$$q_{u(v)} = \sum_{i=1}^{n} \delta[b(x_i) - u(v)] \quad (1) \qquad p_{u(v)} = \min\!\left(\frac{255}{\max(q_{u(v)})}\,q_{u(v)},\; 255\right) \quad (2)$$

or

$$q_u = \sum_{i=1}^{n} \delta[b(x_i) - u] \quad (1') \qquad p_u = \min\!\left(\frac{255}{\max(q_u)}\,q_u,\; 255\right) \quad (2')$$
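By way of illustration, formulas (1') and (2') for one 8-bit channel read almost directly as code; a minimal NumPy sketch assuming m = 256 bins, with np.bincount playing the role of the bin function b:

```python
import numpy as np

def rescaled_histogram(channel, m=256):
    """Formulas (1') and (2') for a single 8-bit image channel."""
    # formula (1'): count the pixels falling into each of the m bins
    q = np.bincount(channel.ravel(), minlength=m).astype(np.float64)
    # formula (2'): project the value range from [0, max(q)] onto [0, 255]
    return np.minimum(255.0 / max(q.max(), 1.0) * q, 255.0)
```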
The color feature probability distribution map can be established by the following method:
First, extract the R (Red), G (Green) and B (Blue) channels from the RGB image, then transform the RGB image into an HSV (Hue, Saturation, Value) image and extract the hue channel and the saturation channel; use histogram back-projection to compute the hue-saturation probability distribution, the red probability distribution, the green probability distribution and the blue probability distribution of the pixels in the tracking window, as in formula (1) or formula (1').
Second, re-map the value ranges of the hue-saturation, red, green and blue probability distributions with formula (2) or formula (2'), so that the range is projected from $[0, \max(q_{u(v)})]$ or $[0, \max(q_u)]$ onto $[0, 255]$.
Third, select, according to the rule described below, suitable features among the hue-saturation, red, green and blue features as the color features of the visual tracking algorithm, forming the final color probability distribution map p(x, y). A sketch of the first two steps follows.
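By way of illustration, the first two steps can be realized with OpenCV's histogram back-projection; a minimal sketch in which roi is assumed to be the target area A cut out of the tracking window, and the bin counts (30, 32, 256) are illustrative choices:

```python
import cv2

def color_distributions(frame, roi):
    """Back-project ROI histograms over the frame for the four color cues."""
    maps = {}
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # hue-saturation cue: 2-D histogram of the target area, back-projected
    hist = cv2.calcHist([hsv_roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)   # re-map to [0, 255]
    maps['hs'] = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    # B, G and R channel cues: one 1-D histogram per channel
    for i, name in enumerate(('b', 'g', 'r')):
        h = cv2.calcHist([roi], [i], None, [256], [0, 256])
        cv2.normalize(h, h, 0, 255, cv2.NORM_MINMAX)
        maps[name] = cv2.calcBackProject([frame], [i], h, [0, 256], 1)
    return maps
```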
Among the above four features, the method of the invention preferably selects dynamically the one or several features that best express the difference between the target area and the background area, as follows:
For a feature k, let i be a value of feature k, $H_1^k(i)$ the histogram of the feature values in target area A, $H_2^k(i)$ the histogram of the feature values in background areas B and C, $p_k(i)$ the discrete probability distribution of target area A, $q_k(i)$ the discrete probability distribution of background areas B and C, and $L_i^k$ the log-likelihood ratio of feature k, as in formula (10), where $\delta > 0$ is a very small number whose main purpose is to prevent a zero denominator or $\log 0$ in formula (10). $\operatorname{var}(L_i^k; p_k(i))$ is the variance of $L_i^k$ with respect to the target-class distribution $p_k(i)$, as in formula (11); $\operatorname{var}(L_i^k; q_k(i))$ is its variance with respect to the background-class distribution $q_k(i)$, as in formula (12); $\operatorname{var}(L_i^k; R_k(i))$ is its variance with respect to the combined target-and-background distribution, as in formula (13); and $V(L_i^k; p_k(i), q_k(i))$, as in formula (14), represents the ability of feature k to separate the target from the background: the larger $V(L_i^k; p_k(i), q_k(i))$, the more easily feature k separates the target from the background, the more reliable the feature, and the more suitable it is as a feature for tracking the target.
$$L_i^k = \log\frac{\max\{p_k(i), \delta\}}{\max\{q_k(i), \delta\}} \quad (10)$$

$$\operatorname{var}(L_i^k; p_k(i)) = E[L_i^k L_i^k] - (E[L_i^k])^2 = \sum_i p_k(i)\,L_i^k L_i^k - \Big[\sum_i p_k(i)\,L_i^k\Big]^2 \quad (11)$$

$$\operatorname{var}(L_i^k; q_k(i)) = E[L_i^k L_i^k] - (E[L_i^k])^2 = \sum_i q_k(i)\,L_i^k L_i^k - \Big[\sum_i q_k(i)\,L_i^k\Big]^2 \quad (12)$$

$$\operatorname{var}(L_i^k; R_k(i)) = E[L_i^k L_i^k] - (E[L_i^k])^2 = \sum_i R_k(i)\,L_i^k L_i^k - \Big[\sum_i R_k(i)\,L_i^k\Big]^2 \quad (13)$$

where $R_k(i) = [p_k(i) + q_k(i)]/2$;

$$V(L_i^k; p_k(i), q_k(i)) = \frac{\operatorname{var}(L_i^k; R_k(i))}{\operatorname{var}(L_i^k; p_k(i)) + \operatorname{var}(L_i^k; q_k(i))} \quad (14)$$
During video tracking, the reliability of the hue and saturation feature, the R (Red) channel feature, the G (Green) channel feature and the B (Blue) channel feature is evaluated continually. When their reliability changes, it is recalculated according to formula (14) and the features are re-ranked by reliability; the W features with the largest $V(L_i^k; p_k(i), q_k(i))$ are taken as the color features of the tracked target. W is preferably 1, 2 or 3. A sketch of the reliability score follows.
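By way of illustration, the score of formula (14) can be computed as follows; a minimal NumPy sketch assuming p and q are the normalized histograms $p_k(i)$ and $q_k(i)$ of one feature over the target area A and the background areas B and C:

```python
import numpy as np

def reliability(p, q, delta=1e-6):
    """Separability score V of formula (14) for one feature."""
    L = np.log(np.maximum(p, delta) / np.maximum(q, delta))   # formula (10)
    def var(w):                                               # formulas (11)-(13)
        return np.sum(w * L * L) - np.sum(w * L) ** 2
    r = (p + q) / 2.0                                         # combined distribution
    return var(r) / (var(p) + var(q))                         # formula (14)
```

The W features with the largest returned scores are then kept as the color features of the current frame.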
Position feature
For the position feature, the present invention uses the frame difference to compute the gray-level difference of every point between the two images of consecutive frames, and then judges by a threshold which pixels are motion points: a pixel whose difference exceeds the threshold is a motion point. Setting the frame-difference threshold purely by experience is somewhat blind and only suits certain specific situations, so the present invention preferably determines this threshold F dynamically by the Otsu method. The basic idea of Otsu is to find the threshold F that minimizes the within-class scatter, which is equivalent to finding the threshold F that maximizes the between-class scatter; that is, the threshold F divides the frame-difference image into two classes such that the variance between the two classes is maximal. The within-class scatter describes how the sample points of each class are distributed around their mean, and the between-class scatter describes the distribution between classes: the smaller the within-class scatter, the more compact each class is internally, and the larger the between-class scatter, the better the separability between classes. A sketch of this thresholding follows.
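By way of illustration, the position cue can be sketched as follows, assuming 8-bit grayscale frames and OpenCV's built-in Otsu thresholding:

```python
import cv2

def motion_points(prev_gray, curr_gray):
    """Frame difference thresholded with a dynamically chosen threshold F."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    # THRESH_OTSU ignores the given threshold (0) and computes F so that the
    # between-class variance of the two classes of the difference image is
    # maximal; pixels above F are the motion points
    F, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return mask   # 255 at motion points, 0 elsewhere
```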
Motion continuity feature
For the motion continuity feature, the present invention estimates the speed of the tracked target from the images of the previous frames and then, from the target positions obtained by tracking the previous frames, estimates the center position of the target at the current time. Within the very short time between video frames, the target motion has strong continuity and the speed of the target can be considered constant, which is what justifies this estimate.
$$X(t,\mathrm{row}) = X(t-1,\mathrm{row}) \pm \big(X(t-1,\mathrm{row}) - X(t-2,\mathrm{row})\big) \quad (3)$$
$$X(t,\mathrm{col}) = X(t-1,\mathrm{col}) \pm \big(X(t-1,\mathrm{col}) - X(t-2,\mathrm{col})\big) \quad (4)$$
Let X(t, row) denote the row coordinate of the target center at time t, as in formula (3), and X(t, col) its column coordinate, as in formula (4), where rows is the maximal row number of the image and cols its maximal column number. Considering the continuity of the target motion, the current position is predicted with a linear predictor, so the relation between X(t, row), X(t-1, row) and X(t-2, row) is as in formula (5), and the relation between X(t, col), X(t-1, col) and X(t-2, col) is as in formula (6).
$$X(t,\mathrm{row}) \in \big[\max(X(t-1,\mathrm{row}) - (X(t-1,\mathrm{row}) - X(t-2,\mathrm{row})),\, 1),\ \min(X(t-1,\mathrm{row}) + (X(t-1,\mathrm{row}) - X(t-2,\mathrm{row})),\, \mathrm{rows})\big] \quad (5)$$
$$X(t,\mathrm{col}) \in \big[\max(X(t-1,\mathrm{col}) - (X(t-1,\mathrm{col}) - X(t-2,\mathrm{col})),\, 1),\ \min(X(t-1,\mathrm{col}) + (X(t-1,\mathrm{col}) - X(t-2,\mathrm{col})),\, \mathrm{cols})\big] \quad (6)$$
If the width of the tracking window is width and its length is length, then the row coordinate of the target at the current time satisfies formula (7) and its column coordinate satisfies formula (8), i.e. the target lies within this rectangular range.
$$Y(t,\mathrm{row}) \in \big[\max(X(t,\mathrm{row}) - \mathrm{width},\, 1),\ \min(X(t,\mathrm{row}) + \mathrm{width},\, \mathrm{rows})\big] \quad (7)$$
$$Y(t,\mathrm{col}) \in \big[\max(X(t,\mathrm{col}) - \mathrm{length},\, 1),\ \min(X(t,\mathrm{col}) + \mathrm{length},\, \mathrm{cols})\big] \quad (8)$$
If B ' (x, y, the t) probability distribution graph of expression motion continuity feature, suc as formula (9), wherein, (x, y, t) expression t moment coordinate (x, pixel y), the tracked target of 1 expression, 0 expression background pixel.
B ′ ( x , y , t ) = 1 ( x , y , t ) ∈ Y ( t , row ) ∩ Y ( t , col ) 0 ( x , y , t ) ∉ Y ( t , row ) ∩ Y ( t , col ) - - - ( 9 )
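By way of illustration, the map of formula (9) can be built as follows; a minimal NumPy sketch in which c_prev and c_prev2 are hypothetical integer (row, col) window centers of frames t-1 and t-2, and half_w and half_l correspond to width and length:

```python
import numpy as np

def motion_continuity_map(shape, c_prev, c_prev2, half_w, half_l):
    """Binary map B' of formula (9) around the linearly predicted center."""
    rows, cols = shape
    pr = min(max(2 * c_prev[0] - c_prev2[0], 0), rows - 1)  # formulas (3) and (5)
    pc = min(max(2 * c_prev[1] - c_prev2[1], 0), cols - 1)  # formulas (4) and (6)
    mask = np.zeros(shape, dtype=np.uint8)
    r0, r1 = max(pr - half_w, 0), min(pr + half_w, rows - 1)   # formula (7)
    c0, c1 = max(pc - half_l, 0), min(pc + half_l, cols - 1)   # formula (8)
    mask[r0:r1 + 1, c0:c1 + 1] = 1                             # formula (9)
    return mask
```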
Cue fusion
Suppose $P_k(\mathrm{row}, \mathrm{colu}, t)$ is the probability distribution obtained from feature k at time t; it characterizes the probability that the pixel (row, colu) belongs to the target area under feature k. $P(\mathrm{row}, \mathrm{colu}, t)$ represents the final probability distribution at time t after fusing the W+2 features (W color features, one position feature and one motion continuity feature); it characterizes the probability that each pixel (row, colu) belongs to the target area, as in formula (15). The W color features compete on the basis of their reliability in the previous frame: a feature of high reliability dominates in the visual tracking system and supplies more information to it, while the information of a feature of low reliability is used less or ignored.
$$P(\mathrm{row}, \mathrm{colu}, t) = \sum_{k=1}^{W+2} r_k\,P_k(\mathrm{row}, \mathrm{colu}, t) \quad (15)$$
where $r_k$ is the weight of feature k: $r_1, r_2, \dots, r_W$ are the weights of the selected color features, $r_{W+1}$ is the weight of the position feature and $r_{W+2}$ is the weight of the motion continuity feature, with $\sum_{k=1}^{W+2} r_k = 1$. To project the value ranges of $P_{W+1}(\mathrm{row}, \mathrm{colu}, t)$ and $P_{W+2}(\mathrm{row}, \mathrm{colu}, t)$ onto [0, 255], set $P_{W+1}(\mathrm{row}, \mathrm{colu}, t) = B(x, y, t) \times 255$ and $P_{W+2}(\mathrm{row}, \mathrm{colu}, t) = B'(x, y, t) \times 255$, where B is the binary map of the position feature. A sketch of this fusion follows.
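By way of illustration, formula (15) then amounts to a convex combination of the maps; a minimal NumPy sketch, assuming the binary maps B and B' have already been scaled by 255:

```python
import numpy as np

def fuse(maps, weights):
    """Fused probability map of formula (15)."""
    assert abs(sum(weights) - 1.0) < 1e-9       # the weights r_k sum to 1
    total = np.zeros(maps[0].shape, dtype=np.float64)
    for m, r in zip(maps, weights):
        total += r * m.astype(np.float64)
    return total.astype(np.uint8)               # stays within [0, 255]
```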
Compared with the prior art, the present invention is simple and effective: it does not need to assume a background model and does not need prior training on a video sequence without moving targets. Its key lies in realizing the fusion of multiple cues; it suits different scenes and obtains good tracking results, and is especially suitable for situations where the color saturation of the target environment in the video sequence is low and the target is partially occluded.
Visual tracking with the present invention can be used both as a tracking result in itself and as an intermediate result for further vision understanding. The invention has wide application prospects in the information field: human-robot interaction (HRI), intelligent visual surveillance, intelligent robots, virtual reality, model-based image coding, content retrieval of streaming media and other fields. Taking only intelligent visual surveillance as an example, it covers security, traffic, fire fighting, military industry, communication and other fields, and video surveillance systems are applied to public safety precautions such as residential security monitoring, fire monitoring, traffic violations, flow control and military uses, and to banks, shopping malls, airports, subways and the like. Existing video surveillance systems usually record video images to serve as evidence afterwards and do not fully exert a real-time, active supervisory function. If existing video surveillance systems are upgraded into intelligent video surveillance systems, monitoring capability can be strengthened greatly, hidden dangers reduced, and manpower, material resources and investment saved. An intelligent video system can solve two problems: one is to free security operators from the tedious and boring task of staring at screens, letting machines do this part of the work; the other is to find the desired images quickly in massive video data, i.e. to track targets, as on Line 13 of the Beijing Metro, where video analysis is used to catch thieves; Pudong Airport, the Capital Airport and many railway construction projects all plan to use video analysis technology, and the visual tracking method of the present invention is one of the core and key technologies of such video analysis.
Description of drawings
Fig. 1 is a schematic diagram of the cue fusion of the method of the invention;
Fig. 2 is a schematic diagram of the tracking window of the invention, where A is the target area and B and C are the background areas;
Figs. 3-5 are visual tracking diagrams at frames 50, 100 and 120 of a video sequence with a resolution of 640*512, where sub-figure a shows the motion continuity feature probability distribution map, b the position feature probability distribution map, c the total probability distribution map and d the tracking result of the current frame.
Figs. 6a-d are visual tracking result diagrams at frames 50, 90, 120 and 164 of a video sequence with a resolution of 640*480.
Embodiment
Embodiments of the invention are described in detail below with reference to the drawings. The protection scope of the present invention is not limited to the following embodiments.
The visual tracking of the present embodiment proceeds in the following steps:
First, set the tracking window in the 1st frame of the video sequence. The length and width of the tracking window are determined by the operator according to the size of the tracked target and remain constant during tracking. The tracking window is divided into three parts: the middle part (A) is the target area and the two sides (B and C) are the background area, as shown in Figure 2.
Second, from the 2nd frame on, select the 2 (W=2) most reliable color features according to the previous frame (such as the R channel and the B channel) and calculate the color feature probability distribution map $M_1$.
Third, calculate the position feature probability distribution map $M_2$.
Fourth, calculate the motion continuity feature probability distribution map $M_3$.
Fifth, weight the three probability distribution maps ($M_1$, $M_2$, $M_3$) by their respective weights $r_k$ and sum them to obtain the final probability distribution map M. In the present embodiment the weights of $M_1$ to $M_3$ are 3/7 (of which the R channel and the B channel weigh 2/7 and 1/7 respectively), 2/7 and 2/7.
Sixth, obtain the center coordinates of the tracking window of the current frame in the probability distribution map M by the CAMSHIFT algorithm. The core procedure of the CAMSHIFT algorithm is to compute the zeroth-order moment (formula (16)) and the first-order moments (formulas (17) and (18)) of the tracking window, and to iterate the (x, y) coordinate with formulas (19) and (20); the coordinate at which no obvious displacement occurs (the change of x and y is less than 2) or that is reached after the maximum of 15 iterations is the tracking window center of the current frame. A sketch of one such iteration follows the formulas.
$$M_{00} = \sum_x \sum_y p(x, y) \quad (16)$$
$$M_{10} = \sum_x \sum_y x\,p(x, y) \quad (17)$$
$$M_{01} = \sum_x \sum_y y\,p(x, y) \quad (18)$$
$$x = \frac{M_{10}}{M_{00}} \quad (19)$$
$$y = \frac{M_{01}}{M_{00}} \quad (20)$$
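By way of illustration, one centroid step of formulas (16) to (20) can look as follows; a minimal NumPy sketch, with the current window given as hypothetical (x0, y0, w, h) coordinates over the fused map p:

```python
import numpy as np

def window_centroid(p, x0, y0, w, h):
    """One mean-shift step of formulas (16)-(20) on the fused map p."""
    win = p[y0:y0 + h, x0:x0 + w].astype(np.float64)
    m00 = win.sum()                              # formula (16)
    if m00 == 0:
        return x0 + w // 2, y0 + h // 2          # empty window: keep the center
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
    m10 = (xs * win).sum()                       # formula (17)
    m01 = (ys * win).sum()                       # formula (18)
    return m10 / m00, m01 / m00                  # formulas (19) and (20)
```

The window is re-centered on the returned centroid and the step is repeated until x and y change by less than 2 or 15 iterations are reached.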
Fig. 3 to Fig. 5 are visual tracking diagrams at frames 50, 100 and 120 of a video sequence with a resolution of 640*512.
Fig. 6 shows visual tracking result diagrams at frames 50, 90, 120 and 164 of a video sequence with a resolution of 640*480. Although the saturation of this second video sequence is low, the target is still tracked successfully, thanks to the joint consideration of the reliability of the color features and multi-cue fusion.

Claims (7)

1. A visual tracking method based on multi-cue fusion, comprising the following steps:
a) determining a tracking window in the first frame of a video sequence, the tracking window comprising a target area and a background area, and the target area containing a tracked object;
b) for each frame from the second frame on, obtaining, from the previous frame, the color feature probability distribution map, the position feature probability distribution map and the motion continuity feature probability distribution map;
c) summing the three probability distribution maps with weights to obtain a total probability distribution map;
d) obtaining the center coordinates of the tracking window of the current frame from the total probability distribution map by the CAMSHIFT algorithm.
2. The visual tracking method of claim 1, characterized in that the tracking window is a rectangle divided into three parts, the middle part being the target area and each of the two side parts being the background area.
3. The visual tracking method of claim 1, characterized in that the color features in the color feature probability distribution map comprise one or more of the hue and saturation feature, the R channel feature, the G channel feature and the B channel feature.
4. The visual tracking method of claim 3, characterized in that the value $V(L_i^k; p_k(i), q_k(i))$ of the hue and saturation feature, R channel feature, G channel feature and B channel feature is calculated by the following formula, and the color features in the color feature probability distribution map comprise the one, two or three features with the largest V value:
$$V(L_i^k; p_k(i), q_k(i)) = \frac{\operatorname{var}(L_i^k; R_k(i))}{\operatorname{var}(L_i^k; p_k(i)) + \operatorname{var}(L_i^k; q_k(i))}, \text{ where}$$
$p_k(i)$ denotes the discrete probability distribution of the target area;
$q_k(i)$ denotes the discrete probability distribution of the background area;
$L_i^k = \log\dfrac{\max\{p_k(i), \delta\}}{\max\{q_k(i), \delta\}}$, where $\delta$ is used to guarantee that no zero denominator or $\log 0$ occurs;
$R_k(i) = [p_k(i) + q_k(i)]/2$;
$\operatorname{var}(L_i^k; p_k(i)) = E[L_i^k L_i^k] - (E[L_i^k])^2 = \sum_i p_k(i)\,L_i^k L_i^k - [\sum_i p_k(i)\,L_i^k]^2$, where E denotes the mean and var the variance;
$\operatorname{var}(L_i^k; q_k(i)) = E[L_i^k L_i^k] - (E[L_i^k])^2 = \sum_i q_k(i)\,L_i^k L_i^k - [\sum_i q_k(i)\,L_i^k]^2$;
$\operatorname{var}(L_i^k; R_k(i)) = E[L_i^k L_i^k] - (E[L_i^k])^2 = \sum_i R_k(i)\,L_i^k L_i^k - [\sum_i R_k(i)\,L_i^k]^2$.
5. The visual tracking method of claim 1, characterized in that the position feature probability distribution map is obtained by the following method: the gray-level difference of each pixel of the tracking window between the current frame and the previous frame is calculated; if the difference is greater than a preset threshold, the pixel is a motion point; the position feature probability distribution map comprises all the motion points.
6. The visual tracking method of claim 5, characterized in that the threshold is dynamically determined by the Otsu method.
7. The visual tracking method of claim 1, characterized in that, when the three probability distribution maps are summed with weights to obtain the total probability distribution map, the sum of the weights of the probability distribution maps is 1.
CN2009100888784A 2009-07-21 2009-07-21 Visual tracking method based on multi-cue fusion Expired - Fee Related CN101610412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100888784A CN101610412B (en) 2009-07-21 2009-07-21 Visual tracking method based on multi-cue fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100888784A CN101610412B (en) 2009-07-21 2009-07-21 Visual tracking method based on multi-cue fusion

Publications (2)

Publication Number Publication Date
CN101610412A true CN101610412A (en) 2009-12-23
CN101610412B CN101610412B (en) 2011-01-19

Family

ID=41483954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100888784A Expired - Fee Related CN101610412B (en) 2009-07-21 2009-07-21 Visual tracking method based on multi-cue fusion

Country Status (1)

Country Link
CN (1) CN101610412B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1300746C (en) * 2004-12-09 2007-02-14 上海交通大学 Video frequency motion target adaptive tracking method based on multicharacteristic information fusion
CN100531405C (en) * 2005-12-31 2009-08-19 中国科学院计算技术研究所 Target tracking method of sports video
CN1932846A (en) * 2006-10-12 2007-03-21 上海交通大学 Visual frequency humary face tracking identification method based on appearance model

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI497450B (en) * 2013-10-28 2015-08-21 Univ Ming Chuan Visual object tracking method
CN105319716A (en) * 2014-07-31 2016-02-10 精工爱普生株式会社 Display device, method of controlling display device, and program
CN105547635A (en) * 2015-12-11 2016-05-04 浙江大学 Non-contact type structural dynamic response measurement method for wind tunnel test
CN105547635B (en) * 2015-12-11 2018-08-24 浙江大学 A kind of contactless structural dynamic response measurement method for wind tunnel test
CN107403439A (en) * 2017-06-06 2017-11-28 沈阳工业大学 Predicting tracing method based on Cam shift
CN107403439B (en) * 2017-06-06 2020-07-24 沈阳工业大学 Cam-shift-based prediction tracking method
CN107833240A (en) * 2017-11-09 2018-03-23 华南农业大学 The target trajectory extraction of multi-track clue guiding and analysis method
CN107833240B (en) * 2017-11-09 2020-04-17 华南农业大学 Target motion trajectory extraction and analysis method guided by multiple tracking clues
WO2021180004A1 (en) * 2020-03-09 2021-09-16 华为技术有限公司 Video analysis method, video analysis management method, and related device

Also Published As

Publication number Publication date
CN101610412B (en) 2011-01-19

Similar Documents

Publication Publication Date Title
Prati et al. Shadow detection algorithms for traffic flow analysis: a comparative study
US7787656B2 (en) Method for counting people passing through a gate
CN101610412B (en) Visual tracking method based on multi-cue fusion
CN101916383B (en) Vehicle detecting, tracking and identifying system based on multi-camera
CN102289948B (en) Multi-characteristic fusion multi-vehicle video tracking method under highway scene
Lai et al. Image-based vehicle tracking and classification on the highway
CN108921875A (en) A kind of real-time traffic flow detection and method for tracing based on data of taking photo by plane
CN101950426B (en) Vehicle relay tracking method in multi-camera scene
CN108596129A (en) A kind of vehicle based on intelligent video analysis technology gets over line detecting method
CN104244113A (en) Method for generating video abstract on basis of deep learning technology
CN106780548A (en) moving vehicle detection method based on traffic video
CN104063885A (en) Improved movement target detecting and tracking method
CN106203513A (en) A kind of based on pedestrian's head and shoulder multi-target detection and the statistical method of tracking
CN103164711A (en) Regional people stream density estimation method based on pixels and support vector machine (SVM)
CN105005766A (en) Vehicle body color identification method
CN101266132A (en) Running disorder detection method based on MPFG movement vector
CN113763427B (en) Multi-target tracking method based on coarse-to-fine shielding processing
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN112766038B (en) Vehicle tracking method based on image recognition
CN109063630B (en) Rapid vehicle detection method based on separable convolution technology and frame difference compensation strategy
CN107659754A (en) Effective method for concentration of monitor video in the case of a kind of leaf disturbance
CN112561951A (en) Motion and brightness detection method based on frame difference absolute error and SAD
CN103646254A (en) High-density pedestrian detection method
CN113706584A (en) Streetscape flow information acquisition method based on computer vision
CN105243354B (en) A kind of vehicle checking method based on target feature point

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110119

Termination date: 20140721

EXPY Termination of patent right or utility model