CN1992911A - Target tracking method of sports video - Google Patents


Info

Publication number
CN1992911A
CN1992911A (application number CN200510135495A)
Authority
CN
China
Prior art keywords
target
value
sample point
tracking method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200510135495
Other languages
Chinese (zh)
Other versions
CN100531405C (en)
Inventor
Li Jintao (李锦涛)
Xiong Wenping (熊文平)
Zhang Yongdong (张勇东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CNB200510135495XA priority Critical patent/CN100531405C/en
Publication of CN1992911A publication Critical patent/CN1992911A/en
Application granted granted Critical
Publication of CN100531405C publication Critical patent/CN100531405C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to a method for tracking a target in sports video, comprising: extracting a color histogram from the video; applying a kernel function to the histogram to obtain the color-distribution feature of the target; predicting the motion position of the target with a particle filter algorithm; and refining the predicted value with a mean-shift clustering algorithm to obtain the exact position of the target. The invention introduces several motion models and dynamically updates the model according to the characteristics of the target's motion; it therefore achieves good tracking with few sample points, reducing computational complexity and improving tracking accuracy.

Description

Target tracking method of sports video
Technical field
The present invention relates to a moving-target tracking method, and more particularly to a target tracking method for sports video.
Background technology
Moving-target detection and tracking is an important topic in the computer vision field. Its core idea is to combine image processing, video analysis and related techniques to capture moving targets quickly and accurately.
Monitoring dynamic scenes with cameras is already widely used across society: from security surveillance of communities and critical facilities, to traffic monitoring of cities and highways, to automatic detection of military targets, the camera plays an important role as an extension of human vision. In recent years, with the wide deployment of visual surveillance systems, fast localization and tracking of moving targets has attracted great interest from many research institutions and researchers. At present, most target tracking methods assume a static background: they compute the frame difference between the target and the background, extract the contour of the moving target, and thereby obtain its position. In sports video, however, the camera often moves violently and the background changes constantly, so it is difficult to obtain the target position from background frame differences. In the prior art, it has been proposed to use feature information of the tracked target combined with a particle filter to obtain the target position. For a concrete implementation of this method, see reference 1: Katja Nummiaro, Esther Koller-Meier and Luc Van Gool, "A Color-based Particle Filter", Image and Vision Computing, 2002.
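For contrast, the static-background approach described above can be sketched in a few lines. The function name and the threshold value 25 below are illustrative only, not from the patent:

```python
import numpy as np

def frame_difference_mask(frame, background, thresh=25):
    """Minimal sketch of static-background frame differencing: threshold
    the per-pixel absolute difference against a fixed background image."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

As the background text notes, this sketch fails exactly when the background itself moves, which motivates the particle filter approach below.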
In sports video the target moves fast and its velocity changes quickly, so four motion models can be proposed: constant velocity, constant acceleration, static, and collision. The particle filter algorithm of reference 1 considers only a single motion pattern; when tracking targets whose motion pattern changes frequently, its performance is often poor. Although this can be mitigated by increasing the number of sample points and enlarging the region over which they are distributed, the computational complexity of the system then increases markedly, the error grows, and many sample points useless to the particle filter are generated. Especially for sports videos with large motion amplitudes, this method has low accuracy and high computational complexity.
Summary of the invention
The object of the present invention is to overcome the poor tracking performance and high computational complexity caused by adopting only one motion pattern in the prior art, and to provide a fast, stable and robust target tracking method for sports video.
To achieve this object, the invention provides a target tracking method for sports video, comprising:
1) extracting the color histogram from the video and normalizing it with a kernel function to obtain the color-distribution feature of the target;
2) predicting the motion position of the target with a particle filter algorithm, comprising the following steps:
2-1) in the first frame, generating a random sample point set S_{t-1} = { s_{t-1}^(n) }, n = 1, ..., N from a normal distribution, and selecting from it N samples obeying the probability distribution;
2-2) obtaining the sample point set of the next frame image S_t = { s_t^(n) }, n = 1, ..., N from the sample point set of the current frame S'_{t-1} = { s'_{t-1}^(n) }, n = 1, ..., N according to the formula

    s_t^(n) = A s'_{t-1}^(n) + w_{t-1}^(n)

where w_{t-1}^(n) is system noise obeying a Gaussian distribution, A represents the motion model, subscript t-1 denotes the current frame, subscript t denotes the next frame, and n indexes the sample points; each sample point of the next frame is generated from a sample point of the current frame according to the motion model;
2-3) predicting the position of the target in the next frame image;
3) from the predicted target position obtained in step 2), computing the exact target position with a mean-shift clustering algorithm.
In the above scheme, the color-distribution feature extraction of step 1) is implemented as follows:
1-1) computing the weight of each pixel in the target;
1-2) computing the color histogram of the target.
In the above scheme, the prediction of the target position in the next frame image in step 2-3) comprises:
2-3-1) computing the similarity between each sample point in the target and the motion model;
2-3-2) computing, from the similarities obtained in step 2-3-1), the probability weight of each sample point in the target;
2-3-3) estimating, from the probability weights obtained in step 2-3-2), the expectation of the target position; the resulting expectation of the target position is the position of the target in the next frame image.
In the above scheme, the exact position of the moving target in step 3) is computed as follows:
3-1) computing the similarity between the predicted target position obtained in step 2) and the motion model; if the similarity is greater than a pre-specified threshold, the motion model is unchanged; if it is less than the threshold, the motion model is changed and the procedure jumps back to step 2-2) to predict the target position again, until the similarity exceeds the threshold;
3-2) computing the corrected target position according to the formula;
3-3) subtracting the predicted position of step 2) from the corrected position of step 3-2) and taking the norm of the difference; if the norm is less than a user-set value ε, stopping the iteration, the resulting corrected position being the actual exact position of the moving target; otherwise assigning the corrected position to the predicted position and executing step 3-2) again to obtain a new corrected position.
The motion models comprise four kinds: constant velocity, constant acceleration, static, and collision; for different motion models, A takes different values in step 2-2).
In step 3-1), the pre-specified threshold is 0.8.
In step 3-3), the user-set value ε is 2 or 3.
The target tracking method for sports video of the present invention introduces multiple motion models and dynamically updates the model according to the characteristics of the target's motion. It therefore achieves good tracking with very few samples, reducing computational complexity and improving tracking accuracy.
Description of drawings
Fig. 1 is a flow chart of the target tracking method for sports video of the present invention;
Fig. 2 is a flow chart of its specific implementation steps.
Embodiment
The target tracking method for sports video of the present invention is further described below with reference to the drawings and specific embodiments.
With reference to Fig. 1 and Fig. 2, the target tracking method for sports video of the present invention comprises the following steps.
Step 10: extract the color histogram from the video and normalize it with a kernel function to obtain the color-distribution feature of the target. This step is implemented as follows.
Step 11: compute the weight of each pixel in the target. Suppose the tracked target is a rectangular window with center y and window width h. Because pixels at the periphery of the target may be occluded or affected by the background and are relatively unreliable, each pixel in the target is given a different weight: the closer a pixel lies to the target center, the larger its weight. Denoting the weight of pixel x_i by g(x_i), the weight is computed by formula (1):

    g(x_i) = k(||(x_i - x_0)/h||^2)    (1)

where k is a distance (kernel profile) function, ||·|| denotes the norm, x_i is a pixel of the target, and x_0 is the center point of the target.
Step 12: compute the color histogram of the target according to formula (2):

    q'(y) = { q'_u(y) },  u = 1, ..., m    (2)

where m is the number of bins into which the color space is divided and u indexes the bin. The values q'_u(y) are given by formulas (3) and (4):

    Σ_{u=1..m} q'_u(y) = 1    (3)

    q'_u(y) = C_h Σ_{i=1..N} g(x_i) δ[b(x_i) - u],  where δ[x] = 1 if x = 0 and 0 otherwise    (4)

The function b(x_i) gives the index of pixel x_i in the m-bin histogram, g(x_i) is the pixel weight computed in step 11, and C_h is the normalization constant of formula (5):

    C_h = 1 / Σ_{i=1..N} k(||(x_i - x_0)/h||^2)    (5)
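Formulas (1)-(5) can be sketched as follows. The Epanechnikov profile used for k is a common choice in the kernel-tracking literature and is an assumption here, since the patent only calls k "a distance function"; the function and argument names are likewise illustrative:

```python
import numpy as np

def kernel_weighted_histogram(pixels, center, h, bin_index, m):
    """Kernel-weighted, normalized color histogram q'(y) of formulas (1)-(5).

    pixels    : (N, 2) pixel coordinates x_i
    center    : (2,)  target center x_0
    h         : window width (kernel bandwidth)
    bin_index : (N,)  b(x_i), the color bin of each pixel, in [0, m)
    m         : number of color bins
    """
    # Formula (1) with an assumed Epanechnikov profile k(r) = 1 - r for r < 1.
    r = np.sum(((pixels - center) / h) ** 2, axis=1)
    g = np.where(r < 1.0, 1.0 - r, 0.0)
    # Formula (4): the delta term becomes a weighted bin count.
    q = np.bincount(bin_index, weights=g, minlength=m)
    # Formula (5): normalization constant, so formula (3) holds.
    C_h = 1.0 / g.sum()
    return C_h * q
```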
Step 20: predict the motion position of the target with the particle filter algorithm, implemented as follows.
Step 21: generate the random sample point set S_{t-1} = { s_{t-1}^(n) }, n = 1, ..., N from a normal distribution, and select from it N samples obeying the probability distribution π_{t-1}^n.
Step 211: compute the cumulative probability distribution c'_{t-1} according to formula (6):

    c_{t-1}^(0) = 0
    c_{t-1}^(n) = c_{t-1}^(n-1) + π_{t-1}^n    (6)
    c'_{t-1}^(n) = c_{t-1}^(n) / c_{t-1}^(N)

where t is the time index: t-1 denotes the previous frame image and t the current frame image. Computing c'_{t-1} is a recursive procedure; the sample points of each frame are generated from the sample points of the previous frame image.
Step 212: generate N uniformly distributed random numbers r_n ∈ [0, 1].
Step 213: in the set { c'_{t-1}^(j) } obtained in step 211, select the smallest j satisfying c'_{t-1}^(j) ≥ r_n, and set s'_{t-1}^(n) = s_{t-1}^(j), 0 ≤ n ≤ N.
The meaning of this step is: generate a random number in the interval [0, 1], then select the first sample point at which the running sum of π_{t-1}^n (n = 1, ..., N) exceeds that random number.
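Steps 211-213 amount to sampling N points with replacement from the discrete distribution π via its cumulative sums. A minimal sketch (function name is illustrative):

```python
import numpy as np

def resample(samples, weights, rng=None):
    """Resampling of formula (6) and steps 211-213: build the normalized
    cumulative distribution c', draw N uniform numbers r_n, and for each
    pick the smallest index j with c'(j) >= r_n."""
    rng = np.random.default_rng() if rng is None else rng
    c = np.cumsum(weights)          # formula (6): c(n) = c(n-1) + pi_n
    c = c / c[-1]                   # normalize: c'(n) = c(n) / c(N)
    r = rng.random(len(samples))    # step 212: N uniform numbers in [0, 1)
    j = np.searchsorted(c, r)       # step 213: smallest j with c'(j) >= r_n
    return samples[j]
```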
Step 22: from the set S'_{t-1} = { s'_{t-1}^(n) }, n = 1, ..., N, generate the sample set of the next frame image S_t = { s_t^(n) }, n = 1, ..., N using the motion equation (7):

    s_t^(n) = A s'_{t-1}^(n) + w_{t-1}^(n)    (7)

where w_{t-1}^(n) is system noise obeying a Gaussian distribution and A represents the motion model.
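A sketch of formula (7) for a hypothetical 4-dimensional state (x, y, vx, vy). The patent names four motion models but does not give their matrices, so the A matrices below are plausible illustrations only:

```python
import numpy as np

# Illustrative motion models A for a state (x, y, vx, vy) -- assumptions,
# not taken from the patent.
A_CONST_VEL = np.array([[1., 0., 1., 0.],    # x  <- x + vx
                        [0., 1., 0., 1.],    # y  <- y + vy
                        [0., 0., 1., 0.],
                        [0., 0., 0., 1.]])
A_STATIC = np.eye(4)                          # position and velocity kept
A_COLLISION = np.diag([1., 1., -1., -1.])     # velocity reversed on impact

def propagate(samples, A, noise_std, rng=None):
    """Formula (7): s_t = A s'_{t-1} + w_{t-1}, with Gaussian noise w."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.normal(0.0, noise_std, size=samples.shape)
    return samples @ A.T + w
```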
Step 23: predict the position of the target in the next frame image, implemented as follows.
Step 231: compute the similarity between each sample point in the target and the motion model, using the Bhattacharyya coefficient of formula (8):

    ρ[p', q'(s_t^(n))] = Σ_{u=1..m} sqrt( p'_u q'_u(s_t^(n)) )    (8)

where p' is the motion model, i.e. the color distribution of the tracked target, and q'(s_t^(n)) is the color distribution of the sample point.
Step 232: compute the probability weight of each sample point in the target, denoted π_t^n, according to formula (9):

    π_t^n = (1 / (sqrt(2π) σ)) exp( -(1 - ρ[p', q'(s_t^(n))]) / (2σ^2) )    (9)
Step 233: from the probability weights obtained in step 232, estimate the expectation of the target position:

    E[S_t] = Σ_{n=1..N} π_t^n s_t^(n)    (10)

The resulting expectation of the target position is the position of the target in the next frame image.
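Steps 231-233 combined, under the assumption that each sample's color histogram has already been computed; the value σ = 0.2 and all names are illustrative:

```python
import numpy as np

def predict_position(samples, sample_hists, target_hist, sigma=0.2):
    """Weight each sample by its color similarity to the model and return
    the expected position, formulas (8)-(10).

    samples      : (N, d) sample states s_t^(n)
    sample_hists : (N, m) color histogram of each sample region
    target_hist  : (m,)   model histogram p'
    """
    # Formula (8): Bhattacharyya coefficient rho for every sample.
    rho = np.sqrt(sample_hists * target_hist).sum(axis=1)
    # Formula (9): Gaussian weighting of the distance 1 - rho.
    pi = np.exp(-(1.0 - rho) / (2.0 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    pi = pi / pi.sum()
    # Formula (10): expected state E[S_t].
    return (pi[:, None] * samples).sum(axis=0)
```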
Step 30: from the predicted target position obtained in step 20, compute the exact target position with the mean-shift clustering algorithm. Take the predicted position of the moving target obtained in step 20 as the initial position, denoted y_0. The exact position is then computed as follows.
Step 31: compute the similarity between the initial position y_0 and the motion model and test it. When ρ[p', q'(y_0)] > θ, the motion model is considered unchanged; when ρ[p', q'(y_0)] < θ, the motion model is changed (the model value A) and the procedure jumps back to step 22 to predict the initial target position again, until ρ[p', q'(y_0)] > θ. Here θ is a pre-specified threshold; in one embodiment θ = 0.8. The similarity is computed by formula (11):

    ρ[p', q'(y_0)] = Σ_{u=1..m} sqrt( p'_u q'_u(y_0) )    (11)
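The model-switching test of step 31 might look like the sketch below. How the next candidate model is chosen is not specified in the patent, so cycling through a fixed list is purely an assumption:

```python
# Candidate motion models named in the patent; the ordering is arbitrary.
MODELS = ["constant velocity", "constant acceleration", "static", "collision"]

def select_model(rho, current, models=MODELS, theta=0.8):
    """Step 31 sketch: keep the current motion model while the similarity
    rho exceeds the threshold theta; otherwise switch to another candidate
    (here: the next one in a fixed cycle -- an assumption)."""
    if rho > theta:
        return current
    return models[(models.index(current) + 1) % len(models)]
```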
Step 32: compute the corrected target position y' according to formula (12):

    y' = Σ_{i=1..n_h} x_i w_i g(||(y_0 - x_i)/h||^2) / Σ_{i=1..n_h} w_i g(||(y_0 - x_i)/h||^2)    (12)

where g is the distance function of step 11, n_h is the number of pixels in the window, and

    w_i = Σ_{u=1..m} sqrt( p'_u / q'_u(y_0) ) δ[b(x_i) - u]

Step 33: take the norm of the difference between y' and y_0 and test it. If the norm is less than the user-set value ε, stop the iteration; the resulting y' is the actual exact position of the moving target. Otherwise assign the value of y' to y_0 and execute step 32 again. The user-set value ε is 2 or 3.
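Steps 32-33 as a sketch. The weights w_i of formula (12) are taken as a precomputed input, and g is simplified to a flat kernel; both simplifications and all names are illustrative:

```python
import numpy as np

def mean_shift_refine(y0, pixels, weights, h, eps=2.0, max_iter=20):
    """Iterate formula (12): move y0 to the weighted centroid of the pixels
    inside the window, and stop when the shift norm drops below eps
    (the stopping test of step 33)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        # Flat kernel: 1 inside the bandwidth-h window, 0 outside.
        inside = np.sum(((pixels - y) / h) ** 2, axis=1) < 1.0
        wg = weights * inside
        if wg.sum() == 0:
            break                     # no support: keep the current estimate
        y_new = (pixels * wg[:, None]).sum(axis=0) / wg.sum()
        if np.linalg.norm(y_new - y) < eps:   # step 33: ||y' - y0|| < eps
            return y_new
        y = y_new
    return y
```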

Claims (7)

1. A target tracking method for sports video, comprising the following steps:
1) extracting the color histogram from the video and normalizing it with a kernel function to obtain the color-distribution feature of the target;
2) predicting the motion position of the target with a particle filter algorithm, performed as follows:
2-1) in the first frame, generating a random sample point set S_{t-1} = { s_{t-1}^(n) }, n = 1, ..., N from a normal distribution, and selecting from it N samples obeying the probability distribution;
2-2) obtaining the sample point set of the next frame image S_t = { s_t^(n) }, n = 1, ..., N from the sample point set of the current frame S'_{t-1} = { s'_{t-1}^(n) }, n = 1, ..., N according to the formula

    s_t^(n) = A s'_{t-1}^(n) + w_{t-1}^(n)

where w_{t-1}^(n) is system noise obeying a Gaussian distribution, A represents the motion model, subscript t-1 denotes the current frame, subscript t denotes the next frame, and n indexes the sample points; each sample point of the next frame is generated from a sample point of the current frame according to the motion model;
2-3) predicting the position of the target in the next frame image;
3) from the predicted target position obtained in step 2), computing the exact target position with a mean-shift clustering algorithm.
2. The target tracking method for sports video according to claim 1, characterized in that the extraction of the color histogram in step 1) comprises the following steps:
1-1) computing the weight of each pixel in the target;
1-2) computing the color histogram of the target.
3. The target tracking method for sports video according to claim 1, characterized in that the prediction of the target position in the next frame image in step 2-3) comprises:
2-3-1) computing the similarity between each sample point in the target and the motion model;
2-3-2) computing, from the similarities obtained in step 2-3-1), the probability weight of each sample point in the target;
2-3-3) estimating, from the probability weights obtained in step 2-3-2), the expectation of the target position; the resulting expectation of the target position is the position of the target in the next frame image.
4. The target tracking method for sports video according to claim 1, characterized in that the exact position of the moving target in step 3) is computed as follows:
3-1) computing the similarity between the predicted target position obtained in step 2) and the motion model; if the similarity is greater than a pre-specified threshold, the motion model is unchanged; if it is less than the threshold, the motion model is changed and the procedure jumps back to step 2-2) to predict the target position again, until the similarity exceeds the threshold;
3-2) computing the corrected target position according to the formula;
3-3) subtracting the predicted position of step 2) from the corrected position of step 3-2) and taking the norm of the difference; if the norm is less than a user-set value ε, stopping the iteration, the resulting corrected position being the actual exact position of the moving target; otherwise assigning the corrected position to the predicted position and executing step 3-2) again to obtain a new corrected position.
5. The target tracking method for sports video according to claim 4, characterized in that the motion models comprise four kinds: constant velocity, constant acceleration, static, and collision; for different motion models, A takes different values in step 2-2).
6. The target tracking method for sports video according to claim 4, characterized in that the pre-specified threshold in step 3-1) is 0.8.
7. The target tracking method for sports video according to claim 4, characterized in that the user-set value ε in step 3-3) is 2 or 3.
CNB200510135495XA 2005-12-31 2005-12-31 Target tracking method of sports video Expired - Fee Related CN100531405C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200510135495XA CN100531405C (en) 2005-12-31 2005-12-31 Target tracking method of sports video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB200510135495XA CN100531405C (en) 2005-12-31 2005-12-31 Target tracking method of sports video

Publications (2)

Publication Number Publication Date
CN1992911A true CN1992911A (en) 2007-07-04
CN100531405C CN100531405C (en) 2009-08-19

Family

ID=38214785

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200510135495XA Expired - Fee Related CN100531405C (en) 2005-12-31 2005-12-31 Target tracking method of sports video

Country Status (1)

Country Link
CN (1) CN100531405C (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009086683A1 (en) * 2007-12-29 2009-07-16 Intel Corporation Automatic detection, labeling and tracking of team members in a video
CN101753852A (en) * 2008-12-15 2010-06-23 姚劲草 Sports event dynamic mini-map based on target detection and tracking
CN101610412B (en) * 2009-07-21 2011-01-19 北京大学 Visual tracking method based on multi-cue fusion
WO2011035470A1 (en) * 2009-09-24 2011-03-31 Hewlett-Packard Development Company, L.P. Particle tracking method and apparatus
CN101453660B (en) * 2007-12-07 2011-06-08 华为技术有限公司 Video object tracking method and apparatus
CN101923716B (en) * 2009-06-10 2012-07-18 新奥特(北京)视频技术有限公司 Method for improving particle filter tracking effect
CN106780539A (en) * 2016-11-30 2017-05-31 航天科工智能机器人有限责任公司 Robot vision tracking
CN107543548A (en) * 2017-08-22 2018-01-05 京东方科技集团股份有限公司 Movable personnel positioning apparatus and activity data obtain equipment
CN109145991A (en) * 2018-08-24 2019-01-04 北京地平线机器人技术研发有限公司 Image group generation method, image group generating means and electronic equipment
CN113420183A (en) * 2021-06-21 2021-09-21 成都神州数码索贝科技有限公司 Cross-video target retrieval and collision method and system in video detection system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100382600C (en) * 2004-04-22 2008-04-16 上海交通大学 Detection method of moving object under dynamic scene
CN100337249C (en) * 2004-04-23 2007-09-12 中国科学院计算技术研究所 A video motion object dividing method
CN1287600C (en) * 2004-05-17 2006-11-29 中国科学院计算技术研究所 Tree block structure and multi-frame-reference motion estimating method and apparatus

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453660B (en) * 2007-12-07 2011-06-08 华为技术有限公司 Video object tracking method and apparatus
WO2009086683A1 (en) * 2007-12-29 2009-07-16 Intel Corporation Automatic detection, labeling and tracking of team members in a video
CN101753852A (en) * 2008-12-15 2010-06-23 姚劲草 Sports event dynamic mini-map based on target detection and tracking
CN101923716B (en) * 2009-06-10 2012-07-18 新奥特(北京)视频技术有限公司 Method for improving particle filter tracking effect
CN101610412B (en) * 2009-07-21 2011-01-19 北京大学 Visual tracking method based on multi-cue fusion
WO2011035470A1 (en) * 2009-09-24 2011-03-31 Hewlett-Packard Development Company, L.P. Particle tracking method and apparatus
CN106780539A (en) * 2016-11-30 2017-05-31 航天科工智能机器人有限责任公司 Robot vision tracking
CN106780539B (en) * 2016-11-30 2019-08-20 航天科工智能机器人有限责任公司 Robot vision tracking
CN107543548A (en) * 2017-08-22 2018-01-05 京东方科技集团股份有限公司 Movable personnel positioning apparatus and activity data obtain equipment
CN107543548B (en) * 2017-08-22 2020-11-06 京东方科技集团股份有限公司 Activity personnel positioning device and activity data acquisition equipment
CN109145991A (en) * 2018-08-24 2019-01-04 北京地平线机器人技术研发有限公司 Image group generation method, image group generating means and electronic equipment
CN113420183A (en) * 2021-06-21 2021-09-21 成都神州数码索贝科技有限公司 Cross-video target retrieval and collision method and system in video detection system

Also Published As

Publication number Publication date
CN100531405C (en) 2009-08-19

Similar Documents

Publication Publication Date Title
CN1992911A (en) Target tracking method of sports video
Huang et al. Tracknet: A deep learning network for tracking high-speed and tiny objects in sports applications
CN106778854B (en) Behavior identification method based on trajectory and convolutional neural network feature extraction
Wang et al. Inverse sparse tracker with a locally weighted distance metric
CN107633226B (en) Human body motion tracking feature processing method
CN1131439C (en) Target tracking method and device therefor
CN104036287B (en) Human movement significant trajectory-based video classification method
CN103488993B (en) A kind of crowd's abnormal behaviour recognition methods based on FAST
CN106919902B (en) Vehicle identification and track tracking method based on CNN
CN107833239B (en) Optimization matching target tracking method based on weighting model constraint
CN106295532B (en) A kind of human motion recognition method in video image
CN111080673A (en) Anti-occlusion target tracking method
CN107844739B (en) Robust target tracking method based on self-adaptive simultaneous sparse representation
Xiang et al. Lightweight fully convolutional network for license plate detection
CN110991397B (en) Travel direction determining method and related equipment
CN104484672B (en) Quick licence plate recognition method based on multiframe picture and autonomous learning
CN101324958A (en) Method and apparatus for tracking object
CN110033012A (en) A kind of production method for tracking target based on channel characteristics weighted convolution neural network
CN104966305A (en) Foreground detection method based on motion vector division
CN111310609A (en) Video target detection method based on time sequence information and local feature similarity
CN108200432A (en) A kind of target following technology based on video compress domain
CN112132869A (en) Vehicle target track tracking method and device
CN111242971B (en) Target tracking method based on improved double-center particle swarm optimization algorithm
De Rita et al. CNN-based object detection on low precision hardware: Racing car case study
CN108320301B (en) Target tracking optimization method based on tracking learning detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090819

Termination date: 20201231