CN102393909B - Method for detecting goal events in soccer video based on hidden markov model - Google Patents
- Publication number: CN102393909B (application CN201110180084.8A)
- Authority
- CN
- China
- Prior art keywords
- shot
- semantic
- frame image
- width
- hue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention discloses a method for detecting goal events in soccer video based on a hidden Markov model, which addresses the complex event-detection models and low detection rates of the prior art. The method comprises the following steps: first, perform physical shot segmentation and semantic shot labeling on the training and test videos, and form a training dataset and a test dataset from the resulting semantic shot sequences; second, compute the initial parameters of the hidden Markov model from the training dataset; third, train the initial model with the Baum-Welch algorithm on the training dataset to establish the hidden Markov model of the goal event; fourth, compute the probability that the model generates each training sequence with the forward algorithm and derive a decision threshold; finally, compute the probability that the model generates each test sequence and detect goal events in the test video against the decision threshold. The method detects semantic goal events accurately and is applicable to semantic-analysis tasks such as highlight detection in soccer video.
Description
Technical field
The invention belongs to the field of video information retrieval and relates to sports video semantic analysis. It can be used for goal event detection in soccer video, detecting goal events quickly and accurately.
Background technology
Sports video attracts wide attention from researchers and from society at large because of its huge audience and great commercial value. Automatic detection of highlight events in sports video has long been a focus of video semantic analysis research; its difficulty lies in bridging the semantic gap between low-level features and high-level semantics. Scholars at home and abroad have studied this problem extensively and achieved notable results.
The main existing methods based on machine learning are:
(1) Ding Y, Fan G L. Sports Video Mining via Multichannel Segmental Hidden Markov Models [J]. IEEE Trans. on Multimedia, 2009, 11(7): 1301-1309. Exploiting the strength of hidden Markov models at modeling temporal patterns, this method builds a multichannel segmental hidden Markov model that parses the video hierarchically and in parallel, capturing the interaction between multiple hidden Markov chains more accurately; its semantic event detection accuracy reaches 87.06%, but the model structure is rather complex.
(2) Sadlier D A, O'Connor N E. Event detection in field sports video using audio-visual features and a support vector machine [J]. IEEE Trans. on Circuits and Systems for Video Technology, 2005, 15(10): 1225-1233. This method builds audio-visual feature detection units and fuses the extracted features with a support vector machine, detecting eventful versus non-eventful segments in soccer, rugby, and similar videos. Because it treats semantic event detection directly as a feature classification problem and does not fully exploit semantic information, its event detection accuracy only reaches 74%.
(3) Xu C S, Zhang Y F, Zhu G Y, et al. Using webcast text for semantic event detection in broadcast sports video [J]. IEEE Trans. on Multimedia, 2008, 10(7): 1342-1355. This method uses latent semantic analysis to detect key events in webcast text, then feeds the text detection results and low-level features into a conditional random field model to detect multiple semantic events in soccer and basketball video. However, building the model is time-consuming, and without hidden state variables it cannot effectively mine the latent patterns of semantic events, which limits the achievable detection performance.
Summary of the invention
The object of the present invention is to overcome the above shortcomings of the prior art by proposing a soccer video goal event detection method based on a hidden Markov model, which builds a simple and effective goal event model, uses hidden state variables to mine the temporal patterns of semantic events, and introduces semantic information to improve event detection accuracy.
To achieve the above object, the technical scheme of the present invention comprises the following steps:
(1) Perform physical shot segmentation on N_1 training video clips and N_2 test video clips respectively, obtaining the physical shot sequence P_d of the d-th training clip and the physical shot sequence Q_e of the e-th test clip, where d ∈ {1, 2, …, N_1} and e ∈ {1, 2, …, N_2};
(2) Perform semantic labeling on the physical shots in P_d and in Q_e respectively, obtaining the semantic shot sequence O_d of the d-th training clip and the semantic shot sequence Z_e of the e-th test clip, each composed of long shots, medium shots, close-up shots, audience shots, and replay shots; take the N_1 training sequences O_1, O_2, …, O_{N_1} as the training dataset O and the N_2 test sequences Z_1, Z_2, …, Z_{N_2} as the test dataset Z;
(3) For the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} in the training dataset O, manually judge the game state of each semantic shot in each sequence, i.e. the in-play state θ_1 or the break state θ_2, obtaining N_1 state sequences W_1, W_2, …, W_{N_1};
(4) Define the semantic shot set ε = {s_1, s_2, s_3, s_4, s_5}, where s_1, s_2, s_3, s_4, s_5 denote the five kinds of semantic shots: s_1 is the long shot, s_2 the medium shot, s_3 the close-up shot, s_4 the audience shot, and s_5 the replay shot;
(5) From the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} in the training dataset O and the corresponding N_1 state sequences W_1, W_2, …, W_{N_1}, compute the initial model parameters λ = (U, A, C) of the hidden Markov model, where U is the initial state probability vector, A is the state transition probability matrix, and C is the observation probability matrix;
(6) Using the training dataset O, train the initial model parameters λ = (U, A, C) of the hidden Markov model with the Baum-Welch algorithm to obtain the final model parameters λ* = (U*, A*, C*) of the goal event hidden Markov model, and use these final parameters to establish the hidden Markov model of the goal event, where U* is the final initial-state probability vector, A* the final state transition matrix, and C* the final observation probability matrix;
(7) From the goal event hidden Markov model and the d-th semantic shot sequence O_d in the training dataset O, compute with the forward algorithm the probability P(O_d | λ*) that the model generates O_d;
(8) Among the probabilities P(O_1 | λ*), P(O_2 | λ*), …, P(O_{N_1} | λ*) that the goal event hidden Markov model generates the N_1 training sequences, select the minimum as the decision threshold T_1 of the goal event:
T_1 = min_d P(O_d | λ*);
(9) From the goal event hidden Markov model and the e-th semantic shot sequence Z_e in the test dataset Z, compute with the forward algorithm the probability P(Z_e | λ*) that the model generates Z_e;
(10) If P(Z_e | λ*) ≥ T_1, the e-th test video clip contains a goal event; if P(Z_e | λ*) < T_1, it does not.
Compared with the prior art, the present invention has the following advantages:
(1) By establishing a hidden Markov model of the goal event, the invention makes full use of the semantic information in the physical shots, improving goal event detection performance; moreover, the model is simple to build and needs no complicated training;
(2) By labeling the physical shots of the video as semantic shots and then feeding the semantic shot sequences into the hidden Markov model for goal event detection, the invention effectively narrows the semantic gap between low-level features and high-level semantics, improving goal event detection performance.
Description of the drawings
Fig. 1 shows representative frames of a goal sequence and a non-goal sequence in soccer video;
Fig. 2 is the flowchart of the present invention.
Embodiment
One, basic theory introduction
Soccer matches are deeply loved by the public, but the video data of a match is enormous while the highlight events of interest to viewers usually occupy only a very small part of it. Analyzing and processing match video to achieve semantic detection of highlight events such as goals and penalty kicks is therefore essential in the field of soccer video semantic analysis. A soccer match video has a specific structure; accurately mining this inherent structure and its internal connections, and building an effective structural model of soccer match video, makes semantic detection of highlight events possible and has important theoretical value and market prospects in the field of sports video semantic analysis.
Soccer match video clips can be divided into goal clips and non-goal clips, each consisting of long shots, medium shots, close-up shots, audience shots, and replay shots. Analysis of a large number of real match videos shows that goal clips contain more close-up and replay shots and fewer long and medium shots. Fig. 1 shows representative frames of a goal sequence and a non-goal sequence in soccer video: Fig. 1(a) is a goal sequence, showing one goal event with 5 shots, namely a long shot of the whole pitch, a close-up shot of a player, an audience shot, a medium shot containing several players, and a replay shot; Fig. 1(b) is a non-goal sequence, showing a non-goal event as alternating long and medium shots.
A hidden Markov model is a doubly stochastic process: a Markov chain, the underlying stochastic process, describes the state transitions, but the states themselves are invisible; a second stochastic process describes the statistical relationship between the states and the visible observation sequence they generate.
Definition of the hidden Markov model:
λ = (N, M, U, A, C), abbreviated as λ = (U, A, C)
where:
N is the number of states of the model; {θ_1, θ_2, …, θ_N} are the N states; q_t is the state of the model at time t, q_t ∈ {θ_1, θ_2, …, θ_N}.
M is the number of observation symbols each state can produce at any time; {s_1, s_2, …, s_M} are the M observation symbols; E_t is the observation produced by state q_t at time t, E_t ∈ {s_1, s_2, …, s_M}.
U is the initial state probability vector, U = {U_1, U_2, …, U_N}, U_i = P(q_1 = θ_i), i = 1, 2, …, N, where U_i is the probability that the model is in state θ_i at time t = 1.
A is the state transition probability matrix, A = (a_ij)_{N×N}, a_ij = P(q_{t+1} = θ_j | q_t = θ_i), j = 1, 2, …, N, where a_ij is the probability that the model is in state θ_j at time t+1 given that it is in state θ_i at time t.
C is the observation probability matrix, C = (c_jk)_{N×M}, c_jk = P(E_t = s_k | q_t = θ_j), k = 1, 2, …, M, where c_jk is the probability that state q_t = θ_j produces the observation E_t = s_k at time t.
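As an illustration, the model λ = (U, A, C) defined above can be held in plain NumPy arrays. The numeric values below are placeholders chosen for illustration only, not the patent's trained parameters; here N = 2 hidden states (θ_1 in play, θ_2 break) and M = 5 observation symbols (the five semantic shot types s_1 to s_5):

```python
import numpy as np

N, M = 2, 5

U = np.array([0.6, 0.4])                  # U_i = P(q_1 = theta_i)
A = np.array([[0.7, 0.3],                 # a_ij = P(q_{t+1}=theta_j | q_t=theta_i)
              [0.4, 0.6]])
C = np.array([[0.4, 0.3, 0.1, 0.1, 0.1],  # c_jk = P(E_t = s_k | q_t = theta_j)
              [0.1, 0.1, 0.3, 0.2, 0.3]])

# Sanity checks: every probability row must sum to 1.
assert np.isclose(U.sum(), 1.0)
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(C.sum(axis=1), 1.0)
```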
Two, football video goal event detecting method
With reference to Fig. 2, the steps of the soccer video goal event detection method of the present invention based on the hidden Markov model are as follows:
Step 1. Perform physical shot segmentation on the video clips to obtain physical shot sequences.
Select goal video clips as the training video clips, and select both goal and non-goal video clips as the test video clips. Perform physical shot segmentation on the N_1 training clips and N_2 test clips respectively, obtaining the physical shot sequence P_d of the d-th training clip and the physical shot sequence Q_e of the e-th test clip, where d ∈ {1, 2, …, N_1} and e ∈ {1, 2, …, N_2}.
Step 2. Perform semantic labeling on the physical shots in P_d and in Q_e respectively, assigning a semantic label to each physical shot that carries semantic information, and obtain the semantic shot sequence O_d of the d-th training clip and the semantic shot sequence Z_e of the e-th test clip, each composed of long shots, medium shots, close-up shots, audience shots, and replay shots.
(2.1) Label every physical shot in P_d and in Q_e as either a real-time shot or a replay shot:
(2.1a) Convert each frame image of the training or test video clip (each clip containing N_3 frame images) from the RGB color space, composed of the red component R, green component G, and blue component B, to the HSV color space, obtaining the value h of the hue component H, the value s of the saturation component S, and the value v of the value component V, by the standard conversion (all normalized to [0, 1]):
v = MAX
s = (MAX − MIN) / MAX, with s = 0 when MAX = 0
h = ((g − b) / (MAX − MIN)) / 6 mod 1, if MAX = r
h = ((b − r) / (MAX − MIN) + 2) / 6, if MAX = g
h = ((r − g) / (MAX − MIN) + 4) / 6, if MAX = b
where r, g, b are the normalized values of the red, green, and blue components of each pixel of each frame image, and MAX and MIN are the maximum and minimum of r, g, b for each pixel, computed as:
MAX = max(r, g, b)
MIN = min(r, g, b)
where r′, g′, b′ are the raw values of the red, green, and blue components of each pixel of each frame image;
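The per-pixel conversion in (2.1a) can be sketched as follows; this is the standard RGB-to-HSV formula with h, s, v all normalized to [0, 1], assuming 8-bit input components:

```python
def rgb_to_hsv(r_prime, g_prime, b_prime):
    """Convert one 8-bit RGB pixel to (h, s, v), each in [0, 1]."""
    r, g, b = r_prime / 255.0, g_prime / 255.0, b_prime / 255.0  # normalise
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                      # value = MAX
    s = 0.0 if mx == 0 else (mx - mn) / mx      # saturation
    if mx == mn:                                # achromatic pixel
        h = 0.0
    elif mx == r:
        h = ((g - b) / (mx - mn) / 6.0) % 1.0
    elif mx == g:
        h = ((b - r) / (mx - mn) + 2.0) / 6.0
    else:
        h = ((r - g) / (mx - mn) + 4.0) / 6.0
    return h, s, v
```

For example, pure green maps to hue 1/3 with full saturation and value.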
(2.1b) From the number of pixels num(hue_l) whose hue value h falls in the l-th level index hue_l of the n′-th frame image, compute the value hist_{n′}(hue_l) of the 256-bin hue histogram of the n′-th frame image at index hue_l:
hist_{n′}(hue_l) = num(hue_l)
where n′ ∈ {1, 2, …, N_3}, hue_l is the l-th level index of the hue component of the n′-th frame image, l ∈ {1, 2, …, 256}, hue_l ∈ {1, 2, …, 256};
(2.1c) From the value hist_{n+1}(hue_l) of the hue histogram of the (n+1)-th frame image at index hue_l and the value hist_n(hue_l) of the hue histogram of the n-th frame image at index hue_l, compute the hue histogram difference HHD_n between the (n+1)-th and n-th frame images:
HHD_n = (1 / (L × K)) Σ_{l=1}^{256} |hist_{n+1}(hue_l) − hist_n(hue_l)|
where L is the height of each frame image, K is the width of each frame image, and n ∈ {1, 2, …, N_3 − 1};
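Steps (2.1b) and (2.1c) can be sketched with NumPy; hue values are assumed already normalized to [0, 1) as in (2.1a), and zero-based bin indices are derived from h:

```python
import numpy as np

def hue_histogram(h_plane):
    """256-bin histogram of a frame's hue plane (hue values in [0, 1))."""
    h = np.asarray(h_plane)
    idx = np.minimum((h * 256).astype(int), 255)   # bin index 0..255
    return np.bincount(idx.ravel(), minlength=256)

def hue_histogram_difference(h_frame_a, h_frame_b):
    """HHD between two consecutive frames, normalised by frame size L*K."""
    ha, hb = hue_histogram(h_frame_a), hue_histogram(h_frame_b)
    return np.abs(ha - hb).sum() / np.asarray(h_frame_a).size
```

With this normalisation, two frames of completely different dominant hue give an HHD of 2.0, the maximum possible value.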
(2.1d) From the hue histogram differences HHD_n, compute the average HHD of the N_3 − 1 hue histogram differences of this video clip:
HHD = (1 / (N_3 − 1)) Σ_{n=1}^{N_3−1} HHD_n
(2.1e) Select the frames whose HHD_n is greater than the threshold T_2, where T_2 is 2 times the average HHD of this video clip; here T_2 = 0.1938;
(2.1f) Select the shots ls_w whose duration is 10 to 20 frames, obtaining a series of candidate logo shots ls_1, ls_2, …, ls_{N_4}, where w ∈ {1, 2, …, N_4} and N_4 is the total number of candidate logo shots;
(2.1g) Real logo shots must occur in pairs, the segment between a pair of logo shots is a replay segment, and a replay segment contains at least one shot. Using the shot segmentation step, count the number of shots in the video segment between candidate logo shot ls_{w′} and candidate logo shot ls_{w′−1}: if the segment contains more than one shot, label the shots in it as replay shots; if it contains exactly one shot, label that shot as a real-time shot, where w′ ∈ {2, 3, …, N_4};
(2.2) Further label the real-time shots as long shots, medium shots, and non-field shots. A long shot gives an overview of the match and usually contains a very large field area; a medium shot depicts the whole body and actions of one or a few players and also contains some field area, but less than a long shot. The field ratio PR, defined as the ratio of field pixels to total pixels in a frame image, is therefore used to distinguish long shots from medium shots. When a long shot contains part of the spectator area, its field area and thus its field ratio PR decrease, so long and medium shots are easily mislabeled. The present invention therefore crops away the top third of each frame image and labels the real-time shots as long shots, medium shots, or non-field shots according to the field ratio PR of the cropped frame and chosen thresholds:
(2.2a) Select 60 long-view frame images from the real-time shots. From the value hist_p(hue_l) of the 256-bin hue histogram of the p-th frame image at index hue_l, compute the value sh(hue_l) of the cumulative hue histogram of the 60 long-view frames at index hue_l:
sh(hue_l) = Σ_{p=1}^{60} hist_p(hue_l)
where hue_l is the l-th level index of the hue component of the p-th frame image, l ∈ {1, 2, …, 256}, hue_l ∈ {1, 2, …, 256}, p ∈ {1, 2, …, 60};
(2.2b) From the values sh(hue_l) of the cumulative histogram, compute its peak value F:
F = max{sh(hue_1), sh(hue_2), …, sh(hue_256)};
(2.2c) From the value at each index of the cumulative histogram and its peak F, determine the lower-limit index hue_low satisfying:
sh(hue_low) ≥ 0.2 × F
sh(hue_low − 1) < 0.2 × F
where sh(hue_low) is the value of the cumulative histogram at the lower-limit index hue_low, and sh(hue_low − 1) is its value at index hue_low − 1;
(2.2d) From the value at each index of the cumulative histogram and its peak F, determine the upper-limit index hue_up satisfying:
sh(hue_up) ≥ 0.2 × F
sh(hue_up + 1) < 0.2 × F
where sh(hue_up) is the value of the cumulative histogram at the upper-limit index hue_up, and sh(hue_up + 1) is its value at index hue_up + 1;
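Steps (2.2a) to (2.2d) amount to locating the hue band around the histogram peak that the field color occupies. A sketch, assuming the set of bins with sh ≥ 0.2F is contiguous (zero-based bin indices):

```python
import numpy as np

def field_hue_bounds(long_view_hists):
    """Given the 256-bin hue histograms of the sampled long-view frames,
    return (hue_low, hue_up): the band of bins holding the dominant hue."""
    sh = np.sum(long_view_hists, axis=0)    # cumulative histogram sh(hue_l)
    F = sh.max()                            # peak value F
    above = np.flatnonzero(sh >= 0.2 * F)   # bins at or above 20% of the peak
    return above.min(), above.max()         # lower and upper limit indices
```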
(2.2e) Crop away the top third of each frame image of the real-time shot, count the number of field pixels C_1 in the cropped frame whose hue value h lies in the interval [hue_low/256, hue_up/256], and compute the field ratio PR of each frame image:
PR = C_1 / (L × K)
where L is the height and K the width of each (cropped) frame image;
(2.2f) Judge the type of each frame image from the preset thresholds T_3, T_4 and its field ratio PR:
if the field ratio PR of a frame image is greater than T_3, it is a long-view frame;
if PR is less than or equal to T_3 and greater than or equal to T_4, it is a medium-view frame;
if PR is less than T_4, it is a non-field frame;
where the thresholds are T_3 = 0.70 and T_4 = 0.30;
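Rules (2.2e) and (2.2f) combine into a small per-frame classifier; in this sketch `h_plane` is assumed to already be the hue plane of the cropped frame, with hue values in [0, 1]:

```python
def classify_frame_by_field_ratio(h_plane, hue_low, hue_up, t3=0.70, t4=0.30):
    """Classify one cropped frame by its field ratio PR (rule 2.2f)."""
    # A pixel is a field pixel when its hue lies in [hue_low/256, hue_up/256].
    in_field = [(hue_low / 256.0) <= h <= (hue_up / 256.0)
                for row in h_plane for h in row]
    pr = sum(in_field) / len(in_field)      # PR = C1 / total pixels
    if pr > t3:
        return 'long-view'
    if pr >= t4:
        return 'medium-view'
    return 'non-field'
```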
(2.2g) If more than 55% of the frame images of a real-time shot to be labeled are long-view frames, label the shot as a long shot; if more than 55% are medium-view frames, label it as a medium shot; otherwise label it as a non-field shot;
(2.3) Further label the non-field shots as close-up shots and audience shots. An audience shot contains many spectators, a complex background, and rich edge information, whereas a close-up shot is dominated by a person and contains more smooth regions. The edge pixel ratio EPR, defined as the ratio of edge pixels to total pixels in a frame image, is therefore used: according to the EPR and a chosen threshold, the present invention labels the non-field shots as close-up or audience shots as follows:
(2.3a) Convert each frame image of the non-field shot from the RGB color space to the YCbCr color space, obtaining the value y of the luma component Y, the value cb of the blue-difference chroma component Cb, and the value cr of the red-difference chroma component Cr:
y = 0.299r′ + 0.587g′ + 0.114b′
cb = 0.564(b′ − y)
cr = 0.713(r′ − y)
where r′, g′, b′ are the values of the red, green, and blue components of each pixel of each frame image;
(2.3b) From the value y of the luma component Y of each frame image, detect the edge pixels with the Canny operator, obtaining the number of edge pixels C_2;
(2.3c) From the number of edge pixels C_2, compute the edge pixel ratio EPR of each frame image of the non-field shot to be labeled:
EPR = C_2 / (L × K)
where L is the height and K the width of each frame image;
(2.3d) If the EPR of a frame image is greater than the threshold T_5, label it as an audience frame, otherwise as a close-up frame, where T_5 = 0.10;
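Steps (2.3b) to (2.3d) can be sketched as below. Note that the Canny operator named in the patent is replaced here by a simple gradient-magnitude detector so the sketch needs only NumPy; `grad_threshold` is an assumed parameter, not from the patent:

```python
import numpy as np

def edge_pixel_ratio(y_plane, grad_threshold=0.2):
    """EPR = edge pixels / total pixels, computed on the luma plane.

    Stand-in for Canny: a pixel counts as an edge when its gradient
    magnitude exceeds grad_threshold.
    """
    y = np.asarray(y_plane, dtype=float)
    gy, gx = np.gradient(y)                  # gradients along rows, columns
    edges = np.hypot(gx, gy) > grad_threshold
    return edges.sum() / y.size

def classify_non_field_frame(y_plane, t5=0.10):
    """Rule (2.3d): EPR > T5 -> audience frame, otherwise close-up frame."""
    return 'audience' if edge_pixel_ratio(y_plane) > t5 else 'close-up'
```

A flat luma plane has no edges and is classified close-up; a frame with a strong luma step is classified audience.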
(2.3e) If more than 55% of the frame images of a non-field shot to be labeled are audience frames, label the shot as an audience shot; otherwise label it as a close-up shot.
Step 3. Take the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} of the training video clips as the training dataset O, and the N_2 semantic shot sequences Z_1, Z_2, …, Z_{N_2} of the test video clips as the test dataset Z.
Step 4. For the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} in the training dataset O, manually judge the game state of each semantic shot in each sequence, i.e. the in-play state θ_1 or the break state θ_2, obtaining N_1 state sequences W_1, W_2, …, W_{N_1}.
Step 5. Define the semantic shot set ε = {s_1, s_2, s_3, s_4, s_5}, where s_1, s_2, s_3, s_4, s_5 denote the five kinds of semantic shots: s_1 is the long shot, s_2 the medium shot, s_3 the close-up shot, s_4 the audience shot, and s_5 the replay shot.
Step 6. From the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} in the training dataset O and the corresponding N_1 state sequences W_1, W_2, …, W_{N_1}, compute the initial model parameters λ = (U, A, C) of the hidden Markov model, where U is the initial state probability vector, A the state transition matrix, and C the observation probability matrix.
(6.1) From the number x_i of semantic shots in state θ_i across the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} and the total number x of semantic shots in those sequences, compute the initial state probability vector U:
U = {U_1, U_2, …, U_N}, U_i = x_i / x
where i ∈ {1, 2, …, N}; U_i is the probability that the model is in state θ_i at time t = 1; N is the number of states of the model, N = 2; q_t is the state of the model at time t, q_t ∈ {θ_1, θ_2};
(6.2) Count the number x_(i,j) of times a semantic shot in the N_1 sequences O_1, O_2, …, O_{N_1} transfers from state θ_i to state θ_j and the number x_(i,*) of times it transfers from state θ_i to any state, and from x_(i,j) and x_(i,*) compute the state transition matrix A:
A = (a_ij)_{N×N}, a_ij = x_(i,j) / x_(i,*)
where j ∈ {1, 2, …, N}; a_ij is the probability that the model is in state θ_j at time t+1 given that it is in state θ_i at time t;
(6.3) From the number x_(j,k) of semantic shots s_k in state θ_j across the N_1 sequences O_1, O_2, …, O_{N_1} and the total number x_j of semantic shots in state θ_j, compute the observation probability matrix C:
C = (c_jk)_{N×M}, c_jk = x_(j,k) / x_j
where k ∈ {1, 2, …, M}; c_jk is the probability that state q_t = θ_j produces the observation E_t = s_k at time t; M is the number of observation symbols each state can produce at any time, M = 5; the five kinds of semantic shots s_1, s_2, s_3, s_4, s_5 are the five observation symbols of the hidden Markov model.
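Steps (6.1) to (6.3) are plain counting. A sketch with zero-based state indices (0 for θ_1, 1 for θ_2) and shot-type indices (0 to 4 for s_1 to s_5), assuming every state occurs in the training data and has at least one outgoing transition:

```python
import numpy as np

def initial_hmm_parameters(obs_seqs, state_seqs, n_states=2, n_symbols=5):
    """Count-based initial estimates of lambda = (U, A, C).

    obs_seqs[d][t]   in {0..n_symbols-1}: semantic shot type of shot t
    state_seqs[d][t] in {0..n_states-1}:  manually judged game state
    """
    U = np.zeros(n_states)               # U_i = x_i / x
    A = np.zeros((n_states, n_states))   # a_ij = x_(i,j) / x_(i,*)
    C = np.zeros((n_states, n_symbols))  # c_jk = x_(j,k) / x_j
    for obs, states in zip(obs_seqs, state_seqs):
        for t, (o, s) in enumerate(zip(obs, states)):
            U[s] += 1                    # shot counted toward its state
            C[s, o] += 1                 # shot type emitted in that state
            if t + 1 < len(states):
                A[s, states[t + 1]] += 1 # observed state transition
    U /= U.sum()
    A /= A.sum(axis=1, keepdims=True)
    C /= C.sum(axis=1, keepdims=True)
    return U, A, C
```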
Step 7. Using the training dataset O, train the initial model parameters λ = (U, A, C) of the hidden Markov model with the Baum-Welch algorithm to obtain the final model parameters λ* = (U*, A*, C*) of the goal event hidden Markov model, and use these final parameters to establish the hidden Markov model of the goal event, where U* is the final initial-state probability vector, A* the final state transition matrix, and C* the final observation probability matrix.
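The patent names the Baum-Welch algorithm without detailing it; a textbook sketch for a discrete HMM follows. It omits probability scaling, so it only suits short sequences such as the shot sequences here:

```python
import numpy as np

def baum_welch(obs_seqs, U, A, C, n_iter=20):
    """Re-estimate (U, A, C) from observation sequences (Baum-Welch)."""
    U, A, C = U.copy(), A.copy(), C.copy()
    N, M = C.shape
    for _ in range(n_iter):
        U_num = np.zeros(N)
        A_num, A_den = np.zeros((N, N)), np.zeros(N)
        C_num, C_den = np.zeros((N, M)), np.zeros(N)
        for obs in obs_seqs:
            T = len(obs)
            alpha = np.zeros((T, N))            # forward variables
            beta = np.zeros((T, N))             # backward variables
            alpha[0] = U * C[:, obs[0]]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * C[:, obs[t]]
            beta[T - 1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = A @ (C[:, obs[t + 1]] * beta[t + 1])
            p_obs = alpha[-1].sum()             # P(obs | current model)
            gamma = alpha * beta / p_obs        # P(q_t = i | obs)
            U_num += gamma[0]
            for t in range(T - 1):
                # xi[i, j] = P(q_t = i, q_{t+1} = j | obs)
                xi = (alpha[t][:, None] * A * C[:, obs[t + 1]]
                      * beta[t + 1]) / p_obs
                A_num += xi
                A_den += gamma[t]
            for t in range(T):
                C_num[:, obs[t]] += gamma[t]
                C_den += gamma[t]
        U = U_num / len(obs_seqs)
        A = A_num / A_den[:, None]
        C = C_num / C_den[:, None]
    return U, A, C
```

Each iteration leaves U, and every row of A and C, summing to 1.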
Step 8. From the goal event hidden Markov model and the d-th semantic shot sequence O_d in the training dataset O, compute with the forward algorithm the probability P(O_d | λ*) that the model generates O_d.
(8.1) From the goal event hidden Markov model and the first semantic shot O_(d,1) of the d-th sequence O_d in the training dataset O, compute the probability α_1(i) that, under the final model parameters λ*, the model is in state θ_i at time t = 1 and the first observation is O_(d,1):
α_1(i) = U*_i × η_i(O_(d,1)), i = 1, 2, …, N
where U*_i is the i-th element of the final initial-state probability vector U*, and η_i(O_(d,1)) is the probability that state q_1 = θ_i produces the observation E_1 = O_(d,1); when O_(d,1) is the k-th kind of semantic shot s_k, η_i(O_(d,1)) = c*_(i,k), the element in row i and column k of the final observation probability matrix C*;
(8.2) From the d-th semantic shot sequence O_d = (O_(d,1), O_(d,2), …, O_(d,T_d)) and the probabilities α_t(i), where T_d is the number of semantic shots in O_d, compute the probability α_(t+1)(j) that, under λ*, the model is in state θ_j at time t+1 and the first t+1 observations are O_(d,1) to O_(d,t+1):
α_(t+1)(j) = [Σ_{i=1}^{N} α_t(i) × a*_(i,j)] × η_j(O_(d,t+1)), t = 1, 2, …, T_d − 1
where α_t(i) is the probability that, under λ*, the model is in state θ_i at time t and the first t observations are O_(d,1) to O_(d,t); a*_(i,j) is the element in row i and column j of the final state transition matrix A*; η_j(O_(d,t+1)) is the probability that state q_(t+1) = θ_j produces the observation E_(t+1) = O_(d,t+1); when O_(d,t+1) is the k-th kind of semantic shot s_k, η_j(O_(d,t+1)) = c*_(j,k), the element in row j and column k of C*;
(8.3) From the probabilities α_(T_d)(i), compute the probability that the goal event hidden Markov model generates the d-th sequence O_d:
P(O_d | λ*) = Σ_{i=1}^{N} α_(T_d)(i).
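Steps (8.1) to (8.3) are the standard forward algorithm. A sketch, with the final parameters passed as NumPy arrays and the observation sequence given as zero-based symbol indices:

```python
import numpy as np

def forward_probability(obs, U, A, C):
    """P(O | lambda) by the forward algorithm.

    alpha_1(i)     = U_i * eta_i(O_1)
    alpha_{t+1}(j) = [sum_i alpha_t(i) * a_ij] * eta_j(O_{t+1})
    P(O | lambda)  = sum_i alpha_T(i)
    where eta_j(O_t) = C[j, k] when O_t is symbol s_{k+1}.
    """
    alpha = U * C[:, obs[0]]             # initialisation, step (8.1)
    for o in obs[1:]:
        alpha = (alpha @ A) * C[:, o]    # recursion, step (8.2)
    return alpha.sum()                   # termination, step (8.3)
```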
Step 9. Among the probabilities P(O_1 | λ*), P(O_2 | λ*), …, P(O_{N_1} | λ*) that the goal event hidden Markov model generates the N_1 sequences in the training dataset O, select the minimum as the decision threshold T_1 of the goal event:
T_1 = min_d P(O_d | λ*).
Step 10, according to e semantic shot sequence Z in the Hidden Markov Model (HMM) of goal event and test data set Z
e, the Hidden Markov Model (HMM) that adopts forward direction algorithm to calculate goal event produces e semantic shot sequence Z
eprobability
(10.1) according to e semantic shot sequence Z in the Hidden Markov Model (HMM) of goal event and test data set Z
ein the 1st semantic shot Z
e, 1, calculate in final mask parameter
under condition, Hidden Markov Model (HMM) is at t=1 status q constantly
1for state θ
iand the 1st observed value is e semantic shot sequence Z in test data set Z
ein the 1st semantic shot Z
e, 1probability
Wherein,
for final original state probability vector
i element, γ
i(Z
e, 1) represent that Hidden Markov Model (HMM) is at t=1 moment status q
1for state θ
iunder condition, state q
1the observed value E producing
1for e semantic shot sequence Z in test data set Z
ein the 1st semantic shot Z
e, 1probability, work as Z
e, 1be k kind semantic shot s
ktime,
for final observed value probability matrix
the capable k column element of i;
(10.2) Recursion: from the e-th semantic shot sequence Z_e in the test data set Z and the probabilities α_t(i), where T'_e is the number of semantic shots in Z_e, compute for t = 1, 2, …, T'_e − 1 the probability α_{t+1}(j) that, under the final model parameters λ̂, the HMM is in state q_{t+1} = θ_j at time t + 1 and the first t + 1 observations are, in order, the semantic shots Z_{e,1} to Z_{e,t+1}:

α_{t+1}(j) = [ Σ_{i=1}^{2} α_t(i) × â_ij ] × γ_j(Z_{e,t+1}), j = {1, 2}

where α_t(i) is the probability that, under λ̂, the HMM is in state q_t = θ_i at time t and the first t observations are, in order, Z_{e,1} to Z_{e,t}; â_ij is the element in row i, column j of the final state-transition probability matrix Â; and γ_j(Z_{e,t+1}) is the probability that, given state q_{t+1} = θ_j at time t + 1, the observation E_{t+1} produced by state q_{t+1} is Z_{e,t+1}; when Z_{e,t+1} is the k-th semantic shot type s_k, γ_j(Z_{e,t+1}) = ĉ_jk, the element in row j, column k of the final observation probability matrix Ĉ.
(10.3) Termination: from the probabilities α_{T'_e}(i), compute the probability that the goal-event HMM generates Z_e:

P(Z_e | λ̂) = Σ_{i=1}^{2} α_{T'_e}(i)

Step 11: if P(Z_e | λ̂) ≥ T_1, the e-th test video fragment contains a goal event; if P(Z_e | λ̂) < T_1, it does not. Here T_1 is the goal-event decision threshold, chosen as the minimum of the probabilities P(O_1 | λ̂), P(O_2 | λ̂), …, P(O_{N_1} | λ̂) with which the goal-event HMM generates the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} of the training data set O.
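For illustration, the forward computation of Steps 10–11 and the threshold test can be sketched as follows. The model numbers, the toy sequences and the function name `forward_prob` are invented for this example, not taken from the patent:

```python
import numpy as np

def forward_prob(U, A, C, obs):
    """Forward algorithm: probability that the discrete HMM (U, A, C)
    generates the observation sequence `obs` (0-based symbol indices)."""
    alpha = U * C[:, obs[0]]          # initialization, as in (10.1)
    for o in obs[1:]:                 # recursion, as in (10.2)
        alpha = (alpha @ A) * C[:, o]
    return alpha.sum()                # termination, as in (10.3)

# Hypothetical 2-state model: theta_1 = play, theta_2 = break;
# 5 observation symbols s_1..s_5 (long, medium, close-up, audience, replay).
U = np.array([0.8, 0.2])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
C = np.array([[0.50, 0.25, 0.15, 0.05, 0.05],
              [0.10, 0.15, 0.25, 0.25, 0.25]])

# Toy "training" goal sequences; the minimum likelihood is the threshold T_1.
train_seqs = [[0, 0, 2, 4, 4], [0, 1, 2, 3, 4]]
T1 = min(forward_prob(U, A, C, s) for s in train_seqs)

test_seq = [0, 2, 3, 4, 4]
is_goal = forward_prob(U, A, C, test_seq) >= T1   # Step 11 decision
```

In practice the per-sequence probabilities shrink geometrically with sequence length, so an implementation would normally compare log-likelihoods (or length-normalized ones) rather than raw probabilities.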
The effect of the present invention is further illustrated by the following simulation experiments.
1) Simulation conditions
The experiment videos are selected from several matches of the 2010 FIFA World Cup in South Africa, in MPEG-1 format with a frame resolution of 352 × 288. The experiment videos are divided into two parts: one part serves as the training video fragments and contains 21 goal video fragments; the remainder serves as the test video fragments and contains 29 goal video fragments and 10 non-goal video fragments. The experiment software environment is Matlab R2008a.
2) Simulation content and results
Simulation 1: with the goal-event Hidden Markov Model (HMM) established above, the probability that the model generates the test data is computed for each of the 39 test video fragments, and whether each fragment contains a goal event is detected according to the decision threshold. The experimental results are shown in Table 1.
Table 1
As can be seen from Table 1, the precision of the present invention for goal event detection in soccer video reaches 92.31% and the recall reaches 82.76%, a good result for goal event detection.
The above simulation results show that the proposed soccer video goal event detection method based on the Hidden Markov Model (HMM) can detect goal events accurately.
Claims (8)
1. A soccer video goal event detection method based on a Hidden Markov Model (HMM), comprising the steps of:
(1) performing physical shot segmentation on N_1 training video fragments and N_2 test video fragments respectively, obtaining the physical shot sequence P_d of the d-th training video fragment and the physical shot sequence Q_e of the e-th test video fragment, where d ∈ {1, 2, …, N_1}, e ∈ {1, 2, …, N_2};
(2) performing semantic labeling on the physical shots in the physical shot sequence P_d of the d-th training video fragment and on the physical shots in the physical shot sequence Q_e of the e-th test video fragment respectively, obtaining the semantic shot sequence O_d of the d-th training video fragment and the semantic shot sequence Z_e of the e-th test video fragment, each composed of long shots, medium shots, close-up shots, audience shots and replay shots; taking the N_1 semantic shot sequences of the training video fragments as the training data set O, and the N_2 semantic shot sequences of the test video fragments as the test data set Z;
(3) manually judging, for each semantic shot of each of the N_1 semantic shot sequences in the training data set O, the game state in which the shot occurs, i.e. the in-play state θ_1 or the break state θ_2, obtaining N_1 state sequences;
(4) defining the semantic shot set ε = {s_1, s_2, s_3, s_4, s_5}, where s_1, s_2, s_3, s_4, s_5 denote the five semantic shot types, i.e. s_1 is the long shot, s_2 the medium shot, s_3 the close-up shot, s_4 the audience shot and s_5 the replay shot;
(5) calculating the initial model parameters λ = (U, A, C) of the HMM from the N_1 semantic shot sequences in the training data set O and the corresponding N_1 state sequences, where U is the initial-state probability vector, A the state-transition probability matrix and C the observation probability matrix;
(6) training the initial model parameters λ = (U, A, C) of the HMM with the Baum-Welch algorithm on the training data set O, obtaining the final model parameters λ̂ = (Û, Â, Ĉ) of the goal-event HMM, where Û is the final initial-state probability vector, Â the final state-transition probability matrix and Ĉ the final observation probability matrix, and establishing the goal-event HMM with these final model parameters;
(7) computing, with the forward algorithm, the probability P(O_d | λ̂) that the goal-event HMM generates the d-th semantic shot sequence O_d of the training data set O;
(8) selecting the minimum of the probabilities P(O_1 | λ̂), P(O_2 | λ̂), …, P(O_{N_1} | λ̂) with which the goal-event HMM generates the N_1 semantic shot sequences of the training data set O as the goal-event decision threshold T_1;
(9) computing, with the forward algorithm, the probability P(Z_e | λ̂) that the goal-event HMM generates the e-th semantic shot sequence Z_e of the test data set Z;
(10) judging that the e-th test video fragment contains a goal event if P(Z_e | λ̂) ≥ T_1, and that it does not otherwise.
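Step (6) of the claim relies on standard Baum-Welch (EM) re-estimation. Below is a minimal sketch for the discrete two-state, five-symbol case; the helper name `baum_welch` and all numeric values are illustrative, and a production implementation would work in log space (or with per-step scaling) to avoid underflow on long sequences:

```python
import numpy as np

def baum_welch(seqs, U, A, C, n_iter=20):
    """Re-estimate (U, A, C) of a discrete HMM from observation
    sequences `seqs` (lists of 0-based symbol indices)."""
    U, A, C = U.copy(), A.copy(), C.copy()
    S, M = C.shape
    for _ in range(n_iter):
        U_num = np.zeros(S)
        A_num = np.zeros((S, S)); A_den = np.zeros(S)
        C_num = np.zeros((S, M)); C_den = np.zeros(S)
        for obs in seqs:
            T = len(obs)
            alpha = np.zeros((T, S))              # forward pass
            alpha[0] = U * C[:, obs[0]]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * C[:, obs[t]]
            beta = np.zeros((T, S))               # backward pass
            beta[-1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = A @ (C[:, obs[t + 1]] * beta[t + 1])
            p_obs = alpha[-1].sum()
            gamma = alpha * beta / p_obs          # state posteriors
            U_num += gamma[0]
            for t in range(T - 1):                # transition posteriors
                xi = (alpha[t][:, None] * A
                      * C[:, obs[t + 1]] * beta[t + 1]) / p_obs
                A_num += xi
            A_den += gamma[:-1].sum(axis=0)
            for t in range(T):
                C_num[:, obs[t]] += gamma[t]
            C_den += gamma.sum(axis=0)
        U = U_num / len(seqs)                     # M-step updates
        A = A_num / A_den[:, None]
        C = C_num / C_den[:, None]
    return U, A, C

# Illustrative initial guesses and two toy observation sequences.
U0 = np.array([0.6, 0.4])
A0 = np.array([[0.7, 0.3], [0.4, 0.6]])
C0 = np.full((2, 5), 0.2)
seqs = [[0, 0, 1, 4, 4], [0, 1, 2, 3, 4]]
U_hat, A_hat, C_hat = baum_welch(seqs, U0, A0, C0)
```

Each EM iteration is guaranteed not to decrease the total likelihood of the training sequences, which is the property the patent exploits when it later thresholds on the minimum training-sequence likelihood.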
2. The soccer video goal event detection method according to claim 1, wherein the semantic labeling of step (2), "performing semantic labeling on the physical shots in the physical shot sequence P_d of the d-th training video fragment and on the physical shots in the physical shot sequence Q_e of the e-th test video fragment", is carried out as follows:
(2.1) labeling each physical shot in the physical shot sequence P_d of the d-th training video fragment and in the physical shot sequence Q_e of the e-th test video fragment as either a live shot or a replay shot;
(2.2) further labeling the live shots as long shots, medium shots or non-field shots;
(2.3) further labeling the non-field shots as close-up shots or audience shots.
3. The soccer video goal event detection method according to claim 2, wherein "labeling each physical shot in the physical shot sequence P_d of the d-th training video fragment and in the physical shot sequence Q_e of the e-th test video fragment as either a live shot or a replay shot" in step (2.1) is carried out as follows:
(2.1a) converting each frame image of a training or test video fragment containing N_3 frame images from the RGB color space to the HSV color space, obtaining the value h of the hue component, the value s of the saturation component and the value v of the luminance component:

h = (g − b) / (6 × (MAX − MIN)) mod 1, if MAX = r
h = 1/3 + (b − r) / (6 × (MAX − MIN)), if MAX = g
h = 2/3 + (r − g) / (6 × (MAX − MIN)), if MAX = b
s = (MAX − MIN) / MAX
v = MAX

where r = r'/255, g = g'/255 and b = b'/255 are the normalized values of the red component R, green component G and blue component B of each pixel of each frame image, and MAX and MIN are the maximum and minimum of r, g, b for each pixel:

MAX = max(r, g, b)
MIN = min(r, g, b)

where r', g' and b' are the values of the red component R, green component G and blue component B of each pixel of each frame image;
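The conversion of step (2.1a) is the standard RGB-to-HSV mapping. One vectorized sketch, with hue, saturation and value all scaled to [0, 1] and the function name chosen for this example:

```python
import numpy as np

def rgb_to_hsv(img):
    """Per-pixel RGB -> HSV; `img` is an H x W x 3 array of 8-bit RGB."""
    rgb = img.astype(np.float64) / 255.0        # normalized r, g, b
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)                        # MAX = max(r, g, b)
    mn = rgb.min(axis=-1)                        # MIN = min(r, g, b)
    delta = mx - mn
    v = mx                                       # v = MAX
    s = np.where(mx > 0, delta / np.where(mx > 0, mx, 1.0), 0.0)
    h = np.zeros_like(mx)
    nz = delta > 0                               # hue undefined on grays
    idx = nz & (mx == r)
    h[idx] = ((g - b)[idx] / delta[idx] / 6.0) % 1.0
    idx = nz & (mx == g) & (mx != r)
    h[idx] = (b - r)[idx] / delta[idx] / 6.0 + 1 / 3
    idx = nz & (mx == b) & (mx != r) & (mx != g)
    h[idx] = (r - g)[idx] / delta[idx] / 6.0 + 2 / 3
    return h, s, v
```

Keeping h in [0, 1] matches the later field-color test, which compares hue values against the interval [hue_low/256, hue_up/256].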
(2.1b) from the number of pixels num(hue_l) whose hue-component value falls in the l-th level index hue_l of the n'-th frame image, computing the value hist_{n'}(hue_l) of the 256-bin histogram of the hue component of the n'-th frame image:

hist_{n'}(hue_l) = num(hue_l)

where n' ∈ {1, 2, …, N_3}, hue_l is the l-th level index of the hue component of the n'-th frame image, l ∈ {1, 2, …, 256}, hue_l ∈ {1, 2, …, 256};
(2.1c) from the value hist_{n+1}(hue_l) of the hue histogram of the (n+1)-th frame image and the value hist_n(hue_l) of the hue histogram of the n-th frame image, computing the hue histogram difference HHD_n between the (n+1)-th and the n-th frame image:

HHD_n = ( Σ_{l=1}^{256} | hist_{n+1}(hue_l) − hist_n(hue_l) | ) / (L × K)

where L is the height and K the width of each frame image, n ∈ {1, 2, …, N_3 − 1};
(2.1d) from the differences HHD_n, computing the average of the N_3 − 1 hue histogram differences of the video fragment:

average HHD = ( Σ_{n=1}^{N_3 − 1} HHD_n ) / (N_3 − 1);

(2.1e) selecting the frames whose HHD_n exceeds the threshold T_2, where T_2 is 2 times the average HHD of the video fragment;
(2.1f) selecting the shots ls_w whose duration is 10 to 20 frames, obtaining a series of candidate logo shots, where w ∈ {1, 2, …, N_4} and N_4 is the total number of candidate logo shots;
(2.1g) detecting, with the shot segmentation procedure, the number of shots contained in the video segment between candidate logo shot ls_{w'} and candidate logo shot ls_{w'−1}: if the segment contains more than one shot, labeling the shots in the segment as replay shots; if the segment contains exactly one shot, labeling it as a live shot, where w' ∈ {2, 3, …, N_4}.
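The shot-boundary test of steps (2.1c)–(2.1e) can be sketched as follows. The helper names and the per-frame normalization by L × K are assumptions of this sketch:

```python
import numpy as np

def hue_hist(h, bins=256):
    """256-bin hue histogram of one frame; `h` holds hue values in [0, 1)."""
    idx = np.minimum((h * bins).astype(int), bins - 1)
    return np.bincount(idx.ravel(), minlength=bins)

def candidate_boundaries(hues):
    """Indices of frames whose hue-histogram difference HHD_n exceeds
    twice the mean HHD of the fragment (the adaptive threshold T_2).
    `hues` is a list of per-frame hue planes of equal size."""
    L, K = hues[0].shape
    hists = [hue_hist(h) for h in hues]
    hhd = np.array([np.abs(hists[n + 1] - hists[n]).sum() / (L * K)
                    for n in range(len(hists) - 1)])
    T2 = 2.0 * hhd.mean()
    return np.where(hhd > T2)[0] + 1, hhd   # +1: index of the new shot's frame
```

Because T_2 is derived from the fragment's own mean difference, the test adapts to fragments with different overall motion levels rather than using one fixed cut threshold.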
4. The soccer video goal event detection method according to claim 3, wherein "further labeling the live shots as long shots, medium shots or non-field shots" in step (2.2) is carried out as follows:
(2.2a) choosing 60 long-view frame images from the live shots and, from the values hist_p(hue_l) of the 256-bin hue histograms of the p-th frame image, computing the value sh(hue_l) of the cumulative histogram of the hue component of the 60 long-view frame images:

sh(hue_l) = Σ_{p=1}^{60} hist_p(hue_l)

where hue_l is the l-th level index of the hue component of the p-th frame image, l ∈ {1, 2, …, 256}, hue_l ∈ {1, 2, …, 256}, p ∈ {1, 2, …, 60};
(2.2b) from the values sh(hue_l) of the cumulative histogram, computing the peak value F of the cumulative histogram:

F = max{sh(hue_1), sh(hue_2), …, sh(hue_256)};

(2.2c) from the value corresponding to each index of the cumulative histogram and the peak value F, determining the lower-limit index hue_low satisfying the conditions:

sh(hue_low) ≥ 0.2 × F
sh(hue_low − 1) < 0.2 × F

where sh(hue_low) is the value of the cumulative histogram at the lower-limit index hue_low and sh(hue_low − 1) the value at index hue_low − 1;
(2.2d) from the value corresponding to each index of the cumulative histogram and the peak value F, determining the upper-limit index hue_up satisfying the conditions:

sh(hue_up) ≥ 0.2 × F
sh(hue_up + 1) < 0.2 × F

where sh(hue_up) is the value of the cumulative histogram at the upper-limit index hue_up and sh(hue_up + 1) the value at index hue_up + 1;
(2.2e) cropping away the top third of each frame image of the live shot, counting the number C_1 of pixels of the cropped frame whose hue value h lies in the interval [hue_low/256, hue_up/256], and computing the field ratio PR of the frame:

PR = C_1 / ( (2/3) × L × K )

where L is the height and K the width of each frame image;
(2.2f) judging the type of each frame image from its field ratio PR:

long-view frame, if PR > T_3;
medium-view frame, if T_4 < PR ≤ T_3;
non-field frame, if PR ≤ T_4

with thresholds T_3 = 0.70 and T_4 = 0.30;
(2.2g) if more than 55% of the frame images of a live shot to be labeled are long-view frames, labeling the live shot as a long shot; if more than 55% of its frame images are medium-view frames, labeling it as a medium shot; otherwise labeling it as a non-field shot.
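Steps (2.2a)–(2.2f) can be outlined as below. Taking the smallest and largest index reaching 0.2 × F as hue_low and hue_up assumes a single dominant field-color peak, which is the situation the patent exploits, and all function names are illustrative:

```python
import numpy as np

def field_hue_range(hists, frac=0.2):
    """hue_low/hue_up of steps (2.2a)-(2.2d): 0-based indices of the
    accumulated hue histogram whose value reaches frac * peak F."""
    sh = np.sum(hists, axis=0)          # cumulative histogram sh(hue_l)
    F = sh.max()                        # peak value F
    idx = np.where(sh >= frac * F)[0]
    return int(idx.min()), int(idx.max())

def frame_type(h, hue_low, hue_up, T3=0.70, T4=0.30):
    """Field-ratio classification of steps (2.2e)-(2.2f); `h` is the hue
    plane of a frame with the top third already cropped away."""
    C1 = np.count_nonzero((h >= hue_low / 256) & (h <= hue_up / 256))
    PR = C1 / h.size                    # field ratio of the cropped frame
    if PR > T3:
        return "long"                   # long-view frame
    if PR > T4:
        return "medium"                 # medium-view frame
    return "nonfield"
```

Cropping away the top third removes stands and advertising boards, so the ratio measures how much of the playing field is visible rather than how much green is anywhere in the frame.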
5. The soccer video goal event detection method according to claim 2, wherein "further labeling the non-field shots as close-up shots or audience shots" in step (2.3) is carried out as follows:
(2.3a) converting each frame image of a non-field shot from the RGB color space to the YCbCr color space, obtaining the value y of the luminance component Y, the value cb of the blue-difference chroma component Cb and the value cr of the red-difference chroma component Cr:

y = 0.299 r' + 0.587 g' + 0.114 b'
cb = 0.564 (b' − y)
cr = 0.713 (r' − y)

where r', g' and b' are the values of the red component R, green component G and blue component B of each pixel of each frame image;
(2.3b) from the values y of the luminance component Y of each pixel, detecting the edge pixels in each frame image with the Canny operator, obtaining the number C_2 of edge pixels;
(2.3c) from the number C_2 of edge pixels in each frame image, computing the edge pixel ratio EPR of each frame image of the non-field shot to be labeled:

EPR = C_2 / (L × K)

where L is the height and K the width of each frame image;
(2.3d) labeling a frame image as an audience frame if its EPR exceeds the threshold T_5, and as a close-up frame otherwise, with T_5 = 0.10;
(2.3e) if more than 55% of the frame images of the non-field shot to be labeled are audience frames, labeling the non-field shot as an audience shot; otherwise labeling it as a close-up shot.
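Steps (2.3a)–(2.3d) can be sketched as follows. A plain gradient-magnitude threshold stands in here for the Canny detector named in the claim, so edge counts will differ from a true Canny implementation; the threshold value and function names are assumptions of this sketch:

```python
import numpy as np

def ycbcr(img):
    """RGB -> YCbCr as in step (2.3a), using the BT.601 luma weights."""
    r, g, b = (img[..., c].astype(np.float64) for c in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.564 * (b - y), 0.713 * (r - y)

def edge_pixel_ratio(y, thresh=30.0):
    """EPR of step (2.3c): fraction of pixels whose luminance gradient
    magnitude exceeds `thresh` (a stand-in for Canny edge pixels)."""
    gy, gx = np.gradient(y)
    edges = np.hypot(gx, gy) > thresh
    return edges.sum() / y.size

def label_frame(y, T5=0.10):
    """Step (2.3d): audience frames are texture-rich, close-ups are smooth."""
    return "audience" if edge_pixel_ratio(y) > T5 else "closeup"
```

The underlying intuition survives the substitution: crowd regions produce dense fine texture and hence many edge pixels, while a close-up of a player is dominated by large smooth regions.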
6. The soccer video goal event detection method according to claim 1, wherein "calculating the initial model parameters λ = (U, A, C) of the HMM from the N_1 semantic shot sequences O_1, O_2, …, O_{N_1} in the training data set O and the corresponding N_1 state sequences" in step (5) is carried out as follows:
(5.1) from the number x_i of semantic shots in state θ_i over the N_1 semantic shot sequences and the total number x of semantic shots in the N_1 semantic shot sequences, computing the initial-state probability vector U:

U = {U_1, U_2}, U_i = x_i / x

where i = {1, 2} and U_i is the probability that the HMM is in state q_1 = θ_i at time t = 1; q_t denotes the state of the HMM at time t, q_t ∈ {θ_1, θ_2};
(5.2) counting the number x_(i,j) of transitions of semantic shots from state θ_i to state θ_j over the N_1 semantic shot sequences and the number x_(i,*) of transitions from state θ_i to any state, and computing the state-transition probability matrix A from x_(i,j) and x_(i,*):

A = (a_ij)_{2×2}, a_ij = x_(i,j) / x_(i,*)

where j = {1, 2} and a_ij is the probability that, given state q_t = θ_i at time t, the HMM is in state q_{t+1} = θ_j at time t + 1;
(5.3) from the number x_(j,k) of semantic shots s_k in state θ_j over the N_1 semantic shot sequences and the total number x_j of semantic shots in state θ_j, computing the observation probability matrix C:

C = (c_jk)_{2×M}, c_jk = x_(j,k) / x_j

where k = {1, 2, …, M}, c_jk is the probability that, given state q_t = θ_j at time t, the observation E_t produced by state q_t is the semantic shot s_k, and M is the number of observation symbols the HMM can produce at any time; M = 5, the five semantic shot types s_1, s_2, s_3, s_4, s_5 being the five observation symbols of the HMM.
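The counting estimates of steps (5.1)–(5.3) can be sketched directly; the function name and the toy labels are illustrative, and a real implementation would guard against states that never occur in the labeled data:

```python
import numpy as np

def initial_params(seqs, states, S=2, M=5):
    """Count-based initial HMM parameters of step (5).
    `seqs` holds 0-based observation symbols, `states` the matching
    0-based state labels (0 = in-play theta_1, 1 = break theta_2)."""
    U = np.zeros(S)
    A = np.zeros((S, S))
    C = np.zeros((S, M))
    for obs, st in zip(seqs, states):
        for q in st:
            U[q] += 1                         # x_i: shots in state i
        for t in range(len(st) - 1):
            A[st[t], st[t + 1]] += 1          # x_(i,j): i -> j transitions
        for o, q in zip(obs, st):
            C[q, o] += 1                      # x_(j,k): symbol k in state j
    U /= U.sum()                              # U_i = x_i / x
    A /= A.sum(axis=1, keepdims=True)         # a_ij = x_(i,j) / x_(i,*)
    C /= C.sum(axis=1, keepdims=True)         # c_jk = x_(j,k) / x_j
    return U, A, C
```

These relative-frequency estimates are exactly the maximum-likelihood parameters given the manual state labels of step (3), which is why they make a sensible starting point for the Baum-Welch refinement of step (6).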
7. The soccer video goal event detection method according to claim 1, wherein "computing, with the forward algorithm, the probability P(O_d | λ̂) that the goal-event HMM generates the d-th semantic shot sequence O_d of the training data set O" in step (7) is carried out as follows:
(7.1) initialization: from the goal-event HMM and the first semantic shot O_{d,1} of the d-th semantic shot sequence O_d in the training data set O, computing the probability α_1(i) that, under the final model parameters λ̂, the HMM is in state q_1 = θ_i at time t = 1 and the first observation is O_{d,1}:

α_1(i) = Û_i × η_i(O_{d,1}), i = {1, 2}

where Û_i is the i-th element of the final initial-state probability vector Û, and η_i(O_{d,1}) is the probability that, given state q_1 = θ_i at time t = 1, the observation E_1 produced by state q_1 is O_{d,1}; when O_{d,1} is the k-th semantic shot type s_k, η_i(O_{d,1}) = ĉ_ik, the element in row i, column k of the final observation probability matrix Ĉ, k = {1, 2, …, M}, M = 5;
(7.2) recursion: from the d-th semantic shot sequence O_d in the training data set O and the probabilities α_t(i), where T_d is the number of semantic shots in O_d, computing for t = 1, 2, …, T_d − 1 the probability α_{t+1}(j) that, under λ̂, the HMM is in state q_{t+1} = θ_j at time t + 1 and the first t + 1 observations are, in order, the semantic shots O_{d,1} to O_{d,t+1}:

α_{t+1}(j) = [ Σ_{i=1}^{2} α_t(i) × â_ij ] × η_j(O_{d,t+1}), j = {1, 2}

where α_t(i) is the probability that, under λ̂, the HMM is in state q_t = θ_i at time t and the first t observations are, in order, O_{d,1} to O_{d,t}; â_ij is the element in row i, column j of the final state-transition probability matrix Â; and η_j(O_{d,t+1}) is the probability that, given state q_{t+1} = θ_j at time t + 1, the observation E_{t+1} produced by state q_{t+1} is O_{d,t+1}; when O_{d,t+1} is the k-th semantic shot type s_k, η_j(O_{d,t+1}) = ĉ_jk, the element in row j, column k of Ĉ;
(7.3) termination: from the probabilities α_{T_d}(i), computing the probability that the goal-event HMM generates O_d:

P(O_d | λ̂) = Σ_{i=1}^{2} α_{T_d}(i).
8. The soccer video goal event detection method according to claim 1, wherein "computing, with the forward algorithm, the probability P(Z_e | λ̂) that the goal-event HMM generates the e-th semantic shot sequence Z_e of the test data set Z" in step (9) is carried out as follows:
(9.1) initialization: from the goal-event HMM and the first semantic shot Z_{e,1} of the e-th semantic shot sequence Z_e in the test data set Z, computing the probability α_1(i) that, under the final model parameters λ̂, the HMM is in state q_1 = θ_i at time t = 1 and the first observation is Z_{e,1}:

α_1(i) = Û_i × γ_i(Z_{e,1}), i = {1, 2}

where Û_i is the i-th element of the final initial-state probability vector Û, and γ_i(Z_{e,1}) is the probability that, given state q_1 = θ_i at time t = 1, the observation E_1 produced by state q_1 is Z_{e,1}; when Z_{e,1} is the k-th semantic shot type s_k, γ_i(Z_{e,1}) = ĉ_ik, the element in row i, column k of the final observation probability matrix Ĉ, k = {1, 2, …, M}, M = 5;
(9.2) recursion: from the e-th semantic shot sequence Z_e in the test data set Z and the probabilities α_t(i), where T'_e is the number of semantic shots in Z_e, computing for t = 1, 2, …, T'_e − 1 the probability α_{t+1}(j) that, under λ̂, the HMM is in state q_{t+1} = θ_j at time t + 1 and the first t + 1 observations are, in order, the semantic shots Z_{e,1} to Z_{e,t+1}:

α_{t+1}(j) = [ Σ_{i=1}^{2} α_t(i) × â_ij ] × γ_j(Z_{e,t+1}), j = {1, 2}

where α_t(i) is the probability that, under λ̂, the HMM is in state q_t = θ_i at time t and the first t observations are, in order, Z_{e,1} to Z_{e,t}; â_ij is the element in row i, column j of the final state-transition probability matrix Â; and γ_j(Z_{e,t+1}) is the probability that, given state q_{t+1} = θ_j at time t + 1, the observation E_{t+1} produced by state q_{t+1} is Z_{e,t+1}; when Z_{e,t+1} is the k-th semantic shot type s_k, γ_j(Z_{e,t+1}) = ĉ_jk, the element in row j, column k of Ĉ;
(9.3) termination: from the probabilities α_{T'_e}(i), computing the probability that the goal-event HMM generates Z_e:

P(Z_e | λ̂) = Σ_{i=1}^{2} α_{T'_e}(i).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110180084.8A CN102393909B (en) | 2011-06-29 | 2011-06-29 | Method for detecting goal events in soccer video based on hidden markov model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102393909A CN102393909A (en) | 2012-03-28 |
CN102393909B true CN102393909B (en) | 2014-01-15 |
Family
ID=45861229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110180084.8A Expired - Fee Related CN102393909B (en) | 2011-06-29 | 2011-06-29 | Method for detecting goal events in soccer video based on hidden markov model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102393909B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105701460B (en) * | 2016-01-07 | 2019-01-29 | 王跃明 | A kind of basketball goal detection method and apparatus based on video |
CN107241645B (en) * | 2017-06-09 | 2020-07-24 | 成都索贝数码科技股份有限公司 | Method for automatically extracting goal wonderful moment through caption recognition of video |
CN107247942B (en) * | 2017-06-23 | 2019-12-20 | 华中科技大学 | Tennis video event detection method integrating multi-mode features |
CN108846433A (en) * | 2018-06-08 | 2018-11-20 | 汕头大学 | A kind of team value amount appraisal procedure of basket baller |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101127866A (en) * | 2007-08-10 | 2008-02-20 | 西安交通大学 | A method for detecting wonderful section of football match video |
CN101482925A (en) * | 2009-01-16 | 2009-07-15 | 西安电子科技大学 | Photograph generation method based on local embedding type hidden Markov model |
JP2011028320A (en) * | 2009-07-21 | 2011-02-10 | Nippon Telegr & Teleph Corp <Ntt> | Hidden markov model searching device and method and program |
Non-Patent Citations (3)
Title |
---|
Liu Yuchi et al., "Semantic structure analysis of soccer video based on HMM", Computer Engineering and Applications, no. 28, Oct. 2006, pp. 174-176. * |
Peng Limin, "Research on HMM-based semantic analysis of soccer video", Computer Engineering and Design, vol. 29, no. 19, Oct. 2008, pp. 5002-5005. * |
Ma Chao, "Detection of typical events in soccer video based on Hidden Markov Models", China Master's Theses Full-text Database, Aug. 2005, full text. * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20140115 Termination date: 20190629 |