CN101211460A - Method and device for automatically segmenting and classifying sports video shots - Google Patents

Method and device for automatically segmenting and classifying sports video shots

Info

Publication number
CN101211460A
CN101211460A
Authority
CN
China
Prior art keywords
shot
SSU
model
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2006101715242A
Other languages
Chinese (zh)
Other versions
CN100568282C (en)
Inventor
杨颖
林守勋
张勇东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Lianzhou Electronic Technology Co Ltd
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CNB2006101715242A priority Critical patent/CN100568282C/en
Publication of CN101211460A publication Critical patent/CN101211460A/en
Application granted granted Critical
Publication of CN100568282C publication Critical patent/CN100568282C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention proposes a method for automatically segmenting and classifying sports video shots. The sports video stream is divided into a sequence of consecutive shot sample units (SSUs), so that different shots correspond to different SSU time series. Modeling the different SSU sequences with hidden Markov models yields a hidden Markov model for each shot type. On this basis, all possible shot model combinations are connected in series to form a shot network. A log probability is computed for each path in the shot network, i.e. for each shot model sequence, and the path with the maximum probability is taken as the optimal path. The shot models on the optimal path are then the final classification result, while the starting and ending SSUs of the corresponding SSU subsequences are the shot boundaries, thus achieving sports video shot segmentation.

Description

Method and apparatus for automatically segmenting and classifying sports video shots
Technical field
The present invention relates to video shot segmentation methods and devices, and in particular to a method and apparatus for automatically segmenting and classifying sports video shots.
Background art
The shot is the basic structural unit of a sports video; a sports video shot usually refers to a group of consecutive image frames taken from a single camera perspective. Different types of shots exhibit different semantic content: a long shot commonly reflects the global situation of the match, a medium shot normally tracks the players' motion, and a close-up shot, which generally appears during pauses in the match, gives close views of players and referees. Shot segmentation for sports video can adopt the segmentation methods of general video, which obtain shot boundaries from the similarity of adjacent frames, but existing methods do not consider the peculiarities of sports video shots, namely fast motion and repetitive structure, so the segmentation results are inaccurate. For classifying sports video shots, existing methods mainly employ domain knowledge and sport-specific rules, for example segmenting and classifying soccer shots according to the proportion of grass color and the size of the persons in view. Such methods can achieve good results on a specific sports video but lack generality: each sport requires different classification rules derived from its own characteristics.
On the other hand, although sports videos are of many kinds, their shots can be roughly divided into three types: long shots, medium shots and close-up shots. The purpose of sports video shot segmentation and classification is precisely to split these three shot classes out of the sports video and label their types, thereby building a structured index for the video. Since the shots of different sports videos have different forms of expression, the selected shot features must represent the characteristics of the different shot types while remaining general enough to apply to different sports videos.
Summary of the invention
The object of the present invention is to provide a general method for segmenting and classifying sports video shots that can segment and classify the shots automatically, thereby building a structured index for the sports video that can further be used for semantic content analysis.
To this end, the present invention selects two features, color and motion, as general shot features, and obtains more accurate shot features by extracting the differences of the color and motion information. Because a shot consists of a group of consecutive video frames, segmenting and classifying sports video shots amounts to segmenting a temporal signal stream, which requires a suitable temporal model to simulate the transitions of the signal inside a shot. The hidden Markov model explains the variation of temporal signals well, so the present invention adopts a hidden Markov model for each shot type. A sports video can in turn be regarded as connections and transitions between shots of different types, so for an unknown sports video stream the segmentation and classification task can be viewed as finding the best sequence of connected shot models. For this purpose the present invention constructs a shot network that contains all possible shot model sequences, where each path in the network corresponds to one shot model sequence; finding the best path is finding the best segmentation and classification result. Segmentation and classification are thus carried out simultaneously, which improves processing speed.
According to a first aspect of the invention, a method for automatically segmenting and classifying sports video shots is provided, comprising the following steps: 1) dividing the shots into a sequence of a plurality of shot sample units (Shot Sample Units, SSUs); 2) computing the color-related and motion-related features of each SSU from the video frames within it; 3) computing the log probability of each shot model through the shot network according to the HMM (hidden Markov model) shot models; 4) selecting the model sequence with the maximum sum of log probabilities, wherein the state sequence of each model in this sequence corresponds to its SSU subsequence.
According to a second aspect of the invention, a device for automatically segmenting and classifying sports video shots is provided, comprising the following components: 1) a component for dividing the shots into a sequence of a plurality of shot sample units (SSUs); 2) a component for computing the color-related and motion-related features of each SSU from the video frames within it; 3) a component for computing the log probability of each shot model through the shot network according to the HMM shot models; 4) a component for selecting the model sequence with the maximum sum of log probabilities, wherein the state sequence of each model in this sequence corresponds to its SSU subsequence.
The advantages of the invention are:
1. Dividing each class of shot into a continuous SSU sequence better reflects the boundaries and temporal characteristics of the shots;
2. Modeling each shot class with a hidden Markov model better simulates the variation of the SSU sequence within a shot;
3. The color- and motion-related features are easy to compute and extract;
4. Building a shot network to recognize the sports video stream achieves automatic shot segmentation and classification.
Brief description of the drawings
Fig. 1 illustrates the three shot classes: (a) a long shot, (b) a medium shot, (c) a close-up shot;
Fig. 2 shows a shot sample unit (SSU) sequence;
Fig. 3 shows a left-to-right 5-state hidden Markov model without skips;
Fig. 4 illustrates a shot network.
Detailed description of the embodiments
A sports video can be divided into the following three shot classes, namely long shots, medium shots and close-up shots, as shown in Fig. 1. The purpose of the present invention is precisely to segment and classify these three shot classes in sports video automatically. The invention is further described below in conjunction with the accompanying drawings.
The solution of the present invention mainly comprises the following steps: first, divide the video into an SSU sequence and extract color and motion features from each SSU; then, on the basis of the extracted features, train a hidden Markov model for each shot class from training data; connect all possible shot model sequences into a shot network; and compute the path with the maximum log probability in the shot network to obtain the final shot segmentation and classification result. Each step is described in detail below:
1. Setting up shot sample units (SSUs) and feature extraction
To segment shots accurately and quickly, a shot is first divided into an SSU sequence, where the SSUs may or may not overlap, as shown in Fig. 2. Since a shot usually lasts 1 to 120 seconds, the length of each SSU should be smaller than the length of a shot; here it is set to 25 frames, and to speed up feature extraction the sampling interval is set to 10 frames. Different shots then appear as different SSU sequences. The feature of each SSU is the average of the features of the image frames it contains, so extracting SSU features reduces to extracting per-frame features. For each image frame we extract two categories of features: color-related and motion-related features.
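By way of illustration, the following is a minimal sketch of the SSU division just described, assuming the video is represented by its frame count; the function name and the (start, end) index representation are illustrative, not taken from the patent:

```python
# Minimal sketch of SSU division: 25-frame units sampled every 10 frames,
# so consecutive SSUs overlap by 15 frames.
def divide_into_ssus(num_frames, ssu_len=25, interval=10):
    """Return (start, end) frame-index pairs, one per SSU."""
    ssus = []
    start = 0
    while start + ssu_len <= num_frames:
        ssus.append((start, start + ssu_len))
        start += interval
    return ssus

# e.g. divide_into_ssus(60) -> [(0, 25), (10, 35), (20, 45), (30, 55)]
```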
● Color feature extraction
The greatest difference between the three shot classes lies in their color composition; for example, the playing field occupies a large area in a long shot but is almost absent in a close-up. Because the LUV space best matches human visual perception, we adopt the components of the LUV space as the three basic color features, i.e. the L, U, V components. For every image frame the L, U, V features are computed as:

L_f = (1/N) Σ_{(x,y)} L(x,y),  U_f = (1/N) Σ_{(x,y)} U(x,y),  V_f = (1/N) Σ_{(x,y)} V(x,y)    (1)

where L(x,y), U(x,y), V(x,y) are the L, U, V components of the pixel at (x,y), N is the number of pixels in the frame, and L_f, U_f, V_f are the three basic color features of frame f. On this basis their first- and second-order difference information is obtained, as follows:
∇L_f = L_f − L_{f−1},  ∇U_f = U_f − U_{f−1},  ∇V_f = V_f − V_{f−1}
∇²L_f = ∇L_f − ∇L_{f−1},  ∇²U_f = ∇U_f − ∇U_{f−1},  ∇²V_f = ∇V_f − ∇V_{f−1}    (2)

where ∇^k L_f, ∇^k U_f and ∇^k V_f (k = 1, 2) are the k-th order difference information of the basic color features. In total this yields 9 color-related features.
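A sketch of the per-frame color features under formulas (1) and (2); OpenCV's BGR-to-LUV conversion and the function names are assumptions, as the patent does not name an implementation:

```python
import numpy as np
import cv2  # OpenCV, assumed here for the BGR -> LUV conversion

def frame_luv_means(frame_bgr):
    """Basic color features of one frame: mean L, U, V (formula (1))."""
    luv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LUV).astype(np.float64)
    return luv.reshape(-1, 3).mean(axis=0)          # [L_f, U_f, V_f]

def color_features(frames_bgr):
    """Per-frame 9-dim color features: base values plus their first- and
    second-order differences (formula (2))."""
    base = np.array([frame_luv_means(f) for f in frames_bgr])   # (F, 3)
    d1 = np.diff(base, axis=0)                                  # (F-1, 3)
    d2 = np.diff(d1, axis=0)                                    # (F-2, 3)
    # differences are undefined for the first two frames, so align on F-2
    return np.hstack([base[2:], d1[1:], d2])                    # (F-2, 9)
```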
● Motion feature extraction
The present invention extracts three types of motion features in total. The first is the frame-difference motion information D_f, as follows:

D_f = Σ_{i=0}^{255} [H(f, i) − H(f−1, i)]²    (3)

where H(f, i) denotes the number of pixels with color i in frame f.
The second motion-related feature is the block-based motion-compensated frame difference C_f, as follows:
C_f = Σ_{B_f(x,y) ∈ G_f} (1/W_B) Σ_{i=0}^{255} [H_f(B_f(x,y), i) − H_{f−1}(B*_{f−1}(u*, v*), i)]²    (4)

The main idea of computing C_f is to divide the whole frame into uniform 16×16 macroblocks, search the previous frame for the block that best matches each macroblock, and sum their frame differences. Here B_f(x,y) is the macroblock at position (x,y), B*_{f−1}(u*,v*) is the best-matching block in the previous frame, G_f is the set of all macroblocks in frame f, and W_B is the size of each block.
At the same time, the motion vectors of the macroblocks also reflect the amount of motion in a frame. Denoting the motion intensity of the macroblocks by M_f, its computing formula is:

M_f = Σ_{B_f(x,y) ∈ G_f} [(x − u*)² + (y − v*)²]^{1/2}    (5)

where (u*, v*) is the position in frame f−1 of the block B*_{f−1} that best matches B_f(x,y).
Likewise, first- and second-order difference information is obtained on the basis of these three basic motion features:

∇D_f = D_f − D_{f−1},  ∇C_f = C_f − C_{f−1},  ∇M_f = M_f − M_{f−1}
∇²D_f = ∇D_f − ∇D_{f−1},  ∇²C_f = ∇C_f − ∇C_{f−1},  ∇²M_f = ∇M_f − ∇M_{f−1}    (6)

where ∇^k D_f, ∇^k C_f and ∇^k M_f (k = 1, 2) are the k-th order difference information of the basic motion features. Together with the 9 color-related features, there are 18 features in total.
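A compact sketch of the three basic motion features of formulas (3)-(5), assuming 8-bit grayscale frames; the exhaustive ±8-pixel search window and the use of the block's pixel count for W_B are assumptions, since the text fixes only the 16×16 macroblock size:

```python
import numpy as np

def hist256(img):
    """256-bin intensity histogram of an 8-bit image or block."""
    return np.bincount(img.ravel(), minlength=256).astype(np.float64)

def frame_diff(prev, cur):
    """D_f, formula (3): squared histogram difference between frames."""
    return float(np.sum((hist256(cur) - hist256(prev)) ** 2))

def block_motion_features(prev, cur, bs=16, search=8):
    """C_f and M_f, formulas (4) and (5), by exhaustive block matching."""
    H, W = cur.shape
    c_f = m_f = 0.0
    for y in range(0, H - bs + 1, bs):
        for x in range(0, W - bs + 1, bs):
            bh = hist256(cur[y:y + bs, x:x + bs])
            best, bu, bv = np.inf, x, y
            for v in range(max(0, y - search), min(H - bs, y + search) + 1):
                for u in range(max(0, x - search), min(W - bs, x + search) + 1):
                    d = np.sum((bh - hist256(prev[v:v + bs, u:u + bs])) ** 2)
                    if d < best:
                        best, bu, bv = d, u, v
            c_f += best / (bs * bs)            # 1/W_B weighting (assumed)
            m_f += np.hypot(x - bu, y - bv)    # |motion vector| of the block
    return frame_diff(prev, cur), c_f, m_f     # (D_f, C_f, M_f)
```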
According to the conception of the present invention, when extracting the features of a frame one may use 6 of these features, namely the three basic color features L_f, U_f, V_f and the three motion features, i.e. the frame-difference motion information D_f, the motion-compensated frame difference C_f and the macroblock motion intensity M_f; or 12 features, additionally including the first-order difference information of the above features; or all 18 features, additionally including the second-order difference information. That is to say, for the present invention each frame may be represented by a 6-, 12- or 18-dimensional feature vector. The feature of an SSU is the average of the corresponding features over all the image frames it contains, as sketched below.
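A sketch of assembling an SSU feature vector from per-frame features; the column layout [L, U, V, D, C, M] and the function name are assumptions:

```python
import numpy as np

def ssu_feature(frame_feats, order=2):
    """frame_feats: (F, 6) array of per-frame [L, U, V, D, C, M] values.
    Returns the SSU feature: the base features plus, for order >= 1, their
    first-order differences and, for order = 2, their second-order
    differences, each averaged over the SSU's frames (6/12/18 dims)."""
    base = np.asarray(frame_feats, dtype=np.float64)
    d1 = np.diff(base, axis=0)
    d2 = np.diff(d1, axis=0)
    parts = [base, d1, d2][:order + 1]
    return np.concatenate([p.mean(axis=0) for p in parts])
```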
2. Building the shot models and the shot network based on the hidden Markov model (HMM)
Since the HMM is a powerful tool for processing temporal signals, it is well suited to simulating the SSU variation inside a shot. As three shot classes are to be detected, three HMM shot models are built in total. The three shot classes to be classified by the present invention adopt the same HMM topology, a left-to-right structure without skips, as shown in Fig. 3. The model has 5 states: the middle three states are output states, the initial state can only jump to the next state, and the last, final state cannot jump to any state.
Because the color and motion of a shot vary considerably, a continuous output distribution is adopted for the HMM, using a Gaussian mixture model as the output: the more mixture components, the better the fit to the shot features. This step adopts a Gaussian mixture with 4 components, which experiments show fits the shot features well while keeping the computational cost low.
After the initial HMM prototype has been set up, the three shot-class models can be trained with training data: training the corresponding shot HMM with the training data of each type yields the three shot models, one sketch of which is given below.
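For concreteness, a sketch of one shot model built with hmmlearn's GMMHMM (the patent names no library). hmmlearn has no non-emitting entry and exit states, so all five states emit here; the zero entries of the left-to-right transition matrix remain zero under Baum-Welch re-estimation, which preserves the no-skip topology:

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

n_states, n_mix = 5, 4
model = GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type="diag",
               n_iter=50, init_params="mcw", params="mcwt")

model.startprob_ = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # always enter state 0
transmat = np.zeros((n_states, n_states))
for i in range(n_states - 1):
    transmat[i, i] = transmat[i, i + 1] = 0.5   # self-loop or one step right
transmat[-1, -1] = 1.0                          # final state is absorbing
model.transmat_ = transmat

# Training (names hypothetical): X stacks the SSU feature sequences of all
# training shots of one class; lengths gives each sequence's length.
# model.fit(X, lengths)
```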
Segmenting and classifying a sports video stream can be regarded as obtaining a shot model sequence together with the start and end positions of each shot model, i.e. as finding the best model sequence among all shot model sequences. To this end we concatenate all possible model sequences into a shot network, as shown in Fig. 4. Every path in the shot network is a model sequence, and every node on a path is an HMM shot model. Each path is decoded with the Viterbi algorithm, and each model on the path yields a log output probability; the larger the output probability, the more likely that model produced the corresponding observed features. The log probabilities along a path are summed, and the path with the maximum sum is chosen as the optimal path, i.e. the best model sequence. The models on the optimal path are then the shot classification result; the state sequence of each model corresponds one-to-one to an SSU subsequence, and the start and end positions of each SSU subsequence are the shot boundaries, which achieves automatic shot segmentation. A simplified decoding sketch follows.
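The following simplified sketch decodes the shot network by dynamic programming over candidate boundaries, scoring each candidate segment with each model's log-likelihood (hmmlearn's score) rather than running one Viterbi pass over the concatenated HMMs; min_len and max_len (in SSUs) are assumed pruning limits:

```python
import numpy as np

def decode_shot_network(ssu_feats, models, min_len=2, max_len=120):
    """ssu_feats: (T, dim) SSU feature sequence; models: {label: fitted HMM}.
    Returns [(start, end, label), ...] maximising the summed log probability."""
    T = len(ssu_feats)
    best = np.full(T + 1, -np.inf)     # best[t]: max log prob of ssu_feats[:t]
    best[0] = 0.0
    back = [None] * (T + 1)            # back[t]: (segment start, shot label)
    for t in range(1, T + 1):
        for s in range(max(0, t - max_len), t - min_len + 1):
            if best[s] == -np.inf:
                continue
            seg = ssu_feats[s:t]
            for label, m in models.items():
                lp = best[s] + m.score(seg)     # log-likelihood of the segment
                if lp > best[t]:
                    best[t], back[t] = lp, (s, label)
    shots, t = [], T                   # trace the optimal path back
    while t > 0:
        s, label = back[t]
        shots.append((s, t, label))
        t = s
    return shots[::-1]

# usage sketch: shots = decode_shot_network(feats,
#     {"long": hmm_long, "medium": hmm_medium, "close-up": hmm_close})
```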
The present invention performs shot classification while segmenting the shots and is therefore a fast and efficient shot segmentation and classification method; at the same time, the features used for training and testing do not depend on the type of sport, making the scheme suitable for a variety of sports, i.e. a general scheme for segmenting and classifying sports video shots. The shot types obtained by this method carry complete semantic information and constitute a structuring of the sports video, on the basis of which further content analysis and semantic event understanding of the sports video can be carried out.
Below are the experimental results of shot segmentation and classification on two very different sports videos, football and badminton. The 16 selected football videos all come from the 2006 World Cup: 10 matches were used as the training set and the remaining 6 as the test set. The 9 selected badminton videos come from the 2005 Badminton World Championships: 5 matches were used as the training set and 4 as the test set. The final shot segmentation and classification achieve a recall above 80% and a precision above 70%, as shown in the following tables:
Table 1 Test results on the badminton videos
Table 2 Test results on the football videos
According to the conception of the present invention, the foregoing objects can be realized either in software, implementing the automatic segmentation and classification of shots in a sports video stream, or in the form of a hardware product. It will also be appreciated that, without departing from the essence of the invention or exceeding its scope, the invention can be implemented by a person of ordinary skill in the art in many other concrete forms. The examples given in the specification are therefore illustrative and should not be viewed as limiting the invention; the invention is not limited to the details given here and may be varied within the scope of the appended claims.

Claims (20)

1. A method for automatically segmenting and classifying sports video shots, comprising the following steps:
1) dividing the shots into a sequence of a plurality of shot sample units (SSUs);
2) computing the color-related and motion-related features of each SSU from the video frames within it;
3) computing the log probability of each shot model through the shot network according to the HMM shot models;
4) selecting the model sequence with the maximum sum of log probabilities, wherein the state sequence of each model in this sequence corresponds to its SSU subsequence.
2. The method according to claim 1, wherein the color-related and motion-related features of each SSU are the averages of the corresponding features of all the image frames in the SSU.
3. The method according to claim 2, wherein the color-related features of each image frame comprise the L, U and V components, i.e. the three basic color features L_f, U_f, V_f, and the motion-related features comprise the frame-difference motion information D_f, the motion-compensated frame difference C_f and the macroblock motion intensity M_f, computed respectively by the following formulas:

L_f = (1/N) Σ_{(x,y)} L(x,y),  U_f = (1/N) Σ_{(x,y)} U(x,y),  V_f = (1/N) Σ_{(x,y)} V(x,y)

where L(x,y), U(x,y), V(x,y) are the L, U, V components of the pixel at (x,y) and N is the number of pixels in the frame;

D_f = Σ_{i=0}^{255} [H(f, i) − H(f−1, i)]²

where H(f, i) denotes the number of pixels with color i in frame f;

C_f = Σ_{B_f(x,y) ∈ G_f} (1/W_B) Σ_{i=0}^{255} [H_f(B_f(x,y), i) − H_{f−1}(B*_{f−1}(u*, v*), i)]²

where B_f(x,y) is the macroblock at position (x,y), B*_{f−1}(u*,v*) is the best-matching block in the previous frame, G_f is the set of all macroblocks in frame f, and W_B is the size of each block;

M_f = Σ_{B_f(x,y) ∈ G_f} [(x − u*)² + (y − v*)²]^{1/2}

where (u*, v*) is the position in frame f−1 of the block B*_{f−1} that best matches B_f(x,y).
4. The method according to claim 3, wherein the color-related features of each image frame further comprise the first-order difference information of the above color features:

∇L_f = L_f − L_{f−1},  ∇U_f = U_f − U_{f−1},  ∇V_f = V_f − V_{f−1}

and the motion-related features further comprise the first-order difference information of the above motion features:

∇D_f = D_f − D_{f−1},  ∇C_f = C_f − C_{f−1},  ∇M_f = M_f − M_{f−1}.
5. The method according to claim 4, wherein the color-related features of each image frame further comprise the second-order difference information of the above color features:

∇²L_f = ∇L_f − ∇L_{f−1},  ∇²U_f = ∇U_f − ∇U_{f−1},  ∇²V_f = ∇V_f − ∇V_{f−1}

and the motion-related features further comprise the second-order difference information of the above motion features:

∇²D_f = ∇D_f − ∇D_{f−1},  ∇²C_f = ∇C_f − ∇C_{f−1},  ∇²M_f = ∇M_f − ∇M_{f−1}.
6. The method according to any one of claims 1-5, wherein said shot network is formed by connecting all possible shot model combination sequences.
7. The method according to any one of claims 1-6, wherein the HMM shot models are shot models obtained by training with the above color-related and motion-related features as training data.
8. The method according to any one of claims 1-7, wherein the HMM shot models comprise a long-shot model, a medium-shot model and a close-up model.
9. The method according to any one of claims 1-8, wherein the SSUs may be divided so as to overlap one another.
10. The method according to claim 9, wherein each SSU is set to 25 frames and the sampling interval is 10 frames.
11. A device for automatically segmenting and classifying sports video shots, comprising the following components:
1) a component for dividing the shots into a sequence of a plurality of shot sample units (SSUs);
2) a component for computing the color-related and motion-related features of each SSU from the video frames within it;
3) a component for computing the log probability of each shot model through the shot network according to the HMM shot models;
4) a component for selecting the model sequence with the maximum sum of log probabilities, wherein the state sequence of each model in this sequence corresponds to its SSU subsequence.
12. The device according to claim 11, wherein the color-related and motion-related features of each SSU are the averages of the corresponding features of all the image frames in the SSU.
13. The device according to claim 12, wherein the color-related features of each image frame comprise the L, U and V components, i.e. the three basic color features L_f, U_f, V_f, and the motion-related features comprise the frame-difference motion information D_f, the motion-compensated frame difference C_f and the macroblock motion intensity M_f, computed respectively by the following formulas:

L_f = (1/N) Σ_{(x,y)} L(x,y),  U_f = (1/N) Σ_{(x,y)} U(x,y),  V_f = (1/N) Σ_{(x,y)} V(x,y)

where L(x,y), U(x,y), V(x,y) are the L, U, V components of the pixel at (x,y) and N is the number of pixels in the frame;

D_f = Σ_{i=0}^{255} [H(f, i) − H(f−1, i)]²

where H(f, i) denotes the number of pixels with color i in frame f;

C_f = Σ_{B_f(x,y) ∈ G_f} (1/W_B) Σ_{i=0}^{255} [H_f(B_f(x,y), i) − H_{f−1}(B*_{f−1}(u*, v*), i)]²

where B_f(x,y) is the macroblock at position (x,y), B*_{f−1}(u*,v*) is the best-matching block in the previous frame, G_f is the set of all macroblocks in frame f, and W_B is the size of each block;

M_f = Σ_{B_f(x,y) ∈ G_f} [(x − u*)² + (y − v*)²]^{1/2}

where (u*, v*) is the position in frame f−1 of the block B*_{f−1} that best matches B_f(x,y).
14. The device according to claim 13, wherein the color-related features of each image frame further comprise the first-order difference information of the above color features:

∇L_f = L_f − L_{f−1},  ∇U_f = U_f − U_{f−1},  ∇V_f = V_f − V_{f−1}

and the motion-related features further comprise the first-order difference information of the above motion features:

∇D_f = D_f − D_{f−1},  ∇C_f = C_f − C_{f−1},  ∇M_f = M_f − M_{f−1}.
15. The device according to claim 14, wherein the color-related features of each image frame further comprise the second-order difference information of the above color features:

∇²L_f = ∇L_f − ∇L_{f−1},  ∇²U_f = ∇U_f − ∇U_{f−1},  ∇²V_f = ∇V_f − ∇V_{f−1}

and the motion-related features further comprise the second-order difference information of the above motion features:

∇²D_f = ∇D_f − ∇D_{f−1},  ∇²C_f = ∇C_f − ∇C_{f−1},  ∇²M_f = ∇M_f − ∇M_{f−1}.
16. The device according to any one of claims 11-15, wherein said shot network is formed by connecting all possible shot model combination sequences.
17. The device according to any one of claims 11-16, wherein the HMM shot models are shot models obtained by training with the above color-related and motion-related features as training data.
18. The device according to any one of claims 11-17, wherein the HMM shot models comprise a long-shot model, a medium-shot model and a close-up model.
19. The device according to any one of claims 11-18, wherein the SSUs may be divided so as to overlap one another.
20. The device according to claim 19, wherein each SSU is set to 25 frames and the sampling interval is 10 frames.
CNB2006101715242A 2006-12-30 2006-12-30 Method and apparatus for automatically segmenting and classifying sports video shots Expired - Fee Related CN100568282C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101715242A CN100568282C (en) 2006-12-30 2006-12-30 Method and apparatus for automatically segmenting and classifying sports video shots

Publications (2)

Publication Number Publication Date
CN101211460A true CN101211460A (en) 2008-07-02
CN100568282C CN100568282C (en) 2009-12-09

Family

ID=39611466

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101715242A Expired - Fee Related CN100568282C (en) 2006-12-30 2006-12-30 Method and apparatus for automatically segmenting and classifying sports video shots

Country Status (1)

Country Link
CN (1) CN100568282C (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100507910C (en) * 2005-07-18 2009-07-01 北大方正集团有限公司 Method of searching lens integrating color and sport characteristics
CN1851710A (en) * 2006-05-25 2006-10-25 浙江大学 Embedded multimedia key frame based video search realizing method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604376B (en) * 2008-10-11 2011-11-16 大连大学 Method for identifying human faces based on HMM-SVM hybrid model
CN103077236A (en) * 2013-01-09 2013-05-01 公安部第三研究所 System and method for realizing video knowledge acquisition and marking function of portable-type device
CN103077236B * 2013-01-09 2015-11-18 公安部第三研究所 System and method for realizing video knowledge acquisition and annotation functions on portable devices
CN104091326A (en) * 2014-06-16 2014-10-08 小米科技有限责任公司 Method and device for icon segmentation
CN108270946A (en) * 2016-12-30 2018-07-10 央视国际网络无锡有限公司 A kind of computer-aided video editing device in feature based vector library
CN107241645A (en) * 2017-06-09 2017-10-10 成都索贝数码科技股份有限公司 A kind of method that splendid moment of scoring is automatically extracted by the subtitle recognition to video
CN107241645B (en) * 2017-06-09 2020-07-24 成都索贝数码科技股份有限公司 Method for automatically extracting goal wonderful moment through caption recognition of video
CN108810620A (en) * 2018-07-18 2018-11-13 腾讯科技(深圳)有限公司 Identify method, computer equipment and the storage medium of the material time point in video
CN108810620B (en) * 2018-07-18 2021-08-17 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for identifying key time points in video

Also Published As

Publication number Publication date
CN100568282C (en) 2009-12-09


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: DONGGUAN LIANZHOU ELECTRONIC TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES

Effective date: 20130125

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100080 HAIDIAN, BEIJING TO: 523000 DONGGUAN, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20130125

Address after: Yuquan Industrial Zone, Fenggang Town, Dongguan, Guangdong 523000

Patentee after: Dongguan Lianzhou Electronic Technology Co., Ltd.

Address before: No. 6 Kexueyuan South Road, Zhongguancun, Haidian District, Beijing 100080

Patentee before: Institute of Computing Technology, Chinese Academy of Sciences

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091209

Termination date: 20151230

EXPY Termination of patent right or utility model