CN102314681A - Adaptive KF (keyframe) extraction method based on sub-lens segmentation - Google Patents

Adaptive KF (keyframe) extraction method based on sub-lens segmentation

Info

Publication number
CN102314681A
Authority
CN
China
Prior art keywords
sub-shot
shot
frame
extraction
key frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201110190937A
Other languages
Chinese (zh)
Other versions
CN102314681B (en)
Inventor
Lei Shaoshuai (雷少帅)
Zhao Wenjing (赵文晶)
Han Xiaoxia (韩晓霞)
Xie Gang (谢刚)
Xu Xinying (续欣莹)
Wang Fang (王芳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN 201110190937 priority Critical patent/CN102314681B/en
Publication of CN102314681A publication Critical patent/CN102314681A/en
Application granted granted Critical
Publication of CN102314681B publication Critical patent/CN102314681B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)

Abstract

The invention discloses an adaptive KF (keyframe) extraction method based on sub-shot segmentation in the technical field of image processing. The method comprises the following steps: reading all frames of a shot and extracting the color feature vector of each frame; combining a sliding window with the distance separability criterion to segment the shot into sub-shots; determining the number of keyframes of each sub-shot according to the rate of change of the interframe distance; and finally selecting the set number of keyframes in descending order of interframe distance. The method has the advantages that no threshold needs to be set, the number of keyframes is determined adaptively, the segmentation precision of each sub-shot is improved, and the robustness is good.

Description

Adaptive keyframe extraction method based on sub-shot segmentation
Technical field
The invention belongs to the technical field of image processing, and in particular relates to an adaptive keyframe extraction method based on sub-shot segmentation.
Background technology
With the rapid development of multimedia and computer network technology, the number of videos has increased sharply, and video retrieval systems have received increasing attention. Keyframe extraction is the foundation of the early stage of video retrieval, and its quality directly affects the performance of a retrieval system. Video keyframe extraction studies how to represent the main content of a video shot most effectively with the fewest images; this requires the selected keyframes to reflect the temporal order and content of the video with minimal redundancy.
The keyframe extraction algorithm based on sub-shot segmentation, one of the classic extraction approaches, divides a shot into several sub-shots in the time domain according to some feature, and then extracts one frame from each sub-shot as a keyframe. Lei Pan, Xiaojun Wu and Xin Shu (Key Frame Extraction Based on Sub-shot Segmentation and Entropy) first compute the histogram difference between the current frame and the preceding N frames, then apply a threshold to the difference to segment the sub-shots, and finally select the frame with maximum information entropy in each sub-shot as the keyframe. The drawback of this method is its strong dependence on the choice of threshold: because video content varies widely, the precision of threshold-based sub-shot segmentation is unsatisfactory. Tianming Liu, Hong-Jiang Zhang and Feihu Qi (A Novel Video Key-Frame-Extraction Algorithm Based on Perceived Motion Energy Model, IEEE Transactions on Circuits and Systems for Video Technology, 13(10):1006-1013, 2003) segment the shot temporally according to the motion of objects in the shot, and select the frame with maximum motion intensity in each segmented sub-shot as the keyframe. Pascal Kelm, Sebastian Schmiedeke and Thomas Sikora (Feature-Based Video Key Frame Extraction for Low Quality Video Sequences, Proceedings of the 10th International Workshop on Image Analysis for Multimedia Interactive Services, 2009, pp. 25-28) rely on the motion state of the camera for sub-shot segmentation and, taking into account factors such as object motion intensity and camera motion mode, select a visually attractive frame as the keyframe.
The above methods have the following problems:
1. The factors that cause shot content to change are diverse, so relying on a single feature for sub-shot segmentation is clearly limited.
2. The above methods select only one frame per sub-shot as the keyframe, which cannot express the dynamic information of the video content. Especially when the video content changes rapidly, the keyframes cannot reflect the content, which runs counter to the keyframe-extraction principle that it is better to have too many keyframes than too few.
Summary of the invention
Aiming at the deficiencies of the existing sub-shot segmentation process mentioned in the background art above, namely low segmentation precision, poor robustness, and a fixed number of extracted keyframes, the present invention proposes an adaptive keyframe extraction method based on sub-shot segmentation.
The technical scheme of the present invention is an adaptive keyframe extraction method based on sub-shot segmentation, characterized in that the method comprises the following steps:
Step 1: read all frames of the image in the shot and extract the color feature vector of each frame;
Step 2: on the basis of step 1, combine a sliding window with the distance separability criterion to divide the shot into sub-shots;
Step 3: determine the number of keyframes of each sub-shot according to the interframe distance change rate;
Step 4: on the basis of step 3, sort the interframe distances in descending order and select the specified number of keyframes.
Said step 2 specifically comprises:
Step 2.1: establish a sliding window, and compute the within-class scatter matrices and the between-class scatter matrix of the two adjacent classes of samples in the sliding window;
Step 2.2: on the basis of step 2.1, construct the criterion function;
Step 2.3: compute the criterion function curve of the shot;
Step 2.4: filter the criterion function curve, and take the frame corresponding to each maximum point of the criterion function curve as a sub-shot segmentation position.
The criterion function is computed as:

F = det(S_b) / det(S_1 + S_2)

wherein:
F is the criterion function;
S_b is the between-class scatter matrix;
S_1 is the within-class scatter matrix of the sub-shot frames in the first half of the sliding window;
S_2 is the within-class scatter matrix of the sub-shot frames in the second half of the sliding window;
det(S_b) is the determinant of the between-class scatter matrix;
det(S_1 + S_2) is the determinant of the sum of the within-class scatter matrices.
Said S_b is computed as:

S_b = (m_1 - m_2)(m_1 - m_2)^T

wherein:
m_1 is the mean vector of the sub-shot frames in the first half of the sliding window;
m_2 is the mean vector of the sub-shot frames in the second half of the sliding window.
Said S_1 is computed as:

S_1 = Σ_{H ∈ L_1} (H - m_1)(H - m_1)^T

wherein:
L_1 is the set of sample frames of the sub-shot frames in the first half of the sliding window;
H is the feature vector of a sample frame in the sample set.
Said S_2 is computed as:

S_2 = Σ_{H ∈ L_2} (H - m_2)(H - m_2)^T

wherein:
L_2 is the set of sample frames of the sub-shot frames in the second half of the sliding window.
The criterion function curve is filtered by:

F' = F - F̄

wherein:
F' is the filtered criterion function curve;
F̄ is the filter threshold.
Said F̄ is computed as:

F̄ = F_mean + F_std

wherein:
F_mean is the mean of the criterion function curve;
F_std is the standard deviation of the criterion function curve.
The number of keyframes is computed as:

k = dist(1, n) / [ (1/(n-1)) · Σ_{i=1}^{n-1} dist(i, i+1) ]

wherein:
k is the number of keyframes;
n is the total number of frames of the sub-shot;
dist(1, n) is the Euclidean distance between the first and last frames of the sub-shot;
dist(i, i+1) is the Euclidean distance between adjacent frames i and i+1 in the sub-shot.
The invention has the following advantages:
1. The present invention does not rely on any single manually selected feature for sub-shot segmentation; instead, it uses the intrinsic characteristics and physical changes of the video frames themselves, dividing the shot into several sub-shots according to the principle of maximum between-class distance and minimum within-class distance, which greatly improves segmentation accuracy.
2. The discriminant function of the present invention has a clear physical meaning and is simple to compute.
3. The present invention provides a formula for the number of keyframes to extract, which adaptively determines the number according to the content change rate of each sub-shot and thus reflects the dynamic characteristics of the shot well.
4. The present invention requires no threshold to be set, overcoming the low precision and poor robustness of the thresholding method of document [1].
Description of drawings
Fig. 1 is the F curve of the ROAD video;
Fig. 2 shows the extrema obtained from the F' curve;
Fig. 3 shows the extrema obtained from the F curve.
Embodiment
The preferred embodiments are described in detail below with reference to the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its application.
The present invention constructs a discriminant function from the distance separability criterion of the samples and divides the shot into several sub-shots by finding the maxima of this discriminant function. Then, according to the content change rate of each sub-shot, the number of keyframes of each sub-shot is determined adaptively.
The steps of the present invention are as follows:
Step 1: read all frames of the image in the shot and extract the color feature vector of each frame;
Step 2: combine a sliding window with the distance separability criterion to divide the shot into sub-shots;
Step 3: determine the number of keyframes of each sub-shot according to the interframe distance change rate;
Step 4: sort the interframe distances in descending order and select the specified number of keyframes.
The details of the present invention are as follows:
1. Feature extraction
The HSV (Hue, Saturation, Value) color space matches the human visual system better than other color spaces, so this method adopts the HSV color histogram as the feature vector of a video frame. Because the human eye is more sensitive to hue than to saturation and value, hue is divided into 8 quantization levels, while saturation and value are each divided into 2 quantization levels. Every image can therefore be quantized into a one-dimensional histogram with 32 bins (8 × 2 × 2), i.e., every frame can be represented by the column vector H = [h_1, h_2, ..., h_32]^T.
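As an illustration, the 32-bin quantization described above can be sketched in NumPy as follows. The function name and the input scaling (H, S, V each normalized to [0, 1)) are assumptions for this sketch, not part of the patent.

```python
import numpy as np

def hsv_histogram(hsv_frame):
    """32-bin HSV histogram: 8 hue x 2 saturation x 2 value levels.

    `hsv_frame` is assumed to be an (h, w, 3) float array with H, S, V
    each normalized to [0, 1) -- this scaling is an assumption.
    """
    h = np.minimum((hsv_frame[..., 0] * 8).astype(int), 7)  # 8 hue levels
    s = np.minimum((hsv_frame[..., 1] * 2).astype(int), 1)  # 2 saturation levels
    v = np.minimum((hsv_frame[..., 2] * 2).astype(int), 1)  # 2 value levels
    bins = h * 4 + s * 2 + v  # joint bin index in [0, 31]
    hist = np.bincount(bins.ravel(), minlength=32).astype(float)
    return hist / hist.sum()  # normalized histogram, plays the role of H
```

The returned vector plays the role of the column vector H = [h_1, ..., h_32]^T used in the formulas that follow.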
2. Sub-shot segmentation
(1) Preparation
First, establish a sliding window of length 2L frames. The first L frames in the window form the sample set L_1 = (H_(i-L), H_(i-L+1), ..., H_(i-1)) and the last L frames form the sample set L_2 = (H_(i+1), H_(i+2), ..., H_(i+L)), where H_(j) (j ∈ [i-L, i+L]) denotes the j-th frame in the window, represented by the 32-dimensional column vector H = [h_1, h_2, ..., h_32]^T.
The mean vectors m_i of the front and rear sample sets are computed by formula (1); m_1 is the mean vector of the first L frames (sample set L_1) and m_2 is the mean vector of the last L frames (sample set L_2):

m_i = (1/L) Σ_{H ∈ L_i} H,  i = 1, 2    (1)

wherein:
H is the feature vector of a sample frame in the sample set;
L is half the length of the sliding window.
Finally, compute the within-class scatter matrix S_i of each sample set L_i and the between-class scatter matrix S_b of L_1 and L_2. The within-class scatter matrix S_i is formally very similar to a covariance matrix, but a covariance matrix is an expectation, whereas the within-class scatter matrix describes the spatial dispersion of a finite set of samples:

S_i = Σ_{H ∈ L_i} (H - m_i)(H - m_i)^T    (2)

S_b = (m_1 - m_2)(m_1 - m_2)^T    (3)

wherein:
S_b is the between-class scatter matrix;
S_i is the within-class scatter matrix;
m_1 is the mean vector of the sub-shot frames in the first half of the sliding window;
m_2 is the mean vector of the sub-shot frames in the second half of the sliding window.
(2) Constructing the criterion function
A sub-shot change occurs where the between-class distance of the two classes of samples in the sliding window is maximum and the within-class distance is minimum. From the distance separability criterion, maximum between-class distance and minimum within-class distance are equivalent to det(S_b) being maximum and det(S_1 + S_2) being minimum. The criterion function can therefore be constructed from the distance separability criterion as:

F = det(S_b) / det(S_1 + S_2)    (4)

wherein:
F is the criterion function;
S_1 is the within-class scatter matrix of the first L frames (the sub-shot frames in the first half of the sliding window), and L_1 is the set of the first L sample frames;
S_2 is the within-class scatter matrix of the last L frames (the sub-shot frames in the second half of the sliding window), and L_2 is the set of the last L sample frames;
det(S_b) is the determinant of the between-class scatter matrix;
det(S_1 + S_2) is the determinant of the sum of the within-class scatter matrices.
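Under the definitions of formulas (1)-(4), the criterion value at one window position can be sketched as follows. The function name and the array layout (rows are frame feature vectors) are assumptions. Note that for feature dimension d > 1 the rank-one matrix S_b has zero determinant in exact arithmetic, so the sketch is exercised with scalar features, for which the determinants reduce to ordinary numbers.

```python
import numpy as np

def criterion_F(window):
    """Distance-separability criterion F = det(S_b) / det(S_1 + S_2).

    `window` is a (2L, d) array; the first L rows are sample set L1,
    the last L rows are sample set L2 (this layout is an assumption).
    """
    L = window.shape[0] // 2
    L1, L2 = window[:L], window[L:]
    m1, m2 = L1.mean(axis=0), L2.mean(axis=0)            # formula (1)
    S1 = (L1 - m1).T @ (L1 - m1)                          # formula (2), i = 1
    S2 = (L2 - m2).T @ (L2 - m2)                          # formula (2), i = 2
    Sb = np.outer(m1 - m2, m1 - m2)                       # formula (3)
    return np.linalg.det(Sb) / np.linalg.det(S1 + S2)     # formula (4)
```

For example, with 1-D features [0, 0.1] in the front half and [1.0, 1.1] in the rear half, S_1 = S_2 = 0.005 and S_b = 1, giving F = 100, a sharp peak at a content change; when the two halves overlap, F is small.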
(3) Computing the F curve of the shot
Move the sliding window backward frame by frame and compute the F value at each position. While the whole sliding window lies within the same sub-shot, F is essentially constant and, in the ideal case, tends to zero. When F gradually increases to some value and then gradually decreases again, the sliding window has crossed a sub-shot boundary:
When F increases gradually, the last L frames are beginning to enter the next sub-shot;
When F reaches its maximum, the last L frames lie entirely in the next sub-shot while the first L frames lie entirely in the previous one;
When F decreases gradually, the first L frames are progressively entering the next sub-shot;
When the F curve levels off, the whole sliding window has entered the next sub-shot.
The frame numbers corresponding to the maxima of the F curve can therefore be used as sub-shot segmentation boundaries. Taking the video ROAD from the Open Video repository (http://www.open-video.org/) as an example, its F curve is shown in Fig. 1.
(4) Sub-shot segmentation
As Fig. 1 shows, besides two large extreme points, the F curve also contains two small sawtooth peaks. These arise when computing F with formula (4) from noise caused by flashes, object motion, camera displacement and similar effects within the shot, and are not real sub-shot boundaries.
For this reason, before the maxima are extracted, the F curve is first filtered with formula (5); Fig. 2 shows the filtered curve of Fig. 1.

F' = F - F̄    (5)

wherein:
F' is the filtered criterion function curve;
F̄ is the filter threshold, F̄ = F_mean + F_std;
F_mean and F_std are the mean and standard deviation of the F curve, respectively.
After the F curve is filtered, let the newly obtained functional relation be F' = f(i), where i is the frame number. Maxima are extracted by the second-order difference method, as shown in formula (6):

sign[f(i+1) - f(i)] - sign[f(i) - f(i-1)] = -2    (6)

wherein:
sign is the sign function: sign(x) = 1 for x > 0, 0 for x = 0, and -1 for x < 0.
The frame numbers of the resulting maximum points are shown in Fig. 3: wherever the second-order difference equals -2, the corresponding frame number is a maximum point, i.e., a sub-shot segmentation boundary. It can be seen that this method extracts sub-shot boundaries well and realizes the temporal segmentation of the shot.
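A sketch of steps (3) and (4) applied to a precomputed criterion curve, assuming (as an interpretation of formula (5)) that only maxima rising above the threshold F̄ = F_mean + F_std are kept; the function name is hypothetical.

```python
import numpy as np

def subshot_boundaries(F):
    """Locate sub-shot boundaries on a 1-D criterion curve `F`.

    Applies the filter threshold of formula (5) and tests each interior
    point with the second-order difference of formula (6).
    """
    f = F - (F.mean() + F.std())          # formula (5): F' = F - F_bar
    s = np.sign(np.diff(f))               # sign[f(i+1) - f(i)]
    return [i for i in range(1, len(f) - 1)
            if s[i] - s[i - 1] == -2      # formula (6): a local maximum
            and f[i] > 0]                 # keep only peaks above the threshold
```

On a toy curve with two tall peaks and one small sawtooth, only the tall peaks survive the filtering.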
3. Determining the number of keyframes in a sub-shot
After sub-shot segmentation is complete, keyframes are extracted using the change rate of each sub-shot. The basic idea of the present invention is: since the interframe distance describes the difference between frames, the number of keyframes can be determined through the interframe distance change rate. Let n be the total number of frames of a sub-shot; the sub-shot change rate of formula (7) determines the number of keyframes adaptively:

k = dist(1, n) / [ (1/(n-1)) · Σ_{i=1}^{n-1} dist(i, i+1) ]    (7)

wherein:
k is the number of keyframes;
n is the total number of frames of the sub-shot;
dist(1, n) is the Euclidean distance between the first and last frames of the sub-shot;
dist(i, i+1) is the Euclidean distance between adjacent frames i and i+1 in the sub-shot;
the denominator represents the mean interframe Euclidean distance over the whole sub-shot.
When k ≤ 1, the video content of the sub-shot changes little, and extracting one frame is sufficient;
When k > 1, the integer nearest to k is chosen as the number of keyframes. By calculation, k of the third sub-shot is 3.
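Formula (7) together with the rounding rule above can be sketched as follows; the function name and array layout are assumptions.

```python
import numpy as np

def keyframe_count(subshot):
    """Adaptive keyframe number k of formula (7).

    `subshot` is an (n, d) array of frame feature vectors.  k is the
    head-to-tail distance divided by the mean adjacent-frame distance;
    k <= 1 yields one keyframe, otherwise the nearest integer to k.
    """
    dist_1n = np.linalg.norm(subshot[0] - subshot[-1])      # dist(1, n)
    adj = np.linalg.norm(np.diff(subshot, axis=0), axis=1)  # dist(i, i+1)
    k = dist_1n / adj.mean()
    return max(1, int(round(k)))
```

A sub-shot that drifts steadily in feature space yields a proportionally larger k, while a sub-shot that ends near where it began yields a single keyframe.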
4. Keyframe selection
Compute the Euclidean distance d between each frame in the sub-shot and its previous frame, and take the frame numbers corresponding to the k largest distances; these k frames are the keyframes of the sub-shot.
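The selection step above can be sketched as below; the function name is hypothetical. Since frame 0 has no previous frame, the k largest adjacent distances select frames from index 1 onward.

```python
import numpy as np

def select_keyframes(subshot, k):
    """Return the indices of the k frames whose Euclidean distance to
    the previous frame is largest, in temporal order."""
    d = np.linalg.norm(np.diff(subshot, axis=0), axis=1)  # d[i-1] = dist(i-1, i)
    idx = np.argsort(d)[::-1][:k] + 1   # frames following the k largest jumps
    return sorted(idx.tolist())
```

For a sub-shot whose content jumps at frames 2 and 4, those two frames are returned as keyframes.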
The above is merely a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. An adaptive keyframe extraction method based on sub-shot segmentation, characterized in that the method comprises the following steps:
Step 1: reading all frames of the image in a shot and extracting the color feature vector of each frame;
Step 2: on the basis of step 1, combining a sliding window with the distance separability criterion to divide the shot into sub-shots;
Step 3: determining the number of keyframes of each sub-shot according to the interframe distance change rate;
Step 4: on the basis of step 3, sorting the interframe distances in descending order and selecting the specified number of keyframes.
2. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 1, characterized in that said step 2 specifically comprises:
Step 2.1: establishing a sliding window, and computing the within-class scatter matrices and the between-class scatter matrix of the front and rear sample sets in the sliding window;
Step 2.2: on the basis of step 2.1, constructing the criterion function;
Step 2.3: computing the criterion function curve of the shot;
Step 2.4: filtering the criterion function curve, and taking the frame corresponding to each maximum point of the criterion function curve as a sub-shot segmentation position.
3. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 2, characterized in that the criterion function is computed as:

F = det(S_b) / det(S_1 + S_2)

wherein:
F is the criterion function;
S_b is the between-class scatter matrix;
S_1 is the within-class scatter matrix of the sub-shot frames in the first half of the sliding window;
S_2 is the within-class scatter matrix of the sub-shot frames in the second half of the sliding window;
det(S_b) is the determinant of the between-class scatter matrix;
det(S_1 + S_2) is the determinant of the sum of the within-class scatter matrices.
4. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 3, characterized in that said S_b is computed as:

S_b = (m_1 - m_2)(m_1 - m_2)^T

wherein:
m_1 is the mean vector of the sub-shot frames in the first half of the sliding window;
m_2 is the mean vector of the sub-shot frames in the second half of the sliding window.
5. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 4, characterized in that said S_1 is computed as:

S_1 = Σ_{H ∈ L_1} (H - m_1)(H - m_1)^T

wherein:
L_1 is the set of sample frames of the sub-shot frames in the first half of the sliding window;
H is the feature vector of a sample frame in the sample set.
6. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 5, characterized in that said S_2 is computed as:

S_2 = Σ_{H ∈ L_2} (H - m_2)(H - m_2)^T

wherein:
L_2 is the set of sample frames of the sub-shot frames in the second half of the sliding window.
7. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 2, characterized in that the criterion function curve is filtered by:

F' = F - F̄

wherein:
F' is the filtered criterion function curve;
F̄ is the filter threshold.
8. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 7, characterized in that said F̄ is computed as:

F̄ = F_mean + F_std

wherein:
F_mean is the mean of the criterion function curve;
F_std is the standard deviation of the criterion function curve.
9. The adaptive keyframe extraction method based on sub-shot segmentation according to claim 7, characterized in that the number of keyframes is computed as:

k = dist(1, n) / [ (1/(n-1)) · Σ_{i=1}^{n-1} dist(i, i+1) ]

wherein:
k is the number of keyframes;
n is the total number of frames of the sub-shot;
dist(1, n) is the Euclidean distance between the first and last frames of the sub-shot;
dist(i, i+1) is the Euclidean distance between adjacent frames i and i+1 in the sub-shot.
CN 201110190937 2011-07-08 2011-07-08 Adaptive KF (keyframe) extraction method based on sub-lens segmentation Expired - Fee Related CN102314681B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110190937 CN102314681B (en) 2011-07-08 2011-07-08 Adaptive KF (keyframe) extraction method based on sub-lens segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110190937 CN102314681B (en) 2011-07-08 2011-07-08 Adaptive KF (keyframe) extraction method based on sub-lens segmentation

Publications (2)

Publication Number Publication Date
CN102314681A (en) 2012-01-11
CN102314681B CN102314681B (en) 2013-04-10

Family

ID=45427819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110190937 Expired - Fee Related CN102314681B (en) 2011-07-08 2011-07-08 Adaptive KF (keyframe) extraction method based on sub-lens segmentation

Country Status (1)

Country Link
CN (1) CN102314681B (en)


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ting Wang et al.: "An Approach to Video Key-frame Extraction Based on Rough Set", 2007 International Conference on Multimedia and Ubiquitous Engineering (MUE'07) *
Pan Lei: "Video shot segmentation and key frame extraction based on clustering", Infrared and Laser Engineering *
Wang Huawei et al.: "A method for selecting plot representative frames based on sub-shot clustering", Computer Engineering and Applications *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135401A (en) * 2017-03-31 2017-09-05 北京奇艺世纪科技有限公司 Key frame extraction method and system
CN107135401B (en) * 2017-03-31 2020-03-27 北京奇艺世纪科技有限公司 Key frame selection method and system
CN107169988A (en) * 2017-05-12 2017-09-15 江苏大学 A kind of extraction method of key frame based on COS distance hierarchical clustering
CN109862390A (en) * 2019-02-26 2019-06-07 北京融链科技有限公司 Optimization method and device, storage medium, the processor of Media Stream
CN109862390B (en) * 2019-02-26 2021-06-01 北京融链科技有限公司 Method and device for optimizing media stream, storage medium and processor
CN110020093A (en) * 2019-04-08 2019-07-16 深圳市网心科技有限公司 Video retrieval method, edge device, video frequency searching device and storage medium
CN110472484A (en) * 2019-07-02 2019-11-19 山东师范大学 Video key frame extracting method, system and equipment based on multiple view feature
CN110472484B (en) * 2019-07-02 2021-11-09 山东师范大学 Method, system and equipment for extracting video key frame based on multi-view characteristics
CN111666447A (en) * 2020-06-05 2020-09-15 镇江傲游网络科技有限公司 Content-based three-dimensional CG animation searching method and device
GR20210100499A (en) * 2021-07-20 2023-02-10 Αθανασιος Γεωργιου Παπαγκοτσης Qr code printing for vaccination/illness certification

Also Published As

Publication number Publication date
CN102314681B (en) 2013-04-10

Similar Documents

Publication Publication Date Title
CN102314681B (en) Adaptive KF (keyframe) extraction method based on sub-lens segmentation
US11302315B2 (en) Digital video fingerprinting using motion segmentation
CN109753913B (en) Multi-mode video semantic segmentation method with high calculation efficiency
CN101719144B (en) Method for segmenting and indexing scenes by combining captions and video image information
CN102800095B (en) Lens boundary detection method
CN103942751B (en) A kind of video key frame extracting method
Vojíř et al. The enhanced flock of trackers
CN110717411A (en) Pedestrian re-identification method based on deep layer feature fusion
Poignant et al. From text detection in videos to person identification
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN103390040A (en) Video copy detection method
CN101996410A (en) Method and system of detecting moving object under dynamic background
CN111402126B (en) Video super-resolution method and system based on blocking
CN101577824B (en) Method for extracting compressed domain key frame based on similarity of adjacent I frame DC image
Zhang et al. Coarse-to-fine object detection in unmanned aerial vehicle imagery using lightweight convolutional neural network and deep motion saliency
Youssef et al. Shot boundary detection via adaptive low rank and svd-updating
CN104123396A (en) Soccer video abstract generation method and device based on cloud television
CN103279473A (en) Method, system and mobile terminal for searching massive amounts of video content
CN104036280A (en) Video fingerprinting method based on region of interest and cluster combination
Rong et al. Scene text recognition in multiple frames based on text tracking
Thounaojam et al. A survey on video segmentation
Bae et al. Dual-dissimilarity measure-based statistical video cut detection
Wang et al. Semantic annotation for complex video street views based on 2D–3D multi-feature fusion and aggregated boosting decision forests
CN111160099B (en) Intelligent segmentation method for video image target
CN108573217B (en) Compression tracking method combined with local structured information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Xie Gang

Inventor after: Lei Shaoshuai

Inventor after: Zhao Wenjing

Inventor after: Han Xiaoxia

Inventor after: Xu Xinying

Inventor after: Wang Fang

Inventor before: Lei Shaoshuai

Inventor before: Zhao Wenjing

Inventor before: Han Xiaoxia

Inventor before: Xie Gang

Inventor before: Xu Xinying

Inventor before: Wang Fang

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: LEI SHAOSHUAI ZHAO WENJING HAN XIAOXIA XIE GANG XU XINYING WANG FANG TO: XIE GANG LEI SHAOSHUAI ZHAO WENJING HAN XIAOXIA XU XINYING WANG FANG

C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130410

Termination date: 20130708