CN101425088A - Key frame extraction method and system based on graph partitioning - Google Patents


Info

Publication number
CN101425088A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008102250487A
Other languages
Chinese (zh)
Inventor
戴琼海
高跃
谢旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CNA2008102250487A
Publication of CN101425088A
Legal status: Pending (current)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method and a system for key frame extraction based on graph partitioning. The method comprises the following steps: parsing a video shot, extracting frame features from all video frames in the shot, computing the similarity between frame images, and forming the intra-shot frame similarity matrix D_{N×N}; building a graph G = (V, E) from all video frames in the shot, where each frame in the shot is a node of V and the edge between node i and node j is determined by the positional relation and similarity of the i-th and j-th frames; partitioning the graph G = (V, E) into several segments with the Normalized Cuts method; and selecting from each segment of the graph the frame most similar to the other frames in that segment as a key frame. The invention effectively obtains key frames with strong representative power for the original video shot.

Description

Method and system for key frame extraction based on graph partitioning
Technical field
The present invention relates to the field of video analysis, and in particular to a method and system for key frame extraction based on graph partitioning.
Background technology
With the development of image and video processing technology, the amount of media information people handle grows geometrically every day, and the development of information technology has brought massive volumes of video data. Media applications such as video on demand, digital interactive television, and video conferencing have become fully integrated into people's daily work, study, and entertainment. However, digitized media information, and digitized video in particular, is massive in volume, and traditional text-oriented analysis and retrieval methods consume a great deal of time and manpower and are inefficient for managing video information. Efficiently and quickly browsing and retrieving this video data has therefore become an increasingly urgent need.
A key frame is one frame, or several frames, that reflects the main content of a set of shots and can express the shot content compactly. When retrieving video data, one then need not scan an entire video from beginning to end, but can locate the queried content quickly by browsing the key frames non-linearly.
Existing video key frame extraction methods fall mainly into three classes. The first is the fixed-position method, the simplest approach. It ignores the specific content of the video and its trend of change, and simply takes frames at relatively fixed positions as key frames: once the shot boundaries are determined, the first frame, the last frame, the middle frame, or the frame closest to the mean of all frames is taken directly as the key frame. Although this technique is simple to operate and fast enough to obtain key frames in real time, it cannot guarantee that every important segment of the video has at least one key frame, nor that the key frames are representative of the shot content.
The second class detects significant content changes within the shot. These methods process the video sequence in order and are concerned only with how much the video changes along the time axis. The first key frame is usually the first frame after a shot cut; all frames are then traversed in order, and whenever the accumulated change reaches a threshold, the frame reaching the threshold becomes the next key frame. For example, in the method published in the Springer journal Machine Vision and Applications (vol. 10, no. 2, pp. 51-65, 1997), the search proceeds from the last reference frame until a frame whose distance to the reference frame exceeds the threshold is found, and the frame preceding it becomes the new key frame; the search then continues from that key frame until a frame whose distance to the new key frame exceeds the threshold is found, and the frame preceding it becomes the next reference frame. The key frame so obtained represents all frames between the previous reference frame and the next one. The key frames extracted by this method, however, depend heavily on the starting position and the threshold setting. If accumulated change is used, even a long video with very little change can produce many key frames, so the key frames may not adequately represent the important segments of the video. Moreover, because the change is accumulated, the result depends on the processing direction: processing the video from back to front gives a different result from processing it from front to back.
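As a rough illustration of this prior-art class (not the method of the invention), a simplified threshold scan over per-frame feature vectors might look like the following sketch. The Euclidean distance and the merging of the reference-frame and key-frame roles into one variable are simplifying assumptions, not details fixed by the cited paper:

```python
import numpy as np

def threshold_key_frames(features, threshold):
    """Sequential threshold-based key frame extraction (prior-art sketch).

    `features` is an N x d array of per-frame feature vectors. Whenever a
    frame's distance to the current reference exceeds `threshold`, the frame
    *before* it is taken as the next key frame and becomes the new reference.
    """
    keys = [0]          # the first frame after the cut is the first key frame
    ref = 0             # current reference frame index
    for i in range(1, len(features)):
        if np.linalg.norm(features[i] - features[ref]) > threshold:
            keys.append(i - 1)   # frame preceding the jump becomes a key frame
            ref = i - 1          # ... and the new reference
    return keys
```

Note how the output depends on the starting frame and the threshold, which is exactly the weakness the description points out.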
The third class uses cluster analysis to divide the frames of a video shot into several classes and chooses the frame closest to each cluster center as the representative of that cluster, finally forming the key frame set of the video sequence. With current clustering methods, such as fuzzy C-means clustering, the similarity between clusters is low, but the similarity within each cluster cannot be made sufficiently high.
In short, a technical problem urgently awaiting a solution by those skilled in the art is how to effectively obtain key frames with strong representative power for the original video shot.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method and system for key frame extraction based on graph partitioning that effectively obtain key frames with strong representative power for the original video shot.
To solve the above problem, the invention discloses a key frame extraction method based on graph partitioning, comprising the following steps:
parsing a video shot, extracting frame features from all video frames in the shot, computing the similarity between frame images, and forming the intra-shot frame similarity matrix D_{N×N}, where D_ij stores the overall similarity of the i-th and j-th frames;
building a graph G = (V, E) from all video frames in the shot, where each frame in the shot is a node of V, and the edge between node i and node j is determined by the similarity and positional relation of the i-th and j-th frames;
partitioning the graph G = (V, E) into several segments with the Normalized Cuts method;
selecting from each segment of the graph the frame most similar to the other frames in that segment as a key frame.
Preferably, the similarity between two frame images is computed by comparing their color histograms.
Further, the edge between node i and node j in the graph G = (V, E) is determined by the following steps:
computing the similarity D_ij of the i-th and j-th frames;
computing the position weight between node i and node j, ω(i, j) = e^(−(i − j)²/σ), where σ is a parameter;
computing the edge between node i and node j, e(i, j) = ω(i, j) × D_ij.
Preferably, partitioning the graph G = (V, E) with the Normalized Cuts method comprises the following steps:
defining the similarity between two segments V′ and V″ of the graph as cut(V′, V″) = Σ_{i∈V′, j∈V″} e(i, j);
defining the association index of the graph as assoc(X, V) = Σ_{i∈X, j∈V} ω(i, j);
establishing the criterion Ncut(V′, V″) = cut(V′, V″)/assoc(V′, V) + cut(V′, V″)/assoc(V″, V);
iterating to obtain the global minimum of Ncut(V′, V″), which yields the optimal partition of the graph.
Preferably, the color histograms of all video frames in a segment are averaged, and the video frame closest to this average color histogram is selected as the key frame.
Further, the method also comprises computing the weight of the key frame within the shot.
Further, the weight of the key frame within the shot is obtained as follows:
the number of video frames in the whole shot is NT, and the number of video frames in the current segment is NK;
the weight of the current key frame in the whole shot is W = NK / NT.
According to an embodiment of the invention, a key frame extraction system based on graph partitioning is also disclosed, comprising:
a frame similarity matrix computing module, for parsing a video shot, extracting frame features from all video frames in the shot, computing the similarity between frame images, and forming the intra-shot frame similarity matrix D_{N×N};
a graph building module, for building a graph G = (V, E) from all video frames in the shot, where each frame in the shot is a node of V;
an inter-node edge computing module, for determining the edge between node i and node j according to the similarity and positional relation of the i-th and j-th frames;
a graph partitioning module, for partitioning the graph G = (V, E) into several segments with the Normalized Cuts method;
a key frame selection module, for selecting from each segment of the graph the frame most similar to the other frames in that segment as a key frame, and computing the weight of the key frame within the shot.
Preferably, the similarity between two frame images is computed by comparing their color histograms.
Further, the inter-node edge computing module comprises:
a submodule for computing the similarity D_ij of the i-th and j-th frames;
a submodule for computing the position weight between node i and node j, ω(i, j) = e^(−(i − j)²/σ), where σ is a parameter;
a submodule for computing the edge between node i and node j, e(i, j) = ω(i, j) × D_ij.
Further, the graph partitioning module comprises:
a submodule for defining the similarity between two segments V′ and V″ of the graph as cut(V′, V″) = Σ_{i∈V′, j∈V″} e(i, j);
a submodule for defining the association index of the graph as assoc(X, V) = Σ_{i∈X, j∈V} ω(i, j);
a submodule for establishing the criterion Ncut(V′, V″) = cut(V′, V″)/assoc(V′, V) + cut(V′, V″)/assoc(V″, V);
a submodule for iterating to obtain the global minimum of Ncut(V′, V″), which yields the optimal partition of the graph.
Compared with the prior art, the present invention has the following advantages:
The method of the invention builds a graph from the video frames, partitions the graph with the Normalized Cuts method, and constrains the partition by the criterion above, thereby obtaining the optimal partition of the graph. The constraint imposed when applying Normalized Cuts guarantees large differences between any two segments and good similarity within each segment. Taking from each resulting video segment the frame most similar to the other frames as the key frame keeps the key frames highly dissimilar from one another; and taking the ratio of the number of frames in a segment to the total number of frames in the shot as the weight of that key frame makes the extracted key frames a more accurate representation of the original video. At the same time, the method is simple in design and easy to implement.
Description of drawings
Fig. 1 is a flow chart of the steps of an embodiment of the key frame extraction method based on graph partitioning according to the invention;
Fig. 2 is an architecture diagram of the key frame extraction system based on graph partitioning according to the invention;
Fig. 3 is a schematic comparison of the key frames extracted from an interview recording by an embodiment of the invention and by prior-art methods.
Embodiment
To make the above objects, features, and advantages of the present invention more apparent, the invention is described in further detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, an embodiment of the key frame extraction method based on graph partitioning according to the invention specifically comprises:
Step 101: parse a video shot, extract frame features from all video frames in the shot, compute the similarity between frame images, and form the intra-shot frame similarity matrix D_{N×N}, where D_ij stores the overall similarity of the i-th and j-th frames.
A shot is a group of temporally continuous frames representing an action continuous in time and space within a scene; it corresponds to one start-stop recording operation of the camera, and is also called a cut or a take. The shot is the smallest unit of video data. A scene is a set of semantically related and temporally adjacent shots.
Preferably, the similarity between two frame images can be computed by comparing their color histograms.
The color histogram is the most commonly used representation of the color features of an image. Its advantage is that it is invariant to image rotation and translation and, after normalization, also to changes of image scale. A color histogram simply describes the global distribution of colors in an image, i.e., the proportion each color occupies in the whole image, and is especially suitable for describing images that are hard to segment automatically and for which object positions need not be considered. In general, the larger the color difference between two frame images, the smaller the similarity between them, and vice versa.
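As a concrete illustration, the histogram comparison and the resulting similarity matrix D_{N×N} might be sketched as follows. This is a minimal sketch: the bin count, the per-channel histogram intersection, and the averaging over channels are illustrative assumptions, since the patent does not fix a particular histogram comparison formula:

```python
import numpy as np

def histogram_similarity(frame_a, frame_b, bins=16):
    """Similarity of two frames via normalized color-histogram intersection.

    `frame_a`/`frame_b` are H x W x 3 uint8 arrays; the result lies in [0, 1],
    with 1 meaning identical color distributions.
    """
    sims = []
    for c in range(3):  # one histogram per color channel
        ha, _ = np.histogram(frame_a[..., c], bins=bins, range=(0, 256))
        hb, _ = np.histogram(frame_b[..., c], bins=bins, range=(0, 256))
        ha = ha / ha.sum()  # normalize so image scale does not matter
        hb = hb / hb.sum()
        sims.append(np.minimum(ha, hb).sum())  # histogram intersection
    return float(np.mean(sims))

def similarity_matrix(frames):
    """Build the N x N matrix D with D[i, j] = similarity of frames i and j."""
    n = len(frames)
    D = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = histogram_similarity(frames[i], frames[j])
    return D
```

Any other comparable similarity measure could be substituted here without changing the rest of the pipeline.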
Step 102: build a graph G = (V, E) from all video frames in the shot, where each frame in the shot is a node of V, and the edge between node i and node j is determined by the similarity and positional relation of the i-th and j-th frames.
An undirected graph G = (V, E) consists of two sets V and E, where V is a finite non-empty set of nodes and E is a finite set of edges. In the present invention, each frame in the shot is a node of V. Further, the edge between node i and node j in the graph G = (V, E) is determined by the following steps:
computing the similarity D_ij of the i-th and j-th frames;
computing the position weight between node i and node j, ω(i, j) = e^(−(i − j)²/σ), where σ is a parameter;
computing the edge between node i and node j, e(i, j) = ω(i, j) × D_ij.
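The edge construction above (position weight ω combined with appearance similarity D_ij) can be sketched as follows, assuming the similarity matrix D from step 101; the value of σ is an assumed example, since the patent leaves it as a free parameter:

```python
import numpy as np

def edge_weights(D, sigma=10.0):
    """Combine appearance similarity with temporal proximity.

    omega(i, j) = exp(-(i - j)**2 / sigma) and e(i, j) = omega(i, j) * D[i, j],
    following the edge definition above. Returns the full N x N edge matrix.
    """
    n = D.shape[0]
    idx = np.arange(n)
    dist2 = (idx[:, None] - idx[None, :]) ** 2  # (i - j)^2 for every pair
    omega = np.exp(-dist2 / sigma)              # position weight
    return omega * D                            # e(i, j)
```

The effect of ω is that frames far apart in time are weakly connected even if they look alike, which keeps the subsequent partition temporally coherent.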
Step 103: partition the graph G = (V, E) into several segments with the Normalized Cuts method, which specifically can comprise the following steps:
defining the similarity between two segments V′ and V″ of the graph as cut(V′, V″) = Σ_{i∈V′, j∈V″} e(i, j);
defining the association index of the graph as assoc(X, V) = Σ_{i∈X, j∈V} ω(i, j);
establishing the criterion Ncut(V′, V″) = cut(V′, V″)/assoc(V′, V) + cut(V′, V″)/assoc(V″, V);
iterating to obtain the global minimum of Ncut(V′, V″), which yields the optimal partition of the graph.
Here V′ and V″ are any two segments of V, and the similarity between them is defined as the sum of the edges e(i, j) between every node i in V′ and every node j in V″. The association index assoc(X, V) is the sum of the position weights between the nodes of X and the nodes of V. When the criterion Ncut(V′, V″) reaches its minimum, the resulting partition of the graph ensures that video frames within a segment have high similarity while video frames in different segments have low similarity; at that point the best partition is obtained.
The Normalized Cuts method is a graph-partitioning method that builds a similarity measure between nodes and converts the graph partitioning problem into a matrix eigenvalue problem whose solution gives a near-optimal partition. In the concrete partitioning process, the degree of partitioning, and thus the amount of information carried by each block, can be controlled through the criterion. The method pursues global optimality and therefore achieves a better partitioning result. The detailed theory of this method is well known to those skilled in the art and is not repeated here. Of course, those skilled in the art may use any other feasible mathematical model as required to partition the graph, so long as the resulting partition gives high similarity among the video frames within a segment and low similarity between segments.
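A common way to realize the eigenvalue formulation mentioned above is the spectral relaxation of Shi and Malik, in which one bipartition is read off the eigenvector with the second-smallest eigenvalue of the normalized graph Laplacian, and the multi-segment partition follows by recursive application. The following sketch shows one such bipartition; this is an assumed realization, as the patent does not prescribe a particular numerical procedure:

```python
import numpy as np

def ncut_bipartition(E):
    """One Normalized-Cuts bipartition via the standard spectral relaxation.

    `E` is the symmetric N x N edge-weight matrix. The minimizer of Ncut is
    approximated by the second-smallest eigenvector of the normalized
    Laplacian; nodes are split by the sign of that eigenvector.
    """
    d = E.sum(axis=1)                     # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(d)
    # symmetric normalized Laplacian: I - D^{-1/2} E D^{-1/2}
    L = np.eye(len(d)) - (d_inv_sqrt[:, None] * E * d_inv_sqrt[None, :])
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = vecs[:, 1] * d_inv_sqrt     # map back to the generalized problem
    return fiedler >= 0                   # boolean mask: True = one segment
```

Applying this recursively to each resulting segment, until a desired segment count or an Ncut threshold is reached, yields the "several segments" of step 103.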
Step 104: from each segment of the optimally partitioned graph, select the frame most similar to the other frames in that segment as a key frame.
A key frame is one frame, or several frames, that reflects the main content of a set of shots and can express the shot content compactly.
Preferably, the color histograms of all video frames in a segment are averaged, and the video frame closest to this average color histogram is selected as the key frame. This satisfies the two basic requirements of key frame selection: the selected frame must reflect the main events in its segment, and its description should be as accurate and complete as possible; and the extracted key frames should be as few and as representative as possible, keeping the amount of data processing small and the computation simple. Although the embodiments of the invention use the color histogram as the feature for key frame extraction, the invention is not limited thereto; other common image matching algorithms, such as direct comparison, color-feature, texture-feature, shape-feature, and compressed-domain methods, all fall within the scope of the invention.
In addition, this step also comprises computing the weight of the key frame within the shot:
Suppose the number of video frames in the whole shot is NT and the number of video frames in the current segment is NK; the weight of the current key frame in the whole shot is then W = NK / NT. The introduction of this weight factor makes each key frame's description of the corresponding video content more accurate and effective.
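The selection rule and the weight W = NK / NT might be sketched as follows, assuming each frame is represented by its (flattened) color histogram from step 101; the L1 distance to the mean histogram is an assumed interpretation of "closest":

```python
import numpy as np

def pick_key_frame(histograms, segment):
    """Within one segment, pick the frame whose color histogram is closest
    (here: smallest L1 distance, an assumed metric) to the segment's mean
    histogram. `histograms` is an N x d array; `segment` lists frame indices.
    """
    seg = np.asarray(segment)
    mean_hist = histograms[seg].mean(axis=0)          # average histogram
    dists = np.abs(histograms[seg] - mean_hist).sum(axis=1)
    return int(seg[np.argmin(dists)])                 # global frame index

def key_frame_weight(segment, total_frames):
    """W = NK / NT: the fraction of the shot covered by this segment."""
    return len(segment) / total_frames
```

The weights of all key frames in a shot then sum to 1, so each weight directly measures how much of the shot its key frame represents.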
With the method of the invention, when retrieving video data one need not scan an entire video from beginning to end, but can locate the queried content quickly by browsing the key frames; this also helps one quickly understand the content of an original video and decide whether it is the desired video data.
Referring to Fig. 2, an embodiment of the key frame extraction system based on graph partitioning according to the invention specifically can comprise:
a frame similarity matrix computing module 201, for parsing a video shot, extracting frame features from all video frames in the shot, computing the similarity between frame images, and forming the intra-shot frame similarity matrix D_{N×N};
a graph building module 202, for building a graph G = (V, E) from all video frames in the shot, where each frame in the shot is a node of V;
an inter-node edge computing module 203, for determining the edge between node i and node j according to the similarity and positional relation of the i-th and j-th frames;
a graph partitioning module 204, for partitioning the graph G = (V, E) into several segments with the Normalized Cuts method;
a key frame selection module 205, for selecting from each segment of the graph the frame most similar to the other frames in that segment as a key frame, and computing the weight of the key frame within the shot.
Preferably, the similarity between two frame images is computed by comparing their color histograms.
Preferably, in one implementation, the inter-node edge computing module 203 further comprises:
a submodule for computing the similarity D_ij of the i-th and j-th frames;
a submodule for computing the position weight between node i and node j, ω(i, j) = e^(−(i − j)²/σ), where σ is a parameter;
a submodule for computing the edge between node i and node j, e(i, j) = ω(i, j) × D_ij.
Preferably, in one implementation, the graph partitioning module 204 further comprises:
a submodule for defining the similarity between two segments V′ and V″ of the graph as cut(V′, V″) = Σ_{i∈V′, j∈V″} e(i, j);
a submodule for defining the association index of the graph as assoc(X, V) = Σ_{i∈X, j∈V} ω(i, j);
a submodule for establishing the criterion Ncut(V′, V″) = cut(V′, V″)/assoc(V′, V) + cut(V′, V″)/assoc(V″, V);
a submodule for iterating to obtain the global minimum of Ncut(V′, V″), which yields the optimal partition of the graph.
In operation, the key frame extraction system 200 based on graph partitioning of the invention works as follows. First, the frame similarity matrix computing module 201 parses the video shot, extracts frame features, such as color histograms, from all video frames in the shot, computes the similarity between frame images, and forms the intra-shot frame similarity matrix D_{N×N}. Then the graph building module 202 builds an undirected graph G = (V, E) from all video frames in the shot, with each frame as a node of V; the inter-node edge computing module 203 determines the edge between node i and node j according to the similarity and positional relation of the i-th and j-th frames; and the graph partitioning module 204 applies the Normalized Cuts method to partition the graph G = (V, E) optimally into several segments. Finally, the key frame selection module 205 selects from each segment of the graph the frame most similar to the other frames in that segment as a key frame, and computes the weight of the key frame within the shot.
Since the system embodiment is substantially similar to the method embodiment, its description is relatively brief; for the relevant details, refer to the description of the method embodiment.
Fig. 3(a) shows the three key frames extracted from an interview recording in an embodiment of the invention: the 60th, 190th, and 270th frames of the video shot. The recording contains only one shot, 300 frames long; in the first part of the shot a muscular laborer speaks in front of the camera, and from the middle of the video onward the camera turns to a construction site. The picture changes considerably within the shot, from the person's face to the sky and then to the building site. Fig. 3(b) shows the key frames extracted by the method proposed in IEEE Transactions on Multimedia (vol. 7, no. 6, pp. 1097-1105, 2005): the 91st, 156th, 199th, and 258th frames of the shot. Fig. 3(c) shows the key frames obtained by random sampling: the 0th, 99th, 199th, and 299th frames. As can be seen from Fig. 3, the key frames obtained by the method of the invention represent the original video shot comparatively better, while the distance between the key frames is larger.
The key frame extraction method and system based on graph partitioning provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principle and embodiments of the invention; the above description of the embodiments is intended only to help understand the method of the invention and its core idea. At the same time, those of ordinary skill in the art may, according to the idea of the invention, make changes in the specific embodiments and the scope of application. In summary, this description should not be construed as limiting the invention.

Claims (11)

1. A key frame extraction method based on graph partitioning, characterized by comprising the following steps:
parsing a video shot, extracting frame features from all video frames in the shot, computing the similarity between frame images, and forming the intra-shot frame similarity matrix D_{N×N}, where D_ij stores the overall similarity of the i-th and j-th frames;
building a graph G = (V, E) from all video frames in the shot, where each frame in the shot is a node of V, and the edge between node i and node j is determined by the similarity and positional relation of the i-th and j-th frames;
partitioning the graph G = (V, E) into several segments with the Normalized Cuts method;
selecting from each segment of the graph the frame most similar to the other frames in that segment as a key frame.
2. The key frame extraction method based on graph partitioning according to claim 1, characterized in that the similarity between two frame images is computed by comparing their color histograms.
3. The key frame extraction method based on graph partitioning according to claim 1, characterized in that the edge between node i and node j in the graph G = (V, E) is determined by the following steps:
computing the similarity D_ij of the i-th and j-th frames;
computing the position weight between node i and node j, ω(i, j) = e^(−(i − j)²/σ), where σ is a parameter;
computing the edge between node i and node j, e(i, j) = ω(i, j) × D_ij.
4. The key frame extraction method based on graph partitioning according to claim 1, characterized in that partitioning the graph G = (V, E) with the Normalized Cuts method comprises the following steps:
defining the similarity between two segments V′ and V″ of the graph as cut(V′, V″) = Σ_{i∈V′, j∈V″} e(i, j);
defining the association index of the graph as assoc(X, V) = Σ_{i∈X, j∈V} ω(i, j);
establishing the criterion Ncut(V′, V″) = cut(V′, V″)/assoc(V′, V) + cut(V′, V″)/assoc(V″, V);
iterating to obtain the global minimum of Ncut(V′, V″), which yields the optimal partition of the graph.
5. The key frame extraction method based on graph partitioning according to claim 1, characterized in that the color histograms of all video frames in a segment are averaged, and the video frame closest to this average color histogram is selected as the key frame.
6. The key frame extraction method based on graph partitioning according to claim 1, characterized by further comprising computing the weight of the key frame within the shot.
7. The key frame extraction method based on graph partitioning according to claim 6, characterized in that the weight of the key frame within the shot is obtained as follows:
the number of video frames in the whole shot is NT, and the number of video frames in the current segment is NK;
the weight of the current key frame in the whole shot is W = NK / NT.
8. A key frame extraction system based on graph partitioning, characterized by comprising:
a frame similarity matrix computing module, for parsing a video shot, extracting frame features from all video frames in the shot, computing the similarity between frame images, and forming the intra-shot frame similarity matrix D_{N×N};
a graph building module, for building a graph G = (V, E) from all video frames in the shot, where each frame in the shot is a node of V;
an inter-node edge computing module, for determining the edge between node i and node j according to the similarity and positional relation of the i-th and j-th frames;
a graph partitioning module, for partitioning the graph G = (V, E) into several segments with the Normalized Cuts method;
a key frame selection module, for selecting from each segment of the graph the frame most similar to the other frames in that segment as a key frame, and computing the weight of the key frame within the shot.
9. The key frame extraction system based on graph partitioning according to claim 8, characterized in that the similarity between two frame images is computed by comparing their color histograms.
10. The key frame extraction system based on graph partitioning according to claim 8, characterized in that said inter-node edge computation module further comprises:
a submodule for computing the similarity D_ij between the i-th frame and the j-th frame;
a submodule for computing the positional weight between node i and node j, ω(i, j) = e^(−(i−j)²/σ), where σ is a parameter;
a submodule for computing the edge between node i and node j, e(i, j) = ω(i, j) × D_ij.
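The two edge submodules of claim 10 follow directly from the formulas ω(i, j) = e^(−(i−j)²/σ) and e(i, j) = ω(i, j) × D_ij. A minimal sketch, where σ = 10 is only an example value and D is the precomputed similarity matrix:

```python
import numpy as np

def position_weight(i, j, sigma=10.0):
    """omega(i, j) = exp(-(i - j)^2 / sigma): near 1 for neighboring
    frames, decaying toward 0 as the frames move apart in time."""
    return float(np.exp(-((i - j) ** 2) / sigma))

def edge_weight(i, j, D, sigma=10.0):
    """e(i, j) = omega(i, j) * D_ij: frame similarity gated by temporal
    proximity, so distant look-alike frames get only a weak edge."""
    return position_weight(i, j, sigma) * D[i, j]
```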
11. The key frame extraction system based on graph partitioning according to claim 8, characterized in that said graph partitioning module further comprises:
a submodule defining the similarity between two segments V′ and V″ of the graph as cut(V′, V″) = Σ_{i∈V′, j∈V″} e(i, j);
a submodule defining the association index of the graph as assoc(X, V) = Σ_{i∈X, j∈V} ω(i, j);
a submodule establishing the criterion Ncut(V′, V″) = cut(V′, V″)/assoc(V′, V) + cut(V′, V″)/assoc(V″, V);
a submodule for iterating to obtain the global minimum of Ncut(V′, V″), thereby obtaining the optimal partition of the graph.
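The Ncut criterion of claim 11 can be evaluated for one candidate two-way split as below. This is only a sketch of the criterion itself: the claim's iterative search for the global minimum (in practice often replaced by the spectral relaxation used in the Normalized Cuts literature) is not shown, and W is assumed to be the symmetric matrix of edge weights e(i, j).

```python
import numpy as np

def ncut_value(W, in_a):
    """Ncut(V', V'') = cut/assoc(V', V) + cut/assoc(V'', V) for a boolean
    membership mask `in_a` over the edge-weight matrix W."""
    a = np.asarray(in_a, dtype=bool)
    b = ~a
    cut = W[np.ix_(a, b)].sum()   # total weight crossing the partition
    assoc_a = W[a, :].sum()       # weight connecting V' to the whole graph
    assoc_b = W[b, :].sum()       # weight connecting V'' to the whole graph
    return cut / assoc_a + cut / assoc_b
```

A split that separates weakly connected groups of frames yields a small Ncut; minimizing it over candidate splits gives the optimal partition named in the claim.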
CNA2008102250487A 2008-10-24 2008-10-24 Key frame extracting method and system based on chart partition Pending CN101425088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008102250487A CN101425088A (en) 2008-10-24 2008-10-24 Key frame extracting method and system based on chart partition

Publications (1)

Publication Number Publication Date
CN101425088A true CN101425088A (en) 2009-05-06

Family

ID=40615701

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008102250487A Pending CN101425088A (en) 2008-10-24 2008-10-24 Key frame extracting method and system based on chart partition

Country Status (1)

Country Link
CN (1) CN101425088A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604325B (en) * 2009-07-17 2012-06-20 北京邮电大学 Method for classifying sports video based on key frame of main scene lens
CN103093458A (en) * 2012-12-31 2013-05-08 清华大学 Detecting method and detecting device for key frame
CN103093458B (en) * 2012-12-31 2015-11-25 清华大学 The detection method of key frame and device
CN107529098A (en) * 2014-09-04 2017-12-29 英特尔公司 Real-time video is made a summary
CN104679818A (en) * 2014-12-25 2015-06-03 安科智慧城市技术(中国)有限公司 Video keyframe extracting method and video keyframe extracting system
CN104679818B (en) * 2014-12-25 2019-03-26 上海云赛智联信息科技有限公司 A kind of video key frame extracting method and system
CN107025099A (en) * 2016-02-01 2017-08-08 北京大学 A kind of asynchronous figure based on deque's model calculates realization method and system
CN107025099B (en) * 2016-02-01 2019-12-27 北京大学 Asynchronous graph calculation implementation method and system based on double-queue model
WO2019085941A1 (en) * 2017-10-31 2019-05-09 腾讯科技(深圳)有限公司 Key frame extraction method and apparatus, and storage medium
CN108854031A (en) * 2018-05-29 2018-11-23 深圳臻迪信息技术有限公司 The method and relevant apparatus of exercise data are analyzed by unmanned camera work
CN109902189A (en) * 2018-11-30 2019-06-18 华为技术有限公司 A kind of picture selection method and relevant device
WO2020108656A1 (en) * 2018-11-30 2020-06-04 华为技术有限公司 Image selecting method and related device
CN109902189B (en) * 2018-11-30 2021-02-12 华为技术有限公司 Picture selection method and related equipment
US11758285B2 (en) 2018-11-30 2023-09-12 Huawei Technologies Co., Ltd. Picture selection method and related device
CN110175591A (en) * 2019-05-31 2019-08-27 中科软科技股份有限公司 A kind of method and system obtaining video similarity
CN110175591B (en) * 2019-05-31 2021-06-22 中科软科技股份有限公司 Method and system for obtaining video similarity
CN111625683A (en) * 2020-05-07 2020-09-04 山东师范大学 Video abstract automatic generation method and system based on graph structure difference analysis
CN116405745A (en) * 2023-06-09 2023-07-07 深圳市信润富联数字科技有限公司 Video information extraction method and device, terminal equipment and computer medium
CN116405745B (en) * 2023-06-09 2023-11-17 深圳市信润富联数字科技有限公司 Video information extraction method and device, terminal equipment and computer medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20090506