CN102833492A - Color similarity-based video scene segmenting method - Google Patents

Color similarity-based video scene segmenting method

Info

Publication number
CN102833492A
CN102833492A (application CN201210273694.7A)
Authority
CN
China
Prior art keywords
scene
value
cut
point
video segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102736947A
Other languages
Chinese (zh)
Other versions
CN102833492B (en)
Inventor
张怡
任金昌
袁正雄
温超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201210273694.7A priority Critical patent/CN102833492B/en
Publication of CN102833492A publication Critical patent/CN102833492A/en
Application granted granted Critical
Publication of CN102833492B publication Critical patent/CN102833492B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Television Signal Processing For Recording (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention belongs to the technical field of computer video data processing and relates to a video scene segmentation method based on color similarity. For a given video clip, a similarity matrix is obtained as follows: the RGB (red, green, blue) color histogram of every frame is extracted, the similarity between frames is computed from these histograms, and a similarity matrix covering the whole clip is assembled. The method then comprises the following steps: performing a first-pass scene segmentation of the clip; merging small scenes; and verifying candidate transition sections before the final segmentation. Segmenting scenes in this way yields a more accurate and reliable segmentation result.

Description

A video scene segmentation method based on color similarity
Technical field
The invention belongs to the technical field of computer video data processing, and in particular concerns a method for segmenting video into scenes.
Background
With the rapid development of digital multimedia technology and computer storage capacity, digital video has found wide use in work and daily life in recent years. How to retrieve and manage large volumes of digital video data effectively, however, remains a difficult open problem. Video scene segmentation is a key basic step toward solving it: dividing a complete video into segments with clearly independent content is important for both summary extraction and content retrieval.
Summary of the invention
The object of the present invention is to provide a simple and effective video scene segmentation method. The proposed method uses only the color features of each frame, and handles both abrupt cut points between segments and gradual transition sections well. To this end, the invention adopts the following technical scheme:
A video scene segmentation method based on color similarity. For a given video clip, a similarity matrix is obtained as follows: the RGB color histogram of every frame is extracted, the similarity between frames is computed from these histograms, and a similarity matrix covering the whole clip is assembled. The transitions from one scene to another are classified into three types:
(1) Abrupt cut point: one scene switches directly to another, with no transition in between.
(2) Fade transition section: in the change from one scene to the next, the previous scene fades out while the next scene fades in.
(3) Complex transition section: any transition that does not fall into the two cases above.
The method comprises the following steps:
Step 1. Perform a first-pass scene segmentation of the clip:
(1) Put the first frame into the first scene and take the second frame as the current frame.
(2) Compute avg0, the mean of all similarity-matrix values within the scene preceding the current frame, and avg1, the mean of the similarity vector between the current frame and all frames of that preceding scene.
(3) If |avg0 − avg1| < the consistency threshold TH1, include the current frame in the preceding scene, take the next frame as the current frame, and return to (2); otherwise start a new scene at the current frame, take the next frame as the current frame, and return to (2). Stop at the end of the clip, yielding the first-pass segmentation result.
Step 2. Merge small scenes: in the segmentation result, merge each run of consecutive scenes whose frame counts are below the merge threshold TH2.
Step 3. Verify transition sections and re-segment: compute the standard deviation of all similarity-matrix values within each merged scene range. If the standard deviation is below the transition-discrimination threshold TH3, the merged scene is judged a pseudo-transition and restored to the small scenes it was merged from; otherwise it is judged a genuine transition section and the merged state is kept. This yields the new segmentation result.
In a preferred implementation, the three thresholds are trained according to the following steps:
(1) Obtain a batch of training clips and classify each ground-truth scene cut point of these clips by transition type (abrupt, fade, or complex).
(2) Set the initial value, stop value, and unit step for the consistency threshold TH1, and set initial values for the merge threshold TH2 and the transition-discrimination threshold TH3.
(3) For each training clip, perform the first-pass scene segmentation of step 1 to obtain a segmentation result.
(4) Compare the segmentation result with the ground-truth cut points of the clip: compute the cost value a and the coverage rate b against the ground truth, and evaluate the agreement between the segmentation and the ground truth with a measure R (given in the original only as a formula image). The cost value a is computed as follows:
a starts at 0. For each cut point in the segmentation result, look for its frame among the ground-truth cut points. If it is found and is an abrupt cut point, a is left unchanged; if it is found within a fade or complex transition section, a is increased by a reduced penalty (formula image in the original); if it cannot be found, a = a + 1.
The coverage rate b is computed as follows:
Introduce a variable a1, initially 0. For each ground-truth cut point, look for its frame in the segmentation result; if it is found, a1 is left unchanged, and if not, a1 = a1 + 1. Finally compute b = 1 − a1 / (total number of ground-truth cut points), i.e. the fraction of ground-truth cut points that are covered.
(5) Vary the value of TH1 over its range so as to optimize the agreement criterion (shown in the original as a formula image), obtaining a trained TH1 value for each training clip.
(6) Take the mean of the trained TH1 values over all training clips as the recommended consistency threshold.
(7) With the recommended consistency threshold from this training as TH1, perform the first-pass scene segmentation of step 1 again.
(8) Set the initial value, stop value, and unit step for the merge threshold TH2.
(9) Merge each run of consecutive scenes with frame counts below TH2 into a single scene.
(10) Compare the result of merging small scenes with the ground-truth cut points of the clip, following the method of step (4).
(11) Vary the value of TH2 so as to optimize the agreement criterion (formula image in the original), obtaining a trained TH2 value for each training clip.
(12) Take the mean of the trained TH2 values over all training clips as the recommended merge threshold.
(13) With the recommended consistency threshold as TH1 and the recommended merge threshold as TH2, perform the scene segmentation of step 1 and obtain a segmentation result.
(14) Compute the standard deviation of all similarity-matrix values within each merged scene range. If the standard deviation is below TH3, the merged scene is judged a pseudo-transition and restored to the small scenes it was merged from; otherwise it is judged a genuine transition section and the merged state is kept.
(15) Compare the segmentation result obtained after this judgment with the ground-truth cut points of the clip, following the method of step (4).
(16) Vary the value of TH3 so as to optimize the agreement criterion, obtaining a trained TH3 value for each training clip.
(17) Take the mean of the trained TH3 values over all training clips as the recommended transition-discrimination threshold.
The scene segmentation method of the invention handles abrupt cut points, fade transition sections, and complex transition sections well. Setting the thresholds to values within a suitable range and segmenting scenes according to the invention already gives fairly accurate results (Fig. 1), showing that the method is effective. To improve segmentation further, the thresholds should be trained; the training effect can be verified by comparison against ground-truth scene cut points. In the verification experiment we used 50 video clips, each containing several known ground-truth scene cut points, and measured the reliability of the overall training result with the following two statistics:
(1) Cost value a: for one clip, the cost, relative to the ground-truth cut points, of the superfluous cut points produced by the method.
(2) Coverage rate b: for one clip, the fraction of ground-truth cut points that are matched by cut points produced by the method.
From these two indicators, a smaller cost value a means fewer superfluous cut points, and a coverage rate b closer to 1 means fewer valid cut points are missed. Over the 50 clips, the cost value a was consistently small and the coverage rate b stayed essentially above 80% (Fig. 2), so the recommended thresholds obtained by training are reliable, and the segmentation agrees closely with the ground-truth scene cut points.
Description of drawings
Fig. 1. Segmentation results: the top, middle, and bottom parts are schematic segmentation results for an abrupt cut point, a fade transition section, and a complex transition section, respectively.
Fig. 2. Verification results: Fig. 2(a) and Fig. 2(b) show the statistics of the cost value a and of the coverage rate b, respectively.
Fig. 3. System flow chart of the invention.
Embodiments
The invention proposes a new video scene segmentation method. First, the forms of transition from one scene to another are classified as follows:
(1) Abrupt cut point: one scene switches directly to another, with no transition in between.
Characteristics: the change is very rapid, and the frames before and after the cut point differ markedly, so each frame is easily assigned to its own scene.
(2) Fade transition section: in the change from one scene to the next, the previous scene fades out while the next fades in.
Characteristics: the change is relatively slow; the transition section typically spans several to a few dozen frames, and the frame-to-frame differences are larger than in an ordinary scene section.
(3) Complex transition section: the change from one scene to the next follows some more complicated mapping, such as a dissolve or a zoom. (Any transition that does not fall into the two cases above is classed as a complex transition section.)
Characteristics: the variation patterns are diverse and the change is slow; the transition section typically spans several to a few dozen frames, and the frame-to-frame differences are larger than in an ordinary scene section.
The method first performs an initial segmentation of the video using the consistency threshold, then re-merges the resulting scenes into candidate transition sections using the merge threshold, and finally filters out pseudo-transitions using the transition-discrimination threshold, yielding the final scene segmentation. The three thresholds are specified as follows:
(1) Consistency threshold: during the first-pass segmentation, decides whether the next frame is included in the preceding scene.
(2) Merge threshold: after the first-pass segmentation, each run of consecutive scenes with frame counts below this threshold is merged into one scene (a candidate transition section).
(3) Transition-discrimination threshold: after small scenes are merged, the standard deviation of all similarity-matrix values within the merged range is computed. If it is below this threshold, the merge is judged a pseudo-transition and restored to its pre-merge state; otherwise the merged scene is a genuine transition section.
Referring to Fig. 3, a preferred implementation of the invention is as follows:
Step 1. For the video to be segmented (N frames), extract the RGB color histogram of every frame, with 32 bins each for R, G, and B, i.e. 96 feature dimensions, so the histogram matrix has size 96×N. Compute the pairwise similarity of all frames from the histograms, obtaining the similarity matrix (N×N) of the whole clip.
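The feature extraction of step 1 can be sketched as follows. The patent does not name the histogram comparison measure, so cosine similarity is used here as an assumed stand-in; any histogram similarity (e.g. intersection) would fit the same structure.

```python
import numpy as np

def frame_histogram(frame, bins=32):
    """96-dim feature: a 32-bin histogram per R, G, B channel, L1-normalized.
    `frame` is an (H, W, 3) uint8 array."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def similarity_matrix(frames):
    """N x N pairwise frame similarity. Cosine similarity is an assumed
    choice -- the patent only says similarity is computed from the
    histograms, without naming the measure."""
    H = np.stack([frame_histogram(f) for f in frames])      # (N, 96)
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    Hn = H / np.clip(norms, 1e-12, None)                    # unit-length rows
    return Hn @ Hn.T
```

Identical frames then get similarity 1 on and off the diagonal, and frames with disjoint color content get similarity near 0.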
Step 2. First-pass scene segmentation: segment the clip with the following algorithm.
Algorithm:
(1) Put the first frame into the first scene.
(2) For the second frame and each frame after it, compute avg0, the mean of all similarity-matrix values within its preceding scene, and avg1, the mean of the similarity vector between it and all frames of that preceding scene.
(3) If |avg0 − avg1| < TH1 (the consistency threshold), include the current frame in the preceding scene and return to (2); otherwise start a new scene at the current frame and return to (2). Stop at the end of the clip, yielding the first-pass segmentation result.
The similarity computations are as follows. For a scene, let p be its start frame, q its end frame, and k the current frame (the frame immediately after q).
avg0: compare all frames from p to q pairwise; the resulting (q−p+1)×(q−p+1) matrix is the scene's similarity matrix, and the mean of all its values is avg0.
avg1: compare frame k with every frame from p to q; the result is a vector of length q−p+1, and the mean of its values is avg1.
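Given a precomputed N×N similarity matrix S (as numpy array), the first-pass loop above can be sketched as:

```python
import numpy as np

def first_pass_segment(S, th1):
    """First-pass scene segmentation over an N x N similarity matrix S.
    Returns a list of (start, end) inclusive frame-index ranges."""
    n = S.shape[0]
    scenes = [[0, 0]]                      # the first frame opens the first scene
    for k in range(1, n):
        p, q = scenes[-1]
        avg0 = S[p:q + 1, p:q + 1].mean()  # mean similarity inside the preceding scene
        avg1 = S[k, p:q + 1].mean()        # mean similarity of frame k to that scene
        if abs(avg0 - avg1) < th1:
            scenes[-1][1] = k              # consistent: extend the current scene
        else:
            scenes.append([k, k])          # break detected: start a new scene at k
    return [tuple(s) for s in scenes]
```

On a block-diagonal similarity matrix (two internally similar groups of frames), this yields exactly one scene per block.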
Step 3. Merge small scenes: in the segmentation result, merge each run of consecutive scenes with frame counts below TH2 (the merge threshold) into a hypothesized transition section.
Step 4. Verify transition sections and re-segment: analysis shows that the frame-to-frame differences inside a transition section are larger than in an ordinary scene, so compute the standard deviation of all similarity-matrix values within each merged range. If it is below TH3 (the transition-discrimination threshold), the merged scene is judged a pseudo-transition and restored to the small scenes it was merged from; otherwise it is a genuine transition section and the merged state is kept. This yields the new segmentation result.
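Steps 3 and 4 together can be sketched as below, reusing the (start, end) scene ranges and the similarity matrix S from the earlier steps:

```python
import numpy as np

def merge_and_verify(S, scenes, th2, th3):
    """Merge runs of short scenes (< th2 frames) into candidate transition
    sections, then keep each merge only if the std-dev of similarity values
    inside the merged range is at least th3 (a genuine transition varies
    more than an ordinary scene). Otherwise the merge is undone."""
    groups, run = [], []
    for s in scenes:
        if s[1] - s[0] + 1 < th2:
            run.append(s)                       # collect consecutive short scenes
        else:
            if run:
                groups.append(("cand", run))
                run = []
            groups.append(("scene", [s]))
    if run:
        groups.append(("cand", run))

    result = []
    for kind, group in groups:
        if kind == "cand" and len(group) > 1:
            p, q = group[0][0], group[-1][1]
            if S[p:q + 1, p:q + 1].std() >= th3:
                result.append((p, q))           # genuine transition: keep merged
                continue
        result.extend(group)                    # pseudo-transition: restore small scenes
    return result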
Step 5. To obtain better segmentation results, reasonable recommended values are needed for the three thresholds: the consistency threshold TH1, the merge threshold TH2, and the transition-discrimination threshold TH3. They are therefore trained on a large set of videos, as follows:
(1) Training preparation
Obtain the RGB color histograms of all frames of each training clip (50 clips of 2000-6000 frames each), with 32 bins each for R, G, and B, i.e. 96 feature dimensions. Compute the pairwise similarity of the frames from the histograms, obtaining the similarity matrix of each whole clip.
(2) Consistency threshold training
1. Set an initial value, a stop value (this range is chosen manually), and a unit step for the consistency threshold TH1.
2. With TH1 as the threshold, segment the clip using the algorithm of step 2.
3. Compare the segmentation result with the clip's ground-truth cut points. Compute the cost value a against the ground truth (the smaller a, the more of the result can be matched to ground-truth cut points), and compute the coverage rate b, which measures how few ground-truth cut points the result misses (the smaller b, the more ground-truth cut points go unmatched). Finally evaluate the agreement between the result and the ground truth with a measure R (given in the original only as a formula image); the larger R, the better the agreement.
A. Computation of the cost value a:
a starts at 0.
For each cut point in the segmentation result, look for its frame among the ground-truth cut points. If it is found and is an abrupt cut point, a is left unchanged.
If it is found within a transition section (fade or complex), a is increased by a reduced penalty (formula image in the original).
If it cannot be found, a = a + 1.
B. Computation of the coverage rate b:
Introduce a variable a1, analogous to a, initially 0.
For each ground-truth cut point, look for its frame in the segmentation result (whether at an abrupt cut point, a fade transition, or a complex transition); if it is found, a1 is left unchanged.
If it cannot be found, a1 = a1 + 1.
Then b = 1 − a1 / (total number of ground-truth cut points).
4. Vary the value of TH1 so as to optimize the agreement criterion (formula image in the original), i.e. find the TH1 under which the segmentation result agrees best with the ground-truth cut points.
5. Train TH1 once on each of the 50 videos and take the mean of the trained values as the recommended consistency threshold.
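The two training statistics can be sketched as below. The exact reduced penalty for a predicted cut landing inside a transition, and the exact form of R, appear in the patent only as formula images; the 0.5 penalty and the matching rule here are assumed stand-ins.

```python
def cost_and_coverage(pred, truth, transitions):
    """Cost value a and coverage rate b of predicted cut points `pred`
    against ground-truth cut points `truth` (both lists of frame indices).
    `transitions` is a list of (start, end) frame ranges covering the
    fade and complex transition sections of the ground truth.
    The 0.5 penalty is an assumed stand-in for the reduced penalty that
    the patent shows only as a formula image."""
    a = 0.0
    for f in pred:
        if f in truth:
            continue                                  # abrupt-cut match: no cost
        if any(s <= f <= e for s, e in transitions):
            a += 0.5                                  # inside a transition: reduced cost (assumed)
        else:
            a += 1.0                                  # spurious cut point: full cost
    a1 = sum(1 for f in truth if f not in pred)       # ground-truth cuts left uncovered
    b = 1.0 - a1 / len(truth) if truth else 1.0
    return a, b
```

Training a threshold then amounts to sweeping it over its range and keeping the value with the best agreement, e.g. maximizing something like R = b / (a + 1) (again an assumed form, chosen only to reward high coverage and low cost).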
(3) merge the threshold value training
1. the best TH1 (being obtained by the training of (2) consistency threshold value) with experiment fragment self is a threshold value, presses and step 2 identical algorithms divided video fragment.
2. set initial value and stop value (artificially controlling this scope) for merging threshold value TH2, cumulative unit step-length is 1.
3. all merge into a scene to several continuous frame numbers less than the scene of TH2.
4. will merge the result that little scene obtains and the real scene cut-point of video segment compares.With experimental result comparison real scene cut-point calculation cost value a, omit the degree b of real scene cut-point in addition with real scene cut-point control experiment experiment with computing result as a result.Finally with the degree of agreement of
Figure BDA00001961084200055
evaluation experimental result and real scene cut-point.The computational methods of cost value a and coverage rate b are constant.
5. constantly change feasible
Figure BDA00001961084200061
minimum of value of TH2; Promptly under the TH2 threshold value, segmentation result and real scene cut-point are the most identical.
6. 50 videos are all trained TH2 one time, the average of getting them is as merging the threshold value recommended value.
(4) Transition-discrimination threshold training
1. With each clip's own best TH1 (from the consistency threshold training in (2)) as the threshold, segment the clip using the same algorithm as step 2.
2. With each clip's own best TH2 (from the merge threshold training in (3)) as the threshold, merge each run of consecutive scenes with frame counts below TH2 into a single scene.
3. Set an initial value, a stop value (chosen manually), and a unit step for the transition-discrimination threshold TH3.
4. Compute the standard deviation of all similarity-matrix values within each merged range. If it is below TH3, the merged scene is judged a pseudo-transition and restored to the small scenes it was merged from; otherwise it is a genuine transition section and the merged state is kept.
5. Compare the result obtained after this judgment with the clip's ground-truth cut points: compute the cost value a and the coverage rate b as before, and evaluate the agreement with the measure R (formula image in the original).
6. Vary the value of TH3 so as to optimize the agreement criterion, i.e. find the TH3 under which the segmentation result agrees best with the ground-truth cut points.
7. Train TH3 once on each of the 50 videos and take the mean of the trained values as the recommended transition-discrimination threshold.
Step 6. To verify the reliability of the training result, the sample of 50 video clips (2000-6000 frames each) was segmented again using the three recommended values as TH1, TH2, and TH3. The results show that the recommended values segment video scenes effectively.
Step 7. Using the three recommended values as thresholds for subsequent videos, scene segmentation by the method of steps 1 to 4 achieves good segmentation results.

Claims (2)

1. A video scene segmentation method based on color similarity, wherein, for a given video clip, a similarity matrix is obtained as follows: the RGB color histogram of every frame is extracted, the similarity between frames is computed from these histograms, and a similarity matrix covering the whole clip is assembled; and the transitions from one scene to another are classified into three types:
(1) abrupt cut point: one scene switches directly to another, with no transition in between;
(2) fade transition section: in the change from one scene to the next, the previous scene fades out while the next scene fades in;
(3) complex transition section: any transition that does not fall into the two cases above;
the method comprising the following steps:
Step 1. Perform a first-pass scene segmentation of the clip:
(1) put the first frame into the first scene and take the second frame as the current frame;
(2) compute avg0, the mean of all similarity-matrix values within the scene preceding the current frame, and avg1, the mean of the similarity vector between the current frame and all frames of that preceding scene;
(3) if |avg0 − avg1| < the consistency threshold TH1, include the current frame in the preceding scene, take the next frame as the current frame, and return to (2); otherwise start a new scene at the current frame, take the next frame as the current frame, and return to (2); stop at the end of the clip, yielding the segmentation result;
Step 2. Merge small scenes: in the segmentation result, merge each run of consecutive scenes whose frame counts are below the merge threshold TH2;
Step 3. Verify transition sections and re-segment: compute the standard deviation of all similarity-matrix values within each merged scene range; if the standard deviation is below the transition-discrimination threshold TH3, the merged scene is judged a pseudo-transition and restored to the small scenes it was merged from; otherwise it is judged a genuine transition section and the merged state is kept; this yields the new segmentation result.
2. The video scene segmentation method based on color similarity according to claim 1, characterized in that the three thresholds are trained according to the following steps:
(1) obtain a batch of training clips and classify each ground-truth scene cut point of these clips by transition type (abrupt, fade, or complex);
(2) set the initial value, stop value, and unit step for the consistency threshold TH1, and set initial values for the merge threshold TH2 and the transition-discrimination threshold TH3;
(3) for each training clip, perform the first-pass scene segmentation of step 1 to obtain a segmentation result;
(4) compare the segmentation result with the ground-truth cut points of the clip: compute the cost value a and the coverage rate b against the ground truth, and evaluate the agreement between the segmentation and the ground truth with a measure R (given in the original only as a formula image); wherein the cost value a is computed as follows:
a starts at 0; for each cut point in the segmentation result, look for its frame among the ground-truth cut points; if it is found and is an abrupt cut point, a is left unchanged; if it is found within a fade or complex transition section, a is increased by a reduced penalty (formula image in the original); if it cannot be found, a = a + 1;
the coverage rate b is computed as follows:
introduce a variable a1, initially 0; for each ground-truth cut point, look for its frame in the segmentation result; if it is found, a1 is left unchanged, and if not, a1 = a1 + 1; finally compute b = 1 − a1 / (total number of ground-truth cut points);
(5) vary the value of TH1 so as to optimize the agreement criterion (formula image in the original), obtaining a trained TH1 value for each training clip;
(6) take the mean of the trained TH1 values over all training clips as the recommended consistency threshold;
(7) with the recommended consistency threshold from this training as TH1, perform the first-pass scene segmentation of step 1 again;
(8) set the initial value, stop value, and unit step for the merge threshold TH2;
(9) merge each run of consecutive scenes with frame counts below TH2 into a single scene;
(10) compare the result of merging small scenes with the ground-truth cut points of the clip, following the method of step (4);
(11) vary the value of TH2 so as to optimize the agreement criterion (formula image in the original), obtaining a trained TH2 value for each training clip;
(12) take the mean of the trained TH2 values over all training clips as the recommended merge threshold;
(13) with the recommended consistency threshold as TH1 and the recommended merge threshold as TH2, perform the scene segmentation of step 1 and obtain a segmentation result;
(14) compute the standard deviation of all similarity-matrix values within each merged scene range; if the standard deviation is below TH3, the merged scene is judged a pseudo-transition and restored to the small scenes it was merged from; otherwise it is judged a genuine transition section and the merged state is kept;
(15) compare the segmentation result obtained after this judgment with the ground-truth cut points of the clip, following the method of step (4);
(16) vary the value of TH3 so as to optimize the agreement criterion, obtaining a trained TH3 value for each training clip;
(17) take the mean of the trained TH3 values over all training clips as the recommended transition-discrimination threshold.
CN201210273694.7A 2012-08-01 2012-08-01 Video scene segmentation method based on color similarity Active CN102833492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210273694.7A CN102833492B (en) 2012-08-01 2012-08-01 Video scene segmentation method based on color similarity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210273694.7A CN102833492B (en) 2012-08-01 2012-08-01 Video scene segmentation method based on color similarity

Publications (2)

Publication Number Publication Date
CN102833492A true CN102833492A (en) 2012-12-19
CN102833492B CN102833492B (en) 2016-12-21

Family

ID=47336437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210273694.7A Active CN102833492B (en) 2012-08-01 2012-08-01 Video scene segmentation method based on color similarity

Country Status (1)

Country Link
CN (1) CN102833492B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101132528A (en) * 2002-04-12 2008-02-27 三菱电机株式会社 Metadata reproduction apparatus, metadata delivery apparatus, metadata search apparatus, metadata re-generation condition setting apparatus
CN101719144A (en) * 2009-11-04 2010-06-02 中国科学院声学研究所 Method for segmenting and indexing scenes by combining captions and video image information
US20100149424A1 (en) * 2008-12-15 2010-06-17 Electronics And Telecommunications Research Institute System and method for detecting scene change
CN101753853A (en) * 2009-05-13 2010-06-23 中国科学院自动化研究所 Fusion method for video scene segmentation
KR20110032610A (en) * 2009-09-23 2011-03-30 삼성전자주식회사 Apparatus and method for scene segmentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
印勇等 (Yin Yong et al.): "Video scene segmentation based on dominant color tracking and centroid motion", 《计算机应用研究》 (Application Research of Computers) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533255A (en) * 2013-10-28 2014-01-22 东南大学 Motion displacement curve simplification based automatic segmentation method for video scenes
CN103533255B (en) * 2013-10-28 2016-06-29 Automatic video scene segmentation method based on simplified motion displacement curves
CN104394422A (en) * 2014-11-12 2015-03-04 华为软件技术有限公司 Video segmentation point acquisition method and device
CN104394422B (en) * 2014-11-12 2017-11-17 Video segmentation point acquisition method and device
WO2019080685A1 (en) * 2017-10-24 2019-05-02 北京京东尚科信息技术有限公司 Video image segmentation method and apparatus, storage medium and electronic device
US11227393B2 (en) 2017-10-24 2022-01-18 Beijing Jingdong Shangke Information Technology Co., Ltd. Video image segmentation method and apparatus, storage medium and electronic device
CN108509917A (en) * 2018-03-30 2018-09-07 Video scene segmentation method and device based on shot cluster correlation analysis
CN108509917B (en) * 2018-03-30 2020-03-03 Video scene segmentation method and device based on shot class correlation analysis
CN108647641A (en) * 2018-05-10 2018-10-12 Video behavior segmentation method and device based on two-way model fusion
CN109740499A (en) * 2018-12-28 2019-05-10 北京旷视科技有限公司 Methods of video segmentation, video actions recognition methods, device, equipment and medium
CN110248182A (en) * 2019-05-31 2019-09-17 Shot detection method for scene segments
CN112511719A (en) * 2020-11-10 2021-03-16 陕西师范大学 Method for judging screen content video motion type
CN113160273A (en) * 2021-03-25 2021-07-23 常州工学院 Intelligent monitoring video segmentation method based on multi-target tracking

Also Published As

Publication number Publication date
CN102833492B (en) 2016-12-21

Similar Documents

Publication Publication Date Title
CN102833492A (en) Color similarity-based video scene segmenting method
CN107330920B (en) Monitoring video multi-target tracking method based on deep learning
CN111327945B (en) Method and apparatus for segmenting video
Payne et al. Indoor vs. outdoor scene classification in digital photographs
CN102800095B (en) Shot boundary detection method
WO2019114036A1 (en) Face detection method and device, computer device, and computer readable storage medium
WO2016066038A1 (en) Image body extracting method and system
CN106937114B (en) Method and device for detecting video scene switching
CN104978567B (en) Vehicle checking method based on scene classification
CN106991370B (en) Pedestrian retrieval method based on color and depth
CN103198705B (en) Automatic parking space state detection method
CN103065325B (en) Target tracking method based on color distance and image segmentation aggregation
CN109145708A (en) Pedestrian flow statistics method based on the fusion of RGB and depth information
CN102098449B (en) Method for automatic internal segmentation of TV programmes using mark detection
CN110267101B (en) Unmanned aerial vehicle aerial video automatic frame extraction method based on rapid three-dimensional jigsaw
CN103886609B (en) Vehicle tracking method based on particle filtering and LBP features
CN102567738B (en) Rapid detection method for pornographic videos based on Gaussian distribution
CN112733703A (en) Vehicle parking state detection method and system
CN109242776B (en) Double-lane line detection method based on visual system
Lu et al. Generating fluent tubes in video synopsis
Shan et al. A small traffic sign detection algorithm based on modified ssd
CN108010044A (en) Video boundary detection method
CN111783910A (en) Building project management method, electronic equipment and related products
CN109165542A (en) Pedestrian detection method based on simplified convolutional neural networks
CN110322479B (en) Dual-core KCF target tracking method based on space-time significance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant