CN100594713C - A method and system for generating video outline - Google Patents

A method and system for generating a video outline

Info

Publication number
CN100594713C
CN100594713C CN200810104584A
Authority
CN
China
Prior art keywords
moving object
video
cost
outline
calculate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200810104584A
Other languages
Chinese (zh)
Other versions
CN101262568A (en)
Inventor
陈益强
黄强
纪雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN200810104584A priority Critical patent/CN100594713C/en
Publication of CN101262568A publication Critical patent/CN101262568A/en
Application granted
Publication of CN100594713C publication Critical patent/CN100594713C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a method for generating a video outline, comprising the following steps: performing moving-object detection on the video frames to obtain the moving objects and the background; tracking the moving objects and generating their trajectories; calculating a video outline cost based on findings from cognitive psychology and rearranging the moving-object trajectories accordingly; and fusing the background with the rearranged trajectories to produce the video outline. The method prevents the moving objects in the output video outline from becoming cluttered, making the outline comfortable for human observation and helping users pick out the effective information of the original video; it further avoids collisions between moving objects during rearrangement and preserves, as far as possible, the original temporal order of the objects.

Description

A method and system for generating a video outline
Technical field
The present invention relates to the technical field of image processing, and in particular to the field of video summarization.
Background technology
Video summarization is a concept extended from text summarization. Just as a text summary is a short digest of an article or passage, a video summary is a concise digest obtained by processing a long video with relevant techniques: the structure and content of the video are analyzed automatically or semi-automatically, the meaningful parts of the original video are extracted, and these parts are then combined in some way into a compact summary that still adequately conveys the video's content. Such a summary may take the form of text, a few static pictures, or a video shorter than the original; most current video summarization techniques use the last form.
Traditional video summarization techniques fall into two broad classes according to the form of the summary. A static video summary (Video Summary) presents the content of the original video statically, for example as titles, key frames, slides, or a scene transition graph (STG). A dynamic video summary, also called video skimming (Video Skimming), presents the content as a dynamic video sequence: it is itself a video shorter than the original, sometimes retaining the original audio. A static summary usually considers only visual information and represents a video with a single static frame or text. Its advantage is simplicity, and most existing commercial systems adopt it, as when a large video website attaches one picture from each video to label it. But a static summary largely loses the expressiveness of video and does not match users' perceptual habits. Video skimming remedies this shortcoming to some extent, but skimming algorithms are complex and hard to automate, and manual production is very expensive; at present skimming is mostly used for trailer production in the professional film and television industry.
Traditional video summarization ignores a very important aspect: the combination of the spatial and temporal domains. By definition, the purpose of a video summary is to present as much of the important content as possible within a limited extent of space and time. Yet researchers have mostly focused on how to select video frames along the time axis and line them up for the user, naturally treating a frame as the smallest indivisible video unit. This overlooks the fact that a video is itself a three-dimensional stream composed of two spatial dimensions and one temporal dimension, and that video information is distributed unevenly in this three-dimensional space. Current video summarization research addresses only the non-uniformity of information along the time axis, ignoring that video information is also non-uniformly distributed over the spatial domain.
To address these shortcomings, Alex Rav-Acha et al. proposed the video outline (Video Synopsis) method. It treats a video as a unified three-dimensional volume over time and space, considering not only the non-uniformity of information along the time dimension but also the uneven spatial distribution of information. The algorithm first analyzes the effective information in the video, namely the moving objects, and extracts them; it then rearranges and recombines these objects over a reconstructed background, making full use of the space of each frame, and fuses them into a video outline. The generated outline is much shorter than the original video yet contains nearly all of its effective information. The flow of the video outline algorithm is shown in Fig. 1.
First, moving-object detection is performed (101 in Fig. 1): each pixel of each frame is classified as belonging to a moving object or to the background.
Next, the moving objects are tracked and their trajectories generated (102 in Fig. 1). In this step the moving objects are segmented, and each object is tracked to produce its own motion trajectory.
After the trajectories are generated, the moving objects are rearranged (103 in Fig. 1), i.e. all moving-object trajectories are repositioned on the time axis. Rearranging the moving objects is essentially finding a mapping M in the time domain that maps the temporal position of each object in the original video frames to another temporal position in the output video; since the video outline keeps the objects spatially unchanged, the mapping acts only on the time axis. The algorithm of this module is as follows.
Let B be the set of all moving objects and b one of them. The position of b in the original video is t_b = [t_b^s, t_b^e], and the mapping M sends it to its position t̂_b = [t̂_b^s, t̂_b^e] in the output video, where t_b^s and t_b^e denote the start and end positions of the object in the original video, and t̂_b^s and t̂_b^e the start and end positions in the output video. If a moving object does not appear in the final output video, then t̂_b = ∅.
To find a good mapping M, the following energy loss function is defined; the M that minimizes formula (1) is the mapping achieving the best effect.

E(M) = Σ_{b∈B} E_a(b̂) + α Σ_{b,b'∈B} E_c(b̂, b̂')    (1)
where E_a is the activity loss, defined as:

E_a(b̂) = Σ_{(x,y,t)} χ_b(x, y, t) if object b does not appear in the output video, and 0 otherwise    (2)

Here χ_b(x, y, t) is the characteristic function of moving object b, i.e. the number of pixels that b occupies. Formula (2) means that if a moving object does not appear in the output video of the outline, its characteristic value is added to the activity loss.
E_c is the collision loss, defined as the spatio-temporal overlap of any two moving objects in the output video:

E_c(b̂, b̂') = Σ_{t ∈ t̂_b ∩ t̂_b'} χ_b̂(x, y, t) χ_b̂'(x, y, t)    (3)

where t̂_b ∩ t̂_b' is the interval during which moving objects b and b' appear simultaneously in the output video. Formula (3) controls, to some extent, the density of moving objects in the output video.
Finally, the background obtained from moving-object detection is fused with the rearranged moving-object trajectories to generate the video outline (104 in Fig. 1).
Although this method considers the distribution of information over the three spatio-temporal dimensions, its final result is not ideal: the moving objects crowd together in the output outline. The length of the video is greatly reduced, but each frame then carries too much information for an observer to process, so the amount of information per frame must be reduced appropriately.
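The baseline formulation of formulas (1)-(3) can be sketched as follows. This is a minimal illustration, not the patent's or Rav-Acha's implementation: objects are represented as sets of (x, y, t) pixels, the mapping M is a time shift per object (with None meaning the object is excluded), and all function names are illustrative.

```python
# Minimal sketch of the baseline synopsis energy
# E(M) = sum_b E_a(b) + alpha * sum_{b,b'} E_c(b, b').

def activity_cost(pixels, included):
    # E_a: an excluded object loses all of its activity (its pixel count).
    return 0 if included else len(pixels)

def collision_cost(pixels_a, shift_a, pixels_b, shift_b):
    # E_c: spatio-temporal overlap of two time-shifted objects.
    shifted_a = {(x, y, t + shift_a) for x, y, t in pixels_a}
    shifted_b = {(x, y, t + shift_b) for x, y, t in pixels_b}
    return len(shifted_a & shifted_b)

def synopsis_energy(objects, shifts, alpha=1.0):
    # objects: list of pixel sets; shifts: time shift per object (None = excluded).
    e = sum(activity_cost(p, s is not None) for p, s in zip(objects, shifts))
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if shifts[i] is not None and shifts[j] is not None:
                e += alpha * collision_cost(objects[i], shifts[i],
                                            objects[j], shifts[j])
    return e
```

For example, shifting a later object backward so that it lands on the same pixels as an earlier one incurs a collision term, while excluding an object incurs its activity term.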
Summary of the invention
The object of the invention is to solve the problem in the prior art that a single frame of the video outline carries too much information and the density of moving objects is too high, and thereby to provide a method and system for generating a video outline whose output suits the way the human eye observes.
To achieve this goal, according to one aspect of the present invention, a method for generating a video outline is provided, comprising the following steps:
1) performing moving-object detection on the video frames to obtain the moving objects and the background;
2) tracking the moving objects and generating their trajectories;
3) computing the density cost E_d(p) from the number density cost E_dn(p) and the object direction density cost E_dd(p) by the formula E_d(p) = E_dn(p) + E_dd(p), where p is a video frame, E_dn(p) = |N(p) − C_n|, N(p) is the number of moving objects in frame p, C_n is the visual capacity for the number of moving objects, the object direction density cost is E_dd(p) = |Δ(p) − C_d|, Δ(p) is the number of motion directions of all objects in frame p, and C_d is the visual capacity for motion directions;
4) computing the change cost E_r(p) = Σ_{b∈p} R_p(b), where b is a moving object and R_p(b) (given as an image formula in the original) indicates whether b is newly appearing in frame p;
5) computing the video outline cost E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)|, where α and β are real numbers of the same sign that are not both zero and P is the set of generated video frames, and rearranging the trajectories of the moving objects according to said video outline cost;
6) fusing the background with the rearranged moving-object trajectories to generate the video outline.
According to a further aspect of the invention, the above method further comprises, after step 4), a step a) of computing the collision cost E_c(p) = Σ_{b,b'∈p} C_p(b, b'), where b' is a moving object and C_p(b, b') (given as an image formula in the original) measures the overlap of moving objects b and b' in frame p; and, in step 5), computing the video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |λ Σ_{p∈P} E_c(p)|, where λ is a non-zero real number of the same sign as α and β.
According to a further aspect of the invention, the above method further comprises, after step 4), a step b) of computing the non-continuity cost E_t(p) = Σ_{b,b'∈p} D_p(b, b'), where b' is a moving object, D_p(b, b') (given as an image formula in the original) penalizes pairs of objects whose temporal order is reversed, t_b^s and t_b'^s are respectively the start times of moving objects b and b' in the video frames, and t̂_b^s and t̂_b'^s are respectively their start times in the video outline; and, in step 5), computing the video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |δ Σ_{p∈P} E_t(p)|, where δ is a non-zero real number of the same sign as α and β.
According to a further aspect of the invention, the above method comprises, after step a), a step b) of computing the non-continuity cost E_t(p) = Σ_{b,b'∈p} D_p(b, b'), where b' is a moving object, t_b^s and t_b'^s are respectively the start times of moving objects b and b' in the video frames, and t̂_b^s and t̂_b'^s are respectively their start times in the video outline; and, in step 5), computing said video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |λ Σ_{p∈P} E_c(p)| + |δ Σ_{p∈P} E_t(p)|, where said δ is a non-zero real number of the same sign as α and β.
According to a further aspect of the invention, the values of α, β, δ and λ in the above method lie in the range [1, 5].
According to a further aspect of the invention, C_n in the above method is 4 or 5.
According to a further aspect of the invention, C_d in the above method is 3.
According to another aspect of the invention, the present invention also provides a system for generating a video outline, comprising:
a moving-object detection module, for performing moving-object detection to obtain the moving objects and the background;
a moving-object tracking and trajectory generation module, for tracking the moving objects and generating their trajectories;
a moving-object rearrangement module, for computing the density cost E_d(p) from the number density cost E_dn(p) and the object direction density cost E_dd(p) by the formula E_d(p) = E_dn(p) + E_dd(p), where p is a video frame, E_dn(p) = |N(p) − C_n|, N(p) is the number of moving objects in frame p, C_n is the visual capacity for the number of moving objects, E_dd(p) = |Δ(p) − C_d|, Δ(p) is the number of motion directions of all objects in frame p, and C_d is the visual capacity for motion directions; for computing the change cost E_r(p) = Σ_{b∈p} R_p(b), where b is a moving object and R_p(b) (given as an image formula in the original) indicates whether b is newly appearing in frame p; for computing the video outline cost E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)|, where α and β are real numbers of the same sign that are not both zero, p is a video frame and P is the set of video frames; and for rearranging the trajectories of the moving objects according to said video outline cost; and
a video outline generation module, for fusing the background with the rearranged moving-object trajectories to generate the video outline.
According to another aspect of the invention, the moving-object rearrangement module of the above system is also used for computing the collision cost E_c(p) = Σ_{b,b'∈p} C_p(b, b'), where b and b' are moving objects; for computing the non-continuity cost E_t(p) = Σ_{b,b'∈p} D_p(b, b'), where t_b^s and t_b'^s are respectively the start times of moving objects b and b' in the video frames, and t̂_b^s and t̂_b'^s are respectively their start times in the video outline; and for computing the video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |λ Σ_{p∈P} E_c(p)| + |δ Σ_{p∈P} E_t(p)|, where δ and λ are non-zero real numbers of the same sign as α.
Through the above embodiments, the present invention makes full use of recent research results of cognitive psychology on visual theory. By including factors such as the density cost and the change cost in the video outline cost, it prevents the moving objects in the output outline from becoming cluttered and disorganized, so that the outline suits human observation and lets the user conveniently pick out the effective information of the original video; during rearrangement it further avoids collisions between moving objects and preserves, as far as possible, the original temporal order of the objects.
Description of drawings
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of the method for generating a video outline.
Fig. 2 is a schematic diagram of the division of motion directions.
Embodiment
The human eye's response to information falls within the research scope of cognitive psychology. Cognitive psychology holds that humans have a Visual Working Memory: the short-term storage of non-verbal visual information, a temporary store before visual information undergoes further processing. Research on visual working memory focuses mainly on how visual object information is stored in working memory and on its capacity. Experiments show that, limited by visual capacity, a person can track only 4-5 target items and store the motion direction information of about 3 objects. The present invention therefore combines these findings to propose a method and system for generating a video outline that makes the outline as short as possible while remaining comfortable for the observer to watch.
Like the prior art, the present invention comprises four main steps: moving-object detection; moving-object tracking and trajectory generation; moving-object rearrangement; and video outline generation. Moving-object detection, tracking and trajectory generation, and the fusion of foreground and background to generate the outline all have extensive existing research, and specific implementations may refer to those results. The present invention fully considers the human eye's perception of moving objects, so the improved moving-object rearrangement is mainly described in detail below. The input of this module is a set of moving objects; each object carries the following parameters: its position in the original video, represented by its start frame and end frame, and the pixels it occupies in each frame. The output of the module is the position of each moving object in the new video, likewise represented by a start frame and an end frame.
Suppose B is the set of all moving objects, B = {b_1, b_2, …, b_n}, where b_i denotes a moving object in B. Let b be a moving object in B whose position in the original video is t_b. Rearranging the moving objects can be viewed as finding a mapping M in the time domain that maps each object's t_b to a position t̂_b in the generated video.
According to a preferred embodiment of the present invention, the mapping M that minimizes the following video outline cost is the one sought.

E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |λ Σ_{p∈P} E_c(p)| + |δ Σ_{p∈P} E_t(p)|    (4)
where P = {p_1, p_2, …, p_m} is the set of frames of the finally generated video; E_d is the density cost, E_r the change cost, E_c the collision cost, and E_t the non-continuity cost; α, β, δ and λ are parameters that the user can set based on experiment to adjust the relative importance of the costs; α, β, δ and λ have the same sign, and α and β are not both zero. The outline cost of this preferred embodiment combines four factors: the density cost, the change cost, the collision cost and the non-continuity cost. Those skilled in the art will understand, however, that computing the outline with only the density cost and the change cost, or with either the collision cost or the non-continuity cost added on top of those two, also realizes the advantages of the invention.
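Formula (4) is a weighted combination of four per-frame costs. A minimal sketch, assuming the four cost functions are supplied by the caller (the function name and signature are illustrative, not from the patent):

```python
# Sketch of the overall outline cost of formula (4):
# E(M) = |a*sum E_d| + |b*sum E_r| + |l*sum E_c| + |d*sum E_t|.

def outline_cost(frames, e_d, e_r, e_c, e_t,
                 alpha=1.0, beta=1.0, lam=1.0, delta=1.0):
    # frames: iterable of frame indices; e_* : per-frame cost functions.
    return (abs(alpha * sum(e_d(p) for p in frames))
            + abs(beta * sum(e_r(p) for p in frames))
            + abs(lam * sum(e_c(p) for p in frames))
            + abs(delta * sum(e_t(p) for p in frames)))
```

Because the weights share a sign and each term is taken in absolute value, every cost can only increase E(M), matching the minimization in the text.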
Each cost function is described below in turn.
Cognitive psychology tells us that in the finally generated video the density of moving objects must not be too high; otherwise the visual capacity of the human eye is exceeded and the observer cannot watch all the objects. Nor should the density be too low: too low a density makes the moving objects in the generated video sparse and fails to shorten the video. The density cost therefore controls the density of moving objects in the output frames. According to one embodiment of the invention, the density can be split into two parts, the density of the object count and the density of the object motion directions, and the density cost computed as follows. Those skilled in the art will understand that the following computation is not the only one possible; the density cost disclosed by the invention can achieve its effect when computed in other ways.
E_d(p) = E_dn(p) + E_dd(p)    (5)

where E_dn is the object number density cost,

E_dn(p) = |N(p) − C_n|    (6)

N(p) is the number of moving objects in frame p, and C_n is the visual capacity for the number of moving objects, preferably 4 or 5;

E_dd is the object direction density cost,

E_dd(p) = |Δ(p) − C_d|    (7)

where Δ(p) is the number of motion directions of all objects in frame p, and C_d is the visual capacity for motion directions, preferably 3.
The visual field is divided into eight directions of 45 degrees each, as shown in Fig. 2; the motion direction of every object falls into one of these eight directions.
Because of change blindness, the observer is very sensitive to the appearance of new objects, so the rearrangement should minimize the chance that more than one new object appears at the same moment. The video outline cost of the present invention therefore includes a change cost. According to one embodiment of the invention, the change cost can be computed by the formula below. Those skilled in the art will understand that this is not the only way to compute it; the change cost disclosed by the invention can serve its function when computed in other ways. Preferably, since the human eye needs about 500 milliseconds of memory time, the frame in which a new object appears and the frames within 500 milliseconds after it are all counted as the time at which the new object appears.

E_r(p) = Σ_{b∈p} R_p(b)    (8)

where R_p(b) (given as an image formula in the original) indicates whether moving object b is newly appearing in frame p, counting the 500 milliseconds after its first frame as part of its appearance.
In the finally generated video, collisions should be minimized in every frame, i.e. two overlapping objects should appear as rarely as possible; the collision cost function measures the impact of the remaining collisions. The present invention computes the collision cost as shown below, but those skilled in the art will understand that this is not the only way; the collision cost disclosed by the invention can serve its function when computed in other ways, for example by the method described in the background art.

E_c(p) = Σ_{b,b'∈p} C_p(b, b')    (10)

where C_p(b, b') (formula (11), given as an image in the original) measures the overlap of moving objects b and b' in frame p.
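A minimal sketch of the per-frame collision cost E_c(p) of formula (10). Since C_p(b, b') appears only as an image in the original, this reading of it as the number of pixels two objects occupy in common (consistent with the overlap measure of formula (3) in the background art) is an assumption.

```python
# Sketch of E_c(p): sum pairwise pixel overlaps of the objects in frame p.

def frame_collision_cost(objects_in_frame):
    # objects_in_frame: list of sets of (x, y) pixels, one set per object.
    total = 0
    for i in range(len(objects_in_frame)):
        for j in range(i + 1, len(objects_in_frame)):
            total += len(objects_in_frame[i] & objects_in_frame[j])
    return total
```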
The generated video should also preserve the temporal order of the objects. If object a appears before object b in the original video and also appears before b in the finally generated video, we say that a and b preserve their temporal order; otherwise they do not. The non-continuity cost function therefore measures the non-continuity in the generated video. According to one embodiment of the invention, the non-continuity cost can be computed by the following formula, but those skilled in the art will understand that this is not the only way; the non-continuity cost disclosed by the invention can serve its function when computed in other ways.

E_t(p) = Σ_{b,b'∈p} D_p(b, b')    (12)

where D_p(b, b') (formula (13), given as an image in the original) compares the order of the start times of b and b' in the original video and in the outline, and sign(a) denotes the sign of a.
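A sketch of the per-frame non-continuity cost E_t(p) of formula (12). Since D_p(b, b') appears only as an image in the original, the reading below — a pair is penalized when the sign of the difference of start times flips between the original video and the outline — is an assumption consistent with the use of sign(a) in the text.

```python
# Sketch of E_t(p): count object pairs whose temporal order was reversed.

def sign(a):
    return (a > 0) - (a < 0)

def frame_discontinuity_cost(starts_orig, starts_out):
    # starts_orig / starts_out: start frame of each object in the original
    # video and in the generated outline, index-aligned.
    total = 0
    n = len(starts_orig)
    for i in range(n):
        for j in range(i + 1, n):
            if sign(starts_orig[i] - starts_orig[j]) != sign(starts_out[i] - starts_out[j]):
                total += 1
    return total
```

For instance, if two objects keep their relative order after rearrangement the pair contributes nothing; swapping them contributes 1.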
To those skilled in the art, the present invention can be realized as a video outline system comprising the following parts:
a moving-object detection module, for detecting the moving objects;
a moving-object tracking and trajectory generation module, for segmenting the moving objects, tracking each of them, and generating their individual motion trajectories;
a moving-object rearrangement module, for computing the video outline cost by the above method and rearranging the moving-object trajectories accordingly;
a video outline generation module, for fusing the background with the rearranged moving-object trajectories to generate the video outline.
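The rearrangement module must search for a mapping M that minimizes the outline cost; the patent does not fix a particular optimizer. As an illustration only, a simple random-sampling search over per-object start offsets (all names and the sampling strategy are assumptions, not the patent's method):

```python
import random

# Illustrative search for the time mapping M: one start offset per object,
# chosen to minimize a caller-supplied outline-cost function.

def rearrange(num_objects, max_start, cost_fn, iters=200, seed=0):
    rng = random.Random(seed)                 # deterministic for repeatability
    best = [0] * num_objects                  # initial mapping: no shift
    best_cost = cost_fn(best)
    for _ in range(iters):
        cand = [rng.randint(0, max_start) for _ in range(num_objects)]
        c = cost_fn(cand)
        if c < best_cost:                     # keep only improvements
            best, best_cost = cand, c
    return best, best_cost
```

In practice a more structured optimizer (e.g. simulated annealing, as used in related synopsis work) would replace the random sampler, but the interface — a cost function over candidate mappings — stays the same.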
Should be noted that and understand, under the situation that does not break away from the desired the spirit and scope of the present invention of accompanying Claim, can make various modifications and improvement the present invention of foregoing detailed description.Therefore, the scope of claimed technical scheme is not subjected to the restriction of given any specific exemplary teachings.

Claims (9)

1. A method for generating a video outline, comprising the following steps:
1) performing moving-object detection on the video frames to obtain the moving objects and the background;
2) tracking the moving objects and generating their trajectories;
3) computing the density cost E_d(p) from the number density cost E_dn(p) and the object direction density cost E_dd(p) by the formula E_d(p) = E_dn(p) + E_dd(p), where p is a video frame, E_dn(p) = |N(p) − C_n|, N(p) is the number of moving objects in frame p, C_n is the visual capacity for the number of moving objects, the object direction density cost is E_dd(p) = |Δ(p) − C_d|, Δ(p) is the number of motion directions of all objects in frame p, and C_d is the visual capacity for motion directions;
4) computing the change cost E_r(p) = Σ_{b∈p} R_p(b), where b is a moving object and R_p(b) (given as an image formula in the original) indicates whether b is newly appearing in frame p;
5) computing the video outline cost E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)|, where α and β are real numbers of the same sign that are not both zero and P is the set of generated video frames, and rearranging the trajectories of the moving objects according to said video outline cost;
6) fusing the background with the rearranged moving-object trajectories to generate the video outline.
2. The method according to claim 1, characterized in that it further comprises, after said step 4), a step a) of computing the collision cost E_c(p) = Σ_{b,b'∈p} C_p(b, b'), where b' is a moving object; and, in said step 5), computing said video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |λ Σ_{p∈P} E_c(p)|, where said λ is a non-zero real number of the same sign as α and β.
3. The method according to claim 1, characterized in that it further comprises, after said step 4), a step b) of computing the non-continuity cost E_t(p) = Σ_{b,b'∈p} D_p(b, b'), where b' is a moving object, t_b^s and t_b'^s are respectively the start times of moving objects b and b' in said video frames, and t̂_b^s and t̂_b'^s are respectively their start times in said video outline; and, in said step 5), computing said video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |δ Σ_{p∈P} E_t(p)|, where said δ is a non-zero real number of the same sign as α and β.
4. The method according to claim 2, characterized in that it comprises, after said step a), a step b) of computing the non-continuity cost E_t(p) = Σ_{b,b'∈p} D_p(b, b'), where b' is a moving object, t_b^s and t_b'^s are respectively the start times of moving objects b and b' in said video frames, and t̂_b^s and t̂_b'^s are respectively their start times in said video outline; and, in said step 5), computing said video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |λ Σ_{p∈P} E_c(p)| + |δ Σ_{p∈P} E_t(p)|, where said δ is a non-zero real number of the same sign as α and β.
5. The method according to claim 4, characterized in that the values of said α, β, δ and λ lie in the range [1, 5].
6. The method according to claim 1, characterized in that said C_n is 4 or 5.
7. The method according to claim 1, characterized in that said C_d is 3.
8. A system for generating a video outline, comprising:
a moving-object detection module, for performing moving-object detection to obtain the moving objects and the background;
a moving-object tracking and trajectory generation module, for tracking the moving objects and generating their trajectories;
a moving-object rearrangement module, for:
computing the density cost E_d(p) from the number density cost E_dn(p) and the object direction density cost E_dd(p) by the formula E_d(p) = E_dn(p) + E_dd(p), where p is a video frame, E_dn(p) = |N(p) − C_n|, N(p) is the number of moving objects in frame p, C_n is the visual capacity for the number of moving objects, the object direction density cost is E_dd(p) = |Δ(p) − C_d|, Δ(p) is the number of motion directions of all objects in frame p, and C_d is the visual capacity for motion directions;
computing the change cost E_r(p) = Σ_{b∈p} R_p(b), where b is a moving object and R_p(b) (given as an image formula in the original) indicates whether b is newly appearing in frame p;
computing the video outline cost E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)|, where α and β are real numbers of the same sign that are not both zero, p is a video frame and P is the set of video frames; and rearranging the trajectories of the moving objects according to said video outline cost; and
a video outline generation module, for fusing the background with the rearranged moving-object trajectories to generate the video outline.
9. The system according to claim 8, wherein said moving-object rearrangement module is also used for:
computing the collision cost E_c(p) = Σ_{b,b'∈p} C_p(b, b'), where b and b' are moving objects;
computing the non-continuity cost E_t(p) = Σ_{b,b'∈p} D_p(b, b'), where t_b^s and t_b'^s are respectively the start times of moving objects b and b' in said video frames, and t̂_b^s and t̂_b'^s are respectively their start times in said video outline; and
computing said video outline cost as E(M) = |α Σ_{p∈P} E_d(p)| + |β Σ_{p∈P} E_r(p)| + |λ Σ_{p∈P} E_c(p)| + |δ Σ_{p∈P} E_t(p)|, where δ and λ are non-zero real numbers of the same sign as α.
CN200810104584A 2008-04-21 2008-04-21 A method and system for generating video outline Active CN100594713C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810104584A CN100594713C (en) 2008-04-21 2008-04-21 A method and system for generating video outline

Publications (2)

Publication Number Publication Date
CN101262568A CN101262568A (en) 2008-09-10
CN100594713C true CN100594713C (en) 2010-03-17

Family

ID=39962739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810104584A Active CN100594713C (en) 2008-04-21 2008-04-21 A method and system for generating video outline

Country Status (1)

Country Link
CN (1) CN100594713C (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375816B (en) * 2010-08-10 2016-04-20 中国科学院自动化研究所 A kind of Online Video enrichment facility, system and method
CN102156707A (en) * 2011-02-01 2011-08-17 刘中华 Video abstract forming and searching method and system
US8643746B2 (en) * 2011-05-18 2014-02-04 Intellectual Ventures Fund 83 Llc Video summary including a particular person
CN102339625B (en) * 2011-09-20 2014-07-30 清华大学 Video object level time domain editing method and system
CN102495907B (en) * 2011-12-23 2013-07-03 香港应用科技研究院有限公司 Video summary with depth information
CN102708182B (en) * 2012-05-08 2014-07-02 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method
CN103888768B (en) * 2012-12-21 2016-02-10 浙江大华技术股份有限公司 A kind of method for concentration of video image frame sequence and device
CN103096185B (en) * 2012-12-30 2016-04-20 信帧电子技术(北京)有限公司 A kind of video abstraction generating method and device
CN103092925B (en) * 2012-12-30 2016-02-17 信帧电子技术(北京)有限公司 A kind of video abstraction generating method and device
CN103227963A (en) * 2013-03-20 2013-07-31 西交利物浦大学 Static surveillance video abstraction method based on video moving target detection and tracing
CN105898313A (en) * 2014-12-15 2016-08-24 江南大学 Novel video synopsis-based monitoring video scalable video coding technology
CN106408080B (en) * 2015-07-31 2019-01-01 富士通株式会社 The counting device and method of moving object
CN107493441B (en) * 2016-06-12 2020-03-06 杭州海康威视数字技术股份有限公司 Abstract video generation method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1365574A (en) * 1999-06-18 2002-08-21 艾利森电话股份有限公司 A method and a system for generating summarized radio
CN1465191A (en) * 2001-04-27 2003-12-31 三菱电机株式会社 Method for summarizing a video using motion descriptors
US6697523B1 (en) * 2000-08-09 2004-02-24 Mitsubishi Electric Research Laboratories, Inc. Method for summarizing a video using motion and color descriptors




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant