US20070101266A1 - Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing - Google Patents

Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Info

Publication number
US20070101266A1
Authority
US
United States
Prior art keywords
video
describing
video summary
interval
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/614,406
Inventor
Jae Gon Kim
Hyun Sung Chang
Munchurl Kim
Jin Woong Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics and Telecommunications Research Institute ETRI
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR10-2000-0055781A (see also KR100371813B1)
Application filed by Electronics and Telecommunications Research Institute ETRI
Priority to US11/614,406
Publication of US20070101266A1
Status: Abandoned

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102 - Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105 - Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 - Indicating arrangements


Abstract

A video summary description scheme describes video summary intervals with metadata that provides an overview functionality, making it feasible to understand the overall contents of the original video within a short time, as well as navigation and browsing functionalities, making it feasible to search for desired video contents efficiently. A Video Summary DS includes at least one HighlightSegment DS that describes information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS describing the highlight segment and an ImageLocator DS describing a representative frame of the highlight segment.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a video summary description scheme for efficient video overview and browsing, and also relates to a method and system of video summary description data generation for describing a video summary according to the video summary description scheme.
  • The technical fields involved are content-based video indexing, browsing/searching, and summarizing a video according to its content and then describing that summary.
  • 2. Description of the Related Art
  • Video summaries largely fall into two formats: dynamic summary and static summary. The video description scheme according to the embodiments of the present invention efficiently describes both the dynamic summary and the static summary within a unified description scheme.
  • Generally, because existing video summaries and description schemes provide only the information on the video intervals included in the video summary, they are limited to conveying the overall video contents through playback of the video summary.
  • However, in many cases, what is needed is not merely an overview of the overall contents through the video summary, but also browsing that identifies and revisits the parts of interest found during that overview.
  • Also, an existing video summary provides only the video intervals considered important according to criteria determined by the video summary provider. Accordingly, if the criteria of the users and the provider differ, or the users have special criteria, the users cannot obtain the video summary they desire.
  • That is, although an existing video summary may let users select a summary at a desired level by providing several summary levels, the extent of the users' selection is limited, so the users cannot make selections based on the contents of the video summary.
  • U.S. Pat. No. 5,821,945, entitled "Method and apparatus for video browsing based on content and structure," represents video in a compact form and provides a browsing functionality for accessing the video with the desired content through that representation.
  • However, that patent pertains to a static summary based on representative frames: the static summary is built from the representative frame of each video shot, and the representative frame provides only visual information representing the shot. The patent is therefore limited in the information its summary scheme can convey.
  • In contrast, the video description scheme and browsing method of the embodiments described herein utilize a dynamic summary based on video segments.
  • A video summary description scheme was also proposed in the MPEG-7 Description Scheme (V0.5) announced in ISO/IEC JTC1/SC29/WG11 MPEG-7 Output Document No. N2844 in July 1999. Because that scheme describes only the interval information of each video segment of a dynamic video summary, it provides basic functionalities for describing a dynamic summary but has problems in the following aspects.
  • First, it cannot provide access to the original video from the summary segments constituting the video summary. That is, when users want to access the original video to obtain more detailed information on the basis of the summary contents and the overview gained through the video summary, the existing scheme cannot meet that need.
  • Secondly, the existing scheme cannot provide sufficient audio summary description functionalities.
  • Finally, when representing an event-based summary, duplicated descriptions and increased search complexity are unavoidable.
  • BRIEF SUMMARY OF THE INVENTION
  • The disclosed embodiments of the present invention provide a hierarchical video summary description scheme that comprises representative frame information and representative sound information for each video interval included in the video summary, makes feasible a user-customized, event-based summary in which users can select the contents of the video summary, and enables efficient browsing. The embodiments also provide a video summary description data generation method and system using the description scheme.
  • According to one aspect of the present invention, a Video Summary DS is provided that includes at least one HighlightSegment DS describing information on a highlight segment corresponding to one or more video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS describing the highlight segment and an ImageLocator DS describing a representative frame of the highlight segment.
  • In a computer-readable recording medium according to the present invention, a Video Summary Description Scheme (DS) is provided for describing a video summary stored in the computer readable recording medium. The Video Summary DS includes at least one HighlightSegment DS for describing information on a highlight segment corresponding to one or more video summary intervals, and the HighlightSegment DS includes a VideoSegmentLocator DS describing the highlight segment and an ImageLocator DS describing a representative frame of the highlight segment.
  • A method for generating video summary description data according to the present invention is provided, the method including the steps of:
  • (a) analyzing the input original video and producing a video analysis result;
  • (b) defining a summary rule for selecting video summary intervals;
  • (c) selecting video summary intervals capable of summarizing the video contents of the original video, based on the video analysis result and the summary rule, and producing video summary interval information;
  • (d) extracting a representative frame based on the video summary interval information; and
  • (e) generating video summary description data according to a Video Summary Description Scheme (DS) that enables execution of browsing based on the video summary interval information and the representative frame, the Video Summary DS including at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
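  • As an illustration of how steps (a) through (e) might fit together, the following Python sketch chains hypothetical functions for each step into a single generation pipeline; every function name and the toy return values are assumptions made for illustration, not part of the claimed scheme.

```python
# Hypothetical sketch of steps (a)-(e); toy return values stand in for
# real analysis, rule evaluation, and media extraction.

def analyze_video(original_video):                       # (a) video analysis
    return {"shots": [(0, 99), (100, 200), (201, 400)],
            "events": [("goal", 1, (100, 200))]}

def define_summary_rule():                               # (b) summary rule
    return {"keep_events": {"goal", "shoot"}}

def select_summary_intervals(analysis, rule):            # (c) interval selection
    return [(start, end, kind)
            for kind, _, (start, end) in analysis["events"]
            if kind in rule["keep_events"]]

def extract_representative_frame(interval):              # (d) representative frame
    start, end, _ = interval
    return (start + end) // 2                            # middle frame number

def describe_summary(intervals):                         # (e) description data
    # One HighlightSegment per interval, holding a VideoSegmentLocator
    # (the interval) and an ImageLocator (the representative frame).
    return [{"VideoSegmentLocator": (s, e),
             "ImageLocator": extract_representative_frame((s, e, kind)),
             "event": kind}
            for s, e, kind in intervals]

summary_ds = describe_summary(
    select_summary_intervals(analyze_video("soccer.mpg"), define_summary_rule()))
print(summary_ds)
```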
  • The present invention also includes a system for generating video summary description data according to a video summary description scheme corresponding to an original video, the system including:
  • a video analyzer for analyzing the input original video and producing a video analysis result;
  • a summary rule definer for defining the summary rule for selecting the video summary interval;
  • a video summary interval selector for selecting one of the video summary intervals capable of summarizing the video contents of the original video and outputting video summary interval information based on the video analysis result from the video analyzer and the summary rule from the summary rule definer;
  • a representative frame extractor for outputting a representative frame representing a video summary interval based on the video summary interval information from the video summary interval selector; and
  • a video summary describer for generating video summary description data with a Video Summary Description Scheme (DS) by inputting the video summary interval information from the video summary interval selector and the representative frame information from the representative frame extractor,
  • wherein the Video Summary DS includes at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
  • An apparatus for browsing video summary description data according to the present invention is also provided, the video summary description data having a Video Summary Description Scheme (DS) for describing a video summary interval, wherein the Video Summary DS includes at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
  • The browsing apparatus includes:
  • a video player for playing an original video or the video summary interval;
  • an original video representative frame player for playing a representative frame of the original video;
  • a first video summary representative frame player for playing a first summary level of the video summary interval;
  • a second video summary representative frame player for playing a second summary level of a video summary interval, wherein the second summary level is summarized more finely than the first summary level;
  • a level selector for selecting the first summary level or the second summary level thereby enabling the video player to play the selected summary level; and
  • an event selector for enumerating the events or subjects so that a user can browse a desired event.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • The embodiments of the present invention will be explained with reference to the accompanying drawings, in which:
  • FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.
  • FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).
  • FIG. 3 is a compositional drawing of a user interface of a tool for playing and browsing a video summary, which takes as input video summary description data described by the description scheme of FIG. 2.
  • FIG. 4 is a drawing illustrating the flow of data and control for hierarchical browsing using the video summary of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention will be described in detail by way of a preferred embodiment with reference to accompanying drawings, in which like reference numerals are used to identify the same or similar parts.
  • FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.
  • As illustrated in FIG. 1, the apparatus for generating video description data according to an embodiment of the present invention is composed of a feature extracting part 101, an event detecting part 102, an episode detecting part 103, a video summary interval selecting part 104, a summary rule defining part 105, a representative frame extracting part 106, a representative sound extracting part 107 and a video summary describing part 108.
  • The feature extracting part 101 receives the original video as input and extracts the features necessary to generate the video summary. Typical features include shot boundaries, camera motion, caption regions, face regions, and so on.
  • In the feature extraction step, the type of each detected feature and the video time interval in which it is detected are output to the event detection step in the format (feature type, feature serial number, time interval).
  • For example, in the case of camera motion, (camera zoom, 1, 100˜150) represents the information that the first camera zoom was detected in frames 100˜150.
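  • This tuple format can be captured directly as a small record type. The following sketch, with illustrative names only, models the feature output using the camera-zoom example above.

```python
from typing import NamedTuple

class FeatureRecord(NamedTuple):
    feature_type: str    # e.g. "camera zoom", "shot boundary", "caption region"
    serial_number: int   # e.g. 1 for the first occurrence of this feature type
    start_frame: int
    end_frame: int

# (camera zoom, 1, 100~150): the first camera zoom, detected in frames 100-150
zoom = FeatureRecord("camera zoom", 1, 100, 150)
print(zoom)
```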
  • The event detecting part 102 detects the key events included in the original video. These events must represent the contents of the original video well, because they serve as the references for generating the video summary. The events are generally defined differently according to the genre of the original video.
  • These events may either represent a higher semantic level or be visual features from which a higher meaning can be directly inferred. For example, in the case of a soccer video, goal, shoot, caption, replay, and so on can be defined as events.
  • The event detecting part 102 outputs the type of each detected event and its time interval in the format (event type, event serial number, time interval). For example, the event information indicating that the first goal occurred between frames 200 and 300 is output as (goal, 1, 200˜300).
  • The episode detecting part 103, on the basis of the detected events, divides the video into episodes, a unit larger than an event, according to the story flow. After a key event is detected, an episode is detected so as to include the accompanying events that follow the key event. For example, in the case of a soccer video, the goal and the shoot can be key events, and the bench scene, audience scene, goal ceremony scene, replay of the goal scene, and so on compose the accompanying events of those key events.
  • That is, the episode is detected on the basis of the goal and shoot.
  • The episode detection information is output in the format (episode number, time interval, priority, feature shot, associated event information). Herein, the episode number is the serial number of the episode, and the time interval represents the time interval of the episode in shot units. The priority represents the degree of importance of the episode. The feature shot is the number of the shot containing the most important information among the shots comprising the episode, and the associated event information gives the event numbers of the events related to the episode. For example, episode detection information of (episode 1, 4˜6, 1, 5, goal 1, caption 3) means that the first episode includes the 4th through 6th shots, its priority is the highest (1), its feature shot is the fifth shot, and the associated events are the first goal and the third caption.
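  • The event and episode outputs can be modeled in the same way. The sketch below, again with illustrative names, encodes the (goal, 1, 200˜300) event and the (episode 1, 4˜6, 1, 5, goal 1, caption 3) episode from the text.

```python
from dataclasses import dataclass

@dataclass
class EventRecord:
    event_type: str          # e.g. "goal", "shoot", "caption"
    serial_number: int
    start_frame: int
    end_frame: int

@dataclass
class EpisodeRecord:
    episode_number: int
    shot_interval: tuple     # time interval of the episode, in shot units
    priority: int            # 1 = highest importance
    feature_shot: int        # shot carrying the most important information
    associated_events: list  # (event type, event serial number) pairs

goal_1 = EventRecord("goal", 1, 200, 300)           # (goal, 1, 200~300)
episode_1 = EpisodeRecord(1, (4, 6), 1, 5,          # (episode 1, 4~6, 1, 5, ...)
                          [("goal", 1), ("caption", 3)])
print(goal_1)
print(episode_1)
```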
  • The video summary interval selecting part 104 selects, on the basis of the detected episodes, the video intervals that summarize the contents of the original video well. The selection of intervals is governed by the predefined summary rule of the summary rule defining part 105.
  • The summary rule defining part 105 defines the rule for selecting the summary intervals and outputs a control signal for that selection. It also outputs the types of summary events, which are the basis for selecting the video summary intervals, to the video summary describing part 108.
  • The video summary interval selecting part 104 outputs the time information of the selected video summary intervals in frame units, together with the types of events corresponding to those intervals. That is, output such as (100˜200, goal), (500˜700, shoot) indicates that the video segments selected as the video summary intervals are frames 100˜200, frames 500˜700, and so on, and that the event of each segment is a goal and a shoot, respectively. In addition, information such as a file name can be output to facilitate access to an additional video composed of only the video summary intervals.
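  • A minimal sketch of this output, assuming a dictionary-based representation and an assumed file name for the summary-only video, might look as follows.

```python
# Hypothetical output of the interval selecting part 104: frame-unit intervals
# with their event types, plus an optional file name for a summary-only video.

summary_intervals = [
    {"frames": (100, 200), "event": "goal"},
    {"frames": (500, 700), "event": "shoot"},
]
summary_only_video = "soccer_summary.mpg"   # assumed file name, for direct access

for item in summary_intervals:
    print(item["frames"], item["event"], "->", summary_only_video)
```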
  • Once the video summary interval selection is completed, the representative frame and the representative sound are extracted by the representative frame extracting part 106 and the representative sound extracting part 107, respectively, using the video summary interval information.
  • The representative frame extracting part 106 outputs the image frame number representing the video summary interval or outputs the image data.
  • The representative sound extracting part 107 outputs the sound data representing the video summary interval or outputs the sound time interval.
  • The video summary describing part 108 describes the related information according to the Hierarchical Summary Description Scheme of the present invention shown in FIG. 2, so as to make efficient summary and browsing functionalities feasible.
  • The main information of the Hierarchical Summary Description Scheme comprises the types of summary events of the video summary, the time information describing each video summary interval, the representative frame, the representative sound, and the event types in each interval.
  • The video summary describing part 108 outputs the video summary description data according to the description scheme illustrated in FIG. 2.
  • FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).
  • The HierarchicalSummary DS 201 describing the video summary is composed of one or more HighlightLevel DSs 202 and zero or one SummaryThemeList DS 203.
  • The SummaryThemeList DS provides event-based summary and browsing functionality by enumeratively describing the subjects or events constituting the summary. The HighlightLevel DS 202 is composed of as many HighlightSegment DSs 204 as there are video intervals constituting the video summary of that level, plus zero or more nested HighlightLevel DSs.
  • The HighlightSegment DS describes the information corresponding to each video summary interval. It is composed of one VideoSegmentLocator DS 205, zero or more ImageLocator DSs 206, and zero or more SoundLocator DSs 207 and AudioSegmentLocator DSs 208.
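  • A minimal Python sketch of the FIG. 2 structure is given below; the class names follow the DS names in the text, but the field names and types are assumptions for illustration and do not reproduce the actual MPEG-7 definitions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VideoSegmentLocator:      # time information of the segment (or the video itself)
    start_frame: int
    end_frame: int

@dataclass
class ImageLocator:             # representative frame of the segment
    frame_number: int

@dataclass
class SoundLocator:             # representative sound of the segment
    sound_ref: str

@dataclass
class AudioSegmentLocator:      # audio segment of an audio summary
    start_frame: int
    end_frame: int

@dataclass
class HighlightSegment:         # one video summary interval
    video: VideoSegmentLocator
    images: List[ImageLocator] = field(default_factory=list)
    sounds: List[SoundLocator] = field(default_factory=list)
    audio: List[AudioSegmentLocator] = field(default_factory=list)

@dataclass
class HighlightLevel:           # one summary level; may contain nested levels
    segments: List[HighlightSegment]
    sublevels: List["HighlightLevel"] = field(default_factory=list)

@dataclass
class HierarchicalSummary:      # one or more levels, zero or one theme list
    levels: List[HighlightLevel]
    summary_theme_list: Optional[list] = None

# A one-level summary with a single highlight segment:
summary = HierarchicalSummary(levels=[
    HighlightLevel(segments=[
        HighlightSegment(video=VideoSegmentLocator(100, 200),
                         images=[ImageLocator(150)],
                         sounds=[SoundLocator("goal_cheer.wav")])])])
print(summary)
```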
  • The following gives a more detailed description of the HierarchicalSummary DS.
  • The HierarchicalSummary DS has a SummaryComponentList attribute, which explicitly represents the types of summary comprised in the HierarchicalSummary DS.
  • The SummaryComponentList is derived from the SummaryComponentType and describes the summary by enumerating all of the SummaryComponentTypes it comprises.
  • In the SummaryComponentList there are five types: keyFrames, keyVideoClips, keyAudioClips, keyEvents, and unconstraint.
  • keyFrames represents a key frame summary composed of representative frames. keyVideoClips represents a key video clip summary composed of sets of key video intervals. keyEvents represents a summary composed of the video intervals corresponding to an event or a subject. keyAudioClips represents a key audio clip summary composed of sets of representative audio intervals. Finally, unconstraint represents user-defined summary types other than those above.
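  • The five types could be captured as a simple enumeration, as in the sketch below; how SummaryComponentList is actually declared in the MPEG-7 DDL is not reproduced here.

```python
from enum import Enum

class SummaryComponentType(Enum):
    KEY_FRAMES = "keyFrames"            # key frame summary of representative frames
    KEY_VIDEO_CLIPS = "keyVideoClips"   # key video clip summary of key video intervals
    KEY_AUDIO_CLIPS = "keyAudioClips"   # key audio clip summary of representative audio intervals
    KEY_EVENTS = "keyEvents"            # summary of intervals tied to an event or subject
    UNCONSTRAINT = "unconstraint"       # user-defined summary types

# A summary announcing both frame and clip components might enumerate:
summary_component_list = [SummaryComponentType.KEY_FRAMES,
                          SummaryComponentType.KEY_VIDEO_CLIPS]
print([c.value for c in summary_component_list])
```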
  • Also, in order to describe an event-based summary, the HierarchicalSummary DS may comprise the SummaryThemeList DS, which enumerates the events (or subjects) comprised in the summary and describes their IDs.
  • The SummaryThemeList has an arbitrary number of SummaryThemes as elements. A SummaryTheme has an id attribute of ID type and optionally has a parentId attribute.
  • The SummaryThemeList DS permits users to browse the video summary from the viewpoint of each event, or of several subjects, described in the SummaryThemeList. That is, an application tool that reads the description data parses the SummaryThemeList DS, presents the information to the user, and lets the user select the desired subject.
  • If these subjects are enumerated in a simple flat format and the number of subjects is large, it may not be easy for users to find the desired subject.
  • Accordingly, by representing the subjects as a tree structure similar to a ToC (Table of Contents), users can find the desired subject and then browse each subject efficiently.
  • To this end, the embodiments of the present invention permit the parentId attribute to be used optionally in the SummaryTheme. The parentId identifies the parent element (upper subject) in the tree structure.
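  • A minimal sketch of how an application might rebuild this ToC-like tree from SummaryTheme entries, assuming each entry carries an id, a label, and an optional parentId (the ids and labels below are invented for illustration), is:

```python
from collections import defaultdict

# (id, label, parent_id); parent_id None marks a top-level subject.
themes = [
    ("E0", "soccer highlights", None),
    ("E1", "goal", "E0"),
    ("E2", "shoot", "E0"),
]

children = defaultdict(list)
for theme_id, label, parent_id in themes:
    children[parent_id].append((theme_id, label))

def print_tree(parent_id=None, depth=0):
    """Print the subject tree so a user can drill down to the desired theme."""
    for theme_id, label in children[parent_id]:
        print("  " * depth + f"{theme_id}: {label}")
        print_tree(theme_id, depth + 1)

print_tree()
```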
  • The HierarchicalSummary DS of the present invention comprises HighlightLevel DSs, and each HighlightLevel DS comprises one or more HighlightSegment DSs, each of which corresponds to a video segment (or interval) constituting the video summary.
  • The HighlightLevel DS has a themeIds attribute of IDREFS type.
  • The themeIds attribute describes the subject and event ids that are common to the child HighlightLevel DSs of the corresponding HighlightLevel DS, or to all HighlightSegment DSs comprised in that HighlightLevel; these ids are the ones described in the SummaryThemeList DS.
  • The themeIds attribute can denote several events. When producing an event-based summary, having a themeIds value that represents the common subject type at the level, rather than in every HighlightSegment constituting the level, solves the problem of the same id being repeated unnecessarily in all segments of that level.
  • The HighlightSegment DS comprises one VideoSegmentLocator DS, one or more ImageLocator DSs, zero or one SoundLocator DS, and zero or one AudioSegmentLocator DS.
  • Herein, the VideoSegmentLocator DS describes the time information of the video segment constituting the video summary, or the video data itself. The ImageLocator DS describes the image data of the representative frame of the video segment. The SoundLocator DS describes the sound information representing the corresponding video segment interval. The AudioSegmentLocator DS describes the interval time information of the audio segment constituting the audio summary, or the audio data itself.
  • The HighlightSegment DS also has a themeIds attribute. Using the ids defined in the SummaryThemeList, themeIds describes which of the subjects or events described in the SummaryThemeList DS relate to the corresponding highlight segment.
  • The themeIds attribute can denote more than one event. By allowing one highlight segment to have several subjects, the present invention avoids the duplicated descriptions that are unavoidable in the existing event-based summary method, in which the video segment must be described once for each event (or subject).
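  • Putting the pieces together, the sketch below shows what generated description data could look like for one level of a soccer summary. It is built with Python's xml.etree.ElementTree, uses the DS names from the text as element and attribute names, and does not claim to reproduce the exact MPEG-7 syntax; note that the second segment's themeIds references two themes instead of describing the segment twice.

```python
import xml.etree.ElementTree as ET

def segment(parent, start, end, frame, sound, theme_ids):
    seg = ET.SubElement(parent, "HighlightSegment", themeIds=" ".join(theme_ids))
    ET.SubElement(seg, "VideoSegmentLocator", start=str(start), end=str(end))
    ET.SubElement(seg, "ImageLocator", frame=str(frame))
    ET.SubElement(seg, "SoundLocator", ref=sound)
    return seg

summary = ET.Element("HierarchicalSummary",
                     summaryComponentList="keyVideoClips keyEvents")
themes = ET.SubElement(summary, "SummaryThemeList")
ET.SubElement(themes, "SummaryTheme", id="E1").text = "goal"
ET.SubElement(themes, "SummaryTheme", id="E2").text = "shoot"

level0 = ET.SubElement(summary, "HighlightLevel", name="level0")
segment(level0, 100, 200, 150, "goal_cheer.wav", ["E1"])
segment(level0, 500, 700, 600, "anchor_comment.wav", ["E1", "E2"])  # two themes, one segment

ET.indent(summary)                       # pretty-printing; requires Python 3.9+
print(ET.tostring(summary, encoding="unicode"))
```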
  • Unlike the existing hierarchical summary description scheme, which describes only the time information of each highlight video interval, the present invention introduces the HighlightSegment DS for describing each highlight segment constituting the video summary. By placing the VideoSegmentLocator DS, the ImageLocator DS, and the SoundLocator DS within it, the HighlightSegment DS describes the video interval information, the representative frame information, and the representative sound information of each highlight segment. This makes feasible both an overview through the highlight segment video and navigation and browsing that efficiently utilize the representative frame and representative sound of each segment.
  • Because the SoundLocator DS can describe the representative sound corresponding to a video interval, a characteristic sound that represents the interval, for example a gunshot, an outcry, an anchor's comment in soccer (for example, on a goal or a shoot), an actor's name in a drama, or a specific word, allows efficient browsing: without playing the video interval, a user can quickly judge whether the interval is an important one containing the desired contents, and roughly what contents it contains.
  • FIG. 3 is a compositional drawing of a user interface of a tool for playing and browsing a video summary, which takes as input video summary description data described by the description scheme of FIG. 2.
  • The video playing part 301 plays the original video or the video summary according to the user's control. The original video representative frame part 305 shows the representative frames of the original video shots; that is, it is composed of a series of reduced-size images.
  • The representative frame of each original video shot is described not by the HierarchicalSummary DS of the present invention but by an additional description scheme, and it can be utilized when that description data is provided along with the summary description data described by the HierarchicalSummary DS of the present invention.
  • The user accesses the original video shot corresponding to a representative frame by clicking the representative frame.
  • The video summary level 0 representative frame and representative sound part 307 and the video summary level 1 representative frame and representative sound part 306 show the frame and sound information representing each video interval of video summary level 0 and video summary level 1, respectively. That is, each is composed of reduced-size iconic images representing a series of images and sounds.
  • If the user clicks a representative frame in a video summary representative frame and representative sound part, the user accesses the original video interval corresponding to that representative frame. If the user clicks the representative sound icon corresponding to a representative frame of the video summary, the representative sound of that video interval is played.
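  • This click behavior amounts to a lookup from a representative frame back to the original-video interval described by its segment's VideoSegmentLocator; a self-contained sketch with hypothetical data is shown below.

```python
# Each entry: representative frame number -> original-video interval it came from.
segments = [
    {"rep_frame": 150, "interval": (100, 200)},   # e.g. a goal segment
    {"rep_frame": 600, "interval": (500, 700)},   # e.g. a shoot segment
]

def on_representative_frame_click(rep_frame):
    """Return the original-video interval to jump the player to."""
    for seg in segments:
        if seg["rep_frame"] == rep_frame:
            return seg["interval"]
    raise KeyError(f"no segment for representative frame {rep_frame}")

print(on_representative_frame_click(600))   # (500, 700): play this original interval
```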
  • The video summary controlling part 302 receives the user's control input for playing the video summary. When a multi-level video summary is provided, the user performs overview and browsing by selecting the summary of the desired level through the level selecting part 303. The event selecting part 304 enumerates the events and subjects provided by the SummaryThemeList, and the user performs overview and browsing by selecting the desired event; see the sketch after this paragraph. In this way, a user-customized summary is realized.
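The selection behaviour of parts 303 and 304 can be illustrated with a minimal sketch. The in-memory HighlightSegment class and its fields below are hypothetical stand-ins for the parsed description data; only the level selection and theme filtering logic is the point.

```python
# Minimal sketch of level selection (303) and event/theme selection (304)
# over parsed summary description data; the data structures are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HighlightSegment:
    level: int                                   # summary level (0 = most condensed)
    start_sec: float                             # start of the interval in the original video
    duration_sec: float
    theme_ids: set = field(default_factory=set)  # events/subjects from the SummaryThemeList

def select_segments(segments, level, wanted_themes=None):
    """Returns the segments of the chosen level, optionally filtered by theme."""
    chosen = [s for s in segments if s.level == level]
    if wanted_themes:
        chosen = [s for s in chosen if s.theme_ids & set(wanted_themes)]
    return chosen

summary = [
    HighlightSegment(0, 120.0, 8.0, {"goal"}),
    HighlightSegment(1, 95.0, 6.0, {"shoot"}),
    HighlightSegment(1, 120.0, 12.0, {"goal", "shoot"}),
]
# The user picks level 1 and the "goal" event, yielding a customized summary.
for seg in select_segments(summary, level=1, wanted_themes=["goal"]):
    print(f"play {seg.start_sec:.1f}s for {seg.duration_sec:.1f}s, themes={seg.theme_ids}")
```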
  • FIG. 4 is a compositional drawing of the flow of data and control for hierarchical browsing using the video summary of the present invention.
  • Browsing is performed by accessing the browsing data in the manner of FIG. 4 through the user interface of FIG. 3. The browsing data are the video summary, the representative frames of the video summary, the original video 406 and the original video representative frames 405.
  • The video summary is assumed here to have two levels, although it may of course have more. The video summary level 0 401 is summarized into a shorter time than the video summary level 1 403; that is, video summary level 1 contains more contents than video summary level 0. The video summary level 0 representative frame 402 is the representative frame of video summary level 0, and the video summary level 1 representative frame 404 is the representative frame of video summary level 1.
  • The video summary and the original video are played through the video playing part 301 shown in FIG. 3. The video summary level 0 representative frame is displayed in the video summary level 0 representative frame and the representative sound part 306, the video summary level 1 representative frame is displayed in the video summary level 1 representative frame and the representative sound part 307, and the original video representative frame is displayed in the original video representative frame part 305.
  • The hierarchical browsing method illustrated in FIG. 4 can follow various hierarchical paths, for example:
  • Case 1: (1)-(2)
  • Case 2: (1)-(3)-(5)
  • Case 3: (1)-(3)-(4)-(6)
  • Case 4: (7)-(5)
  • Case 5: (7)-(4)-(6)
  • The overall browsing scheme is as follows.
  • First, the user grasps the overall contents of the original video by watching its video summary; either video summary level 0 or video summary level 1 may be played. When more detailed browsing is desired after watching the video summary, the video interval of interest is identified through the video summary representative frames. If the exact scene being sought is identified in a video summary representative frame, it is played by directly accessing the original video interval to which that representative frame is linked. If still more detailed information is needed, the user may reach the desired part of the original video either by examining the representative frames of the next level or by hierarchically examining the representative frames of the original video.
  • Although reaching the desired contents could take a long time if browsing were done by playing the original video, the browsing time is substantially reduced by directly accessing the contents of the original video through the hierarchical representative frames.
  • Existing general video indexing and browsing techniques divide the original video into shot units, construct a representative frame for each shot, and access a shot by recognizing the desired shot among the representative frames.
  • In this case, because the number of shots in the original video is large, substantial time and effort are required to browse for the desired contents among the many representative frames.
  • In the present invention, the desired video can be accessed quickly by constituting a hierarchical representative frame structure from the representative frames of the video summary.
  • Case 1 plays video summary level 0 and accesses the original video directly from a video summary level 0 representative frame.
  • Case 2 plays video summary level 0, selects the representative frame of greatest interest among the video summary level 0 representative frames, and, before accessing the original video, identifies the desired scene among the neighboring video summary level 1 representative frames to obtain more detailed information, and then accesses the original video.
  • Case 3 applies when, in case 2, access from the video summary level 1 representative frame to the original video is difficult: the representative frame of greatest interest is selected to obtain more detailed information, the desired scene is identified among the neighboring representative frames of the original video, and the original video is then accessed through that original video representative frame.
  • Case 4 and case 5 start from playing video summary level 1, and their paths are similar to the above cases. A minimal sketch of this kind of hierarchical navigation follows.
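The sketch below walks one of the paths above (level 0 representative frame, refined through neighboring level 1 frames and then through original-video shot frames before seeking the original video). It assumes each representative frame carries the time of the interval it represents; the data structures and the neighborhood window are hypothetical.

```python
# Minimal sketch of the hierarchical browsing paths (cases 1-3 above),
# assuming each representative frame records the time of its interval.
def neighbours(frames, time_sec, window_sec=30.0):
    """Representative frames whose interval lies near the selected time."""
    return [f for f in frames if abs(f["time"] - time_sec) <= window_sec]

def browse(level1_frames, shot_frames, picked):
    """Refine a level-0 pick through level-1 frames, then shot frames."""
    target = picked["time"]                       # case 1: jump straight to the original video
    finer = neighbours(level1_frames, target)      # case 2: neighboring level-1 frames
    if finer:
        target = finer[0]["time"]
        shots = neighbours(shot_frames, target, window_sec=10.0)  # case 3: shot frames
        if shots:
            target = shots[0]["time"]
    return target                                  # seek position in the original video (seconds)

level0 = [{"time": 120.0}]
level1 = [{"time": 110.0}, {"time": 125.0}]
shots = [{"time": 118.0}, {"time": 121.0}, {"time": 124.0}]
print(f"seek original video to {browse(level1, shots, level0[0]):.1f}s")
```

The point of the hierarchy is that the user inspects only a handful of frames per level instead of every shot-level representative frame of the original video.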
  • When applied to a server/client environment, the present invention can provide a system in which multiple clients access one server and perform video overview and browsing. The server receives the original video, generates the video summary description data on the basis of the hierarchical summary description scheme, and is equipped with a video summary description data generation system that links the original video with the video summary description data. A client accesses the server through a communication network, obtains an overview of the video using the video summary description data, and browses and navigates the video by accessing the original video. A minimal client/server sketch follows.
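As a rough illustration of that arrangement, the sketch below serves the summary description data over HTTP and has a client fetch it before touching the original video. The port, directory layout and the file name summary_description.xml are hypothetical, and the file is assumed to exist in the served directory.

```python
# Minimal sketch of one server and one client exchanging summary description
# data; paths, port and file names are hypothetical assumptions.
import threading
import time
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

def run_server(port=8080):
    # Serves the current directory, assumed to contain summary_description.xml
    # (the video summary description data) and the original video files.
    HTTPServer(("localhost", port), SimpleHTTPRequestHandler).serve_forever()

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start

# Client side: fetch the lightweight description data first, perform the
# overview with it, and only then request the original video intervals
# selected for detailed browsing.
with urllib.request.urlopen("http://localhost:8080/summary_description.xml") as resp:
    description_data = resp.read()
print(f"received {len(description_data)} bytes of summary description data")
```

The design point is that the description data is small compared with the original video, so many clients can overview the content cheaply and fetch full-rate video only for the intervals they actually browse.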
  • Although the present invention has been described on the basis of preferred embodiments, these embodiments exemplify rather than limit the present invention. It will be appreciated by those skilled in the art that changes and variations in the embodiments herein can be made without departing from the spirit and scope of the present invention as defined by the following claims and their equivalents.

Claims (23)

1. A computer-readable recording medium having a Video Summary Description Scheme (DS) for describing a video summary interval stored therein, the Video Summary DS comprising: at least one HighlightSegment DS for describing information about a highlight segment corresponding to the video summary interval, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
2. The computer-readable recording medium of claim 1 wherein the VideoSegmentLocator DS comprises one of time information and video itself of the highlight segment.
3. A method for generating video summary description data corresponding to original video according to a video summary description scheme, comprising the steps of:
(a) analyzing the original video and producing a video analysis result;
(b) defining a summary rule for selecting a video summary interval;
(c) selecting the video summary interval capable of summarizing video contents from the original video based on the original video analysis result and the summary rule, which constitute video summary interval information;
(d) extracting a representative frame based on the video summary interval information; and
(e) generating video summary description data according to a Video Summary Description Scheme (DS) for enabling execution of browsing based on the video summary interval information and the representative frame,
wherein the Video Summary DS comprises at least one HighlightSegment DS for describing information on a highlight segment corresponding to the video summary interval, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
4. The method of claim 3 wherein the VideoSegmentLocator DS comprises one of time information and video itself of the highlight segment.
5. The method of claim 4 wherein step (a) comprises the steps of:
extracting features from the original video and outputting the types of features and video time interval at which those features are detected;
detecting key events included in the original video based on the types of features and video time interval at which those features are detected; and
detecting an episode by dividing the original video into a story flow based on the detected key events.
6. The method of claim 4 wherein step (d) comprises the step of extracting a representative sound from the video summary interval information.
7. The method of claim 4 wherein the HighlightSegment DS further comprises a SoundLocator DS describing representative sound information of the highlight segment.
8. The method of claim 4 wherein the HighlightSegment DS further comprises an AudioSegmentLocator DS describing audio segment information constituting an audio summary of the highlight segment.
9. A system for generating video summary description data of original video according to a video summary description scheme, comprising:
video analyzing means for analyzing the original video and producing a video analysis result;
summary rule defining means for defining a summary rule for selecting a video summary interval;
video summary interval selecting means for selecting a video interval capable of summarizing the video contents of the original video and outputting video summary interval information based on the video analysis result from the video analyzing means and the summary rule from the summary rule defining means;
representative frame extracting means for outputting a representative frame representing the video summary interval based on the video summary interval information from the video summary interval selecting means; and
video summary describing means for generating video summary description data with a Video Summary Description Scheme (DS) by inputting the video summary interval information from the video summary interval selecting means and the representative frame information from the representative frame extracting means,
wherein the Video Summary DS comprises at least one HighlightSegment DS for describing information on a highlight segment corresponding to the video summary interval, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
10. The system of claim 9 wherein the VideoSegmentLocator DS comprises one of time information and video itself of the highlight segment.
11. The system of claim 10 wherein the video analyzing means comprises:
feature extracting means for extracting features from the original video and producing types of features and a video time interval at which the types of features are detected;
event detecting means for detecting key events included in the original video by inputting the types of features and the video time interval at which the types of features are detected; and
episode detecting means for detecting an episode by dividing the original video into a story flow based on the detected event.
12. The system of claim 10, further comprising representative sound extracting means for extracting a representative sound by inputting the video summary interval information and providing the extracted representative sound to the video summary describing means.
13. The system of claim 10 wherein the HighlightSegment DS further comprises a SoundLocator DS for describing representative sound information of the highlight segment.
14. The system of claim 10 wherein the HighlightSegment DS further comprises an AudioSegmentLocator DS for describing audio segment information constituting an audio summary of the highlight segment.
15. An apparatus for browsing video summary description data, the video summary description data having a Video Summary Description Scheme (DS) for describing video summary intervals, the Video Summary DS comprising at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, the HighlightSegment DS comprising a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
16. The apparatus of claim 15 wherein the apparatus is arranged to display a representative frame of the highlight segment on a display device and to play the highlight segment.
17. The apparatus of claim 16 wherein the VideoSegmentLocator DS describes one of time information and video itself of the highlight segment.
18. The apparatus of claim 17 wherein the HighlightSegment DS further comprises:
a SoundLocator DS for describing representative sound information of the highlight segment; and
an AudioSegmentLocator DS for describing audio segment information constituting an audio summary of the highlight segment.
19. An apparatus for browsing video summary description data corresponding to an original video, the video summary description data having a HierarchicalSummary Description Scheme (DS) for describing a video summary, the apparatus comprising:
a video player for playing an original video or the video summary;
an original video representative frame player for playing a representative frame of the original video; and
a video summary representative frame player for playing a summary level of video interval.
20. The apparatus of claim 19 wherein the HierarchicalSummary DS comprises a HighlightLevel DS that comprises at least one HighlightSegment DS describing information on a highlight segment corresponding to the video summary interval,
the HighlightSegment DS comprising a VideoSegmentLocator DS for describing the highlight segment, and an ImageLocator DS for describing a representative frame of the highlight segment.
21. The apparatus of claim 20 wherein the VideoSegmentLocator DS describes one of time information and video itself of the highlight segment.
22. The apparatus of claim 21 wherein the HighlightSegment DS further comprises:
a SoundLocator DS for describing representative sound information of the highlight segment; and
an AudioSegmentLocator DS for describing audio segment information constituting an audio summary of the highlight segment.
23. A Video Summary Description Scheme (DS) for describing video summary intervals of an original video, wherein the Video Summary DS comprises at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of video summary intervals, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
US11/614,406 1999-10-11 2006-12-21 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing Abandoned US20070101266A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/614,406 US20070101266A1 (en) 1999-10-11 2006-12-21 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR1999-43712 1999-10-11
KR19990043712 1999-10-11
KR10-2000-0055781A KR100371813B1 (en) 1999-10-11 2000-09-22 A Recorded Medium for storing a Video Summary Description Scheme, An Apparatus and a Method for Generating Video Summary Descriptive Data, and An Apparatus and a Method for Browsing Video Summary Descriptive Data Using the Video Summary Description Scheme
KR2000-55781 2000-09-22
US09/675,984 US7181757B1 (en) 1999-10-11 2000-09-29 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
US11/614,406 US20070101266A1 (en) 1999-10-11 2006-12-21 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/675,984 Continuation US7181757B1 (en) 1999-10-11 2000-09-29 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Publications (1)

Publication Number Publication Date
US20070101266A1 true US20070101266A1 (en) 2007-05-03

Family

ID=37745152

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/675,984 Expired - Fee Related US7181757B1 (en) 1999-10-11 2000-09-29 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
US11/614,406 Abandoned US20070101266A1 (en) 1999-10-11 2006-12-21 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/675,984 Expired - Fee Related US7181757B1 (en) 1999-10-11 2000-09-29 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Country Status (1)

Country Link
US (2) US7181757B1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050155054A1 (en) * 2002-01-28 2005-07-14 Sharp Laboratories Of America, Inc. Summarization of sumo video content
US20080118120A1 (en) * 2006-11-22 2008-05-22 Rainer Wegenkittl Study Navigation System and Method
US20100241432A1 (en) * 2009-03-17 2010-09-23 Avaya Inc. Providing descriptions of visually presented information to video teleconference participants who are not video-enabled
US20140325568A1 (en) * 2013-04-26 2014-10-30 Microsoft Corporation Dynamic creation of highlight reel tv show
US20140328570A1 (en) * 2013-01-09 2014-11-06 Sri International Identifying, describing, and sharing salient events in images and videos
US20150254341A1 (en) * 2014-03-10 2015-09-10 Cisco Technology Inc. System and Method for Deriving Timeline Metadata for Video Content
WO2015191376A1 (en) * 2014-06-09 2015-12-17 Pelco, Inc. Smart video digest system and method
WO2016172379A1 (en) * 2015-04-21 2016-10-27 Stinkdigital, Ltd Video delivery platform
US9582574B2 (en) 2015-01-06 2017-02-28 International Business Machines Corporation Generating navigable content overviews
US10432987B2 (en) 2017-09-15 2019-10-01 Cisco Technology, Inc. Virtualized and automated real time video production system
US10743085B2 (en) 2017-07-21 2020-08-11 Microsoft Technology Licensing, Llc Automatic annotation of audio-video sequences

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7181757B1 (en) * 1999-10-11 2007-02-20 Electronics And Telecommunications Research Institute Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
US7877766B1 (en) 2000-05-04 2011-01-25 Enreach Technology, Inc. Method and system of providing a non-skippable sub-advertisement stream
US8564661B2 (en) * 2000-10-24 2013-10-22 Objectvideo, Inc. Video analytic rule detection system and method
US9892606B2 (en) * 2001-11-15 2018-02-13 Avigilon Fortress Corporation Video surveillance system employing video primitives
JP4191932B2 (en) * 2001-03-08 2008-12-03 パナソニック株式会社 Media distribution apparatus and media distribution method
US20040024780A1 (en) * 2002-08-01 2004-02-05 Koninklijke Philips Electronics N.V. Method, system and program product for generating a content-based table of contents
KR100571347B1 (en) * 2002-10-15 2006-04-17 학교법인 한국정보통신학원 Multimedia Contents Service System and Method Based on User Preferences and Its Recording Media
AU2003303116A1 (en) * 2002-12-19 2004-07-14 Koninklijke Philips Electronics N.V. A residential gateway system having a handheld controller with a display for displaying video signals
US20040177317A1 (en) * 2003-03-07 2004-09-09 John Bradstreet Closed caption navigation
JP4965257B2 (en) * 2003-05-26 2012-07-04 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ System and method for generating an audiovisual summary of audiovisual program content
US7480442B2 (en) * 2003-07-02 2009-01-20 Fuji Xerox Co., Ltd. Systems and methods for generating multi-level hypervideo summaries
JP2009503981A (en) * 2005-07-27 2009-01-29 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Method and apparatus for providing immediate review of multimedia material
JP4721066B2 (en) * 2007-03-16 2011-07-13 ソニー株式会社 Information processing apparatus, information processing method, and program
US8503523B2 (en) * 2007-06-29 2013-08-06 Microsoft Corporation Forming a representation of a video item and use thereof
US20090150784A1 (en) * 2007-12-07 2009-06-11 Microsoft Corporation User interface for previewing video items
US20110211811A1 (en) * 2008-10-30 2011-09-01 April Slayden Mitchell Selecting a video image
US20100211561A1 (en) * 2009-02-13 2010-08-19 Microsoft Corporation Providing representative samples within search result sets
US8959071B2 (en) 2010-11-08 2015-02-17 Sony Corporation Videolens media system for feature selection
US8938393B2 (en) 2011-06-28 2015-01-20 Sony Corporation Extended videolens media engine for audio recognition
WO2016098458A1 (en) * 2014-12-15 2016-06-23 ソニー株式会社 Information processing method, video processing device, and program
US10616651B2 (en) * 2015-05-22 2020-04-07 Playsight Interactive Ltd. Event based video generation
KR101938667B1 (en) 2017-05-29 2019-01-16 엘지전자 주식회사 Portable electronic device and method for controlling the same
US10795932B2 (en) 2017-09-28 2020-10-06 Electronics And Telecommunications Research Institute Method and apparatus for generating title and keyframe of video
US11042394B2 (en) 2017-10-13 2021-06-22 Electronics And Telecommunications Research Institute Method for processing input and output on multi kernel system and apparatus for the same
US10789990B2 (en) 2018-12-17 2020-09-29 International Business Machines Corporation Video data learning and prediction

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5821945A (en) * 1995-02-03 1998-10-13 The Trustees Of Princeton University Method and apparatus for video browsing based on content and structure
US5956026A (en) * 1997-12-19 1999-09-21 Sharp Laboratories Of America, Inc. Method for hierarchical summarization and browsing of digital video
US5963203A (en) * 1997-07-03 1999-10-05 Obvious Technology, Inc. Interactive video icon with designated viewing position
US6236395B1 (en) * 1999-02-01 2001-05-22 Sharp Laboratories Of America, Inc. Audiovisual information management system
US6573907B1 (en) * 1997-07-03 2003-06-03 Obvious Technology Network distribution and management of interactive video and multi-media containers
US6601026B2 (en) * 1999-09-17 2003-07-29 Discern Communications, Inc. Information retrieval by natural language querying
US6961954B1 (en) * 1997-10-27 2005-11-01 The Mitre Corporation Automated segmentation, information extraction, summarization, and presentation of broadcast news
US7181757B1 (en) * 1999-10-11 2007-02-20 Electronics And Telecommunications Research Institute Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3407840B2 (en) 1996-02-13 2003-05-19 日本電信電話株式会社 Video summarization method
JPH1169281A (en) 1997-08-15 1999-03-09 Media Rinku Syst:Kk Summary generating device and video display device
WO1999041684A1 (en) 1998-02-13 1999-08-19 Fast Tv Processing and delivery of audio-video information
US6278446B1 (en) 1998-02-23 2001-08-21 Siemens Corporate Research, Inc. System for interactive organization and browsing of video
US6573904B1 (en) * 2000-01-06 2003-06-03 International Business Machines Corporation Method and apparatus in a data processing system for updating color buffer window identifies when an overlay window identifier is removed

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5821945A (en) * 1995-02-03 1998-10-13 The Trustees Of Princeton University Method and apparatus for video browsing based on content and structure
US5963203A (en) * 1997-07-03 1999-10-05 Obvious Technology, Inc. Interactive video icon with designated viewing position
US6573907B1 (en) * 1997-07-03 2003-06-03 Obvious Technology Network distribution and management of interactive video and multi-media containers
US6961954B1 (en) * 1997-10-27 2005-11-01 The Mitre Corporation Automated segmentation, information extraction, summarization, and presentation of broadcast news
US5956026A (en) * 1997-12-19 1999-09-21 Sharp Laboratories Of America, Inc. Method for hierarchical summarization and browsing of digital video
US5995095A (en) * 1997-12-19 1999-11-30 Sharp Laboratories Of America, Inc. Method for hierarchical summarization and browsing of digital video
US6236395B1 (en) * 1999-02-01 2001-05-22 Sharp Laboratories Of America, Inc. Audiovisual information management system
US6601026B2 (en) * 1999-09-17 2003-07-29 Discern Communications, Inc. Information retrieval by natural language querying
US7181757B1 (en) * 1999-10-11 2007-02-20 Electronics And Telecommunications Research Institute Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050155053A1 (en) * 2002-01-28 2005-07-14 Sharp Laboratories Of America, Inc. Summarization of sumo video content
US20050155055A1 (en) * 2002-01-28 2005-07-14 Sharp Laboratories Of America, Inc. Summarization of sumo video content
US8028234B2 (en) * 2002-01-28 2011-09-27 Sharp Laboratories Of America, Inc. Summarization of sumo video content
US20050155054A1 (en) * 2002-01-28 2005-07-14 Sharp Laboratories Of America, Inc. Summarization of sumo video content
US20080118120A1 (en) * 2006-11-22 2008-05-22 Rainer Wegenkittl Study Navigation System and Method
US7787679B2 (en) * 2006-11-22 2010-08-31 Agfa Healthcare Inc. Study navigation system and method
US20100241432A1 (en) * 2009-03-17 2010-09-23 Avaya Inc. Providing descriptions of visually presented information to video teleconference participants who are not video-enabled
US8386255B2 (en) * 2009-03-17 2013-02-26 Avaya Inc. Providing descriptions of visually presented information to video teleconference participants who are not video-enabled
US10679063B2 (en) * 2012-04-23 2020-06-09 Sri International Recognizing salient video events through learning-based multimodal analysis of visual features and audio-based analytics
US20140328570A1 (en) * 2013-01-09 2014-11-06 Sri International Identifying, describing, and sharing salient events in images and videos
US20140325568A1 (en) * 2013-04-26 2014-10-30 Microsoft Corporation Dynamic creation of highlight reel tv show
US10349093B2 (en) * 2014-03-10 2019-07-09 Cisco Technology, Inc. System and method for deriving timeline metadata for video content
US20150254341A1 (en) * 2014-03-10 2015-09-10 Cisco Technology Inc. System and Method for Deriving Timeline Metadata for Video Content
WO2015191376A1 (en) * 2014-06-09 2015-12-17 Pelco, Inc. Smart video digest system and method
US10679671B2 (en) 2014-06-09 2020-06-09 Pelco, Inc. Smart video digest system and method
US9582574B2 (en) 2015-01-06 2017-02-28 International Business Machines Corporation Generating navigable content overviews
WO2016172379A1 (en) * 2015-04-21 2016-10-27 Stinkdigital, Ltd Video delivery platform
US10743085B2 (en) 2017-07-21 2020-08-11 Microsoft Technology Licensing, Llc Automatic annotation of audio-video sequences
US10432987B2 (en) 2017-09-15 2019-10-01 Cisco Technology, Inc. Virtualized and automated real time video production system

Also Published As

Publication number Publication date
US7181757B1 (en) 2007-02-20

Similar Documents

Publication Publication Date Title
US7181757B1 (en) Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
JP4652462B2 (en) Metadata processing method
KR100371813B1 (en) A Recorded Medium for storing a Video Summary Description Scheme, An Apparatus and a Method for Generating Video Summary Descriptive Data, and An Apparatus and a Method for Browsing Video Summary Descriptive Data Using the Video Summary Description Scheme
KR100512138B1 (en) Video Browsing System With Synthetic Key Frame
US20030122861A1 (en) Method, interface and apparatus for video browsing
KR100296967B1 (en) Method for representing multi-level digest segment information in order to provide efficient multi-level digest streams of a multimedia stream and digest stream browsing/recording/editing system using multi-level digest segment information scheme.
US20070136755A1 (en) Video content viewing support system and method
JP2001028722A (en) Moving picture management device and moving picture management system
KR100370247B1 (en) Video browser based on character relation
US20040181545A1 (en) Generating and rendering annotated video files
JP4732418B2 (en) Metadata processing method
CN101132528A (en) Metadata reproduction apparatus, metadata delivery apparatus, metadata search apparatus, metadata re-generation condition setting apparatus
KR100319158B1 (en) Video browsing system based on event
JP4652389B2 (en) Metadata processing method
Li et al. Bridging the semantic gap in sports
KR100361499B1 (en) Method for representing cause/effect relationship among segments in order to provide efficient browsing of video stream and video browsing method using the cause/effect relationships among segments
KR100518846B1 (en) Video data construction method for video browsing based on content

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION