WO2001015017A1 - Video data structure for video browsing based on event - Google Patents

Video data structure for video browsing based on event Download PDF

Info

Publication number
WO2001015017A1
WO2001015017A1 PCT/KR2000/000969 KR0000969W WO0115017A1 WO 2001015017 A1 WO2001015017 A1 WO 2001015017A1 KR 0000969 W KR0000969 W KR 0000969W WO 0115017 A1 WO0115017 A1 WO 0115017A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
data
event
relation
highlight
Prior art date
Application number
PCT/KR2000/000969
Other languages
English (en)
French (fr)
Inventor
Jung Min Song
Jin Soo Lee
Original Assignee
Lg Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lg Electronics Inc. filed Critical Lg Electronics Inc.
Priority to AU67385/00A priority Critical patent/AU6738500A/en
Publication of WO2001015017A1 publication Critical patent/WO2001015017A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74Browsing; Visualisation therefor
    • G06F16/745Browsing; Visualisation therefor the internal structure of a single video sequence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N5/926Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback by pulse code modulation

Definitions

  • the present invention relates to a video browsing system, and more particularly to a video data structure for video browsing based on events, in which periods of a video that satisfy event conditions, based on semantic data, may be summarized and displayed.
  • in a video browsing system, users typically simply view movies and/or dramas as broadcast through a TV or played at a movie theater. However, a user may wish to view a particular movie or drama at a particular time, or wish to view only a particular section of a movie or drama. Accordingly, various techniques which enable selective viewing of a movie/drama or sections of a movie/drama have been suggested.
  • various video data may be represented or classified into a format chunk, an index chunk, a media chunk, a segment chunk, a target chunk, and/or a representation chunk.
  • data on various characters or objects, such as the name of an object, its position on the screen, and numeric data relating to a segment of the video data in which the object appears, may be represented by the target and representation chunks. Accordingly, a user can select an object through a table and reproduce for display a particular section of the video where the object is shown.
  • the additional data table may include a position where an actor appears, a position where a character of the actor appears, and a position where stage properties appear, such that a scene can be reproduced as selected by a user through the additional data table.
  • information on the selected stage property, such as the manufacturer and price, may be displayed on a screen, and the user may be able to connect with the manufacturer or a seller of the stage property through a network connection.
  • recording information on each section of a video in a video map has been suggested.
  • information such as the degree of violence, the degree of adult content, the degree of importance of the contents, character positions, and the degree of difficulty in understanding may be indicated for each section of a video in the video map.
  • the user may set a degree of preference for one or more items of the video map, and only sections of the video meeting the set degree of preference would be reproduced, thereby limiting a display of particular contents to unauthorized viewers.
  • other techniques in the related art which allow users to selectively view a portion of a video include a temporal relational graph of shots for a video. However, viewing a temporal relational graph is similar to viewing several representative scenes of a video and would not allow a user to easily follow the contents of a video.
  • an object of the present invention is to solve at least the problems and disadvantages of the related art.
  • Another object of the present invention is to provide a more efficient video browsing system.
  • still another object of the present invention is to provide a video data structure based on events significant to constant and variable relations between characters, or relations between characters and places.
  • a further object of the present invention is to provide a video data structure for a video browser in which the contents of an event are summarized and displayed.
  • a video data structure for video browsing based on events includes a syntactic description scheme (DS) of actual video segments, a semantic DS for describing semantic data, and a visualization DS for displaying a summary of an entire video or a segment of video data.
  • the semantic DS has an event DS describing event data, wherein the event DS has linking data to link to the visualization DS for displaying summary data of a selected event and linking data to link to segments of the syntactic data for displaying actual video segments of the selected event.
  • the semantic DS includes at least one event/object relation graph data for describing a relation among object, place, and event; a constant relation between characters; or variable relations between characters.
  • the event/object relation graph data may include linking data to link at least one object with at least one event to display a relation among object, place and event, and an object type for identifying whether each object is a place or character.
  • the event/object relation graph data may include linking data to link at least one object with at least one place and at least one event to display a relation among object, place, and event.
  • the event/object relation graph data may include two or more objects, a relation name and a relation type to display a constant relation between characters, a variable relation between characters, and a relation with an event which corresponds to either a constant or variable relation.
  • the visualization DS is used in displaying a summary of an entire video or a video segment, and includes at least one of segment linking data for linking segments to be used in a successive display of the summary data, key frame data for summarizing and displaying key-frame-based video data, and highlight data for summarizing and displaying video data as a video highlight.
  • the highlight data may have multiple levels depending on the degree of detail in the summarized data, and the summarized data corresponding to each level includes a segment for linking the segments which will be used in the highlight. If the highlight data has multiple levels, the summarized data corresponding to each level may include time data of a period which will be used in the highlight.
  • the event DS links the semantic DS with the visualization DS, and includes linking data for linking highlight data to summarize a specific event.
  • the event DS may further include one or a combination of linking data for linking the key frame data to summarize a specific event, linking data for linking at least one time data corresponding to a summarized period describing a specific event, linking data for linking at least one segment corresponding to a summarized period describing a specific event, and at least one time data corresponding to a summarized period describing a specific event.
  • the time data may be separated from the time data of the highlight data within the visualization DS data.
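The three description schemes and their links summarized above can be pictured as plain data records. The following is an illustrative sketch only; all class and field names are hypothetical and are not defined by the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Segment:
    """Syntactic DS entry: an actual video segment."""
    segment_id: str
    start: float      # time data: temporal position of the segment in the video
    duration: float

@dataclass
class HighlightLevel:
    """One level of a multi-level highlight, per the degree of detail."""
    level: int
    segment_ids: List[str]   # links to the segments used at this level

@dataclass
class VisualizationDS:
    """Summary data: key frames and/or a multi-level highlight."""
    key_frame_ids: List[str] = field(default_factory=list)
    highlight_levels: List[HighlightLevel] = field(default_factory=list)

@dataclass
class EventDS:
    """Semantic DS entry: one event with its two kinds of links."""
    name: str
    annotation: str                    # text explaining the event
    segment_refs: List[str]            # links to actual video segments
    visualization_ref: Optional[str]   # link to summary data for the event
```

Selecting an event would then amount to following `segment_refs` to play the actual video, or `visualization_ref` to play a summary.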
  • Fig. 1 is a video data structure based on events in accordance with the present invention.
  • Figs. 2a and 2b show examples of a highlight DS in the video data structure based on events in accordance with the present invention.
  • Figs. 3a to 3e show examples of a structure which links a semantic DS with a visualization DS in the video data structure in accordance with the present invention.
  • Fig. 4 shows a video browser based on the video data structure in accordance with the present invention.
  • Fig. 5 shows another video browser based on the video data structure in accordance with the present invention.
  • Figs. 1-3 show examples of a video data structure according to the present invention.
  • the video data structure of Figs. 1-3 is based on significant events of a video and supports a video browsing system based on content.
  • a video browser based on content is disclosed in co-pending application ?, and is fully incorporated herein.
  • the video data structure links relations between objects and changes in the relations between objects with corresponding objects and events of a video.
  • an object will generally be assumed to be a character or place in a video.
  • the video data structure allows browsing of an event based on relations between characters or relations between characters and places.
  • a visual description structure (DS) 101 is organized into a visualization DS 102, a syntactic structure DS 103, and a semantic structure DS 104.
  • the visualization DS 102 is used for displaying a summary of either an entire video or a segment of a video, and includes highlight data or key frame data. That is, the visualization DS 102 is organized into at least one reference to segments 113 which links video segments to be displayed, a key frame view DS 114 which is used for displaying summary video data based on key frames, and a highlight view DS 115 which is used for displaying summary video data as a video highlight.
  • the highlight view DS 115 may be configured as a highlight view DS 201 shown in Fig. 2a or a highlight view DS shown in Fig. 2b.
  • a plot of a video may be summarized briefly or summarized with greater amounts of detail.
  • the highlight view DS 201 is organized into a level 202 which has multiple levels of highlight data based upon a degree of detail in summarizing a video.
  • the highlight data corresponding to each level includes a segment 203 which links to video segments to be displayed as a video highlight.
  • the highlight view DS 204 is organized into a level 205 which has multiple levels of highlight data based upon a degree of detail in summarizing a video.
  • the highlight data corresponding to each level includes a time DS 206 which is a time period used in displaying a video highlight.
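A minimal sketch of how a browser might resolve a multi-level highlight of the Fig. 2a kind, where each level links to the segments shown at that degree of detail. The level numbers and segment identifiers are illustrative assumptions:

```python
# Hypothetical multi-level highlight data (Fig. 2a style): each level links
# to the segments played at that degree of detail; higher levels add detail.
highlight_levels = {
    1: ["seg_03"],                                 # briefest summary
    2: ["seg_03", "seg_07"],
    3: ["seg_01", "seg_03", "seg_07", "seg_09"],   # most detailed summary
}

def highlight_segments(requested_level: int) -> list:
    """Return the segment links for the requested degree of detail,
    falling back to the most detailed level at or below the request."""
    available = [lvl for lvl in highlight_levels if lvl <= requested_level]
    if not available:
        return []
    return highlight_levels[max(available)]
```

A Fig. 2b style highlight would store time periods (start/duration pairs) per level instead of segment links, but the lookup would work the same way.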
  • the syntactic structure DS 103 is used for displaying the actual video and includes actual video segments to be displayed.
  • the syntactic structure DS 103 is organized into actual video segment data of segment DS 105, and a corresponding time DS 106 which is the temporal position of the segment DS 105 in a video.
  • the semantic structure DS 104 includes additional information describing a video, and is organized into an event DS 107 which represents event information, an object DS 108 which represents object information, and at least one event/object relation graph DS 109 which represents relation information.
  • the event DS 107 describes events, and the object DS 108 describes objects such as characters and places.
  • the event/object relation graph DS 109 describes constant relations or changes in relations between characters, relations between objects and places, or relations between objects and events.
  • the event/object relation graph DS 109 may include information of one or more relations.
  • a constant relation means either a relation between characters that cannot change throughout a video, such as a parent to child relation, or a relation which is most representative among variable relations between characters.
  • the event DS 107 includes reference data for linking to the visualization DS 102 to display summary data corresponding to a selected event, and reference data for linking to the segment DS 105 in the syntactic structure DS 103 to display actual video segments corresponding to the selected event.
  • when an event is selected to display a corresponding video segment of a video, the event DS 107 includes a reference to segment 110 which links to the actual video segments corresponding to the selected event, a reference to visualization 111 for displaying a summary of the actual video segments corresponding to the selected event, and an annotation DS 112 for explaining the selected event.
  • a notation such as {0,1}, {0,*}, or {1,*} indicates the allowed number of instances of the corresponding data.
  • the notation {0,1} for the visualization DS 102 indicates that the visual DS 101 can have zero or one visualization DS.
  • the notation {0,*} for the segment DS 105 indicates that the syntactic structure DS 103 may have from zero to any number of segment DSs.
  • the reference to visualization 111 may include at least one of the types of information shown in Figs. 3a-3e. Particularly, Fig. 3a shows a reference to highlight view DS 301 which links the event DS 107 to the highlight view DS 115 for displaying the video segment corresponding to the selected event as a highlight of the video segment.
  • Fig. 3d shows a reference to segment DS 304 which links the event DS 107 to one or more segment DSs 105 corresponding to video segments which describe the selected event. If the highlight view DS 115 includes the segment 203, as shown in Fig. 2a, the reference to segment DS 304 links to one or more segments 203 corresponding to the video segments which describe the selected event.
  • a time DS 305 is directly used when the highlight view DS 115 includes the time DS 206 as shown in Fig. 2b.
  • the time DS 305 represents temporal data for a video segment which describes a selected event.
  • the video data structure of Fig. 3e directly includes the corresponding time DSs rather than linking to the time DSs 206 of Fig. 2b.
  • one or more relations include data for linking object(s) with event(s) to display a video segment showing a relation among objects, places, and events. At this time, an object is determined to be a place or a character by an object type.
  • an object includes data for linking one or more objects with one or more places and events to display a relation among objects, places, and events.
  • a relation may include two or more objects, a relation name, and a relation type to display a constant relation between characters, the variable relations between characters, and a relation with an event which shows the constant and variable relations.
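One way to picture an event/object relation graph entry is as a record holding the participating objects, a relation name, a relation type (constant or variable), and links to the events that show the relation. All names and the lookup helper below are illustrative assumptions, not part of the patent's definitions:

```python
# Hypothetical relation graph: constant relations cannot change during the
# video; variable relations link to the events that show or change them.
relation_graph = [
    {"objects": ("character 1", "character 2"),
     "relation_name": "parent-child",
     "relation_type": "constant",
     "event_refs": []},
    {"objects": ("character 1", "character 2"),
     "relation_name": "relation 2",
     "relation_type": "variable",
     "event_refs": ["event 3", "event 6"]},
]

def events_for(obj_a: str, obj_b: str, relation_name: str) -> list:
    """Return the events linked to a selected relation between two objects."""
    for rel in relation_graph:
        if set(rel["objects"]) == {obj_a, obj_b} and rel["relation_name"] == relation_name:
            return rel["event_refs"]
    return []
```

Selecting a relation in the browser would call a lookup of this kind and then display the returned events as key frames.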
  • the video browser of the present invention allows a user to select one or more relations from a relation graph between an object and a place, and displays events which describe the selected relations. At this time, if an event is selected from the displayed events, summaries of video segments, such as highlight or key frame data, corresponding to the selected event would be displayed to summarize the corresponding event.
  • a user may designate whether to display a summary of a video segment by highlight or key frame data, or a browsing system may pre-designate the form of display.
  • the video browser of the present invention allows a user to select relations between characters from a graph showing a constant relation between characters and variable relations between characters. At this time, events which show the selected relation(s) are displayed. If an event is selected from the displayed events, a summary of video segments corresponding to the selected event, such as highlight or key frame data, would be displayed to summarize the corresponding event. As discussed above, a user may designate whether to display a summary of a video segment by highlight or key frame data, or a browsing system may pre-designate the form of display.
  • events may be displayed by a key frame or a text data which describes the event, or a combination of the key frame and the text data.
  • the video browser displays the highlight for the period of the time DSs 206 corresponding to the summary of the video segment.
  • Fig. 4 shows an example screen of a video browser implemented with the present video data structure, in which a video can easily be understood and browsed based on constant and variable relations between characters in a video.
  • a video browser includes a character relation screen, a main screen, and a main scene screen.
  • the character relation screen displays main characters of a video on a character screen 401, and displays characters having relations with a character selected from the character screen 401 on a relation screen 402.
  • the relations between the selected character and related characters are displayed in a tree structure, where a constant relation is placed on the top level of the tree while variable relations are placed on lower levels of the tree.
  • the displayed constant relation may include additional information such as the number of variable relations in the lower tree structure. For example, in the displayed constant relation between 'character 1' and 'character 2,' the number '2' displayed above 'character 2' indicates that there are two variable relations in the tree structure.
  • events significant in a constant relation or a variable relation selected from the relation screen 402 are displayed by key frames on a main scene screen 403.
  • events significant in a relation may mean events which show the corresponding relation or events which brought about a change in the corresponding relation.
  • a highlight of an event corresponding to a key frame selected from the main scene screen 403 can then be displayed on a main screen 404.
  • when a user selects 'character 1' from among the characters on the character screen 401, the other characters 'character 2' through 'character n' related to 'character 1' are displayed on the relation screen 402 in a tree structure.
  • when 'relation 2' with 'character 2' is selected from the relation screen 402, significant events corresponding to 'relation 2' with 'character 2' are displayed on the main scene screen 403 as key frames.
  • summary video section data, i.e. a highlight, of 'event 6' selected from the main scene screen 403 can be reproduced and displayed on the main screen 404, according to commands input through a user interface 405.
  • the main characters are displayed on the character screen 401 based upon the object DS 108.
  • relations between 'character 1' and other characters having constant or variable relations with 'character 1' are displayed on the relation screen 402 based on the event/object relation graph DS 109.
  • when a variable 'relation 2' with 'character 2' is selected from the relation screen 402, key frames representing events significant to 'relation 2' with 'character 2' are displayed on the main scene screen 403 based upon the reference to segment 110.
  • when 'event 6' is selected from the main scene screen 403, key frame data or a highlight of 'event 6' can be displayed on the main screen 404 based upon the reference to visualization 111.
  • the key frame data or highlight may be displayed on the main screen 404 based respectively upon the key frame view DS 114 or the highlight view DS 115 of the visualization DS 102.
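The Fig. 4 walkthrough above chains lookups across the four screens: character screen 401, relation screen 402, main scene screen 403, main screen 404. A sketch of that browsing path, using small illustrative tables that stand in for the object DS, the relation graph DS, and the two references (none of the names below come from the patent itself):

```python
# Hypothetical tables: which relations each character has (screen 402),
# which events each relation shows (screen 403), and which highlight
# segments each event links to (screen 404).
relations_of = {"character 1": ["relation 1", "relation 2"]}
events_of = {("character 1", "relation 2"): ["event 4", "event 6"]}
highlight_of = {"event 6": ["seg_07", "seg_09"]}

def browse(character: str, relation: str, event: str) -> list:
    """Follow one path through the browser: character -> relation -> event
    -> highlight segments to reproduce on the main screen."""
    if relation not in relations_of.get(character, []):
        raise KeyError("relation not shown for this character")
    if event not in events_of.get((character, relation), []):
        raise KeyError("event not shown for this relation")
    return highlight_of[event]
```

For example, `browse("character 1", "relation 2", "event 6")` would yield the segment links whose highlight is played on the main screen.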
  • Fig. 5 shows another example screen of a video browser implemented by the present video data structure, in which a video can easily be understood and browsed based on object-place relations.
  • a video browser based on object-place relations is disclosed in co-pending U.S. Patent Application Serial No. 09/239,531, entitled “Contents-Based Video Story Browsing System,” and is fully incorporated herein.
  • a video browser includes a key frame screen and a story screen.
  • the key frame screen displays a relation graph between main characters and places by key frames on a relation screen 501, and significant events of a relation selected from the relation screen 501 are displayed on a text screen 502 by key frames with brief annotations.
  • a highlight of a significant event selected from the relation screen 501 can be reproduced and displayed on a main screen 503.
  • a video segment corresponding to an event is summarized and displayed by a highlight as described above.
  • a video segment corresponding to an event may be summarized and displayed using other data such as key frames.
  • the object DS 108 in Fig. 1 includes information which is used in displaying objects, such as a character or a place.
  • an object type may be included in the object DS 108 to determine whether an object is a place or character.
  • the semantic structure DS 104 may include the object DS 108, the event DS 107, a place DS, and an object/place/event relation graph DS rather than the event/object relation graph DS 109.
  • the video data structure according to the present invention for video browsing based on events has the following advantages.
  • the present video data structure includes semantic data, summary data, and linking data, so that an event in a video can be browsed using a highlight or key frames, based on a relation between characters or a relation between objects and places, thereby providing a user-friendly browsing system.
  • events can be browsed based on factors, such as character and place, that significantly act on the development of a story in a movie or drama. Accordingly, a portion of a video can be easily selected and displayed by a user to provide effective video browsing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)
PCT/KR2000/000969 1999-08-26 2000-08-25 Video data structure for video browsing based on event WO2001015017A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU67385/00A AU6738500A (en) 1999-08-26 2000-08-25 Video data structure for video browsing based on event

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1019990035689A KR100319158B1 (ko) 1999-08-26 1999-08-26 사건구간 기반 동영상 자료 생성방법 및 동영상 검색 방법 (Method for generating video data based on event sections and video retrieval method)
KR1999/35689 1999-08-26

Publications (1)

Publication Number Publication Date
WO2001015017A1 true WO2001015017A1 (en) 2001-03-01

Family

ID=19608822

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2000/000969 WO2001015017A1 (en) 1999-08-26 2000-08-25 Video data structure for video browsing based on event

Country Status (3)

Country Link
KR (1) KR100319158B1 (ko)
AU (1) AU6738500A (ko)
WO (1) WO2001015017A1 (ko)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002088925A1 (en) * 2001-04-30 2002-11-07 The Commonwealth Of Australia A data processing and observation system
EP1577795A3 (en) * 2003-03-15 2006-08-30 Oculus Info Inc. System and Method for Visualising Connected Temporal and Spatial Information as an Integrated Visual Representation on a User Interface
AU2002249011B2 (en) * 2001-04-30 2008-05-15 The Commonwealth Of Australia An event handling system
WO2009086194A2 (en) * 2007-12-19 2009-07-09 Nevins David C Apparatus, system, and method for organizing information by time and place
CN107077595A (zh) * 2014-09-08 2017-08-18 谷歌公司 选择和呈现代表性帧以用于视频预览
US10530894B2 (en) 2012-09-17 2020-01-07 Exaptive, Inc. Combinatorial application framework for interoperability and repurposing of code components

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100518846B1 (ko) * 1999-08-26 2005-09-30 엘지전자 주식회사 내용기반 동영상 검색 및 브라우징을 위한 동영상 데이타 구성방법
KR100429793B1 (ko) * 2001-02-05 2004-05-03 삼성전자주식회사 썸네일이 기록된 기록매체
KR100392257B1 (ko) * 2001-02-12 2003-07-22 한국전자통신연구원 비쥬얼 특징 기반의 스포츠 비디오 요약 생성방법
KR101449430B1 (ko) 2007-08-31 2014-10-14 삼성전자주식회사 컨텐츠의 요약 재생 정보 생성 방법 및 장치
KR101938667B1 (ko) 2017-05-29 2019-01-16 엘지전자 주식회사 휴대 전자장치 및 그 제어 방법

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0719046A2 (en) * 1994-11-29 1996-06-26 Siemens Corporate Research, Inc. Method and apparatus for video data management
US5586316A (en) * 1993-07-09 1996-12-17 Hitachi, Ltd. System and method for information retrieval with scaled down image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08101845A (ja) * 1994-09-30 1996-04-16 Toshiba Corp 動画像検索システム
US5821945A (en) * 1995-02-03 1998-10-13 The Trustees Of Princeton University Method and apparatus for video browsing based on content and structure
JPH08249348A (ja) * 1995-03-13 1996-09-27 Hitachi Ltd 映像検索方法および装置
US5623589A (en) * 1995-03-31 1997-04-22 Intel Corporation Method and apparatus for incrementally browsing levels of stories

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5586316A (en) * 1993-07-09 1996-12-17 Hitachi, Ltd. System and method for information retrieval with scaled down image
EP0719046A2 (en) * 1994-11-29 1996-06-26 Siemens Corporate Research, Inc. Method and apparatus for video data management

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002249011B2 (en) * 2001-04-30 2008-05-15 The Commonwealth Of Australia An event handling system
WO2002088925A1 (en) * 2001-04-30 2002-11-07 The Commonwealth Of Australia A data processing and observation system
US7027055B2 (en) 2001-04-30 2006-04-11 The Commonwealth Of Australia Data view of a modelling system
US7085683B2 (en) 2001-04-30 2006-08-01 The Commonwealth Of Australia Data processing and observation system
US8121973B2 (en) 2001-04-30 2012-02-21 The Commonwealth Of Australia Event handling system
US7250944B2 (en) 2001-04-30 2007-07-31 The Commonweath Of Australia Geographic view of a modelling system
WO2002088926A1 (en) * 2001-04-30 2002-11-07 The Commonwealth Of Australia An event handling system
EP1577795A3 (en) * 2003-03-15 2006-08-30 Oculus Info Inc. System and Method for Visualising Connected Temporal and Spatial Information as an Integrated Visual Representation on a User Interface
US8994731B2 (en) 2007-12-19 2015-03-31 Temporal Llc Apparatus, system, and method for organizing information by time and place
WO2009086194A2 (en) * 2007-12-19 2009-07-09 Nevins David C Apparatus, system, and method for organizing information by time and place
WO2009086194A3 (en) * 2007-12-19 2009-09-24 Nevins David C Apparatus, system, and method for organizing information by time and place
US10530894B2 (en) 2012-09-17 2020-01-07 Exaptive, Inc. Combinatorial application framework for interoperability and repurposing of code components
CN107077595A (zh) * 2014-09-08 2017-08-18 谷歌公司 选择和呈现代表性帧以用于视频预览
US12014542B2 (en) 2014-09-08 2024-06-18 Google Llc Selecting and presenting representative frames for video previews

Also Published As

Publication number Publication date
KR20010019343A (ko) 2001-03-15
AU6738500A (en) 2001-03-19
KR100319158B1 (ko) 2001-12-29

Similar Documents

Publication Publication Date Title
US20060075361A1 (en) Video browser based on character relation
US7181757B1 (en) Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
US6602297B1 (en) Motional video browsing data structure and browsing method therefor
US7698720B2 (en) Content blocking
JP4518572B2 (ja) 情報の電子データベースの送信
US8526782B2 (en) Switched annotations in playing audiovisual works
US8762850B2 (en) Methods systems, and products for providing substitute content
EP1308956B1 (en) Method and apparatus for reproducing contents from information storage medium in interactive mode
AU751884B2 (en) Electronic program guide using markup language
JP4408768B2 (ja) 記述データの生成装置、記述データを利用したオーディオビジュアル装置
US20050021653A1 (en) Multimedia search and browsing method using multimedia user profile
JP2004517532A (ja) 非侵入的且つ視聴者主体で使用するために再利用可能なオブジェクトベースの製品情報をオーディオビジュアル番組に埋め込む方法
MXPA02000484A (es) Disposicion de television interactiva con foros de discusion.
JP2006101526A (ja) 合成キーフレームを利用した階層ビデオ要約方法及びビデオブラウジングインターフェース
EP1222634A4 (en) DESCRIPTION OF A VIDEO ABSTRACT AND SYSTEM AND METHOD FOR PRODUCING DATA DESCRIBED ABOVE FOR A DETAILED AND GENERAL OVERVIEW
WO2001015017A1 (en) Video data structure for video browsing based on event
CN100397893C (zh) 用于提供用户界面的方法和装置
WO2001015015A1 (en) Video data structure for video browsing based on content
GB2366109A (en) System for authoring contents of digital television
EP1085756A2 (en) Description framework for audiovisual content
US20060100977A1 (en) System and method for using embedded supplemental information
GB2374232A (en) Method of providing additional information for digital television broadcast

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP