WO2010117213A2 - Apparatus and method for providing information related to broadcasting programs - Google Patents
Apparatus and method for providing information related to broadcasting programs
- Publication number
- WO2010117213A2 (PCT/KR2010/002144)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- related information
- keyword
- scene
- keywords
- Prior art date: 2009-04-10
Images
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/74—Browsing; Visualisation therefor
- G06F16/748—Hypervideo
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7844—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/11—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4722—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
- H04N21/8133—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N21/8405—Generation or processing of descriptive data, e.g. content descriptors represented by keywords
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/147—Scene change detection
Definitions
- the following description relates to a technique for providing a user who is watching television with related information through web searching.
- users can get various kinds of information through Internet Protocol Television (IPTV), including scheduled times of broadcasting programs, captions, and so on.
- IPTV allows users to search for desired information through the network to which it is connected. For example, when an animal documentary is being broadcast, users can search for information about the featured animals by manipulating the IPTV or a set-top box connected to it.
- FIG. 1 is a diagram illustrating an example of an apparatus of providing information related to broadcast programs.
- FIG. 2 illustrates examples of objects.
- FIG. 3 illustrates an example of a keyword table.
- FIG. 4 illustrates examples of scene sections.
- FIG. 5 illustrates an example of a mapping relationship between scene sections and related information.
- FIG. 6 illustrates an example of a related information display screen.
- FIG. 7 illustrates another example of a related information display screen.
- FIG. 8 is a flowchart illustrating an example of a method of providing information related to a broadcast program.
- an apparatus of providing information related to a broadcast program including: an object detector to detect at least one object from a scene; a keyword generator to generate a keyword including a name and meaning information of the object; a section setting unit to set a scene section using the keyword; a related information searching unit to request searching of related information associated with the object using the keyword and receive the searched related information; and a related information provider to synchronize the received related information with the scene section and provide the related information synchronized with the scene section.
- a method of providing information related to a broadcast program including: detecting at least one object from a scene; generating a keyword including a name and meaning information of the object; setting a scene section using the keyword; requesting searching of related information associated with the object using the keyword and receiving the searched related information; and synchronizing the received related information with the scene section and providing the related information synchronized with the scene section.
- the keyword may be a word with little or no ambiguity.
- Ambiguity of an object name may be largely eliminated by adding category information to it. Ambiguity may be removed by adding an appropriate category to the object name with reference to an object name dictionary, in which object names are individually mapped to category names, by performing context analysis, or by using genre information.
- a scene section may be determined based on an amount of preserved keywords between scenes and may be a group of scenes that deal with the substantially same subject.
- FIG. 1 is a diagram illustrating an example of an apparatus 100 of providing information related to broadcasting programs.
- the broadcasting program-related information providing apparatus 100 may be installed in any of various wired/wireless terminals connected to a network, including digital TV, IPTV, a computer, a mobile phone, a smart phone, a set-top box and the like, which are capable of providing users with broadcasting programs.
- the broadcasting program-related information providing apparatus 100 includes a broadcast stream receiver 101, a stream processor 102, a display 103, an object detector 104, a keyword generator 105, a section setting unit 106, a related information searching unit 107 and a related information providing unit 108.
- the broadcast stream receiver 101 receives broadcast streams.
- the broadcast streams are broadcast data transmitted from a broadcasting station.
- the broadcast streams may contain video signals, audio signals, caption signals, Electronic Program Guide (EPG) signals, etc.
- EPG Electronic Program Guide
- the stream processor 102 processes the broadcast streams to cause scenes to be displayed on the display 103.
- the stream processor 102 may perform various kinds of image processing and sound processing.
- the display 103 displays the scenes.
- the display 103 may be a display such as an LCD monitor or an input/output device such as a touch screen.
- the object detector 104 detects objects or object names from the scenes displayed on the display 103.
- objects refer to characters, items, regions, etc. that are associated with or appear in the scenes. Detection of an object includes identifying the object and extracting its name. For example, the object detector 104 may identify objects displayed on the current screen and detect their names.
- the object detector 104 may detect objects using the following methods.
- the object detector 104 extracts character strings (or characters) from captions or telop character information of broadcast streams and analyzes the extracted character strings to detect objects. For example, the object detector 104 applies morphological analysis and part-of-speech tagging based on natural language processing to the character strings to detect nouns having meaningful information as objects.
- the object detector 104 converts sound of broadcast streams into text and analyzes the text to detect objects.
- the object detector 104 converts sound of broadcast streams into text to generate character strings (or characters) and analyzes the character strings to detect nouns having meaningful information as objects.
- the object detector 104 analyzes pictures of broadcast streams to detect objects.
- the object detector 104 may apply a character recognition algorithm to pictures extracted from broadcast streams to extract predetermined characters and detect objects from the extracted characters.
- the object detector 104 may apply an object recognition algorithm to the pictures of broadcast streams to identify predetermined portions of the pictures and then detect the names of objects corresponding to the identified portions.
- methods in which the object detector 104 detects objects are not limited to the above-described examples.
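- as an illustration of the caption-based method, the following Python sketch extracts tokens from a caption and keeps nouns as objects; the tiny noun lexicon and regex tokenizer are invented stand-ins for a real morphological analyzer and part-of-speech tagger.

```python
import re

# Invented stand-in for a POS tagger's noun lexicon; a real system would run
# morphological analysis and part-of-speech tagging over the caption.
NOUN_LEXICON = {"louvre", "museum", "france", "art", "work"}

def detect_objects(caption: str) -> list[str]:
    # crude tokenization standing in for morpheme segmentation
    tokens = re.findall(r"[a-z]+", caption.lower())
    # keep tokens "tagged" as nouns, preserving first-appearance order
    seen, objects = set(), []
    for t in tokens:
        if t in NOUN_LEXICON and t not in seen:
            seen.add(t)
            objects.append(t)
    return objects

caption = "The Louvre Museum in France has a collection of an enormous volume of art works"
print(detect_objects(caption))  # ['louvre', 'museum', 'france', 'art']
```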
- the keyword generator 105 generates keywords corresponding to the objects detected by the object detector 104.
- the keywords include the names and meaning information of the objects.
- the meaning information of the objects is to eliminate any ambiguity of the object names and may be category information for the objects. For example, when an object name "BAT", which may mean both a flying animal "bat" and sports equipment "bat", is detected, the keyword generator 105 may assign category information such as "animal" or "sports equipment" to the object name "BAT" to eliminate its ambiguity, thus generating a keyword "BAT/Animal" or "BAT/Sports equipment".
- the keyword generator 105 assigns meaning information to an object name to eliminate ambiguity from the object name in various ways, as follows.
- the keyword generator 105 may assign meaning information to an object name with reference to an object name dictionary.
- the object name dictionary is a word list in which object names are individually mapped to categories.
- the object name dictionary may include mapped words such as "BAT-animal" and "BAT-sports equipment".
- the keyword generator 105 estimates the probability that an object name belongs to each category and determines a category suitable for the object name based on the estimated probabilities.
- the probability that an object name belongs to a given category may be estimated based on a disambiguation model from natural language processing.
- the keyword generator 105 may analyze the context of an object name to assign appropriate meaning information to it. For example, when words such as "cave" and "night" appear before and/or after an object name "BAT", the keyword generator 105 may assign the "animal" category to the object name "BAT". For disambiguation, the keyword generator 105 may use machine learning, such as Bayesian methods, Conditional Random Fields, Support Vector Machines, or the like.
- the keyword generator 105 may assign meaning information to an object name using genre information. For example, when an object name "BAT" is detected while a program whose genre is "documentary" is being broadcast, the "animal" category is assigned to the object name "BAT". On the other hand, if the program genre is "sports", the object name "BAT" is assigned the "sports equipment" category.
- the genre information may also be acquired in various ways, for example, from EPG information of broadcast streams or by analyzing the name of the program. Further, the genre information may be acquired through a third-party service from a source other than the broadcasting station. However, the method of determining the genre of a broadcasting program is not limited to these examples, and any other appropriate method may be used.
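- a minimal sketch of how the three disambiguation strategies above might be combined; the dictionary entries, context cues, and genre-to-category map are invented, and the precedence order (dictionary, then genre, then context) is an illustrative choice rather than one fixed by this description.

```python
# Invented example data for the three strategies.
OBJECT_NAME_DICTIONARY = {"bat": ["animal", "sports equipment"]}
CONTEXT_CUES = {"animal": {"cave", "night", "wings"},
                "sports equipment": {"pitcher", "inning", "stadium"}}
GENRE_TO_CATEGORY = {"documentary": "animal", "sports": "sports equipment"}

def disambiguate(name: str, context: set[str], genre: str | None = None) -> str:
    candidates = OBJECT_NAME_DICTIONARY.get(name.lower(), [])
    if len(candidates) == 1:            # dictionary alone resolves the name
        return f"{name}/{candidates[0]}"
    if genre and GENRE_TO_CATEGORY.get(genre) in candidates:
        return f"{name}/{GENRE_TO_CATEGORY[genre]}"   # genre information
    # fall back to context analysis: pick the category sharing most cue words
    best = max(candidates, key=lambda c: len(CONTEXT_CUES[c] & context))
    return f"{name}/{best}"

print(disambiguate("BAT", {"cave", "night"}))          # BAT/animal
print(disambiguate("BAT", set(), genre="sports"))      # BAT/sports equipment
```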
- the section setting unit 106 sets a scene section using the keyword generated by the keyword generator 105.
- the scene section means a group of scenes that can be considered to deal with the substantially same subject.
- the section setting unit 106 may set a scene section based on the amount of preserved keywords between scenes.
- the amount of preserved keywords may be defined by the number of keywords extracted in common from successive scenes.
- the section setting unit 106 may set a scene section by determining a group of scenes between which the number of preserved keywords is equal to or greater than a threshold value. In other words, the section setting unit 106 may identify scenes that are considered to deal with substantially the same content and determine that group of scenes as a scene section.
- instead of using the amount of preserved keywords, the section setting unit 106 may determine scene sections by detecting times of scene conversion based on picture analysis or on combined picture/text analysis.
- the related information searching unit 107 requests searching of information related to the objects using the keywords generated by the keyword generator 105.
- the related information searching unit 107 may transmit an inquiry generated based on a keyword to a search server and receive the result of searching from the search server.
- the related information searching unit 107 may request an advertisement item related to a keyword from a search server.
- the related information searching unit 107 may collect many kinds of related information from various web sites depending on the category of a keyword. For example, if the category of a keyword is a movie title, the related information searching unit 107 collects various information about the movie, such as theaters, actors, and synopsis, from a movie-information website. If the category of a keyword is an animal name, the related information searching unit 107 searches Wikipedia or an online encyclopedia. In the current example, the related information may include the results of such searching and advertisement items.
- the related information searching unit 107 may generate an extended inquiry by adding additional information to a keyword.
- the related information searching unit 107 may use a keyword including an object name and a category as an inquiry or may generate an extended inquiry by adding a detailed category to a keyword including an object name and a category.
- the related information searching unit 107 may also search for related information, including advertisements, from its own database instead of from a separate search server. Furthermore, the related information searching unit 107 may receive related information from a third-party information-providing site on the web, instead of from a search server, in order to provide information (for example, the names of stores, restaurants, etc.) that is not explicitly shown on the screen of a broadcasting program.
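- a small sketch of inquiry construction as described above: a keyword's object name and category form the base inquiry, and a detailed category, when available, extends it. The detailed-category table and the simple string concatenation are assumptions for illustration.

```python
# Hypothetical table mapping (object name, category) to a detailed category.
DETAILED_CATEGORIES = {("louvre", "museum"): "art museum"}

def build_inquiry(object_name: str, category: str) -> str:
    base = f"{object_name} {category}"
    detail = DETAILED_CATEGORIES.get((object_name.lower(), category.lower()))
    # extend the inquiry with the detailed category when one is known
    return f"{base} {detail}" if detail else base

print(build_inquiry("Louvre", "Museum"))  # "Louvre Museum art museum"
```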
- the related information provider 108 synchronizes the received related information with the corresponding scene section and provides the related information synchronized with the scene section to the display 103.
- the synchronization means matching the received related information to a time at which the corresponding object appears on the screen.
- the related information providing unit 108 may display representative pictures of scene sections, in association with related information corresponding to the keywords of those scene sections, on a portion of the display on which the broadcast screen is shown. In other words, the related information may be shown only while scenes considered to deal with substantially the same subject continue, and its display may stop when the scene changes to a substantially different subject.
- the related information providing unit 108 may rank received related information based on a user profile and primarily display highly ranked related information.
- the user profile may store personal information, such as the user's age and sex, and the user's preference information about broadcast programs.
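- a sketch of profile-based prioritization, assuming a hypothetical profile schema in which preference is expressed as a set of preferred topics; real profiles and scoring would be richer.

```python
user_profile = {"age": 30, "preferred_topics": {"travel", "history"}}

related_info = [
    {"id": "A1", "topics": {"history"}},        # history of the museum
    {"id": "A2", "topics": {"opening hours"}},  # practical information
    {"id": "A3", "topics": {"travel", "ad"}},   # travel-product advertisement
]

def rank(items, profile):
    # score each item by overlap between its topics and the user's preferences
    def score(item):
        return len(item["topics"] & profile["preferred_topics"])
    return sorted(items, key=score, reverse=True)

print([i["id"] for i in rank(related_info, user_profile)])  # ['A1', 'A3', 'A2']
```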
- FIG. 2 illustrates examples of objects.
- the object detector 104 analyzes a caption 202, a sound 203 and a specific portion on a current screen 201 to detect main objects 211, 212 and 213 with which the screen 201 is dealing.
- the object detector 104 extracts a caption 202 reading "The Louvre Museum in France has a collection of an enormous volume of art works" and performs morpheme analysis and part-of-speech tagging on the extracted caption 202 according to a natural language processing algorithm.
- the morpheme analysis may be a process of segmenting a caption in units of meaning and the part-of-speech tagging may be a process of tagging part-of-speech information to each meaning unit.
- the object detector 104 detects objects 211 from the caption 202 subjected to the morpheme analysis and part-of-speech tagging.
- the objects 211 may correspond to nouns having meaningful information. For example, the objects "France", "Louvre", and "Art Work" may be detected from the caption 202.
- the object detector 104 may extract a sound 203, for example a narration, and convert the extracted sound 203 into text.
- the text is analyzed to detect another object 212.
- an object "Seine River" 212 may be detected from a narration which can be heard to say "I went to the Louvre along the Seine River".
- the object detector 104 may detect another object 213 from a specific portion on the screen 201.
- the object detector 104 may detect another object "Pyramid" by applying an object recognition algorithm to the screen 201.
- FIG. 3 shows an example of a keyword table 301.
- the keyword table 301 includes object names 302 and meaning information 303.
- the object names 302 may be representative names indicating objects.
- the meaning information 303 may be category information to eliminate any ambiguity of the object names. For example, since it is ambiguous whether "Louvre" indicates the "Louvre Palace" or the "Louvre Museum", a keyword "Louvre/Museum" may be generated in which the category "Museum" is added as meaning information to "Louvre".
- the keyword generator 105 may assign the meaning information 303 to the object names 302 using an object name dictionary 305 stored in an object name database 304.
- the object name dictionary 305 may be a word list in which object names are individually mapped to categories.
- the keyword generator 105 analyzes the context of an object name to determine probabilistically to which category in the object name dictionary the object name belongs. The probabilistic determination may depend on Equation 1 below, which selects the category maximizing a score combining the probability P that a word belongs to a category with the mutual information I between words.
- in Equation 1, W_n represents the n-th word of an identified character string (the word to be disambiguated); W_{M-n}^{n-1} represents the n-1 words positioned to the left of W_n and the M-n words positioned to the right of W_n among the M words; W_m represents the m-th word of the M words; M represents the number of words included in the identified character string; n represents where the identified word is positioned among the M words; P represents the probability that the corresponding word belongs to a given category; and I is the amount of mutual information between two words, representing the probability that the two words will appear together.
- the keyword generator 105 may determine the category of "Louvre" using the object name dictionary 305 and the context of the word "Louvre". For example, if an object name "Art Work" or "Pyramid" often appears in the context of the word "Louvre", the word "Museum", having high relevancy to "Art Work" or "Pyramid", may be determined as the category of "Louvre".
- the keyword generator 105 may determine a category based on genre information.
- the genre information may be acquired from EPG information of broadcast streams, from a third party service received through the web, by analyzing a program name or program content, or the like.
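- the exact form of Equation 1 is not reproduced here; the sketch below assumes a simplified variant in which each candidate category is scored by summing pointwise mutual information (PMI) between context words and the category, estimated from hypothetical co-occurrence counts in a reference corpus.

```python
import math

# Hypothetical co-occurrence statistics: counts of (word, category) pairs.
COOC = {("art work", "museum"): 40, ("pyramid", "museum"): 25,
        ("art work", "palace"): 5, ("pyramid", "palace"): 10}
WORD_COUNT = {"art work": 50, "pyramid": 40}
CATEGORY_COUNT = {"museum": 70, "palace": 30}
TOTAL = 1000  # total observations in the reference corpus

def pmi(word: str, category: str) -> float:
    # pointwise mutual information: log of observed vs. independent co-occurrence
    p_joint = COOC.get((word, category), 0) / TOTAL
    p_word = WORD_COUNT[word] / TOTAL
    p_cat = CATEGORY_COUNT[category] / TOTAL
    return math.log(p_joint / (p_word * p_cat)) if p_joint > 0 else float("-inf")

def best_category(context: list[str], candidates: list[str]) -> str:
    return max(candidates, key=lambda c: sum(pmi(w, c) for w in context))

print(best_category(["art work", "pyramid"], ["museum", "palace"]))  # museum
```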
- FIG. 4 is a view for explaining an example of scene sections.
- reference numbers 401 through 405 represent broadcast scenes and letters of each scene represent keywords extracted from the scene.
- the section setting unit 106 identifies keywords for each scene. For example, the section setting unit 106 identifies keywords A, B, C, D and E from the first scene 401 and identifies keywords A, B, C, D and F from the second scene 402 following the first scene 401.
- the section setting unit 106 calculates the amount of preserved keywords between the scenes 401 through 405.
- the amount of preserved keywords may be defined by the number of keywords preserved despite scene conversion.
- the amount of preserved keywords may be calculated by Equation 2 below, in which K_i and K_j denote the keyword sets of two successive scenes: preserved(K_i, K_j) = 2|K_i ∩ K_j| / (|K_i| + |K_j|) × 100%.
- the section setting unit 106 compares the calculated amounts of preserved keywords to a threshold value to set scene sections. If the threshold value is 50%, the first and second scenes 401 and 402 between which the amount of preserved keywords is 80% are set to belong to the same scene section, and the third and fourth scenes 403 and 404 between which the amount of preserved keywords is 18.1% are set to belong to different scene sections.
- the section setting unit 106 may set the first to third scenes 401, 402 and 403 as a first scene section 410 and the fourth and fifth scenes 404 and 405 as a second scene section 420. That is, the section setting unit 106 groups scenes considered to deal with substantially the same subject, regardless of individual scene changes, as sketched below.
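- the following sketch reproduces the FIG. 4 grouping with hypothetical keyword sets chosen to match the percentages quoted above (80% between scenes 401 and 402, about 18.1% between scenes 403 and 404), taking Equation 2 to be the ratio 2|common| / (|K_i| + |K_j|).

```python
# Hypothetical keyword sets consistent with FIG. 4: scenes 401-403 share most
# keywords (one section), while the 403 -> 404 transition preserves only ~18%
# and therefore starts a new section.
scenes = {401: {"A", "B", "C", "D", "E"}, 402: {"A", "B", "C", "D", "F"},
          403: {"A", "B", "C", "F", "G"}, 404: {"F", "H", "I", "J", "K", "L"},
          405: {"H", "I", "J", "K", "M"}}

def preserved(k1: set[str], k2: set[str]) -> float:
    return 2 * len(k1 & k2) / (len(k1) + len(k2))   # Equation 2 (assumed form)

def set_sections(scenes: dict[int, set[str]], threshold: float = 0.5):
    ids = sorted(scenes)
    sections, current = [], [ids[0]]
    for prev, cur in zip(ids, ids[1:]):
        if preserved(scenes[prev], scenes[cur]) >= threshold:
            current.append(cur)          # same subject: extend the section
        else:
            sections.append(current)     # subject changed: close the section
            current = [cur]
    sections.append(current)
    return sections

print(set_sections(scenes))  # [[401, 402, 403], [404, 405]]
```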
- the scene section setting method described above with reference to FIGS. 1 and 4 is exemplary, and it is also possible to set scene sections based on the picture statistics of scenes or the text statistics of scenes instead of using the amounts of preserved keywords between scenes.
- FIG. 5 illustrates an example of a mapping relation between scene sections and related information 501.
- the related information 501 may be various kinds of information related to keywords.
- the related information 501 may include the results of searching by inquiries generated based on keywords and various advertisement items associated with the keywords.
- related information A may be a group of information associated with a keyword A and may include the results (for example, A1 and A2) of searching and advertisement information (for example, A3).
- the related information 501 is synchronized with scene sections 502. That is, related information for a certain keyword is mapped to a scene section to which the keyword belongs. For example, referring to FIGS. 4 and 5, related information A is synchronized with and provided in a scene section 1 since the corresponding keyword A appears in the scene section 1, and related information F may be synchronized with and provided in the scene sections 1 and 2 since the corresponding keyword F appears in both the scene sections 1 and 2.
- the related information A is related information for a keyword "Louvre/Museum"
- A1 may be information about the history of the Louvre Museum
- A2 may be information about the opening hours of the Louvre Museum
- A3 may be an advertisement for a travel product containing a tour of the Louvre Museum.
- the related information provider 108 may prioritize the related information A1, A2 and A3 with reference to a user profile and provide them in the order of priority.
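- a sketch of the FIG. 5 synchronization: related information for a keyword is attached to every scene section in which that keyword appears, so it is shown only while those sections are on screen. The data values are illustrative.

```python
keyword_sections = {"A": [1], "F": [1, 2]}       # keyword -> scene sections
related_info = {"A": ["A1", "A2", "A3"], "F": ["F1"]}

def synchronize(keyword_sections, related_info):
    # build a map from each scene section to the related information shown there
    section_map: dict[int, list[str]] = {}
    for kw, sections in keyword_sections.items():
        for sec in sections:
            section_map.setdefault(sec, []).extend(related_info[kw])
    return section_map

print(synchronize(keyword_sections, related_info))
# {1: ['A1', 'A2', 'A3', 'F1'], 2: ['F1']}
```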
- FIG. 6 illustrates an example of a related information display screen.
- related information 602 may be synchronized with the scene section corresponding to the screen currently being broadcast and displayed on the lower portion of the screen. Accordingly, if the scene section changes due to scene conversion, the related information 602 may be changed accordingly.
- an icon 601 notifying the creation of new related information may be displayed on the upper portion of the screen.
- a user may manipulate a remote control to select the icon 601 and display the related information 602 on the screen.
- FIG. 7 illustrates another example of a related information display screen.
- representative scenes 701-a through 701-f may be displayed on the lower portion of the screen.
- each representative scene, for example the scene 701-a, may be a representative frame of a scene section.
- the representative scene 701-a includes keywords corresponding to the scene section.
- if a representative scene is selected, related information 703 corresponding to the selected representative scene may be displayed on the right portion of the screen, and the screen may move to the scene section to which the selected representative scene belongs.
- the related information display screens illustrated in FIGS. 6 and 7 are examples for explaining synchronization of related information with scene sections, and the related information may be displayed using any other method. For example, it is possible to display all keywords that have appeared in the program currently being broadcast and allow a user to select any one of them so as to reproduce the program from the scene section in which the selected keyword appeared.
- FIG. 8 is a flowchart illustrating an example of a method 800 of providing information related to broadcast programs.
- first, the object detector 104 may identify objects with which a current broadcasting program deals, using at least one of video information, sound information, caption information, electronic program guide (EPG) information, telop character information and the like, and then detect the names of the objects (801).
- keywords including the names and meaning information of the objects are generated (802).
- the keyword generator 105 may determine the name of each object and a category to which the object name belongs to eliminate ambiguity of the object name, thus generating a keyword including the object name and the corresponding category.
- a category of each object may be determined by utilizing an object name dictionary in which a plurality of object names are stored for each category, by analyzing context of a part where the object name appears or by using genre information.
- the genre information may be acquired from additional information included in broadcasting streams, from a third party service that provides genre information through the web or by analyzing the generated keyword.
- a scene section is set using the keyword (803).
- the section setting unit 106 may set a scene section using the amount of preserved keywords defined by the number of keywords that appear in common between scenes.
- related information associated with the keyword is then searched for (804). The related information searching unit 107 may generate an inquiry based on the keyword, transfer the inquiry to a search server, and receive related information, including an advertisement associated with the keyword, from the search server.
- the found related information is synchronized with the scene section and provided to a user (805).
- the related information providing unit 108 may display representative scenes for scene sections in association with received related information on a portion of a screen on which scenes are displayed.
- the related information provider 108 may prioritize the received related information according to a user profile and provide the related information in order of priority.
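- the overall flow of method 800 may be summarized as the following self-contained sketch, in which every step is a stub standing in for the components described above; only the orchestration of operations (801) through (805) is illustrated, and all names and return values are assumptions.

```python
def detect(scene):              # (801) detect object names from the scene
    return ["Louvre"]

def generate_keywords(names):   # (802) attach meaning information
    return [f"{n}/Museum" for n in names]

def set_section(keywords):      # (803) group scenes on the same subject
    return {"section": 1, "keywords": keywords}

def search(keywords):           # (804) request related information
    return {k: [f"result for {k}", f"ad for {k}"] for k in keywords}

def provide(section, results):  # (805) synchronize and display
    return {section["section"]: results}

scene = object()  # placeholder for a decoded broadcast scene
section = set_section(generate_keywords(detect(scene)))
print(provide(section, search(section["keywords"])))
# {1: {'Louvre/Museum': ['result for Louvre/Museum', 'ad for Louvre/Museum']}}
```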
- the processes, functions, methods, and/or software described above may be recorded, stored, or fixed in one or more computer-readable storage media that include program instructions to be implemented by a computer to cause a processor to execute or perform the program instructions.
- the media may also include, alone or in combination with the program instructions, data files, data structures, and the like.
- the media and program instructions may be those specially designed and constructed, or they may be of the kind well-known and available to those having skill in the computer software arts.
- Examples of computer-readable media include magnetic media, such as hard disks, floppy disks, and magnetic tape; optical media, such as CD-ROM disks and DVDs; magneto-optical media, such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like.
- Examples of program instructions include machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
- the described hardware devices may be configured to act as one or more software modules in order to perform the operations and methods described above, or vice versa.
- a computer-readable storage medium may be distributed among computer systems connected through a network and computer-readable codes or program instructions may be stored and executed in a decentralized manner.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Marketing (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Library & Information Science (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Circuits Of Receivers In General (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (25)
- An apparatus of providing information related to a broadcast program, comprising: an object detector to detect at least one object from a scene; a keyword generator to generate a keyword including a name and meaning information of the object; a section setting unit to set a scene section using the keyword; a related information searching unit to request searching of related information associated with the object using the keyword and receive the searched related information; and a related information provider to synchronize the received related information with the scene section and provide the related information synchronized with the scene section.
- The apparatus of claim 1, wherein the section setting unit sets as the scene section a group of scenes between which an amount of preserved keywords is equal to or greater than a threshold value.
- The apparatus of claim 2, wherein the section setting unit sets the scene section using an amount of preserved keywords, the amount of preserved keywords defined by a number of keywords that exist in common between keywords generated from a first scene and keywords generated from a second scene.
- The apparatus of claim 1, wherein the keyword generator determines an object name corresponding to the object and a category to which the object name belongs to eliminate ambiguity from the object name, thus generating a keyword including the object name and the category.
- The apparatus of claim 4, wherein the keyword generator determines the category using an object name dictionary in which a plurality of object names are individually mapped to categories.
- The apparatus of claim 4, wherein the keyword generator determines the category by analyzing the context of a part where the keyword appears.
- The apparatus of claim 4, wherein the keyword generator determines the category by acquiring genre information of the scene.
- The apparatus of claim 7, wherein the genre information is acquired from additional information included in broadcast streams, from a third party service that provides genre information through the internet or by analyzing the generated keyword.
- The apparatus of claim 1, wherein the object detector detects the object using at least one of video information, sound information, caption information, Electronic Program Guide (EPG) information and telop character information, which are included in received broadcast streams.
- The apparatus of claim 1, further comprising a display to display the scene and the related information.
- The apparatus of claim 10, wherein the related information provider controls the display to provide the related information to a user.
- The apparatus of claim 11, wherein the related information provider controls the display to display information regarding the scene section in association with the related information on a portion of the display.
- The apparatus of claim 11, wherein the related information provider prioritizes the related information according to a user profile and provides the related information in the order of priority.
- A method of providing information related to a broadcast program, comprising: detecting at least one object from a scene; generating a keyword including a name and meaning information of the object; setting a scene section using the keyword; requesting searching of related information associated with the object using the keyword and receiving the searched related information; and synchronizing the received related information with the scene section and providing the related information synchronized with the scene section.
- The method of claim 14, wherein the setting of the scene section comprises setting as the scene section a group of scenes between which an amount of preserved keywords is equal to or greater than a threshold value.
- The method of claim 15, wherein the amount of preserved keywords are defined by a number of keywords that exist in common between keywords generated from a first scene and keywords generated from a second scene.
- The method of claim 14, wherein the generating of the keyword comprises generating the keyword by determining an object name corresponding to the object and a category to which the object name belongs to eliminate ambiguity from the object name.
- The method of claim 17, wherein the generating of the keyword comprises determining the category using an object name dictionary in which a plurality of object names are individually mapped to categories.
- The method of claim 17, wherein the generating of the keyword comprises determining the category by analyzing context of a part where the keyword appears.
- The method of claim 17, wherein the generating of the keyword comprises determining the category by acquiring genre information of the scene.
- The method of claim 20, wherein the genre information is acquired from additional information included in broadcast streams, from a third party service that provides genre information through a web or by analyzing the generated keyword.
- The method of claim 14, wherein the detecting of the object comprises detecting the object using at least one of video information, sound information, caption information, Electronic Program Guide (EPG) information and telop character information, which are included in received broadcast streams.
- The method of claim 14, wherein the providing of the related information comprises displaying the related information on a predetermined display.
- The method of claim 23, wherein the providing of the related information comprises controlling the predetermined display to display information regarding the scene section in association with the related information on a portion of the predetermined display.
- The method of claim 23, wherein the providing of the related information comprises prioritizing the related information according to a user profile and providing the related information in the order of priority.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/260,285 US9202523B2 (en) | 2009-04-10 | 2010-04-07 | Method and apparatus for providing information related to broadcast programs |
JP2012504615A JP5557401B2 (en) | 2009-04-10 | 2010-04-07 | Broadcast program related information providing apparatus and method |
EP10761874.6A EP2417767B1 (en) | 2009-04-10 | 2010-04-07 | Apparatus and method for providing information related to broadcasting programs |
CN201080010003.3A CN102342124B (en) | 2009-04-10 | 2010-04-07 | Method and apparatus for providing information related to broadcast programs |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2009-0031471 | 2009-04-10 | ||
KR20090031471 | 2009-04-10 | ||
KR10-2010-0019153 | 2010-03-03 | ||
KR1020100019153A KR101644789B1 (en) | 2009-04-10 | 2010-03-03 | Apparatus and Method for providing information related to broadcasting program |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2010117213A2 true WO2010117213A2 (en) | 2010-10-14 |
WO2010117213A3 WO2010117213A3 (en) | 2011-01-06 |
Family ID: 43132770
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2010/002144 WO2010117213A2 (en) | 2009-04-10 | 2010-04-07 | Apparatus and method for providing information related to broadcasting programs |
Country Status (6)
Country | Link |
---|---|
US (1) | US9202523B2 (en) |
EP (1) | EP2417767B1 (en) |
JP (1) | JP5557401B2 (en) |
KR (1) | KR101644789B1 (en) |
CN (1) | CN102342124B (en) |
WO (1) | WO2010117213A2 (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102572534A (en) * | 2010-12-09 | 2012-07-11 | 财团法人资讯工业策进会 | System and method for synchronizing with multimedia broadcast program |
CN102622451A (en) * | 2012-04-16 | 2012-08-01 | 上海交通大学 | System for automatically generating television program labels |
CN103024572A (en) * | 2012-12-14 | 2013-04-03 | 深圳创维-Rgb电子有限公司 | Television |
WO2013046218A2 (en) * | 2011-06-17 | 2013-04-04 | Tata Consultancy Services Limited | Method and system for differentiating plurality of scripts of text in broadcast video stream |
US20140126884A1 (en) * | 2011-06-29 | 2014-05-08 | Sony Computer Entertainment Inc. | Information processing apparatus and information processing method |
JP2014164350A (en) * | 2013-02-21 | 2014-09-08 | Nippon Telegr & Teleph Corp <Ntt> | Three-dimensional object generation device, three-dimensional object identification device, method, and program |
EP2846272A3 (en) * | 2013-09-06 | 2015-07-01 | Kabushiki Kaisha Toshiba | Electronic apparatus, method for controlling electronic apparatus, and information recording medium |
CN105589955A (en) * | 2015-12-21 | 2016-05-18 | 米科互动教育科技(北京)有限公司 | Multimedia course processing method and device |
EP3147907A1 (en) * | 2015-09-25 | 2017-03-29 | Xiaomi Inc. | Control method and apparatus for playing audio |
WO2017087641A1 (en) * | 2015-11-17 | 2017-05-26 | BrightSky Labs, Inc. | Recognition of interesting events in immersive video |
EP3333851A1 (en) * | 2016-12-09 | 2018-06-13 | The Boeing Company | Automated object and activity tracking in a live video feed |
US10070201B2 (en) | 2010-12-23 | 2018-09-04 | DISH Technologies L.L.C. | Recognition of images within a video based on a stored representation |
WO2020076014A1 (en) | 2018-10-08 | 2020-04-16 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling the electronic apparatus |
WO2020201780A1 (en) * | 2019-04-04 | 2020-10-08 | Google Llc | Video timed anchors |
WO2020251967A1 (en) * | 2019-06-11 | 2020-12-17 | Amazon Technologies, Inc. | Associating object related keywords with video metadata |
US11120490B1 (en) | 2019-06-05 | 2021-09-14 | Amazon Technologies, Inc. | Generating video segments based on video metadata |
EP3905707A1 (en) * | 2020-04-29 | 2021-11-03 | LG Electronics Inc. | Display device and operating method thereof |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101789831B1 (en) * | 2010-12-24 | 2017-10-25 | 한국전자통신연구원 | Apparatus and Method for Processing Broadcast Contents |
US9100669B2 (en) | 2011-05-12 | 2015-08-04 | At&T Intellectual Property I, Lp | Method and apparatus for associating micro-blogs with media programs |
US20130283330A1 (en) * | 2012-04-18 | 2013-10-24 | Harris Corporation | Architecture and system for group video distribution |
US9788055B2 (en) * | 2012-09-19 | 2017-10-10 | Google Inc. | Identification and presentation of internet-accessible content associated with currently playing television programs |
CN102833596B (en) * | 2012-09-20 | 2014-09-17 | 北京酷云互动科技有限公司 | Information transmitting method and device |
WO2014043987A1 (en) * | 2012-09-20 | 2014-03-27 | 北京酷云互动科技有限公司 | Information transmission method, device, and system |
CN103714087B (en) * | 2012-09-29 | 2017-06-27 | 联想(北京)有限公司 | The method and electronic equipment of a kind of information processing |
KR20140131166A (en) * | 2013-05-03 | 2014-11-12 | 삼성전자주식회사 | Display apparatus and searching method |
US20150026718A1 (en) * | 2013-07-19 | 2015-01-22 | United Video Properties, Inc. | Systems and methods for displaying a selectable advertisement when video has a background advertisement |
JP6266271B2 (en) * | 2013-09-04 | 2018-01-24 | 株式会社東芝 | Electronic device, electronic device control method, and computer program |
US20150319509A1 (en) * | 2014-05-02 | 2015-11-05 | Verizon Patent And Licensing Inc. | Modified search and advertisements for second screen devices |
CN104105002B (en) * | 2014-07-15 | 2018-12-21 | 百度在线网络技术(北京)有限公司 | The methods of exhibiting and device of audio-video document |
KR102217191B1 (en) * | 2014-11-05 | 2021-02-18 | 삼성전자주식회사 | Terminal device and information providing method thereof |
KR102019493B1 (en) * | 2015-02-09 | 2019-09-06 | 삼성전자주식회사 | Display apparatus and information providing method thereof |
CN105072459A (en) * | 2015-07-28 | 2015-11-18 | 无锡天脉聚源传媒科技有限公司 | Video information processing method and video information processing device |
US20170257678A1 (en) * | 2016-03-01 | 2017-09-07 | Comcast Cable Communications, Llc | Determining Advertisement Locations Based on Customer Interaction |
US11228817B2 (en) * | 2016-03-01 | 2022-01-18 | Comcast Cable Communications, Llc | Crowd-sourced program boundaries |
KR102557574B1 (en) * | 2016-05-17 | 2023-07-20 | 엘지전자 주식회사 | Digital device and controlling method thereof |
KR102202372B1 (en) * | 2017-01-17 | 2021-01-13 | 한국전자통신연구원 | System for creating interactive media in which user interaction can be recognized by reusing video content, and method of operating the system |
KR102402513B1 (en) | 2017-09-15 | 2022-05-27 | 삼성전자주식회사 | Method and apparatus for executing a content |
JP2019074949A (en) * | 2017-10-17 | 2019-05-16 | 株式会社Nttドコモ | Retrieval device and program |
KR102102164B1 (en) * | 2018-01-17 | 2020-04-20 | 오드컨셉 주식회사 | Method, apparatus and computer program for pre-processing video |
CN113508419B (en) | 2019-02-28 | 2024-09-13 | 斯塔特斯公司 | System and method for generating athlete tracking data from broadcast video |
WO2021149924A1 (en) * | 2020-01-20 | 2021-07-29 | 주식회사 씨오티커넥티드 | Method and apparatus for providing media enrichment |
WO2021177495A1 (en) * | 2020-03-06 | 2021-09-10 | 엘지전자 주식회사 | Natural language processing device |
US11875823B2 (en) * | 2020-04-06 | 2024-01-16 | Honeywell International Inc. | Hypermedia enabled procedures for industrial workflows on a voice driven platform |
US20220046237A1 (en) * | 2020-08-07 | 2022-02-10 | Tencent America LLC | Methods of parameter set selection in cloud gaming system |
KR102414993B1 (en) * | 2020-09-18 | 2022-06-30 | 네이버 주식회사 | Method and ststem for providing relevant infromation |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6240555B1 (en) * | 1996-03-29 | 2001-05-29 | Microsoft Corporation | Interactive entertainment system for presenting supplemental interactive content together with continuous video programs |
US5905981A (en) | 1996-12-09 | 1999-05-18 | Microsoft Corporation | Automatically associating archived multimedia content with current textual content |
WO1999066722A1 (en) * | 1998-06-17 | 1999-12-23 | Hitachi, Ltd. | Broadcasting method and broadcast receiver |
EP1684517A3 (en) | 1998-08-24 | 2010-05-26 | Sharp Kabushiki Kaisha | Information presenting system |
US7209942B1 (en) * | 1998-12-28 | 2007-04-24 | Kabushiki Kaisha Toshiba | Information providing method and apparatus, and information reception apparatus |
EP1079387A3 (en) | 1999-08-26 | 2003-07-09 | Matsushita Electric Industrial Co., Ltd. | Mechanism for storing information about recorded television broadcasts |
JP4205293B2 (en) * | 2000-07-04 | 2009-01-07 | 慶一 樋口 | Method of operating information providing service system, information providing / supplying apparatus and transmitter / receiver used therefor |
JP2004102494A (en) * | 2002-09-06 | 2004-04-02 | Nippon Telegr & Teleph Corp <Ntt> | Method and system for replying inquiry by the internet using agent |
US8037496B1 (en) * | 2002-12-27 | 2011-10-11 | At&T Intellectual Property Ii, L.P. | System and method for automatically authoring interactive television content |
JP4241261B2 (en) | 2003-08-19 | 2009-03-18 | キヤノン株式会社 | Metadata grant method and metadata grant apparatus |
JP2005327205A (en) * | 2004-05-17 | 2005-11-24 | Nippon Telegr & Teleph Corp <Ntt> | Information retrieval device, information retrieval method, information retrieval program, and information retrieval program recording medium |
US20060059120A1 (en) | 2004-08-27 | 2006-03-16 | Ziyou Xiong | Identifying video highlights using audio-visual objects |
WO2006038529A1 (en) * | 2004-10-01 | 2006-04-13 | Matsushita Electric Industrial Co., Ltd. | Channel contract proposing apparatus, method, program and integrated circuit |
JP4981026B2 (en) * | 2005-03-31 | 2012-07-18 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Composite news story synthesis |
WO2007086233A1 (en) | 2006-01-27 | 2007-08-02 | Pioneer Corporation | Advertisement distribution system, advertisement distribution method, broadcast reception device, and advertisement distribution device |
JP4618166B2 (en) * | 2006-03-07 | 2011-01-26 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
KR100792261B1 (en) | 2006-07-19 | 2008-01-07 | 삼성전자주식회사 | System for managing video based on topic and method usign the same and method for searching video based on topic |
US20080066107A1 (en) | 2006-09-12 | 2008-03-13 | Google Inc. | Using Viewing Signals in Targeted Video Advertising |
JP2009059335A (en) * | 2007-08-07 | 2009-03-19 | Sony Corp | Information processing apparatus, method, and program |
EP1965312A3 (en) | 2007-03-01 | 2010-02-10 | Sony Corporation | Information processing apparatus and method, program, and storage medium |
JP2008227909A (en) * | 2007-03-13 | 2008-09-25 | Matsushita Electric Ind Co Ltd | Video retrieval apparatus |
JP2008294943A (en) * | 2007-05-28 | 2008-12-04 | Hitachi Ltd | Program related information acquistion system and video recorder |
US20100229078A1 (en) | 2007-10-05 | 2010-09-09 | Yutaka Otsubo | Content display control apparatus, content display control method, program, and storage medium |
KR20090085791A (en) * | 2008-02-05 | 2009-08-10 | 삼성전자주식회사 | Apparatus for serving multimedia contents and method thereof, and multimedia contents service system having the same |
US11832024B2 (en) * | 2008-11-20 | 2023-11-28 | Comcast Cable Communications, Llc | Method and apparatus for delivering video and video-related content at sub-asset level |
2010
- 2010-03-03 KR KR1020100019153A patent/KR101644789B1/en active IP Right Grant
- 2010-04-07 EP EP10761874.6A patent/EP2417767B1/en active Active
- 2010-04-07 US US13/260,285 patent/US9202523B2/en active Active
- 2010-04-07 WO PCT/KR2010/002144 patent/WO2010117213A2/en active Application Filing
- 2010-04-07 JP JP2012504615A patent/JP5557401B2/en not_active Expired - Fee Related
- 2010-04-07 CN CN201080010003.3A patent/CN102342124B/en not_active Expired - Fee Related
Non-Patent Citations (2)
Title |
---|
None |
See also references of EP2417767A4 |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102572534A (en) * | 2010-12-09 | 2012-07-11 | 财团法人资讯工业策进会 | System and method for synchronizing with multimedia broadcast program |
US10070201B2 (en) | 2010-12-23 | 2018-09-04 | DISH Technologies L.L.C. | Recognition of images within a video based on a stored representation |
EP2656621B1 (en) * | 2010-12-23 | 2019-04-10 | EchoStar Technologies L.L.C. | Recognition of images within a video based on a stored representation |
WO2013046218A2 (en) * | 2011-06-17 | 2013-04-04 | Tata Consultancy Services Limited | Method and system for differentiating plurality of scripts of text in broadcast video stream |
WO2013046218A3 (en) * | 2011-06-17 | 2013-05-23 | Tata Consultancy Services Limited | Method and system for differentiating plurality of scripts of text in broadcast video stream |
US20140126884A1 (en) * | 2011-06-29 | 2014-05-08 | Sony Computer Entertainment Inc. | Information processing apparatus and information processing method |
US9147434B2 (en) * | 2011-06-29 | 2015-09-29 | Sony Corporation | Information processing apparatus and information processing method |
CN102622451A (en) * | 2012-04-16 | 2012-08-01 | 上海交通大学 | System for automatically generating television program labels |
CN103024572B (en) * | 2012-12-14 | 2015-08-26 | 深圳创维-Rgb电子有限公司 | A kind of television set |
CN103024572A (en) * | 2012-12-14 | 2013-04-03 | 深圳创维-Rgb电子有限公司 | Television |
JP2014164350A (en) * | 2013-02-21 | 2014-09-08 | Nippon Telegr & Teleph Corp <Ntt> | Three-dimensional object generation device, three-dimensional object identification device, method, and program |
EP2846272A3 (en) * | 2013-09-06 | 2015-07-01 | Kabushiki Kaisha Toshiba | Electronic apparatus, method for controlling electronic apparatus, and information recording medium |
EP3147907A1 (en) * | 2015-09-25 | 2017-03-29 | Xiaomi Inc. | Control method and apparatus for playing audio |
US10324682B2 (en) | 2015-09-25 | 2019-06-18 | Xiaomi Inc. | Method, apparatus, and storage medium for controlling audio playing based on playing environment |
WO2017087641A1 (en) * | 2015-11-17 | 2017-05-26 | BrightSky Labs, Inc. | Recognition of interesting events in immersive video |
CN105589955A (en) * | 2015-12-21 | 2016-05-18 | 米科互动教育科技(北京)有限公司 | Multimedia course processing method and device |
US20180165934A1 (en) * | 2016-12-09 | 2018-06-14 | The Boeing Company | Automated object and activity tracking in a live video feed |
CN108228705A (en) * | 2016-12-09 | 2018-06-29 | 波音公司 | Automatic object and activity tracking equipment, method and medium in live video feedback |
EP3333851A1 (en) * | 2016-12-09 | 2018-06-13 | The Boeing Company | Automated object and activity tracking in a live video feed |
US10607463B2 (en) | 2016-12-09 | 2020-03-31 | The Boeing Company | Automated object and activity tracking in a live video feed |
WO2020076014A1 (en) | 2018-10-08 | 2020-04-16 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling the electronic apparatus |
EP3818720A4 (en) * | 2018-10-08 | 2021-08-25 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling the electronic apparatus |
US11184679B2 (en) | 2018-10-08 | 2021-11-23 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling the electronic apparatus |
WO2020201780A1 (en) * | 2019-04-04 | 2020-10-08 | Google Llc | Video timed anchors |
US11823716B2 (en) | 2019-04-04 | 2023-11-21 | Google Llc | Video timed anchors |
US11120490B1 (en) | 2019-06-05 | 2021-09-14 | Amazon Technologies, Inc. | Generating video segments based on video metadata |
WO2020251967A1 (en) * | 2019-06-11 | 2020-12-17 | Amazon Technologies, Inc. | Associating object related keywords with video metadata |
EP3905707A1 (en) * | 2020-04-29 | 2021-11-03 | LG Electronics Inc. | Display device and operating method thereof |
EP4346220A1 (en) * | 2020-04-29 | 2024-04-03 | LG Electronics Inc. | Display device and operating method thereof |
Also Published As
Publication number | Publication date |
---|---|
KR101644789B1 (en) | 2016-08-04 |
JP5557401B2 (en) | 2014-07-23 |
EP2417767B1 (en) | 2020-11-04 |
CN102342124A (en) | 2012-02-01 |
KR20100113020A (en) | 2010-10-20 |
US9202523B2 (en) | 2015-12-01 |
JP2012523607A (en) | 2012-10-04 |
WO2010117213A3 (en) | 2011-01-06 |
US20120017239A1 (en) | 2012-01-19 |
CN102342124B (en) | 2015-07-01 |
EP2417767A4 (en) | 2013-07-31 |
EP2417767A2 (en) | 2012-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010117213A2 (en) | Apparatus and method for providing information related to broadcasting programs | |
US9008489B2 (en) | Keyword-tagging of scenes of interest within video content | |
KR100684484B1 (en) | Method and apparatus for linking a video segment to another video segment or information source | |
US8115869B2 (en) | Method and system for extracting relevant information from content metadata | |
US8209724B2 (en) | Method and system for providing access to information of potential interest to a user | |
WO2013055161A1 (en) | System and method for providing information regarding content | |
WO2015119335A1 (en) | Content recommendation method and device | |
WO2018097379A1 (en) | Method for inserting hash tag by image recognition, and software distribution server storing software for performing same method | |
KR101550886B1 (en) | Apparatus and method for generating additional information of moving picture contents | |
WO2011084039A9 (en) | Method for delivering media contents and apparatus thereof | |
US20130007057A1 (en) | Automatic image discovery and recommendation for displayed television content | |
WO1999041684A1 (en) | Processing and delivery of audio-video information | |
WO2015108255A1 (en) | Display apparatus, interactive server and method for providing response information | |
WO2012030103A2 (en) | Method and apparatus for providing preferred broadcast information | |
WO2018043923A1 (en) | Display device and control method therefor | |
WO2013165083A1 (en) | System and method for providing image-based video service | |
WO2021221209A1 (en) | Method and apparatus for searching for information inside video | |
WO2017164510A2 (en) | Voice data-based multimedia content tagging method, and system using same | |
JP2017005442A (en) | Content generation device and program | |
WO2011106087A1 (en) | Method for processing auxilary information for topic generation | |
JP5202217B2 (en) | Broadcast receiving apparatus and program for extracting current keywords from broadcast contents | |
WO2019225793A1 (en) | Ai video learning platform-based vod service system | |
KR20200024541A (en) | Providing Method of video contents searching and service device thereof | |
US8332890B2 (en) | Efficiently identifying television stations in a user friendly environment | |
CN109726320B (en) | Internet video crawler method, system and search system based on multi-source information fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080010003.3 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10761874 Country of ref document: EP Kind code of ref document: A2 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13260285 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012504615 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010761874 Country of ref document: EP |