US20120139839A1 - Methods and systems for media annotation, selection and display of additional information associated with a region of interest in video content - Google Patents
- Publication number
- US20120139839A1 (U.S. patent application Ser. No. 13/371,602)
- Authority
- US
- United States
- Prior art keywords
- region
- video content
- particular frame
- additional information
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/11—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4722—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
- H04N21/4725—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/858—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
- H04N21/8586—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
Abstract
Methods, systems, and processor-readable media for selecting a region within a particular frame of video content to access additional information about an area of interest associated with the region within the particular frame, and displaying the additional information, in response to selecting the region associated with the particular frame of video content to access the additional information about the area of interest associated with the region within the particular frame. A selection packet can be generated, which includes frame selection data associated with the particular frame of video content. The frame selection data can include data that is sufficient to identify the particular frame of video content.
Description
- This patent application is a continuation of U.S. patent application Ser. No. 12/976,148, entitled “Flick Intel Annotation Methods and Systems,” which was filed on Dec. 22, 2010 and which is incorporated herein by reference in its entirety. U.S. patent application Ser. No. 12/976,148 in turn claims the priority and benefit of U.S. provisional patent application 61/291,837, entitled “Systems and Methods for obtaining background data associated with a movie, show, or live sporting event”, filed on Dec. 31, 2009 and of U.S. Provisional Patent Application No. 61/419,268, filed Dec. 3, 2010, entitled “Flick Intel Annotation Systems and Webcast Infrastructure”. This patent application therefore claims priority to U.S. Provisional Patent Application Ser. No. 61/291,837 and U.S. Provisional Patent Application Ser. No. 61/419,268, which are incorporated herein by reference in their entireties.
- Embodiments relate to video content, video displays, and video compositing. Embodiments also relate to computer systems, user input devices, databases, and computer networks.
- People have watched video content on televisions and other audio-visual devices for decades. They have also used gaming systems, personal computers, handheld devices, and other devices to enjoy interactive content. They often have questions about places, people, and things appearing as the video content is displayed, and about the music they hear. Databases containing information about the content, such as the actors in a scene or the music being played, already exist and provide users with the ability to learn more.
- The existing database solutions provide information about elements appearing in a movie or scene, but only in a very general way. A person curious about a scene element can obtain information about the scene and hope that the information mentions the scene element in which the person is interested. Systems and methods that provide people with the ability to select a specific scene element and to obtain information about only that element are needed.
- The following summary is provided to facilitate an understanding of some of the innovative features unique to the embodiments and is not intended to be a full description. A full appreciation of the various aspects of the embodiments can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
- It is therefore an aspect of the embodiments that a media device can provide video content to a display device and that a person can view the video content as it is presented on the display device. A series of scenes or a time varying series of frames along with any audio dialog, music, or sound effects are examples of video content.
- It is another aspect of the embodiments that the person can choose a region on the display device. A region can be chosen with a pointing device or any other form of user input by which the person can indicate a spot on the display device and select that spot. Frame specification data can be generated when the person chooses the region. The frame specification data can identify a specific scene or frame within the video content.
- It is yet another aspect of the embodiments to provide an element identifier based on the region and the frame specification data. Element identifiers are uniquely associated with scene elements. The element identifier can be obtained by querying an annotation database that relates element identifiers to regions and frame specification data. Note that the element identifier in some embodiments can be provided by a human worker who views the scene or frame, looks to the region, and reports what appears at that location.
- A number of embodiments, preferred and alternative are disclosed herein. For example, in an embodiment, a method can be implemented, which includes selecting a region within a particular frame of video content to access additional information about an area of interest associated with the region within the particular frame; and displaying the additional information, in response to selecting the region associated with the particular frame of video content to access the additional information about the area of interest associated with the region within the particular frame. In another embodiment, a step can be provided for generating a selection packet that includes frame selection data associated with the particular frame of video content. In yet another embodiment, the frame selection data can include data that is sufficient to identify the particular frame of video content.
- In another embodiment, a step can be provided for detecting the region. In another embodiment, a step can be implemented for accessing the additional information from an annotated content stream. In still another embodiment, a step can be provided for storing the additional information in an annotation database. In another embodiment, the region within the particular frame of the video content can be a coordinate. In another embodiment, the region within the particular frame of the video content can be a plurality of coordinates. In still another embodiment, the region within the particular frame of the video content can include data indicative of a particular region.
- In another embodiment, a system can be implemented, which includes, for example, a computer-usable medium embodying computer code. Such computer program code can include instructions executable by the processor and configured for selecting a region within a particular frame of video content to access additional information about an area of interest associated with the region within the particular frame; and displaying the additional information, in response to selecting the region associated with the particular frame of video content to access the additional information about the area of interest associated with the region within the particular frame.
- In another embodiment, such instructions can be configured for generating a selection packet that includes frame selection data associated with the particular frame of video content. In still another embodiment, the aforementioned frame selection data can include data that is sufficient to identify the particular frame of video content. In still other embodiments, such instructions can be configured for detecting the region. In yet other embodiments, such instructions can be configured for accessing the additional information from an annotated content stream. In still other embodiments, such instructions can be further configured for storing (initially or at other times) the additional information in an annotation database.
- In yet another embodiment, a processor-readable medium storing code representing instructions to cause a processor to perform a process can be provided. Such code can comprise code to, for example, select a region within a particular frame of video content to access additional information about an area of interest associated with the region within the particular frame; and display the additional information, in response to selecting the region associated with the particular frame of video content to access the additional information about the area of interest associated with the region within the particular frame. In still another embodiment, such code can further comprise code to generate a selection packet that includes frame selection data associated with the particular frame of video content. In yet another embodiment, the frame selection data can include data that is sufficient to identify the particular frame of video content. In still another embodiment, such code can include code to detect the region. In still another embodiment, such code can comprise code to access the additional information from an annotated content stream and store the additional information in an annotation database.
- The accompanying figures, in which like reference numerals refer to identical or functionally similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate aspects of the embodiments and, together with the background, brief summary, and detailed description serve to explain the principles of the embodiments.
-
FIG. 1 illustrates element data being presented on a second display in response to the selection of a scene element on a first display in accordance with aspects of certain embodiments; -
FIG. 2 illustrates an annotation database providing element identifiers in response to a person selecting scene elements in accordance with aspects of the embodiments; -
FIG. 3 illustrates an annotation service providing element identifiers in response to a person selecting scene elements in accordance with aspects of the embodiments; and -
FIG. 4 illustrates an annotated content stream passing to a media device such that the media device produces element data in accordance with aspects of certain embodiments. - The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof. In general, the figures are not to scale.
- The embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments of the invention are shown. The embodiments disclosed herein can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
- Video content is a time varying presentation of scenes or video frames. Each frame can contain a number of scene elements such as actors, foreground items, background items, or other items. A person enjoying video content can select a scene element by specifying a screen region (e.g., a coordinate, group of coordinates, particular area, etc.) while the video content plays. Frame specification data identifies the specific frame or scene being displayed when the region is selected. The region in combination with the frame specification data is sufficient to identify the scene element that the person has chosen. Information about the scene element can then be presented to the person. An annotation database can associate scene elements with frame specification data and regions.
-
FIG. 1 illustrates element data being presented on a second display 119 in response to the selection of a scene element on a display 101 in accordance with aspects of certain embodiments. A media device 104 passes video content to the display 101 to be viewed by a person. The person can manipulate a selection device 112 to choose a region or coordinate(s) 102 (e.g., data indicative of a region, a coordinate, groups of coordinates, etc.) on a display device 101. The data indicative of the region or coordinate(s) 102 can then be passed to a media device 104. In some embodiments the selection device can detect the region or coordinate(s) 102. For example, the selection device 112 can detect the locations of emitters 106 and infer the screen position being pointed at from those emitter locations. In other embodiments the display 101 can detect the region or coordinate(s) 103. For example, the selection device can emit a light beam that the display device detects. Other common coordinate selection means include mice, trackballs, and touch sensors. More advanced pointing means can observe the person's body or eyeballs to thereby determine a coordinate. Clicking a button or some other action can generate an event indicating that a scene element is chosen. - The
media device 104 can generate a selection packet 107 that includes frame selection data and the region or coordinate(s) 102. The frame selection data is data that is sufficient to identify a specific frame or scene. For example, the frame selection data can be a media tag 108 and a timestamp 109. The media tag 108 can identify a particular movie, show, sporting event, advertisement, video clip, scene, or other unit of video content. A timestamp 109 specifies a time within the video content. In combination, a media tag and timestamp can specify a particular frame from amongst all the frames of video content that have ever been produced. - The
frame selection packet 107 can be formed into a query for an annotation database 111. The annotation database 111 can contain associations of element identifiers with frame selection data and regions (e.g., data indicative of a particular region or groups of regions, a coordinate, groups of coordinates, etc.). As such, the annotation database 111 can produce an element identifier 113 in response to the query. The element identifier 113 can identify a person 114, an item 115, music 116, a place 117, or something else. - The
element identifier 113 can then be passed to another server 118 that responds by producing element data for presentation to the person. Examples of element data include, but are not limited to: statistics on a person such as an athlete; a picture of a person, object, or place; an offer for purchase of an item, service, or song; and links to other media in which a person, item, or place appears. -
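The role of the additional data server 118 described above can be sketched as a simple keyed lookup. This is an illustrative assumption only; the identifiers and fields below are invented for the example:

```python
# Hypothetical element-data store: element identifier -> presentable data.
ELEMENT_DATA = {
    "element:athlete-42": {
        "type": "person",
        "statistics": {"points_per_game": 27.3},
        "picture_url": "https://example.com/athlete-42.jpg",
    },
    "element:red-sports-car": {
        "type": "item",
        "purchase_offer": "https://example.com/buy/red-sports-car",
    },
}

def element_data_for(element_id: str) -> dict:
    """Return presentable data for a scene element, or a not-found notice."""
    return ELEMENT_DATA.get(element_id,
                            {"type": "unknown", "element_id": element_id})

print(element_data_for("element:red-sports-car")["type"])  # item
```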
FIG. 2 illustrates an annotation database 111 providing element identifiers 211 in response to a person selecting scene elements in accordance with aspects of the embodiments. An annotation service/module 202 can produce annotated content 203 by annotating content 201. An annotation module is a device, algorithm, program, or other means that automatically annotates content. Image recognition algorithms can locate items within scenes and frames and thereby automatically provide annotation data. An annotation service is a service provider that annotates content. An annotation service provider can employ both human workers and annotation modules. - Annotation is a process wherein scene elements, each having an element identifier, are associated with media tags and space-time ranges. A space-time range identifies a range of times and positions at which a scene element appears. For example, a car can sit unmoving during an entire scene. The element identifier can specify the make, model, color, and trim level of the car, the media tag can identify a movie containing the scene, and the space-time range can specify the time range of the movie scene and the location of the car within the scene.
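The car example above can be made concrete with a minimal sketch of an annotation entry. The names, normalized screen coordinates, and times in seconds are assumptions for illustration:

```python
from dataclasses import dataclass

# Sketch of an annotation: an element identifier tied to a media tag
# and a space-time range (when and where the element appears on screen).

@dataclass(frozen=True)
class SpaceTimeRange:
    t_start: float  # seconds into the content
    t_end: float
    x_min: float    # normalized screen bounds, 0.0-1.0
    x_max: float
    y_min: float
    y_max: float

    def contains(self, t: float, x: float, y: float) -> bool:
        """True when a selection at time t and position (x, y) hits this range."""
        return (self.t_start <= t <= self.t_end
                and self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max)

@dataclass(frozen=True)
class Annotation:
    element_id: str        # e.g. a specific make/model/color/trim of car
    media_tag: str         # identifies the movie containing the scene
    where: SpaceTimeRange  # when and where the element appears

car = Annotation(
    element_id="element:blue-roadster-ex-trim",
    media_tag="movie-123",
    where=SpaceTimeRange(t_start=600.0, t_end=655.0,
                         x_min=0.1, x_max=0.4, y_min=0.5, y_max=0.9),
)

# A selection at 610 s on the parked car falls inside the range:
print(car.where.contains(t=610.0, x=0.25, y=0.7))  # True
```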
- The
content 201 can be passed to a media device 104 that produces a media stream 207 for presentation on a display device 206. A person 205 watching the display device 206 can use a selection device 112 to select a region on the display device 206. A selection packet 107 containing the coordinate and some frame specification data can then be passed to the annotation database 111, which responds by identifying the scene element 211. An additional data server 118 can produce element data 212 for that identified scene element 211. The element data 212 can then be presented to the person. -
FIG. 3 illustrates an annotation service providing element identifiers in response to a person selecting scene elements in accordance with aspects of the embodiments. The embodiment of FIG. 3 differs from that of FIG. 2 in that the content 201 is not necessarily annotated before being viewed by the person 205. The selection packet 107 is passed to the annotation service 301, where a human worker 302 or annotation module 303 determines what scene element the person 205 selected and creates a new annotation entry for incorporation into the annotation database 111. -
FIG. 4 illustrates an annotated content stream 401 passing to a media device 104 such that the media device 104 produces element data 407 in accordance with aspects of certain embodiments. Annotated content, such as annotated content 203 of FIG. 2, can be passed as an annotated content stream 401 to the media device 104. The annotated content stream 401 can include a content stream 402, an element stream 403, and element data 406. The media device 104 can then pass the content for presentation on the display 206 and store the element data 406 and the data in the element stream 403. The data in the element stream 403 can be formed into an annotation database, with the possible exception that no media tag is needed. No media tag is needed because all the annotations refer only to the content stream 402. As such, the element stream 403 is illustrated as containing only space-time ranges 404 and element identifiers 405. - The
media device 104, having assembled an annotation database and having stored element data 406, can produce element data 407 for a scene element selected by a person 205 without querying remote databases or accessing remote resources. - Note that in practice, the
content stream 402, element stream 403, and element data 406 can be transferred separately or in combination as streaming data. Means for transferring content, annotations, and element data include TV signals and storage devices such as DVDs or data disks. Furthermore, the element data 406 can be passed to the media device 104 or can be stored and accessed on a remote server. - It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
- As will be appreciated by one skilled in the art, the present invention can be embodied as a method, data processing system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit” or “module.” Furthermore, the present invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer-readable medium may be utilized, including hard disks, USB flash drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
- Computer program code for carrying out operations of the present invention may be written in an object-oriented programming language (e.g., Java, C++, etc.). However, the computer program code for carrying out operations of the present invention may also be written in conventional procedural programming languages such as the “C” programming language, in a visually oriented programming environment such as, for example, Visual Basic, or in functional programming languages such as LISP or Erlang.
- The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN), a wide area network (WAN), or a wireless data network (e.g., WiFi, WiMax, 802.xx, or a cellular network), or the connection may be made to an external computer via most third-party supported networks (for example, through the Internet using an Internet Service Provider).
- The invention is described in part above with reference to flowchart illustrations and/or block diagrams of methods, systems, computer program products, and data structures according to embodiments of the invention. It will be understood that each block of the illustrations, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block or blocks.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
- Note that computer program instructions and other processor-readable media discussed herein may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
- Based on the foregoing, it can be appreciated that a number of embodiments, preferred and alternative, are disclosed herein. For example, in one embodiment, a method can be implemented, which includes selecting a region within a particular frame of video content to access additional information about an area of interest associated with the region within the particular frame; and displaying the additional information, in response to selecting the region associated with the particular frame of video content to access the additional information about the area of interest associated with the region within the particular frame. In another embodiment, a step can be provided for generating a selection packet that includes frame selection data associated with the particular frame of video content. In yet another embodiment, the frame selection data can include data that is sufficient to identify the particular frame of video content.
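The patent discloses no source code, but the selection-packet embodiment described above can be sketched informally. The Python fragment below is purely illustrative: the field names (`content_id`, `frame_number`, normalized `x`/`y` coordinates) and the JSON wire format are assumptions, not part of the disclosure; they show one plausible way to package frame selection data that is "sufficient to identify the particular frame of video content."

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SelectionPacket:
    """Hypothetical record of a viewer selecting a region of a video frame."""
    content_id: str    # identifies the video asset (assumed field)
    frame_number: int  # frame selection data sufficient to identify the frame
    x: float           # selected coordinate, normalized to 0..1
    y: float
    timestamp: float   # wall-clock time of the selection event

def generate_selection_packet(content_id: str, frame_number: int,
                              x: float, y: float) -> str:
    """Serialize a selection event for transmission to an annotation service."""
    packet = SelectionPacket(content_id, frame_number, x, y, time.time())
    return json.dumps(asdict(packet))

# Example: viewer selects a point in frame 1800 of a hypothetical asset.
msg = generate_selection_packet("movie-42", 1800, 0.31, 0.52)
```

A server receiving `msg` could then use `frame_number` (or an equivalent timestamp) to locate the matching annotations for that frame.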
- In another embodiment, a step can be provided for detecting the region. In another embodiment, a step can be implemented for accessing the additional information from an annotated content stream. In still another embodiment, a step can be provided for storing the additional information in an annotation database. In another embodiment, the region within the particular frame of the video content can be a coordinate. In another embodiment, the region within the particular frame of the video content can be a plurality of coordinates. In still another embodiment, the region within the particular frame of the video content can include data indicative of a particular region.
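Where the region comprises "a plurality of coordinates," deciding whether a viewer's selection falls inside it amounts to a point-in-polygon test. The sketch below uses the standard ray-casting algorithm; it is not taken from the disclosure and assumes normalized frame coordinates.

```python
def point_in_region(px, py, region):
    """Ray-casting test: is the selected point (px, py) inside the region
    given as a plurality of (x, y) coordinate pairs (a polygon)?"""
    inside = False
    n = len(region)
    for i in range(n):
        x1, y1 = region[i]
        x2, y2 = region[(i + 1) % n]
        # Count edges that a horizontal ray from (px, py) would cross.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# A square region of interest in normalized coordinates.
square = [(0.2, 0.2), (0.6, 0.2), (0.6, 0.6), (0.2, 0.6)]
point_in_region(0.3, 0.3, square)  # True: selection hits the region
point_in_region(0.9, 0.9, square)  # False: selection misses it
```

The single-coordinate embodiment degenerates to a simple proximity check against that one point, so the polygon case is the more general of the two.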
- In another embodiment, a system can be implemented, which includes, for example, a processor and a computer-usable medium embodying computer code. Such computer code can include instructions executable by the processor and configured for selecting a region within a particular frame of video content to access additional information about an area of interest associated with the region within the particular frame; and displaying the additional information, in response to selecting the region associated with the particular frame of video content to access the additional information about the area of interest associated with the region within the particular frame.
- In another embodiment, such instructions can be configured for generating a selection packet that includes frame selection data associated with the particular frame of video content. In still another embodiment, the aforementioned frame selection data can include data that is sufficient to identify the particular frame of video content. In still other embodiments, such instructions can be configured for detecting the region. In yet other embodiments, such instructions can be configured for accessing the additional information from an annotated content stream. In still other embodiments, such instructions can be further configured for storing (initially or at other times) the additional information in an annotation database.
- In yet another embodiment, a processor-readable medium storing code representing instructions to cause a processor to perform a process can be provided. Such code can comprise code to, for example, select a region within a particular frame of video content to access additional information about an area of interest associated with the region within the particular frame; and display the additional information, in response to selecting the region associated with the particular frame of video content to access the additional information about the area of interest associated with the region within the particular frame. In still another embodiment, such code can further comprise code to generate a selection packet that includes frame selection data associated with the particular frame of video content. In yet another embodiment, the frame selection data can include data that is sufficient to identify the particular frame of video content. In still another embodiment, such code can include code to detect the region. In still another embodiment, such code can comprise code to access the additional information from an annotated content stream and store the additional information in an annotation database.
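The annotation-database embodiments above imply a lookup from a (frame, selection point) pair to the additional information to display. The following sketch is a hypothetical illustration only: the in-memory layout, the bounding-box representation of a region, and all names are assumptions, not the disclosed data structure.

```python
# Hypothetical annotation database: frame number -> annotated regions.
# Each region carries a bounding box (x0, y0, x1, y1) in normalized
# frame coordinates plus the additional information to display.
ANNOTATION_DB = {
    1800: [
        {"bbox": (0.2, 0.2, 0.6, 0.6),
         "info": "Lamp: mid-century table lamp featured in this scene"},
    ],
}

def lookup_additional_info(db, frame_number, x, y):
    """Return the additional information for the first annotated region of
    the identified frame that contains the selected point, or None if the
    selection does not hit any annotated region."""
    for entry in db.get(frame_number, []):
        x0, y0, x1, y1 = entry["bbox"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return entry["info"]
    return None
```

In a deployed system the database would more plausibly be keyed by content identifier as well as frame, and populated from the annotated content stream rather than hard-coded.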
- It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.
Claims (20)
1. A method, comprising:
selecting a region within a particular frame of video content to access additional information about an area of interest associated with said region within said particular frame; and
displaying said additional information, in response to selecting said region associated with said particular frame of video content to access said additional information about said area of interest associated with said region within said particular frame.
2. The method of claim 1 further comprising generating a selection packet that includes frame selection data associated with said particular frame of video content.
3. The method of claim 2 wherein said frame selection data comprises data that is sufficient to identify said particular frame of video content.
4. The method of claim 1 further comprising detecting said region.
5. The method of claim 1 further comprising accessing said additional information from an annotated content stream.
6. The method of claim 1 further comprising initially storing said additional information in an annotation database.
7. The method of claim 1 wherein said region within said particular frame of said video content comprises a coordinate.
8. The method of claim 1 wherein said region within said particular frame of said video content comprises a plurality of coordinates.
9. The method of claim 1 wherein said region within said particular frame of said video content comprises data indicative of a particular region.
10. A system, comprising:
a processor; and
a computer-usable medium embodying computer code, said computer code comprising instructions executable by said processor and configured for:
selecting a region within a particular frame of video content to access additional information about an area of interest associated with said region within said particular frame; and
displaying said additional information, in response to selecting said region associated with said particular frame of video content to access said additional information about said area of interest associated with said region within said particular frame.
11. The system of claim 10 wherein said instructions are further configured for generating a selection packet that includes frame selection data associated with said particular frame of video content.
12. The system of claim 11 wherein said frame selection data comprises data that is sufficient to identify said particular frame of video content.
13. The system of claim 10 wherein said instructions are further configured for detecting said region.
14. The system of claim 10 wherein said instructions are further configured for accessing said additional information from an annotated content stream.
15. The system of claim 10 wherein said instructions are further configured for initially storing said additional information in an annotation database.
16. A processor-readable medium storing code representing instructions to cause a processor to perform a process, said code comprising code to:
select a region within a particular frame of video content to access additional information about an area of interest associated with said region within said particular frame; and
display said additional information, in response to selecting said region associated with said particular frame of video content to access said additional information about said area of interest associated with said region within said particular frame.
17. The processor-readable medium of claim 16 wherein said code further comprises code to generate a selection packet that includes frame selection data associated with said particular frame of video content.
18. The processor-readable medium of claim 17 wherein said frame selection data comprises data that is sufficient to identify said particular frame of video content.
19. The processor-readable medium of claim 16 wherein said code further comprises code to detect said region.
20. The processor-readable medium of claim 16 wherein said code further comprises code to:
access said additional information from an annotated content stream; and
store said additional information in an annotation database.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/371,602 US20120139839A1 (en) | 2009-12-31 | 2012-02-13 | Methods and systems for media annotation, selection and display of additional information associated with a region of interest in video content |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US29183709P | 2009-12-31 | 2009-12-31 | |
US41926810P | 2010-12-03 | 2010-12-03 | |
US12/976,148 US9508387B2 (en) | 2009-12-31 | 2010-12-22 | Flick intel annotation methods and systems |
US13/371,602 US20120139839A1 (en) | 2009-12-31 | 2012-02-13 | Methods and systems for media annotation, selection and display of additional information associated with a region of interest in video content |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/976,148 Continuation US9508387B2 (en) | 2009-12-31 | 2010-12-22 | Flick intel annotation methods and systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120139839A1 true US20120139839A1 (en) | 2012-06-07 |
Family
ID=44187704
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/976,148 Active 2034-01-24 US9508387B2 (en) | 2009-12-31 | 2010-12-22 | Flick intel annotation methods and systems |
US13/345,404 Abandoned US20120106924A1 (en) | 2009-12-31 | 2012-01-06 | Systems and methods for media annotation, selection and display of background data |
US13/371,602 Abandoned US20120139839A1 (en) | 2009-12-31 | 2012-02-13 | Methods and systems for media annotation, selection and display of additional information associated with a region of interest in video content |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/976,148 Active 2034-01-24 US9508387B2 (en) | 2009-12-31 | 2010-12-22 | Flick intel annotation methods and systems |
US13/345,404 Abandoned US20120106924A1 (en) | 2009-12-31 | 2012-01-06 | Systems and methods for media annotation, selection and display of background data |
Country Status (1)
Country | Link |
---|---|
US (3) | US9508387B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9459762B2 (en) | 2011-09-27 | 2016-10-04 | Flick Intelligence, LLC | Methods, systems and processor-readable media for bidirectional communications and data sharing |
US9465451B2 (en) | 2009-12-31 | 2016-10-11 | Flick Intelligence, LLC | Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game |
US9508387B2 (en) | 2009-12-31 | 2016-11-29 | Flick Intelligence, LLC | Flick intel annotation methods and systems |
US10459976B2 (en) * | 2015-06-30 | 2019-10-29 | Canon Kabushiki Kaisha | Method, apparatus and system for applying an annotation to a portion of a video sequence |
US11496814B2 (en) | 2009-12-31 | 2022-11-08 | Flick Intelligence, LLC | Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game |
US20220374585A1 (en) * | 2021-05-19 | 2022-11-24 | Google Llc | User interfaces and tools for facilitating interactions with video content |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9413803B2 (en) | 2011-01-21 | 2016-08-09 | Qualcomm Incorporated | User input back channel for wireless displays |
US9787725B2 (en) * | 2011-01-21 | 2017-10-10 | Qualcomm Incorporated | User input back channel for wireless displays |
US20130003624A1 (en) * | 2011-01-21 | 2013-01-03 | Qualcomm Incorporated | User input back channel for wireless displays |
US10135900B2 (en) | 2011-01-21 | 2018-11-20 | Qualcomm Incorporated | User input back channel for wireless displays |
US20130013318A1 (en) | 2011-01-21 | 2013-01-10 | Qualcomm Incorporated | User input back channel for wireless displays |
US9113033B2 (en) * | 2012-08-28 | 2015-08-18 | Microsoft Technology Licensing, Llc | Mobile video conferencing with digital annotation |
US10922474B2 (en) * | 2015-03-24 | 2021-02-16 | Intel Corporation | Unstructured UI |
US20180310066A1 (en) * | 2016-08-09 | 2018-10-25 | Paronym Inc. | Moving image reproduction device, moving image reproduction method, moving image distribution system, storage medium with moving image reproduction program stored therein |
CN116452273B (en) * | 2023-06-20 | 2023-09-12 | 深圳可视科技有限公司 | Display screen control method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020049975A1 (en) * | 2000-04-05 | 2002-04-25 | Thomas William L. | Interactive wagering system with multiple display support |
US20020056136A1 (en) * | 1995-09-29 | 2002-05-09 | Wistendahl Douglass A. | System for converting existing TV content to interactive TV programs operated with a standard remote control and TV set-top box |
US20060080130A1 (en) * | 2004-10-08 | 2006-04-13 | Samit Choksi | Method that uses enterprise application integration to provide real-time proactive post-sales and pre-sales service over SIP/SIMPLE/XMPP networks |
US20080066129A1 (en) * | 2000-02-29 | 2008-03-13 | Goldpocket Interactive, Inc. | Method and Apparatus for Interaction with Hyperlinks in a Television Broadcast |
US20090327894A1 (en) * | 2008-04-15 | 2009-12-31 | Novafora, Inc. | Systems and methods for remote control of interactive video |
Family Cites Families (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4446139C2 (en) | 1993-12-30 | 2000-08-17 | Intel Corp | Method and device for highlighting objects in a conference system |
US6727887B1 (en) | 1995-01-05 | 2004-04-27 | International Business Machines Corporation | Wireless pointing device for remote cursor control |
JP2957938B2 (en) | 1995-03-31 | 1999-10-06 | ミツビシ・エレクトリック・インフォメイション・テクノロジー・センター・アメリカ・インコーポレイテッド | Window control system |
EP0827336B1 (en) * | 1996-08-30 | 2003-10-15 | Matsushita Electric Industrial Co., Ltd. | Digital broadcasting system, digital broadcasting apparatus, and associated receiver therefor |
US5995102A (en) | 1997-06-25 | 1999-11-30 | Comet Systems, Inc. | Server system and method for modifying a cursor image |
US6842190B1 (en) | 1999-07-06 | 2005-01-11 | Intel Corporation | Video bit stream extension with supplementary content information to aid in subsequent video processing |
KR100319157B1 (en) | 1999-09-22 | 2002-01-05 | 구자홍 | User profile data structure for specifying multiple user preference items, And method for multimedia contents filtering and searching using user profile data |
US6834308B1 (en) | 2000-02-17 | 2004-12-21 | Audible Magic Corporation | Method and apparatus for identifying media content presented on a media playing device |
US20020002707A1 (en) | 2000-06-29 | 2002-01-03 | Ekel Sylvain G. | System and method to display remote content |
US6970860B1 (en) | 2000-10-30 | 2005-11-29 | Microsoft Corporation | Semi-automatic annotation of multimedia objects |
US7562012B1 (en) | 2000-11-03 | 2009-07-14 | Audible Magic Corporation | Method and apparatus for creating a unique audio signature |
US20020083469A1 (en) | 2000-12-22 | 2002-06-27 | Koninklijke Philips Electronics N.V. | Embedding re-usable object-based product information in audiovisual programs for non-intrusive, viewer driven usage |
US7363278B2 (en) | 2001-04-05 | 2008-04-22 | Audible Magic Corporation | Copyright detection and protection system and method |
JP2002335518A (en) * | 2001-05-09 | 2002-11-22 | Fujitsu Ltd | Control unit for controlling display, server and program |
US6968337B2 (en) | 2001-07-10 | 2005-11-22 | Audible Magic Corporation | Method and apparatus for identifying an unknown work |
US7529659B2 (en) | 2005-09-28 | 2009-05-05 | Audible Magic Corporation | Method and apparatus for identifying an unknown work |
US7877438B2 (en) | 2001-07-20 | 2011-01-25 | Audible Magic Corporation | Method and apparatus for identifying new media content |
US20040205482A1 (en) | 2002-01-24 | 2004-10-14 | International Business Machines Corporation | Method and apparatus for active annotation of multimedia content |
US7421660B2 (en) | 2003-02-04 | 2008-09-02 | Cataphora, Inc. | Method and apparatus to visually present discussions for data mining purposes |
US7197234B1 (en) | 2002-05-24 | 2007-03-27 | Digeo, Inc. | System and method for processing subpicture data |
GB2399983A (en) | 2003-03-24 | 2004-09-29 | Canon Kk | Picture storage and retrieval system for telecommunication system |
US7242389B1 (en) | 2003-10-07 | 2007-07-10 | Microsoft Corporation | System and method for a large format collaborative display for sharing information |
US8065665B1 (en) | 2004-02-28 | 2011-11-22 | Oracle America, Inc. | Method and apparatus for correlating profile data |
JP4304108B2 (en) | 2004-03-31 | 2009-07-29 | 株式会社東芝 | METADATA DISTRIBUTION DEVICE, VIDEO REPRODUCTION DEVICE, AND VIDEO REPRODUCTION SYSTEM |
US8086575B2 (en) | 2004-09-23 | 2011-12-27 | Rovi Solutions Corporation | Methods and apparatus for integrating disparate media formats in a networked media system |
GB2437428A (en) | 2004-12-06 | 2007-10-24 | Dspv Ltd | System and method for generic symbol recognition and user authenication using a communication device with imaging capabilities |
US7852317B2 (en) | 2005-01-12 | 2010-12-14 | Thinkoptics, Inc. | Handheld device for handheld vision based absolute pointing system |
US7672968B2 (en) | 2005-05-12 | 2010-03-02 | Apple Inc. | Displaying a tooltip associated with a concurrently displayed database object |
US20060282776A1 (en) | 2005-06-10 | 2006-12-14 | Farmer Larry C | Multimedia and performance analysis tool |
US8402503B2 (en) | 2006-02-08 | 2013-03-19 | At& T Intellectual Property I, L.P. | Interactive program manager and methods for presenting program content |
US20070288640A1 (en) | 2006-06-07 | 2007-12-13 | Microsoft Corporation | Remote rendering of multiple mouse cursors |
US8089455B1 (en) | 2006-11-28 | 2012-01-03 | Wieder James W | Remote control with a single control button |
US20080208589A1 (en) | 2007-02-27 | 2008-08-28 | Cross Charles W | Presenting Supplemental Content For Digital Media Using A Multimodal Application |
US9360985B2 (en) | 2007-06-27 | 2016-06-07 | Scenera Technologies, Llc | Method and system for automatically linking a cursor to a hotspot in a hypervideo stream |
US8006314B2 (en) | 2007-07-27 | 2011-08-23 | Audible Magic Corporation | System for identifying content of digital data |
US8285121B2 (en) | 2007-10-07 | 2012-10-09 | Fall Front Wireless Ny, Llc | Digital network-based video tagging system |
KR101382499B1 (en) | 2007-10-22 | 2014-04-21 | 삼성전자주식회사 | Method for tagging video and apparatus for video player using the same |
US8229865B2 (en) | 2008-02-04 | 2012-07-24 | International Business Machines Corporation | Method and apparatus for hybrid tagging and browsing annotation for multimedia content |
US20090228492A1 (en) | 2008-03-10 | 2009-09-10 | Verizon Data Services Inc. | Apparatus, system, and method for tagging media content |
KR20090107626A (en) | 2008-04-10 | 2009-10-14 | 주식회사 인프라웨어 | Method for providing object information of moving picture using object area information |
US20100235379A1 (en) | 2008-06-19 | 2010-09-16 | Milan Blair Reichbach | Web-based multimedia annotation system |
US20090319884A1 (en) | 2008-06-23 | 2009-12-24 | Brian Scott Amento | Annotation based navigation of multimedia content |
US10248931B2 (en) | 2008-06-23 | 2019-04-02 | At&T Intellectual Property I, L.P. | Collaborative annotation of multimedia content |
US8665374B2 (en) | 2008-08-22 | 2014-03-04 | Disney Enterprises, Inc. | Interactive video insertions, and applications thereof |
US20100138517A1 (en) | 2008-12-02 | 2010-06-03 | At&T Intellectual Property I, L.P. | System and method for multimedia content brokering |
US20100154007A1 (en) | 2008-12-17 | 2010-06-17 | Jean Touboul | Embedded video advertising method and system |
US20100235865A1 (en) | 2009-03-12 | 2010-09-16 | Ubiquity Holdings | Tagging Video Content |
EP2430833A4 (en) * | 2009-05-13 | 2014-01-22 | Coincident Tv Inc | Playing and editing linked and annotated audiovisual works |
US9838744B2 (en) | 2009-12-03 | 2017-12-05 | Armin Moehrle | Automated process for segmenting and classifying video objects and auctioning rights to interactive sharable video objects |
KR20110118421A (en) | 2010-04-23 | 2011-10-31 | 엘지전자 주식회사 | Augmented remote controller, augmented remote controller controlling method and the system for the same |
US9508387B2 (en) | 2009-12-31 | 2016-11-29 | Flick Intelligence, LLC | Flick intel annotation methods and systems |
US20160182971A1 (en) | 2009-12-31 | 2016-06-23 | Flickintel, Llc | Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game |
US8751942B2 (en) | 2011-09-27 | 2014-06-10 | Flickintel, Llc | Method, system and processor-readable media for bidirectional communications and data sharing between wireless hand held devices and multimedia display systems |
US20110167447A1 (en) | 2010-01-05 | 2011-07-07 | Rovi Technologies Corporation | Systems and methods for providing a channel surfing application on a wireless communications device |
US8370878B2 (en) | 2010-03-17 | 2013-02-05 | Verizon Patent And Licensing Inc. | Mobile interface for accessing interactive television applications associated with displayed content |
US9183560B2 (en) | 2010-05-28 | 2015-11-10 | Daniel H. Abelow | Reality alternate |
US20120062468A1 (en) | 2010-09-10 | 2012-03-15 | Yu-Jen Chen | Method of modifying an interface of a handheld device and related multimedia system |
- 2010
- 2010-12-22: US 12/976,148 filed; granted as US 9,508,387 B2 (Active)
- 2012
- 2012-01-06: US 13/345,404 filed; published as US 2012/0106924 A1 (Abandoned)
- 2012-02-13: US 13/371,602 filed; published as US 2012/0139839 A1 (Abandoned)
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9465451B2 (en) | 2009-12-31 | 2016-10-11 | Flick Intelligence, LLC | Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game |
US9508387B2 (en) | 2009-12-31 | 2016-11-29 | Flick Intelligence, LLC | Flick intel annotation methods and systems |
US11496814B2 (en) | 2009-12-31 | 2022-11-08 | Flick Intelligence, LLC | Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game |
US9459762B2 (en) | 2011-09-27 | 2016-10-04 | Flick Intelligence, LLC | Methods, systems and processor-readable media for bidirectional communications and data sharing |
US9965237B2 (en) | 2011-09-27 | 2018-05-08 | Flick Intelligence, LLC | Methods, systems and processor-readable media for bidirectional communications and data sharing |
US10459976B2 (en) * | 2015-06-30 | 2019-10-29 | Canon Kabushiki Kaisha | Method, apparatus and system for applying an annotation to a portion of a video sequence |
US20220374585A1 (en) * | 2021-05-19 | 2022-11-24 | Google Llc | User interfaces and tools for facilitating interactions with video content |
Also Published As
Publication number | Publication date |
---|---|
US9508387B2 (en) | 2016-11-29 |
US20110158603A1 (en) | 2011-06-30 |
US20120106924A1 (en) | 2012-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9508387B2 (en) | Flick intel annotation methods and systems | |
US11741110B2 (en) | Aiding discovery of program content by providing deeplinks into most interesting moments via social media | |
US11011206B2 (en) | User control for displaying tags associated with items in a video playback | |
US10609308B2 (en) | Overly non-video content on a mobile device | |
US8990690B2 (en) | Methods and apparatus for media navigation | |
KR102438752B1 (en) | Systems and methods for performing asr in the presence of heterograph | |
US9143699B2 (en) | Overlay non-video content on a mobile device | |
US8744237B2 (en) | Providing video presentation commentary | |
US9015788B2 (en) | Generation and provision of media metadata | |
TWI538520B (en) | Method for adding video information, method for displaying video information and devices using the same | |
CA2938477C (en) | Methods and apparatus to synchronize second screen content with audio/video programming using closed captioning data | |
US20120120296A1 (en) | Methods and Systems for Dynamically Presenting Enhanced Content During a Presentation of a Media Content Instance | |
KR20220121911A (en) | Systems and methods for presenting supplemental content in augmented reality | |
CN110168541B (en) | System and method for eliminating word ambiguity based on static and time knowledge graph | |
CN102595212A (en) | Simulated group interaction with multimedia content | |
CN103384253B (en) | The play system and its construction method of multimedia interaction function are presented in video | |
Kim | Authoring multisensorial content | |
US20150312633A1 (en) | Electronic system and method to render additional information with displayed media | |
TW201322740A (en) | Digitalized TV commercial product display system, method, and recording medium thereof | |
WO2011082083A2 (en) | Flick intel annotation methods and systems | |
JP2023141808A (en) | Moving image distribution device | |
TW201739264A (en) | Method and system for automatically embedding interactive elements into multimedia content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | | Owner name: FLICKINTEL, LLC, NEW MEXICO. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRUKAR, RICHARD H.;ORTIZ, LUIS M.;LOPEZ, KERMIT;REEL/FRAME:027691/0836. Effective date: 20120207 |
STCB | Information on status: application discontinuation | | Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |
AS | Assignment | | Owner name: FLICK INTELLIGENCE, LLC, NEW MEXICO. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FLICK INTEL, LLC;REEL/FRAME:049951/0783. Effective date: 20190601 |