US20120106924A1 - Systems and methods for media annotation, selection and display of background data - Google Patents

Systems and methods for media annotation, selection and display of background data

Info

Publication number
US20120106924A1
Authority
US
United States
Prior art keywords
data
frames
video content
coordinate
element identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/345,404
Inventor
Richard H. Krukar
Luis M. Ortiz
Kermit D. Lopez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Flick Intelligence LLC
Original Assignee
Flickintel LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Flickintel LLC filed Critical Flickintel LLC
Priority to US13/345,404
Assigned to FLICKINTEL, LLC. Assignment of assignors interest (see document for details). Assignors: KRUKAR, RICHARD H.; LOPEZ, KERMIT; ORTIZ, LUIS M.
Publication of US20120106924A1
Assigned to Flick Intelligence, LLC. Assignment of assignors interest (see document for details). Assignors: FLICK INTEL, LLC
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/11: Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
    • G11B 27/34: Indicating arrangements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316: Generation of visual interfaces involving specific graphical features for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4722: End-user interface for requesting additional data associated with the content
    • H04N 21/4725: End-user interface for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81: Monomedia components thereof
    • H04N 21/812: Monomedia components thereof involving advertisement data
    • H04N 21/85: Assembly of content; Generation of multimedia applications
    • H04N 21/858: Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N 21/8586: Linking data to content by using a URL
    • H04N 9/00: Details of colour television systems
    • H04N 9/79: Processing of colour television signals in connection with recording
    • H04N 9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N 9/82: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only
    • H04N 9/8205: Transformation of the television signal for recording, involving the multiplexing of an additional signal and the colour video signal


Abstract

Video content is a time-varying presentation of scenes or video frames. Each frame can contain a number of scene elements such as actors, foreground items, background items, or other items. A person enjoying video content can select a scene element by specifying a screen coordinate while the video content plays. Frame specification data identifies the specific frame or scene being displayed when the coordinate is selected. The coordinate in combination with the frame specification data is sufficient to identify the scene element that the person has chosen. Information about the scene element can then be presented to the person. An annotation database can relate the scene elements to the frame specification data and coordinates.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This patent application is a continuation of U.S. patent application Ser. No. 12/976,148, entitled “Flick Intel Annotation Methods and Systems,” which was filed on Dec. 22, 2010 and which is incorporated herein by reference in its entirety. U.S. patent application Ser. No. 12/976,148 in turn claims the priority and benefit of U.S. provisional patent application 61/291,837, entitled “Systems and Methods for obtaining background data associated with a movie, show, or live sporting event”, filed on Dec. 31, 2009 and of U.S. Provisional Patent Application No. 61/419,268, filed Dec. 3, 2010, entitled “FlickIntel Annotation Systems and Webcast Infrastructure”. This patent application therefore claims priority to U.S. Provisional Patent Applications 61/291,837 and 61/419,268, which are herein incorporated by reference.
  • TECHNICAL FIELD
  • Embodiments relate to video content, video displays, and video compositing. Embodiments also relate to computer systems, user input devices, databases, and computer networks.
  • BACKGROUND OF THE INVENTION
  • People have watched video content on televisions and other audio-visual devices for decades. They have also used gaming systems, personal computers, handheld devices, and other devices to enjoy interactive content. They often have questions about places, people, and things appearing as the video content is displayed, and about the music they hear. Databases containing information about the content, such as the actors in a scene or the music being played, already exist and provide users with the ability to learn more.
  • The existing database solutions provide information about elements appearing in a movie or scene, but only in a very general way. A person curious about a scene element can obtain information about the scene and hope that the information mentions the scene element in which the person is interested. Systems and methods that provide people with the ability to select a specific scene element and to obtain information about only that element are needed.
  • BRIEF SUMMARY
  • The following summary is provided to facilitate an understanding of some of the innovative features unique to the embodiments and is not intended to be a full description. A full appreciation of the various aspects of the embodiments can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
  • It is therefore an aspect of the embodiments that a media device can provide video content to a display device and that a person can view the video content as it is presented on the display device. A series of scenes or a time-varying series of frames, along with any audio dialog, music, or sound effects, is an example of video content.
  • It is another aspect of the embodiments that the person can choose a coordinate on the display device. A coordinate can be chosen with a pointing device or any other form of user input by which the person can indicate a spot on the display device and select that spot. Frame specification data can be generated when the person chooses the coordinate. The frame specification data can identify a specific scene or frame within the video content.
  • It is yet another aspect of the embodiments to provide an element identifier based on the coordinate and the frame specification data. Element identifiers are uniquely associated with scene elements. The element identifier can be obtained by querying an annotation database that relates element identifiers to coordinates and frame specification data. The element identifier can also be provided by a human worker who views the scene or frame, looks to the coordinate, and reports what appears at that location.
  • A number of embodiments, preferred and alternative, are disclosed herein. For example, in one embodiment, a system can be implemented, which includes an annotation module that automatically annotates content including video content comprising a plurality of frames; an annotation database that stores at least one element identifier, and wherein the annotation database communicates with the annotation module; and at least one element identifier provided by the annotation database in response to a query comprising frame specification data and a coordinate, wherein the frame specification data identifies which frame among the plurality of frames was displayed when the coordinate is selected. In yet another embodiment, a display device can display the video content comprising the plurality of frames, and a pointing device can select the coordinate on the display device. In still other embodiments, a media device can provide the video content to the display device.
  • In yet other embodiments, the aforementioned frame specification data can include a timestamp and a media tag. Such a media tag can identify the video content, wherein the timestamp identifies at least one frame among the plurality of frames of the video content and wherein the at least one frame is displayed when the coordinate is selected. In other embodiments, an additional data server can produce element data based on the at least one element identifier, and a data presentation can display the element data.
  • In still other embodiments, the at least one element identifier can correspond to an item for sale and the data presentation comprises an offer to purchase the item. In other embodiments, the at least one element identifier can correspond to a person and the data presentation can provide information about that person. In still other embodiments, the at least one element identifier can correspond to a song and the data presentation can comprise an offer to purchase a copy of the song. In still other embodiments, the at least one element identifier can correspond to a location and the data presentation can comprise travel information for reaching the location.
  • In another embodiment, a method can be implemented, which includes the steps of, for example, automatically annotating via an annotation module, content including video content comprising a plurality of frames; storing in an annotation database that communicates with the annotation module, at least one element identifier; and providing the at least one element identifier from the annotation database in response to a query comprising frame specification data and a coordinate, wherein the frame specification data identifies which frame among the plurality of frames was displayed when the coordinate is selected.
  • In another embodiment, steps or operations can be provided for displaying via a display device, the video content comprising the plurality of frames; and selecting via a pointing device, the coordinate on the display device. In still other embodiments, a step or operation can be implemented for providing the video content to the display device from a media device. In still other embodiments of the aforementioned method, the frame specification data comprises a timestamp and a media tag, wherein the media tag identifies the video content, wherein the timestamp identifies at least one frame among the plurality of frames of the video content and wherein the at least one frame is displayed when the coordinate is selected.
  • In other embodiments, steps can be implemented for providing an additional data server that produces element data based on the at least one element identifier; and providing a data presentation that displays the element data. In yet other embodiments of such a method, the at least one element identifier can correspond to an item for sale and the data presentation comprises an offer to purchase the item.
  • In still other embodiments of such a method, the at least one element identifier can correspond to a person and the data presentation provides information about that person. In yet another embodiment of such a method, the at least one element identifier can correspond to a song and the data presentation comprises an offer to purchase a copy of the song. In yet other embodiments, the at least one element identifier can correspond to a location and the data presentation comprises travel information for reaching the location.
  • In still other embodiments, a processor-readable medium can store code representing instructions to cause a processor to perform a process. Such code can comprise code to, for example: automatically annotate via an annotation module, content including video content comprising a plurality of frames; store in an annotation database that communicates with the annotation module, at least one element identifier; and provide the at least one element identifier from the annotation database in response to a query comprising frame specification data and a coordinate, wherein the frame specification data identifies which frame among the plurality of frames was displayed when the coordinate is selected.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying figures, in which like reference numerals refer to identical or functionally similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate aspects of the embodiments and, together with the background, brief summary, and detailed description serve to explain the principles of the embodiments.
  • FIG. 1 illustrates element data being presented on a second display in response to the selection of a scene element on a first display in accordance with aspects of certain embodiments;
  • FIG. 2 illustrates an annotation database providing element identifiers in response to a person selecting scene elements in accordance with aspects of the embodiments;
  • FIG. 3 illustrates an annotation service providing element identifiers in response to a person selecting scene elements in accordance with aspects of the embodiments; and
  • FIG. 4 illustrates an annotated content stream passing to a media device such that the media device produces element data in accordance with aspects of certain embodiments.
  • DETAILED DESCRIPTION
  • The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof. In general, the figures are not to scale.
  • The embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments of the invention are shown. The embodiments disclosed herein can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
  • Video content is a time-varying presentation of scenes or video frames. Each frame can contain a number of scene elements such as actors, foreground items, background items, or other items. A person enjoying video content can select a scene element by specifying a screen coordinate while the video content plays. Frame specification data identifies the specific frame or scene being displayed when the coordinate is selected. The coordinate in combination with the frame specification data is sufficient to identify the scene element that the person has chosen. Information about the scene element can then be presented to the person. An annotation database can associate scene elements with frame specification data and coordinates.
  • FIG. 1 illustrates element data being presented on a second display 119 in response to the selection of a scene element on a display 101 in accordance with aspects of certain embodiments. A media device 104 passes video content to the display 101 to be viewed by a person. The person can manipulate a selection device 112 to choose a coordinate 102 on the display 101. The coordinate can then be passed to the media device 104. In some embodiments the selection device can detect the coordinate 105. For example, the selection device 112 can detect the locations of emitters 106 and infer the screen position being pointed at from those emitter locations. In other embodiments the display 101 can detect the coordinate 103. For example, the selection device can emit a light beam that the display device detects. Other common coordinate selection means include mice, trackballs, and touch sensors. More advanced pointing means can observe the person's body or eyeballs to thereby determine a coordinate. Clicking a button or some other action can generate an event indicating that a scene element is chosen.
  • The media device 104 can generate a selection packet 107 that includes frame selection data and the coordinate 102. The frame selection data is data that is sufficient to identify a specific frame or scene. For example, the frame selection data can be a media tag 108 and a timestamp 109. The media tag 108 can identify a particular movie, show, sporting event, advertisement, video clip, scene or other unit of video content. A timestamp 109 specifies a time within the video content. In combination, a media tag and timestamp can specify a particular frame from amongst all the frames of video content that have ever been produced.
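  • By way of illustration, the selection packet 107 can be modeled as a small data structure. The sketch below is not from the patent; the field names and the JSON wire format are assumptions chosen for the example.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class SelectionPacket:
    """Frame specification data (media tag + timestamp) plus the chosen coordinate."""
    media_tag: str      # identifies the movie, show, clip, or other content unit
    timestamp_ms: int   # time within the content; identifies the displayed frame
    x: float            # selected screen coordinate, normalized to [0, 1]
    y: float

    def to_json(self) -> str:
        """Serialize the packet for transmission as an annotation-database query."""
        return json.dumps(asdict(self))

# Example: a viewer clicks near screen center eight minutes into a movie.
packet = SelectionPacket("movie:example-title", 480_000, 0.52, 0.47)
print(packet.to_json())
```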
  • The frame selection packet 107 can be formed into a query for an annotation database 111. The annotation database 111 can contain associations between element identifiers and frame selection data and coordinates. As such, the annotation database 111 can produce an element identifier 113 in response to the query. The element identifier 113 can identify a person 114, an item 115, music 116, a place 117, or something else.
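  • A minimal sketch of such a query follows, assuming the associations are held in a relational table keyed by media tag, time range, and screen region; the schema is hypothetical, as the patent does not prescribe one.

```python
import sqlite3

# Hypothetical schema: each row associates an element identifier with a media
# tag, a time range, and a bounding region within the frame.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE annotations (
    media_tag TEXT, t_start_ms INT, t_end_ms INT,
    x_min REAL, y_min REAL, x_max REAL, y_max REAL,
    element_id TEXT)""")
db.execute("INSERT INTO annotations VALUES (?,?,?,?,?,?,?,?)",
           ("movie:example-title", 470_000, 500_000, 0.4, 0.3, 0.7, 0.6,
            "item:roadster"))

def query_element(media_tag, timestamp_ms, x, y):
    """Resolve a selection packet to an element identifier, or None on a miss."""
    row = db.execute(
        """SELECT element_id FROM annotations
           WHERE media_tag = ? AND ? BETWEEN t_start_ms AND t_end_ms
             AND ? BETWEEN x_min AND x_max AND ? BETWEEN y_min AND y_max""",
        (media_tag, timestamp_ms, x, y)).fetchone()
    return row[0] if row else None

print(query_element("movie:example-title", 480_000, 0.52, 0.47))  # item:roadster
```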
  • The element identifier 113 can then be passed to another server 118 that responds by producing element data for presentation to the person. Examples of element data include, but are not limited to: statistics on a person such as an athlete; a picture of a person, object or place; an offer for purchase of an item, service, or song; and links to other media in which a person, item, or place appears.
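  • Such a server might dispatch on the kind of element identified, as in the sketch below; the "kind:name" identifier convention and the returned fields are invented for illustration.

```python
def element_data(element_id: str) -> dict:
    """Produce element data for presentation, keyed on the identifier's kind."""
    kind, _, name = element_id.partition(":")
    if kind == "person":
        return {"name": name, "statistics": "career stats, filmography, ..."}
    if kind == "item":
        return {"item": name, "offer": f"Offer to purchase {name}"}
    if kind == "music":
        return {"song": name, "offer": f"Offer to purchase a copy of {name}"}
    if kind == "place":
        return {"place": name, "travel": f"Travel information for reaching {name}"}
    return {"element": element_id}  # fallback: no specialized presentation

print(element_data("item:roadster"))
```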
  • FIG. 2 illustrates an annotation database 111 providing element identifiers 211 in response to a person selecting scene elements in accordance with aspects of the embodiments. An annotation service/module 202 can produce annotated content 203 by annotating content 201. An annotation module is a device, algorithm, program, or other means that automatically annotates content. Image recognition algorithms can locate items within scenes and frames and thereby automatically provide annotation data. An annotation service is a service provider that annotates content. An annotation service provider can employ both human workers and annotation modules.
  • Annotation is a process wherein scene elements, each having an element identifier, are associated with media tags and space-time ranges. A space-time range identifies a range of times and positions at which a scene element appears. For example, a car can sit unmoving during an entire scene. The element identifier can specify the make, model, color, and trim level of the car; the media tag can identify a movie containing the scene; and the space-time range can specify the time range of the movie scene and the location of the car within the scene.
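  • Continuing the car example, one possible in-memory representation of an annotation entry is sketched below; the class and field names are assumptions, not terminology from the patent.

```python
from dataclasses import dataclass

@dataclass
class SpaceTimeRange:
    """A range of times and positions at which a scene element appears."""
    t_start_ms: int
    t_end_ms: int
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, t_ms: int, x: float, y: float) -> bool:
        """True when the selected time and coordinate fall inside the range."""
        return (self.t_start_ms <= t_ms <= self.t_end_ms
                and self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max)

@dataclass
class Annotation:
    element_id: str         # e.g. the make, model, color, and trim of the car
    media_tag: str          # e.g. the movie containing the scene
    extent: SpaceTimeRange  # when and where the element appears on screen

# The car sits unmoving for the whole scene, so one range covers it.
car = Annotation("item:roadster", "movie:example-title",
                 SpaceTimeRange(470_000, 500_000, 0.4, 0.3, 0.7, 0.6))
assert car.extent.contains(480_000, 0.52, 0.47)
```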
  • The content 201 can be passed to a media device 104 that produces a media stream 207 for presentation on a display device 206. A person 205 watching the display device 206 can use a selection device 112 to choose a coordinate on the display device 206. A selection packet 107 containing the coordinate and some frame specification data can then be passed to the annotation database 111 which responds by identifying the scene element 211. An additional data server 118 can produce element data 212 for that identified scene element 211. The element data 212 can then be presented to the person.
  • FIG. 3 illustrates an annotation service providing element identifiers in response to a person selecting scene elements in accordance with aspects of the embodiments. The embodiment of FIG. 3 differs from that of FIG. 2 in that the content 201 is not necessarily annotated before being viewed by the person 205. The selection packet 107 is passed to the annotation service 301 where a human worker 302 or annotation module 303 determines what scene element the person 205 selected and creates a new annotation entry for incorporation into the annotation database 111.
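  • This on-demand path can be sketched as a simple fallback; the lookup, identify, and store callables below are hypothetical stand-ins for the annotation database query, the human worker 302 or annotation module 303, and the database insert.

```python
def resolve(packet, lookup, identify, store):
    """Return an element identifier, annotating on demand when the lookup misses.

    lookup   -- queries the annotation database (returns None on a miss)
    identify -- the annotation service: a human worker or annotation module
                that determines what scene element the selection refers to
    store    -- incorporates the new annotation entry into the database
    """
    element_id = lookup(packet)
    if element_id is None:
        element_id = identify(packet)
        store(packet, element_id)
    return element_id
```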
  • FIG. 4 illustrates an annotated content stream 401 passing to a media device 104 such that the media device 104 produces element data 407 in accordance with aspects of certain embodiments. Annotated content, such as annotated content 203 of FIG. 2, can be passed as an annotated content stream 401 to the media device 104. The annotated content stream 401 can include a content stream 402, element stream 403, and element data 406. The media device 104 can then pass the content for presentation on the display 206 and store the element data 406 and the data in the element stream 403. The data in the element stream 403 can be formed into an annotation database, with the exception that no media tag is needed because all the annotations refer only to the content stream 402. As such, the element stream 403 is illustrated as containing only space-time ranges 404 and element identifiers 405.
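  • A media device assembling such a local lookup might proceed as below; this is a minimal sketch that reuses the hypothetical SpaceTimeRange class from the earlier annotation example and assumes the element stream yields (space-time range, element identifier) pairs.

```python
class LocalAnnotations:
    """Annotation lookup assembled on the media device from an element stream."""

    def __init__(self):
        self.entries = []        # (SpaceTimeRange, element_id) pairs
        self.element_data = {}   # element_id -> element data shipped alongside

    def ingest(self, element_stream, element_data):
        """Store streamed annotations and element data as they arrive."""
        self.entries.extend(element_stream)
        self.element_data.update(element_data)

    def select(self, t_ms: int, x: float, y: float):
        """Resolve a viewer selection without querying any remote resource."""
        for extent, element_id in self.entries:
            if extent.contains(t_ms, x, y):
                return self.element_data.get(element_id)
        return None
```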
  • The media device 104, having assembled an annotation database and having stored element data 406, can produce element data 407 for a scene element selected by a person 205 without querying remote databases or accessing remote resources.
  • Note that in practice, the content stream 402, element stream 403, and element data 406 can be transferred separately or in combination as streaming data. Means for transferring content, annotations, and element data include TV signals and storage devices such as DVD disks or data disks. Furthermore, the element data 406 can be passed to the media device 104 or can be stored and accessed on a remote server.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
  • As will be appreciated by one skilled in the art, the present invention can be embodied as a method, data processing system, or computer program product. Accordingly, the present invention may take the form of an entire hardware embodiment, an entire software embodiment or an embodiment combining software and hardware aspects all generally referred to herein as a “circuit” or “module.” Furthermore, the present invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer readable medium may be utilized including hard disks, USB Flash Drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
  • Computer program code for carrying out operations of the present invention may be written in an object oriented programming language (e.g., Java, C++, etc.). The computer program code, however, for carrying out operations of the present invention may also be written in conventional procedural programming languages such as the “C” programming language, in a visually oriented programming environment such as, for example, VisualBasic, or in functional programming languages such as LISP or Erlang.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN), a wide area network (WAN), or a wireless data network (e.g., WiFi, WiMax, 802.xx, or a cellular network), or the connection may be made to an external computer via most third-party supported networks (for example, through the Internet using an Internet Service Provider).
  • The invention is described in part below with reference to flowchart illustrations and/or block diagrams of methods, systems, computer program products, and data structures according to embodiments of the invention. It will be understood that each block of the illustrations, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block or blocks.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
  • Note that computer program instructions and other processor-readable media discussed herein may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
  • Based on the foregoing, it can be appreciated that a number of embodiments, preferred and alternative, are disclosed herein. For example, in one embodiment, a system can be implemented, which includes an annotation module that automatically annotates content including video content comprising a plurality of frames; an annotation database that stores at least one element identifier, and wherein the annotation database communicates with the annotation module; and at least one element identifier provided by the annotation database in response to a query comprising frame specification data and a coordinate, wherein the frame specification data identifies which frame among the plurality of frames was displayed when the coordinate is selected. In yet another embodiment, a display device can display the video content comprising the plurality of frames, and a pointing device can select the coordinate on the display device. In still other embodiments, a media device can provide the video content to the display device.
  • In yet other embodiments, the aforementioned frame specification data can include a timestamp and a media tag. Such a media tag can identify the video content, wherein the timestamp identifies at least one frame among the plurality of frames of the video content and wherein the at least one frame is displayed when the coordinate is selected. In other embodiments, an additional data server can produce element data based on the at least one element identifier, and a data presentation can display the element data.
  • In still other embodiments, the at least one element identifier can correspond to an item for sale and the data presentation comprises an offer to purchase the item. In other embodiments, the at least one element identifier can correspond to a person and the data presentation can provide information about that person. In still other embodiments, the at least one element identifier can correspond to a song and the data presentation can comprise an offer to purchase a copy of the song. In still other embodiments, the at least one element identifier can correspond to a location and the data presentation can comprise travel information for reaching the location.
  • In another embodiment, a method can be implemented, which includes the steps of, for example, automatically annotating via an annotation module, content including video content comprising a plurality of frames; storing in an annotation database that communicates with the annotation module, at least one element identifier; and providing the at least one element identifier from the annotation database in response to a query comprising frame specification data and a coordinate, wherein the frame specification data identifies which frame among the plurality of frames was displayed when the coordinate is selected.
  • In another embodiment, steps or operations can be provided for displaying via a display device, the video content comprising the plurality of frames; and selecting via a pointing device, the coordinate on the display device. In still other embodiments, a step or operation can be implemented for providing the video content to the display device from a media device. In still other embodiments of the aforementioned method, the frame specification data comprises a timestamp and a media tag, wherein the media tag identifies the video content, wherein the timestamp identifies at least one frame among the plurality of frames of the video content and wherein the at least one frame is displayed when the coordinate is selected. In other embodiments, steps can be implemented for providing an additional data server that produces element data based on the at least one element identifier; and providing a data presentation that displays the element data. In yet other embodiments of such a method, the at least one element identifier can correspond to an item for sale and the data presentation comprises an offer to purchase the item. In still other embodiments of such a method, the at least one element identifier can correspond to a person and the data presentation provides information about that person. In yet another embodiment of such a method, the at least one element identifier can correspond to a song and the data presentation comprises an offer to purchase a copy of the song. In yet other embodiments, the at least one element identifier can correspond to a location and the data presentation comprises travel information for reaching the location.
  • In still other embodiments, a processor-readable medium can store code representing instructions to cause a processor to perform a process. Such code can comprise code to, for example: automatically annotate, via an annotation module, content including video content comprising a plurality of frames; store, in an annotation database that communicates with the annotation module, at least one element identifier; and provide the at least one element identifier from the annotation database in response to a query comprising frame specification data and a coordinate, wherein the frame specification data identifies which frames among the plurality of frames were displayed when the coordinate is selected.
  • It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. It will also be appreciated that various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, and these too are intended to be encompassed by the following claims.

Claims (20)

1. A system, comprising:
an annotation module that automatically annotates content including video content comprising a plurality of frames;
an annotation database that stores at least one element identifier, and wherein said annotation database communicates with said annotation module; and
at least one element identifier provided by said annotation database in response to a query comprising frame specification data and a coordinate, wherein said frame specification data identifies which frames among said plurality of frames were displayed when said coordinate is selected.
2. The system of claim 1 further comprising:
a display device that displays said video content comprising said plurality of frames; and
a pointing device for selecting said coordinate on said display device.
3. The system of claim 2 further comprising a media device that provides said video content to said display device.
4. The system of claim 1 wherein the frame specification data comprises a timestamp and a media tag, wherein said media tag identifies said video content, wherein said timestamp identifies at least one frame among said plurality of frames of said video content and wherein said at least one frame is displayed when said coordinate is selected.
5. The system of claim 1 further comprising:
an additional data server that produces element data based on said at least one element identifier; and
a data presentation that displays said element data.
6. The system of claim 1 wherein said at least one element identifier corresponds to an item for sale and said data presentation comprises an offer to purchase said item.
7. The system of claim 1 wherein said at least one element identifier corresponds to a person and said data presentation provides information about that person.
8. The system of claim 1 wherein said at least one element identifier corresponds to a song and said data presentation comprises an offer to purchase a copy of said song.
9. The system of claim 1 wherein said at least one element identifier corresponds to a location and said data presentation comprises travel information for reaching said location.
10. A method, comprising:
automatically annotating via an annotation module, content including video content comprising a plurality of frames;
storing in an annotation database that communicates with said annotation module, at least one element identifier; and
providing said at least one element identifier from said annotation database in response to a query comprising frame specification data and a coordinate, wherein said frame specification data identifies which frames among said plurality of frames were displayed when said coordinate is selected.
11. The method of claim 10 further comprising:
displaying via a display device, said video content comprising said plurality of frames; and
selecting via a pointing device, said coordinate on said display device.
12. The method of claim 11 further comprising providing said video content to said display device from a media device.
13. The method of claim 10 wherein said frame specification data comprises a timestamp and a media tag, wherein said media tag identifies said video content, wherein said timestamp identifies at least one frame among said plurality of frames of said video content and wherein said at least one frame is displayed when said coordinate is selected.
14. The method of claim 10 further comprising:
providing an additional data server that produces element data based on said at least one element identifier; and
providing a data presentation that displays said element data.
15. The method of claim 10 wherein said at least one element identifier corresponds to an item for sale and said data presentation comprises an offer to purchase said item.
16. The method of claim 10 wherein said at least one element identifier corresponds to a person and said data presentation provides information about that person.
17. The method of claim 10 wherein said at least one element identifier corresponds to a song and said data presentation comprises an offer to purchase a copy of said song.
18. The method of claim 10 wherein said at least one element identifier corresponds to a location and said data presentation comprises travel information for reaching said location.
19. A processor-readable medium storing code representing instructions to cause a processor to perform a process, said code comprising code to:
automatically annotate via an annotation module, content including video content comprising a plurality of frames;
store in an annotation database that communicates with said annotation module, at least one element identifier; and
provide said at least one element identifier from said annotation database in response to a query comprising frame specification data and a coordinate, wherein said frame specification data identifies which frames among said plurality of frames were displayed when said coordinate is selected.
20. The processor-readable medium of claim 19, wherein said frame specification data comprises a timestamp and a media tag, wherein said media tag identifies said video content, wherein said timestamp identifies at least one frame among said plurality of frames of said video content and wherein said at least one frame is displayed when said coordinate is selected.
US13/345,404 2009-12-31 2012-01-06 Systems and methods for media annotation, selection and display of background data Abandoned US20120106924A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/345,404 US20120106924A1 (en) 2009-12-31 2012-01-06 Systems and methods for media annotation, selection and display of background data

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US29183709P 2009-12-31 2009-12-31
US41926810P 2010-12-03 2010-12-03
US12/976,148 US9508387B2 (en) 2009-12-31 2010-12-22 Flick intel annotation methods and systems
US13/345,404 US20120106924A1 (en) 2009-12-31 2012-01-06 Systems and methods for media annotation, selection and display of background data

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/976,148 Continuation US9508387B2 (en) 2009-12-31 2010-12-22 Flick intel annotation methods and systems

Publications (1)

Publication Number Publication Date
US20120106924A1 (en) 2012-05-03

Family

ID=44187704

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/976,148 Active 2034-01-24 US9508387B2 (en) 2009-12-31 2010-12-22 Flick intel annotation methods and systems
US13/345,404 Abandoned US20120106924A1 (en) 2009-12-31 2012-01-06 Systems and methods for media annotation, selection and display of background data
US13/371,602 Abandoned US20120139839A1 (en) 2009-12-31 2012-02-13 Methods and systems for media annotation, selection and display of additional information associated with a region of interest in video content

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/976,148 Active 2034-01-24 US9508387B2 (en) 2009-12-31 2010-12-22 Flick intel annotation methods and systems

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/371,602 Abandoned US20120139839A1 (en) 2009-12-31 2012-02-13 Methods and systems for media annotation, selection and display of additional information associated with a region of interest in video content

Country Status (1)

Country Link
US (3) US9508387B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11496814B2 (en) 2009-12-31 2022-11-08 Flick Intelligence, LLC Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9465451B2 (en) 2009-12-31 2016-10-11 Flick Intelligence, LLC Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US8751942B2 (en) 2011-09-27 2014-06-10 Flickintel, Llc Method, system and processor-readable media for bidirectional communications and data sharing between wireless hand held devices and multimedia display systems
US9508387B2 (en) 2009-12-31 2016-11-29 Flick Intelligence, LLC Flick intel annotation methods and systems
US9413803B2 (en) 2011-01-21 2016-08-09 Qualcomm Incorporated User input back channel for wireless displays
US20130003624A1 (en) * 2011-01-21 2013-01-03 Qualcomm Incorporated User input back channel for wireless displays
US20130013318A1 (en) 2011-01-21 2013-01-10 Qualcomm Incorporated User input back channel for wireless displays
US10135900B2 (en) 2011-01-21 2018-11-20 Qualcomm Incorporated User input back channel for wireless displays
US9787725B2 (en) * 2011-01-21 2017-10-10 Qualcomm Incorporated User input back channel for wireless displays
US9113033B2 (en) * 2012-08-28 2015-08-18 Microsoft Technology Licensing, Llc Mobile video conferencing with digital annotation
US10922474B2 (en) * 2015-03-24 2021-02-16 Intel Corporation Unstructured UI
AU2015203661A1 (en) * 2015-06-30 2017-01-19 Canon Kabushiki Kaisha Method, apparatus and system for applying an annotation to a portion of a video sequence
US20180310066A1 (en) * 2016-08-09 2018-10-25 Paronym Inc. Moving image reproduction device, moving image reproduction method, moving image distribution system, storage medium with moving image reproduction program stored therein
US20220374585A1 (en) * 2021-05-19 2022-11-24 Google Llc User interfaces and tools for facilitating interactions with video content
CN116452273B (en) * 2023-06-20 2023-09-12 深圳可视科技有限公司 Display screen control method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100050082A1 (en) * 2008-08-22 2010-02-25 Pvi Virtual Media Services, Llc Interactive Video Insertions, And Applications Thereof
US20100154007A1 (en) * 2008-12-17 2010-06-17 Jean Touboul Embedded video advertising method and system
US20110137753A1 (en) * 2009-12-03 2011-06-09 Armin Moehrle Automated process for segmenting and classifying video objects and auctioning rights to interactive sharable video objects

Family Cites Families (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4446139C2 (en) * 1993-12-30 2000-08-17 Intel Corp Method and device for highlighting objects in a conference system
US6727887B1 (en) * 1995-01-05 2004-04-27 International Business Machines Corporation Wireless pointing device for remote cursor control
JP2957938B2 (en) 1995-03-31 1999-10-06 ミツビシ・エレクトリック・インフォメイション・テクノロジー・センター・アメリカ・インコーポレイテッド Window control system
US20020056136A1 (en) 1995-09-29 2002-05-09 Wistendahl Douglass A. System for converting existing TV content to interactive TV programs operated with a standard remote control and TV set-top box
EP1246451A2 (en) * 1996-08-30 2002-10-02 Matsushita Electric Industrial Co., Ltd. Digital broadcasting system, digital broadcasting apparatus, and a reception apparatus for digital broadcast
US5995102A (en) * 1997-06-25 1999-11-30 Comet Systems, Inc. Server system and method for modifying a cursor image
US6842190B1 (en) * 1999-07-06 2005-01-11 Intel Corporation Video bit stream extension with supplementary content information to aid in subsequent video processing
KR100319157B1 (en) * 1999-09-22 2002-01-05 구자홍 User profile data structure for specifying multiple user preference items, And method for multimedia contents filtering and searching using user profile data
US6834308B1 (en) * 2000-02-17 2004-12-21 Audible Magic Corporation Method and apparatus for identifying media content presented on a media playing device
US7343617B1 (en) * 2000-02-29 2008-03-11 Goldpocket Interactive, Inc. Method and apparatus for interaction with hyperlinks in a television broadcast
US20020049975A1 (en) * 2000-04-05 2002-04-25 Thomas William L. Interactive wagering system with multiple display support
US20020002707A1 (en) * 2000-06-29 2002-01-03 Ekel Sylvain G. System and method to display remote content
US6970860B1 (en) * 2000-10-30 2005-11-29 Microsoft Corporation Semi-automatic annotation of multimedia objects
US7562012B1 (en) * 2000-11-03 2009-07-14 Audible Magic Corporation Method and apparatus for creating a unique audio signature
US20020083469A1 (en) * 2000-12-22 2002-06-27 Koninklijke Philips Electronics N.V. Embedding re-usable object-based product information in audiovisual programs for non-intrusive, viewer driven usage
US7363278B2 (en) * 2001-04-05 2008-04-22 Audible Magic Corporation Copyright detection and protection system and method
JP2002335518A (en) * 2001-05-09 2002-11-22 Fujitsu Ltd Control unit for controlling display, server and program
US6968337B2 (en) * 2001-07-10 2005-11-22 Audible Magic Corporation Method and apparatus for identifying an unknown work
US7529659B2 (en) * 2005-09-28 2009-05-05 Audible Magic Corporation Method and apparatus for identifying an unknown work
US7877438B2 (en) * 2001-07-20 2011-01-25 Audible Magic Corporation Method and apparatus for identifying new media content
US20040205482A1 (en) * 2002-01-24 2004-10-14 International Business Machines Corporation Method and apparatus for active annotation of multimedia content
US7421660B2 (en) * 2003-02-04 2008-09-02 Cataphora, Inc. Method and apparatus to visually present discussions for data mining purposes
US7197234B1 (en) * 2002-05-24 2007-03-27 Digeo, Inc. System and method for processing subpicture data
GB2399983A (en) * 2003-03-24 2004-09-29 Canon Kk Picture storage and retrieval system for telecommunication system
US7242389B1 (en) 2003-10-07 2007-07-10 Microsoft Corporation System and method for a large format collaborative display for sharing information
US8065665B1 (en) * 2004-02-28 2011-11-22 Oracle America, Inc. Method and apparatus for correlating profile data
JP4304108B2 (en) * 2004-03-31 2009-07-29 株式会社東芝 METADATA DISTRIBUTION DEVICE, VIDEO REPRODUCTION DEVICE, AND VIDEO REPRODUCTION SYSTEM
US8086575B2 (en) * 2004-09-23 2011-12-27 Rovi Solutions Corporation Methods and apparatus for integrating disparate media formats in a networked media system
US20060080130A1 (en) * 2004-10-08 2006-04-13 Samit Choksi Method that uses enterprise application integration to provide real-time proactive post-sales and pre-sales service over SIP/SIMPLE/XMPP networks
US7263205B2 (en) * 2004-12-06 2007-08-28 Dspv, Ltd. System and method of generic symbol recognition and user authentication using a communication device with imaging capabilities
US7864159B2 (en) 2005-01-12 2011-01-04 Thinkoptics, Inc. Handheld vision based absolute pointing system
US7672968B2 (en) 2005-05-12 2010-03-02 Apple Inc. Displaying a tooltip associated with a concurrently displayed database object
US20060282776A1 (en) * 2005-06-10 2006-12-14 Farmer Larry C Multimedia and performance analysis tool
US8402503B2 (en) * 2006-02-08 2013-03-19 At& T Intellectual Property I, L.P. Interactive program manager and methods for presenting program content
US20070288640A1 (en) * 2006-06-07 2007-12-13 Microsoft Corporation Remote rendering of multiple mouse cursors
US8089455B1 (en) * 2006-11-28 2012-01-03 Wieder James W Remote control with a single control button
US20080208589A1 (en) * 2007-02-27 2008-08-28 Cross Charles W Presenting Supplemental Content For Digital Media Using A Multimodal Application
US9360985B2 (en) 2007-06-27 2016-06-07 Scenera Technologies, Llc Method and system for automatically linking a cursor to a hotspot in a hypervideo stream
US8006314B2 (en) * 2007-07-27 2011-08-23 Audible Magic Corporation System for identifying content of digital data
US8285121B2 (en) * 2007-10-07 2012-10-09 Fall Front Wireless Ny, Llc Digital network-based video tagging system
KR101382499B1 (en) * 2007-10-22 2014-04-21 삼성전자주식회사 Method for tagging video and apparatus for video player using the same
US8875212B2 (en) 2008-04-15 2014-10-28 Shlomo Selim Rakib Systems and methods for remote control of interactive video
US8229865B2 (en) * 2008-02-04 2012-07-24 International Business Machines Corporation Method and apparatus for hybrid tagging and browsing annotation for multimedia content
US20090228492A1 (en) * 2008-03-10 2009-09-10 Verizon Data Services Inc. Apparatus, system, and method for tagging media content
KR20090107626A (en) 2008-04-10 2009-10-14 주식회사 인프라웨어 Method for providing object information of moving picture using object area information
US20100235379A1 (en) * 2008-06-19 2010-09-16 Milan Blair Reichbach Web-based multimedia annotation system
US10248931B2 (en) * 2008-06-23 2019-04-02 At&T Intellectual Property I, L.P. Collaborative annotation of multimedia content
US20090319884A1 (en) * 2008-06-23 2009-12-24 Brian Scott Amento Annotation based navigation of multimedia content
US20100138517A1 (en) * 2008-12-02 2010-06-03 At&T Intellectual Property I, L.P. System and method for multimedia content brokering
US20100235865A1 (en) * 2009-03-12 2010-09-16 Ubiquity Holdings Tagging Video Content
WO2010132718A2 (en) * 2009-05-13 2010-11-18 Coincident.Tv , Inc. Playing and editing linked and annotated audiovisual works
KR20110118421A (en) 2010-04-23 2011-10-31 엘지전자 주식회사 Augmented remote controller, augmented remote controller controlling method and the system for the same
US9508387B2 (en) 2009-12-31 2016-11-29 Flick Intelligence, LLC Flick intel annotation methods and systems
US8751942B2 (en) 2011-09-27 2014-06-10 Flickintel, Llc Method, system and processor-readable media for bidirectional communications and data sharing between wireless hand held devices and multimedia display systems
US20160182971A1 (en) 2009-12-31 2016-06-23 Flickintel, Llc Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US20110167447A1 (en) * 2010-01-05 2011-07-07 Rovi Technologies Corporation Systems and methods for providing a channel surfing application on a wireless communications device
US8370878B2 (en) * 2010-03-17 2013-02-05 Verizon Patent And Licensing Inc. Mobile interface for accessing interactive television applications associated with displayed content
US9183560B2 (en) * 2010-05-28 2015-11-10 Daniel H. Abelow Reality alternate
US20120062468A1 (en) * 2010-09-10 2012-03-15 Yu-Jen Chen Method of modifying an interface of a handheld device and related multimedia system


Also Published As

Publication number Publication date
US20110158603A1 (en) 2011-06-30
US9508387B2 (en) 2016-11-29
US20120139839A1 (en) 2012-06-07

Similar Documents

Publication Publication Date Title
US9508387B2 (en) Flick intel annotation methods and systems
US11011206B2 (en) User control for displaying tags associated with items in a video playback
US11743537B2 (en) User control for displaying tags associated with items in a video playback
US11741110B2 (en) Aiding discovery of program content by providing deeplinks into most interesting moments via social media
US8990690B2 (en) Methods and apparatus for media navigation
US8656282B2 (en) Authoring tool for providing tags associated with items in a video playback
US9392211B2 (en) Providing video presentation commentary
TWI538520B (en) Method for adding video information, method for displaying video information and devices using the same
US9015788B2 (en) Generation and provision of media metadata
US9143699B2 (en) Overlay non-video content on a mobile device
KR102438752B1 (en) Systems and methods for performing asr in the presence of heterograph
KR20220121911A (en) Systems and methods for presenting supplemental content in augmented reality
CN110168541B (en) System and method for eliminating word ambiguity based on static and time knowledge graph
CN102595212A (en) Simulated group interaction with multimedia content
CN103384253B (en) The play system and its construction method of multimedia interaction function are presented in video
US20150312633A1 (en) Electronic system and method to render additional information with displayed media
JP2014176083A (en) Method for providing electronic program guide, multimedium reproduction system and computer readable storage medium
TW201322740A (en) Digitalized TV commercial product display system, method, and recording medium thereof
CN106713973A (en) Program searching method and device
JP2006229453A (en) Program presentation system
US20130055325A1 (en) Online advertising relating to feature film and television delivery over the internet
WO2011082083A2 (en) Flick intel annotation methods and systems
US11170817B2 (en) Tagging tracked objects in a video with metadata
JP2023141808A (en) Moving image distribution device
TW201739264A (en) Method and system for automatically embedding interactive elements into multimedia content

Legal Events

Date Code Title Description
AS Assignment

Owner name: FLICKINTEL, LLC, NEW MEXICO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRUKAR, RICHARD H.;ORTIZ, LUIS M.;LOPEZ, KERMIT;REEL/FRAME:027495/0883

Effective date: 20120104

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: FLICK INTELLIGENCE, LLC, NEW MEXICO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FLICK INTEL, LLC;REEL/FRAME:049951/0783

Effective date: 20190601