WO2014078416A1 - Systems and methods for identifying narratives related to a media stream - Google Patents


Info

Publication number
WO2014078416A1
Authority
WO
WIPO (PCT)
Prior art keywords
story
sample
content
engine
media stream
Application number
PCT/US2013/069896
Other languages
French (fr)
Inventor
Brian Elan Lee
Michael Sean STEWART
James Stewartson
Original Assignee
Nant Holdings IP, LLC
Application filed by Nant Holdings IP, LLC
Publication of WO2014078416A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722 End-user interface for requesting additional data associated with the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6582 Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8126 Monomedia components involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133 Monomedia components involving additional data specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments


Abstract

Story management systems and methods are described that allow a user to use a first user device to sample a media stream and interact with a story related to the media stream sample. The sample or related data can be analyzed by an analysis engine coupled to a story database to generate a set of characteristics associated with the sample. A story engine can then identify a story in the story database as a function of the set of characteristics and then automatically present the identified story to the user.

Description

[0001] This application claims the benefit of priority to U.S. provisional application having serial no. 61/725661 filed on November 13, 2012. This and all other referenced extrinsic materials are incorporated herein by reference in their entirety. Where a definition or use of a term in a reference that is incorporated by reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein is deemed to be controlling.
Field of the Invention
[0002] The field of the invention is interactive digital technologies.
Background
[0003] The following description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0004] Consumers seek out ever more immersive media experiences. With the advent of mobile computing, opportunities exist for integrating real-world experiences with immersive narratives bridging across a full spectrum of device capabilities. Rather than a consumer passively watching a television show or listening to an audio stream, the consumer should be able to directly and actively engage with a narrative or story according to their own preferences.
[0005] Companies such as Shazam™ have developed technology capable of analyzing a song or video sample and returning information related to the sample, typically song title, artist name, etc. Such technology is discussed in U.S. pat. nos. 7881657, 8086171, 8190435, and 8290423 and U.S. pat. publ. nos. 2012/0008821, 2012/0191231, and 2012/0221131. However, such technology is limited in the information returned. Similar technology is also discussed in U.S. pat. nos. 7016532, 7477780, 7680324, and 7565008.
[0006] U.S. pat. publ. no. 2012/0008821 to Sharon discusses a system that identifies what a user is watching and allows the user to have some interaction with the content. In addition, U.S. pat. publ. no. 2012/0185905 to Kelley identifies content being viewed and determines if additional content is available associated with that content. However, such references fail to identify and automatically present narrative content to a user based on a sample.
[0007] Thus, there is still a need for systems and methods configured to identify media and present narrative or other content relevant to the media.
[0008] The inventive subject matter provides apparatus, systems and methods in which one can identify narrative content associated with a media stream or other media (e.g., billboard, live event, etc.), and interact with the narrative content on a user device. One aspect of the inventive subject matter includes a story engine capable of delivering one or more content streams to a user device, or even to multiple user devices. In some embodiments, the story engine is coupled to a story database having one or more stories, each of which can include one or more content streams or sets of content. When a user obtains a sample of media, such as an image, an analysis engine can generate a set of characteristics associated with the sample, which can then be used by the story engine to identify a story or other content relevant to the media based on the characteristics of the sample. Environmental data could also be used to identify the relevant content including, for example, location data, time data, user preference data, device type data, and so forth.
[0009] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
Brief Description of The Drawings
[0010] Fig. 1 is a diagram of one embodiment of a story management system.
[0011] Fig. 2 is a diagram of another embodiment of a system that identifies content related to captured media.
[0012] Fig. 3 is a diagram of yet another embodiment of a system that identifies content related to captured media.
[0013] Fig. 4 is a flowchart of one embodiment of a method for identifying content related to a media stream.
Detailed Description
[0014] Throughout the following discussion, numerous references will be made regarding servers, services, interfaces, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor configured to execute software instructions stored on a computer readable tangible, non-transitory medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions. The software instructions preferably configure the computing device to provide the roles, responsibilities, or other functionality as discussed below with respect to the disclosed apparatus. In especially preferred embodiments, the various servers, systems, databases, or interfaces exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods. Data exchanges preferably are conducted over a packet-switched network, the Internet, LAN, WAN, VPN, or other type of packet switched network.
[0015] One should appreciate that the disclosed techniques provide many advantageous technical effects including presenting a story or other relevant content to a user based on a sampling of a media stream to allow users to actively engage with the media and thereby be more immersed in the media, event and/or their surroundings.
[0016] The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus, if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed.
[0017] The following discussion describes presenting narrative or other content to a user as a function of data related to a media stream. Typically, narrative content comprises a story, which is considered to have one or more data streams, herein referred to as "story streams", carrying experience-related content and device commands. However, the content could also include information related to the media stream, such as useful facts, biographies, and so forth.
[0018] In some embodiments, device commands configure a user's media player to present content of a story stream according to an overarching story. Although the term "story" is used, this should not be construed as only including narratives. Rather, contemplated stories can include, for example, narrative (e.g., fiction, video, audio, etc.), interactive components (e.g., puzzles, games, etc.), promotions (e.g., advertisements, contests, etc.), or other types of user-engaging features. Users can interact with the content according to the programmed story. A story server or database can store one or more stories as sets of story streams (sets of content), where each of the streams can target a specific media device or type of media device, for example. A story stream is considered to include a sequenced presentation of data, preferably according to a time-based schedule. However, presentation of the stream could be based on the media itself, time of day, date, location of user, and so forth. One should also note the stream can alternatively be presented according to other triggering criteria based on user input.
Triggering criteria can be based on biometrics, location, movement, the media stream, or other acquired data.
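To make the story stream notion concrete, the following Python sketch models a stream as a sequence of elements released either by a time-based schedule or by triggering criteria such as those just described. It is an illustration only, not code from the patent; the StoryElement and StoryStream names, fields, and trigger convention are assumptions.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class StoryElement:
    """One piece of content within a story stream (e.g., a clip, text message, or puzzle)."""
    content: str                                       # reference to the content payload
    modality: str                                      # e.g., "video", "audio", "sms", "haptic"
    offset_seconds: Optional[float] = None             # position on a time-based schedule, if any
    trigger: Optional[Callable[[dict], bool]] = None   # alternative triggering criterion

@dataclass
class StoryStream:
    """A sequenced presentation of story elements targeting one device type."""
    device_type: str
    elements: list[StoryElement] = field(default_factory=list)

    def due_elements(self, elapsed_seconds: float, sensor_data: dict) -> list[StoryElement]:
        """Return elements whose scheduled time has passed or whose trigger fires.

        A trigger might inspect biometrics, location, movement, or media-stream
        data carried in sensor_data, per paragraph [0018].
        """
        due = []
        for el in self.elements:
            if el.offset_seconds is not None and elapsed_seconds >= el.offset_seconds:
                due.append(el)
            elif el.trigger is not None and el.trigger(sensor_data):
                due.append(el)
        return due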
[0019] Figure 1 illustrates one embodiment of a system 100 configured to identify a story related to a media stream. Contemplated systems can comprise a story engine 104 coupled to a story database 102 capable of storing a plurality of stories, each of which comprises at least one story stream. The story engine 104 preferably operates as a multi-media delivery channel delivering content related to a media stream to one or more user media devices 110A-N. For example, story engine 104 can be configured to deliver one or more story streams or other content to the user devices 110A-N, and cause or facilitate the user devices 110A-N to present story elements of the story streams or other content in a synchronized manner according to a desired modality. Exemplary types of data that can be used to configure the user devices 110A-N include visual data (e.g., images, video, etc.), audible data, haptic or kinesthetic data, metadata, web-based data, or even augmented or virtual reality data.
[0020] It is contemplated that each user device 110A-N can receive a story stream according to a modality selected for that media device. However, in other contemplated embodiments, a single user device can receive multiple story streams. Thus, for example, the modality could automatically be selected based upon the capabilities of a specific user device, and different user devices can thereby receive story streams having different modalities. For example, a laptop or other personal computer may receive audio and video data, while a mobile phone may receive only telephone calls and/or text or multimedia messages. In this manner, different pieces of a story can be delivered to different, sometimes unconnected, platforms.
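As a minimal sketch of such capability-based modality selection, the routine below picks the richest modality a device supports. The capability names and the preference order are hypothetical assumptions made for illustration, not details taken from the patent.

def select_modality(device_capabilities: set[str]) -> str:
    """Pick the richest story-stream modality a device can present.

    The preference order is an assumed example; a real story engine might
    also weigh user preferences, bandwidth, or screen size.
    """
    preference = ["video", "audio", "text"]  # richest modality first
    for modality in preference:
        if modality in device_capabilities:
            return modality
    return "text"  # assumed fallback for minimally capable devices

# A laptop ({"video", "audio", "text"}) would receive a video stream,
# while a basic phone ({"text"}) would receive only text messages.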
[0021] One embodiment of a platform for providing a transmedia environment is described in co-pending U.S. utility appl. having serial no. 13/414,192 filed on March 7, 2012.
[0022] Contemplated user devices capable of interacting with the story streams or other content include mobile devices (e.g., laptops, netbooks, tablet PCs, smart watches, and other portable computing devices, smart phones, web enabled glasses and other consumer products, MP3 players, personal digital assistants, vehicles, watches, etc.), desktop computers, televisions, game consoles or other platforms, electronic picture frames, appliances, kiosks, radios, telephones, vehicles, sensor devices, or other types of devices.
[0023] Advantageously, it is preferred that one or more of the user devices 110A-N can include at least one sensor configured to collect ambient information about a user's environment. Such sensors could include, for example, GPS, cellular triangulation, or other location discovery systems, cameras, video recorders, accelerometers, magnetometers, speedometers, odometers, altitude detectors, thermometers, optical sensors, motion sensors, heart rate monitors, proximity sensors, microphones, and so forth.
[0024] Although shown distal to user devices 110A-N, one or more of the various servers composing the system 100 can be local or remote relative to the user's devices 110A-N. In addition, the location of the stored files could vary over time to facilitate seamless presentation of the content to the user. For example, the analysis engine 120 could be local to a user's device. Likewise, it is contemplated that the story database or story engine could also be local to a user's device. Or the story database could be partially located on the user's device and partially located remotely from the device. For example, a first user device could comprise a smart phone having a plurality of software applications that take advantage of a processor and memory of the smart phone. In such embodiments, the analysis engine could comprise the processor of the smart phone that conducts the analysis required based on software instructions preferably stored on the phone.
[0025] Such an approach allows content or streams to be downloaded to a computing device local to the user or even to one or more of the user's devices 110A-N. In this manner, should the user lose connectivity with a network, or the user's connectivity temporarily slow, the one or more devices 110A-N can still present their story stream(s) seamlessly according to the stream's schedule or triggering criteria. It is also contemplated that the story database can be remote from one or more of the user's devices 110A-N located across the Internet 130. Exemplary remote servers can include single purpose server farms, distal services, distributed computing platforms (e.g., cloud-based services, etc.), or even augmented or mixed reality computing platforms.
[0026] Preferably, a user actively or passively obtains a sample of a media stream using a first user device 110A. The sample could be obtained using the device's sensor, for example, such as a camera, a microphone, or other sensor. A camera could take a still image or record a short video to be analyzed, for example. Audio data could also or alternatively be captured.
[0027] An analysis engine 120 can be configured to analyze the sample or data related thereto, and generate a set of characteristics associated with the sample. Exemplary data could include, for example, some or all of the sample (e.g., a still frame or at least a partial frame) and a set of parameters of the sample (e.g., duration, frequency, identifiable patterns or lack thereof, maximum or minimum amplitudes, etc.). Other contemplated data includes, for example, visual data (e.g., images and video), audio data, and location information of the first user device, especially that information at the time when the sample was obtained. Although shown remote from the story engine 104 and user device 110A, it is contemplated that the analysis engine 120 could be local to the user device 110A, such that analysis of the sample can occur locally before content is requested. Alternatively, the analysis engine 120 could be local to the story engine 104.
[0028] Contemplated characteristics of the set could include, for example, a time, a location of the first user device, a channel, an indicia, a genre, an actor, product placement, a pattern, text, a color scheme, audio characteristics, and facial recognition information. In some contemplated embodiments, the analysis engine could generate the characteristics via image analysis. For example, in some embodiments, the image analysis algorithms could be chosen to allow the engine to detect faces or products in an image, or to extract information related to potential areas of identification that can be sent remotely for identification.
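A rough sketch of how an analysis engine might reduce a captured sample to such a characteristic set appears below. The sample layout and the stand-in computations are assumptions made for illustration; a production engine would apply real image recognition, OCR, and audio fingerprinting rather than these simple proxies.

from statistics import mean

def generate_characteristics(sample: dict) -> dict:
    """Sketch of an analysis engine deriving a characteristic set from a sample.

    `sample` is a hypothetical structure, e.g.:
    {"pixels": [(r, g, b), ...], "audio": [amplitudes], "timestamp": t, "location": (lat, lon)}
    """
    characteristics = {
        "time": sample.get("timestamp"),
        "location": sample.get("location"),
    }
    pixels = sample.get("pixels")
    if pixels:
        # crude stand-in for a color-scheme characteristic: average RGB value
        characteristics["avg_color"] = tuple(
            round(mean(p[i] for p in pixels)) for i in range(3)
        )
    audio = sample.get("audio")
    if audio:
        # crude stand-ins for audio characteristics (amplitude envelope, length)
        characteristics["max_amplitude"] = max(audio)
        characteristics["duration_samples"] = len(audio)
    return characteristics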
[0029] The story engine 104 can be coupled to the story database 102. The story engine 104 can analyze the set of characteristics generated by the analysis engine 120 and preferably identify at least one story from the story database 102 using one or more algorithms based on the set of characteristics. Although the story engine 104 may identify a story based on simple matching of characteristics to known parameters of the stories, the story engine 104 could also be configured to identify a story based on apparent relevance to the media stream / sample. This is important where the media stream is unknown or where the sample is unclear as to the identity of the media stream, which may occur if the obtained sample includes background noise, a blurred image, and so forth.
[0030] Thus, the analysis engine 120 can be configured to recognize an image of media or a live event, for example, a video stream or still frame thereof, a poster, or other media (still or otherwise), and identify a set of characteristics associated with the media. Advantageously, the story engine 104 can be further configured to select a story based on some or all of the set of characteristics, and present an identified story stream or other content, preferably automatically (e.g., without user interaction), to the user device as a function of the location and/or playback of the media stream. For example, where the media stream is a video, the story engine could present a story stream or other content to a user device that corresponds with the current point in the narrative of the video. This could be based on an analysis of the sample and then adding the elapsed time since the sample was obtained to determine where playback is expected to be with the media stream. In some embodiments, a second sample could be taken to verify the playback position of the media stream and make any necessary adjustments prior to presenting the story stream or other content. Similarly, the specific story stream of a story could be selected as a function of the current playback position of the video and/or a location of the user.
[0031] The playback position of the media stream could be identified from the sample characteristics or based on a user's location. In addition, much like identifying a television show using channel listings, in some embodiments the position could be identified based on a time and date, and a channel being viewed by the user.
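Paragraph [0030]'s position estimate (the position recognized in the sample plus the time elapsed since capture) can be sketched in a few lines. The function name and the assumption of uninterrupted 1x playback are illustrative, not taken from the patent.

import time
from typing import Optional

def estimate_playback_position(identified_offset_s: float,
                               sample_capture_time_s: float,
                               now_s: Optional[float] = None) -> float:
    """Estimate the media stream's current playback position.

    Adds the wall-clock time elapsed since the sample was captured to the
    position recognized within the sample. Assumes playback continued
    uninterrupted; a second sample could verify and correct the estimate,
    as the text notes.
    """
    if now_s is None:
        now_s = time.time()
    return identified_offset_s + (now_s - sample_capture_time_s)

# Example: the sample matched 754 s into the video and was captured 20 s ago,
# so playback is expected to be near the 774 s mark.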
[0032] In one aspect, the first user device can include, and preferably execute, a software application that has a user media interface configured to allow a user to interact with the story stream or other content. Exemplary applications include, for example, a web-based application and an application program interface (API), through which commands or data can be exchanged to interact with the story engine's servers. In such embodiments, the story engine can be configured to present the story stream to the user device as a function of the sample.
[0033] The story stream or other content is preferably related to the media stream, and in fact enhances the user's experience with the media stream. In some contemplated embodiments, the story could be a game related to the media stream. In other embodiments, the content could be information related to the media stream, such as a biography, historical or other facts, an advertisement, a trivia question, a trailer, an inside look, and so forth.
[0034] As discussed above, rather than a story, content could be selected and presented on the user device based on characteristics of a sample of a media stream. In such embodiments, the story engine (content engine) can select a set of content from multiple sets of content stored in a content database (e.g., story database). Some or all of the selected content can then be transmitted and/or presented on one or more devices.
[0035] Figure 2 illustrates another embodiment of a system 200 configured to identify content related to captured media. A user can obtain a sample of the media 222 via a user device 210, which is shown as a smart phone, but could be any number of computing devices such as those described above. A portion or all of the captured media can be transmitted via a network 230 to an analysis engine 202. Alternatively, the device 210 can include the analysis engine or a separate processing engine to at least initially analyze the sample and extract the relevant information. With respect to the remaining numerals in Figure 2, the same considerations for like components with like numerals of Figure 1 apply.
[0036] Figure 3 illustrates yet another embodiment of a system 300 configured to identify content related to captured media. Multiple users can obtain a sample of the media 322 via user devices 310A-N, each of which is shown as a smart phone, but could be any number of computing devices such as those described above. A portion or all of the captured media can be transmitted via a network 330 to an analysis engine 302. Alternatively, the devices 310A-N can include the analysis engine or a separate processing engine to at least initially analyze the sample and extract the relevant information.
[0037] It is contemplated that the interaction of one user with the content stream sent to that user's device can alter the content streams presented on an unrelated user's device. Thus, for example, in the case of a story or game, as users progress through the game or story, this can vary another user's interaction with the game or story. In this manner, an immersive massively multiplayer online role-playing game (MMORPG) or other game can be created related to the common media, despite each user likely taking a different sample (e.g., a different view) of the media. With respect to the remaining numerals in Figure 3, the same considerations for like components with like numerals of Figure 1 apply.
[0038] Figure 4 illustrates one embodiment of a method 400 for identifying a story related to a media stream. In step 410, access is provided to a story database configured to store a plurality of stories, each of which is associated with a set of story parameters. However, it is further contemplated that the story database could comprise other content such as that discussed above.
[0039] In step 420, a sample of a media stream can be obtained using a first device.
Contemplated devices include, for example, a television, a mobile telephone, a tablet PC, a laptop computer, a desktop computer, a telephone, a radio, an appliance, an electronic picture frame, a vehicle, a game platform, and a sensor. In some contemplated embodiments shown in step 422, the first device can include the analysis engine. It is further contemplated in step 424 that the story engine could be remote from the first device. In step 426, the story engine could be local to the story database. However, it is contemplated that the story engine could instead be remote from the story database, such as where the story engine is located on the first device and the story database comprises a remote server.
[0040] At least a portion of the sample or data related to the sample can be analyzed in step 430 using an analysis engine to identify a set of sample parameters or characteristics. This could comprise known image or audio analysis techniques to generate a list of features of an image or audio recording. These characteristics could then be used to identify a story related to the captured sample.
[0041] In step 440, one or more of the set of sample parameters can be compared with the sets of story parameters using a story engine to identify a story in the story database as a function of a relationship of the set of sample parameters with the story parameters. Although such comparison could include a 1:1 matching by matching the set of sample parameters in step 442 to one of the sets of story parameters using the story engine to identify the story, it is more likely that the comparison will result in an identified story that is the best match based on the parameters. For example, in step 444, one or more of the sample parameters can be matched with one or more of the story parameters of each of the plurality of stories using the story engine to generate a relevance value for each of the plurality of stories, and identifying the story having the highest relevance value. This is useful where the sample is poor quality or where the media in question is not recognized by the system.
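A minimal sketch of the relevance-value matching of step 444 follows. The story record layout and the equality-count scoring are assumptions made for illustration; a real story engine would likely weight parameters and tolerate fuzzy matches.

from typing import Optional

def identify_story(sample_params: dict, stories: list[dict]) -> Optional[dict]:
    """Score each story against the sample parameters and return the best match.

    Relevance here is simply the number of story parameters equal to the
    corresponding sample parameter; the story with the highest relevance
    value wins, which tolerates poor-quality samples that match only partially.
    """
    best_story, best_score = None, 0
    for story in stories:
        score = sum(
            1 for key, value in sample_params.items()
            if story.get("params", {}).get(key) == value
        )
        if score > best_score:
            best_story, best_score = story, score
    return best_story  # None when no parameter matched at all

# Example (hypothetical records):
# stories = [{"id": "s1", "params": {"genre": "drama", "actor": "X"}},
#            {"id": "s2", "params": {"genre": "drama", "actor": "Y"}}]
# identify_story({"genre": "drama", "actor": "Y", "channel": 7}, stories)  # -> s2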
[0042] In step 450, a user interface of the first device can be configured to automatically present the identified story or other content. It is further contemplated in step 452 that the identified story can be automatically presented at a time position associated with playback of the media stream.
[0043] As used herein, and unless the context dictates otherwise, the term "coupled to" is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously.
[0044] In some embodiments, the numbers expressing quantities of ingredients, properties such as concentration, reaction conditions, and so forth, used to describe and claim certain embodiments of the invention are to be understood as being modified in some instances by the term "about." Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
[0045] Unless the context dictates the contrary, all ranges set forth herein should be interpreted as being inclusive of their endpoints and open-ended ranges should be interpreted to include only commercially practical values. Similarly, all lists of values should be considered as inclusive of intermediate values unless the context indicates the contrary.
[0046] As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0047] The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value within a range is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
[0048] Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified, thus fulfilling the written description of all Markush groups used in the appended claims.
[0049] It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C ... and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.

Claims

CLAIMS
What is claimed is:
1. A story management system configured to identify a story related to a media stream, comprising:
a first user device configured to obtain a sample of a media stream;
a story database configured to store a plurality of stories, each of which comprises at least one story stream;
an analysis engine configured to analyze data related to the sample, and generate a set of characteristics associated with the sample; and
a story engine coupled to the story database, and configured to select a story in the story database as a function of the set of characteristics, and automatically cause the selected story to be presented to the first user device.
2. The system of claim 1, wherein the analysis engine is local to the first user device.
3. The system of claim 1, wherein the analysis engine is separate from the first user device.
4. The system of any of claims 1-3, wherein the data comprises at least a portion of the sample.
5. The system of any of claims 1-4, wherein the data comprises a set of parameters of the sample.
6. The system of any of claims 1-4, wherein the data comprises a set of parameters of the sample, and wherein the analysis engine is further configured to analyze the sample and generate the set of characteristics associated with the sample.
7. The system of any of claims 1-6, wherein each of the stories of the story database comprises one or more story parameters, and wherein the story engine is configured to select the story in the story database by comparing the set of characteristics to the one or more story parameters of each of the stories.
8. The system of claim 7, wherein the story engine is further configured to select the story by matching the set of characteristics to the story parameters using one or more algorithms to identify the story having a highest relevance to the set of characteristics.
9. The system of any of claims 1-8, wherein the first user device is configured to analyze the sample and generate the data using one or more algorithms.
10. The system of any of claims 1-9, wherein the data comprises visual data.
11. The system of any of claims 1-10, wherein the data comprises an image.
12. The system of any of claims 1-11, wherein the data comprises at least a portion of the sample.
13. The system of any of claims 1-12, wherein the data comprises audible data.
14. The system of any of claims 1-13, wherein the data comprises at least one of a duration, a frequency, and an identifiable pattern of the sample.
15. The system of any of claims 1-14, wherein the data comprises location information of the first user device.
16. The system of any of claims 1-15, wherein the analysis engine is further configured to analyze at least one of the sample, a portion of the sample, and characteristics of the sample to perform the analysis and generate the data using one or more algorithms.
17. The system of any of claims 1-16, wherein the story engine is further configured to automatically present the identified story at a time position that corresponds to an identified time position of the media stream, such that the presentation of the identified story is synced with the media stream.
18. The system of any of claims 1-17, wherein the first user device comprises a television, a mobile telephone, a tablet PC, a laptop computer, a desktop computer, a telephone, a radio, an appliance, an electronic picture frame, a vehicle, a game platform, and a sensor.
19. The system of any of claims 1-18, wherein the sample comprises an at least partial frame of a video.
20. The system of any of claims 1-19, wherein the sample comprises a still image.
21. The system of any of claims 1-20, wherein the sample comprises a sample stream of audio.
22. The system of any of claims 1-21, wherein the set of characteristics comprises at least one of a time, a location of the first user device, a channel, an indicia, a genre, an actor, and facial recognition information.
23. The system of any of claims 1-22, wherein the story engine is further configured to instruct software on the first user device to present the story stream.
24. The system of any of claims 1-23, wherein the story engine is further configured to cause the story stream to begin playback at a point that is synchronized with playback of the media stream.
25. The system of any of claims 1-24, wherein the story stream comprises information related to the media stream.
26. The system of claim 25, wherein the information comprises at least one of a biography and a historical fact.
27. The system of claim 25, wherein the information is related to the content of the media stream.
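Claims 7 and 8 above recite selecting a story by comparing the sample's characteristics against each story's parameters and identifying the highest-relevance match. The claims do not prescribe a particular algorithm; as a minimal sketch of one plausible choice, Jaccard overlap between characteristic tags and story parameters could serve as the relevance score. All class, function, and tag names below are illustrative, not taken from the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Story:
    """One entry in the story database, carrying its story parameters (claim 7)."""
    title: str
    parameters: Set[str] = field(default_factory=set)

def select_story(characteristics: Set[str], stories: List[Story]) -> Optional[Story]:
    """Return the story whose parameters best overlap the sample characteristics.

    Relevance here is plain Jaccard similarity; claim 8 only requires some
    algorithm that identifies the story having the highest relevance.
    """
    def relevance(story: Story) -> float:
        union = characteristics | story.parameters
        return len(characteristics & story.parameters) / len(union) if union else 0.0

    best = max(stories, key=relevance, default=None)
    return best if best is not None and relevance(best) > 0 else None

# Characteristics such as channel, genre, or actor (claim 22), as flat tags:
stories = [
    Story("behind-the-scenes", {"genre:drama", "actor:jane-doe", "channel:7"}),
    Story("director-commentary", {"genre:documentary", "channel:12"}),
]
print(select_story({"genre:drama", "channel:7", "time:primetime"}, stories))
```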
28. A story management system configured to identify content related to a media stream, comprising:
a first user device configured to obtain a sample of a media stream;
a content database configured to store sets of content;
an analysis engine configured to analyze data related to the sample, and generate a set of characteristics associated with the sample; and
a content engine coupled to the content database, and configured to select a set of content as a function of the set of characteristics, and automatically cause some or all of the selected content to be transmitted to the first user device.
29. The system of claim 28, wherein the analysis engine is local to the first user device.
30. The system of any of claims 28-29, wherein the data comprises at least a portion of the sample.
31. The system of any of claims 28-30, wherein the data comprises a set of parameters of the sample, and wherein the analysis engine is further configured to analyze the sample and generate the set of characteristics associated with the sample.
32. The system of any of claims 28-31, wherein the content engine is configured to select the set of content by comparing the set of characteristics to parameters of the sets of content.
33. The system of claim 32, wherein the content engine is further configured to select the set of content using one or more algorithms by identifying the set of content having a highest relevance to the set of characteristics.
34. The system of any of claims 28-33, wherein the first user device is configured to analyze the sample and generate the data using one or more algorithms.
35. The system of any of claims 28-34, wherein the data comprises visual data.
36. The system of any of claims 28-35, wherein the data comprises at least a portion of the sample.
37. The system of any of claims 28-36, wherein the data comprises audible data.
38. The system of any of claims 28-37, wherein the data comprises at least one of a duration, a frequency, and an identifiable pattern of the sample.
39. The system of any of claims 28-38, wherein the data comprises location information of the first user device.
40. The system of any of claims 28-39, wherein the content engine is further configured to automatically present some or all of the selected content as a function of an identified playback position of the sample of the media stream, such that the presentation of the selected content is synchronized with the media stream.
41. The system of any of claims 28-40, wherein the first user device comprises at least one of a television, a mobile telephone, a tablet PC, a laptop computer, a desktop computer, a telephone, a radio, an appliance, an electronic picture frame, a vehicle, a game platform, and a sensor.
42. The system of any of claims 28-41, wherein the sample comprises an at least partial still frame.
43. The system of any of claims 28-42, wherein the sample comprises a still image.
44. The system of any of claims 28-43, wherein the sample comprises location information of the first user device.
45. The system of any of claims 28-44, wherein the set of characteristics comprises at least one of a time, a location of the first user device, a channel, an indicia, a genre, an actor, and facial recognition information.
46. The system of any of claims 28-45, wherein the content engine is further configured to instruct software on the first user device to present some or all of the selected content.
47. The system of any of claims 28-46, wherein the content engine is further configured to cause content of the set to be synchronously presented with playback of the media stream.
48. The system of any of claims 28-47, wherein the set of content comprises information related to the media stream.
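Claims 37 and 38 above list audible data having a duration, a frequency, and an identifiable pattern among the data an analysis engine may derive from a sample. The sketch below shows one way such features might be computed from raw audio; the specific feature definitions (a Hann-windowed FFT peak and a block-energy sign pattern) are my own illustrative choices, not the application's.

```python
import numpy as np

def audio_characteristics(samples: np.ndarray, sample_rate: int) -> dict:
    """Derive the kinds of audio features recited in claims 37-38:
    a duration, a dominant frequency, and a coarse identifiable pattern."""
    duration = len(samples) / sample_rate

    # Dominant frequency via a real FFT of the Hann-windowed sample.
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant_hz = float(freqs[np.argmax(spectrum)])

    # A crude "identifiable pattern": the sign of energy change per 1024-sample
    # block, usable as a lookup key against a fingerprint index.
    blocks = samples[: len(samples) // 1024 * 1024].reshape(-1, 1024)
    energy = (blocks.astype(np.float64) ** 2).sum(axis=1)
    pattern = "".join("1" if up else "0" for up in np.diff(energy) > 0)

    return {"duration_s": duration, "dominant_hz": dominant_hz, "pattern": pattern}

# Example: two seconds of a synthetic 440 Hz tone at 8 kHz.
rate = 8000
t = np.linspace(0, 2, 2 * rate, endpoint=False)
print(audio_characteristics(np.sin(2 * np.pi * 440 * t), rate))
```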
49. A story management system configured to identify content related to a media stream, comprising:
a portable media player configured to obtain a sample of a media stream, wherein the portable media player further comprises an analysis engine configured to analyze data related to the sample, and generate characteristics associated with the sample;
a content engine configured to receive the characteristics generated by the analysis engine, and select a set of content from a content database as a function of the characteristics; and
wherein the content engine is further configured to cause some or all of the selected content to be presented on the portable media player.
50. The system of claim 49, wherein the data comprises at least a portion of the sample.
51. The system of any of claims 49-50, wherein the data comprises a set of parameters of the sample, and wherein the analysis engine is further configured to analyze the sample and generate the characteristics associated with the sample.
52. The system of any of claims 49-51, wherein each set of content comprises one or more parameters, and wherein the content engine is configured to select the set by comparing the characteristics to the one or more parameters of each set.
53. The system of any of claims 49-52, wherein the portable media player is configured to analyze the sample and generate the data using one or more algorithms.
54. The system of any of claims 49-53, wherein the data comprises visual data.
55. The system of any of claims 49-54, wherein the data comprises at least a portion of the sample.
56. The system of any of claims 49-55, wherein the data comprises audible data.
57. The system of any of claims 49-56, wherein the data comprises an identifiable pattern of the sample.
58. The system of any of claims 49-57, wherein the data comprises location information of the portable media player when the sample was taken.
59. The system of any of claims 49-58, wherein the content engine is further configured to automatically present the selected content at a time position that corresponds to an identified time position of the media stream, such that the presentation of the content is synced with the media stream.
60. The system of any of claims 49-59, wherein the portable media player comprises a smart phone or a tablet computer.
61. The system of any of claims 49-60, wherein the sample comprises a still image.
62. The system of any of claims 49-61, wherein the sample comprises an audio stream.
63. The system of any of claims 49-62, wherein the set of characteristics comprises at least one of a time, a location of the portable media player, a channel, an indicia, a genre, an actor, and facial recognition information.
64. The system of any of claims 49-63, wherein the content engine is further configured to instruct software on the portable media player to present the selected content.
65. The system of any of claims 49-64, wherein the content engine is further configured to cause the content to be presented at a point that is synchronized with playback of the media stream.
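Claims 49-65 place the analysis engine on the portable media player itself, so only the derived characteristics (not the raw sample) need to cross the network to the content engine. The round trip might look like the sketch below; the endpoint URL, JSON field names, and response shape are all assumptions for illustration, not defined by the application.

```python
# Hypothetical device-to-content-engine round trip for claims 49-65.
import json
from urllib import request

def send_characteristics(characteristics: dict, engine_url: str) -> dict:
    """POST locally derived characteristics; the content engine replies with
    the selected set of content to present on the device."""
    body = json.dumps({"characteristics": characteristics}).encode("utf-8")
    req = request.Request(
        engine_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example (assumes a content-engine endpoint exists at this illustrative URL):
# content = send_characteristics(
#     {"channel": "7", "genre": "drama", "location": [34.05, -118.24]},
#     "https://example.com/content-engine/select",
# )
# The device would then present some or all of content["items"] (claim 49).
```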
66. A method for identifying a story related to a media stream, comprising:
providing access to a story database configured to store a plurality of stories, each of which is associated with a set of story parameters;
obtaining a sample of a media stream using a first device;
analyzing at least a portion of the sample, or data related to the sample, using an analysis engine to identify a set of sample parameters;
comparing the set of sample parameters with the sets of story parameters using a story engine to identify a story in the story database as a function of a relationship of the set of sample parameters with the story parameters; and
configuring a user interface of the first device to automatically present the identified story.
67. The method of claim 66, wherein the first device comprises the analysis engine.
68. The method of claim 66 or 67, wherein the step of comparing the set further comprises matching the set of sample parameters to one of the set of story parameters using the story engine to identify the story.
69. The method of any of claims 66-68, wherein the story engine is remote from the first device.
70. The method of any of claims 66-68, wherein the story engine is local to the story database.
71. The method of any of claims 66-70, wherein the step of comparing further comprises matching one or more of the sample parameters with one or more of the story parameters of each of the plurality of stories using the story engine to generate a relevance value for each of the plurality of stories, and identifying the story having the highest relevance value.
72. The method of any of claims 66-71, wherein the story parameters comprise at least one of a genre, a duration, a time stamp, a location, and a marker.
73. The method of any of claims 66-72, wherein the step of configuring the user interface further comprises automatically presenting the identified story at a time position associated with playback of the media stream, such that the presentation of the identified story is synced with the media stream.
74. The method of any of claims 66-73, wherein the story engine generates the time position as a function of at least one of the sample, a time stamp of the sample, a time stamp of the media stream, and a location of the first device.
75. The method of any of claims 66-74, wherein the first device comprises at least one of a television, a mobile telephone, a tablet PC, a laptop computer, a desktop computer, a telephone, a radio, an appliance, an electronic picture frame, a vehicle, a game platform, and a sensor.
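Claims 73 and 74 above tie the presented story to a time position derived from, for example, a time stamp of the sample or of the media stream. One plausible reading: locate the sample's position within the media stream, then advance it by the wall-clock time elapsed since capture, so the story begins playback already in sync. The helper below is a sketch of that arithmetic under those assumptions; it is not the application's stated method.

```python
import time

def synced_position(media_pos_at_sample_s: float, sample_taken_at: float) -> float:
    """Claim 74-style time position: the media-stream position at the moment
    the sample was captured, advanced by the wall-clock time elapsed since,
    so the story starts synchronized with playback (claim 73)."""
    elapsed = time.time() - sample_taken_at
    return media_pos_at_sample_s + elapsed

# Example: fingerprinting placed the sample 12:34 into the broadcast,
# and the sample was captured 1.8 seconds ago.
taken_at = time.time() - 1.8
print(f"start story at {synced_position(12 * 60 + 34, taken_at):.1f} s")
```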
PCT/US2013/069896 2012-11-13 2013-11-13 Systems and methods for identifying narratives related to a media stream WO2014078416A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261725661P 2012-11-13 2012-11-13
US61/725,661 2012-11-13

Publications (1)

Publication Number Publication Date
WO2014078416A1 true WO2014078416A1 (en) 2014-05-22

Family

ID=50731654

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/069896 WO2014078416A1 (en) 2012-11-13 2013-11-13 Systems and methods for identifying narratives related to a media stream

Country Status (1)

Country Link
WO (1) WO2014078416A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080313130A1 * 2007-06-14 2008-12-18 Northwestern University Method and System for Retrieving, Selecting, and Presenting Compelling Stories from Online Sources
US20090313324A1 (en) * 2008-06-17 2009-12-17 Deucos Inc. Interactive viewing of media content
US20110242134A1 (en) * 2010-03-30 2011-10-06 Sony Computer Entertainment Inc. Method for an augmented reality character to maintain and exhibit awareness of an observer
US20120008821A1 (en) * 2010-05-10 2012-01-12 Videosurf, Inc Video visual and audio query
WO2012122280A1 (en) * 2011-03-07 2012-09-13 Fourth Wall Studios, Inc. Transmedia user experience engines

Similar Documents

Publication Publication Date Title
US10609308B2 Overlay non-video content on a mobile device
US20220150572A1 (en) Live video streaming services
US11171893B2 (en) Methods and systems for providing virtual collaboration via network
US9762817B2 (en) Overlay non-video content on a mobile device
US10423512B2 (en) Method of collecting and processing computer user data during interaction with web-based content
TWI409691B (en) Comment filters for real-time multimedia broadcast sessions
US8913171B2 (en) Methods and systems for dynamically presenting enhanced content during a presentation of a media content instance
US10700944B2 (en) Sensor data aggregation system
KR20190107167A (en) Gallery of messages with a shared interest
CN104620522A (en) Determining user interest through detected physical indicia
EP2860968B1 (en) Information processing device, information processing method, and program
US20130041976A1 (en) Context-aware delivery of content
CN102859486A (en) Zoom display navigation
KR20150074006A (en) Hybrid advertising supported and user-owned content presentation
US20140325540A1 (en) Media synchronized advertising overlay
US11283890B2 (en) Post-engagement metadata generation
US20220020053A1 (en) Apparatus, systems and methods for acquiring commentary about a media content event
US20090049390A1 (en) Methods and apparatuses for distributing content based on profile information and rating the content
US20190012834A1 (en) Augmented Content System and Method
WO2014078416A1 (en) Systems and methods for identifying narratives related to a media stream
US20220270368A1 (en) Interactive video system for sports media
CN114780180A (en) Object data display method and device, electronic equipment and storage medium
WO2014078391A1 (en) Systems and methods for synchronizing content playback across media devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13854510

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13854510

Country of ref document: EP

Kind code of ref document: A1