US20150229990A1 - Linked content - Google Patents
- Publication number
- US20150229990A1 (Application US14/691,557; application number US201514691557A)
- Authority
- US
- United States
- Prior art keywords
- content
- audience
- subsequent
- audience member
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
- G06Q30/0251—Targeted advertisements
- G06Q30/0261—Targeted advertisements based on user location
- G06Q30/0269—Targeted advertisements based on user profile or attribute
- H04H60/33—Arrangements for monitoring the users' behaviour or opinions
- H04H60/45—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, for identifying users
- H04H60/63—Arrangements for services using the result of monitoring, identification or recognition, for services of sales
- H04N21/252—Processing of multiple end-users' preferences to derive collaborative data
- H04N21/25841—Management of client data involving the geographical location of the client
- H04N21/25883—Management of end-user data being end-user demographical data, e.g. age, family status or address
- H04N21/4126—Peripherals receiving signals from specially adapted client devices, the peripheral being portable, e.g. PDAs or mobile phones
- H04N21/41265—The portable peripheral having a remote control device for bidirectional communication between the remote control device and client device
- H04N21/4223—Cameras (input-only peripherals connected to specially adapted client devices)
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4524—Management of client data or end-user data involving the geographical location of the client
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
- H04N21/4668—Learning process for intelligent management, for recommending content, e.g. movies
- H04N21/4753—End-user interface for inputting end-user data, for user identification, e.g. by entering a PIN or password
- H04N21/812—Monomedia components thereof involving advertisement data
- H04H2201/37—Return channel provided via a different channel than the broadcast channel
- H04H2201/40—Additional data relating to the broadcast data available via a different channel than the broadcast channel
- H04H60/40—Arrangements for identifying broadcast time
- H04H60/51—Arrangements for identifying locations of receiving stations
Definitions
- Advertisements are shown before, during, and after media presentations, and are even included within media presentations through product placement.
- The advertisements shown with the media are typically selected based on anticipated audience demographics and the interests of the anticipated audience, and they are shown regardless of whether it is a good time for the audience member to act in response to the advertisement.
- Embodiments of the present invention generate linked content.
- Linked content may include a preliminary content and one or more subsequent contents.
- The viewer is shown the subsequent content only upon detection of a positive reaction to the preliminary content.
- The subsequent content is associated with presentation triggers that specify a context in which the subsequent content should be presented.
- The context may be defined by a time of day, location, user activity, and/or other parameters.
- For example, the context could be the user driving (activity context) near a coffee shop (location context) in the morning (time context).
- The preliminary content and subsequent content may be shown on different devices.
- For example, the preliminary content may be shown on a television as part of a media presentation, while the subsequent content could be shown on a mobile device, such as a smartphone or a tablet.
- Showing the subsequent content on a location-aware (e.g., GPS-enabled) mobile device allows the presentation trigger to include a location.
- The reaction to the preliminary content may be explicit or implicit.
- An explicit reaction could be the user making an affirmative gesture indicating that he likes the preliminary content.
- An implicit reaction may be derived through an analysis of image data.
- The image data may be generated by a depth camera, video camera, or other imaging device.
- The user's facial expressions may be analyzed to determine a reaction to the preliminary content.
- Biometric readings may also be derived from the image data.
- FIG. 1 is a block diagram of an exemplary computing environment suitable for implementing embodiments of the invention.
- FIG. 2 is a diagram of an online entertainment environment, in accordance with an embodiment of the present invention.
- FIG. 3 is a diagram of a remote entertainment computing environment, in accordance with an embodiment of the present invention.
- FIG. 4 is a diagram of an exemplary audience area captured using a depth camera, in accordance with an embodiment of the present invention.
- FIG. 5 is a diagram of an exemplary audience area captured using a depth camera, in accordance with an embodiment of the present invention.
- FIG. 6 is a diagram of an exemplary audience area captured using a depth camera, in accordance with an embodiment of the present invention.
- FIG. 7 is a diagram showing an ad path, in accordance with an embodiment of the present invention.
- FIG. 8 is a diagram showing an ad path, in accordance with an embodiment of the present invention.
- FIG. 9 is a flow chart showing a method of providing linked advertisements, in accordance with an embodiment of the present invention.
- FIG. 10 is a flow chart showing a method of assigning an audience member to an ad path, in accordance with an embodiment of the present invention.
- FIG. 11 is a flow chart showing a method of managing an ad path, in accordance with an embodiment of the present invention.
- Embodiments of the present invention generate linked advertisements.
- Linked advertisements may include a preliminary advertisement and one or more subsequent advertisements.
- The preliminary advertisement and subsequent advertisements may be separated by time, location, and device.
- The preliminary advertisement and subsequent advertisements display related products or services and may operate together as a unified ad campaign.
- The viewer is shown the subsequent advertisement only upon detection of a positive reaction to the preliminary advertisement.
- The subsequent advertisement is associated with presentation triggers that specify a context in which the subsequent advertisement should be presented.
- The context may be defined by a time of day, location, user activity, and/or other parameters.
- The presentation trigger may specify a context in which the user is able to purchase an advertised good or service.
- For example, the context could be the user driving (activity context) near a coffee shop (location context) in the morning (time context).
- The preliminary advertisement and subsequent advertisement may be shown on different devices.
- For example, the preliminary advertisement may be shown on a television as part of a media presentation (e.g., as a separate ad or a product placement), while the subsequent advertisement could be shown on a mobile device, such as a smartphone or a tablet.
- Showing the subsequent advertisement on a location-aware (e.g., GPS-enabled) mobile device allows the presentation trigger to include a location.
- Other contextual parameters may include the time of day and the user's current activity.
- For example, the presentation trigger could specify that the subsequent advertisement is shown only during business hours for the retail outlet.
- As another example, the subsequent advertisement is shown only at a time when the user is likely to purchase a product or service. For example, a user may be likely to purchase food at a restaurant during lunch time or dinner time.
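The trigger logic described above, in which time-of-day, location, and activity contexts must all hold before a subsequent advertisement is surfaced, can be sketched as follows. The class, field names, thresholds, and coordinates are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass
from datetime import time
import math

@dataclass
class PresentationTrigger:
    """Context in which a subsequent ad should be presented (illustrative)."""
    start: time        # e.g. start of the shop's business hours
    end: time          # e.g. end of business hours
    place_lat: float   # target location, e.g. the coffee shop
    place_lon: float
    radius_km: float   # how close the user must be
    activity: str      # e.g. "driving"

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def trigger_satisfied(trig, now, lat, lon, activity):
    """All three contexts (time, location, activity) must hold."""
    in_window = trig.start <= now <= trig.end
    near = haversine_km(lat, lon, trig.place_lat, trig.place_lon) <= trig.radius_km
    return in_window and near and activity == trig.activity

# Morning coffee-shop example from the text: driving near the shop at 8:30 am.
trig = PresentationTrigger(time(6, 0), time(11, 0), 47.61, -122.33, 1.0, "driving")
print(trigger_satisfied(trig, time(8, 30), 47.612, -122.331, "driving"))  # True
```

In a real system the location and activity inputs would come from the location-aware mobile device mentioned above, and the check would run only while the ad is active.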
- The reaction to the preliminary advertisement may be explicit or implicit.
- An explicit reaction could be the user making an affirmative gesture indicating that they like the preliminary advertisement.
- The user could also explicitly request more information or otherwise express interest in the advertised product through a companion device, such as a smartphone or tablet.
- A companion application may be provided that allows the user to explicitly indicate he likes an advertised product or service.
- The subsequent advertisement may include a coupon or other incentive for the user to try the product or service. In this way, the user is encouraged to express interest in an advertised product or service through the companion application.
- An explicit indication of interest may also be made through a gesture, game pad, keyboard, controller, or voice command picked up by an entertainment device facilitating the linked advertising.
- The entertainment device could be a television, game console, cable box, or other similar device that is able to receive input from the audience and correlate it with the content the display device shows.
- Alternatively, the user's reaction to an advertised product may be implicit.
- The implicit reaction may be derived through an analysis of image data.
- A depth camera, video camera, or other imaging device may generate the image data.
- The user's facial expressions may be analyzed to determine a reaction to an advertised product or service.
- Biometric readings may also be derived from the image data. For example, facial flushing and heart rate may be determined from the image data and used to classify the reaction as positive, negative, or indifferent.
- The audience member's facial expressions and biometric changes are compared against a baseline for that audience member to determine whether the reaction is positive, as well as the strength of the reaction.
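A minimal sketch of the baseline comparison described above: the reaction's polarity and strength are derived from how far the observed biometrics deviate from the audience member's own baseline. The field names, weighting, and thresholds are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Biometrics:
    heart_rate: float  # beats per minute, derived from image data
    flush: float       # facial-flushing score in [0, 1]
    smile: float       # facial-expression score in [-1, 1]

def classify_reaction(observed, baseline, weak=0.15):
    """Compare observed biometrics to the member's baseline.

    Returns (polarity, strength), where polarity is "positive",
    "negative", or "indifferent", and strength is in [0, 1].
    The weak-reaction threshold is illustrative, not from the patent.
    """
    # Arousal: how far heart rate and flushing moved off the baseline.
    arousal = (abs(observed.heart_rate - baseline.heart_rate) / baseline.heart_rate
               + abs(observed.flush - baseline.flush))
    strength = min(arousal, 1.0)
    if strength < weak:
        return "indifferent", strength
    # Facial expression decides whether the aroused reaction is positive.
    polarity = "positive" if observed.smile > baseline.smile else "negative"
    return polarity, strength

base = Biometrics(heart_rate=65, flush=0.10, smile=0.0)
seen = Biometrics(heart_rate=80, flush=0.25, smile=0.6)
print(classify_reaction(seen, base))
```

An actual implementation would use a trained classifier over many more cues (pupil dilation, gesture, posture) rather than this hand-written rule.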
- The subsequent advertisements may be part of an advertising path that includes a series of advertisements with different presentation triggers, content, and incentives.
- The strength of the user's reaction to the preliminary advertisement is used to activate different advertisements within the path.
- The presentation trigger for an active subsequent advertisement is monitored, whereas presentation triggers for inactive advertisements are not.
- Ads within the path may be activated and deactivated in response to additional user actions or rules.
- A strong positive reaction to the preliminary ad activates a subsequent advertisement having a comparatively lower incentive.
- For example, the subsequent advertisement may include a 50-cent discount on a sandwich.
- A mild or weak positive reaction to the preliminary advertisement may activate a subsequent advertisement having a higher incentive.
- For example, the subsequent advertisement could offer a two-dollar discount on a sandwich.
- The user's prior purchase history may also be used to determine which ad(s) in an ad path to activate. For example, if a user repeatedly ignores an advertisement with a lower incentive, he may be moved to an advertisement with a higher incentive. Similarly, if a user gives a strong positive response to a preliminary advertisement but is known to regularly purchase products associated with the advertisement, he may be associated with a subsequent advertisement having a lower incentive. Alternatively, the user could be associated with a subsequent advertisement that reminds the user of a consumer club he is in, such as a sandwich club. This subsequent advertisement could remind the user that he needs to purchase two more sandwiches before he earns a free sandwich.
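The activation rules above (strong reaction → lower incentive, weak reaction → higher incentive, adjusted by purchase history) could be sketched as a simple selection over an ad path. All names, amounts, and thresholds here are hypothetical:

```python
# Ads in a path, ordered from lowest to highest incentive (hypothetical data).
AD_PATH = [
    {"name": "sandwich_club_reminder", "incentive": 0.00},
    {"name": "small_discount",         "incentive": 0.50},  # 50-cent discount
    {"name": "large_discount",         "incentive": 2.00},  # two-dollar discount
]

def activate_ad(reaction_strength, regular_buyer, ignored_before):
    """Pick which ad in the path to activate (illustrative rules).

    - Regular buyers get the loyalty-club reminder instead of a discount.
    - A repeatedly ignored lower incentive escalates to the higher one.
    - A weak reaction also calls for the higher incentive.
    - Otherwise a strong reaction is rewarded with the lower incentive.
    """
    if regular_buyer:
        return AD_PATH[0]
    if ignored_before or reaction_strength < 0.4:
        return AD_PATH[2]
    return AD_PATH[1]

print(activate_ad(0.8, regular_buyer=False, ignored_before=False)["name"])  # small_discount
print(activate_ad(0.2, regular_buyer=False, ignored_before=False)["name"])  # large_discount
print(activate_ad(0.8, regular_buyer=True,  ignored_before=False)["name"])  # sandwich_club_reminder
```

Only the returned ad would then have its presentation trigger monitored, as the text notes; the others in the path stay inactive.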
- Embodiments of the present invention use audience data to select an appropriate ad path from among several available ad paths.
- The audience data may be derived from image data generated by an imaging device, such as a video camera, that has a view of the audience area.
- Automated image analysis may be used to generate audience data that is then used to select the ad path.
- The audience data derived from the image data may include the number of people present in the audience, the engagement level of those people, their personal characteristics, and their response to the media content. Image data may be analyzed to determine how many people are present in the audience and the characteristics of those people, and different levels of engagement may be assigned to audience members.
- Audience data includes a level of engagement or attentiveness.
- A person's attentiveness may be classified into one or more categories or levels, ranging from not paying attention to full attention.
- For example, a person who is not looking at the television and is in a conversation with somebody else, either in the room or on the phone, may be classified as not paying attention or fully distracted.
- Somebody in the room who is not looking at the TV but is not otherwise obviously distracted may have a medium level of attentiveness.
- Someone who is looking directly at the television without an apparent distraction may be classified as fully attentive.
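The attentiveness levels described above reduce to a small decision rule over cues an image classifier might emit (gaze direction and apparent distraction). A sketch, with made-up cue names standing in for classifier outputs:

```python
def attentiveness(looking_at_tv, in_conversation):
    """Classify attentiveness from two observable cues.

    Levels follow the examples in the text: conversing and not watching
    -> "distracted"; not watching but not obviously distracted -> "medium";
    watching with no apparent distraction -> "full".
    """
    if not looking_at_tv and in_conversation:
        return "distracted"
    if not looking_at_tv:
        return "medium"
    return "full"

print(attentiveness(looking_at_tv=True, in_conversation=False))   # full
print(attentiveness(looking_at_tv=False, in_conversation=False))  # medium
print(attentiveness(looking_at_tv=False, in_conversation=True))   # distracted
```

In practice the boolean cues would themselves be outputs of the machine-learning image classifier the text mentions, not hand-supplied flags.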
- A machine-learning image classifier may assign the levels of attentiveness by analyzing image data.
- Audience data may also include a person's reaction to a media content, such as a preliminary advertisement.
- The person's reaction may be measured by studying biometrics gleaned from the image data. For example, heartbeat and facial flushing may be detected in the image data. Similarly, pupil dilation and other facial expressions may be associated with different reactions. All of these biometric characteristics may be interpreted by a classifier to determine whether the person likes or dislikes a media content.
- The different audience data may be used to determine when the reaction criteria associated with a preliminary advertisement are satisfied. For example, a criterion may not be satisfied when a person is present but shows a low level of attentiveness. An advertiser may specify that an ad path is activated only when one or more of the individuals present are fully attentive.
- A person's reaction to a primary ad or other media content may be used to determine whether a subsequent ad is activated. For example, a person classified as having a negative reaction to a product placement within a movie may not be associated with an ad path for the product advertised through the product placement. Alternatively, a person who responds positively to a primary ad may be associated with an ad path for a related product or service.
- The personal characteristics of audience members may also be considered when selecting an ad path.
- The personal characteristics of audience members include demographic data that may be discerned through image classification or by associating the person with a known personal account. For example, an entertainment company may require that the person submit a name, age, address, and other demographic information to maintain a personal account.
- The personal account may be associated with a facial recognition program that is used to authenticate the person. Regardless of whether the entertainment company is providing the primary ad, the facial recognition record associated with the personal account could be used to identify the person in the audience associated with the account.
- All of the audience members may be associated with accounts, allowing precise demographic information to be associated with each audience member. Account information may also be used to associate multiple devices with an audience member.
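The account linkage described above can be sketched as a lookup from a recognized face to the member's demographics and registered devices, so a subsequent ad can be routed to one of those devices. The store, identifiers, and field names are hypothetical:

```python
# Hypothetical account store keyed by a face-recognition identity,
# mapping to demographics and the member's registered devices.
ACCOUNTS = {
    "face-id-123": {
        "name": "Alex",
        "age": 34,
        "devices": ["living-room-tv", "alex-phone"],
    },
}

def identify_member(face_id):
    """Resolve a recognized face to (demographics, devices).

    Returns None when the face is not enrolled, in which case only
    image-classification demographics would be available.
    """
    account = ACCOUNTS.get(face_id)
    if account is None:
        return None
    demographics = {"name": account["name"], "age": account["age"]}
    return demographics, account["devices"]

demo, devices = identify_member("face-id-123")
print(devices)  # ['living-room-tv', 'alex-phone']
```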
- Referring initially to FIG. 1, an exemplary operating environment for implementing embodiments of the invention is shown and designated generally as computing device 100 .
- Computing device 100 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.
- the invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device.
- Program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types.
- Embodiments of the invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc.
- Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
- computing device 100 includes a bus 110 that directly or indirectly couples the following devices: memory 112 , one or more processors 114 , one or more presentation components 116 , input/output (I/O) ports 118 , I/O components 120 , and an illustrative power supply 122 .
- Bus 110 represents what may be one or more busses (such as an address bus, data bus, or combination thereof).
- FIG. 1 is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the invention. No distinction is made between categories such as “workstation,” “server,” “laptop,” or “handheld device,” as all are contemplated within the scope of FIG. 1 and are referred to as a “computer” or “computing device.”
- Computer-readable media can be any available media that can be accessed by computing device 100 and includes both volatile and nonvolatile media, removable and non-removable media.
- Computer-readable media may comprise computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
- Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media does not comprise a propagated data signal.
- Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
- Memory 112 includes computer-storage media in the form of volatile and/or nonvolatile memory.
- the memory 112 may be removable, nonremovable, or a combination thereof.
- Exemplary memory includes solid-state memory, hard drives, optical-disc drives, etc.
- Computing device 100 includes one or more processors 114 that read data from various entities such as bus 110 , memory 112 or I/O components 120 .
- Presentation component(s) 116 present data indications to a person or other device.
- Exemplary presentation components 116 include a display device, speaker, printing component, vibrating component, etc.
- I/O ports 118 allow computing device 100 to be logically coupled to other devices including I/O components 120 , some of which may be built in.
- Illustrative I/O components 120 include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
- the online entertainment environment 200 comprises various entertainment devices connected through a network 220 to an entertainment service 230 .
- Exemplary entertainment devices include a game console 210 , a tablet 212 , a personal computer 214 , a digital video recorder 217 , a cable box 218 , and a television 216 .
- Use of other entertainment devices not depicted in FIG. 2 , such as smart phones, is also possible.
- the game console 210 may have one or more game controllers communicatively coupled to it.
- the tablet 212 may act as an input device for the game console 210 or the personal computer 214 .
- the tablet 212 is a stand-alone entertainment device.
- Network 220 may be a wide area network, such as the Internet. As can be seen, most devices shown in FIG. 2 could be directly connected to the network 220 . The devices shown in FIG. 2 are able to communicate with each other through the network 220 and/or directly, as indicated by the lines connecting the devices.
- the controllers associated with game console 210 include a game pad 211 , a headset 236 , an imaging device 213 , and a tablet 212 .
- Tablet 212 is shown coupled directly to the game console 210 , but the connection could be indirect through the Internet or a subnet.
- the entertainment service 230 helps make a connection between the tablet 212 and the game console 210 .
- the tablet 212 is capable of generating numerous input streams and may also serve as a display output mechanism. In addition to being a primary display, the tablet 212 could provide supplemental information related to primary information shown on a primary display, such as television 216 .
- the input streams generated by the tablet 212 include video and picture data, audio data, movement data, touch screen data, and keyboard input data.
- the headset 236 captures audio input from a player and the player's surroundings and may also act as an output device, if it is coupled with a headphone or other speaker.
- the headset 236 may facilitate voice control of the game console or other entertainment devices.
- a microphone (not shown) may be integrated into or connected to any of the entertainment devices to facilitate voice control.
- the imaging device 213 is coupled to game console 210 .
- the imaging device 213 may be a video camera, a still camera, a depth camera, or a video camera capable of taking still or streaming images.
- the imaging device 213 includes an infrared light and an infrared camera.
- the imaging device 213 may also include a microphone, speaker, and other sensors.
- the imaging device 213 is a depth camera that generates three-dimensional image data.
- the three-dimensional image data may be a point cloud or depth cloud.
- the three-dimensional image data may associate individual pixels with both depth data and color data.
- a pixel within the depth cloud may include red, green, and blue color data, and X, Y, and Z coordinates. Stereoscopic depth cameras are also possible.
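The per-pixel structure described above can be sketched as a small data type. This is an illustrative representation only; the `DepthPixel` name and field layout are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

# Illustrative depth-cloud pixel: color channels plus X, Y, Z coordinates.
@dataclass
class DepthPixel:
    r: int      # red, 0-255
    g: int      # green
    b: int      # blue
    x: float    # horizontal position in camera space
    y: float    # vertical position
    z: float    # depth, i.e., distance from the camera

# A depth cloud is then simply a collection of such pixels.
cloud = [DepthPixel(120, 80, 60, 0.50, 1.20, 2.30),
         DepthPixel(200, 40, 30, -0.10, 0.90, 1.90)]
nearest = min(cloud, key=lambda p: p.z)  # pixel closest to the camera
```

Associating depth with each color sample is what lets later stages segment individual people out of the scene by distance.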
- the imaging device 213 may have several image-gathering components.
- the imaging device 213 may have multiple cameras.
- the imaging device 213 may have multidirectional functionality. In this way, the imaging device 213 may be able to expand or narrow a viewing range or shift its viewing range from side to side and up and down.
- the game console 210 may have image-processing functionality that is capable of identifying objects within the depth cloud. For example, individual people may be identified along with characteristics of the individual people. In one embodiment, gestures made by the individual people may be distinguished and used to control games or media output by the game console 210 .
- the game console 210 may use the image data, including depth cloud data, for facial recognition purposes to specifically identify individuals within an audience area.
- the facial recognition function may associate individuals with an account associated with a gaming service or media service, or used for login security purposes, to specifically identify the individual.
- the game console 210 uses microphone data and/or image data captured through the imaging device 213 to identify content being displayed through television 216 .
- a microphone may pick up the audio data of a movie being generated by the cable box 218 and displayed on television 216 .
- the audio data may be compared with a database of known audio data and the data identified using automatic content recognition techniques, for example.
- Content being displayed through the tablet 212 or the PC 214 may be identified in a similar manner. In this way, the game console 210 is able to determine what is presently being displayed to a person regardless of whether the game console 210 is the device generating and/or distributing the content for display.
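The recognition flow above can be sketched as a toy fingerprint lookup: captured audio is reduced to a coarse signature and matched against a database of known titles. Real automatic content recognition uses robust spectral fingerprints that tolerate noise; the exact-match hashing here is an illustrative assumption only.

```python
# Toy sketch of automatic content recognition (ACR).
def fingerprint(samples, window=4):
    # Reduce each window of audio samples to a coarse average signature.
    return tuple(round(sum(samples[i:i + window]) / window, 1)
                 for i in range(0, len(samples) - window + 1, window))

# Hypothetical database mapping known fingerprints to media titles.
known_titles = {
    fingerprint([0.1, 0.2, 0.1, 0.2, 0.9, 0.8, 0.9, 0.8]): "Basketball Game",
    fingerprint([0.5, 0.5, 0.4, 0.4, 0.1, 0.1, 0.2, 0.2]): "Movie Trailer",
}

def identify(captured):
    # Compare the captured audio's fingerprint with the database of known audio.
    return known_titles.get(fingerprint(captured), "unknown")
```

Because the same fingerprint function is used to build the database and to query it, the console can identify content regardless of which device is actually generating it.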
- the game console 210 may include classification programs that analyze image and/or audio data to generate audience data. For example, the game console 210 may determine the number of people in the audience, audience member characteristics, levels of engagement, and audience response. Audio data may be compared with a content database to identify content being displayed. The audio data may be captured by a microphone coupled to the game console 210 . In this way, the content displayed may be identified even when it is output by an entertainment device other than the game console 210 . Content output by the game console 210 could also be identified using the audio signals.
- the game console 210 includes a local storage component.
- the local storage component may store user profiles for individual persons or groups of persons viewing and/or reacting to media content. Each user profile may be stored as a separate file, such as a cookie.
- the information stored in the user profiles may be updated automatically.
- personal information, viewing histories, viewing selections, personal preferences, the number of times a person has viewed known media content, the portions of known media content the person has viewed, a person's responses to known media content, and a person's engagement levels in known media content may be stored in a user profile associated with a person. As described elsewhere, the person may be first identified before information is stored in a user profile associated with the person.
- a person's characteristics may be first recognized and mapped to an existing user profile for a person with similar or the same characteristics.
- Demographic information may also be stored.
- Each item of information may be stored as a “viewing record” associated with a particular type of media content.
- viewer personas as described below, may be stored in a user profile.
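The profile contents listed above can be sketched as a small store of viewing records, serialized the way the patent suggests profiles are kept as separate files (e.g., a cookie). The class and field names are assumptions for illustration, not the patent's actual format.

```python
import json
import time

class UserProfile:
    """Minimal sketch of a per-person profile on the local storage component."""

    def __init__(self, person_id):
        self.person_id = person_id
        self.viewing_records = []  # one record per item of known media content

    def add_viewing_record(self, title, engagement, response):
        self.viewing_records.append({
            "title": title,            # the known media content viewed
            "engagement": engagement,  # e.g., "high" or "low"
            "response": response,      # e.g., "positive" or "negative"
            "timestamp": time.time(),
        })

    def times_viewed(self, title):
        # Number of times the person has viewed the known media content.
        return sum(1 for r in self.viewing_records if r["title"] == title)

    def to_json(self):
        # Serialized form suitable for writing to a profile file.
        return json.dumps({"person_id": self.person_id,
                           "viewing_records": self.viewing_records})
```

Each call to `add_viewing_record` corresponds to one "viewing record" associated with a particular type of media content, as described above.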
- Entertainment service 230 may comprise multiple computing devices communicatively coupled to each other.
- the entertainment service is implemented using one or more server farms.
- the server farms may be spread out across various geographic regions including cities throughout the world. In this scenario, the entertainment devices may connect to the closest server farms. Embodiments of the present invention are not limited to this setup.
- the entertainment service 230 may provide primary content and secondary content.
- Primary content may include television shows, movies, and video games.
- Secondary content may include advertisements, social content, directors' information and the like.
- FIG. 2 also includes a cable box 218 and a DVR 217 . Both of these devices are capable of receiving content through network 220 . The content may be on-demand or broadcast as through a cable distribution network. Both the cable box 218 and DVR 217 have a direct connection with television 216 . Both devices are capable of outputting content to the television 216 without passing through game console 210 , but in one embodiment the cable box 218 and DVR 217 pass through the game console 210 . As can be seen, game console 210 also has a direct connection to television 216 . Television 216 may be a smart television that is capable of receiving entertainment content directly from entertainment service 230 . As mentioned, the game console 210 may perform audio analysis to determine what media title is being output by the television 216 when the title originates with the cable box 218 , DVR 217 , or television 216 .
- the entertainment environment 300 includes entertainment device A 310 , entertainment device B 312 , entertainment device C 314 , and entertainment device N 316 (hereafter entertainment devices 310 - 316 ).
- Entertainment device N 316 is intended to represent that there could be an almost unlimited number of clients connected to network 305 .
- the entertainment devices 310 - 316 may take different forms.
- the entertainment devices 310 - 316 may be game consoles, televisions, DVRs, cable boxes, personal computers, tablets, or other entertainment devices capable of outputting media.
- the entertainment devices 310 - 316 are capable of gathering viewer data through an imaging device, similar to imaging device 213 of FIG. 2 that was previously described.
- the imaging device could be built into a client, such as a web cam and microphone, or could be a stand-alone device.
- the entertainment devices 310 - 316 include a local storage component configured to store personal profiles for one or more persons.
- the local storage component is described in greater detail above with reference to the game console 210 .
- the entertainment devices 310 - 316 may include classification programs that analyze image data to generate audience data. For example, the entertainment devices 310 - 316 may determine how many people are in the audience, audience member characteristics, levels of engagement, and audience response.
- Network 305 is a wide area network, such as the Internet.
- Network 305 is connected to advertiser 320 , content provider 322 , and secondary content provider 324 .
- the advertiser 320 distributes advertisements to entertainment devices 310 - 316 .
- the advertiser 320 may also cooperate with entertainment service 330 to provide advertisements.
- the content provider 322 provides primary content such as movies, video games, and television shows. The primary content may be provided directly to entertainment devices 310 - 316 or indirectly through entertainment service 330 .
- Secondary content provider 324 provides content that complements the primary content.
- Secondary content may be a director's cut, information about a character, game help information, and other content that complements the primary content.
- the same entity may generate both primary content and secondary content.
- a television show may be generated by a director who also generates additional secondary content to complement the television show.
- the secondary content and primary content may be purchased separately and could be displayed on different devices.
- the primary content could be displayed through a television while the secondary content is viewed on a companion device, such as a tablet.
- the advertiser 320 , content provider 322 , and secondary content provider 324 may stream content directly to entertainment devices or seek to have their content distributed by a service, such as entertainment service 330 .
- the entertainment service 330 provides content and advertisements to entertainment devices.
- the entertainment service 330 is shown as a single block. In reality, its functions may be widely distributed across multiple devices.
- the various features of entertainment service 330 described herein may be provided by multiple entities and components.
- the entertainment service 330 comprises a game execution environment 332 , a game data store 334 , a content data store 336 , a distribution component 338 , a streaming component 340 , a content recognition database 342 , an ad data store 344 , an ad placement component 346 , an ad sales component 348 , an audience data store 350 , an audience processing component 352 , and an audience distribution component 354 .
- the various components may work together to provide content, including games, advertisements, and media titles to a client, and capture audience data.
- the audience data may be used to specifically target advertisements and/or content to a person.
- the audience data may also be aggregated and shared with or sold to others.
- the game execution environment 332 provides an online gaming experience to a client device.
- the game execution environment 332 comprises the gaming resources required to execute a game.
- the game execution environment 332 comprises active memory along with computing and video processing.
- the game execution environment 332 receives gaming controls, such as controller input, through an I/O channel and causes the game to be manipulated and progressed according to its programming.
- the game execution environment 332 outputs a rendered video stream that is communicated to the game device.
- Game progress may be saved online and associated with an individual person that has an ID through a gaming service.
- the game ID may be associated with a facial pattern.
- the game data store 334 stores game code for various game titles.
- the game execution environment 332 may retrieve a game title and execute it to provide a gaming experience.
- the content distribution component 338 may download a game title to an entertainment device, such as entertainment device A 310 .
- the content data store 336 stores media titles, such as songs, videos, television shows, and other content.
- the distribution component 338 may communicate this content from content data store 336 to the entertainment devices 310 - 316 . Once downloaded, an entertainment device may play or otherwise output the content. Alternatively, the streaming component 340 may use content from content data store 336 to stream the content to the person.
- the content recognition database 342 includes a collection of audio clips associated with known media titles that may be compared to audio input received at the entertainment service 330 .
- the received audio input (e.g., received from the game console 210 of FIG. 2 ) is compared against the content recognition database 342 to determine the source of the audio input (i.e., the identity of the media content). The identified media title/content is then communicated back to the entertainment device (e.g., the game console) for further processing.
- Exemplary processing may include associating the identified media content with a person that viewed or is actively viewing the media content and storing the association as a viewing record.
- the entertainment service 330 also provides advertisements. Advertisements available for distribution may be stored within ad data store 344 .
- the advertisements may be presented as an overlay in conjunction with primary content and may be partial or full-screen advertisements that are presented between segments of a media presentation or between the beginning and end of a media presentation, such as a television commercial.
- the advertisements may be associated with audio content. Additionally, the advertisements may take the form of secondary content that is displayed on a companion device in conjunction with a display of primary content.
- the advertisements may also be presented when a person associated with a targeted persona is located in the audience area and/or is logged in to the entertainment service 330 , as further described below.
- the ad placement component 346 determines when an advertisement should be displayed to a person and/or what advertisement should be displayed.
- the ad placement component 346 may consume real-time audience data and automatically place an advertisement associated with a highest-bidding advertiser in front of one or more viewers because the audience data indicates that the advertiser's bidding criteria are satisfied. For example, an advertiser may wish to display an advertisement to men present in Kansas City, Mo. When the audience data indicates that one or more men in Kansas City are viewing primary content, an ad could be served with that primary content.
- the ad may be inserted into streaming content or downloaded to the various entertainment devices along with triggering mechanisms or instructions on when the advertisement should be displayed to the person.
- the triggering mechanisms may specify desired audience data that triggers display of the ad.
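Such a triggering mechanism can be sketched as a predicate over current audience data. The trigger fields (`region`, `gender`, `min_age`) are hypothetical names chosen for illustration; the patent does not specify a trigger format.

```python
def trigger_satisfied(trigger, audience):
    """Return True when the audience data matches the ad's trigger criteria."""
    # The whole audience must be in the targeted region, if one is specified.
    if "region" in trigger and audience["region"] != trigger["region"]:
        return False
    # At least one audience member must match the gender/age criteria.
    return any(member["gender"] == trigger.get("gender", member["gender"])
               and member["age"] >= trigger.get("min_age", 0)
               for member in audience["members"])

# Example mirroring the Kansas City scenario described above.
audience = {"region": "Kansas City, MO",
            "members": [{"gender": "male", "age": 34},
                        {"gender": "female", "age": 29}]}
trigger = {"region": "Kansas City, MO", "gender": "male", "min_age": 18}
```

A downloaded advertisement would remain dormant on the entertainment device until a check like this one passes.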
- the ad placement component 346 may manage linked advertisements.
- the ad placement component 346 may communicate preliminary advertisements and subsequent advertisements to entertainment clients.
- the ad placement component 346 could communicate a preliminary advertisement and associated response criteria to a smart TV.
- the smart TV could indicate that an audience member satisfied the response criteria.
- the ad placement component 346 could communicate subsequent advertisements to the audience member's tablet along with a presentation trigger.
- the ad placement component 346 could activate and deactivate subsequent advertisements as viewer responses to the advertisements are received.
- viewers' responses to ads in an ad path may be tracked by the ad placement component 346 .
- the ad placement component 346 may maintain a record of viewer responses and purchases.
- the ad placement component 346 may bill advertisers using the viewer response data.
- the ad sales component 348 interacts with advertisers 320 to set a price for displaying an advertisement.
- an auction is conducted for various advertising space.
- the auction may be a real-time auction in which the highest bidder is selected when a viewer or viewing opportunity satisfies the advertiser's criteria.
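The real-time auction described above reduces to selecting the highest bid among advertisers whose criteria the current viewing opportunity satisfies. The bid structure below is an assumed sketch, not the patent's protocol.

```python
def run_auction(bids, opportunity):
    """Pick the highest bidder whose criteria the viewing opportunity meets."""
    eligible = [b for b in bids if b["criteria"](opportunity)]
    return max(eligible, key=lambda b: b["amount"], default=None)

# Hypothetical advertisers bidding on a viewing opportunity.
bids = [
    {"advertiser": "CarCo", "amount": 2.50,
     "criteria": lambda o: o["persona"] == "car enthusiast"},
    {"advertiser": "SodaCo", "amount": 1.75,
     "criteria": lambda o: True},  # no targeting restriction
]
winner = run_auction(bids, {"persona": "car enthusiast"})
```

When a more specific advertiser's criteria are not met, the opportunity falls through to lower, less restrictive bids.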
- the audience data store 350 aggregates and stores audience data received from entertainment devices 310 - 316 .
- the audience data may first be parsed according to known types or titles of media content. Each item of audience data that relates to a known type or title of media content is a viewing record for that media content. Viewing records for each type of media content may be aggregated, thereby generating viewing data.
- the viewing data may be summarized according to categories. Exemplary categories include a total number of persons that watched the content, the average number of persons per household that watched the content, a number of times certain persons watched the content, a determined response of people toward the content, a level of engagement of people in the media title, a length of time individuals watched the content, the common distractions that were ignored or engaged in while the content was being displayed, and the like.
- the viewing data may similarly be summarized according to types of persons that watched the known media content. For example, personal characteristics of the persons, demographic information about the persons, and the like may be summarized within the viewing data.
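The aggregation step above can be sketched as grouping viewing records by media title and rolling them up into summary categories. The record fields are illustrative assumptions; the summary categories (unique viewers, total views, highly engaged views) are a subset of those listed above.

```python
from collections import defaultdict

def summarize(viewing_records):
    """Aggregate per-person viewing records into per-title viewing data."""
    summary = defaultdict(lambda: {"viewers": set(), "views": 0, "engaged": 0})
    for rec in viewing_records:
        s = summary[rec["title"]]
        s["viewers"].add(rec["person_id"])   # count each person once
        s["views"] += 1                      # every record is one view
        s["engaged"] += rec["engagement"] == "high"
    return {title: {"unique_viewers": len(s["viewers"]),
                    "total_views": s["views"],
                    "high_engagement_views": s["engaged"]}
            for title, s in summary.items()}
```

Tracking viewers as a set while counting views separately distinguishes "a total number of persons that watched" from "a number of times certain persons watched."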
- the audience processing component 352 may build and assign personas using the audience data and a machine-learning algorithm.
- a persona is an abstraction of a person or groups of people that describes preferences or characteristics about the person or groups of people. The personas may be based on media content the persons have viewed or listened to, as well as other personal information stored in a user profile on the entertainment device (e.g., game console) and associated with the person. For example, the persona could define a person as a female between the ages of 20 and 35 having an interest in science fiction, movies, and sports. Similarly, a person that always has a positive emotional response to car commercials may be assigned a persona of “car enthusiast.” More than one persona may be assigned to an individual or group of individuals.
- a family of five may have a group persona of “animated film enthusiasts” and “football enthusiasts.” Within the family, a child may be assigned a persona of “likes video games,” while the child's mother may be assigned a persona of “dislikes video games.” It will be understood that the examples provided herein are merely exemplary. Any number or type of personas may be assigned to a person.
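Persona assignment can be sketched with simple rules over a person's profile data. The patent contemplates a machine-learning algorithm; the rule form and field names below are assumptions chosen only to make the mapping concrete.

```python
# Hypothetical rules mapping profile data to the personas named above.
PERSONA_RULES = [
    ("car enthusiast",
     lambda p: p.get("positive_car_ad_responses", 0) >= 3),
    ("animated film enthusiast",
     lambda p: p.get("animated_films_watched", 0) >= 5),
    ("likes video games",
     lambda p: p.get("hours_gaming_per_week", 0) >= 2),
]

def assign_personas(profile):
    # More than one persona may be assigned to the same person.
    return [name for name, rule in PERSONA_RULES if rule(profile)]
```

A person who consistently responds well to car commercials would thus acquire the "car enthusiast" persona without their identity ever being exposed.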
- the audience distribution component 354 may distribute audience data to content providers, advertisers, or other interested parties. For example, the audience distribution component 354 could provide information indicating that 300,000 discrete individuals viewed a television show in a geographic region. The audience data could be derived from image data received at each entertainment device. In addition to the number of people that viewed the media content, more granular information could be provided. For example, the total persons giving full attention to the content could be provided. In addition, response data for people could be provided. To protect the identity of individual persons, only a persona assigned to a person may be exposed and distributed to advertisers. A value may be placed on the distribution, as a condition on its delivery, as described above. The value may also be based on the amount, type, and dearth of viewing data delivered to an advertiser or content publisher.
- the audience area 400 is the area in front of the display device 410 .
- the audience area 400 comprises the area from which a person can see the content.
- the audience area 400 comprises the area within a viewing range of the imaging device 418 . In most embodiments, however, the viewing range of the imaging device 418 overlaps with the area from which a person can see content on the display device 410 . If the content is only audio content, then the audience area is the area where the person may hear the content.
- an entertainment system that comprises a display device 410 , a game console 412 , a cable box 414 , a DVD player 416 , and an imaging device 418 .
- the game console 412 may be similar to game console 210 of FIG. 2 described previously.
- the cable box 414 and the DVD player 416 may stream content from an entertainment service, such as entertainment service 330 of FIG. 3 , to the display device 410 (e.g., television).
- the game console 412 , cable box 414 , and the DVD player 416 are all coupled to the display device 410 . These devices may communicate content to the display device 410 via a wired or wireless connection, and the display device 410 may display the content.
- the content shown on the display device 410 may be selected by one or more persons within the audience. For example, a person in the audience may select content by inserting a DVD into the DVD player 416 or select content by clicking, tapping, gesturing, or pushing a button on a companion device (e.g., a tablet) or a remote in communication with the display device 410 . Content selected for viewing may be tracked and stored on the game console 412 .
- the imaging device 418 is connected to the game console 412 .
- the imaging device 418 may be similar to imaging device 213 of FIG. 2 described previously.
- the imaging device 418 captures image data of the audience area 400 .
- Other devices that include imaging technology, such as the tablet 212 of FIG. 2 may also capture image data and communicate the image data to the game console 412 via a wireless or wired connection.
- audience data may be gathered through image processing. Audience data may include a detected number of persons within the audience area 400 . Persons may be detected based on their form, appendages, height, facial features, movement, speed of movement, associations with other persons, biometric indicators, and the like. Once detected, the persons may be counted and tracked so as to prevent double counting. The number of persons within the audience area 400 also may be automatically updated as people leave and enter the audience area 400 .
- Audience data may similarly include a direction each audience member is facing. Determining the direction persons are facing may, in some embodiments, be based on whether certain facial or body features are moving or detectable. For example, when certain features, such as a person's cheeks, chin, mouth and hairline are detected, they may indicate that a person is facing the display device 410 . Audience data may include a number of persons that are looking toward the display device 410 , periodically glancing at the display device 410 , or not looking at all toward the display device 410 . In some embodiments, a period of time each person views specific media presentations may also comprise audience data.
- audience data may indicate that an individual 420 is standing in the background of the audience area 400 while looking at the display device 410 .
- Individuals 422 , 424 , 426 , and child 428 and child 430 may also be detected and determined to be all facing the display device 410 .
- a man 432 and a woman 434 may be detected and determined to be looking away from the television.
- the dog 436 may also be detected, but characteristics (e.g., short stature, four legs, and long snout) about the dog 436 may not be stored as audience data because they indicate that the dog 436 is not a person.
- audience data may include an identity of each person within the audience area 400 .
- Facial recognition technologies may be utilized to identify a person within the audience area 400 or to create and store a new identity for a person. Additional characteristics of the person (e.g., form, height, weight) may similarly be analyzed to identify a person.
- the person's determined characteristics may be compared to characteristics of a person stored in a user profile on the display device 410 . If the determined characteristics match those in a stored user profile, the person may be identified as a person associated with the user profile.
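The matching step above can be sketched as comparing image-derived characteristics against each stored profile. The characteristics used (gender and height) and the tolerance are illustrative assumptions; a real system would weigh many more features, including facial data.

```python
def match_profile(detected, profiles, height_tolerance_cm=5):
    """Return the person_id whose stored characteristics match, else None."""
    for profile in profiles:
        if (profile["gender"] == detected["gender"]
                and abs(profile["height_cm"] - detected["height_cm"])
                <= height_tolerance_cm):
            return profile["person_id"]
    # No match: a new identity may be created and stored for the person.
    return None

# Hypothetical stored profiles for a household.
profiles = [{"person_id": "dad", "gender": "male", "height_cm": 183},
            {"person_id": "daughter", "gender": "female", "height_cm": 142}]
```

On a miss, the system falls back to creating and storing a new identity, as described above.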
- Audience data may include personal information associated with each person in the audience area.
- personal characteristics include an estimated age, a race, a nationality, a gender, a height, a weight, a disability, a medical condition, a likely activity level (e.g., active or relatively inactive), a role within a family (e.g., father or daughter), and the like.
- an image processor may determine that audience member 420 is a woman of average weight.
- analyzing the width, height, bone structure, and size of individual 432 may lead to a determination that the individual 432 is a male.
- Personal information may also be derived from stored user profile information.
- Such personal information may include an address, a name, an age, a birth date, an income, one or more viewing preferences (e.g., movies, games, and reality television shows) of or login credentials for each person.
- audience data may be generated based on both processed image data and stored personal profile data. For example, if individual 434 is identified and associated with a personal profile of a 13-year-old, processed image data that classifies individual 434 as an adult (i.e., over 18 years old) may be disregarded as inaccurate.
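The reconciliation described above, using the 13-year-old example, can be sketched as a rule that lets stored profile data override a contradictory image-based classification. Treating the profile as authoritative is an assumption drawn from the example.

```python
def reconcile_age(image_estimate, profile_age):
    """Prefer stored profile data when the image classification contradicts it."""
    image_is_adult = image_estimate >= 18
    profile_is_adult = profile_age >= 18
    if image_is_adult != profile_is_adult:
        # The processed image data is disregarded as inaccurate.
        return profile_age
    return image_estimate
```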
- the audience data also comprises an identification of the primary content being displayed when image data is captured at the imaging device 418 .
- the primary content may, in one embodiment, be identified because it is fed through the game console 412 .
- audio output associated with the display device 410 may be received at a microphone associated with the game console 412 .
- the audio output is then compared to a library of known content and determined to correspond to a known media title or a known genre of media title (e.g., sports, music, movies, and the like).
- audience data may indicate that basketball game 411 was being displayed to individuals 420 , 422 , 424 , 426 , 428 , 430 , 432 , and 434 when images of the individuals were captured.
- the audience data may also include a mapping of the image data to the exact segment of the media presentation (e.g., basketball game 411 ) being displayed when the image data was captured.
- Turning to FIG. 5 , an audience area depicting audience members' levels of engagement is shown, in accordance with an embodiment of the present invention.
- the entertainment system is identical to that shown in FIG. 4 , but the audience members have changed.
- Image data captured at the imaging device 418 may be processed similarly to how it was processed with reference to FIG. 4 .
- the image data may be processed to generate audience data that indicates a level of engagement of and/or attention paid by the audience toward the media presentation (e.g., the basketball game 411 ).
- An indication of the level of engagement of a person may be generated based on detected traits of or actions taken by the person, such as facial features, body positioning, and body movement. For example, the movement of a person's eyes, the direction the person's body is facing, the direction the person's face is turned, whether the person is engaged in another task (e.g., talking on the phone), whether the person is talking, the number of additional persons within the audience area 500 , and the movement of the person (e.g., pacing, standing still, sitting, or lying down) are traits of and/or actions taken by a person that may be distilled from the image data.
- the determined traits may then be mapped to predetermined categories or levels of engagement (e.g., a high level of engagement or a low level of engagement). Any number of categories or levels of engagement may be created, and the examples provided herein are merely exemplary.
- a level of engagement may additionally be associated with one or more predetermined categories of distractions.
- traits of or actions taken by a person may be mapped to both a level of engagement and a type of distraction.
- Exemplary actions that indicate a distraction include engaging in conversation, using more than one display device (e.g., the display device 510 and a companion device), reading a book, playing a board game, falling asleep, getting a snack, leaving the audience area 500 , walking around, and the like.
- Exemplary distraction categories may include “interacted with other persons,” “interacted with an animal,” “interacted with other display devices,” “took a brief break,” and the like.
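The mapping from detected traits or actions to a level of engagement and a distraction category can be sketched as a lookup table. The action names, engagement levels, and label assignments below are illustrative assumptions, not the claimed classification scheme.

```python
# Illustrative sketch: map observed actions to a level of engagement and
# collect any distraction categories. Action names, levels, and category
# labels are assumptions for illustration only.

ACTION_MAP = {
    "facing_display":     ("high",   None),
    "talking_on_phone":   ("low",    "interacted with other display devices"),
    "conversing":         ("medium", "interacted with other persons"),
    "reading_book":       ("low",    "took a brief break"),
    "left_audience_area": ("none",   "took a brief break"),
}

def classify_engagement(actions):
    """Lower-engagement actions dominate the overall level."""
    order = ["none", "low", "medium", "high"]
    level = "high"
    distractions = []
    for action in actions:
        lvl, distraction = ACTION_MAP.get(action, ("medium", None))
        if order.index(lvl) < order.index(level):
            level = lvl
        if distraction:
            distractions.append(distraction)
    return level, distractions
```

A viewer who is both facing the display and conversing would be assigned the intermediate level, consistent with the conversation example discussed below.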
- Other input that may be used to determine a person's level of engagement is audio data.
- Microphones associated with the game console 412 may pick up conversations or sounds from the audience.
- the audio data may be interpreted and determined to be responsive to (i.e., related to or directed at) the media presentation or nonresponsive to the media presentation.
- the audio data may be associated with a specific person (e.g., a person's voice).
- signal data from companion devices may be collected to generate audience data.
- the signal data may indicate, in greater detail than the image data, a type or identity of a distraction, as described below.
- the image data gathered through imaging device 418 may be analyzed to determine that individual 520 is reading a paper 522 and is therefore distracted from the content shown on display device 510 .
- Individual 536 is viewing tablet 538 while the content is being displayed through display device 510 .
- signal data may be analyzed to understand what the person is doing on the tablet. For example, the person could be surfing the Web, checking e-mail, checking a social network site, or performing some other task.
- the individual 536 could also be viewing secondary content that is related to the primary content 411 shown on display device 510 . What the person is doing on tablet 538 may cause a different level of engagement to be associated with the person.
- the level of engagement mapped to the person's action (i.e., looking at the tablet) may vary accordingly. For example, if the individual 536 is viewing related secondary content, the individual 536 's action of looking at the tablet may be mapped to a somewhat higher level of engagement.
- Individuals 532 and 534 are carrying on a conversation with each other but are not otherwise distracted because they are seated in front of the display device 510 . If, however, audio input from individuals 532 and 534 indicates that they are speaking with each other while seated in front of the display device 510 , their actions may be mapped to an intermediate level of engagement. Only individual 530 is viewing the primary content 511 and is not otherwise distracted. Accordingly, a high level of engagement may be associated with individual 530 and/or the media content being displayed.
- Determined distractions and levels of engagement of a person may additionally be associated with particular portions of image data, and thus, corresponding portions of media content.
- audience data may be stored locally on the game console 412 or communicated to a server for remote storage and distribution.
- the audience data may be stored as a viewing record for the media content.
- the audience data may be stored in a user profile associated with the person for whom a level of engagement or distractions was determined.
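A viewing record of the kind described above can be sketched as a small serializable structure associating each audience member's engagement with the content segment on display when the image data was captured. The field names are assumptions for illustration.

```python
# Minimal sketch of an audience-data viewing record. Field names are
# illustrative assumptions; the record could be stored locally or
# serialized and sent to a server for remote storage.

import json

def make_viewing_record(title, segment_start_s, observations):
    """observations: list of (member_id, engagement_level, distractions)."""
    return {
        "title": title,
        "segment_start_s": segment_start_s,
        "audience": [
            {"member": m, "engagement": e, "distractions": d}
            for m, e, d in observations
        ],
    }

record = make_viewing_record(
    "basketball game", 3600,
    [("individual_530", "high", []),
     ("individual_532", "medium", ["interacted with other persons"])],
)
serialized = json.dumps(record)  # e.g., for communication to a server
```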
- Turning to FIG. 6, a person's reaction to media content is classified and stored in association with the viewing data.
- the entertainment setup shown in FIG. 6 is the same as that shown in FIG. 4 .
- the primary content 611 is different.
- the primary content is a car commercial indicating a sale.
- the persons' responses to the car commercial may be measured through one or more methods and stored as audience data.
- a person's response may be gleaned from the images and/or audio originating from the person (e.g., the person's voice).
- Exemplary responses include smiling, frowning, wide eyes, glaring, yelling, speaking softly, laughing, crying, and the like.
- Other responses may include a change to a biometric reading, such as an increased or a decreased heart rate, facial flushing, or pupil dilation.
- Still other responses may include movement, or a lack thereof, for example, pacing, tapping, standing, sitting, darting one's eyes, fixing one's eyes, and the like.
- Each response may be mapped to one or more predetermined emotions, such as happiness, sadness, excitement, boredom, depression, calmness, fear, anger, confusion, disgust, and the like.
- mapping a person's response to an emotion may additionally be based on the length of time the person held the response or the pronouncement of the person's response.
- a person's response may be mapped to more than one emotion. For example, a single response (e.g., smiling and jumping up and down) may be mapped to multiple emotions, such as happiness and excitement.
- the predetermined categories of emotions may include tiers or spectrums of emotions. Baseline emotions of a person may also be taken into account when mapping a person's response to an emotion.
- a detected “happy” emotion for the person may be elevated to a higher “tier” of happiness, such as “elation.”
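The baseline-adjusted emotion mapping can be sketched as a lookup plus a tier elevation: the same expression from a normally subdued person is promoted a tier. The response names and tier ladder below are illustrative assumptions.

```python
# Sketch of mapping an observed response to an emotion, adjusted by the
# person's baseline expressiveness. Response names, baselines, and the
# tier ladder are illustrative assumptions.

RESPONSE_TO_EMOTION = {
    "smiling": "happiness",
    "frowning": "sadness",
    "wide_eyes": "excitement",
    "glaring": "anger",
}

HAPPINESS_TIERS = ["contentment", "happiness", "elation"]

def map_emotion(response, baseline="neutral"):
    emotion = RESPONSE_TO_EMOTION.get(response, "neutral")
    # Elevate a "happy" reading a tier when the expression is rare
    # for this person (a subdued baseline).
    if emotion == "happiness" and baseline == "subdued":
        idx = HAPPINESS_TIERS.index(emotion)
        emotion = HAPPINESS_TIERS[min(idx + 1, len(HAPPINESS_TIERS) - 1)]
    return emotion
```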
- the baseline may serve to inform determinations about the attentiveness of the person toward a particular media title.
- Responsiveness may be related to a determined level of engagement of a person, as described above. Thus, responsiveness may be determined based on the direction the person is looking when a title is being displayed. For example, a person that is turned away from the display device is unlikely to be reacting to content being displayed on the display device. Responsiveness may similarly be determined based on the number and type of distractions located within the viewing area of the display device. Similarly, responsiveness may be based on an extent to which a person is interacting with or responding to distractions.
- responsiveness may be determined based on whether a person is actively or has recently changed a media title that is being displayed (i.e., a person is more likely to be viewing content he or she just selected to view). It will be understood that responsiveness can be determined in any number of ways by utilizing machine-learning algorithms, and the examples provided herein are meant only to be illustrative.
- the image data may be utilized to determine responses of individual 622 and individual 620 to the primary content 611 .
- Individual 622 may be determined to have multiple responses to the car commercial, each of which may be mapped to the same or multiple emotions. For example, the individual 622 may be determined to be smiling, laughing, to be blinking normally, to be sitting, and the like. All of these reactions, alone and/or in combination, may lead to a determination that the individual 622 is pleased and happy. This is assumed to be a reaction to the primary content 611 and recorded in association with the display event.
- individual 620 is not smiling, has lowered eyebrows, and is crossing his arms, indicating that the individual 620 may be angry or not pleased with the car commercial.
- Turning to FIG. 7, the linear ad path 700 includes a preliminary ad 710 , a subsequent ad 720 having incentive A, and a subsequent ad 730 having incentive B. Incentives A and B are different.
- the preliminary ad 710 may be shown as part of a media presentation, such as product placement in a television show or an ad shown during a commercial break.
- the preliminary ad 710 is associated with one or more reaction criteria that are used to determine whether an audience member should be shown either of the subsequent ads.
- the preliminary advertisement 710 may require that the user pay full attention to the preliminary ad to activate either subsequent ad.
- attentiveness as the reaction criterion may be used when the ads build on each other to tell a story and require knowledge of the previous ad in the path to understand the new ad.
- the reaction criteria specify that a positive response is detected or received from the audience member. An explicit response is received, while an implicit response is detected.
- the subsequent ad 720 is activated for presentation to the user upon satisfaction of presentation triggers associated with the subsequent ad 720 .
- the subsequent ad 720 may be communicated to a device associated with the audience member.
- the audience member may be associated through a user account with a personal computer, tablet, and smartphone.
- the subsequent ad 720 may be communicated to one or more devices that are capable of detecting context associated with the presentation trigger, including the device on which the preliminary ad 710 was viewed. For example, if the presentation trigger requires the user to be in a geographic area, then the subsequent ad 720 would only be communicated to devices that are location aware. On the other hand, if the presentation trigger associated with subsequent ad 720 only requires that it be shown to the user at a particular time, then it could also be sent to the personal computer, game console, or other nonlocation-aware entertainment devices.
- the subsequent ad 720 may be shown to the user multiple times across multiple devices.
- the presentation and response, if any, may be communicated to a centralized ad tracking service.
- the user's response, or lack of response, to the subsequent ad 720 may cause the user to be shifted down the ad path 700 to subsequent ad 730 having incentive B.
- the failure of the user to respond to subsequent ad 720 with incentive A causes the user to be shifted down the path 700 to subsequent ad 730 with incentive B, which is higher than incentive A.
- incentive A and incentive B are not of a significantly different value, but are just different. For example, incentive A could be for the user to get a free soft drink, while incentive B is for the user to get a free cup of coffee.
- the user's positive response to subsequent ad 720 causes the user to be shifted to subsequent ad 730 .
- a related product could be advertised through subsequent ad 730 .
- incentives A and B would be directed toward different products associated with their corresponding advertisements. For example, if the user purchased movie tickets through subsequent ad 720 , then subsequent ad 730 , offering a coupon for popcorn, could be displayed upon the user arriving at the theater.
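The linear ad path of FIG. 7 can be sketched as a small list-based state machine: a purchase deactivates the path, while any other outcome (including no response) shifts the user down to the next subsequent ad. The names and incentives are taken from the examples above; the `advance` rule itself is an illustrative assumption.

```python
# Sketch of the linear ad path of FIG. 7. Names and incentives follow
# the examples in the text; the advancement rule is an assumption.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Ad:
    name: str
    incentive: Optional[str] = None

PATH = [Ad("preliminary_710"),
        Ad("subsequent_720", "free soft drink"),     # incentive A
        Ad("subsequent_730", "free cup of coffee")]  # incentive B

def advance(position, response):
    """'purchase' deactivates the path (returns None); any other response,
    including no response, shifts the user down the path."""
    if response == "purchase":
        return None
    return min(position + 1, len(PATH) - 1)
```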
- Turning to FIG. 8, a nonlinear ad path 800 is shown, in accordance with an embodiment of the present invention.
- the ad path 800 starts with a preliminary user response 810 to content, such as a preliminary ad. Different subsequent ads within the ad path 800 may be activated in response to the preliminary user response 810 .
- the preliminary user response 810 may be explicit or implicit.
- the preliminary user response 810 may be in response to an advertisement, but could also be in response to nonadvertising content.
- the user could respond positively to a baseball game (nonadvertising content) between two teams.
- the user could be placed into an ad path designed to incentivize the user to purchase goods or services associated with the baseball team.
- subsequent ad A 812 having presentation trigger A could be related to the user purchasing tickets for a baseball game.
- the user's response to subsequent ad A 812 is evaluated.
- the path may be deactivated at step 816 .
- the user's purchase record may be updated indicating that the user purchased baseball tickets.
- ad 818 is associated with a different trigger and could also be associated with different incentives.
- the response to subsequent ad B 818 is monitored at decision point 820 .
- the ad B 818 may remain active for subsequent presentation when trigger B is satisfied the next time. If a positive response is noted at decision point 820 , the user could be moved to a different part in the path associated with subsequent ad E 822 . Notice that subsequent ad B 818 and subsequent ad E 822 are both associated with trigger B.
- Ads within an advertising path and across different advertising paths may use the same triggers.
- trigger B could be associated with a time frame before an upcoming baseball home stand.
- the subsequent ad E 822 could be associated with a different home stand or different games than those that received the positive response to subsequent ad B 818 .
- the various points along the path could loop or be deactivated in response to a positive response or purchase.
- the part of the path showing subsequent ad C 824 and subsequent ad F 826 is related to a complementary product, such as a baseball jersey or cap.
- the complementary products may be part of a related subsequent path with different triggers and incentives.
- the trigger C that is part of subsequent ad C 824 may be related to geographic proximity with a retail outlet where baseball caps are sold.
- the user could be associated with multiple subsequent ads within the ad path 800 at the same time when appropriate. For example, the user could be associated with a subsequent ad offering baseball tickets at the same time she is associated with a subsequent ad selling baseball caps. Similarly, the user could be associated with multiple subsequent ads offering the same thing but with different triggers. For example, the triggers could specify different geographic locations associated with different retail stores and different incentives offered by those respective stores.
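The nonlinear ad path of FIG. 8 can be sketched as a small state graph in which each node is a subsequent ad with a trigger and the user's response selects the next active node. The node names mirror FIG. 8; the edge assignments are illustrative assumptions.

```python
# Sketch of the nonlinear ad path of FIG. 8 as a state graph. Node names
# mirror the figure; edge assignments are illustrative assumptions.

AD_GRAPH = {
    "ad_A_812": {"trigger": "trigger_A",
                 "positive": "deactivate_816",   # e.g., tickets purchased
                 "negative": "ad_B_818"},
    "ad_B_818": {"trigger": "trigger_B",
                 "positive": "ad_E_822",
                 "negative": "ad_B_818"},        # stays active for next time
    "ad_E_822": {"trigger": "trigger_B",
                 "positive": "deactivate",
                 "negative": "ad_E_822"},
}

def step(current, response):
    """Return the next active node given the user's response."""
    node = AD_GRAPH[current]
    return node["positive"] if response == "positive" else node["negative"]
```

Note that `ad_B_818` and `ad_E_822` share `trigger_B`, reflecting that ads within a path may use the same triggers.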
- Turning to FIG. 9, a method 900 of providing linked advertisements is shown, in accordance with an embodiment of the present invention.
- the method may be performed on a game console or other entertainment device that is connected to an imaging device with a view of an audience area proximate to a display device.
- image data that depicts an audience for an ongoing media presentation is received.
- the image data may be in the form of a depth cloud generated by a depth camera, a video stream, still images, skeletal tracking information, or other information derived from the image data.
- the ongoing media presentation may be a movie, game, television show, an advertisement, or the like. Ads shown during breaks in a television show may be considered part of the ongoing media presentation.
- the audience may include one or more individuals within an audience area.
- the audience area includes the extent of the area from which the ongoing media presentation on the display device may be viewed.
- the individuals within the audience area may be described as audience members herein.
- an audience member is identified by analyzing the image data.
- the audience member is identified through facial recognition.
- the audience member may be associated with a user account that provides facial recognition authentication or login.
- the audience member's account may then be associated with one or more social networks.
- social networks are associated with a facial recognition login feature that allows the audience member to be associated with a social network.
- the audience member may be given an opportunity to explicitly associate his account with one or more social networks.
- the audience member may be a member of more social networks than are actually associated with the account. But embodiments of the present invention may work with whatever social networks the audience member has provided access to.
- the audience member may be asked to provide authentication information or permission to access the social network. This information may be requested through a setup overlay or screen. The setup may occur at a point separate from when the media presentation is ongoing, for example, when an entertainment device is set up.
- audience data is generated by analyzing the image data.
- Exemplary audience data has been described previously.
- the audience data may include a number of people that are present within the audience. For example, the audience data could indicate that five people are present within the audience area.
- the audience data may also associate audience members with demographic characteristics.
- the audience data may also indicate an audience member's level of attentiveness to the ongoing media presentation. Different audience members may be associated with a different level of attentiveness. In one embodiment, the attentiveness is measured using distractions detected within the image data. In other words, a member's interactions with objects other than the display may be interpreted as the member paying less than full attention to the ongoing media presentation. For example, if the audience member is interacting with a different media presentation (e.g., reading a book, playing a game) then less than full attentiveness is paid to the ongoing media presentation. Interactions with other audience members may indicate a low level of attentiveness. Two audience members having a conversation may be assigned less than a full attentiveness level. Similarly, an individual speaking on a phone may be assigned less than full attention.
- an individual's actions in relation to the ongoing media presentation may be analyzed to determine a level of attentiveness. For example, the user's gaze may be analyzed to determine whether the audience member is looking at the display.
- gaze detection may be used to determine whether the user is ignoring the overlay and looking at the ongoing media presentation, is focused on the overlay, or noticed the overlay only for a short period.
- attentiveness information could be assigned to different content shown on a single display.
- the audience data may also measure a user's reaction or response to the ongoing media presentation. As mentioned previously with reference to FIG. 6 , a user's response or reaction may be measured based on biometric data and facial expressions.
- the audience member is determined to have reacted positively to a preliminary advertisement for a product or service shown as part of the ongoing media presentation.
- the preliminary advertisement could be a commercial shown during a break in the primary content, including before presentation of the primary content begins or after it concludes.
- the preliminary advertisement could also be product placement within primary media content.
- the preliminary advertisement could be an overlay. The overlay could be shown concurrently with the primary content.
- the audience member's positive reaction is determined using the audience data generated previously at step 930 .
- reactions within the audience data are correlated to content within the ongoing media presentation. For example, a positive response observed at the same time a sports car appears within the media content may trigger a subsequent ad for the sports car.
- the sports car may be part of a product placement.
- each reaction within the audience data is associated with a time and can be used to associate the reaction with a particular point in the content. For example, if the presentation starts at noon and a reaction is observed within the audience data at 1:00 p.m., then the reaction may be associated with content shown one hour into the ongoing media presentation. Other ways to correlate a reaction with a point within the media presentation are possible.
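The time-based correlation above can be sketched directly: subtract the presentation's start time from the reaction's timestamp and look the resulting offset up in metadata that maps progress points to displayed content. The metadata entries below are illustrative assumptions.

```python
# Sketch of correlating a timestamped reaction with the content shown at
# that moment. Metadata entries (offsets and labels) are illustrative.

def correlate_reaction(presentation_start_s, reaction_time_s, metadata):
    """metadata: list of (start_offset_s, end_offset_s, content) tuples.
    Returns the content shown when the reaction was observed, or None."""
    offset = reaction_time_s - presentation_start_s
    for start, end, content in metadata:
        if start <= offset < end:
            return content
    return None

metadata = [(0, 3540, "primary content"),
            (3540, 3600, "sports car product placement")]
# Presentation starts at noon (43200 s); reaction observed near 1:00 p.m.
shown = correlate_reaction(43200, 43200 + 3590, metadata)
```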
- metadata is associated with the ongoing media presentation to identify content displayed at various progress points.
- One example of content is a preliminary advertisement.
- a preliminary advertisement may be a television commercial, product placement, an overlay, or the like.
- the preliminary advertisement is associated with a response or reaction threshold.
- a reaction within the audience data is correlated to the preliminary advertisement and determined to satisfy the reaction threshold, then a subsequent advertisement may be activated.
- a subsequent advertisement may be thought of as a follow-up to the preliminary advertisement.
- the preliminary advertisement and subsequent advertisement could be the same.
- the subsequent advertisement is associated with one or more presentation triggers.
- the subsequent ad is only displayed when the presentation trigger criteria are satisfied.
- the presentation criteria may be context based. For example, the subsequent advertisement may be shown on a mobile device associated with the audience member at a time and place that is conducive to purchasing an advertised product or service.
- the audience member is associated with an ad path that comprises at least one subsequent advertisement that has a presentation trigger that when satisfied causes a presentation of the subsequent advertisement. Examples of ad paths have been described previously.
- the user could be associated with a particular subsequent advertisement within the ad path that includes multiple subsequent advertisements.
- the strength of the user reaction is determined and used to determine which subsequent advertisement within the path should be activated.
- An activated subsequent advertisement is actively monitored for satisfaction of the presentation trigger associated with the subsequent advertisement.
- Dormant or inactive subsequent advertisements within the path are not monitored and are not triggered for display until activated.
- a user's response to a first subsequent advertisement, either positive or negative, could cause an additional subsequent advertisement to be activated and the initially active subsequent advertisement to be deactivated.
- the audience member is associated with a user account.
- the user account may be for an entertainment service, e-mail, a social network, or other service that is accessed through multiple devices.
- the subsequent advertisements may be presented through this service or through an application associated with this service. In this way, subsequent advertisements may be presented on one or more devices associated with the user. Accordingly, the ad path may be communicated to multiple devices associated with the user. Different subsequent advertisements could be activated on different devices. For example, a subsequent advertisement associated with a particular time and location could be active on the mobile device associated with the audience member, whereas a different subsequent advertisement could be activated on a game console or other mostly stationary device.
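The device-routing rule described above (and with reference to FIG. 7) can be sketched as a capability check: a location-based trigger is sent only to location-aware devices, while a time-only trigger can also go to stationary devices. The device names and capability flags are illustrative assumptions.

```python
# Sketch of routing activated subsequent ads to devices capable of
# detecting their triggers. Device names and capability flags are
# illustrative assumptions.

DEVICES = {
    "smartphone":   {"location_aware": True},
    "tablet":       {"location_aware": True},
    "game_console": {"location_aware": False},
}

def route_ad(ad, devices):
    """Return the device names capable of detecting the ad's trigger."""
    if ad["trigger_requires_location"]:
        return [d for d, caps in devices.items() if caps["location_aware"]]
    return list(devices)

geo_ad  = {"name": "coffee coupon", "trigger_requires_location": True}
time_ad = {"name": "evening promo", "trigger_requires_location": False}
```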
- in one embodiment, the subsequent advertisement on the mobile device is obtrusive.
- the subsequent advertisement may be associated with a vibration, noise, or other indication to get the user's attention.
- Turning to FIG. 10, a method 1000 of assigning an audience member to an ad path is shown, in accordance with an embodiment of the present invention.
- image data that depicts an audience for the ongoing media presentation is received.
- an audience member is identified by analyzing the image data. For example, the audience member could be identified using voice recognition or facial recognition.
- audience data is generated by analyzing the image data. The audience data may classify the reactions of individuals present within the audience. The individuals may be alternatively described as users or audience members.
- a strength of the audience member's reaction to content shown as part of the ongoing media presentation is determined from the audience data.
- the strength of the reaction is intended to capture a user's enthusiasm for content, such as a preliminary advertisement.
- the strength of the reaction may be determined based on an audience member profile that tracks a range of responses made by the audience member. For example, a raised eyebrow may be an extremely strong positive response from a first audience member and only a mildly positive response, or even a skeptical response, from a different audience member. Some individuals are more expressive than others, and the user account and profile are able to adjust for these differences by comparing the reactions of an individual over time. Feedback from an advertising path may be provided to further refine the responses of an individual. For example, if a particular reaction is initially interpreted as positive, but the user never responds positively to the advertisements, then the expression may be reclassified.
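Scoring a reaction against the member's own tracked range of responses can be sketched as a z-score over the profile history, so that a raised eyebrow from a reserved viewer registers as strong while the same intensity from an expressive viewer does not. The intensity scale and thresholds are illustrative assumptions.

```python
# Sketch of normalizing reaction strength against a per-member profile.
# Intensity values (0..1) and the z-score thresholds are assumptions.

def reaction_strength(member_history, observed_intensity):
    """member_history: past response intensities for this audience member."""
    n = len(member_history)
    mean = sum(member_history) / n
    var = sum((x - mean) ** 2 for x in member_history) / n
    std = var ** 0.5 or 1e-9  # guard against a flat history
    z = (observed_intensity - mean) / std
    if z > 1.0:
        return "strong"
    if z > 0.0:
        return "mild"
    return "weak"

expressive = [0.6, 0.7, 0.8, 0.7]   # frequently animated viewer
reserved   = [0.1, 0.15, 0.1, 0.2]  # rarely reacts
```

The same observed intensity yields different classifications for the two profiles, which is the adjustment for expressiveness the text describes.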
- the audience member is associated with an ad path that comprises multiple subsequent advertisements that each has a presentation trigger that, when satisfied, causes a presentation of an associated subsequent advertisement.
- An ad path having multiple subsequent advertisements has been described previously with reference to FIG. 8 .
- the ad path, including the subsequent advertisements and associated presentation triggers are communicated to multiple devices associated with the audience member.
- For example, the ad path could be communicated to the audience member's smartphone and tablet.
- the communications between devices could be coordinated through a central advertising component associated with an advertising service or entertainment service.
- the centralized ad service may generate different ad paths and communicate them to different devices.
- the centralized ad service may also download initial triggers to entertainment devices that cause the user to be associated with a certain ad path.
- audience data is communicated from an entertainment device to a centralized ad component that analyzes the audience data and associates the user with an ad path when various criteria are satisfied.
- the criteria are communicated to an entertainment device that makes the comparison.
- Method 1100 may be performed by a mobile device such as a smartphone or a tablet.
- the mobile device is location-aware through GPS technology, or other location technology.
- an ad path that comprises a subsequent advertisement that is related to content to which the user previously responded positively is received on a mobile device associated with a user.
- the subsequent advertisement could have been shown on a different device such as a television or on the mobile device.
- a presentation trigger associated with the subsequent advertisement is determined to be satisfied by the mobile device's present context.
- the context could include a time, location, and user activity.
- Examples of user activities include driving, riding a bus, and riding a train.
- the user context for time and location could be satisfied, but the user may only be shown the advertisement if driving. This may make sense because the user may not be able to get off a train or a bus in time to respond to the suggestion made within the subsequent advertisement.
- the subsequent advertisement could provide a coupon for coffee as the user approaches a coffee shop.
- If the user was determined to be on a bus, for example, by observing a pattern of starting and stopping at known bus stops, then the user may not be shown the advertisement unless a bus stop is near the coffee shop. All of this would be taken into account by the presentation triggers.
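The trigger evaluation described in this example can be sketched as a conjunction of context checks over time, location, and activity. The field names, coordinate scheme, and thresholds below are illustrative assumptions.

```python
# Sketch of evaluating a presentation trigger against a mobile device's
# present context (time, location, activity), as in the coffee-shop
# example. Field names and thresholds are illustrative assumptions.

def trigger_satisfied(trigger, context):
    """All trigger criteria must hold for the ad to be presented."""
    if not (trigger["start_hour"] <= context["hour"] < trigger["end_hour"]):
        return False
    dx = context["x"] - trigger["x"]
    dy = context["y"] - trigger["y"]
    if (dx * dx + dy * dy) ** 0.5 > trigger["radius"]:
        return False  # not near the advertised location
    return context["activity"] in trigger["allowed_activities"]

coffee_trigger = {"start_hour": 6, "end_hour": 10,
                  "x": 0.0, "y": 0.0, "radius": 0.5,
                  "allowed_activities": {"driving", "walking"}}
ctx = {"hour": 8, "x": 0.2, "y": 0.1, "activity": "driving"}
```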
- the subsequent advertisement is presented to the user.
- the user's response to the subsequent advertisement is monitored.
- a positive or negative response may be communicated to a central ad component that handles billing and may provide additional instructions regarding the next steps with regard to the ad path.
- a different subsequent advertisement within the ad path is activated upon detecting a positive or a negative response.
- the present subsequent advertisement may be simultaneously deactivated.
Description
- This application is a continuation of U.S. patent application Ser. No. 13/865,673, filed on Apr. 18, 2013, titled LINKED ADVERTISEMENTS, which application is herein incorporated by reference.
- Advertisements are shown before, during, and after media presentations. Advertisements are even included within media presentations through product placement. The advertisements shown with the media are selected based on anticipated audience demographics and interests of the anticipated audience. The advertisements are shown regardless of whether it is a good time for the audience member to act in response to the advertisement.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in isolation as an aid in determining the scope of the claimed subject matter.
- Embodiments of the present invention generate linked content. Linked contents may include a preliminary content and one or more subsequent contents. In one embodiment, the viewer is only shown the subsequent content upon detecting a positive reaction to the preliminary content. The subsequent content is associated with presentation triggers that specify a context in which the subsequent content should be presented. The context may be defined by a time of day, location, user activity, and/or other parameters. For example, the context could be the user driving (activity context) near a coffee shop (location context) in the morning (time context).
- The preliminary content and subsequent content may be shown on different devices. For example, the preliminary content may be shown on a television as part of a media presentation and the subsequent content could be shown on a mobile device, such as a smartphone or a tablet. Showing the subsequent content on a location-aware (e.g., GPS enabled) mobile device allows the presentation trigger to include a location.
- The reaction to the preliminary content may be explicit or implicit. An explicit reaction could be the user making an affirmative gesture indicating that he likes the preliminary content. The implicit reaction may be derived through an analysis of image data. The image data may be generated by a depth camera, video camera, or other imaging device. In one embodiment, the user's facial expressions are analyzed to determine a reaction to primary content. Apart from, or in combination with the facial expressions, biometric readings may be derived from the imaging data.
- Embodiments of the invention are described in detail below with reference to the attached drawing figures, wherein:
- FIG. 1 is a block diagram of an exemplary computing environment suitable for implementing embodiments of the invention;
- FIG. 2 is a diagram of an online entertainment environment, in accordance with an embodiment of the present invention;
- FIG. 3 is a diagram of a remote entertainment computing environment, in accordance with an embodiment of the present invention;
- FIG. 4 is a diagram of an exemplary audience area captured using a depth camera, in accordance with an embodiment of the present invention;
- FIG. 5 is a diagram of an exemplary audience area captured using a depth camera, in accordance with an embodiment of the present invention;
- FIG. 6 is a diagram of an exemplary audience area captured using a depth camera, in accordance with an embodiment of the present invention;
- FIG. 7 is a diagram showing an ad path, in accordance with an embodiment of the present invention;
- FIG. 8 is a diagram showing an ad path, in accordance with an embodiment of the present invention;
- FIG. 9 is a flow chart showing a method of providing linked advertisements, in accordance with an embodiment of the present invention;
- FIG. 10 is a flow chart showing a method of assigning an audience member to an ad path, in accordance with an embodiment of the present invention; and
- FIG. 11 is a flow chart showing a method of managing an ad path, in accordance with an embodiment of the present invention.
- The subject matter of embodiments of the invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
- Embodiments of the present invention generate linked advertisements. Linked advertisements may include a preliminary advertisement and one or more subsequent advertisements. The preliminary advertisement and subsequent advertisement may be separated by time, location, and device. The preliminary advertisement and subsequent advertisements display related products or services and may operate together as a unified ad campaign.
- In one embodiment, the viewer is only shown the subsequent advertisement upon detecting a positive reaction to the preliminary advertisement. The subsequent advertisement is associated with presentation triggers that specify a context in which the subsequent advertisement should be presented. The context may be defined by a time of day, location, user activity, and/or other parameters. For example, the presentation trigger may specify a context in which the user is able to purchase an advertised good or service. For example, the context could be the user driving (activity context) near a coffee shop (location context) in the morning (time context).
- The preliminary advertisement and subsequent advertisement may be shown on different devices. For example, the preliminary advertisement may be shown on a television as part of a media presentation (e.g., a separate ad, product placement) and the subsequent advertisement could be shown on a mobile device, such as a smartphone or a tablet. Showing the subsequent advertisement on a location-aware (e.g., GPS enabled) mobile device allows the presentation trigger to include a location. Other contextual parameters may include a time of day and the user's current activity. For example, the presentation trigger could specify that the subsequent advertisement is only shown during business hours for the retail outlet. In another example, the subsequent advertisement is only shown at a time when the user is likely to purchase a product or service. For example, a user may be likely to purchase food at a restaurant during lunch time or dinner time.
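The trigger-matching logic described above can be sketched as a simple predicate. This is an illustrative sketch only: the field names (`activity`, `nearby_places`, `time_window`) and the trigger structure are assumptions for demonstration, not part of the specification.

```python
from datetime import time

# Hypothetical presentation trigger for the coffee-shop example above.
TRIGGER = {
    "activity": "driving",
    "near_category": "coffee shop",
    "time_window": (time(6, 0), time(11, 0)),  # morning
}

def trigger_satisfied(context, trigger):
    """Return True when every contextual parameter in the trigger matches."""
    if context["activity"] != trigger["activity"]:
        return False
    if trigger["near_category"] not in context["nearby_places"]:
        return False
    start, end = trigger["time_window"]
    return start <= context["local_time"] <= end

# Context as a location-aware mobile device might report it.
context = {
    "activity": "driving",
    "nearby_places": {"coffee shop", "gas station"},
    "local_time": time(8, 30),
}
print(trigger_satisfied(context, TRIGGER))  # True
```

In practice each contextual parameter (business hours, meal times) would be an optional clause of the trigger, evaluated only when the advertiser specifies it.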
- The reaction to the preliminary advertisement may be explicit or implicit. An explicit reaction could be the user making an affirmative gesture indicating that he likes the preliminary advertisement. The user could also explicitly request more information or otherwise express an interest in the advertised product through a companion device, such as a smartphone or tablet. In one embodiment, a companion application is provided to allow the user to explicitly indicate he likes an advertised product or service. The subsequent advertisement may include a coupon or other incentive for the user to try the product or service. In this way, the user could be encouraged to express his interest in an advertised product or service through the companion application. For users that do not have a companion device, or do not want to use a companion application, an explicit indication of interest may be made through a gesture, game pad, keyboard, controller, or voice command picked up by an entertainment device facilitating the linked advertising. The entertainment device could be a television, game console, cable box, or other similar device that is able to receive input from the audience and correlate it with the content being shown on the display device.
- In one embodiment, the user's reaction to an advertised product (e.g., a separate ad, product placement) or service is implicit. The implicit reaction may be derived through an analysis of image data. A depth camera, video camera, or other imaging device may generate the image data. In one embodiment, the user's facial expressions are analyzed to determine a reaction to an advertised product or service. Apart from, or in combination with, the facial expressions, biometric readings may be derived from the imaging data. For example, facial flushing and heart rate may be determined from the imaging data and used to classify the reaction as positive, negative, or indifferent. In one embodiment, the audience member's facial expressions and biometric changes are compared against a baseline for the audience member to determine whether the reaction is positive as well as a strength of the reaction.
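The baseline comparison just described can be sketched as a small classifier. The field names, the noise threshold, and the way strength is accumulated are all illustrative assumptions; a real system would use trained models over the imaging data rather than hand-set rules.

```python
def classify_reaction(sample, baseline, noise=0.05):
    """Compare biometric readings against an audience member's baseline
    and return (label, strength). Thresholds are illustrative guesses."""
    # Normalized deviations from the member's resting baseline.
    hr_delta = (sample["heart_rate"] - baseline["heart_rate"]) / baseline["heart_rate"]
    flush_delta = sample["facial_flushing"] - baseline["facial_flushing"]

    strength = abs(hr_delta) + abs(flush_delta)
    if strength < noise:
        return "indifferent", 0.0
    # A facial-expression score (0..1, smiling high) decides the sign.
    label = "positive" if sample["smile_score"] >= 0.5 else "negative"
    return label, strength

label, strength = classify_reaction(
    {"heart_rate": 80, "facial_flushing": 0.25, "smile_score": 0.8},
    {"heart_rate": 70, "facial_flushing": 0.10},
)
print(label)  # positive
```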
- The subsequent advertisements may be part of an advertising path that includes a series of advertisements with different presentation triggers, content, and incentives. In one embodiment, the strength of the user's reaction to the preliminary advertisement is used to activate different advertisements within the path. The presentation trigger for an active subsequent advertisement is monitored, whereas presentation triggers for inactive advertisements are not. Ads within the path may be activated and deactivated in response to additional user actions or rules.
- In one embodiment, a strong positive reaction to the preliminary ad activates a subsequent advertisement having a comparatively lower incentive. For example, the subsequent advertisement may include a 50-cent discount on a sandwich. A mild or weak positive reaction to the preliminary advertisement may activate a subsequent advertisement having a higher incentive. For example, the subsequent advertisement could have a two-dollar discount on a sandwich.
- The user's prior purchase history may also be used to determine which ad(s) in an ad path to activate. For example, if a user repeatedly ignores an advertisement with a lower incentive, he may be moved to an advertisement with a higher incentive. Similarly, if a user gives a strong positive response to a preliminary advertisement, but is known to regularly purchase products associated with the advertisement, he may be associated with a subsequent advertisement having a lower incentive. Alternatively, the user could be associated with a subsequent advertisement that reminds the user of a consumer club he is in, such as a sandwich club. This subsequent advertisement could remind the user that he needs to purchase two more sandwiches before he earns a free sandwich.
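The activation rules of the last two paragraphs can be paraphrased as a small decision function. The reaction-strength threshold (0.7) and the ignore limit (3) are invented for illustration; the specification does not give concrete values.

```python
def activate_ad(reaction_strength, regular_buyer, times_ignored):
    """Choose which subsequent ad in the path to activate.

    Strong reactions and regular buyers get the smaller incentive
    (or a loyalty reminder); weak reactions or repeated ignores
    escalate to the larger incentive.
    """
    if times_ignored >= 3:
        return "high_incentive"      # e.g., the two-dollar discount
    if regular_buyer:
        return "loyalty_reminder"    # e.g., sandwich-club progress
    if reaction_strength >= 0.7:
        return "low_incentive"       # e.g., the 50-cent discount
    return "high_incentive"

print(activate_ad(0.9, regular_buyer=False, times_ignored=0))  # low_incentive
```

Only the returned ad's presentation trigger would then be monitored; the other ads in the path stay inactive until the rules move the user between them.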
- Embodiments of the present invention use audience data to select an appropriate ad path from one of several ad paths available. The audience data may be derived from image data generated by an imaging device, such as a video camera, that has a view of the audience area. Automated image analysis may be used to generate useful audience data that is used to select the ad path.
- The audience data derived from the image data includes the number of people present in the audience, the engagement level of each person, the personal characteristics of those individuals, and their responses to the media content. Image data may be analyzed to determine how many people are present in the audience and the characteristics of those people, and different levels of engagement may then be assigned to individual audience members.
- Audience data includes a level of engagement or attentiveness. A person's attentiveness may be classified into one or more categories or levels. The categories may range from not paying attention to full attention. A person that is not looking at the television and is in a conversation with somebody else, either in the room or on the phone, may be classified as not paying attention or fully distracted. On the other hand, somebody in the room that is not looking at the TV, but is not otherwise obviously distracted, may have a medium level of attentiveness. Someone that is looking directly at the television without an apparent distraction may be classified as fully attentive. A machine-learning image classifier may assign the levels of attentiveness by analyzing image data.
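The categories above reduce to a small mapping from observed cues to a level. A deployed system would assign these levels with a machine-learning image classifier; the rule stub below only restates the category boundaries, and the cue names are assumptions.

```python
def attentiveness_level(looking_at_screen, in_conversation):
    """Map observed cues to the attentiveness categories described above."""
    if in_conversation and not looking_at_screen:
        return "not_paying_attention"   # fully distracted
    if looking_at_screen and not in_conversation:
        return "fully_attentive"        # looking directly, no distraction
    return "medium"                     # not looking, but not obviously distracted

print(attentiveness_level(looking_at_screen=True, in_conversation=False))
# fully_attentive
```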
- Audience data may include a person's reaction to a media content, such as a preliminary advertisement. The person's reaction may be measured by studying biometrics gleaned from the imaging data. For example, heartbeat and facial flushing may be detected in the image data. Similarly, pupil dilation and other facial expressions may be associated with different reactions. All of these biometric characteristics may be interpreted by a classifier to determine whether the person likes or dislikes a media content.
- The different audience data may be used to determine when reaction criteria associated with a preliminary advertisement are satisfied. For example, a criterion may not be satisfied when a person is present but shows a low level of attentiveness. An advertiser may specify that an ad path is activated only when one or more of the individuals present are fully attentive.
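A reaction-criteria check like the one above can be sketched as a predicate over the audience data. The criteria structure (a minimum count of fully attentive viewers) is an illustrative assumption; advertisers could specify other clauses in the same style.

```python
def criteria_satisfied(audience, criteria):
    """Check an advertiser's reaction criteria against audience data.

    `audience` is a list of per-member records produced by the
    image-analysis stage; `criteria` requires a minimum number of
    fully attentive viewers.
    """
    attentive = [m for m in audience if m["attentiveness"] == "fully_attentive"]
    return len(attentive) >= criteria["min_fully_attentive"]

audience = [
    {"attentiveness": "fully_attentive"},
    {"attentiveness": "not_paying_attention"},
]
print(criteria_satisfied(audience, {"min_fully_attentive": 1}))  # True
```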
- A person's reaction to a primary ad or other media content may be used to determine whether a subsequent ad is activated. For example, a person classified as having a negative reaction to a product placement within a movie may not be associated with an ad path for the product advertised through product placement. Alternatively, a person that responds positively to a primary ad may be associated with an ad path for a related product or service.
- In addition to determining whether to activate an ad path based on engagement levels, personal characteristics of audience members may also be considered when selecting an ad path. The personal characteristics of the audience members include demographic data that may be discerned from image classification or from associating the person with a known personal account. For example, an entertainment company may require that the person submit a name, age, address, and other demographic information to maintain a personal account. The personal account may be associated with a facial recognition program that is used to authenticate the person. Regardless of whether the entertainment company is providing the primary ad, the facial recognition record associated with the personal account could be used to identify the person in the audience associated with the account. In some situations, all of the audience members may be associated with an account that allows precise demographic information to be associated with each audience member. Account information may be used to associate multiple devices with an audience member.
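The account lookup described above amounts to mapping a facial-recognition identifier to stored demographics and linked devices. All record contents and the key format here are invented for illustration; a real service would query its account store rather than an in-memory dictionary.

```python
# Hypothetical account records keyed by a facial-recognition identifier.
ACCOUNTS = {
    "face-7f3a": {
        "name": "A. Viewer",
        "age": 34,
        "devices": ["tv-living-room", "tablet-01"],  # linked devices
    },
}

def identify_member(face_id):
    """Map a recognized face to account demographics and linked devices.

    Returns None when the viewer has no account, in which case only
    demographics discerned from image classification would be available.
    """
    return ACCOUNTS.get(face_id)

member = identify_member("face-7f3a")
print(member["devices"])  # ['tv-living-room', 'tablet-01']
```

The linked-device list is what lets a subsequent advertisement follow the viewer from the television to a mobile device.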
- Having briefly described an overview of embodiments of the invention, an exemplary operating environment suitable for use in implementing embodiments of the invention is described below.
- Referring to the drawings in general, and initially to
FIG. 1 in particular, an exemplary operating environment for implementing embodiments of the invention is shown and designated generally as computing device 100. Computing device 100 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated. - The invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Embodiments of the invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc. Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
- With continued reference to
FIG. 1, computing device 100 includes a bus 110 that directly or indirectly couples the following devices: memory 112, one or more processors 114, one or more presentation components 116, input/output (I/O) ports 118, I/O components 120, and an illustrative power supply 122. Bus 110 represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks of FIG. 1 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be an I/O component 120. Also, processors have memory. The inventors hereof recognize that such is the nature of the art, and reiterate that the diagram of FIG. 1 is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the invention. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “handheld device,” etc., as all are contemplated within the scope of FIG. 1 and refer to “computer” or “computing device.” -
Computing device 100 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 100 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. - Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media does not comprise a propagated data signal.
- Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
-
Memory 112 includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory 112 may be removable, nonremovable, or a combination thereof. Exemplary memory includes solid-state memory, hard drives, optical-disc drives, etc. Computing device 100 includes one or more processors 114 that read data from various entities such as bus 110, memory 112 or I/O components 120. Presentation component(s) 116 present data indications to a person or other device. Exemplary presentation components 116 include a display device, speaker, printing component, vibrating component, etc. I/O ports 118 allow computing device 100 to be logically coupled to other devices including I/O components 120, some of which may be built in. Illustrative I/O components 120 include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. - Turning now to
FIG. 2, an online entertainment environment 200 is shown, in accordance with an embodiment of the present invention. The online entertainment environment 200 comprises various entertainment devices connected through a network 220 to an entertainment service 230. Exemplary entertainment devices include a game console 210, a tablet 212, a personal computer 214, a digital video recorder 217, a cable box 218, and a television 216. Use of other entertainment devices not depicted in FIG. 2, such as smart phones, is also possible. - The
game console 210 may have one or more game controllers communicatively coupled to it. In one embodiment, the tablet 212 may act as an input device for the game console 210 or the personal computer 214. In another embodiment, the tablet 212 is a stand-alone entertainment device. Network 220 may be a wide area network, such as the Internet. As can be seen, most devices shown in FIG. 2 could be directly connected to the network 220. The devices shown in FIG. 2 are able to communicate with each other through the network 220 and/or directly as indicated by the lines connecting the devices. - The controllers associated with
game console 210 include a game pad 211, a headset 236, an imaging device 213, and a tablet 212. Tablet 212 is shown coupled directly to the game console 210, but the connection could be indirect through the Internet or a subnet. In one embodiment, the entertainment service 230 helps make a connection between the tablet 212 and the game console 210. The tablet 212 is capable of generating numerous input streams and may also serve as a display output mechanism. In addition to being a primary display, the tablet 212 could provide supplemental information related to primary information shown on a primary display, such as television 216. The input streams generated by the tablet 212 include video and picture data, audio data, movement data, touch screen data, and keyboard input data. - The
headset 236 captures audio input from a player and the player's surroundings and may also act as an output device, if it is coupled with a headphone or other speaker. The headset 236 may facilitate voice control of the game console or other entertainment devices. A microphone (not shown) may be integrated into or connected to any of the entertainment devices to facilitate voice control. - The
imaging device 213 is coupled to game console 210. The imaging device 213 may be a video camera, a still camera, a depth camera, or a video camera capable of taking still or streaming images. In one embodiment, the imaging device 213 includes an infrared light and an infrared camera. The imaging device 213 may also include a microphone, speaker, and other sensors. In one embodiment, the imaging device 213 is a depth camera that generates three-dimensional image data. The three-dimensional image data may be a point cloud or depth cloud. The three-dimensional image data may associate individual pixels with both depth data and color data. For example, a pixel within the depth cloud may include red, green, and blue color data, and X, Y, and Z coordinates. Stereoscopic depth cameras are also possible. The imaging device 213 may have several image-gathering components. For example, the imaging device 213 may have multiple cameras. In other embodiments, the imaging device 213 may have multidirectional functionality. In this way, the imaging device 213 may be able to expand or narrow a viewing range or shift its viewing range from side to side and up and down. - The
game console 210 may have image-processing functionality that is capable of identifying objects within the depth cloud. For example, individual people may be identified along with characteristics of the individual people. In one embodiment, gestures made by the individual people may be distinguished and used to control games or media output by the game console 210. The game console 210 may use the image data, including depth cloud data, for facial recognition purposes to specifically identify individuals within an audience area. The facial recognition function may associate individuals with an account associated with a gaming service or media service, or be used for login security purposes, to specifically identify the individual. - In one embodiment, the
game console 210 uses microphone data and/or image data captured through imaging device 213 to identify content being displayed through television 216. For example, a microphone may pick up the audio data of a movie being generated by the cable box 218 and displayed on television 216. The audio data may be compared with a database of known audio data and the data identified using automatic content recognition techniques, for example. Content being displayed through the tablet 212 or the PC 214 may be identified in a similar manner. In this way, the game console 210 is able to determine what is presently being displayed to a person regardless of whether the game console 210 is the device generating and/or distributing the content for display. - The
game console 210 may include classification programs that analyze image and/or audio data to generate audience data. For example, the game console 210 may determine the number of people in the audience, audience member characteristics, levels of engagement, and audience response. Audio data may be compared with a content database to identify content being displayed. The audio data may be captured by a microphone coupled to the game console 210. In this way, the content displayed may be identified when it is output by an entertainment device other than the game console 210. Content output by the game console 210 could also be identified using the audio signals. - In another embodiment, the
game console 210 includes a local storage component. The local storage component may store user profiles for individual persons or groups of persons viewing and/or reacting to media content. Each user profile may be stored as a separate file, such as a cookie. The information stored in the user profiles may be updated automatically. Personal information, viewing histories, viewing selections, personal preferences, the number of times a person has viewed known media content, the portions of known media content the person has viewed, a person's responses to known media content, and a person's engagement levels in known media content may be stored in a user profile associated with a person. As described elsewhere, the person may be first identified before information is stored in a user profile associated with the person. In other embodiments, a person's characteristics may be first recognized and mapped to an existing user profile for a person with similar or the same characteristics. Demographic information may also be stored. Each item of information may be stored as a “viewing record” associated with a particular type of media content. As well, viewer personas, as described below, may be stored in a user profile. -
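A user profile of the kind just described can be sketched as a per-person record persisted to its own file (the text likens each profile to a cookie). All field names and values here are illustrative assumptions about what such a record might hold.

```python
import json

# An illustrative user-profile record; field names are assumptions.
profile = {
    "person_id": "viewer-123",
    "demographics": {"age_range": "25-34"},
    "viewing_records": [
        {"content_id": "show-42", "times_viewed": 3,
         "engagement": "fully_attentive", "response": "positive"},
    ],
}

def add_viewing_record(profile, record):
    """Append a viewing record as new reactions and viewings are observed."""
    profile["viewing_records"].append(record)

def save_profile(path, profile):
    """Persist the profile as a separate file, one per person."""
    with open(path, "w") as f:
        json.dump(profile, f)
```

Each new observation (a viewing, an engagement level, a response) becomes one more viewing record, which later stages can read back when selecting ad paths.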
Entertainment service 230 may comprise multiple computing devices communicatively coupled to each other. In one embodiment, the entertainment service is implemented using one or more server farms. The server farms may be spread out across various geographic regions including cities throughout the world. In this scenario, the entertainment devices may connect to the closest server farms. Embodiments of the present invention are not limited to this setup. The entertainment service 230 may provide primary content and secondary content. Primary content may include television shows, movies, and video games. Secondary content may include advertisements, social content, directors' information and the like. -
FIG. 2 also includes a cable box 218 and a DVR 217. Both of these devices are capable of receiving content through network 220. The content may be on-demand or broadcast as through a cable distribution network. Both the cable box 218 and DVR 217 have a direct connection with television 216. Both devices are capable of outputting content to the television 216 without passing through game console 210, but in one embodiment the cable box 218 and DVR 217 pass through the game console 210. As can be seen, game console 210 also has a direct connection to television 216. Television 216 may be a smart television that is capable of receiving entertainment content directly from entertainment service 230. As mentioned, the game console 210 may perform audio analysis to determine what media title is being output by the television 216 when the title originates with the cable box 218, DVR 217, or television 216. - Turning now to
FIG. 3, a distributed entertainment environment 300 is shown, in accordance with an embodiment of the present invention. The entertainment environment 300 includes entertainment device A 310, entertainment device B 312, entertainment device C 314, and entertainment device N 316 (hereafter entertainment devices 310-316). Entertainment device N 316 is intended to represent that there could be an almost unlimited number of clients connected to network 305. The entertainment devices 310-316 may take different forms. For example, the entertainment devices 310-316 may be game consoles, televisions, DVRs, cable boxes, personal computers, tablets, or other entertainment devices capable of outputting media. In addition, the entertainment devices 310-316 are capable of gathering viewer data through an imaging device, similar to imaging device 213 of FIG. 2 that was previously described. The imaging device could be built into a client, such as a web cam and microphone, or could be a stand-alone device. -
game console 210. The entertainment devices 310-316 may include classification programs that analyze image data to generate audience data. For example, the entertainment devices 310-316 may determine how many people are in the audience, audience member characteristics, levels of engagement, and audience response. -
Network 305 is a wide area network, such as the Internet. Network 305 is connected to advertiser 320, content provider 322, and secondary content provider 324. The advertiser 320 distributes advertisements to entertainment devices 310-316. The advertiser 320 may also cooperate with entertainment service 330 to provide advertisements. The content provider 322 provides primary content such as movies, video games, and television shows. The primary content may be provided directly to entertainment devices 310-316 or indirectly through entertainment service 330. -
Secondary content provider 324 provides content that complements the primary content. Secondary content may be a director's cut, information about a character, game help information, and other content that complements the primary content. The same entity may generate both primary content and secondary content. For example, a television show may be generated by a director that also generates additional secondary content to complement the television show. The secondary content and primary content may be purchased separately and could be displayed on different devices. For example, the primary content could be displayed through a television while the secondary content is viewed on a companion device, such as a tablet. The advertiser 320, content provider 322, and secondary content provider 324 may stream content directly to entertainment devices or seek to have their content distributed by a service, such as entertainment service 330. -
Entertainment service 330 provides content and advertisements to entertainment devices. The entertainment service 330 is shown as a single block. In reality, the functions may be widely distributed across multiple devices. In embodiments of the present invention, the various features of entertainment service 330 described herein may be provided by multiple entities and components. The entertainment service 330 comprises a game execution environment 332, a game data store 334, a content data store 336, a distribution component 338, a streaming component 340, a content recognition database 342, an ad data store 344, an ad placement component 346, an ad sales component 348, an audience data store 350, an audience processing component 352, and an audience distribution component 354. As can be seen, the various components may work together to provide content, including games, advertisements, and media titles to a client, and capture audience data. The audience data may be used to specifically target advertisements and/or content to a person. The audience data may also be aggregated and shared with or sold to others. - The
game execution environment 332 provides an online gaming experience to a client device. The game execution environment 332 comprises the gaming resources required to execute a game. The game execution environment 332 comprises active memory along with computing and video processing. The game execution environment 332 receives gaming controls, such as controller input, through an I/O channel and causes the game to be manipulated and progressed according to its programming. In one embodiment, the game execution environment 332 outputs a rendered video stream that is communicated to the game device. Game progress may be saved online and associated with an individual person that has an ID through a gaming service. The game ID may be associated with a facial pattern. - The
game data store 334 stores game code for various game titles. The game execution environment 332 may retrieve a game title and execute it to provide a gaming experience. Alternatively, the content distribution component 338 may download a game title to an entertainment device, such as entertainment device A 310. - The
content data store 336 stores media titles, such as songs, videos, television shows, and other content. The distribution component 338 may communicate this content from content data store 336 to the entertainment devices 310-316. Once downloaded, an entertainment device may play the content on or output the content from the entertainment device. Alternatively, the streaming component 340 may use content from content data store 336 to stream the content to the person. - The
content recognition database 342 includes a collection of audio clips associated with known media titles that may be compared to audio input received at the entertainment service 330. As described above, the received audio input (e.g., received from the game console 210 of FIG. 2) is mapped to the library of known media titles. Upon mapping the audio input to a known media title, the source of the audio input (i.e., the identity of media content) may be determined. The identified media title/content is then communicated back to the entertainment device (e.g., the game console) for further processing. Exemplary processing may include associating the identified media content with a person that viewed or is actively viewing the media content and storing the association as a viewing record. - The
entertainment service 330 also provides advertisements. Advertisements available for distribution may be stored within ad data store 344. The advertisements may be presented as an overlay in conjunction with primary content, or as partial or full-screen advertisements that are presented between segments of a media presentation or between the beginning and end of a media presentation, such as a television commercial. The advertisements may be associated with audio content. Additionally, the advertisements may take the form of secondary content that is displayed on a companion device in conjunction with a display of primary content. The advertisements may also be presented when a person associated with a targeted persona is located in the audience area and/or is logged in to the entertainment service 330, as further described below. - The
ad placement component 346 determines when an advertisement should be displayed to a person and/or what advertisement should be displayed. The ad placement component 346 may consume real-time audience data and automatically place an advertisement associated with a highest-bidding advertiser in front of one or more viewers because the audience data indicates that the advertiser's bidding criteria are satisfied. For example, an advertiser may wish to display an advertisement to men present in Kansas City, Mo. When the audience data indicates that one or more men in Kansas City are viewing primary content, an ad could be served with that primary content. The ad may be inserted into streaming content or downloaded to the various entertainment devices along with triggering mechanisms or instructions on when the advertisement should be displayed to the person. The triggering mechanisms may specify desired audience data that triggers display of the ad. - The
ad placement component 346 may manage linked advertisements. The ad placement component 346 may communicate preliminary advertisements and subsequent advertisements to entertainment clients. For example, the ad placement component 346 could communicate a preliminary advertisement and associated response criteria to a smart TV. The smart TV could indicate that an audience member satisfied the response criteria. In response, the ad placement component 346 could communicate subsequent advertisements to the audience member's tablet along with a presentation trigger. The ad placement component 346 could activate and deactivate subsequent advertisements as viewer responses to the advertisements are received. Further, viewers' responses to ads in an ad path may be tracked by the ad placement component 346. The ad placement component 346 may maintain a record of viewer responses and purchases. The ad placement component 346 may bill advertisers using the viewer response data. - The
ad sales component 348 interacts with advertisers 320 to set a price for displaying an advertisement. In one embodiment, an auction is conducted for various advertising spaces. The auction may be a real-time auction in which the highest bidder is selected when a viewer or viewing opportunity satisfies the advertiser's criteria. - The
audience data store 350 aggregates and stores audience data received from entertainment devices 310-316. The audience data may first be parsed according to known types or titles of media content. Each item of audience data that relates to a known type or title of media content is a viewing record for that media content. Viewing records for each type of media content may be aggregated, thereby generating viewing data. The viewing data may be summarized according to categories. Exemplary categories include a total number of persons that watched the content, the average number of persons per household that watched the content, a number of times certain persons watched the content, a determined response of people toward the content, a level of engagement of people in the media title, a length of time individuals watched the content, the common distractions that were ignored or engaged in while the content was being displayed, and the like. The viewing data may similarly be summarized according to types of persons that watched the known media content. For example, personal characteristics of the persons, demographic information about the persons, and the like may be summarized within the viewing data. - The
audience processing component 352 may build and assign personas using the audience data and a machine-learning algorithm. A persona is an abstraction of a person or group of people that describes preferences or characteristics about the person or group. The personas may be based on media content the persons have viewed or listened to, as well as other personal information stored in a user profile on the entertainment device (e.g., game console) and associated with the person. For example, the persona could define a person as a female between the ages of 20 and 35 having an interest in science fiction, movies, and sports. Similarly, a person that always has a positive emotional response to car commercials may be assigned a persona of "car enthusiast." More than one persona may be assigned to an individual or group of individuals. For example, a family of five may have a group persona of "animated film enthusiasts" and "football enthusiasts." Within the family, a child may be assigned a persona of "likes video games," while the child's mother may be assigned a persona of "dislikes video games." It will be understood that the examples provided herein are merely exemplary. Any number or type of personas may be assigned to a person. - The
audience distribution component 354 may distribute audience data to content providers, advertisers, or other interested parties. For example, the audience distribution component 354 could provide information indicating that 300,000 discrete individuals viewed a television show in a geographic region. The audience data could be derived from image data received at each entertainment device. In addition to the number of people that viewed the media content, more granular information could be provided. For example, the total number of persons giving full attention to the content could be provided. In addition, response data for people could be provided. To protect the identity of individual persons, only a persona assigned to a person may be exposed and distributed to advertisers. A value may be placed on the distribution, as a condition on its delivery, as described above. The value may also be based on the amount, type, and depth of viewing data delivered to an advertiser or content publisher. - Turning now to
FIG. 4 , anaudience area 400 that includes a group of people is shown, in accordance with an embodiment of the present invention. The audience area is the area in front of thedisplay device 410. In one embodiment, theaudience area 400 comprises the area from which a person can see the content. In another embodiment, theaudience area 400 comprises the area within a viewing range of theimaging device 418. In most embodiments, however, the viewing range of theimaging device 418 overlaps with the area from which a person can see content on thedisplay device 410. If the content is only audio content, then the audience area is the area where the person may hear the content. - Content is provided to the audience area by an entertainment system that comprises a
display device 410, a game console 412, a cable box 414, a DVD player 416, and an imaging device 418. The game console 412 may be similar to game console 210 of FIG. 2 described previously. The cable box 414 and the DVD player 416 may stream content from an entertainment service, such as entertainment service 330 of FIG. 3, to the display device 410 (e.g., television). The game console 412, cable box 414, and the DVD player 416 are all coupled to the display device 410. These devices may communicate content to the display device 410 via a wired or wireless connection, and the display device 410 may display the content. In some embodiments, the content shown on the display device 410 may be selected by one or more persons within the audience. For example, a person in the audience may select content by inserting a DVD into the DVD player 416 or select content by clicking, tapping, gesturing, or pushing a button on a companion device (e.g., a tablet) or a remote in communication with the display device 410. Content selected for viewing may be tracked and stored on the game console 412. - The
imaging device 418 is connected to the game console 412. The imaging device 418 may be similar to imaging device 213 of FIG. 2 described previously. The imaging device 418 captures image data of the audience area 400. Other devices that include imaging technology, such as the tablet 212 of FIG. 2, may also capture image data and communicate the image data to the game console 412 via a wireless or wired connection. - In one embodiment, audience data may be gathered through image processing. Audience data may include a detected number of persons within the
audience area 400. Persons may be detected based on their form, appendages, height, facial features, movement, speed of movement, associations with other persons, biometric indicators, and the like. Once detected, the persons may be counted and tracked so as to prevent double counting. The number of persons within the audience area 400 also may be automatically updated as people leave and enter the audience area 400. - Audience data may similarly include a direction each audience member is facing. Determining the direction persons are facing may, in some embodiments, be based on whether certain facial or body features are moving or detectable. For example, when certain features, such as a person's cheeks, chin, mouth, and hairline are detected, they may indicate that a person is facing the
display device 410. Audience data may include a number of persons that are looking toward the display device 410, periodically glancing at the display device 410, or not looking at all toward the display device 410. In some embodiments, a period of time each person views specific media presentations may also comprise audience data. - As an example, audience data may indicate that an individual 420 is standing in the background of the
audience area 400 while looking at the display device 410. Other individuals, including child 428 and child 430, may also be detected and determined to all be facing the display device 410. A man 432 and a woman 434 may be detected and determined to be looking away from the television. The dog 436 may also be detected, but characteristics (e.g., short stature, four legs, and long snout) about the dog 436 may not be stored as audience data because they indicate that the dog 436 is not a person. - Additionally, audience data may include an identity of each person within the
audience area 400. Facial recognition technologies may be utilized to identify a person within the audience area 400 or to create and store a new identity for a person. Additional characteristics of the person (e.g., form, height, weight) may similarly be analyzed to identify a person. In one embodiment, the person's determined characteristics may be compared to characteristics of a person stored in a user profile on the display device 410. If the determined characteristics match those in a stored user profile, the person may be identified as the person associated with the user profile. - Audience data may include personal information associated with each person in the audience area. Exemplary personal characteristics include an estimated age, a race, a nationality, a gender, a height, a weight, a disability, a medical condition, a likely activity level (e.g., active or relatively inactive), a role within a family (e.g., father or daughter), and the like. For example, based on the image data, an image processor may determine that
audience member 420 is a woman of average weight. Similarly, analyzing the width, height, bone structure, and size of individual 432 may lead to a determination that the individual 432 is a male. Personal information may also be derived from stored user profile information. Such personal information may include an address, a name, an age, a birth date, an income, one or more viewing preferences (e.g., movies, games, and reality television shows) of, or login credentials for, each person. In this way, audience data may be generated based on both processed image data and stored personal profile data. For example, if individual 434 is identified and associated with a personal profile of a 13-year-old, processed image data that classifies individual 434 as an adult (i.e., over 18 years old) may be disregarded as inaccurate. - The audience data also comprises an identification of the primary content being displayed when image data is captured at the
imaging device 418. The primary content may, in one embodiment, be identified because it is fed through the game console 412. In other embodiments, and as described above, audio output associated with the display device 410 may be received at a microphone associated with the game console 412. The audio output is then compared to a library of known content and determined to correspond to a known media title or a known genre of media title (e.g., sports, music, movies, and the like). As well, other cues (e.g., whether the person appears to be listening to as opposed to watching a media presentation) may be analyzed to determine the identity of the media content (e.g., a song as opposed to the soundtrack to a movie). Thus, audience data may indicate that basketball game 411 was being displayed to the individuals in the audience area. - Turning now to
FIG. 5 , an audience area depicting audience members' levels of engagement is shown, in accordance with an embodiment of the present invention. The entertainment system is identical to that shown inFIG. 4 , but the audience members have changed. Image data captured at theimaging device 418 may be processed similarly to how it was processed with reference toFIG. 4 . However, in this illustrative embodiment, the image data may be processed to generate audience data that indicates a level of engagement of and/or attention paid by the audience toward the media presentation (e.g., the basketball game 411). - An indication of the level of engagement of a person may be generated based on detected traits of or actions taken by the person, such as facial features, body positioning, and body movement. For example, the movement of a person's eyes, the direction the person's body is facing, the direction the person's face is turned, whether the person is engaged in another task (e.g., talking on the phone), whether the person is talking, the number of additional persons within the
audience area 500, and the movement of the person (e.g., pacing, standing still, sitting, or lying down) are traits of and/or actions taken by a person that may be distilled from the image data. The determined traits may then be mapped to predetermined categories or levels of engagement (e.g., a high level of engagement or a low level of engagement). Any number of categories or levels of engagement may be created, and the examples provided herein are merely exemplary. - In another embodiment, a level of engagement may additionally be associated with one or more predetermined categories of distractions. In this way, traits of or actions taken by a person may be mapped to both a level of engagement and a type of distraction. Exemplary actions that indicate a distraction include engaging in conversation, using more than one display device (e.g., the display device 510 and a companion device), reading a book, playing a board game, falling asleep, getting a snack, leaving the
audience area 500, walking around, and the like. Exemplary distraction categories may include “interacted with other persons,” “interacted with an animal,” “interacted with other display devices,” “took a brief break,” and the like. - Other input that may be used to determine a person's level of engagement is audio data. Microphones associated with the
game console 412 may pick up conversations or sounds from the audience. The audio data may be interpreted and determined to be responsive to (i.e., related to or directed at) the media presentation or nonresponsive to the media presentation. The audio data may be associated with a specific person (e.g., a person's voice). As well, signal data from companion devices may be collected to generate audience data. The signal data may indicate, in greater detail than the image data, a type or identity of a distraction, as described below. - Thus, the image data gathered through
imaging device 418 may be analyzed to determine that individual 520 is reading a paper 522 and is therefore distracted from the content shown on display device 510. Individual 536 is viewing tablet 538 while the content is being displayed through display device 510. In addition to observing the person holding the tablet, signal data may be analyzed to understand what the person is doing on the tablet. For example, the person could be surfing the Web, checking e-mail, checking a social network site, or performing some other task. However, the individual 536 could also be viewing secondary content that is related to the primary content 411 shown on display device 510. What the person is doing on tablet 538 may cause a different level of engagement to be associated with the person. For example, if the activity is totally unrelated (i.e., the activity is not secondary content), then the level of engagement mapped to the person's action (i.e., looking at the tablet) and associated with the person may be determined to be quite low. On the other hand, if the person is viewing secondary content that complements the primary content 511, then the individual 536's action of looking at the tablet may be mapped to a somewhat higher level of engagement.
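The mapping from an observed activity to a level of engagement described above can be illustrated with a minimal sketch. The activity labels and level values below are assumptions for illustration only; they are not defined by this disclosure, and a real implementation would derive activities from processed image and signal data.

```python
# Illustrative sketch: map a detected audience-member activity to an
# engagement level. Viewing related secondary content on a companion
# device ranks higher than an unrelated activity, as described above.

ENGAGEMENT_LEVELS = {
    "watching_display": "high",
    "viewing_related_secondary_content": "medium",  # e.g., companion tablet app
    "viewing_unrelated_content": "low",             # e.g., surfing the Web
    "reading_paper": "low",
}

def engagement_level(activity):
    """Return the engagement level mapped to a detected activity."""
    return ENGAGEMENT_LEVELS.get(activity, "unknown")

print(engagement_level("viewing_related_secondary_content"))  # medium
print(engagement_level("reading_paper"))                      # low
```

Any number of levels could be used, consistent with the statement above that the categories are merely exemplary.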
Other individuals in the audience area may be detected conversing with one another, and their conversation may be determined to relate to individual 530 and/or the media content being displayed. - Determined distractions and levels of engagement of a person may additionally be associated with particular portions of image data, and thus, corresponding portions of media content. As mentioned elsewhere, such audience data may be stored locally on the
game console 412 or communicated to a server for remote storage and distribution. The audience data may be stored as a viewing record for the media content. As well, the audience data may be stored in a user profile associated with the person for whom a level of engagement or distraction was determined. - Turning now to
FIG. 6 , a person's reaction to media content is classified and stored in association with the viewing data. The entertainment setup shown inFIG. 6 is the same as that shown inFIG. 4 . However, theprimary content 611 is different. In this case, the primary content is a car commercial indicating a sale. In addition to detecting thatindividuals - In one embodiment, a person's response may be gleaned from the images and/or audio originating from the person (e.g., the person's voice). Exemplary responses include smiling, frowning, wide eyes, glaring, yelling, speaking softly, laughing, crying, and the like. Other responses may include a change to a biometric reading, such as an increased or a decreased heart rate, facial flushing, or pupil dilation. Still other responses may include movement, or a lack thereof, for example, pacing, tapping, standing, sitting, darting one's eyes, fixing one's eyes, and the like. Each response may be mapped to one or more predetermined emotions, such as happiness, sadness, excitement, boredom, depression, calmness, fear, anger, confusion, disgust, and the like. For example, when a person frowns, her frown may be mapped to an emotion of dissatisfaction or displeasure. In embodiments, mapping a person's response to an emotion may additionally be based on the length of time the person held the response or the pronouncement of the person's response. As well, a person's response may be mapped to more than one emotion. For example, a person's response (e.g., smiling and jumping up and down) may indicate that the person is both happy and excited. Additionally, the predetermined categories of emotions may include tiers or spectrums of emotions. Baseline emotions of a person may also be taken into account when mapping a person's response to an emotion. 
For example, if the person rarely shows detectable emotions, a detected “happy” emotion for the person may be elevated to a higher “tier” of happiness, such as “elation.” As well, the baseline may serve to inform determinations about the attentiveness of the person toward a particular media title.
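The response-to-emotion mapping with a baseline adjustment described above can be sketched as follows. The response vocabulary, emotion labels, and tier table are illustrative assumptions, not values specified by this disclosure.

```python
# Illustrative sketch: map a detected response to an emotion, elevating
# the emotion to a higher tier for a rarely-expressive person, per the
# baseline discussion above.

RESPONSE_TO_EMOTION = {
    "smiling": "happiness",
    "laughing": "happiness",
    "frowning": "displeasure",
    "crying": "sadness",
}

# Higher "tier" used when a normally unexpressive person shows the emotion.
TIERS = {"happiness": "elation"}

def map_emotion(response, rarely_expressive=False):
    """Return the emotion mapped to a response, tier-adjusted by baseline."""
    emotion = RESPONSE_TO_EMOTION.get(response)
    if emotion and rarely_expressive:
        return TIERS.get(emotion, emotion)
    return emotion

print(map_emotion("smiling"))                          # happiness
print(map_emotion("smiling", rarely_expressive=True))  # elation
```

A response not in the table maps to no emotion, reflecting that only recognized responses are classified.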
- In some embodiments, only responses and determined emotions that are responsive to the media content being displayed to the person are associated with the media content. Responsiveness may be related to a determined level of engagement of a person, as described above. Thus, responsiveness may be determined based on the direction the person is looking when a title is being displayed. For example, a person that is turned away from the display device is unlikely to be reacting to content being displayed on the display device. Responsiveness may similarly be determined based on the number and type of distractions located within the viewing area of the display device. Similarly, responsiveness may be based on an extent to which a person is interacting with or responding to distractions. For example, a person who is talking on the phone, even though facing and looking at a display screen of the display device, may be experiencing an emotion unrelated to the media content being displayed on the screen. As well, responsiveness may be determined based on whether a person is actively or has recently changed a media title that is being displayed (i.e., a person is more likely to be viewing content he or she just selected to view). It will be understood that responsiveness can be determined in any number of ways by utilizing machine-learning algorithms, and the examples provided herein are meant only to be illustrative.
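The responsiveness check described above, which gates whether a detected emotion is attributed to the displayed content, can be sketched as a simple predicate. The field names and the specific rule are illustrative assumptions; as the text notes, a real system might use machine-learning algorithms.

```python
# Illustrative sketch: attribute an emotion to the displayed content only
# if the viewer is facing the display and free of distractions, or has
# recently selected the content, per the responsiveness factors above.

def is_responsive(viewer):
    """Decide whether a detected emotion should be linked to the content."""
    facing = viewer.get("facing_display", False)
    distracted = bool(viewer.get("distractions"))
    # A viewer who just chose the title is more likely reacting to it.
    recently_selected = viewer.get("selected_content_recently", False)
    return (facing and not distracted) or recently_selected

print(is_responsive({"facing_display": True, "distractions": []}))         # True
print(is_responsive({"facing_display": True, "distractions": ["phone"]}))  # False
```

For instance, the phone-call case above mirrors the example in the text of a person facing the screen whose emotion is nonetheless unrelated to the content.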
- Thus, returning to
FIG. 6 , the image data may be utilized to determine responses ofindividual 622 and individual 620 to theprimary content 611.Individual 622 may be determined to have multiple responses to the car commercial, each of which may be mapped to the same or multiple emotions. For example, the individual 622 may be determined to be smiling, laughing, to be blinking normally, to be sitting, and the like. All of these reactions, alone and/or in combination, may lead to a determination that the individual 622 is pleased and happy. This is assumed to be a reaction to theprimary content 611 and recorded in association with the display event. By contrast, individual 620 is not smiling, has lowered eyebrows, and is crossing his arms, indicating that the individual 620 may be angry or not pleased with the car commercial. - Turning now to
FIG. 7 , alinear ad path 700 is shown, in accordance with an embodiment of the present invention. Thelinear ad path 700 includes apreliminary ad 710, asubsequent ad 720 having incentive A, and asubsequent ad 730 having incentive B. Incentives A and B are different. - The
preliminary ad 710 may be shown as part of a media presentation, such as product placement in a television show or an ad shown during a commercial break. The preliminary ad 710 is associated with one or more reaction criteria that are used to determine whether an audience member should be shown either of the subsequent ads. For example, the preliminary advertisement 710 may require that the user pay full attention to the preliminary ad to activate either subsequent ad. Attentiveness may be used as the reaction criterion when the ads build on each other to tell a story, or when knowledge of the previous ad in the path is required to understand the new ad. - In another embodiment, the reaction criteria specify that a positive response is detected or received from the audience member. An explicit response may be received, while an implicit response is detected. Once the reaction criteria are satisfied, the
subsequent ad 720 is activated for presentation to the user upon satisfaction of presentation triggers associated with the subsequent ad 720. - The
subsequent ad 720 may be communicated to a device associated with the audience member. For example, the audience member may be associated through a user account with a personal computer, tablet, and smartphone. The subsequent ad 720 may be communicated to one or more devices that are capable of detecting context associated with the presentation trigger, including the device on which the preliminary ad 710 was viewed. For example, if the presentation trigger requires the user to be in a geographic area, then the subsequent ad 720 would only be communicated to devices that are location aware. On the other hand, if the presentation trigger associated with subsequent ad 720 only requires that it be shown to the user at a particular time, then it could also be sent to the personal computer, game console, or other non-location-aware entertainment devices. - The
subsequent ad 720 may be shown to the user multiple times across multiple devices. In each case, the presentation and response, if any, may be communicated to a centralized ad tracking service. At some point, the user's response, or lack of response, to the subsequent ad 720 may cause the user to be shifted down the ad path 700 to subsequent ad 730 having incentive B. In one embodiment, the failure of the user to respond to subsequent ad 720 with incentive A causes the user to be shifted down the path 700 to subsequent ad 730 with incentive B, which is higher than incentive A. In other situations, incentive A and incentive B are not of a significantly different value, but are just different. For example, incentive A could be for the user to get a free soft drink, while incentive B is for the user to get a free cup of coffee. - In one embodiment, the user's positive response to
subsequent ad 720 causes the user to be shifted to subsequent ad 730. For example, upon detecting that the user purchased a first product in response to subsequent ad 720, a related product could be advertised through subsequent ad 730. In this case, incentives A and B would be directed toward different products associated with their corresponding advertisements. For example, having purchased movie tickets through subsequent ad 720, subsequent ad 730, having a coupon for popcorn, could be displayed upon the user arriving at the theater. - Turning now to
FIG. 8 , anonlinear ad path 800 is shown, in accordance with an embodiment of a present invention. Thead path 800 starts with apreliminary user response 810 to content, such as preliminary ad. Different subsequent ads within thead path 800 may be activated in response to thepreliminary user response 810. Thepreliminary user response 810 may be explicit or implicit. Thepreliminary user response 810 may be in response to an advertisement, but could also be in response to nonadvertising content. - For example, the user could respond positively to a baseball game (nonadvertising content) between two teams. Upon determining that the user is associated with a city where one of the teams is based, the user could be placed into an ad path designed to incentivize the user to purchase goods or services associated with the baseball team. For example,
subsequent ad A 812, having presentation trigger A, could be related to the user purchasing tickets for a baseball game. At decision point 814, the user's response to subsequent ad A 812 is evaluated. Upon determining that the user purchased baseball tickets, the path may be deactivated at step 816. The user's purchase record may be updated to indicate that the user purchased baseball tickets. - Upon determining that the user did not purchase baseball tickets in response to
subsequent ad A 812, the user may be moved to a different part in the path 800 associated with subsequent ad B 818. As can be seen, ad 818 is associated with a different trigger and could also be associated with different incentives. The response to subsequent ad B 818 is monitored at decision point 820. Upon determining that a positive response has not been received, the ad B 818 may remain active for subsequent presentation the next time trigger B is satisfied. If a positive response is noted at decision point 820, the user could be moved to a different part in the path associated with subsequent ad E 822. Notice that subsequent ad B 818 and subsequent ad E 822 are both associated with trigger B. Ads within an advertising path and across different advertising paths may use the same triggers. For example, trigger B could be associated with a time frame before an upcoming baseball home stand. The subsequent ad E 822 could be associated with a different home stand or different games than those that received the positive response to subsequent ad B 818. Though not shown, the various points along the path could loop or be deactivated in response to a positive response or purchase. - In one embodiment, the part of the path showing
subsequent ad C 824 and subsequent ad F 826 is related to a complementary product, such as a baseball jersey or cap. Thus, while the earlier part of the path relates to ticket purchases, the trigger for subsequent ad C 824 may be related to geographic proximity to a retail outlet where baseball caps are sold. - The
ad path 800 at the same time when appropriate. For example, the user could be associated with a subsequent ad offering baseball tickets at the same time she is associated with a subsequent ad selling baseball caps. Similarly, the user could be associated with multiple subsequent ads offering the same thing but with different triggers. For example, the triggers could specify different geographic locations associated with different retail stores and different incentives offered by those respective stores. - Turning now to
FIG. 9 , amethod 900 of providing linked advertisements is shown, in accordance with an embodiment of the present invention. The method may be performed on a game console or other entertainment device that is connected to an imaging device with a view of an audience area approximate to a display device. - At
Step 910, image data that depicts an audience for an ongoing media presentation is received. The image data may be in the form of a depth cloud generated by a depth camera, a video stream, still images, skeletal tracking information or other information derived from the image data. The ongoing media presentation may be a movie, game, television show, an advertisement, or the like. Ads shown during breaks in a television show may be considered part of the ongoing media presentation. - The audience may include one or more individuals within an audience area. The audience area includes the extents from which the ongoing media presentation may be viewed from the display device. The individuals within the audience area may be described as audience members herein.
- At
step 920, an audience member is identified by analyzing the image data. In one embodiment, the audience member is identified through facial recognition. For example, the audience member may be associated with a user account that provides facial recognition authentication or login. The audience member's account may then be associated with one or more social networks. In one embodiment, social networks are associated with a facial recognition login feature that allows the audience member to be associated with a social network. - The audience member may be given an opportunity to explicitly associate his account with one or more social networks. The audience member may be a member of more social networks than are actually associated with the account. But embodiments of the present invention may work with whatever social networks the audience member has provided access to. Upon determining that the audience member is associated with a social network, the audience member may be asked to provide authentication information or permission to access the social network. This information may be requested through a setup overlay or screen. The setup may occur at a point separate from when the media presentation is ongoing, for example, when an entertainment device is set up.
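The identification flow of step 920 can be illustrated with a minimal sketch: a face signature computed from the image data is matched against stored user profiles, and the social networks for which that user has granted access are then available. The profile names, signature representation, and tolerance-based similarity test are all illustrative assumptions, not part of this disclosure.

```python
# Illustrative sketch of step 920: match a computed face signature
# against stored profiles, then look up the linked social networks.

PROFILES = {
    "alice": {"face_signature": (0.9, 0.1, 0.4), "social_networks": ["ExampleNet"]},
    "bob": {"face_signature": (0.2, 0.8, 0.5), "social_networks": []},
}

def identify(face_signature, tolerance=0.1):
    """Return the account whose stored signature matches within tolerance."""
    for account, profile in PROFILES.items():
        stored = profile["face_signature"]
        if all(abs(a - b) <= tolerance for a, b in zip(face_signature, stored)):
            return account
    return None  # unrecognized: a new identity could be created and stored

account = identify((0.88, 0.12, 0.41))
print(account)                               # alice
print(PROFILES[account]["social_networks"])  # ['ExampleNet']
```

An unrecognized signature returns no account, corresponding to the case above where a new identity may be created and stored instead.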
At step 930, audience data is generated by analyzing the image data. Exemplary audience data has been described previously. The audience data may include the number of people present within the audience. For example, the audience data could indicate that five people are present within the audience area. The audience data may also associate audience members with demographic characteristics.

The audience data may also indicate an audience member's level of attentiveness to the ongoing media presentation. Different audience members may be associated with different levels of attentiveness. In one embodiment, attentiveness is measured using distractions detected within the image data. In other words, a member's interactions with objects other than the display may be interpreted as the member paying less than full attention to the ongoing media presentation. For example, if the audience member is interacting with a different media presentation (e.g., reading a book, playing a game), then less than full attentiveness is paid to the ongoing media presentation. Interactions with other audience members may also indicate a low level of attentiveness: two audience members having a conversation may be assigned less than a full attentiveness level, and, similarly, an individual speaking on a phone may be assigned less than full attention.
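The distraction-based attentiveness measure could be modeled as in the sketch below. The penalty values and distraction labels are assumptions chosen to mirror the examples in the text, not disclosed parameters.

```python
# Illustrative attentiveness model (not the patented algorithm): start from
# full attention and discount for each distraction detected in the image data.

DISTRACTION_PENALTY = {
    "reading_book": 0.5,   # interacting with a different media presentation
    "playing_game": 0.5,
    "conversation": 0.4,   # interacting with other audience members
    "phone_call": 0.4,
}

def attentiveness(distractions):
    """Return a 0.0-1.0 attentiveness level given detected distractions."""
    level = 1.0
    for d in distractions:
        level -= DISTRACTION_PENALTY.get(d, 0.0)
    return max(level, 0.0)
```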
In addition to measuring distractions, an individual's actions in relation to the ongoing media presentation may be analyzed to determine a level of attentiveness. For example, the audience member's gaze may be analyzed to determine whether he is looking at the display. When multiple content items are shown within the ongoing media presentation, such as an overlay over primary content, gaze detection may be used to determine whether the user is ignoring the overlay and looking at the primary content, is focused on the overlay, or merely noticed the overlay for a short period. Thus, attentiveness information can be assigned to different content shown on a single display.
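Gaze-based attribution of attention to different content on one display might look like the sketch below. The normalized screen coordinates and the rectangular overlay region are assumptions made for illustration.

```python
# Sketch: assign attentiveness to different content items on one display via
# gaze detection. Coordinates are normalized to [0, 1] across the screen.

def gaze_target(gaze_xy, overlay_rect):
    """Classify a gaze point as 'overlay', 'primary', or 'away'."""
    x, y = gaze_xy
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
        return "away"              # not looking at the display at all
    left, top, right, bottom = overlay_rect
    if left <= x <= right and top <= y <= bottom:
        return "overlay"
    return "primary"

def attention_by_content(gaze_samples, overlay_rect):
    """Fraction of gaze samples spent on each content item."""
    counts = {"overlay": 0, "primary": 0, "away": 0}
    for sample in gaze_samples:
        counts[gaze_target(sample, overlay_rect)] += 1
    total = len(gaze_samples) or 1
    return {k: v / total for k, v in counts.items()}
```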
The audience data may also measure a user's reaction or response to the ongoing media presentation. As mentioned previously with reference to FIG. 6, a user's response or reaction may be measured based on biometric data and facial expressions.

At step 940, the audience member is determined to have reacted positively to a preliminary advertisement for a product or service shown as part of the ongoing media presentation. The preliminary advertisement could be a commercial shown during a break in the primary content, including before the presentation of the primary content begins or after it concludes. The preliminary advertisement could also be product placement within the primary media content, or an overlay shown concurrently with the primary content.

The audience member's positive reaction is determined using the audience data generated previously at step 930. In one embodiment, reactions within the audience data are correlated to content within the ongoing media presentation. For example, a positive response observed at the same time a sports car appears within the media content may trigger a subsequent ad for the sports car. The sports car may be part of a product placement.

In one embodiment, each reaction within the audience data is associated with a time that can be used to associate the reaction with a particular point in the content. For example, if the presentation starts at noon and a reaction is observed within the audience data at 1:00 p.m., then the reaction may be associated with content shown one hour into the ongoing media presentation. Other ways to correlate a reaction with a point within the media presentation are possible. In one embodiment, metadata associated with the ongoing media presentation identifies the content displayed at various progress points. One example of such content is a preliminary advertisement, which may be a television commercial, product placement, an overlay, or the like.
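The timestamp-to-progress-point correlation could be sketched as below. The metadata format (intervals of minutes naming the on-screen content) is an assumption; the noon-to-1:00 p.m. example from the text is expressed in minutes since midnight.

```python
# Sketch of correlating a timestamped reaction with content metadata that
# identifies what was displayed at various progress points. The interval
# format is an invented illustration.

def content_at_reaction(start_minute, reaction_minute, metadata):
    """Map a reaction's wall-clock time to the content shown at that
    progress point. `metadata` is a list of (begin, end, content) tuples
    giving minutes into the presentation."""
    progress = reaction_minute - start_minute  # noon -> 1:00 p.m. = 60 min
    for begin, end, content in metadata:
        if begin <= progress < end:
            return content
    return None

metadata = [(0, 58, "primary content"),
            (58, 62, "sports car product placement"),
            (62, 90, "primary content")]
```

For example, a presentation starting at noon (minute 720) with a reaction at 1:00 p.m. (minute 780) resolves to the content shown sixty minutes in.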
The preliminary advertisement is associated with a response or reaction threshold. When a reaction within the audience data is correlated to the preliminary advertisement and determined to satisfy the reaction threshold, a subsequent advertisement may be activated. A subsequent advertisement may be thought of as a follow-up to the preliminary advertisement. In one embodiment, the preliminary advertisement and the subsequent advertisement could be the same; however, the subsequent advertisement is associated with one or more presentation triggers, and it is displayed only when the presentation trigger criteria are satisfied. The presentation trigger criteria may be context based. For example, the subsequent advertisement may be shown on a mobile device associated with the audience member at a time and place that is conducive to purchasing the advertised product or service.
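A context-based presentation trigger of this kind might be evaluated as in the sketch below. The time window, distance threshold, and activity rules mirror the coffee-shop example given later in this description but are otherwise invented.

```python
# Sketch: evaluate a presentation trigger against a mobile device's present
# context (time, location, activity). All field names and thresholds are
# illustrative assumptions.

def trigger_satisfied(trigger, context):
    """True when time, place, and activity all permit showing the
    subsequent advertisement."""
    if not (trigger["start_hour"] <= context["hour"] < trigger["end_hour"]):
        return False
    if context["distance_to_shop_km"] > trigger["max_distance_km"]:
        return False
    activity = context["activity"]
    if activity == "driving":
        return True                  # the user can stop and respond
    if activity == "bus":
        # Only worthwhile if the bus stops near the advertised shop.
        return context.get("bus_stop_near_shop", False)
    return False                     # e.g. on a train: cannot respond in time

coffee_trigger = {"start_hour": 7, "end_hour": 10, "max_distance_km": 0.5}
```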
At step 950, the audience member is associated with an ad path that comprises at least one subsequent advertisement having a presentation trigger that, when satisfied, causes a presentation of the subsequent advertisement. Examples of ad paths have been described previously. The user could be associated with a particular subsequent advertisement within an ad path that includes multiple subsequent advertisements. In one embodiment, the strength of the user's reaction is determined and used to select which subsequent advertisement within the path should be activated. An activated subsequent advertisement is actively monitored for satisfaction of its associated presentation trigger. Dormant or inactive subsequent advertisements within the path are not monitored and are not triggered for display until activated. A user's response to a first subsequent advertisement, either positive or negative, could cause an additional subsequent advertisement to be activated and the initially active subsequent advertisement to be deactivated.

In one embodiment, the audience member is associated with a user account. The user account may be for an entertainment service, e-mail, a social network, or another service that is accessed through multiple devices. The subsequent advertisements may be presented through this service or through an application associated with it. In this way, subsequent advertisements may be presented on one or more devices associated with the user. Accordingly, the ad path may be communicated to multiple devices associated with the user, and different subsequent advertisements could be activated on different devices. For example, a subsequent advertisement associated with a particular time and location could be active on the mobile device associated with the audience member, whereas a different subsequent advertisement could be activated on a game console or other mostly stationary device.
In one embodiment, the subsequent advertisement on the mobile device is obtrusive. The subsequent advertisement may be associated with a vibration, noise, or other indication to get the user's attention.
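The activation bookkeeping described for step 950 can be sketched as a small state machine in which exactly one subsequent advertisement is active (and monitored) at a time, and a response to it advances the path. The class and ad names below are hypothetical.

```python
# Illustrative ad-path state machine: one active subsequent advertisement is
# monitored; dormant ads are ignored until activated. A positive or negative
# response deactivates the current ad and activates the next one.

class AdPath:
    def __init__(self, subsequent_ads):
        self.ads = list(subsequent_ads)   # ordered follow-up advertisements
        self.active_index = 0             # the first ad starts active

    @property
    def active_ad(self):
        if 0 <= self.active_index < len(self.ads):
            return self.ads[self.active_index]
        return None                       # path exhausted

    def record_response(self, response):
        """Either response advances the path; the current ad is deactivated
        and the next subsequent advertisement becomes active."""
        if response in ("positive", "negative"):
            self.active_index += 1

path = AdPath(["coupon-overlay", "mobile-coupon", "in-store-offer"])
path.record_response("negative")
```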
Turning now to FIG. 10, a method 1000 of assigning an audience member to an ad path is shown, in accordance with an embodiment of the present invention. At step 1010, image data that depicts an audience for the ongoing media presentation is received. At step 1020, an audience member is identified by analyzing the image data. For example, the audience member could be identified using voice recognition or facial recognition. At step 1030, audience data is generated by analyzing the image data. The audience data may classify the reactions of individuals present within the audience. The individuals may alternatively be described as users or audience members.

At step 1040, a strength of the audience member's reaction to content shown as part of the ongoing media presentation is determined from the audience data. The strength of the reaction is intended to capture the user's enthusiasm for the content, such as a preliminary advertisement. The strength of the reaction may be determined based on an audience member profile that tracks the range of responses made by that audience member. For example, a raised eyebrow may be an extremely strong positive response from a first audience member and only a mildly positive, or even skeptical, response from a different audience member. Some individuals are more expressive than others, and the user account and profile are able to adjust for these differences by comparing an individual's reactions over time. Feedback from an advertising path may be used to further refine the interpretation of an individual's responses. For example, if a particular reaction is initially interpreted as positive, but the user never responds positively to the resulting advertisements, then the expression may be reclassified.

At step 1050, the audience member is associated with an ad path that comprises multiple subsequent advertisements, each of which has a presentation trigger that, when satisfied, causes a presentation of the associated subsequent advertisement. An ad path having multiple subsequent advertisements has been described previously with reference to FIG. 8. In one embodiment, the ad path, including the subsequent advertisements and associated presentation triggers, is communicated to multiple devices associated with the audience member. For example, the ad path could be communicated to the audience member's smartphone and tablet. The communications between devices could be coordinated through a central advertising component associated with an advertising service or entertainment service. The centralized ad service may generate different ad paths and communicate them to different devices. The centralized ad service may also download initial triggers to entertainment devices that cause the user to be associated with a certain ad path.

In another embodiment, audience data is communicated from an entertainment device to a centralized ad component that analyzes the audience data and associates the user with an ad path when various criteria are satisfied. In an alternative embodiment, the criteria are communicated to an entertainment device that makes the comparison.
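The per-member calibration described at step 1040 could be approximated by normalizing a raw expression intensity against that member's own history, so the same raised eyebrow scores differently for an expressive viewer than for a reserved one. The z-score below is an assumed stand-in for whatever statistic an implementation would actually use.

```python
# Sketch of per-member reaction-strength calibration: judge a reaction
# against the member's own tracked range of responses rather than an
# absolute scale. Purely illustrative.
from statistics import mean, stdev

def reaction_strength(intensity, member_history):
    """Normalize a raw expression intensity against the member's past
    reactions; larger values mean unusually enthusiastic for this member."""
    if len(member_history) < 2:
        return 0.0                  # not enough history to calibrate
    spread = stdev(member_history) or 1.0
    return (intensity - mean(member_history)) / spread
```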
Turning now to FIG. 11, a method 1100 of managing an ad path is shown, in accordance with an embodiment of the present invention. Method 1100 may be performed by a mobile device such as a smartphone or a tablet. In one embodiment, the mobile device is location-aware through GPS or other location technology.

At step 1110, an ad path that comprises a subsequent advertisement related to content to which the user previously responded positively is received on a mobile device associated with the user. The content could have been shown on a different device, such as a television, or on the mobile device itself.

At step 1120, a presentation trigger associated with the subsequent advertisement is determined to be satisfied by the mobile device's present context. The context could include a time, a location, and a user activity. Examples of user activities include driving, riding a bus, and riding a train. In each case, the context criteria for time and location could be satisfied, but the user may only be shown the advertisement if driving. This may make sense because the user may not be able to get off a train or a bus in time to respond to the suggestion made within the subsequent advertisement. For example, the subsequent advertisement could provide a coupon for coffee as the user approaches a coffee shop. If the user is determined to be on a bus, for example, by observing a pattern of starting and stopping at known bus stops, then the user may not be shown the advertisement unless a bus stop is near the coffee shop. All of this would be taken into account by the presentation triggers.

At step 1130, the subsequent advertisement is presented to the user. In one embodiment, the user's response to the subsequent advertisement is monitored. A positive or negative response may be communicated to a central ad component that handles billing and may provide additional instructions regarding next steps along the ad path. In one embodiment, a different subsequent advertisement within the ad path is activated upon detecting a positive or negative response, and the present subsequent advertisement may be simultaneously deactivated.

Embodiments of the invention have been described as illustrative rather than restrictive. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations. This is contemplated by and is within the scope of the claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/691,557 US20150229990A1 (en) | 2013-04-18 | 2015-04-20 | Linked content |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/865,673 US9015737B2 (en) | 2013-04-18 | 2013-04-18 | Linked advertisements |
US14/691,557 US20150229990A1 (en) | 2013-04-18 | 2015-04-20 | Linked content |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/865,673 Continuation US9015737B2 (en) | 2013-04-18 | 2013-04-18 | Linked advertisements |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150229990A1 true US20150229990A1 (en) | 2015-08-13 |
Family
ID=50771625
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/865,673 Active 2033-07-14 US9015737B2 (en) | 2013-04-18 | 2013-04-18 | Linked advertisements |
US14/691,557 Abandoned US20150229990A1 (en) | 2013-04-18 | 2015-04-20 | Linked content |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/865,673 Active 2033-07-14 US9015737B2 (en) | 2013-04-18 | 2013-04-18 | Linked advertisements |
Country Status (4)
Country | Link |
---|---|
US (2) | US9015737B2 (en) |
EP (1) | EP2987125A4 (en) |
CN (1) | CN105339969B (en) |
WO (1) | WO2014172509A2 (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9881058B1 (en) | 2013-03-14 | 2018-01-30 | Google Inc. | Methods, systems, and media for displaying information related to displayed content upon detection of user attention |
US20140365310A1 (en) * | 2013-06-05 | 2014-12-11 | Machine Perception Technologies, Inc. | Presentation of materials based on low level feature analysis |
US9066116B2 (en) * | 2013-08-13 | 2015-06-23 | Yahoo! Inc. | Encoding pre-roll advertisements in progressively-loading images |
US20150142552A1 (en) * | 2013-11-21 | 2015-05-21 | At&T Intellectual Property I, L.P. | Sending Information Associated with a Targeted Advertisement to a Mobile Device Based on Viewer Reaction to the Targeted Advertisement |
TWI571117B (en) * | 2013-12-18 | 2017-02-11 | 財團法人資訊工業策進會 | The method for providing second-screen information |
US20160012475A1 (en) * | 2014-07-10 | 2016-01-14 | Google Inc. | Methods, systems, and media for presenting advertisements related to displayed content upon detection of user attention |
US10229429B2 (en) * | 2015-06-26 | 2019-03-12 | International Business Machines Corporation | Cross-device and cross-channel advertising and remarketing |
US9877058B2 (en) * | 2015-12-02 | 2018-01-23 | International Business Machines Corporation | Presenting personalized advertisements on smart glasses in a movie theater based on emotion of a viewer |
US9866907B2 (en) * | 2015-12-30 | 2018-01-09 | Paypal, Inc. | Television advertisement tracking |
US11540009B2 (en) * | 2016-01-06 | 2022-12-27 | Tvision Insights, Inc. | Systems and methods for assessing viewer engagement |
EP4080794A1 (en) * | 2016-01-06 | 2022-10-26 | TVision Insights, Inc. | Systems and methods for assessing viewer engagement |
US10497014B2 (en) * | 2016-04-22 | 2019-12-03 | Inreality Limited | Retail store digital shelf for recommending products utilizing facial recognition in a peer to peer network |
US11049147B2 (en) | 2016-09-09 | 2021-06-29 | Sony Corporation | System and method for providing recommendation on an electronic device based on emotional state detection |
JP6612707B2 (en) * | 2016-09-30 | 2019-11-27 | 本田技研工業株式会社 | Information provision device |
EP3529764A1 (en) * | 2016-10-20 | 2019-08-28 | Bayer Business Services GmbH | Device for determining features of a person |
US10097888B2 (en) * | 2017-02-06 | 2018-10-09 | Cisco Technology, Inc. | Determining audience engagement |
EP3613224A4 (en) | 2017-04-20 | 2020-12-30 | TVision Insights, Inc. | Methods and apparatus for multi-television measurements |
CN110019897B (en) * | 2017-08-01 | 2022-02-08 | 北京小米移动软件有限公司 | Method and device for displaying picture |
US20230319348A1 (en) | 2017-09-12 | 2023-10-05 | Dental Imaging Technologies Corporation | Systems and methods for assessing viewer engagement |
US10841651B1 (en) | 2017-10-10 | 2020-11-17 | Facebook, Inc. | Systems and methods for determining television consumption behavior |
US10425687B1 (en) | 2017-10-10 | 2019-09-24 | Facebook, Inc. | Systems and methods for determining television consumption behavior |
KR20190051255A (en) * | 2017-11-06 | 2019-05-15 | 삼성전자주식회사 | Image display apparatus and operating method thereof |
US10798425B1 (en) * | 2019-03-24 | 2020-10-06 | International Business Machines Corporation | Personalized key object identification in a live video stream |
JP7440173B2 (en) * | 2019-10-03 | 2024-02-28 | 日本電気株式会社 | Advertisement determination device, advertisement determination method, program |
US11218525B2 (en) * | 2020-01-21 | 2022-01-04 | Dish Network L.L.C. | Systems and methods for adapting content delivery based on endpoint communications |
EP4176403A1 (en) | 2020-07-01 | 2023-05-10 | Bakhchevan, Gennadii | A system and a method for personalized content presentation |
US11589114B1 (en) * | 2021-10-19 | 2023-02-21 | Dish Network L.L.C. | Adaptive content composite interaction and control interface |
US11861981B2 (en) | 2021-10-19 | 2024-01-02 | Dish Network L.L.C. | Experience-adaptive interaction interface for uncertain measurable events engagement |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090239510A1 (en) * | 2008-03-24 | 2009-09-24 | At&T Mobility Ii Llc | Theme based advertising |
US20090327894A1 (en) * | 2008-04-15 | 2009-12-31 | Novafora, Inc. | Systems and methods for remote control of interactive video |
US20130205314A1 (en) * | 2012-02-07 | 2013-08-08 | Arun Ramaswamy | Methods and apparatus to select media based on engagement levels |
US20130246175A1 (en) * | 2011-12-05 | 2013-09-19 | Qualcomm Labs, Inc. | Selectively presenting advertisements to a customer of a service based on a place movement pattern profile |
Family Cites Families (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5550928A (en) | 1992-12-15 | 1996-08-27 | A.C. Nielsen Company | Audience measurement system and method |
ES2397354T3 (en) * | 1999-02-17 | 2013-03-06 | Index Systems, Inc. | System and method to adapt features of television and / or electronic programming guide, such as advertising |
US7103904B1 (en) | 1999-06-30 | 2006-09-05 | Microsoft Corporation | Methods and apparatus for broadcasting interactive advertising using remote advertising templates |
US6708335B1 (en) | 1999-08-18 | 2004-03-16 | Webtv Networks, Inc. | Tracking viewing behavior of advertisements on a home entertainment system |
US6873710B1 (en) | 2000-06-27 | 2005-03-29 | Koninklijke Philips Electronics N.V. | Method and apparatus for tuning content of information presented to an audience |
US6904408B1 (en) * | 2000-10-19 | 2005-06-07 | Mccarthy John | Bionet method, system and personalized web content manager responsive to browser viewers' psychological preferences, behavioral responses and physiological stress indicators |
US6585521B1 (en) * | 2001-12-21 | 2003-07-01 | Hewlett-Packard Development Company, L.P. | Video indexing based on viewers' behavior and emotion feedback |
US8352983B1 (en) | 2002-07-11 | 2013-01-08 | Tvworks, Llc | Programming contextual interactive user interface for television |
US7930716B2 (en) | 2002-12-31 | 2011-04-19 | Actv Inc. | Techniques for reinsertion of local market advertising in digital video from a bypass source |
US20050289582A1 (en) * | 2004-06-24 | 2005-12-29 | Hitachi, Ltd. | System and method for capturing and using biometrics to review a product, service, creative work or thing |
US7623823B2 (en) | 2004-08-31 | 2009-11-24 | Integrated Media Measurement, Inc. | Detecting and measuring exposure to media content items |
US8805339B2 (en) | 2005-09-14 | 2014-08-12 | Millennial Media, Inc. | Categorization of a mobile user profile based on browse and viewing behavior |
CN101401422B (en) | 2006-03-08 | 2011-09-07 | 黄金富 | Personal and regional commercial television advertisement broadcasting system and the method thereof |
US20070220108A1 (en) | 2006-03-15 | 2007-09-20 | Whitaker Jerry M | Mobile global virtual browser with heads-up display for browsing and interacting with the World Wide Web |
US20100161425A1 (en) | 2006-08-10 | 2010-06-24 | Gil Sideman | System and method for targeted delivery of available slots in a delivery network |
EP2063767A4 (en) | 2006-09-05 | 2014-05-21 | Innerscope Res Inc | Method and system for determining audience response to a sensory stimulus |
US9514436B2 (en) | 2006-09-05 | 2016-12-06 | The Nielsen Company (Us), Llc | Method and system for predicting audience viewing behavior |
US8310985B2 (en) | 2006-11-13 | 2012-11-13 | Joseph Harb | Interactive radio advertising and social networking |
US20090217315A1 (en) | 2008-02-26 | 2009-08-27 | Cognovision Solutions Inc. | Method and system for audience measurement and targeting media |
US8386304B2 (en) | 2007-05-29 | 2013-02-26 | Yu Chen | Methods for interactive television and mobile device |
US7865916B2 (en) * | 2007-07-20 | 2011-01-04 | James Beser | Audience determination for monetizing displayable content |
US7889073B2 (en) * | 2008-01-31 | 2011-02-15 | Sony Computer Entertainment America Llc | Laugh detector and system and method for tracking an emotional response to a media presentation |
US20090327076A1 (en) * | 2008-06-27 | 2009-12-31 | Microsoft Corporation | Ad targeting based on user behavior |
US8290604B2 (en) | 2008-08-19 | 2012-10-16 | Sony Computer Entertainment America Llc | Audience-condition based media selection |
US20100070987A1 (en) * | 2008-09-12 | 2010-03-18 | At&T Intellectual Property I, L.P. | Mining viewer responses to multimedia content |
WO2010055501A1 (en) * | 2008-11-14 | 2010-05-20 | Tunewiki Ltd. | A method and a system for lyrics competition, educational purposes, advertising and advertising verification |
US8489112B2 (en) | 2009-07-29 | 2013-07-16 | Shopkick, Inc. | Method and system for location-triggered rewards |
WO2011031873A2 (en) * | 2009-09-09 | 2011-03-17 | Andrew Michael Spencer | Interactive advertising platform and methods |
US20110173655A1 (en) | 2009-12-02 | 2011-07-14 | Xorbit, Inc. | Automated system and method for graphic advertisement selection and overlay |
US20110145048A1 (en) | 2009-12-10 | 2011-06-16 | Liu David K Y | System & Method for Presenting Content To Captive Audiences |
WO2012025933A2 (en) * | 2010-01-25 | 2012-03-01 | Avanti Joshi | An advertising system |
US20110199386A1 (en) | 2010-02-12 | 2011-08-18 | Honeywell International Inc. | Overlay feature to provide user assistance in a multi-touch interactive display environment |
US20110214082A1 (en) | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Projection triggering through an external marker in an augmented reality eyepiece |
US20110225043A1 (en) | 2010-03-12 | 2011-09-15 | Yahoo! Inc. | Emotional targeting |
CN101931767B (en) * | 2010-04-27 | 2012-08-15 | 四川长虹电器股份有限公司 | Viewing habit analysis-based automatic electronic ad list customization system |
US20130102854A1 (en) | 2010-06-07 | 2013-04-25 | Affectiva, Inc. | Mental state evaluation learning for advertising |
US20130189661A1 (en) | 2010-06-07 | 2013-07-25 | Affectiva, Inc. | Scoring humor reactions to digital media |
US8296151B2 (en) | 2010-06-18 | 2012-10-23 | Microsoft Corporation | Compound gesture-speech commands |
US10521813B2 (en) | 2010-07-06 | 2019-12-31 | Groupon, Inc. | System and method for incentives |
US20120072936A1 (en) * | 2010-09-20 | 2012-03-22 | Microsoft Corporation | Automatic Customized Advertisement Generation System |
US8438590B2 (en) | 2010-09-22 | 2013-05-07 | General Instrument Corporation | System and method for measuring audience reaction to media content |
US20120116871A1 (en) | 2010-11-05 | 2012-05-10 | Google Inc. | Social overlays on ads |
US10248960B2 (en) | 2010-11-16 | 2019-04-02 | Disney Enterprises, Inc. | Data mining to determine online user responses to broadcast messages |
US20120143693A1 (en) | 2010-12-02 | 2012-06-07 | Microsoft Corporation | Targeting Advertisements Based on Emotion |
US20120150633A1 (en) * | 2010-12-08 | 2012-06-14 | Microsoft Corporation | Generating advertisements during interactive advertising sessions |
US20120185905A1 (en) | 2011-01-13 | 2012-07-19 | Christopher Lee Kelley | Content Overlay System |
US20120326993A1 (en) | 2011-01-26 | 2012-12-27 | Weisman Jordan K | Method and apparatus for providing context sensitive interactive overlays for video |
US8670183B2 (en) * | 2011-03-07 | 2014-03-11 | Microsoft Corporation | Augmented view of advertisements |
CN102129644A (en) * | 2011-03-08 | 2011-07-20 | 北京理工大学 | Intelligent advertising system having functions of audience characteristic perception and counting |
US20120233009A1 (en) * | 2011-03-09 | 2012-09-13 | Jon Bernhard Fougner | Endorsement Subscriptions for Sponsored Stories |
US20120265606A1 (en) | 2011-04-14 | 2012-10-18 | Patnode Michael L | System and method for obtaining consumer information |
US20120304206A1 (en) * | 2011-05-26 | 2012-11-29 | Verizon Patent And Licensing, Inc. | Methods and Systems for Presenting an Advertisement Associated with an Ambient Action of a User |
US9077458B2 (en) * | 2011-06-17 | 2015-07-07 | Microsoft Technology Licensing, Llc | Selection of advertisements via viewer feedback |
CA2839481A1 (en) | 2011-06-24 | 2012-12-27 | The Directv Group, Inc. | Method and system for obtaining viewing data and providing content recommendations at a set top box |
US9100685B2 (en) | 2011-12-09 | 2015-08-04 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US9451303B2 (en) | 2012-02-27 | 2016-09-20 | The Nielsen Company (Us), Llc | Method and system for gathering and computing an audience's neurologically-based reactions in a distributed framework involving remote storage and computing |
US20130263182A1 (en) * | 2012-03-30 | 2013-10-03 | Hulu Llc | Customizing additional content provided with video advertisements |
- 2013-04-18: US application US 13/865,673 (US9015737B2), active
- 2014-04-17: CN application CN201480022189.2A (CN105339969B), active
- 2014-04-17: PCT application PCT/US2014/034437 (WO2014172509A2), application filing
- 2014-04-17: EP application EP14725867.7A (EP2987125A4), ceased
- 2015-04-20: US application US 14/691,557 (US20150229990A1), abandoned
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10896456B2 (en) * | 2014-12-23 | 2021-01-19 | Ntt Docomo, Inc. | Method and apparatus for proximity service discovery |
US11627343B2 (en) * | 2018-11-29 | 2023-04-11 | Apple Inc. | Adaptive coding and streaming of multi-directional video |
US12096044B2 (en) | 2018-11-29 | 2024-09-17 | Apple Inc. | Adaptive coding and streaming of multi-directional video |
Also Published As
Publication number | Publication date |
---|---|
WO2014172509A2 (en) | 2014-10-23 |
CN105339969B (en) | 2022-05-10 |
EP2987125A4 (en) | 2016-05-04 |
EP2987125A2 (en) | 2016-02-24 |
WO2014172509A3 (en) | 2015-01-08 |
US20140317646A1 (en) | 2014-10-23 |
US9015737B2 (en) | 2015-04-21 |
CN105339969A (en) | 2016-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9015737B2 (en) | Linked advertisements | |
US20140337868A1 (en) | Audience-aware advertising | |
US11687976B2 (en) | Computerized method and system for providing customized entertainment content | |
US20140331242A1 (en) | Management of user media impressions | |
US9363546B2 (en) | Selection of advertisements via viewer feedback | |
US8489459B2 (en) | Demographic based content delivery | |
US20140325540A1 (en) | Media synchronized advertising overlay | |
JP5649303B2 (en) | Method and apparatus for annotating media streams | |
JP6291481B2 (en) | Determining the subsequent part of the current media program | |
US20130268955A1 (en) | Highlighting or augmenting a media program | |
US20240148295A1 (en) | Recommendations Based On Biometric Feedback From Wearable Device | |
US9414115B1 (en) | Use of natural user interface realtime feedback to customize user viewable ads presented on broadcast media | |
WO2019183061A1 (en) | Object identification in social media post | |
US12125151B2 (en) | Systems and methods for creating a custom secondary content for a primary content based on interactive data | |
US20230316662A1 (en) | Systems and methods for creating a custom secondary content for a primary content based on interactive data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GARZA, ENRIQUE DE LA;ZILBERSTEIN, KARIN;MUNSEE, JOSHUA LAWRENCE;AND OTHERS;SIGNING DATES FROM 20130418 TO 20130830;REEL/FRAME:036540/0705 |
AS | Assignment |
Owner name: ZHIGU HOLDINGS LIMITED, CAYMAN ISLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT TECHNOLOGY LICENSING, LLC;REEL/FRAME:040354/0001 Effective date: 20160516 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |