GB2502320A - Targeting media content including use of subject reaction - Google Patents

Targeting media content including use of subject reaction

Info

Publication number
GB2502320A
GB2502320A GB201209105A
Authority
GB
United Kingdom
Prior art keywords
media content
subject
attribute
image
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB201209105A
Other versions
GB201209105D0 (en)
Inventor
Jesper Jannesson
Michael Dominic Van Almsick
Alexander Michael Lemos
Timothy James Cornelius
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
INTELLEX SYSTEMS Ltd
Original Assignee
INTELLEX SYSTEMS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by INTELLEX SYSTEMS Ltd filed Critical INTELLEX SYSTEMS Ltd
Priority to GB201209105A priority Critical patent/GB2502320A/en
Publication of GB201209105D0 publication Critical patent/GB201209105D0/en
Publication of GB2502320A publication Critical patent/GB2502320A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0261 Targeted advertisements based on user location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0269 Targeted advertisements based on user profile or attribute

Abstract

A method of performing targeted media content, for example advertisements, in which a first image of a subject 210 is captured using a camera 202 and an attribute of the subject, such as gender or age, is derived from the first image. An advertisement is selected based on the attribute and performed on a first display 208. A second image of the subject is then captured, either by the first camera 202 or by a second camera, and a reaction of the subject to the media content is recorded from the second image. The reaction may be whether the subject is looking at the advertisement. The advertiser thereby has access to information regarding the effectiveness of the campaign. In an environment where the subject moves, such as on an escalator, first 208 and second 212, or multiple, displays are provided, which may display linked advertisements.

Description

METHOD OF PERFORMING TARGETED CONTENT
FIELD
The present disclosure relates to a method of performing targeted content, for example in the form of advertisements, to a subject based on attributes derived from an image recognition process. In particular, the present disclosure relates to performing targeted content without user intervention.
BACKGROUND
Image recognition software is known. Some examples of applications of image recognition software are: in stadiums, to identify hooligans that have previously been banned; in shops, to provide data on the clientele that they are attracting; and to identify stolen cars by reading their number plates and consulting national databases. It is also known to perform, in response to user interaction, an item of media content to a user, wherein the item of media content is modified based on a recognized attribute of the user. For example, where the item of media content is directed to a product, the colour of the product may be matched to the colour of the user's clothing.
SUMMARY
An invention is set out in the claims.
A method of performing targeted media to a subject is provided. A first image of the subject is captured. An attribute of the subject is derived from the first image, and media content is selected based on the attribute. The media content is performed. A second image of the subject is then captured and a reaction of the subject to the media content is recorded from the second image. Finally, an action is performed based on the recorded reaction of the subject.
Also provided is an apparatus for performing targeted media content to a subject. The apparatus comprises an image capture device configured to capture a first image of the subject; a processor configured to derive an attribute of the subject from the first image; and a display. The processor is further configured to select media content based on the attribute.
The display is configured to perform the media content. The image capture device is further configured to capture a second image of the subject. The processor is further configured to record a reaction of the subject to the media content from the second image and to perform an action based on the recorded reaction of the subject.
BRIEF DESCRIPTION OF THE DRAWINGS
Specific embodiments and examples are shown in the accompanying drawings, in which:
Figure 1 is a schematic diagram of a system for automatically performing targeted advertisements;
Figure 2 is a schematic diagram of a system for performing a distributed targeted advertisement event according to a first embodiment;
Figure 3 is a schematic diagram of a system for performing a distributed targeted advertisement event according to a second embodiment;
Figure 4 is a flow chart according to a system for automatically performing targeted advertisements;
Figure 5 is a flow chart according to a system for performing a distributed advertisement event; and
Figure 6 is a flow chart according to an alternative system for performing a distributed advertisement event.
DESCRIPTION
In overview, a method of performing targeted media to a subject without subject intervention is provided. The method comprises capturing a first image of the subject; deriving an attribute of the subject from the first image; selecting media content based on the attribute; performing the media content; capturing a second image of the subject; recording a reaction of the subject to the media content from the second image; and performing an action based on the recorded reaction of the subject. The method is autonomous and is performed without any need for user interaction.
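For illustration only, the sequence of steps set out above can be expressed as a short control loop. The sketch below is not taken from the patent; every helper name (capture_image, derive_attribute, select_content, perform_content, record_reaction, act_on_reaction) is a hypothetical placeholder standing in for whatever camera, recognition and display components an implementation uses.

```python
# Minimal sketch of the method loop; every helper below is a hypothetical
# placeholder, not an API defined by the patent.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class TargetingPipeline:
    capture_image: Callable[[], object]                 # capture first/second image
    derive_attribute: Callable[[object], Optional[str]]  # e.g. "female_18_30"
    select_content: Callable[[str], object]             # look up media content by attribute
    perform_content: Callable[[object], None]           # show on the display
    record_reaction: Callable[[object, object], str]    # compare second image with content
    act_on_reaction: Callable[[str], None]              # store locally or send to a server

    def run_once(self) -> None:
        first_image = self.capture_image()
        attribute = self.derive_attribute(first_image)
        if attribute is None:          # no subject detected; process the next frame
            return
        content = self.select_content(attribute)
        self.perform_content(content)
        second_image = self.capture_image()
        reaction = self.record_reaction(second_image, content)
        self.act_on_reaction(reaction)
```

Because the loop runs continuously against camera frames, the subject never has to interact with the system for targeting to occur.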
In one embodiment, Figure 1 shows a system 100 for automatically performing targeted advertisements. The system 100 comprises a camera 102, a processor 104, a database 106 and a display 108. The camera 102, database 106 and display 108 are each coupled to the processor 104. The camera 102, processor 104, database 106 and display 108 cooperate to perform on the display 108 an advertisement targeted to a subject 110 based on an attribute of the subject 110 derived by the processor 104 from an image of the subject 110 captured by the camera 102.
In one embodiment, the processor 104 is integral with the camera 102 and the processor 104 communicates with the display 108 via a wireless data connection. By way of example, the wireless data connection may be established according to the IEEE 802.11 wireless local area network or Bluetooth® standards.
The database 106 stores a number of advertisements. Each advertisement is indexed in the database 106 by one or more attributes of a potential subject 110 to which the advertisement should be displayed. The attributes may include gender and age, for example. The database 106 may be integral with the camera 102, or it may be remote from the camera 102 and processor 104.
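As a concrete illustration of this indexing, the sketch below keeps advertisements in an SQLite table keyed by attribute. The schema, the attribute keys, the file names and the "generic" fallback row are assumptions for illustration, not taken from the patent.

```python
# Hypothetical advertisement store indexed by subject attribute.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE adverts (attribute TEXT, uri TEXT)")
conn.executemany(
    "INSERT INTO adverts VALUES (?, ?)",
    [
        ("male_18_30", "adverts/razor.mp4"),
        ("female_18_30", "adverts/perfume.mp4"),
        ("generic", "adverts/brand.mp4"),   # fallback when no attribute can be used
    ],
)


def advert_for(attribute: str) -> str:
    """Return the advertisement indexed by the attribute, else the generic one."""
    row = conn.execute(
        "SELECT uri FROM adverts WHERE attribute = ?", (attribute,)
    ).fetchone()
    if row:
        return row[0]
    return conn.execute(
        "SELECT uri FROM adverts WHERE attribute = 'generic'"
    ).fetchone()[0]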
With reference to Figures 1 and 4, the processor 104 is operable to cause the camera 102 to continuously and automatically capture images at step 400. The processor 104 is operable to perform an image recognition process on each of the captured images. The image recognition process may be implemented as image recognition software on a medium readable by the processor 104. The image recognition software may be the image recognition package provided by Vitracom® (http://www.vitracom.dc), or any other software package suitable for performing image recognition known to the person skilled in the art.
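The patent does not prescribe a particular implementation of steps 400 and 402. The sketch below shows one common approach using OpenCV's bundled Haar-cascade face detector in place of the commercial recognition package mentioned above; the camera device index and the choice of cascade are assumptions.

```python
# Continuous capture and subject detection, sketched with OpenCV rather than
# the commercial recognition package named in the description.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
camera = cv2.VideoCapture(0)  # assumed device index

while True:
    ok, frame = camera.read()          # step 400: capture an image
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:                # step 402: no subject, move to the next frame
        continue
    # ...derive an attribute for each detected subject (step 404 onwards)...
```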
During the image recognition process, the processor 104 determines at step 402 whether the captured image contains one or more subjects 110. If the captured image does not contain a subject 110, the image recognition process ends and the processor 104 begins processing the next captured image.
If it is determined that the captured image contains one or more subjects 110, then the image recognition process continues and the processor 104 determines an attribute of each of the subjects 110 at step 404. The determined attributes may include gender and age, for example.
If the captured image contains only one subject 110, or if each of the subjects 110 in the captured image are determined to share the same attribute, then the processor 104 queries the database 106 and retrieves an advertisement corresponding to that attribute from the database 106.
If the captured image contains more than one subject 110 and all of the subjects 110 in the captured image do not share the same attribute, then the processor performs a ranking algorithm at step 406 to determine with which attribute to query the database 106. For example, the attribute with which the database 106 is queried may be the attribute that is shared between the greatest number of subjects 110 in the captured image. The processor 104 then retrieves an advertisement corresponding to that attribute from the database 106 at step 408.
If the captured image contains more than a threshold number of subjects 110 such that attribute recognition for each subject 110 in the captured image is not possible, then the processor 104 queries the database for a generic advertisement and retrieves a generic advertisement from the database 106 at step 408.
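One simple ranking of the kind described, selecting the attribute shared by the greatest number of detected subjects and falling back to a generic advertisement above a crowd-size threshold, might look like the following; the threshold value and the tie-breaking rule are assumptions.

```python
# Hypothetical realisation of the ranking step (406) and the generic fallback.
from collections import Counter
from typing import Optional, Sequence

CROWD_THRESHOLD = 10  # assumed: above this, per-subject recognition is skipped


def attribute_to_query(subject_attributes: Sequence[str]) -> Optional[str]:
    """Return the attribute to query the database with, or None for a generic advert."""
    if not subject_attributes or len(subject_attributes) > CROWD_THRESHOLD:
        return None                           # no usable attribute: use a generic advert
    counts = Counter(subject_attributes)
    attribute, _ = counts.most_common(1)[0]   # ties are resolved arbitrarily here
    return attribute
```

For example, `attribute_to_query(["male_18_30", "male_18_30", "female_18_30"])` returns `"male_18_30"`, which would then be used to query the advertisement store.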
The retrieved advertisement is then sent by the processor 104 to the display 108 at step 410, where it is displayed to the one or more subjects 110. Thus, the one or more subjects 110 are automatically presented with an advertisement that is targeted to their attributes, without subject intervention.
In an embodiment, at step 412 the camera 102 captures a second image of the one or more subjects 110 after the targeted advertisement has been displayed on the display 108.
Alternatively, the second image may be captured by a second camera coupled to the processor 104 (or a separate but similarly configured processor). The second image is processed by the processor 104 according to an image recognition process to determine a reaction of the one or more subjects 110 to the displayed advertisement at step 414. For example, after determining the presence of one or more subjects 110 in the second image, the processor 104 may determine whether one or more of the subjects 110 are looking at the display 108 whilst the advertisement is being displayed.
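A crude proxy for the "looking at the display" check of step 414, again using OpenCV and assuming the camera is mounted on or near the display, is to test whether a frontal face is detectable while the advertisement plays. This is an illustrative simplification rather than the method the patent requires.

```python
# Simplified reaction check: a detectable frontal face while the advert plays
# is treated as "looking at the display". Purely illustrative.
import cv2

frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def looked_at_display(second_image) -> bool:
    grey = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    faces = frontal.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```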
Once the reaction of the one or more subjects 110 to the displayed advertisement has been determined, the processor 104 stores the reaction in the database 106 or in any other suitable storage device. Optionally, the processor 104 may send the reaction to a remote server via a connection to a wide area network (WAN). In either case, an advertiser has access to highly desirable information about the effectiveness of an advertising campaign.
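The action taken on the recorded reaction can be as simple as an insert into local storage or an HTTP POST to a remote server over the WAN link. The table layout and the endpoint URL below are assumptions for illustration.

```python
# Hypothetical persistence and reporting of the recorded reaction.
import json
import sqlite3
import urllib.request
from datetime import datetime, timezone

db = sqlite3.connect("reactions.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS reactions (ts TEXT, advert TEXT, looked INTEGER)"
)


def store_reaction(advert: str, looked: bool) -> None:
    db.execute(
        "INSERT INTO reactions VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), advert, int(looked)),
    )
    db.commit()


def send_reaction(advert: str, looked: bool) -> None:
    payload = json.dumps({"advert": advert, "looked": looked}).encode()
    req = urllib.request.Request(
        "https://example.invalid/reactions",   # assumed campaign-reporting endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)
```

Aggregating these records per advertisement gives the advertiser the campaign-effectiveness information mentioned above.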
Figure 2 shows a system 200 for performing a distributed advertisement event. The system 200 is similar to the system 100 shown in Figure 1 and comprises a camera 202, a processor 204, a database 206 and a first display 208. The camera 202, database 206 and display 208 are each coupled to the processor 204.
In addition to the above components, the system 200 further comprises a second display 212 coupled to the processor 204 and spatially separated from the first display 208. Similar to the first display 208, the second display 212 may communicate with the processor 204 via a wireless data connection.
For each advertisement stored in the database 206, the database 206 also stores a second advertisement that is related to the first advertisement. A particular first advertisement and its related second advertisement are indexed in the database 206 by the same one or more attributes of a potential subject 210 to which the advertisements should be displayed.
It is intended that the system 200 shown in Figure 2 is implemented in an environment where it is likely that the subject 210 will move from the first display 208 to the second display 212 as shown by the arrow 214. For example, the system 200 may be installed on an escalator, with the camera 202 and first display 208 located in the vicinity of the top of the escalator and
the second display 212 located in the vicinity of the bottom of the escalator for a downward-travelling escalator, or vice versa for an upward-travelling escalator.
The system 200 shown in Figure 2 operates in a similar fashion to the system 100 shown in Figure 1 described above. In the interest of clarity, only the differences in operation between the system 200 shown in Figure 2 and the system 100 shown in Figure 1 will be described.
With reference to Figures 2 and 5, the camera 202 automatically captures images in the vicinity of the first display 208, so that an image of the one or more subjects 210 is captured at step 500 as the one or more subjects 210 approach the first display 208. Once the image recognition process is complete and one or more attributes of the one or more subjects 210 have been determined by the processor 204 at steps 502, 504 and 506 according to the process described above in respect of the system 100 of Figure 1, the processor 204 queries the database 206 using the determined one or more attributes at step 508. However, whereas the processor 104 of the system 100 shown in Figure 1 retrieves a single advertisement in response to querying the database 106, the processor 204 of the system 200 shown in Figure 2 retrieves from the database 206 both a first advertisement corresponding to the one or more determined attributes and a second advertisement that is related to the first advertisement.
The processor 204 then sends the first advertisement to the first display 208 at step 510, where it is displayed to the one or more subjects 210. Optionally, if the camera 202 is spatially separated from the display 208, the processor may send the first advertisement to the first display 208 after a predetermined amount of time corresponding to the approximate time taken by the subject 210 to travel between the camera 202 and the first display 208. Thus, the one or more subjects 210 are automatically presented with a first advertisement that is targeted to their attributes, without subject intervention.
At step 512, the processor 204 then sends the related second advertisement to the second display 212 after a predetermined amount of time corresponding to the time taken by the one or more subjects 210 to travel from the first display 208 to the second display 212. Thus, upon reaching the second display 212, the one or more subjects 210 are automatically presented with a related second advertisement that is targeted to their attributes, without subject intervention.
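The timing described for the Figure 2 arrangement can be sketched with a simple timer: the first advertisement is sent immediately and the linked second advertisement is scheduled after the estimated travel time between the two displays. The delay value and the show_on helper are assumptions for illustration.

```python
# Sketch of the Figure 2 scheduling: linked adverts on two spatially
# separated displays, the second delayed by the estimated travel time.
import threading

TRAVEL_TIME_S = 25.0   # assumed escalator transit time between the displays


def show_on(display_id: str, advert: str) -> None:
    """Hypothetical stand-in for sending content to a networked display."""
    print(f"{display_id}: performing {advert}")


def run_distributed_event(first_advert: str, second_advert: str) -> None:
    show_on("display_208", first_advert)                      # step 510
    timer = threading.Timer(                                   # step 512, delayed
        TRAVEL_TIME_S, show_on, args=("display_212", second_advert)
    )
    timer.start()
```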
Optionally, the related second advertisement may complete the first advertisement. Upon viewing the first advertisement, the attention of the one or more subjects 210 is directed to the subject matter of the first advertisement. The one or more subjects 210 then have time to dwell on the subject matter of the first advertisement during the time it takes the one or more subjects 210 to travel from the first display 208 to the second display 212. Having dwelled on the subject matter of the first advertisement for the duration of their travel, the one or more subjects may be more receptive to the second advertisement displayed on the second display 212, which is related to the subject matter of the first advertisement, to the benefit of the advertiser. Thus an improved advertising method is provided.
As with the system 100 shown in Figure 1 described above, in an embodiment the camera 202 captures a second image of the one or more subjects 210 after the first advertisement has been displayed on the first display 208 at step 514. Alternatively, the second image may be captured by a second camera coupled to the processor 204. The second image is processed by the processor 204 according to the process described above with respect to the system 100 shown in Figure 1 to determine a reaction of the one or more subjects 210 to the first advertisement at step 516. Furthermore, the system 200 shown in Figure 2 may comprise a third camera located in the vicinity of the second display 212 and coupled to the processor 204, which captures a third image of the one or more subjects 210 after the second advertisement has been displayed on the second display 212. The third image is processed by the processor 204 according to the process described above with respect to the system 100 shown in Figure 1 to determine a reaction of the one or more subjects 210 to the second advertisement.
It is contemplated that additional displays could be coupled to the processor 204 to display additional related advertisements stored in the database 206 at appropriate intervals to target the one or more subjects 210 as they travel along the path 214.
Figure 3 shows an alternative system 300 for performing a distributed advertisement event.
The system 300 comprises a first subsystem 316 and a second subsystem 318. The first 316 and second 318 subsystems are each similar to the system 100 shown in Figure 1 and described above, and each comprise a camera 302, 322, a processor 304, 324, a database 306, 326 and a display 308, 328. The cameras 302, 322, databases 306, 326 and displays 308, 328 are each coupled to the respective processor 304, 324.
As with the system 100 shown in Figure 1, the database 306 of the first subsystem 316 stores a number of advertisements, each being indexed in the database 306 by one or more attributes of a potential subject 310 to which the advertisement should be displayed. The database 326 of the second subsystem 318 stores a number of related advertisements, each being indexed in the database 326 by one or more attributes of a potential subject 310 to which the related advertisement should be displayed. Each advertisement stored in the database 306 of the first subsystem 316 has a corresponding related advertisement stored in the database 326 of the second subsystem 318. A particular advertisement stored in the database 306 of the first subsystem 316 and its corresponding related advertisement stored in the database 326 of the second subsystem 318 are both indexed in the respective databases 306, 326 by the same one or more attributes of a potential subject 310 to which the advertisements should be displayed, such that a query based on a particular determined attribute from each of the processors 304, 324 to the respective database 306, 326 will retrieve the advertisement and its corresponding related advertisement, respectively. In all other respects the first 316 and second 318 subsystems may operate independently in accordance with the process described with respect to the system 100 shown in Figure 1 described above.
It is intended that the system 300 shown in Figure 3 is implemented in an environment where it is likely that the subject 310 will follow a path as shown by the arrow 314, with the first subsystem 316 located in the vicinity of a first point on the path and the second subsystem 318 located in the vicinity of a second, subsequent point on the path. For example, the system 300 may be installed on an escalator, with the first subsystem 316 located in the vicinity of the top of the escalator and the second subsystem 318 located in the vicinity of the bottom of the escalator for a downward-travelling escalator, or vice versa for an upward-travelling escalator.
Thus, in operation and with reference to Figures 3 and 4, as one or more subjects 310 approach the first subsystem 316, the camera 302 captures an image of the one or more subjects 310 at step 400 and the processor 304 determines an attribute of the one or more subjects 310 from the captured image at steps 402, 404 and 406. The processor 304 then retrieves a targeted advertisement from the database 306 based on the determined attribute at step 408 and sends the targeted advertisement to the display 308 at step 410 to be displayed to the one or more subjects 310. Thus, the one or more subjects 310 are automatically presented with an advertisement that is targeted to their attributes, without subject intervention.
Subsequently, with reference to Figures 3 and 6, as the one or more subjects 310 approach the second subsystem 318, the camera 322 captures an image of the one or more subjects 310 at step 600 and the processor 324 determines an attribute of the one or more subjects 310 from the captured image at steps 602, 604 and 606. Because the same one or more subjects 310 were captured by the camera 322, it is likely that the attribute determined by the processor 324 of the second subsystem 318 will be identical to the attribute determined by the processor 304 of the first subsystem 316. Thus the processor 324 of the second subsystem 318 then retrieves from the database 326 a related or follow-on advertisement that corresponds to the targeted advertisement based on the determined attribute at step 608 and sends the related advertisement to the display 328 at step 610 to be displayed to the one or more subjects 310. Thus, the one or more subjects 310 are automatically presented with an advertisement that is related to the targeted advertisement, without subject intervention.
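Because each subsystem in the Figure 3 arrangement re-derives the attribute itself, no timing information needs to be shared between them: the two databases are simply keyed by the same attributes, with the second subsystem holding the follow-on advertisements. A minimal sketch of that pairing, with invented attribute keys and file names, is shown below.

```python
# Hypothetical paired advert tables for the two subsystems of Figure 3.
FIRST_ADVERTS = {          # database 306 (subsystem 316, first point on the path)
    "male_18_30": "adverts/razor_teaser.mp4",
    "female_18_30": "adverts/perfume_teaser.mp4",
}
FOLLOW_ON_ADVERTS = {      # database 326 (subsystem 318, second point on the path)
    "male_18_30": "adverts/razor_offer.mp4",
    "female_18_30": "adverts/perfume_offer.mp4",
}


def advert_for_subsystem(table: dict, attribute: str) -> str:
    # The same attribute key retrieves the teaser at one subsystem and the
    # related follow-on at the other, so subject speed does not matter.
    return table.get(attribute, "adverts/generic.mp4")
```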
Thus, the system 300 shown in Figure 3 brings about the same advantages as the system 200 shown in Figure 2. In particular, upon viewing the targeted advertisement, the attention of the one or more subjects 310 is directed to the subject matter of the targeted advertisement. The one or more subjects 310 then have time to dwell on the subject matter of the targeted advertisement during the time it takes the one or more subjects 310 to travel from the display 308 of the first subsystem 316 to the display 328 of the second subsystem 318. Having dwelled on the subject matter of the targeted advertisement for the duration of their travel, the one or more subjects 310 may be more receptive to the related advertisement displayed on the display 328 of the second subsystem 318, which is related to the subject matter of the first advertisement, to the benefit of the advertiser. Thus an improved advertising method is provided.
In addition, because the processor 324 of the second subsystem 318 retrieves the related advertisement and causes it to be displayed in response to determining the same attribute of the one or more subjects 310 as was determined by the processor 304 of the first subsystem 316, the system 300 can display targeted advertisements and corresponding related advertisements to subjects 310 moving along the path 314 at different speeds.
As with the system 100 shown in Figure 1 and the system 200 shown in Figure 2 described above, in an embodiment one or both of the first 316 and second 318 subsystems captures a second image of the one or more subjects 310 at steps 412 and 612 using the respective camera 302, 322 after the respective advertisement has been displayed on the respective display 308, 328. Alternatively, one or both of the first 316 and second 318 subsystems may comprise a second camera coupled to the respective processor 304, 324 for capturing the second image. The second image is processed by the respective processor 304, 324 according to the process described above with respect to the system 100 shown in Figure 1 to determine a reaction of the one or more subjects 310 to the respective advertisement at steps 414 and 614.
Whilst the embodiments described above have been described separately, any of the features of one of the embodiments may be combined with features of another of the embodiments to arrive at further embodiments in accordance with the present disclosure.
Whilst the advertisements have been described above as being stored in a database, it is contemplated that any suitable storage medium could be used. The advertisements may be performed at any appropriate location or combination of local and/or remote locations.
Furthermore, whilst advertisements have been described as being "displayed", it is contemplated that the advertisements may be performed over any appropriate medium, for example via a loudspeaker, or by emitting a scent.

Claims (32)

  1. A method of performing targeted media to a subject, the method comprising: capturing a first image of the subject; deriving an attribute of the subject from the first image; selecting media content based on the attribute; performing the media content; capturing a second image of the subject; recording a reaction of the subject to the media content from the second image; and performing an action based on the recorded reaction of the subject.
  2. The method of claim 1, wherein the media content is a first targeted media content selected based on the attribute and is performed at a first location, and further comprising performing at a second location second targeted media content, the second targeted media content selected based on the attribute.
  3. The method of claim 2, wherein the second targeted media content is related to the first targeted media content and wherein the second media content is performed a predetermined amount of time after the first media content is performed.
  4. The method of claim 1, wherein the method is performed without subject intervention.
  5. The method of claim 2, wherein the image is captured in the vicinity of the first location, the method further comprising capturing a second image of the subject in the vicinity of the second location, wherein the second media content is performed in response to deriving the attribute of the subject from the second image.
  6. The method of claim 1, wherein the action is storing the reaction.
  7. The method of claim 1, wherein the action is sending the reaction to a server.
  8. The method of any preceding claim, wherein images are continuously captured.
  9. The method of claim 8, wherein each captured image is processed to determine the presence of a subject.
  10. The method of claim 9, further comprising determining the presence of two or more subjects, wherein the step of deriving an attribute of the subject from the image comprises deriving an attribute of each of the subjects from the image.
  11. The method of claim 10, wherein in response to deriving the same attribute for each of the subjects, the media content is selected based on that attribute.
  12. The method of claim 10, wherein in response to deriving a different attribute for one or more of the subjects, a ranking algorithm is performed to determine on which attribute to base selection of the media content.
  13. The method of claim 9, wherein in response to determining the presence of a number of subjects greater than a predetermined threshold number of subjects, the media content is selected based on a generic attribute.
  14. The method of any preceding claim, wherein the attribute is a gender of the subject.
  15. The method of any preceding claim, wherein the attribute is an age of the subject.
  16. An apparatus for performing targeted media content to a subject, the apparatus comprising: an image capture device configured to capture a first image of the subject; a processor configured to derive an attribute of the subject from the first image; and a display, wherein the processor is further configured to select media content based on the attribute, the display is configured to perform the media content, the image capture device is further configured to capture a second image of the subject, and the processor is further configured to record a reaction of the subject to the media content from the second image and to perform an action based on the recorded reaction of the subject.
  17. The apparatus of claim 16, wherein the media content is a first media content, the display is a first display located at a first location and is configured to perform the first media content at the first location, and the processor is further configured to select a second media content based on the attribute, the apparatus further comprising a second display located at a second location and configured to perform the second media content at the second location.
  18. The apparatus of claim 17, wherein the second media content is related to the first media content and wherein the second display is configured to perform the second media content a predetermined amount of time after the first display performs the first media content.
  19. The apparatus of claim 16, wherein the apparatus is configured to operate without subject intervention.
  20. The apparatus of claim 17, wherein the image capture device is located in the vicinity of the first location, and further comprising: a second image capture device located in the vicinity of the second location for capturing a second image of the subject; and a second processor configured to derive the attribute of the subject from the second image, wherein the second processor is further configured to select the second media content based on the attribute and the second display is configured to perform the second media content in response to the second processor deriving the attribute of the subject from the second image.
  21. The apparatus of claim 16, wherein the processor is integral with the image capture device.
  22. The apparatus of claim 16, wherein the processor communicates wirelessly with the first and second displays.
  23. The apparatus of claim 16, wherein the action is storing the reaction.
  24. The apparatus of claim 16, wherein the action is sending the reaction to a server.
  25. The apparatus of any of claims 16 to 24, wherein the image capture device is configured to continuously capture images.
  26. The apparatus of claim 25, wherein the processor is configured to process each captured image to determine the presence of a subject.
  27. The apparatus of claim 26, wherein in response to determining the presence of two or more subjects, the processor is configured to derive an attribute of each of the subjects from the image.
  28. The apparatus of claim 27, wherein in response to deriving the same attribute for each of the subjects, the processor is configured to select the media content based on that attribute.
  29. The apparatus of claim 27, wherein in response to deriving a different attribute for one or more of the subjects, the processor is configured to perform a ranking algorithm to determine on which attribute to base selection of the media content.
  30. The apparatus of claim 26, wherein in response to determining the presence of a number of subjects greater than a threshold number of subjects, the processor is configured to select the media content based on a generic attribute.
  31. The apparatus of any of claims 16 to 30, wherein the attribute is a gender of the subject.
  32. The apparatus of any of claims 16 to 30, wherein the attribute is an age of the subject.
GB201209105A 2012-05-24 2012-05-24 Targeting media content including use of subject reaction Withdrawn GB2502320A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB201209105A GB2502320A (en) 2012-05-24 2012-05-24 Targeting media content including use of subject reaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB201209105A GB2502320A (en) 2012-05-24 2012-05-24 Targeting media content including use of subject reaction

Publications (2)

Publication Number Publication Date
GB201209105D0 GB201209105D0 (en) 2012-07-04
GB2502320A 2013-11-27

Family

ID=46546558

Family Applications (1)

Application Number Title Priority Date Filing Date
GB201209105A Withdrawn GB2502320A (en) 2012-05-24 2012-05-24 Targeting media content including use of subject reaction

Country Status (1)

Country Link
GB (1) GB2502320A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080004953A1 (en) * 2006-06-30 2008-01-03 Microsoft Corporation Public Display Network For Online Advertising
US7930204B1 (en) * 2006-07-25 2011-04-19 Videomining Corporation Method and system for narrowcasting based on automatic analysis of customer behavior in a retail store
US20110106624A1 (en) * 2007-07-13 2011-05-05 Sunrise R&D Holdings, Llc Systems of influencing shopper's product selection at the first moment of truth based upon a shopper's location in a retail establishment
WO2012018841A2 (en) * 2010-08-02 2012-02-09 Visa U.S.A. Inc. Systems and methods to optimize media presentations using a camera

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT15770U1 (en) * 2016-06-08 2018-05-15 View Promotion Gmbh Elevator information system with scheduler

Also Published As

Publication number Publication date
GB201209105D0 (en) 2012-07-04

Similar Documents

Publication Publication Date Title
US20210326931A1 (en) Digital advertising system
US11250456B2 (en) Systems, method and apparatus for automated inventory interaction
JP6138930B2 (en) Method and apparatus for selecting advertisements for display on a digital sign
US9530144B2 (en) Content output device, content output method, content output program, and recording medium having content output program recorded thereon
JP6562077B2 (en) Exhibition device, display control device, and exhibition system
US20130138499A1 Usage measurement techniques and systems for interactive advertising
US20130278760A1 (en) Augmented reality product display
US20190108551A1 (en) Method and apparatus for customer identification and tracking system
US20130195322A1 (en) Selection of targeted content based on content criteria and a profile of users of a display
US20110175992A1 (en) File selection system and method
CN104424585A (en) Playing method and electronic device
KR20130105542A (en) Object identification in images or image sequences
WO2021142388A1 (en) System and methods for inventory management
JP2016218821A (en) Marketing information use device, marketing information use method and program
WO2020163217A1 (en) Systems, method and apparatus for frictionless shopping
WO2021142387A1 (en) System and methods for inventory tracking
US20200118077A1 (en) Systems, Method and Apparatus for Optical Means for Tracking Inventory
US20200159784A1 (en) Information processing apparatus, information processing system, information processing method, and method of determining similarity/dissimilarity
US9727890B2 (en) Systems and methods for registering advertisement viewing
WO2021104388A1 (en) System and method for interactive perception and content presentation
GB2502320A (en) Targeting media content including use of subject reaction
WO2013174433A1 (en) Method of performing targeted content
WO2019192455A1 (en) Store system, article matching method and apparatus, and electronic device
US20230074732A1 (en) Facial Recognition For Age Verification In Shopping Environments
US10269134B2 (en) Method and system for determining a region of interest of a user in a virtual environment

Legal Events

Date Code Title Description
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1185982

Country of ref document: HK

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)
REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1185982

Country of ref document: HK