WO2013174433A1 - Method of performing targeted content - Google Patents

Method of performing targeted content

Info

Publication number
WO2013174433A1
Authority
WO
WIPO (PCT)
Prior art keywords
media content
attribute
subject
image
processor
Prior art date
Application number
PCT/EP2012/059710
Other languages
French (fr)
Inventor
Jesper JANNESSON
Michael Dominic VAN ALMSICK
Alexander Michael LEMOS
Timothy James CORNELIUS
Original Assignee
Intellex Systems Limited
Priority date
Filing date
Publication date
Application filed by Intellex Systems Limited
Priority to PCT/EP2012/059710
Publication of WO2013174433A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising


Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method of performing targeted media content is provided. An image of a subject is captured. An attribute of the subject is derived from the image. First targeted media content is performed at a first location, the first targeted media content selected based on the attribute. Second targeted media content is performed at a second location, the second targeted media content selected based on the attribute.

Description

METHOD OF PERFORMING TARGETED CONTENT
FIELD
The present disclosure relates to a method of performing targeted content, for example in the form of advertisements, to a subject based on attributes derived from an image recognition process. In particular, the present disclosure relates to performing a distributed advertisement event in which related targeted advertisements are performed to the subject at separate media sites.
BACKGROUND
Image recognition software is known. Some examples of applications of image recognition software are: in stadiums to identify hooligans that have previously been banned; in shops to provide data on the clientele that they are attracting; and to identify stolen cars by reading their number plates and consulting national databases. It is also known to perform, in response to user interaction, an item of media content to a user, wherein the item of media content is modified based on a recognized attribute of the user. For example, where the item of media content is directed to a product, the colour of the product may be matched to the colour of the user's clothing.
SUMMARY
The invention is set out in the claims.
A method of performing targeted media content is provided. An image of a subject is captured. An attribute of the subject is derived from the image. First targeted media content is performed at a first location, the first targeted media content selected based on the attribute. Second targeted media content is performed at a second location, the second targeted media content selected based on the attribute.
Also provided is an apparatus for performing targeted media content. The apparatus comprises an image capture device for capturing an image of a subject; a processor configured to derive an attribute of the subject from the image; a first display located at a first location; and a second display located at a second location. The processor is further configured to select first and second targeted media content based on the attribute. The first display is configured to perform the selected first targeted media content at the first location and the second display is configured to perform the selected second targeted media content at the second location.
BRIEF DESCRIPTION OF THE DRAWINGS
Specific embodiments and examples are shown in the accompanying drawings, in which:
Figure 1 is a schematic diagram of a system for automatically performing targeted advertisements;
Figure 2 is a schematic diagram of a system for performing a distributed targeted advertisement event according to a first embodiment;
Figure 3 is a schematic diagram of a system for performing a distributed targeted advertisement event according to a second embodiment;
Figure 4 is a flow chart according to a system for automatically performing targeted advertisements;
Figure 5 is a flow chart according to a system for performing a distributed advertisement event; and
Figure 6 is a flow chart according to an alternative system for performing a distributed advertisement event.
DESCRIPTION
In overview, a method of performing targeted media content is provided. The method comprises capturing an image of a subject; deriving an attribute of the subject from the image; performing at a first location first targeted media content, the first targeted media content selected based on the attribute; and performing at a second location second targeted media content, the second targeted media content selected based on the attribute. After viewing the first targeted media content, the subject has time to dwell on the subject matter of the first targeted media content while he or she travels between the first location and the second location. Upon arrival at the second location, the subject may be more receptive to the related second targeted media content, and hence an improved method of performing targeted media content is provided.
In one embodiment, Figure 1 shows a system 100 for automatically performing targeted advertisements. The system 100 comprises a camera 102, a processor 104, a database 106 and a display 108. The camera 102, database 106 and display 108 are each coupled to the processor 104. The camera 102, processor 104, database 106 and display 108 cooperate to perform on the display 108 an advertisement targeted to a subject 110 based on an attribute of the subject 110 derived by the processor 104 from an image of the subject 110 captured by the camera 102.
In one embodiment, the processor 104 is integral with the camera 102 and the processor 104 communicates with the display 108 via a wireless data connection. By way of example, the wireless data connection may be established according to the IEEE 802.11 wireless local area network or Bluetooth® standards.
The database 106 stores a number of advertisements. Each advertisement is indexed in the database 106 by one or more attributes of a potential subject 110 to which the advertisement should be displayed. The attributes may include gender and age, for example. The database 106 may be integral with the camera 102, or it may be remote from the camera 102 and processor 104.
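By way of illustration only, a minimal sketch of such an attribute-indexed advertisement store might look as follows; the attribute fields, example entries and file names are assumptions made for this sketch and are not part of the disclosure.

```python
# Minimal sketch of an advertisement store indexed by subject attributes.
# Attribute fields, example entries and file names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class SubjectAttribute:
    gender: str    # e.g. "female"
    age_band: str  # e.g. "18-30"

# Each advertisement is keyed by the attribute(s) of the potential subject to
# which it should be displayed; "generic" covers cases where per-subject
# recognition is not possible.
ADVERTISEMENTS = {
    SubjectAttribute("female", "18-30"): "ad_sports_fashion.mp4",
    SubjectAttribute("male", "31-50"): "ad_business_travel.mp4",
    "generic": "ad_store_branding.mp4",
}

def retrieve_advertisement(attribute):
    """Return the advertisement indexed by the attribute, or a generic one."""
    return ADVERTISEMENTS.get(attribute, ADVERTISEMENTS["generic"])
```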
With reference to Figures 1 and 4, the processor 104 is operable to cause the camera 102 to continuously and automatically capture images at step 400. The processor 104 is operable to perform an image recognition process on each of the captured images. The image recognition process may be implemented as image recognition software on a medium readable by the processor 104. The image recognition software may be the image recognition package provided by Vitracom® (http://www.vitracom.de), or any other software package suitable for performing image recognition known to the person skilled in the art.
During the image recognition process, the processor 104 determines at step 402 whether the captured image contains one or more subjects 110. If the captured image does not contain a subject 110, the image recognition process ends and the processor 104 begins processing the next captured image.
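A minimal continuous-capture loop corresponding to steps 400 and 402 is sketched below. OpenCV is assumed here purely as a convenient camera backend, and detect_subjects stands in for whatever image recognition package is used; both are assumptions for illustration rather than part of the disclosure.

```python
import cv2  # assumed capture backend; any camera API would serve

def capture_loop(detect_subjects, handle_subjects, camera_index=0):
    """Continuously capture frames (step 400) and process detected subjects.

    detect_subjects(frame) stands in for the image recognition package; it is
    assumed to return a list of per-subject attributes, empty when no subject
    is present in the frame (step 402).
    """
    capture = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                continue  # skip frames the camera failed to deliver
            subjects = detect_subjects(frame)
            if subjects:
                handle_subjects(subjects)
            # otherwise: no subject in this image, move on to the next frame
    finally:
        capture.release()
```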
If it is determined that the captured image contains one or more subjects 110, then the image recognition process continues and the processor 104 determines an attribute of each of the subjects 110 at step 404. The determined attributes may include gender and age, for example. If the captured image contains only one subject 110, or if all of the subjects 110 in the captured image are determined to share the same attribute, then the processor 104 queries the database 106 and retrieves an advertisement corresponding to that attribute from the database 106.
If the captured image contains more than one subject 110 and not all of the subjects 110 in the captured image share the same attribute, then the processor performs a ranking algorithm at step 406 to determine with which attribute to query the database 106. For example, the attribute with which the database 106 is queried may be the attribute that is shared between the greatest number of subjects 110 in the captured image. The processor 104 then retrieves an advertisement corresponding to that attribute from the database 106 at step 408. If the captured image contains more than a threshold number of subjects 110 such that attribute recognition for each subject 110 in the captured image is not possible, then the processor 104 queries the database for a generic advertisement and retrieves a generic advertisement from the database 106 at step 408. The retrieved advertisement is then sent by the processor 104 to the display 108 at step 410, where it is displayed to the one or more subjects 110. Thus, the one or more subjects 110 are automatically presented with an advertisement that is targeted to their attributes, without subject intervention.
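This selection logic (a shared attribute, a ranking by the attribute shared by the greatest number of subjects, and a fall-back to a generic advertisement above a crowd-size threshold) could be sketched as follows. The threshold value and the simple majority-count ranking are assumptions, since the disclosure only requires some ranking algorithm at step 406.

```python
from collections import Counter

SUBJECT_THRESHOLD = 10  # assumed crowd size above which per-subject recognition is skipped

def choose_query_attribute(subject_attributes):
    """Pick the attribute with which to query the advertisement database.

    subject_attributes holds one derived attribute per detected subject.
    Returns "generic" when no subjects were detected or the crowd is too
    large for per-subject recognition.
    """
    if not subject_attributes or len(subject_attributes) > SUBJECT_THRESHOLD:
        return "generic"
    if len(set(subject_attributes)) == 1:
        # a single subject, or all subjects share the same attribute
        return subject_attributes[0]
    # ranking algorithm (step 406): the attribute shared by the most subjects
    attribute, _count = Counter(subject_attributes).most_common(1)[0]
    return attribute
```

The advertisement itself would then be obtained with a lookup along the lines of retrieve_advertisement(choose_query_attribute(attributes)) against the attribute-indexed store sketched earlier.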
In an embodiment, at step 412 the camera 102 captures a second image of the one or more subjects 110 after the targeted advertisement has been displayed on the display 108. Alternatively, the second image may be captured by a second camera coupled to the processor 104 (or a separate but similarly configured processor). The second image is processed by the processor 104 according to an image recognition process to determine a reaction of the one or more subjects 110 to the displayed advertisement at step 414. For example, after determining the presence of one or more subjects 110 in the second image, the processor 104 may determine whether one or more of the subjects 110 are looking at the display 108 whilst the advertisement is being displayed.
Once the reaction of the one or more subjects 110 to the displayed advertisement has been determined, the processor 104 stores the reaction in the database 106 or in any other suitable storage device. Optionally, the processor 104 may send the reaction to a remote server via a connection to a wide area network (WAN). In either case, an advertiser has access to highly desirable information about the effectiveness of an advertising campaign.
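How the determined reaction might be persisted, either locally or by upload to a remote server over a WAN, is sketched below; the storage schema, payload fields and endpoint are hypothetical and chosen only to illustrate the step.

```python
import json
import sqlite3
import urllib.request
from datetime import datetime, timezone

def record_reaction(advertisement_id, looked_at_display,
                    db_path="reactions.db", remote_url=None):
    """Store a subject's reaction locally and optionally post it to a server.

    The schema, payload fields and remote endpoint are illustrative assumptions.
    """
    reaction = {
        "advertisement_id": advertisement_id,
        "looked_at_display": looked_at_display,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # local storage (here a simple SQLite file standing in for the database 106)
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS reactions (payload TEXT)")
        conn.execute("INSERT INTO reactions VALUES (?)", (json.dumps(reaction),))
    if remote_url is not None:  # optional upload over a WAN connection
        request = urllib.request.Request(
            remote_url,
            data=json.dumps(reaction).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)
```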
Figure 2 shows a system 200 for performing a distributed advertisement event. The system 200 is similar to the system 100 shown in Figure 1 and comprises a camera 202, a processor 204, a database 206 and a first display 208. The camera 202, database 206 and display 208 are each coupled to the processor 204.
In addition to the above components, the system 200 further comprises a second display 212 coupled to the processor 204 and spatially separated from the first display 208. Similar to the first display 208, the second display 212 may communicate with the processor 204 via a wireless data connection.
For each advertisement stored in the database 206, the database 206 also stores a second advertisement that is related to the first advertisement. A particular first advertisement and its related second advertisement are indexed in the database 206 by the same one or more attributes of a potential subject 210 to which the advertisements should be displayed. It is intended that the system 200 shown in Figure 2 is implemented in an environment where it is likely that the subject 210 will move from the first display 208 to the second display 212 as shown by the arrow 214. For example, the system 200 may be installed on an escalator, with the camera 202 and first display 208 located in the vicinity of the top of the escalator and the second display 212 located in the vicinity of the bottom of the escalator for a downward-travelling escalator, or vice versa for an upward-travelling escalator.
The system 200 shown in Figure 2 operates in a similar fashion to the system 100 shown in Figure 1 described above. In the interest of clarity, only the differences in operation between the system 200 shown in Figure 2 and the system 100 shown in Figure 1 will be described.
With reference to Figures 2 and 5, the camera 202 automatically captures images in the vicinity of the first display 208, so that an image of the one or more subjects 210 is captured at step 500 as the one or more subjects 210 approach the first display 208. Once the image recognition process is complete and one or more attributes of the one or more subjects 210 have been determined by the processor 204 at steps 502, 504 and 506 according to the process described above in respect of the system 100 of Figure 1, the processor 204 queries the database 206 using the determined one or more attributes at step 508. However, whereas the processor 104 of the system 100 shown in Figure 1 retrieves a single advertisement in response to querying the database 106, the processor 204 of the system 200 shown in Figure 2 retrieves from the database 206 both a first advertisement corresponding to the one or more determined attributes and a second advertisement that is related to the first advertisement.
The processor 204 then sends the first advertisement to the first display 208 at step 510, where it is displayed to the one or more subjects 210. Optionally, if the camera 202 is spatially separated from the display 208, the processor may send the first advertisement to the first display 208 after a predetermined amount of time corresponding to the approximate time taken by the subject 210 to travel between the camera 202 and the first display 208. Thus, the one or more subjects 210 are automatically presented with a first advertisement that is targeted to their attributes, without subject intervention. At step 512, the processor 204 then sends the related second advertisement to the second display 212 after a predetermined amount of time corresponding to the time taken by the one or more subjects 210 to travel from the first display 208 to the second display 212. Thus, upon reaching the second display 212, the one or more subjects 210 are automatically presented with a related second advertisement that is targeted to their attributes, without subject intervention.
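The delayed delivery described above can be sketched as a small scheduler: the first advertisement is sent after an assumed camera-to-display travel time and the related second advertisement after an additional assumed travel time between the two displays. The show() method on each display object and the delay values are assumptions standing in for the wireless link and the site-specific timings.

```python
import threading

CAMERA_TO_FIRST_DISPLAY_S = 3.0   # assumed walking time from camera to first display
FIRST_TO_SECOND_DISPLAY_S = 20.0  # assumed travel time from first to second display

def run_distributed_event(first_display, second_display, first_ad, second_ad):
    """Send a pair of related advertisements to two displays with travel-time delays.

    first_display and second_display are assumed to expose a show(ad) method
    wrapping the (for example, wireless) link to each screen.
    """
    threading.Timer(CAMERA_TO_FIRST_DISPLAY_S,
                    first_display.show, args=(first_ad,)).start()
    threading.Timer(CAMERA_TO_FIRST_DISPLAY_S + FIRST_TO_SECOND_DISPLAY_S,
                    second_display.show, args=(second_ad,)).start()
```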
Optionally, the related second advertisement may complete the first advertisement. Upon viewing the first advertisement, the attention of the one or more subjects 210 is directed to the subject matter of the first advertisement. The one or more subjects 210 then have time to dwell on the subject matter of the first advertisement during the time it takes the one or more subjects 210 to travel from the first display 208 to the second display 212. Having dwelled on the subject matter of the first advertisement for the duration of their travel, the one or more subjects may be more receptive to the second advertisement displayed on the second display 212, which is related to the subject matter of the first advertisement, to the benefit of the advertiser. Thus an improved advertising method is provided.
As with the system 100 shown in Figure 1 described above, in an embodiment the camera 202 captures a second image of the one or more subjects 210 after the first advertisement has been displayed on the first display 208 at step 514. Alternatively, the second image may be captured by a second camera coupled to the processor 204. The second image is processed by the processor 204 according to the process described above with respect to the system 100 shown in Figure 1 to determine a reaction of the one or more subjects 210 to the first advertisement at step 516. Furthermore, the system 200 shown in Figure 2 may comprise a third camera located in the vicinity of the second display 212 and coupled to the processor 204, which captures a third image of the one or more subjects 210 after the second advertisement has been displayed on the second display 212. The third image is processed by the processor 204 according to the process described above with respect to the system 100 shown in Figure 1 to determine a reaction of the one or more subjects 210 to the second advertisement. It is contemplated that additional displays could be coupled to the processor 204 to display additional related advertisements stored in the database 206 at appropriate intervals to target the one or more subjects 210 as they travel along the path 214.
Figure 3 shows an alternative system 300 for performing a distributed advertisement event. The system 300 comprises a first subsystem 316 and a second subsystem 318. The first 316 and second 318 subsystems are each similar to the system 100 shown in Figure 1 and described above, and each comprise a camera 302, 322, a processor 304, 324, a database 306, 326 and a display 308, 328. The cameras 302, 322, databases 306, 326 and displays 308, 328 are each coupled to the respective processor 304, 324.
As with the system 100 shown in Figure 1, the database 306 of the first subsystem 316 stores a number of advertisements, each being indexed in the database 306 by one or more attributes of a potential subject 310 to which the advertisement should be displayed. The database 326 of the second subsystem 318 stores a number of related advertisements, each being indexed in the database 326 by one or more attributes of a potential subject 310 to which the related advertisement should be displayed. Each advertisement stored in the database 306 of the first subsystem 316 has a corresponding related advertisement stored in the database 326 of the second subsystem 318. A particular advertisement stored in the database 306 of the first subsystem 316 and its corresponding related advertisement stored in the database 326 of the second subsystem 318 are both indexed in the respective databases 306, 326 by the same one or more attributes of a potential subject 310 to which the advertisements should be displayed, such that a query based on a particular determined attribute from each of the processors 304, 324 to the respective database 306, 326 will retrieve the advertisement and its corresponding related advertisement, respectively. In all other respects the first 316 and second 318 subsystems may operate independently in accordance with the process described with respect to the system 100 shown in Figure 1 described above.
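The pairing of the two databases can be pictured as two stores keyed by the same attributes, so that the second subsystem's independent query retrieves the follow-on advertisement for whatever attribute it derives; the keys and file names below are assumptions for illustration only.

```python
# Illustrative pairing of the Figure 3 databases: the same attribute key
# retrieves the targeted advertisement in the first subsystem and its related
# follow-on advertisement in the second. Keys and file names are assumptions.
FIRST_SUBSYSTEM_ADS = {
    ("female", "18-30"): "teaser_sports_fashion.mp4",
    ("male", "31-50"): "teaser_business_travel.mp4",
}
SECOND_SUBSYSTEM_ADS = {
    ("female", "18-30"): "followup_sports_fashion.mp4",
    ("male", "31-50"): "followup_business_travel.mp4",
}

def lookup(ads, attribute, generic="ad_store_branding.mp4"):
    """Query either subsystem's store; fall back to a generic advertisement."""
    return ads.get(attribute, generic)
```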
It is intended that the system 300 shown in Figure 3 is implemented in an environment where it is likely that the subject 310 will follow a path as shown by the arrow 314, with the first subsystem 316 located in the vicinity of a first point on the path and the second subsystem 318 located in the vicinity of a second, subsequent point on the path. For example, the system 300 may be installed on an escalator, with the first subsystem 316 located in the vicinity of the top of the escalator and the second subsystem 318 located in the vicinity of the bottom of the escalator for a downward-travelling escalator, or vice versa for an upward-travelling escalator.
Thus, in operation and with reference to Figures 3 and 4, as one or more subjects 310 approach the first subsystem 316, the camera 302 captures an image of the one or more subjects 310 at step 400 and the processor 304 determines an attribute of the one or more subjects 310 from the captured image at steps 402, 404 and 406. The processor 304 then retrieves a targeted advertisement from the database 306 based on the determined attribute at step 408 and sends the targeted advertisement to the display 308 at step 410 to be displayed to the one or more subjects 310. Thus, the one or more subjects 310 are automatically presented with an advertisement that is targeted to their attributes, without subject intervention.
Subsequently, with reference to Figures 3 and 6, as the one or more subjects 310 approach the second subsystem 318, the camera 322 captures an image of the one or more subjects 310 at step 600 and the processor 324 determines an attribute of the one or more subjects 310 from the captured image at steps 602, 604 and 606. Because the same one or more subjects 310 were captured by the camera 322, it is likely that the attribute determined by the processor 324 of the second subsystem 318 will be identical to the attribute determined by the processor 304 of the first subsystem 316. Thus the processor 324 of the second subsystem 318 then retrieves from the database 326 a related or follow-on advertisement that corresponds to the targeted advertisement based on the determined attribute at step 608 and sends the targeted related advertisement to the display 328 at step 610 to be displayed to the one or more subjects 310. Thus, the one or more subjects 310 are automatically presented with an advertisement that is related to the targeted advertisement, without subject intervention.
Thus, the system 300 shown in Figure 3 brings about the same advantages as the system 200 shown in Figure 2. In particular, upon viewing the targeted advertisement, the attention of the one or more subjects 310 is directed to the subject matter of the targeted advertisement. The one or more subjects 310 then have time to dwell on the subject matter of the targeted advertisement during the time it takes the one or more subjects 310 to travel from the display 308 of the first subsystem 316 to the display 328 of the second subsystem 318. Having dwelled on the subject matter of the targeted advertisement for the duration of their travel, the one or more subjects 310 may be more receptive to the related advertisement displayed on the display 328 of the second subsystem 318, which is related to the subject matter of the targeted advertisement, to the benefit of the advertiser. Thus an improved advertising method is provided.
In addition, because the processor 324 of the second subsystem 318 retrieves the related advertisement and causes it to be displayed in response to determining the same attribute of the one or more subjects 310 as was determined by the processor 304 of the first subsystem 316, the system 300 can display targeted advertisements and corresponding related advertisements to subjects 310 moving along the path 314 at different speeds.
As with the system 100 shown in Figure 1 and the system 200 shown in Figure 2 described above, in an embodiment one or both of the first 316 and second 318 subsystems captures a second image of the one or more subjects 310 at steps 412 and 612 using the respective camera 302, 322 after the respective advertisement has been displayed on the respective display 308, 328. Alternatively, one or both of the first 316 and second 318 subsystems may comprise a second camera coupled to the respective processor 304, 324 for capturing the second image. The second image is processed by the respective processor 304, 324 according to the process described above with respect to the system 100 shown in Figure 1 to determine a reaction of the one or more subjects 310 to the respective advertisement at steps 414 and 614.
Whilst the embodiments described above have been described separately, any of the features of one of the embodiments may be combined with features of another of the embodiments to arrive at further embodiments in accordance with the present disclosure.
Whilst the advertisements have been described as being stored in a database in the above, it is contemplated that any suitable storage medium could be used. The advertisements may be performed at any appropriate location or combination of local and/or remote locations.
Furthermore, whilst advertisements have been described as being "displayed", it is contemplated that the advertisements may be performed over any appropriate medium, for example via a loudspeaker, or by emitting a scent.

Claims

1. A method of performing targeted media content, the method comprising:
capturing an image of a subject;
deriving an attribute of the subject from the image;
performing, at a first location, first targeted media content, the first targeted media content selected based on the attribute; and
performing, at a second location, second targeted media content, the second targeted media content selected based on the attribute.
2. The method of claim 1, wherein the second targeted media content is related to the first targeted media content.
3. The method of claim 1, wherein the second media content is performed a predetermined amount of time after the first media content is performed.
4. The method of claim 1, wherein the image is captured in the vicinity of the first location, the method further comprising capturing a second image of the subject in the vicinity of the second location, wherein the second media content is performed in response to deriving the attribute of the subject from the second image.
5. The method of claim 1, further comprising:
capturing a second image of the subject;
recording a reaction of the subject to the media content from the second image; and
performing an action based on the recorded reaction of the subject.
6. The method of claim 5, wherein the action is storing the reaction.
7. The method of claim 5, wherein the action is sending the reaction to a server.
8. The method of any preceding claim, wherein images are continuously captured.
9. The method of claim 8, wherein each captured image is processed to determine the presence of a subject.
10. The method of claim 9, further comprising determining the presence of two or more subjects, wherein the step of deriving an attribute of the subject from the image comprises deriving an attribute of each of the subjects from the image.
11. The method of claim 10, wherein in response to deriving the same attribute for each of the subjects, the first and second targeted media content are selected based on that attribute.
12. The method of claim 10, wherein in response to deriving a different attribute for one or more of the subjects, a ranking algorithm is performed to determine on which attribute to base selection of the first and second targeted media content.
13. The method of claim 9, wherein in response to determining the presence of a number of subjects greater than a predetermined threshold number of subjects, the first and second targeted media content are selected based on a generic attribute.
14. The method of any preceding claim, wherein the attribute is a gender of the subject.
15. The method of any preceding claim, wherein the attribute is an age of the subject.
16. An apparatus for performing targeted media content, the apparatus comprising:
an image capture device for capturing an image of a subject;
a processor configured to derive an attribute of the subject from the image;
a first display located at a first location; and
a second display located at a second location,
wherein the processor is further configured to select first and second targeted media content based on the attribute, the first display is configured to perform the selected first targeted media content at the first location and the second display is configured to perform the selected second targeted media content at the second location.
17. The apparatus of claim 16, wherein the second targeted media content is related to the first targeted media content.
18. The apparatus of claim 16, wherein the second media content is performed a predetermined amount of time after the first media content is performed.
19. The apparatus of claim 16, wherein the image capture device is located in the vicinity of the first location, and further comprising:
a second image capture device located in the vicinity of the second location for capturing a second image of the subject; and
a second processor configured to derive the attribute of the subject from the second image,
wherein the second processor is further configured to select the second media content based on the attribute and the second display is configured to perform the second media content in response to the second processor deriving the attribute of the subject from the second image.
20. The apparatus of claim 16, wherein the image capture device is configured to capture a second image of the subject, and wherein the processor is further configured to record a reaction of the subject to the media content from the second image and to perform an action based on the recorded reaction of the subject.
21. The apparatus of claim 16, wherein the processor is integral with the image capture device.
22. The apparatus of claim 16, wherein the processor communicates wirelessly with the first and second displays.
23. The apparatus of claim 20, wherein the action is storing the reaction.
24. The apparatus of claim 20, wherein the action is sending the reaction to a server.
25. The apparatus of any of claims 16 to 24, wherein the image capture device is configured to continuously capture images.
26. The apparatus of claim 25, wherein the processor is configured to process each captured image to determine the presence of a subject.
27. The apparatus of claim 26, wherein in response to determining the presence of two or more subjects, the processor is configured to derive an attribute of each of the subjects from the image.
28. The apparatus of claim 27, wherein in response to deriving the same attribute for each of the subjects, the processor is configured to select the first and second targeted media content based on that attribute.
29. The apparatus of claim 27, wherein in response to deriving a different attribute for one or more of the subjects, the processor is configured to perform a ranking algorithm to determine on which attribute to base selection of the first and second targeted media content.
30. The apparatus of claim 26, wherein in response to determining the presence of a number of subjects greater than a predetermined threshold number of subjects, the processor is configured to select the first and second targeted media content based on a generic attribute.
31. The apparatus of any of claims 16 to 30, wherein the attribute is a gender of the subject.
32. The apparatus of any of claims 16 to 30, wherein the attribute is an age of the subject.
PCT/EP2012/059710 2012-05-24 2012-05-24 Method of performing targeted content WO2013174433A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/059710 WO2013174433A1 (en) 2012-05-24 2012-05-24 Method of performing targeted content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/059710 WO2013174433A1 (en) 2012-05-24 2012-05-24 Method of performing targeted content

Publications (1)

Publication Number Publication Date
WO2013174433A1 true WO2013174433A1 (en) 2013-11-28

Family

ID=46168460

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/059710 WO2013174433A1 (en) 2012-05-24 2012-05-24 Method of performing targeted content

Country Status (1)

Country Link
WO (1) WO2013174433A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030088832A1 (en) * 2001-11-02 2003-05-08 Eastman Kodak Company Method and apparatus for automatic selection and presentation of information
WO2003091974A1 (en) * 2002-04-23 2003-11-06 Smart Point Media Ag System for utilizing information carriers in commercially used facilities
US20040044564A1 (en) * 2002-08-27 2004-03-04 Dietz Paul H. Real-time retail display system
GB2410360A (en) * 2004-01-23 2005-07-27 Sony Uk Ltd Display
US20080249870A1 (en) * 2007-04-03 2008-10-09 Robert Lee Angell Method and apparatus for decision tree based marketing and selling for a retail store


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018073114A1 (en) 2016-10-20 2018-04-26 Bayer Business Services Gmbh System for selectively informing a person
CN109952589A (en) * 2016-10-20 2019-06-28 拜耳商业服务有限责任公司 For targetedly providing the system of information to people
US20200051150A1 (en) * 2016-10-20 2020-02-13 Bayer Business Services Gmbh System for selectively informing a person

Similar Documents

Publication Publication Date Title
US11250456B2 (en) Systems, method and apparatus for automated inventory interaction
JP6138930B2 (en) Method and apparatus for selecting advertisements for display on a digital sign
JP6267861B2 (en) Usage measurement techniques and systems for interactive advertising
US9247225B2 (en) Video indexing with viewer reaction estimation and visual cue detection
US20140122248A1 (en) Digital Advertising System
CN105611410B (en) A kind of information-pushing method and device
US20210398139A1 (en) Methods and devices for processing information
US20180232799A1 (en) Exhibition device, display control device and exhibition system
CN105450778B (en) Information transmission system
US20190108551A1 (en) Method and apparatus for customer identification and tracking system
KR20130105542A (en) Object identification in images or image sequences
US9648116B2 (en) System and method for monitoring mobile device activity
US20200320576A1 (en) Targeted Advertising Based On Demographic Features Extracted From Persons
EP2988473B1 (en) Argument reality content screening method, apparatus, and system
US20210216952A1 (en) System and Methods for Inventory Management
JP2012252613A (en) Customer behavior tracking type video distribution system
WO2021142387A1 (en) System and methods for inventory tracking
WO2012012059A2 (en) Selecting displays for displaying content
KR20230080513A (en) Method and system to share advertisement content from a main device to a secondary device
WO2013174433A1 (en) Method of performing targeted content
US20210158399A1 (en) System and method for interactive perception and content presentation
GB2502320A (en) Targeting media content including use of subject reaction
KR20140067792A (en) Contents information service system including contents recognition server, terminal device and the control method thereof
US10269134B2 (en) Method and system for determining a region of interest of a user in a virtual environment
CN110944147B (en) Resource bit monitoring system, method and device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12723668

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12723668

Country of ref document: EP

Kind code of ref document: A1