WO2012054048A1 - Apparatus and method for evaluating an object - Google Patents

Apparatus and method for evaluating an object Download PDF

Info

Publication number
WO2012054048A1
Authority
WO
WIPO (PCT)
Prior art keywords
emotion
individual
identified
determined
level
Prior art date
Application number
PCT/US2010/053649
Other languages
French (fr)
Inventor
Joseph Nole
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to PCT/US2010/053649 priority Critical patent/WO2012054048A1/en
Publication of WO2012054048A1 publication Critical patent/WO2012054048A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0242 Determining effectiveness of advertisements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Theoretical Computer Science (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

According to one example of the present invention, there is provided apparatus for evaluating an object. The object is evaluated from a stream of video images taken in proximity to the object. The apparatus comprises an identity analyzer module for analyzing video images from the stream and for identifying individuals therein. An emotion analyzer is also provided for analyzing a video image from which an individual was identified and for determining an emotion and emotion intensity expressed by the identified individual. An object evaluation module is further provided for determining an evaluation level for the object based on the determined emotions and emotion intensity levels.

Description

APPARATUS AND METHOD FOR EVALUATING AN OBJECT
BACKGROUND
[0001] Evaluating the human perception of objects is a complex and labor-intensive process. For example, evaluating the human perception of a particular consumer product typically requires customer surveys, questionnaires, customer interviews, or the like.
[0002] Perception can be inferred in many cases, such as for consumer products, by studying sales data to determine how well a product is selling and thereby infer customer perception of the product. Perception of advertisements, for example, may also be inferred by studying changes in product sales data during an advertising campaign. However, measuring the effectiveness of a product or an advertisement in this way may take many weeks or months. Consequently, this delay significantly reduces opportunities for the campaign or product to be improved, modified, extended, reinforced, or withdrawn, etc. based on the determined effectiveness.
BRIEF DESCRIPTION
[0003] Examples and embodiments of the invention will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
[0004] Figure 1 is a simplified block diagram illustrating an object evaluation system according to an example of the present invention;
[0005] Figure 2 is a simplified flow diagram outlining example processing steps taken by an object evaluation system according to an example of the present invention;
[0006] Figure 3 is a simplified block diagram illustrating an analysis module according to an example of the present invention;
[0007] Figure 4 is a simplified flow diagram outlining an example method of operating an analysis module according to an example of the present invention;
[0008] Figure 5 is a simplified flow diagram outlining an example method of operating an analysis module according to an example of the present invention; and
[0009] Figure 6 is a simplified block diagram illustrating an example processing system on which examples of the present invention may be implemented.
DETAILED DESCRIPTION
[00010] Referring now to Figure 1, there is shown an illustration of an object evaluation system 100 according to an example of the present invention. The system 100 comprises a stream 108 of video images generated, for example, by a video camera 106 placed in proximity to an object 102 to be evaluated. The video stream 108 is analyzed by an analysis module 110 which determines an evaluation for the object 102 based on the contents of the video stream 108.
[00011] In the present example the object 102 may be an advertisement or a product, although it will be appreciated that the object 102 is not limited thereto. In the case where the object 102 is an advertisement, it may be, for example, a static advertisement printed on a suitable support, a static advertisement displayed on an electronic display, such as on a television screen or computer monitor, a video advertisement, an audio advertisement, or the like. In the case where the object 102 is a product, it may be, for example, a single product, a group of products, or the like.
[00012] The video camera 106 is positioned such that it captures video images of people 104a to 104n in proximity to the object 102 and generates a video stream 108. In the illustration shown in Figure 1 the video camera 106 is placed behind the object 102 to capture video images of people moving in close proximity to the object 102. In other examples, however, the video camera 106 may be positioned in other ways, for example, to capture video images of people approaching or moving away from the object 102.
[00013] The video camera 106 may be a single video camera, or may be a video camera of a multiple-camera video surveillance system.
[00014] An outline method of operating the video stream analyzer 110 is now described with further reference to Figure 2.
[00015] The video stream analyzer 110 receives (202) the video stream 108 comprising video images and processes video images in the video stream 108 to identify (204) human individuals from the video stream. Each individual may be identified, for example, through analysis of their facial or other anatomical features. For each individual identified, the video stream analyzer 110 detects (206), through appropriate processing of the video stream 108, an emotion and emotion intensity level. In at least one example, the video stream analyzer analyzes video images showing faces of individuals, and determines an emotion and emotion level through analysis of facial features. The emotions detected may include, for example, happiness, sadness, surprise, fear, anger, and no emotion. An emotion level may, for instance, be attributed on a 1 to 10 scale, where 1 indicates a low level of emotion and 10 indicates a high level of emotion. A detected emotion may, for example, include a hybrid emotion. For example, an individual may be determined to be expressing a low level of surprise and a medium level of happiness. Any suitable emotion detection techniques may be used.
[00016] The results of the emotion detection for each individual are used to calculate an object interest level. In one example, a sum of the emotion levels for each detected emotion over a predetermined evaluation period is made and a global interest level is determined therefrom. For example, if 10 individuals are identified in the video stream their respective emotions and emotion levels may be recorded in a suitable memory, data store, log, data structure, etc., as shown in Table 1 below.
INDIVIDUAL IDENTIFIER    EMOTION      EMOTION LEVEL
1                        Happiness    7
2                        Surprise     5
3                        Happiness    6
4                        Anger        2
TABLE 1 - Example results of emotion detection
[00017] Since the video stream 108 captures images of people in proximity to the object under evaluation 102, it can be inferred that the emotions detected from the video stream are the emotions expressed as a result of having seen or experienced the object 102.
[00018] An overall object interest level may be calculated by, for example, averaging the emotion levels for each detected emotion and determining which emotion or emotions are the most prominent. In other examples, other suitable methods may be used.
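As a rough illustration of the calculation just described, the following Python sketch averages the recorded emotion levels per detected emotion and reports the most prominent emotion. The record layout and function name are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the object interest level calculation described above:
# emotion levels are averaged per detected emotion, and the emotion with the
# highest average is treated as the most prominent.
from collections import defaultdict

def object_interest_level(records):
    """records: iterable of (individual_id, emotion, emotion_level) tuples,
    e.g. the rows of Table 1."""
    levels_by_emotion = defaultdict(list)
    for _individual_id, emotion, level in records:
        levels_by_emotion[emotion].append(level)

    # Average the emotion levels for each detected emotion.
    averages = {e: sum(v) / len(v) for e, v in levels_by_emotion.items()}

    # The most prominent emotion is the one with the highest average level.
    prominent_emotion = max(averages, key=averages.get)
    return prominent_emotion, averages[prominent_emotion]

# Using the example rows of Table 1:
table_1 = [(1, "Happiness", 7), (2, "Surprise", 5),
           (3, "Happiness", 6), (4, "Anger", 2)]
print(object_interest_level(table_1))  # ('Happiness', 6.5)
```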
[00019] The video stream 108 is analyzed by the video stream analyzer for a predetermined period. In one example 24 hours of video stream may be analyzed, whereas in other examples a shorter or greater period of video stream may be analyzed.
[00020] Figure 3 shows a video stream analyzer 110 in greater detail, according to an example of the present invention. An example method of operating the video stream analyzer 110 is further described below with reference to Figures 4 and 5.
[00021] In the present example, the video stream analyzer 110 comprises a video buffer 302, an identity analyzer 304, an emotion analyzer 306, a temporary emotion data store 308, an emotion data store 310, and an interest determination module 312.
[00022] The video stream analyzer 110 receives (402) the video stream 108 in the video buffer 302. The video buffer 302 enables the video stream to be analyzed in other than real-time. In other examples, however, the video stream may be analyzed in real-time, in which case a smaller video buffer (or even no video buffer at all) may be required, depending on the processing capabilities of the video stream analyzer 110.
[00023] The identity analyzer 304 analyzes (404) the video frames from the video stream stored in the video buffer 302 to identify individual people within the video stream. It will be appreciated that numerous facial recognition technologies and systems exist and may be used. For example, the identity analyzer 304 may use known image processing techniques to locate a face in an image and may use various facial feature detection techniques to locate a person's eyes, nose, mouth, ears, etc.
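The patent refers only to "known image processing techniques" for locating a face; the sketch below shows one common way this might be done, using OpenCV's bundled Haar cascade. The choice of OpenCV and the cascade file are assumptions made for illustration.

```python
# A minimal face-location sketch using OpenCV's bundled frontal-face Haar cascade.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces found in one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```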
[00024] The identity analyzer 304 computes a unique or substantially unique identifier for each person in the video stream. The unique identifier may be calculated based, for example, on the characteristics of each facial feature and the spatial relationship between them. In some examples a hash function may be used in the calculation of the unique identifier. It should be noted, however, that in the present example the purpose of the identity analyzer 304 is to uniquely or substantially uniquely differentiate between different people captured in the video stream 108, and not to attribute a real-life identity to each person detected.
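A minimal sketch of how such an identifier might be derived, assuming the facial feature characteristics are available as numeric measurements. The feature names, the rounding step, and the use of SHA-256 are illustrative assumptions; the patent only requires that the identifier differentiates between people, not that it identifies them.

```python
# Illustrative derivation of a (substantially) unique identifier from facial
# feature measurements, optionally via a hash function as mentioned above.
import hashlib

def individual_identifier(feature_measurements, precision=1):
    """feature_measurements: e.g. {'eye_distance': 63.2, 'nose_to_mouth': 28.7}.
    Values are rounded to reduce sensitivity to small measurement noise,
    serialized in a stable order, then hashed."""
    canonical = ",".join(
        f"{name}:{round(value, precision)}"
        for name, value in sorted(feature_measurements.items()))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```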
[00025] When an individual has been identified by the identity analyzer 304, it passes the determined identifier to an emotion analyzer 306. In addition to the determined identifier, in the present example the identity analyzer additionally identifies one or more video frames in which the identified individual has been identified. In the present example the reason for identifying video frames is to enable the emotion and emotion level of an individual to be determined during each video sequence in which the individual is identifiable. For example, an individual may be captured in the video stream when first coming into proximity to the object under evaluation. Initially, the individual may not notice the object and hence may only show a neutral emotion. The individual may then turn away from the object under evaluation and later turn back towards the object, this time noticing the object and showing a high level of happiness. Each of these sequences may be captured in different sequences of video frames. In at least one example the video frames are consecutive video frames.
[00026] The identity analyzer 304 additionally determines and sends a set of image coordinates defining the area of the video image from where the individual was identified.
[00027] The emotion analyzer 306 analyzes (406) the defined area of each of the identified video frames to determine a perceived emotion and emotion level for the individual in each of the identified video frames. The results for each video frame are recorded in a temporary emotion data store 308, along with the determined individual identifier, a video time stamp, a detected emotion, and determined emotion level. In at least one example, the recorded video timestamp represents the actual time the video frame was taken.
[00028] Since the facial expression of an individual may change during a sequence of video images, the emotions and emotion levels for a sequence of video images are recorded in the temporary emotion data store 308. Table 2 below shows example emotions and emotion levels of an individual who was detected in 5 sequential video frames.
[table image not reproduced in the available text]
TABLE 2 - Example determined emotions and emotion levels
[00029] A representative emotion and emotion level is then determined to characterize the different emotions and emotion levels expressed by an individual during a video sequence. In the present example the representative emotion level is taken as the highest emotion level determined for each detected emotion during a video sequence. Hence, in the current example for the identified video sequence the determined emotion is happiness and the determined emotion level is 8. In other examples the determined representative emotion level may be based on an average emotion level, a median emotion level, or any other suitable calculation. In other examples, if more than one emotion is determined for an individual during a video sequence, one or more representative emotions may be determined.
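A hedged sketch of the representative-emotion selection described above, assuming the per-frame entries of the temporary emotion data store are available as simple records; the field names are assumptions.

```python
# Sketch of selecting a representative emotion for one individual's video
# sequence: keep the highest level seen for each detected emotion, then report
# the emotion with the overall highest level (happiness, level 8 in Table 2).
def representative_emotion(frame_records):
    """frame_records: list of dicts like
    {'timestamp': ..., 'emotion': 'Happiness', 'level': 8} for one individual."""
    highest = {}
    for rec in frame_records:
        emotion, level = rec["emotion"], rec["level"]
        highest[emotion] = max(level, highest.get(emotion, 0))
    top_emotion = max(highest, key=highest.get)
    return top_emotion, highest[top_emotion]
```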
[00030] The determined representative emotion and emotion level, along with the individual identifier and timestamp, are recorded or stored (408) in the main emotion data store 310. An example extract of the data stored in the main emotion data store 310 is shown below in Table 3.
[table image not reproduced in the available text]
TABLE 3 - Example extract from the Emotion Data Store
[00031] In the case where multiple emotions are detected by the emotion analyzer 306 during a video sequence, each detected emotion and corresponding determined emotion level are recorded (408) in the main emotion data store 310. For example, where the highest emotion levels are recorded in the emotion data store 310, only the highest determined emotion level for each different determined emotion is recorded.
[00032] Once a suitable set of emotion data is stored in the emotion data store 310, the interest determination module 312 can process the data stored therein to calculate an object interest level, as described below with further reference to Figure 5.
[00033] At 502 an evaluation period is obtained or defined. The evaluation period defines the period over which the object interest level is to be determined. For example, the evaluation period may be a period of 24 hours, may be a period corresponding to the opening hours of a shop from which the video stream 108 is received, or any other suitable period.
[00034] At 504 the interest determination module 312 processes the data in the emotion data store 310 corresponding to the desired evaluation period to determine (508) a suitable object interest level. Once the object interest level has been determined it may, for example, be stored in a data store or log, published, or displayed on a suitable graphical user interface.
[00035] In one example, the interest determination module 312 determines the most frequently determined emotion as being the most prominent emotion expressed during the evaluation period and performs an average of the different emotion levels corresponding to the determined most frequent emotion.
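The following sketch illustrates one way the interest determination module might implement this: entries are restricted to the evaluation period, the most frequently determined emotion is taken as the most prominent, and its emotion levels are averaged. The record fields and the timestamp comparison are assumptions for illustration.

```python
# Sketch of the interest determination step over a chosen evaluation period.
from collections import Counter

def determine_object_interest(entries, period_start, period_end):
    """entries: iterable of dicts like
    {'individual_id': ..., 'timestamp': ..., 'emotion': ..., 'level': ...}."""
    in_period = [e for e in entries
                 if period_start <= e["timestamp"] <= period_end]
    if not in_period:
        return None

    # Most prominent emotion = most frequently determined emotion.
    counts = Counter(e["emotion"] for e in in_period)
    prominent, _ = counts.most_common(1)[0]

    # Object interest level = average level of the prominent emotion.
    levels = [e["level"] for e in in_period if e["emotion"] == prominent]
    return prominent, sum(levels) / len(levels)
```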
[00036] In another example, the interest determination module 312 calculates the object interest level based on an average of emotion levels for each different determined emotion.
[00037] In a further example, one or more predetermined individual identifiers may be either not stored, or if stored, removed from the emotion data store 310, or may not be taken into account by the interest determination module. This may be useful, for example, to exclude personnel who work in proximity to the object under evaluation from being included in the object interest level calculation. Such personnel are likely to appear in the video stream multiple times during the evaluation period and could introduce inaccuracies into the object interest level.
[00038] In a further example the identity analyzer 304 may detect additional characteristics of each individual, such as their sex, approximate age, whether an adult or child, etc. This information may additionally be stored in the emotion data store 310. The interest determination module 312 may, thereafter, calculate an object interest level based on only a subset of the data stored in the emotion data store 310. For example, the interest determination module 312 may calculate the object interest level only from individuals identified as being female and adult.
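A small sketch of the filtering described in the two preceding paragraphs, assuming each stored entry carries the individual identifier and any detected characteristics; the field names and the characteristics dictionary are assumptions.

```python
# Filter emotion data store entries before interest determination: exclude
# predetermined identifiers (e.g. staff) and optionally keep only individuals
# matching required characteristics such as sex or adult/child.
def filter_entries(entries, excluded_ids=(), required_characteristics=None):
    kept = []
    for e in entries:
        if e["individual_id"] in excluded_ids:
            continue  # e.g. personnel working near the object under evaluation
        if required_characteristics:
            traits = e.get("characteristics", {})
            if any(traits.get(k) != v
                   for k, v in required_characteristics.items()):
                continue
        kept.append(e)
    return kept

# e.g. filter_entries(entries, excluded_ids={"staff-01"},
#                     required_characteristics={"sex": "female", "adult": True})
```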
[00039] In a still further example, only a single emotion data store may be provided.
[00040] In a yet further example, the video stream analyzer may process more than one video stream. For example, in a shop environment one video stream may be captured facing away from the object under evaluation, and another video stream may be captured facing the object under evaluation. The video streams may, for example, be generated by custom video cameras, or may, if suitable, be video streams taken from an existing closed-circuit television (CCTV) system.
[00041] Turning now to Figure 6, there is shown a block diagram of a computer system 600 on which the analysis module 110 may be implemented in one example. For example, the analysis module 110 may be implemented by way of programming instructions that define object evaluation software as described above being stored on a non-transitory computer readable storage medium 604 or 606. The memory 604 and storage 606 are coupled to a processor 602, such as a microprocessor, through a communication bus 610. The instructions, when executed by the processor 602, provide the functionality of an object evaluation system as described above by executing the above-described method steps.
[00042] It will be appreciated that embodiments of the present invention can be realized in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, devices or integrated circuits, or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are examples of machine-readable storage that are suitable for storing a program or programs that, when executed, implement examples of the present invention. Accordingly, examples provide a program comprising code for implementing a system or method as claimed in any preceding claim and a machine-readable storage storing such a program. Still further, examples of the present invention may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
[00043] All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
[00044] Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.

Claims

CLAIMS
1. Apparatus for evaluating an object from a stream of video images taken in proximity to the object, comprising:
an identity analyzer module for analyzing video images from the stream and for identifying individuals therein;
an emotion analyzer for analyzing a video image from which an individual was identified and for determining an emotion and emotion intensity expressed by the identified individual; and
an object evaluation module for determining an evaluation level for the object based on the determined emotions and emotion intensity levels.
2. The apparatus of claim 1, wherein the identity analyzer is to analyze facial features of an individual and to calculate an identifier for that individual.
3. The apparatus of claim 2, further comprising a first data store for storing, for each frame of a set of video frames in which an individual was identified, the calculated identifier and an associated determined emotion and emotion intensity.
4. The apparatus of claim 1, wherein the identity analyzer determines the approximate coordinates of a face of an identified individual for each video image in which an individual was identified, and wherein the emotion analyzer performs emotion analysis on the area defined by the determined coordinates.
5. The apparatus of claim 3, wherein the emotion analyzer is further to determine, from the data stored in the first data store for a set of video frames, a representative emotion and emotion level for an identified individual, and storing the determined representative emotion, emotion level and corresponding individual identifier in a second data store.
6. The apparatus of claim 1, further comprising an object interest calculator for calculating an object interest level based on at least part of the emotion and emotion level data stored in the second data store.
7. The apparatus of claim 6, wherein the identity analyzer is further to determine additional characteristics of each identified individual including at least one of: the sex of the individual and the approximate age of the individual.
8. The apparatus of claim 7, wherein the object interest calculator calculates an object interest level based on the determined emotions and emotion levels of individuals having at least one of the determined additional characteristics.
9. The apparatus of any previous claim, wherein the emotion analyzer is not to determine an emotion or emotion level for individuals having a predetermined individual identifier.
10. A computer implemented method of evaluating an object comprising:
receiving, at a processor, a plurality of video images taken in proximity to the object;
performing, by the processor, identity analysis on video images from the stream to identify human individuals therein;
performing, by the processor, for each identified human individual, emotion analysis to determine an emotion expressed by the individual; and
calculating, by the processor, an evaluation level for the object based on the determined emotion of different ones of the identified human individuals.
11. The method of claim 10, wherein the step of performing identity analysis further includes:
identifying the presence of a human face in a video image;
performing facial analysis on the identified human face; and generating an identifier identifying the individual based on determined characteristics of the identified human face.
12. The method of claim 10, wherein the step of performing emotion analysis further comprises:
determining an emotion and emotion intensity of an identified human face; and
storing the emotion, emotion intensity, and associated generated identifier in a data store.
13. The method of claim 12, wherein the step of calculating an evaluation level for the object comprises calculating an evaluation level based on the emotion data stored in the data store.
14. The method of claim 10, wherein the step of performing identity analysis further comprises determining at least one of the individual's sex and approximate age, and wherein the step of calculating an evaluation level further comprises calculating an evaluation level based on the detected emotions and emotion levels of at least one of: individuals being within a predetermined age range; and individuals having a predetermined sex.
15. A machine-readable medium that stores machine-readable instructions executable by a processor to determine an evaluation level of an object, the machine-readable medium comprising:
machine-readable instructions that, when executed by the processor, receive a stream of video images;
machine-readable instructions that, when executed by the processor, identify human individuals in the video images from the received stream;
machine-readable instructions that, when executed by the processor, determine, for each identified human individual, an emotion and emotion intensity expressed by the individual; and machine-readable instructions that, when executed by the processor, calculate an evaluation level for the object based on the determined emotions and emotion levels of different ones of the identified human individuals.
PCT/US2010/053649 2010-10-22 2010-10-22 Apparatus and method for evaluating an object WO2012054048A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2010/053649 WO2012054048A1 (en) 2010-10-22 2010-10-22 Apparatus and method for evaluating an object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2010/053649 WO2012054048A1 (en) 2010-10-22 2010-10-22 Apparatus and method for evaluating an object

Publications (1)

Publication Number Publication Date
WO2012054048A1 true WO2012054048A1 (en) 2012-04-26

Family

ID=45975524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/053649 WO2012054048A1 (en) 2010-10-22 2010-10-22 Apparatus and method for evaluating an object

Country Status (1)

Country Link
WO (1) WO2012054048A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489389B2 (en) 2012-06-07 2019-11-26 Wormhole Labs, Inc. Experience analytic objects, systems and methods
US10546586B2 (en) 2016-09-07 2020-01-28 International Business Machines Corporation Conversation path rerouting in a dialog system based on user sentiment
US10649613B2 (en) 2012-06-07 2020-05-12 Wormhole Labs, Inc. Remote experience interfaces, systems and methods
US10700944B2 (en) 2012-06-07 2020-06-30 Wormhole Labs, Inc. Sensor data aggregation system
CN112686156A (en) * 2020-12-30 2021-04-20 平安普惠企业管理有限公司 Emotion monitoring method and device, computer equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050024401A (en) * 2002-06-27 2005-03-10 코닌클리케 필립스 일렉트로닉스 엔.브이. Measurement of content ratings through vision and speech recognition
KR20060102651A (en) * 2005-03-24 2006-09-28 엘지전자 주식회사 Wireless communication terminal with message transmission according to feeling of terminal-user and method of message transmission using same
KR20090105198A (en) * 2008-04-01 2009-10-07 주식회사 케이티 Method and apparatus for providing message based on emotion

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050024401A (en) * 2002-06-27 2005-03-10 코닌클리케 필립스 일렉트로닉스 엔.브이. Measurement of content ratings through vision and speech recognition
KR20060102651A (en) * 2005-03-24 2006-09-28 엘지전자 주식회사 Wireless communication terminal with message transmission according to feeling of terminal-user and method of message transmission using same
KR20090105198A (en) * 2008-04-01 2009-10-07 주식회사 케이티 Method and apparatus for providing message based on emotion

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10895951B2 (en) 2012-06-07 2021-01-19 Wormhole Labs, Inc. Mapping past content from providers in video content sharing community
US10649613B2 (en) 2012-06-07 2020-05-12 Wormhole Labs, Inc. Remote experience interfaces, systems and methods
US10656781B2 (en) 2012-06-07 2020-05-19 Wormhole Labs, Inc. Product placement using video content sharing community
US10700944B2 (en) 2012-06-07 2020-06-30 Wormhole Labs, Inc. Sensor data aggregation system
US10866687B2 (en) 2012-06-07 2020-12-15 Wormhole Labs, Inc. Inserting advertisements into shared video feed environment
US10489389B2 (en) 2012-06-07 2019-11-26 Wormhole Labs, Inc. Experience analytic objects, systems and methods
US10969926B2 (en) 2012-06-07 2021-04-06 Wormhole Labs, Inc. Content restriction in video content sharing community
US11003306B2 (en) 2012-06-07 2021-05-11 Wormhole Labs, Inc. Ranking requests by content providers in video content sharing community
US11030190B2 (en) 2012-06-07 2021-06-08 Wormhole Labs, Inc. Experience analytic objects, systems and methods
US11449190B2 (en) 2012-06-07 2022-09-20 Wormhole Labs, Inc. User tailored of experience feeds
US11469971B2 (en) 2012-06-07 2022-10-11 Wormhole Labs, Inc. Crowd sourced sensor data management systems
US10546586B2 (en) 2016-09-07 2020-01-28 International Business Machines Corporation Conversation path rerouting in a dialog system based on user sentiment
CN112686156A (en) * 2020-12-30 2021-04-20 平安普惠企业管理有限公司 Emotion monitoring method and device, computer equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US8724845B2 (en) Content determination program and content determination device
JP4876687B2 (en) Attention level measuring device and attention level measuring system
JP6622894B2 (en) Multifactor image feature registration and tracking method, circuit, apparatus, system, and associated computer-executable code
JP6267861B2 (en) Usage measurement techniques and systems for interactive advertising
US20140270483A1 (en) Methods and systems for measuring group behavior
US20150006281A1 (en) Information processor, information processing method, and computer-readable medium
US20140278742A1 (en) Store-wide customer behavior analysis system using multiple sensors
JP2004054376A (en) Method and device for estimating group attribute
JP2004348618A (en) Customer information collection and management method and system therefor
JP2005251170A (en) Display
JP2008152810A (en) Customer information collection and management system
JP2010113313A (en) Electronic advertisement apparatus, electronic advertisement method and program
WO2012054048A1 (en) Apparatus and method for evaluating an object
JP2017083980A (en) Behavior automatic analyzer and system and method
US9361705B2 (en) Methods and systems for measuring group behavior
JP2012252613A (en) Customer behavior tracking type video distribution system
JP2010211485A (en) Gaze degree measurement device, gaze degree measurement method, gaze degree measurement program and recording medium with the same program recorded
KR20150093532A (en) Method and apparatus for managing information
JP6791362B2 (en) Image processing equipment, image processing methods, and programs
KR20160011804A (en) The method for providing marketing information for the customers of the stores based on the information about a customers' genders and ages detected by using face recognition technology
JP5115763B2 (en) Image processing apparatus, content distribution system, image processing method, and program
WO2021104388A1 (en) System and method for interactive perception and content presentation
Ishibashi Application of deep learning to pre-processing of cousumer's eye tracking data in supermarket
CN112989988A (en) Information integration method, device, equipment, readable storage medium and program product
JP5103287B2 (en) ADVERTISEMENT EFFECT MEASUREMENT DEVICE, ADVERTISEMENT EFFECT MEASUREMENT METHOD, ADVERTISEMENT EFFECT MEASUREMENT PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10858765

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10858765

Country of ref document: EP

Kind code of ref document: A1