CN104050587A - Method and apparatus for subjective advertisement effectiveness analysis - Google Patents

Method and apparatus for subjective advertisement effectiveness analysis

Info

Publication number
CN104050587A
CN104050587A CN201410099093.8A
Authority
CN
China
Prior art keywords
advertisement
user
response
vehicle
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410099093.8A
Other languages
Chinese (zh)
Inventor
兰德·亨利·维新坦那尔
刘忆民
佩里·罗宾逊·麦克尼尔
奥莱格·由里维奇·古斯京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ford Global Technologies LLC
Original Assignee
Ford Global Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ford Global Technologies LLC filed Critical Ford Global Technologies LLC
Publication of CN104050587A publication Critical patent/CN104050587A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0242Determining effectiveness of advertisements

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and apparatus for subjective advertisement effectiveness analysis. A system includes a processor configured to receive an advertisement. The processor is also configured to present the advertisement to a vehicle occupant. The processor is further configured to visually record an occupant response during the course of the advertisement presentation using a vehicle camera. The processor is additionally configured to analyze the visually recorded response to gauge a user reaction to the advertisement and based on the analysis adjust an advertisement variable metric with respect to the presented advertisement.

Description

Method and apparatus for subjective advertisement effectiveness analysis
Technical field
The illustrative embodiments relate generally to a method and apparatus for subjective advertisement effectiveness analysis.
Background
From the television to the shop assistant in a store, advertising is a form of human communication. People are especially good at communicating face to face with one another, and technologies have been developed to meet the growing communication demands of an increasingly technical society.
Human communication is a combination of verbal and non-verbal interaction. Through facial expressions, body posture and other non-verbal cues, people can communicate effectively with one another. This is especially true of emotional communication. In fact, perhaps surprisingly, 93% of emotional communication occurs through facial expression, posture or changes in voice (para-linguistically) rather than verbally. Many findings and much experience in the advertising field likewise point toward strengthening visual emotional communication.
Advertising has the more difficult task of developing a generally understood paradigm for the complete communication of complex ideas, together with a universal set of examples. For this reason, communication with sound, such as text or text-to-speech (TTS) communication, is preferable. Face-to-face communication combined with voice is better still than voice alone, since it allows both parties to a dialogue to see and hear the other party's voice, expression and posture. Because good communication is so important, face-to-face exchange remains preferred, which is why politicians and CEOs still travel to meet one another and why retailers still need a sales force.
European patent application EP1557810 relates generally to a display arrangement including: an image display device having two or more groups of images for display; a camera oriented toward a user viewing the display; a face detector for detecting the faces of people in images captured by the camera, the face detector being arranged to detect faces in at least two face categories; and a device, responsive to the frequency with which the face categories are detected by the face detector in one or more different time periods, for selecting which group of images the image display device shows at a given time of day.
U.S. patent application 2012/0265616 relates generally to an efficient system and method for dynamically selecting advertising content. In an example, target sensory content and identification information can be received for a targeted advertising region. The target sensory content and identification information can be evaluated to determine characteristics of the targeted advertising region. Based on the characteristics satisfying the conditions of a predefined function, a subset of the advertising content can be determined. In some embodiments, the dynamic selection of advertising content can be performed on a remote computing device. Other embodiments can employ the subset of advertising content for consumption in the targeted advertising region.
U.S. patent application 2012/0243751 relates generally to collecting facial information on a person and analyzing it for emotion. The facial information can be used to determine a baseline face that describes the person's default facial expression. Deviations from this baseline face can be used to evaluate emotion and also to infer mental state. Facial images can be automatically scored for various expressions (including smiling, frowning and blinking). Image descriptors and image classifiers can be used during this baseline face analysis.
Summary of the invention
In a first illustrative embodiment, a system includes a processor configured to receive an advertisement. The processor is also configured to present the advertisement to a vehicle occupant. The processor is further configured to visually record an occupant response over the course of the advertisement presentation using a vehicle camera. The processor is additionally configured to analyze the visually recorded response to gauge a user reaction to the advertisement and, based on the analysis, to adjust an advertisement variable metric with respect to the presented advertisement.
The advertisement presentation may include a visual presentation.
The advertisement presentation may include an audible presentation.
The processor may be configured to record the user response via a vehicle camera.
The user response may include facial expressions.
Analysis of the facial expressions may yield a determined user emotional state.
A user profile may be updated with the user's emotional state, utilizing the user's emotional-state-based response to the advertisement.
In a second illustrative embodiment, a system includes a processor configured to receive an advertisement including a facial recognition action instruction. The processor is also configured to use a vehicle camera to capture images or video of a vehicle occupant to record a user expression state. The processor is further configured to analyze the user expression state and, when the user expression state meets the facial recognition action instruction, to deliver the advertisement.
The facial recognition action instruction may include an instruction to deliver the advertisement while a facial state corresponds to a particular emotion.
The facial recognition action instruction may be based at least in part on a previously observed response under which a similar advertisement was delivered with a positive response recorded.
Facial expressions may be recorded over the course of the advertisement delivery in predefined segments.
The processor may be further configured to record user facial expressions during advertisement delivery and to update a user profile with the user's response to the advertisement based on the recorded facial expressions.
When the advertisement response is updated, the predefined segments may be associated so as to correspond to particular advertisement segments.
In a third illustrative embodiment, a computer-implemented method includes receiving an advertisement including a facial recognition action instruction. The method also includes using a vehicle camera to capture images or video of a vehicle occupant to record a user expression state. The method further includes analyzing the user expression state and, when the user expression state meets the facial recognition action instruction, delivering the advertisement.
The facial recognition action instruction may include an instruction to deliver the advertisement while a facial state corresponds to a particular emotion.
The facial recognition action instruction may be based at least in part on a previously observed response under which a similar advertisement was delivered with a positive response recorded.
Facial expressions may be recorded over the course of the advertisement delivery in predefined segments.
The method may also include recording user facial expressions during advertisement delivery and updating a user profile with the user's response to the advertisement based on the recorded facial expressions.
When the advertisement response is updated, the predefined segments may be associated so as to correspond to particular advertisement segments.
Brief description of the drawings
Fig. 1 illustrates an illustrative vehicle computing system;
Figs. 2A, 2B, 2C and 2D illustrate example facial expression analysis;
Fig. 3 illustrates an illustrative process for analyzing facial expressions;
Fig. 4 illustrates an illustrative advertisement analysis system;
Fig. 5 illustrates an illustrative process for advertisement data collection;
Fig. 6 illustrates a second illustrative process for advertisement data collection;
Fig. 7 illustrates an illustrative example of facial recognition in a rich media player environment;
Fig. 8 illustrates an example of mood testing for advertisement evaluation.
Detailed description
As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely examples of the invention, which may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, the specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to employ the present invention in various ways.
Fig. 1 illustrates an example block topology for a vehicle-based computing system (VCS) 1 for a vehicle 31. An example of such a vehicle-based computing system 1 is the SYNC system manufactured by the Ford Motor Company. A vehicle provided with a vehicle-based computing system may contain a visual front-end interface 4 located in the vehicle. The user may also be able to interact with the interface if it is provided with, for example, a touch-sensitive screen. In another illustrative embodiment, the interaction occurs through button presses and audible speech with speech synthesis.
In the illustrative embodiment 1 shown in Fig. 1, a processor 3 controls at least some portion of the operation of the vehicle-based computing system. Provided within the vehicle, the processor allows onboard processing of commands and routines. Further, the processor is connected to both volatile memory 5 and persistent memory 7. In this illustrative embodiment, the volatile memory is random access memory (RAM) and the persistent memory is a hard disk drive (HDD) or flash memory.
The processor is also provided with a number of different inputs allowing the user to interface with the processor. In this illustrative embodiment, a microphone 29, an auxiliary input 25 (for input 33), a USB input 23, a GPS input 24 and a BLUETOOTH input 15 are all provided. An input selector 51 is also provided, to allow a user to swap between the various inputs. Input to both the microphone and the auxiliary connector is converted from analog to digital by a converter 27 before being passed to the processor. Although not shown, numerous vehicle components and auxiliary components in communication with the VCS may use a vehicle network (such as, but not limited to, a CAN bus) to pass data to and from the VCS (or components thereof).
Outputs of the system can include, but are not limited to, a visual display 4 and a speaker 13 or stereo system output. The speaker is connected to an amplifier 11 and receives its signal from the processor 3 through a digital-to-analog converter 9. Output can also be made to a remote BLUETOOTH device (such as PND 54) or a USB device (such as vehicle navigation device 60) along the bidirectional data streams shown at 19 and 21, respectively.
In one illustrative embodiment, the system 1 uses the BLUETOOTH transceiver 15 to communicate 17 with a user's mobile device 53 (e.g., a cell phone, smart phone, PDA, or any other device having wireless remote network connectivity). The mobile device can then be used to communicate 59 with a network 61 outside the vehicle 31 through, for example, communication 55 with a cellular tower 57. In some embodiments, the tower 57 may be a WiFi access point.
Exemplary communication between the mobile device and the BLUETOOTH transceiver is represented by signal 14.
Pairing the mobile device 53 and the BLUETOOTH transceiver 15 can be instructed through a button 52 or similar input. Accordingly, the CPU is instructed that the onboard BLUETOOTH transceiver will be paired with a BLUETOOTH transceiver in a mobile device.
Data may be communicated between the CPU 3 and the network 61 utilizing, for example, a data plan, data over voice, or DTMF tones associated with the mobile device 53. Alternatively, it may be desirable to include an onboard modem 63 having antenna 18 in order to communicate 16 data between the CPU 3 and the network 61 over the voice band. The mobile device 53 can then be used to communicate 59 with the network 61 outside the vehicle 31 through, for example, communication 55 with the cellular tower 57. In some embodiments, the modem 63 may establish communication 20 with the tower 57 for communicating with the network 61. As a non-limiting example, the modem 63 may be a USB cellular modem and the communication 20 may be cellular communication.
In one illustrative embodiment, the processor is provided with an operating system including an API to communicate with modem application software. The modem application software may access an embedded module or firmware on the BLUETOOTH transceiver to complete wireless communication with a remote BLUETOOTH transceiver (such as that found in a mobile device). Bluetooth is a subset of the IEEE 802 PAN (personal area network) protocols. IEEE 802 LAN (local area network) protocols include WiFi and have considerable cross-functionality with IEEE 802 PAN. Both are suitable for wireless communication within a vehicle. Other communication means that can be used in this realm are free-space optical communication (such as IrDA) and non-standardized consumer IR protocols.
In another embodiment, the mobile device 53 includes a modem for voice band or broadband data communication. In the data-over-voice embodiment, a technique known as frequency division multiplexing may be implemented so that the owner of the mobile device can talk over the device while data is being transferred. At other times, when the owner is not using the device, the data transfer can use the whole bandwidth (300 Hz to 3.4 kHz in one example). While frequency division multiplexing may be common for analog cellular communication between the vehicle and the internet, and is still used, it has largely been replaced by hybrids of Code Domain Multiple Access (CDMA), Time Domain Multiple Access (TDMA) and Space Domain Multiple Access (SDMA) for digital cellular communication. These are all ITU IMT-2000 (3G) compliant standards and offer data rates up to 2 mbs for stationary or walking users and 385 kbs for users in a moving vehicle. 3G standards are now being replaced by IMT-Advanced (4G), which offers 100 mbs for users in a vehicle and 1 gbs for stationary users. If the user has a data plan associated with the mobile device, the data plan may allow for broadband transmission and the system could use a much wider bandwidth (speeding up data transfer). In still another embodiment, the mobile device 53 is replaced with a cellular communication device (not shown) that is installed in the vehicle 31. In yet another embodiment, the mobile device 53 may be a wireless local area network (LAN) device capable of communication over, for example (and without limitation), an 802.11g network (i.e., WiFi) or a WiMax network.
In one embodiment, incoming data can be passed through the mobile device via data over voice or a data plan, through the onboard BLUETOOTH transceiver, and into the vehicle's internal processor 3. In the case of certain temporary data, for example, the data can be stored on the HDD or other storage media 7 until such time as the data is no longer needed.
Additional sources that may interface with the vehicle include: a personal navigation device 54 having, for example, a USB connection 56 and/or an antenna 58; a vehicle navigation device 60 having a USB 62 or other connection; an onboard GPS device 24; or a remote system (not shown) having connectivity to the network 61. USB is one of a class of serial networking protocols. IEEE 1394 (FireWire), EIA (Electronic Industries Association) serial protocols, IEEE 1284 (Centronics Port), S/PDIF (Sony/Philips Digital Interconnect Format) and USB-IF (USB Implementers Forum) form the backbone of the device-to-device serial standards. Most of these protocols can be implemented for either electrical or optical communication.
Further, the CPU can be in communication with a variety of other auxiliary devices 65. These devices can be connected through a wireless connection 67 or a wired connection 69. The auxiliary devices 65 may include, but are not limited to, personal media players, wireless health devices, portable computers and the like.
In addition, or alternatively, the CPU can be connected to a vehicle-based wireless router 73, using for example a WiFi transceiver 71. This could allow the CPU to connect to remote networks within the range of the local router 73.
In addition to having exemplary processes executed by a vehicle computing system located in a vehicle, in certain embodiments the exemplary processes may be executed by a computing system in communication with the vehicle computing system. Such a system may include, but is not limited to, a wireless device (such as, but not limited to, a mobile phone) or a remote computing system (such as, but not limited to, a server) connected through the wireless device. Collectively, such systems may be referred to as vehicle associated computing systems (VACS). In certain embodiments, particular components of the VACS may perform particular portions of a process depending on the particular implementation of the system. By way of example and not limitation, if a process has a step of sending or receiving information with a paired wireless device, then it is likely that the wireless device is not performing the process, since the wireless device would not "send and receive" information with itself. One of ordinary skill in the art will understand when it is inappropriate to apply a particular VACS to a given solution. In all solutions, it is contemplated that at least the vehicle computing system (VCS) located within the vehicle itself is capable of performing the exemplary processes.
An increasing share of human-machine interaction occurs through machines that approach people's interactive abilities by adopting human capabilities. A computer can present itself as an avatar with human-like voice, gestures and expressions. The illustrative embodiments focus on recognizing, through machine vision, the facial expressions/emotional expressions of people in a vehicle and reacting to them appropriately.
Attempts have been made to teach computers to understand people's subjective emotional expressions and to take note of how people are feeling, especially in the advertising field. Computers have recorded people's reactions to advertisements, such as whether a person is laughing, whether the person is frowning, whether the person is shocked or surprised, or even whether the person is paying attention at all. Facial and body expressions paint a rich picture of emotional response, providing invaluable insight for advertising, brand impact and product/service satisfaction.
When people listen to programming or advertisements in a car, the situation differs from watching television. The geometry and constraints of the vehicle establish the occupants' orientation, and their presence is determined by an occupant classification system. Because of the linear, forward-facing nature of driving, occupants' gaze is directed forward most of the time. To avoid dangerous visual distraction, most vehicle media is audio-oriented. Also, due to limited size, a vehicle recording device generally captures images of the head and face rather than the whole body.
Determining emotion from images will help measure advertisement effectiveness. Doing so uses strategies for recognizing universal human emotions that can be determined from innate and universal expressions. Two such systems are known. Darwin first addressed this problem in his work The Expression of the Emotions in Man and Animals, using the expressions of people and mammals to identify particular emotions and tracking the expressions and emotional behavior of animals along evolutionary lines. Because Darwin's emotions were observed in both humans and animals (including love, sympathy, disgust, suspicion, envy, jealousy, greed, revenge, deceit, devotion, cunning, remorse, vanity, conceit, ambition, pride and humility), they were considered universal.
More recently, the Facial Action Coding System (FACS) developed by Paul Ekman and his colleagues has been widely used. The FACS system identifies seven basic universal emotions that can be reliably determined through automatic recognition or crowd-sourced identification: anger, disgust, fear, happiness, sadness, surprise and neutral. These are emotions of the limbic system and can be identified in combination, with estimated intensity. They are widely regarded as universal and involuntary. A variety of facial expression recognition techniques have been developed and implemented in commercial software from vendors such as NVISO, Visual Recognition and Noldus FaceReader.
Figs. 2A through 2D illustrate example facial expression analysis according to FACS. The eyebrows 201, 211, 221, 231, the eyes 203, 213, 223, 233 and the mouth 205, 215, 225, 235 are considered to determine the varying degrees of emotional content in an expression.
For each facial expression, there are a number of possible emotions that can be determined. In this illustrative example, the emotions are surprise, happiness, sadness, doubt, disgust, anger and normal. Other emotions can also be added if desired.
Each emotion also has a confidence level associated with it, in this example ranging from 0 to 1. As can be seen in Fig. 2A, with the eyebrows 201 relatively static, the eyes 203 at a standard aperture and the mouth mostly static, the most closely corresponding emotional state 209 is "normal." This can represent a baseline expression.
In Fig. 2B, the eyebrows 211 are flattened, the eyes 213 are narrowed and the lips 215 are curved downward. Based on these observations, the new emotional state 219 corresponds with the highest confidence to "sad," with "doubt" following closely.
In Fig. 2C, the eyebrows 221 are now raised, the eyes 223 are wide and the mouth 225 is also wide open. This new expression 229 best matches "surprise," although it also scores highly for "happy."
Finally, in Fig. 2D, the eyebrows 231 are slightly flattened, the eyes 233 are slightly narrowed and the mouth 235 is curved upward. This expression 239 has the highest degree of match with "happy."
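For illustration only, the following Python sketch shows one way per-emotion confidence scores of the kind described for Figs. 2A through 2D could be computed from normalized facial feature measurements. The feature names, templates and emotion set are assumptions made for this example, not the classifier of the embodiments, which may instead use FACS action units and trained models.
    # Illustrative only: hypothetical normalized features and emotion templates.
    FEATURES = ("brow_raise", "eye_aperture", "mouth_curve", "mouth_open")
    TEMPLATES = {
        "normal":   {"brow_raise": 0.0,  "eye_aperture": 0.0,  "mouth_curve": 0.0,  "mouth_open": 0.0},
        "sad":      {"brow_raise": -0.3, "eye_aperture": -0.4, "mouth_curve": -0.8, "mouth_open": 0.0},
        "surprise": {"brow_raise": 0.9,  "eye_aperture": 0.8,  "mouth_curve": 0.0,  "mouth_open": 0.9},
        "happy":    {"brow_raise": 0.1,  "eye_aperture": -0.2, "mouth_curve": 0.8,  "mouth_open": 0.3},
    }

    def emotion_confidences(measured):
        """Map measured features (each in [-1, 1]) to a 0-1 confidence per emotion."""
        scores = {}
        for emotion, template in TEMPLATES.items():
            # Confidence falls off with the mean absolute distance from the template.
            dist = sum(abs(measured[f] - template[f]) for f in FEATURES) / len(FEATURES)
            scores[emotion] = round(max(0.0, 1.0 - dist), 2)
        return scores

    # A Fig. 2C-like observation: raised brows, wide eyes, open mouth.
    print(emotion_confidences({"brow_raise": 0.8, "eye_aperture": 0.7,
                               "mouth_curve": 0.1, "mouth_open": 0.8}))
Feeding in the Fig. 2C-like observation ranks "surprise" highest while "happy" also scores above the baseline "normal," mirroring the discussion above.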
Detection of subjective emotional expression can play a significant role when dealing with a driver who is communicating with the VCS while driving, and who may therefore be interacting with an advertisement, making a purchase, or providing an evaluation of or satisfaction rating for some service (for example, dealer service). At present, a common user complaint about such systems is that the system cannot respond well in a "conversational" session.
A four-year-old child may understand a conversational sentence carrying emotion more accurately than a computing system that cost millions of dollars to develop. This is because such systems typically operate on keywords, and also because they cannot appreciate context and do not understand emotion. People, on the other hand, may tend to speak subjective language with non-verbal emotional expression, in contrast to a dialogue that normally consists of a string of objectively expressed words. Instead of giving an explicit rating after a dealer service experience, a vehicle occupant may simply say "it was fine" with a particular emotion (intonation) or an accompanying facial expression (a clear smile, or a neutral face); even for the same person, "it was fine" may have entirely different meanings depending on facial expression, intonation or posture.
Competition for advertising space in the vehicle (and for obtaining product/service evaluations) is becoming ever more fierce. Advertising methods used in print media have migrated to online advertising and are now beginning to target the vehicle. The illustrative embodiments provide a safe way to measure the impact of in-vehicle advertising, the effectiveness of in-vehicle advertising and post-sale product/service satisfaction (including subjective emotional expression), and can also collect data in this area to learn more about human emotional expression.
An exemplary illustrative advertising system consists of a front-end human-machine interface, a machine learning system with expression (face) recognition, and a back-end control system. In addition, for portability and to enhance learning capability, the system uses cloud-based storage and computation. The front-end human-machine interface can communicate with the driver and receive verbal and non-verbal input based on many existing options (voice, touch, facial/emotional expression, haptics (camera, steering wheel, seat and controls), etc.). Further discussion of this system is provided with respect to Fig. 4.
Fig. 3 illustrates an illustrative process for analyzing facial expressions. In this illustrative example, the process begins at some point when a user is in the vehicle 301. In this example, the user can be any occupant within the visual range of a vehicle camera. Through facial recognition, the VCS can identify the occupants to be monitored within the camera's visual range. The process scans for any potentially identifiable objects/occupants 303 and determines whether the user is in a vehicle database 307.
If the user is in the database, the process retrieves information related to the user 309. Once the user has been identified, the user's past ad-click history can be reviewed and analyzed. At 311, the system begins learning how to interpret the user's subjective emotional expressions through face/gesture/intonation expression recognition software.
If the user is new to the system, an existing generic model is used as a starting point and, based on feedback from the user, the system quickly improves with each use, learning the meaning of non-verbal expressions better and better over time. The machine learning process is based on statistical methods, examples of which include the use of contextual bandits, Bayesian learning or artificial neural networks. Once the system has developed a psychometric mapping model of the user, a system with a camera can take subjective emotional input (for example, a facial expression) and send a quantized command (for example, yes or no, good or bad) to the vehicle control system.
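As a rough Python sketch of one of the statistical methods named above, the following epsilon-greedy contextual bandit treats the detected emotional state as the context, ad categories as the arms and a scored reaction as the reward. The category names and reward scheme are assumptions for illustration; the embodiments may equally use Bayesian learning or neural networks.
    import random
    from collections import defaultdict

    class AdBandit:
        """Context = detected emotional state; arms = ad categories; reward = reaction score."""
        def __init__(self, categories, epsilon=0.1):
            self.categories = list(categories)
            self.epsilon = epsilon
            self.value = defaultdict(float)   # (context, category) -> estimated reward
            self.count = defaultdict(int)

        def choose(self, context):
            if random.random() < self.epsilon:                  # occasionally explore
                return random.choice(self.categories)
            return max(self.categories,                         # otherwise exploit the best estimate
                       key=lambda c: self.value[(context, c)])

        def update(self, context, category, reward):
            key = (context, category)
            self.count[key] += 1
            # Incremental mean of the observed rewards for this context/category pair.
            self.value[key] += (reward - self.value[key]) / self.count[key]

    bandit = AdBandit(["food", "apparel", "services"])
    category = bandit.choose("sad")
    bandit.update("sad", category, reward=1.0)   # occupant smiled: positive reward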
The front-end control application runs in the vehicle and uses a conversational system, including the camera, to: a) interact with the user and obtain feedback on advertisements, businesses, product recommendations, services, etc.; and b) record the user's feedback and body expressions. A cloud-based signal filter (the back-end control system): a) processes the input data and obtains relevant user information; b) filters the user responses and removes unwanted information; and c) classifies and indexes user expressions, dynamically clusters user feedback into groups and fuses it with user profiles (demographics, vehicle information). At any time, an advertiser can make a request and the system will retrieve the relevant information. In addition, the machine learning software can process an individual driver's historical data and, over time, become better and better at recognizing that driver's non-verbal expressions.
For example, if a driver has just completed a dealer service visit and returned to the car, the vehicle can identify the driver and ask for an evaluation of the recent experience. The driver may say "good," or simply laugh, or give a thumbs-up. This evaluation response is recorded non-verbally by the camera in the vehicle. A control application may first send the input to a message cloud. A message filter will process the expression and recognize the emotional expression, which is then linked to an evaluation result (a score). After the vehicle sends the result, together with driver information (VIN, demographics), to the advertiser or service provider (for example, the dealer), the advertiser can then decide how to respond.
Based on the observed verbal and non-verbal responses, the system attempts to estimate whether the driver is uninterested (for example, a neutral expression) or dissatisfied 313, generally interested or generally satisfied 315, or very interested or very satisfied 317. If the user is uninterested or dissatisfied, the process may ask questions or attempt to find the reason for the user's reaction 319. If the user is satisfied, interested or very interested, the process may recommend (now or later) similar services or advertisements 321. The user's reactions to various services and advertisements may also be stored in a database for later reference.
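As a rough illustration of the branching at 313 through 321, the Python sketch below buckets a scored reaction into the three levels described above. The score scale, thresholds and action names are assumptions made for this example, not values defined by the embodiments.
    def handle_reaction(score):
        """score: -1.0 (dissatisfied) .. +1.0 (very satisfied), from expression analysis."""
        if score < -0.2:
            return "ask_follow_up_question"      # 313/319: uninterested or dissatisfied, probe for a reason
        if score < 0.5:
            return "recommend_similar_later"     # 315/321: general interest, recommend similar content later
        return "recommend_similar_now"           # 317/321: very interested, recommend similar content now

    print(handle_reaction(0.7))   # -> "recommend_similar_now"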
Fig. 4 illustrates an illustrative advertisement analysis system. In this illustrative example, the VCS module includes an application link module capable of communicating with applications running on a phone or other mobile device 403. This module can, for example, feed advertisement data to the mobile device (if advertisements are provided), so that particular advertisements can be recommended based on observed user preferences. In this illustrative embodiment, an OEM application interface 417 runs on the mobile device between the applications 419 and the application link module.
A non-verbal expression recorder 413 is also provided as part of the VCS. It records the visual expressions produced in response to an advertisement. In conjunction with this recording, an application, or an advertisement coming from an application, can send data about the advertisement, so that the recorder can be aware of the context of the advertisement being evaluated. For example, and without limitation, an advertisement for McDonald's may have tags such as "fast food," "food" and "hamburgers" associated with it. When the expression recorder measures expressions to evaluate the response to the advertisement, the process can determine whether the user liked or disliked an advertisement associated with those tags. Other advertisements from other vendors may have only some of those tags associated with them, so the user's reaction to specific tags can be sorted out through repeated observation. For example, another advertisement might be for FIVE GUYS HAMBURGERS. If that advertisement has only "food" and "hamburgers" associated with it, and the user's response to it is positive while the response to the McDonald's advertisement was negative, it might be guessed that the user does not like McDonald's, does not like fast food, or was not hungry when the McDonald's advertisement was presented (among other possible conclusions). Through repeated observation and filtering, a complex set of user preferences can be determined.
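A minimal Python sketch of this tag-level bookkeeping follows; the score scale and the two-advertisement history are illustrative assumptions rather than the embodiment's actual data model.
    from collections import defaultdict

    tag_scores = defaultdict(list)

    def record_reaction(ad_tags, reaction_score):
        """reaction_score: -1.0 (negative) .. +1.0 (positive), from expression analysis."""
        for tag in ad_tags:
            tag_scores[tag].append(reaction_score)

    record_reaction({"fast food", "food", "hamburgers"}, -0.6)  # first hamburger ad, disliked
    record_reaction({"food", "hamburgers"}, +0.7)               # second hamburger ad, liked

    # Per-tag averages: "fast food" is the only tag left with a negative average, so
    # repeated observations would point toward that tag (or timing/hunger) as the issue.
    preferences = {tag: sum(s) / len(s) for tag, s in tag_scores.items()}
    print(preferences)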
The VCS 401 in this example also includes a media player 415 that can be used for advertisement playback. Also providing input to the VCS are an HMI (human-machine interface) 407 and vehicle systems 409.
The HMI includes, but is not limited to, elements such as the camera, speakers, speech recognition functions, steering wheel inputs, the instrument panel and a touch-screen display. The vehicle systems include, but are not limited to, navigation functions and hardware, driver state measurements (such as workload estimation and driver happiness estimation), vehicle identification information and driver history (i.e., driver data).
The VCS also communicates with the cloud 405 through the mobile device. The cloud provides advanced computational resources, which may include servers 421, data management 423, an advertisement server 425 and learning software 427. Since it may be difficult to include enough computing power in the vehicle to analyze facial expressions, the cloud can provide additional computational resources for facial analysis purposes. Images of expressions, or measurements of image data points or other expression-related data, can be sent to the cloud for further analysis and evaluation.
Fig. 5 illustrates an illustrative process for advertisement data collection. In this illustrative example, the process begins by playing an advertisement in the vehicle 501. In addition to being presented to the user, the advertisement may have data associated with it that is useful for tracking the user's reaction to the advertisement and to similar types of advertisements. For example, because a user may show more interest in food-based advertisements around lunchtime, time data, environmental data and other data may also be tracked. Similarly, when it is raining, a user may be more inclined to use a drive-through restaurant.
While the advertisement plays, facial recognition software is enabled and begins recording and time-stamping user emotions 503. These user emotions can be recorded through use of the camera, time-stamped for comparison against the advertisement (even down to fractions of the advertisement), and evaluated as shown in Figs. 2A through 2D. Vehicle environment information can also be collected from the vehicle bus 505. The vehicle environment information may include information about the vehicle state (speed, location), information about the users (number of passengers, weight, size), environmental information (weather, traffic volume) and any other useful information.
In this example, the system at least measures the number of occupants and measures a driver distraction level 507. Driver distraction can be a useful indicator of how much attention the driver may be able to give an advertisement, and can be used to temper the analysis. For example, if the driver is highly distracted and traffic is heavy, an "angry" response may have nothing to do with the advertisement.
Once the advertisement ends 509, the process can evaluate the range of facial expressions over time, evaluate the environmental information and evaluate any other relevant variables. This information can be used to update the advertisement data 513, and the evaluation of the particular advertisement can also be added to the user profile for updating 515.
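The Python sketch below illustrates one possible shape for the per-advertisement record this process builds: time-stamped emotion samples plus the context used to temper the analysis. The field names and the distraction threshold are assumptions for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class AdObservation:
        ad_id: str
        emotion_samples: list = field(default_factory=list)   # (t_seconds, emotion, confidence)
        context: dict = field(default_factory=dict)           # speed, weather, occupants, distraction

        def add_sample(self, t, emotion, confidence):
            self.emotion_samples.append((t, emotion, confidence))

        def dominant_emotion(self):
            # Discount the whole observation if the driver was highly distracted.
            if self.context.get("distraction", 0.0) > 0.8:
                return "inconclusive"
            totals = {}
            for _, emotion, conf in self.emotion_samples:
                totals[emotion] = totals.get(emotion, 0.0) + conf
            return max(totals, key=totals.get) if totals else "none"

    obs = AdObservation("ad-123", context={"speed_kph": 45, "weather": "rain", "distraction": 0.2})
    obs.add_sample(2.0, "normal", 0.5)
    obs.add_sample(8.5, "happy", 0.7)
    print(obs.dominant_emotion())   # -> "happy"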
Fig. 6 illustrates a second illustrative process for advertisement data collection. In this illustrative embodiment, the advertisement is keyed to a particular emotional state. For example, it may be undesirable to play an advertisement when the user is visibly angry, because the advertiser may not want its product to be subconsciously associated with anger. An advertiser might even pay a bonus to have an advertisement played while the user is in a particular emotional state.
In this process, facial recognition software begins detecting an emotional state based on expressions 601. As shown in Figs. 2A through 2D, the expressions can be evaluated to analyze the driver's emotional state. When the desired emotional state is reached, the advertisement begins to play 603.
While the advertisement plays, the facial recognition software again begins recording the user's emotional state and time-stamping the responses 605. This is useful for determining more than whether the advertisement was successful when presented in a particular emotional state. For example, if a person is in a "sad" state and an advertisement for a favorite food is played, the person (possibly finding comfort in food) may shift from "sad" to "normal." The emotion evaluation can also be combined with user behavior for further analysis (i.e., whether the vehicle visited a restaurant within the next five minutes). Once it is known that the user takes comfort in a specific food, or in food generally, food advertisements can be suggested for that user while the user is in a "sad" state.
Similarly, there may be no response from the user, or the user's state may shift from a "low" state (such as "sad") to an even worse state (such as "angry"). If there is a measurable correlation between a particular advertisement played during a "sad" state and the user becoming "angry," it can be suggested that this advertisement (or other similar advertisements) be avoided during "sad" states.
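The following Python sketch captures the two ideas above: gating playback on a target emotional state and classifying the before/after state change. The instruction format, state names and ordering are assumptions for illustration only, not the embodiment's delivery mechanism.
    def should_play(ad_instruction, current_state):
        """ad_instruction example: {"play_when": {"sad", "normal"}}."""
        return current_state in ad_instruction.get("play_when", set())

    def mood_shift(before, after):
        order = {"angry": 0, "sad": 1, "normal": 2, "happy": 3}
        if order.get(after, 2) > order.get(before, 2):
            return "improved"     # e.g., sad -> normal after a comfort-food ad
        if order.get(after, 2) < order.get(before, 2):
            return "worsened"     # e.g., sad -> angry: avoid this ad in this state
        return "unchanged"

    ad = {"id": "comfort-food-01", "play_when": {"sad", "normal"}}
    if should_play(ad, "sad"):
        print(mood_shift("sad", "normal"))   # -> "improved"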
As previously mentioned, vehicle environment information can also be collected 607. This information includes any measurable or reportable variable that can be used to determine the environment in which the success of the advertisement can be evaluated. Although it may be difficult to isolate what drove a user response from a single variable when many variables are present, long-term analysis can help refine the specific contribution of any given variable.
Also in this embodiment, the number of occupants and the distraction level are measured, as in the example shown in Fig. 5.
Once the advertisement ends, the process can again analyze the facial recognition data over time, the environmental information, and any other variables that may produce an effect on a reaction and/or that can be evaluated for a relationship with a given reaction. A variable "produces an effect" on a reaction when, with the other variables generally neutral, that variable tends to produce an identifiable effect (for example, time = lunchtime, effect = a generally positive response to food advertisements). Given such an observation, and based on the user's positive responses to advertisements at lunchtime, it can be suggested that food advertisements be delivered when the time variable equals "lunchtime" (or a meal time or equivalent).
Similarly, a particular variable may have a relationship with a given reaction. For example, if users are "happy," they may be more inclined to make an "impulse buy." Advertisers generally know whether their products are considered "impulse buys," and (in this non-limiting model) may prefer to play "impulse buy" advertisements during a user state corresponding to "happy."
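As a simple illustration of the "produces an effect" check described above, the Python sketch below compares the positive-response rate when a context variable holds against the overall rate; the record format and the example history are assumptions for illustration.
    def lift(records, variable, value):
        """records: list of dicts such as {"time": "lunchtime", "positive": True}."""
        overall = sum(r["positive"] for r in records) / len(records)
        matching = [r for r in records if r.get(variable) == value]
        if not matching:
            return 0.0
        conditional = sum(r["positive"] for r in matching) / len(matching)
        return conditional - overall   # > 0: the variable tends to produce positive responses

    history = [
        {"time": "lunchtime", "positive": True},
        {"time": "lunchtime", "positive": True},
        {"time": "evening",   "positive": False},
        {"time": "evening",   "positive": True},
    ]
    print(lift(history, "time", "lunchtime"))   # -> 0.25: food ads do better at lunchtime here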
In addition, the advertisement data for each advertisement can be updated based on the observed responses 615. This data can also be uploaded to the user profile 617. The data can additionally be stored with other data, such as "group data." "Group data" is data identifying a group response when an advertisement is played while a group of people is present. "Group data" can relate to specific members, to demographic types of members (for example, adults and children), or simply to whoever happens to be present.
Fig. 7 illustrates an illustrative example of facial recognition in a rich media player environment. In this illustrative example, the process begins in the same manner as described for Fig. 6. Facial recognition processing is enabled 701 and, when the correct emotion is recognized (or "guessed at"), the advertisement content is played 703.
While the advertisement plays, the facial recognition software continues to check and record the emotional state 705. In this illustrative example, the states are also time-stamped, although in an alternative model an average state over the course of the advertisement could be measured instead.
In this example, the advertisement has the opportunity to adapt dynamically to the driver's response 707. A number of states are shown here, although an advertisement could use fewer or more states where appropriate. Also, for branching purposes, particular states can be gathered into a single group. That is, states determined to elicit similar responses to advertisements of a similar type can lead to the same branch of the played advertisement.
In this example, the branching is based on an advertisement having dynamic content associated with it. For example, a mall advertisement may offer some general encouragement to shop at the mall. Because the mall has many stores and restaurants, there is an opportunity to customize the advertisement for a particular user. For example, if a user is sad but becomes happy when a food-based advertisement is played, the mall advertisement can branch into a food advertisement after the general mall portion. Similarly, if a user tends to respond positively to clothing advertisements when the user is happy, the mall advertisement can branch into a clothing advertisement when the user is happy.
In the illustrative embodiment, seven branches with seven corresponding advertisement segments, one per emotional state, are presented in a non-limiting manner. Based on whether the user is surprised 709, sad 711, happy 713, angry 715, arrogant 717, dejected 719 or neutral 721, a different advertising segment can be played. In this example, segments 722, 723, 725, 727, 729, 731 and 733 corresponding to the various emotions can be played, but as noted the segments can be grouped in any number. Additionally or alternatively, other segments can be added for other emotions as needed.
Finally, an ending segment can be played 735, where the ending segment may include, for example, dynamic content based on the previous segments, or may be a static segment. Where appropriate, the advertisement may then be repeated or may end 737.
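For illustration, a minimal Python sketch of this branching playback follows; the segment names, the state-to-segment mapping and the detection callback are assumptions made for this example, not the embodiment's player implementation.
    SEGMENTS = {
        "surprised": "segment_722", "sad": "segment_723", "happy": "segment_725",
        "angry": "segment_727", "arrogant": "segment_729", "dejected": "segment_731",
        "neutral": "segment_733",
    }

    def play(segment_name):
        print(f"playing {segment_name}")

    def run_dynamic_ad(detect_state, intro="mall_intro", ending="mall_ending_735"):
        play(intro)
        state = detect_state()                          # e.g., from the expression recorder
        play(SEGMENTS.get(state, SEGMENTS["neutral"]))  # branch on the detected emotional state
        play(ending)                                    # static or dynamically assembled ending

    run_dynamic_ad(lambda: "happy")   # intro, the "happy" branch, then the ending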
Fig. 8 illustrates an example of mood testing for advertisement evaluation. In this illustrative embodiment, a particular advertisement is delivered to the vehicle for mood testing. For example, if a vendor develops an advertisement whose reception may be problematic, the vendor may wish to test the advertisement, and the user states it produces, across a number of users before general distribution, in order to gain a sense of when the advertisement is likely to be successful.
An advertisement including mood test instructions is delivered to the vehicle 801. At an appropriate time, the advertisement can be played back in the vehicle. This can include repeated playback based on a predefined playback interval 803. In addition, at a time after a predetermined interval (which can be used to evaluate the reaction over time), the process performs the mood test 805.
For example, the test can begin by asking the occupant a question about their opinion of the product 807. In response, the driver can provide input about the product and add comments 809. In addition, in-vehicle camera images can be recorded and time-stamped 811. These images and timestamps can be analyzed and associated with the corresponding moments of the presented advertisement 813.
Further, the images can be analyzed for measurable sentiment while the driver says particular words 815. Since a measurement of the speech is provided in this illustrative embodiment, the mouth can be ignored in the facial recognition analysis. In another example, the mouth may be included only when speech input is not being measured.
The time delay between the time a question is asked and the time it is answered is recorded as a measure of familiarity with the question 817. Further, where appropriate, the advertisement and an index of relevant measurements can be recorded and updated based on the measured factors and variables 819. Additionally, the advertisement's effectiveness based on the observed index can be recorded 821.
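A small Python sketch of the familiarity measure at 817 follows; the timing approach, the answer callback and the record fields are assumptions for illustration only.
    import time

    def ask_and_time(prompt, get_answer):
        asked_at = time.monotonic()
        answer = get_answer(prompt)                # e.g., obtained via speech recognition
        latency = time.monotonic() - asked_at      # a shorter delay suggests greater familiarity
        return {"question": prompt, "answer": answer, "latency_s": round(latency, 2)}

    result = ask_and_time("What did you think of the product?",
                          lambda p: "It seemed useful")
    print(result)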
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. Additionally, the features of the various implementing embodiments may be combined to form further embodiments of the invention.

Claims (10)

1. A computer-implemented method, comprising:
receiving an advertisement;
presenting the advertisement to a vehicle occupant;
visually recording an occupant response over the course of the advertisement presentation using a vehicle camera;
analyzing the visually recorded response to gauge a user reaction to the advertisement; and
based on the analysis, adjusting an advertisement variable metric with respect to the presented advertisement.
2. The method of claim 1, wherein the advertisement presentation includes at least one of a visual presentation and an audible presentation.
3. The method of claim 1, wherein the recording includes recording the user response via a vehicle camera.
4. The method of claim 1, wherein the user response includes facial expressions.
5. The method of claim 4, wherein analysis of the facial expressions yields a determined user emotional state.
6. The method of claim 5, wherein a user profile is updated with the user's emotional state, utilizing the user's emotional-state-based response to the advertisement.
7. A computer-implemented method, comprising:
receiving an advertisement including a facial recognition action instruction;
using a vehicle camera to capture images or video of a vehicle occupant to record a user expression state;
analyzing the user expression state; and
delivering the advertisement when the user expression state meets the facial recognition action instruction.
8. The method of claim 7, wherein the facial recognition action instruction includes an instruction to deliver the advertisement while a facial state corresponds to a particular emotion.
9. The method of claim 7, wherein the facial recognition action instruction is based at least in part on a previously observed response under which a similar advertisement was delivered with a positive response recorded.
10. The method of claim 7, wherein facial expressions are recorded over the course of the advertisement delivery in predefined segments.
CN201410099093.8A 2013-03-15 2014-03-17 Method and apparatus for subjective advertisement effectiveness analysis Pending CN104050587A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/834,330 2013-03-15
US13/834,330 US20140278910A1 (en) 2013-03-15 2013-03-15 Method and apparatus for subjective advertisment effectiveness analysis

Publications (1)

Publication Number Publication Date
CN104050587A true CN104050587A (en) 2014-09-17

Family

ID=51419294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410099093.8A Pending CN104050587A (en) 2013-03-15 2014-03-17 Method and apparatus for subjective advertisement effectiveness analysis

Country Status (3)

Country Link
US (1) US20140278910A1 (en)
CN (1) CN104050587A (en)
DE (1) DE102014204530A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910512A (en) * 2015-12-18 2017-06-30 株式会社理光 The analysis method of voice document, apparatus and system
CN107886045A (en) * 2016-09-30 2018-04-06 本田技研工业株式会社 Facility satisfaction computing device
CN110060117A (en) * 2018-01-11 2019-07-26 丰田自动车株式会社 Recommendation apparatus, recommended method and the storage medium for storing recommended program
CN110803170A (en) * 2018-08-03 2020-02-18 黄学正 Driving assistance system with intelligent user interface

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150142552A1 (en) * 2013-11-21 2015-05-21 At&T Intellectual Property I, L.P. Sending Information Associated with a Targeted Advertisement to a Mobile Device Based on Viewer Reaction to the Targeted Advertisement
CN105095182B (en) * 2014-05-22 2018-11-06 华为技术有限公司 A kind of return information recommendation method and device
US9747573B2 (en) 2015-03-23 2017-08-29 Avatar Merger Sub II, LLC Emotion recognition for workforce analytics
US9852355B2 (en) * 2015-04-21 2017-12-26 Thales Avionics, Inc. Facial analysis for vehicle entertainment system metrics
JP6802170B2 (en) * 2015-08-28 2020-12-16 日本電気株式会社 Impact measuring device, impact measuring method and computer program
US10701429B2 (en) * 2016-08-16 2020-06-30 Conduent Business Services, Llc Method and system for displaying targeted multimedia items to a ridesharing group
DE102017107086A1 (en) * 2017-04-03 2018-10-04 Advanced Digital Solutions Ltd. Additionals for the retail trade
WO2019024068A1 (en) * 2017-08-04 2019-02-07 Xinova, LLC Systems and methods for detecting emotion in video data
DE102018133445A1 (en) 2018-12-21 2020-06-25 Volkswagen Aktiengesellschaft Method and device for monitoring an occupant of a vehicle and system for analyzing the perception of objects
DE102018133453A1 (en) 2018-12-21 2020-06-25 Volkswagen Aktiengesellschaft Method and device for monitoring an occupant of a vehicle
DE102020107062A1 (en) 2020-03-14 2021-09-16 Audi Aktiengesellschaft Method for operating an output device of a motor vehicle, control device, motor vehicle, and server device
US20230135254A1 (en) * 2020-07-01 2023-05-04 Gennadii BAKHCHEVAN A system and a method for personalized content presentation
CN113327140B (en) * 2021-08-02 2021-10-29 深圳小蝉文化传媒股份有限公司 Video advertisement putting effect intelligent analysis management system based on big data analysis

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080228394A1 (en) * 2007-03-16 2008-09-18 Sony Corporation Information supplying system, apparatus mounted in vehicle, information supplying server, program, and information processing method
CN101621668A (en) * 2008-07-01 2010-01-06 索尼株式会社 Information processing apparatus and information processing method
CN102129644A (en) * 2011-03-08 2011-07-20 北京理工大学 Intelligent advertising system having functions of audience characteristic perception and counting
CN102346898A (en) * 2010-09-20 2012-02-08 微软公司 Automatic customized advertisement generation system
CA2775814A1 (en) * 2012-05-04 2012-07-10 Microsoft Corporation Advertisement presentation based on a current media reaction
CN102737331A (en) * 2010-12-02 2012-10-17 微软公司 Targeting advertisements based on emotion
CN102881239A (en) * 2011-07-15 2013-01-16 鼎亿数码科技(上海)有限公司 Advertisement playing system and method based on image identification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2410359A (en) 2004-01-23 2005-07-27 Sony Uk Ltd Display
US7620026B2 (en) * 2006-10-12 2009-11-17 At&T Intellectual Property I, L.P. Methods, systems, and computer program products for providing advertising and/or information services over mobile ad hoc cooperative networks using electronic billboards and related devices
US8085139B2 (en) * 2007-01-09 2011-12-27 International Business Machines Corporation Biometric vehicular emergency management system
US20140039991A1 (en) * 2012-08-03 2014-02-06 Elwha LLC, a limited liabitity corporation of the State of Delaware Dynamic customization of advertising content

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080228394A1 (en) * 2007-03-16 2008-09-18 Sony Corporation Information supplying system, apparatus mounted in vehicle, information supplying server, program, and information processing method
CN101621668A (en) * 2008-07-01 2010-01-06 索尼株式会社 Information processing apparatus and information processing method
CN102346898A (en) * 2010-09-20 2012-02-08 微软公司 Automatic customized advertisement generation system
CN102737331A (en) * 2010-12-02 2012-10-17 微软公司 Targeting advertisements based on emotion
CN102129644A (en) * 2011-03-08 2011-07-20 北京理工大学 Intelligent advertising system having functions of audience characteristic perception and counting
CN102881239A (en) * 2011-07-15 2013-01-16 鼎亿数码科技(上海)有限公司 Advertisement playing system and method based on image identification
CA2775814A1 (en) * 2012-05-04 2012-07-10 Microsoft Corporation Advertisement presentation based on a current media reaction

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910512A (en) * 2015-12-18 2017-06-30 株式会社理光 The analysis method of voice document, apparatus and system
CN107886045A (en) * 2016-09-30 2018-04-06 本田技研工业株式会社 Facility satisfaction computing device
CN107886045B (en) * 2016-09-30 2021-07-20 本田技研工业株式会社 Facility satisfaction calculation device
CN110060117A (en) * 2018-01-11 2019-07-26 丰田自动车株式会社 Recommendation apparatus, recommended method and the storage medium for storing recommended program
CN110803170A (en) * 2018-08-03 2020-02-18 黄学正 Driving assistance system with intelligent user interface

Also Published As

Publication number Publication date
DE102014204530A1 (en) 2014-09-18
US20140278910A1 (en) 2014-09-18

Similar Documents

Publication Publication Date Title
CN104050587A (en) Method and apparatus for subjective advertisement effectiveness analysis
US10334301B2 (en) Providing content responsive to multimedia signals
JP6953464B2 (en) Information push method and equipment
US20140279021A1 (en) Ad Manager for a Vehicle Multimedia System
US20060184800A1 (en) Method and apparatus for using age and/or gender recognition techniques to customize a user interface
US10636046B2 (en) System and method for conducting surveys inside vehicles
CN106952054B (en) System and method for evaluating sales service quality of automobile 4S store
US20160253699A1 (en) Method and apparatus for advertisement screening
US11551279B2 (en) Systems and methods for making vehicle purchase recommendations based on a user preference profile
US20160063561A1 (en) Method and Apparatus for Biometric Advertisement Feedback Collection and Utilization
JP2015005175A (en) Information processing device, communication system, and information processing method
US20120066003A1 (en) Automobile sales training and promotion system
Smit et al. The march to reliable metrics: A half-century of coming closer to the truth
US20120017231A1 (en) Behavior monitoring system
JP7267696B2 (en) Information processing device, information processing method, and information processing program
KR20120137596A (en) Real-time and conversational type questions/answers systm and method using a communication network
WO2021075337A1 (en) Information processing device, information processing method, and information processing program
CN111476613A (en) Shopping guide auxiliary method and device based on passenger flow analysis, server and storage medium
JP7122665B2 (en) Systems and methods for facilitating dynamic brand promotion using self-driving vehicles
JP7314442B2 (en) point signage business system
JP2019135661A (en) Influence degree measuring apparatus and influence degree measuring method
US11983309B2 (en) Device and method to acquire timing of blink motion performed by a dialogue device
CN109711946A (en) A kind of method and vehicle shared server that vehicle is shared
Miyajima et al. Behavior signal processing for vehicle applications
CN113168644A (en) Method and apparatus for monitoring vehicle occupants

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140917