CN113822229A - Expression recognition-oriented user experience evaluation modeling method and device
- Publication number
- CN113822229A (application CN202111265966.4A)
- Authority
- CN
- China
- Prior art keywords
- expression
- user
- video
- experience
- user experience
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention belongs to the field of computer technology and specifically relates to an expression recognition-oriented user experience evaluation modeling method and device, wherein the method comprises the following steps: S1, acquiring an X-second video of a user continuously experiencing a product, together with the corresponding satisfaction score; S2, analyzing and processing the experience video to obtain the user's expression in each frame image; S3, counting the probability distribution of the expressions in each experience video; and S4, establishing a nonlinear mapping relation between the probability distribution of the user's expressions and the corresponding satisfaction scores, obtaining a user experience evaluation model that takes the probability distribution of the expressions as input and the corresponding satisfaction score as output. The method and the device can accurately and conveniently acquire experience information while the user uses the product.
Description
Technical Field
The invention belongs to the field of computer technology and specifically relates to an expression recognition-oriented user experience evaluation modeling method and device.
Background
With the development of science and technology, the range of products people can choose from has widened and product information has become far more transparent, so selection standards keep rising: more and more people no longer pursue product performance alone, and the importance attached to product experience keeps growing. At present, after rounds of competitive development, innovation, and upgrading by major manufacturers, product quality has become a mere entry threshold and technological gaps have flattened; for a product to stand out, improving the user experience has become one of the main product design ideas. The growth of the "experience economy" has driven the growth of "experience design".
The key is to accurately capture what a user feels while experiencing a product. The traditional way of acquiring experience feelings is mainly to fit the customer with sensor equipment, detect physiological sign data such as heartbeat and body temperature while the user experiences the product, and then analyze these sign data to judge the user's experience. This is cumbersome, time-consuming, and labor-intensive, and because detection devices must be attached to the customer's body, few users are willing to cooperate and the sample size is hard to guarantee. In addition, a person's sign data under different emotions may be similar; for example, the readings when angry and when excited are very close, so the results of this detection method are neither intuitive nor fine-grained.
Disclosure of Invention
The invention aims to provide an expression recognition-oriented user experience evaluation modeling method that can accurately and conveniently acquire experience information while a user uses a product.
The basic scheme provided by the invention is as follows:
an expression recognition-oriented user experience evaluation modeling method comprises the following steps:
s1, acquiring an X-second experience video of the user experiencing the product, together with the corresponding satisfaction score;
s2, analyzing and processing the experience video to obtain the user's expression in each frame image;
s3, counting the probability distribution of the expressions in each experience video;
and s4, establishing a nonlinear mapping relation between the probability distribution of the user's expressions and the corresponding satisfaction scores, obtaining a user experience evaluation model that takes the probability distribution of the expressions as input and the corresponding satisfaction score as output.
Operating principle and beneficial effects of the basic scheme:
With this method, experience videos and satisfaction scores can be collected from a subset of users in internal testing, and a user experience evaluation model built from them.
Specifically, during the internal-test stage, a segment of video lasting X seconds is shot while the user experiences the product, and the user fills in a satisfaction score; the corresponding data are thus obtained. The experience video is then analyzed and processed to obtain the expression in each frame image.
Next, the probability distribution of the expressions appearing in each experience video, covering emotion types such as happiness, disgust, and surprise, is counted. A nonlinear mapping relation between the probability distribution of user expressions and the corresponding satisfaction scores is then established, from which the correspondence between an expression probability distribution and a satisfaction score can be obtained. Based on the evaluation model, a corresponding user evaluation system/device can be developed: once a user's experience video is acquired, the corresponding satisfaction score is obtained directly by analyzing the probability distribution of that user's expressions.
Compared with the prior-art approach of acquiring users' experience information by collecting their physiological sign data, acquiring users' experience information this way is simple, samples are easy to obtain, and sample size is guaranteed. Moreover, because satisfaction is associated with the probability distribution of expressions, which is definite and intuitive statistical data, both accuracy and intuitiveness are achieved.
Afterwards, when a merchant wants to obtain a user's satisfaction score, only an experience video showing the user's expressions is needed. Such a video can be shot by the user or filmed at the experience point, which is convenient and fast. Moreover, since expression analysis technology is mature, the probability distribution obtained from the analysis is highly reliable, which ensures the validity of the resulting satisfaction score.
To sum up, the method and the device can accurately and conveniently acquire experience information while the user uses the product.
Further, S4 includes:
s41, constructing a neural network by taking the probability distribution of the expression of the user as input and the corresponding satisfaction score as output;
s42, training the neural network, and updating the weight and the threshold of the neural network;
s43, optimizing the neural network through L2 norm regularization: adding an L2 norm penalty term to the network's preset loss function to suppress overfitting, obtaining the objective function minimized during training, and establishing the nonlinear mapping relation between the probability distribution of the user's expressions and the corresponding satisfaction scores;
and S44, obtaining a user experience evaluation model which takes the probability distribution of the expression as input and the corresponding satisfaction degree score as output.
Beneficial effects: through the neural network's nonlinear adaptive information-processing capability, an accurate underlying mapping between the probability distribution of the user's expressions and the corresponding satisfaction score is obtained, and adding the L2 norm penalty term to the network's preset loss function suppresses overfitting; this guards against the network overfitting, ensures the accuracy of the obtained nonlinear mapping relation, and thus ensures the validity and accuracy of the model.
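The following is a minimal runnable sketch of S41 to S44, assuming the training data already exist as an (L, 9) array of expression probability distributions and an array of L satisfaction scores. scikit-learn's MLPRegressor stands in for the network named in the embodiments, and its alpha parameter is the L2 penalty coefficient added to the loss, corresponding to the L2 norm regularization of S43; the layer sizes, alpha value, and synthetic stand-in data are illustrative assumptions, not values specified by the method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in training data: 200 expression probability distributions (each row
# sums to 1) and 200 satisfaction scores on the 1-5 scale.
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(9), size=200)
y = rng.uniform(1, 5, size=200)

# S41: network with expression distribution as input, satisfaction as output.
model = MLPRegressor(
    hidden_layer_sizes=(16, 8),  # illustrative architecture
    alpha=1e-3,                  # S43: L2 penalty term suppressing overfitting
    max_iter=2000,
    random_state=0,
)
model.fit(X, y)                  # S42: weights and thresholds updated

# S44: the fitted model maps an expression distribution to a predicted score.
new_dist = np.full((1, 9), 1 / 9)
print(model.predict(new_dist))
```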
Further, in S1, the duration of the experience video is 5 seconds or longer.
Beneficial effects: except in extreme cases where the product especially suits, or especially conflicts with, personal preferences, most users need some time to experience a product (such as food), and an experience video of at least 5 seconds ensures that enough expression information is captured.
Further, in S1, the duration of the experience video is 10 seconds or less.
Beneficial effects: if the experience video is too long, users become less willing to cooperate on the one hand, and on the other hand the time required to analyze and process the video grows too long, lowering overall efficiency.
Further, in S2, the analyzing and processing of the experience video comprise: parsing the experience video into complete video-frame image data and identifying the expression in each video frame.
Beneficial effects: owing to personality and other factors, people differ slightly in the intensity of facial expression and in reaction time when an emotion appears; parsing the video into complete frame images before expression recognition and statistical analysis ensures a sufficient amount of data for the statistics and thus the accuracy of the subsequent modeling.
Further, in S2, when the expression of each video frame is identified, statistical features are extracted from the frame by machine-vision recognition against a predefined expression library to obtain the expression of that frame.
Beneficial effects: video frames contain very many pixels and each second of video contains 20 frames; performing expression recognition by eigenvalue-extraction processing or expression-vector extraction analysis would therefore be very inefficient and computationally heavy. Obtaining the expressions by statistical feature extraction via machine vision keeps the feature dimensionality low and the computation small, so recognition can be completed quickly while overall accuracy is maintained.
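As an illustration of this step, the sketch below decodes an experience video into complete frames with OpenCV and labels each frame; classify_expression is a hypothetical placeholder for the machine-vision recognizer built on the predefined expression library, whose implementation the text does not disclose.

```python
import cv2  # pip install opencv-python

def classify_expression(frame):
    """Hypothetical stand-in for the machine-vision recognizer:
    returns an expression label from the library, or None."""
    raise NotImplementedError

def expressions_per_frame(video_path):
    """Parse the experience video into complete frames (S2) and
    return one expression label (or None) per frame."""
    cap = cv2.VideoCapture(video_path)
    labels = []
    while True:
        ok, frame = cap.read()   # decode the next frame; ok is False at EOF
        if not ok:
            break
        labels.append(classify_expression(frame))
    cap.release()
    return labels
```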
Further, in S2, the expressions in the expression library include neutral (no emotion), disgust, fear, happiness, sadness, surprise, anger, pout, and grimace.
Beneficial effects: compared with a conventional expression library, the applicant has creatively added pout and grimace, because some users have livelier, more animated personalities and will subconsciously pout or make faces during the experience. These two expressions do not by themselves indicate a positive or a negative experience, but analyzed in combination with the other expressions, the polarity of the experience and its degree can be obtained. With users of this type, analysis against a conventional expression library easily misjudges the degree; for example, the user's real experience is merely good, but the existing library rates it very good. Using this expression library avoids that situation, ensures the accuracy of the user experience evaluation model and of the satisfaction scores subsequently derived from user experience videos, and provides merchants with more accurate feedback data so that they understand the user experience more precisely.
Further, in S1, after the users' satisfaction scores for each attribute are obtained in questionnaire form, each user's per-attribute scores are integrated through the URWA algorithm to obtain that user's satisfaction score. The URWA aggregation is a weighted average of the attribute scores (as follows from the variable definitions; the formula itself appears only as an image in the original):
S_i = Σ_{j=1}^{m} ω_ij · a_ij
where S_i represents the satisfaction score of the experiencer numbered i; a_ij represents the evaluation score of the experiencer numbered i for the attribute numbered j; ω_i is the set of weights over the attribute experiences of the experiencer numbered i, and ω_ij is the weight of the attribute experience numbered j for the experiencer numbered i; m is the total number of attributes; ω_i = (ω_i1, ω_i2, ..., ω_im), with every ω_ij ∈ [0, 1] and Σ_j ω_ij = 1. The formula for calculating ω_ij is likewise given as an image in the original.
Beneficial effects: the URWA algorithm is the applicant's self-defined algorithm. By this method, uncertainty in the scoring is removed, individual data that differ greatly from the general data are prevented from affecting the final modeling result, noise such as person-to-person and environment-to-environment variation is filtered out, and the user experience of the product is obtained directly.
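A sketch of the aggregation step under stated assumptions: the weighted-average form follows from the variable definitions above, but since the patent's formula for deriving the weights ω_ij is given only as an image, the sketch takes the weights as already computed.

```python
def urwa_satisfaction(scores, weights):
    """Aggregate one experiencer's per-attribute evaluation scores a_ij
    into a satisfaction score S_i = sum_j w_ij * a_ij.
    weights must each lie in [0, 1] and sum to 1."""
    assert all(0.0 <= w <= 1.0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * a for w, a in zip(weights, scores))

# Example: three attributes weighted equally.
print(urwa_satisfaction([4, 5, 3], [1/3, 1/3, 1/3]))  # -> 4.0
```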
Further, in S1, the experience video is one continuous video.
Beneficial effects: if the experience video were spliced together from multiple clips, the progression of the user's expressions and emotions during the experience would be hard to capture accurately, compromising the accuracy of the subsequently obtained user experience evaluation model.
Further, the method also comprises S21: analyzing the proportion of expression-bearing video frames among all video frames; if the proportion is smaller than a preset value, returning to S1; if it is not smaller than the preset value, dividing the video frames into video frames of a plurality of time periods according to their time sequence and analyzing, period by period in time sequence, the number of expression-bearing video frames in each period; if the number in every period is greater than a preset standard value, proceeding to S3, and otherwise returning to S1.
Beneficial effects: the core of the user experience evaluation model obtained by the invention is the statistical analysis of expressions during the user experience, and for that statistical analysis to be valid, the user's expressions must actually have been captured. If the proportion of expression-bearing video frames among all frames of the experience video is smaller than the preset value, the user's expressions were not captured at many points in time, and the resulting expression probability distribution cannot accurately reflect the real situation; the method therefore returns to S1 and reacquires the experience video.
In addition, even if the overall proportion is greater than the preset value, the obtained expression probability distribution may still fail to reflect the real situation if the number of expression-bearing frames in some period is not greater than the preset standard value. For example, suppose the product is a food that tastes bitter first and sweet afterwards. While eating it, the user's expression normally transitions from negative (such as disgust) to positive (such as happiness), and the share of each expression serves as a reference for how quickly the taste transitions and for the taste level the user experiences, both of which influence the satisfaction score. If the number of expression-bearing frames in some period falls short of the preset value, the expressions of that period are undercounted, the shares of the expressions are distorted, and the probability distribution of that video becomes interference data when the nonlinear mapping relation is established, harming the accuracy of the subsequently obtained user evaluation model. The method therefore returns to S1 to reacquire the experience video.
The overall proportion is checked first because, when the second condition (the number of expression-bearing frames in some period being no greater than the preset standard value) fails, for instance owing to a poor shooting angle, the first condition (the overall proportion of expression-bearing frames falling below the preset value) usually fails as well; since analyzing the whole video is faster than analyzing period by period, screening in this order keeps the overall efficiency of experience-video screening high.
In conclusion, S21 ensures both the accuracy of the subsequently obtained user evaluation model and the overall efficiency of experience-video screening; a screening sketch follows.
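A minimal sketch of the S21 screening logic, assuming labels is the per-frame output of S2 (None where no expression was detected); the ratio threshold, number of periods, and per-period minimum are illustrative parameters that the patent leaves to the practitioner.

```python
def passes_screening(labels, min_ratio=0.6, n_periods=5, min_per_period=10):
    """Return True to proceed to S3, False to return to S1."""
    n = len(labels)
    with_expr = [lab is not None for lab in labels]
    # First condition: overall ratio of expression-bearing frames.
    if sum(with_expr) / n < min_ratio:
        return False                      # return to S1 and reacquire video
    # Second condition: per-period counts, checked in temporal order.
    size = n // n_periods
    for k in range(n_periods):
        start = k * size
        stop = n if k == n_periods - 1 else (k + 1) * size
        if sum(with_expr[start:stop]) <= min_per_period:
            return False                  # return to S1
    return True                           # proceed to S3
```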
Another objective of the present invention is to provide an expression recognition-oriented user experience evaluation device that uses the above expression recognition-oriented user experience evaluation modeling method; it comprises an acquisition module, an analysis module, and a storage module.
The acquisition module is used to acquire a continuous X-second experience video of the user. The user experience evaluation model is prestored in the analysis module; the analysis module analyzes and processes the user's experience video to obtain the expression in each frame image and then counts the probability distribution of the expressions, and it also obtains the user's satisfaction score from the user evaluation model combined with that probability distribution. The storage module is used to store users' satisfaction scores.
Beneficial effects: when a merchant wants to obtain a user's satisfaction score, the device collects an experience video showing the user's expressions through the acquisition module; the analysis module automatically performs the subsequent image processing, counts the probability distribution of the expressions, computes the satisfaction score, and stores it in the storage module. The merchant only needs to consult the scores in the storage module.
Compared with the prior-art approach of acquiring users' experience information by collecting their physiological sign data, users' satisfaction score data can thus be acquired conveniently, quickly, and accurately.
Drawings
FIG. 1 is a flowchart of a first embodiment of a user experience evaluation modeling method for expression recognition according to the present invention;
fig. 2 is a logic block diagram of a first embodiment of a user experience evaluation device for expression recognition according to the present invention.
Detailed Description
The following is described in further detail through specific embodiments:
example one
As shown in fig. 1, an expression recognition-oriented user experience evaluation modeling method includes:
s1, acquiring an X-second video of the user continuously experiencing the product, together with the corresponding satisfaction score, where 5 ≤ X ≤ 10. The specific content of the experience video can be determined by the merchant's specific needs. Taking food as an example, a merchant who wants experience data for a certain food product can capture X seconds of continuous video while the customer eats it. The video can be shot by the customer or filmed at an offline experience store, both of which are very convenient; in this embodiment it is shot with a mobile phone in an experience store.
In this embodiment, the video duration X is 5.5 seconds. Except in extreme cases where the product especially suits, or especially conflicts with, personal preferences, most users need some time to experience a product (such as food), and an experience video of at least 5 seconds ensures that enough expression information is captured. Conversely, if the experience video is too long, users become less willing to cooperate on the one hand, and on the other hand the time required to analyze and process the video grows too long, lowering overall efficiency. An experience video of 5 to 10 seconds balances the sufficiency of expression information, customers' willingness to cooperate, and analysis efficiency.
Note that the experience video in S1 is one continuous video. If it were spliced together from multiple clips, the progression of the user's expressions and emotions during the experience would be hard to capture accurately, compromising the accuracy of the subsequently obtained user experience evaluation model.
In this embodiment, the satisfaction score is obtained in questionnaire form. The questionnaire may be paper or electronic; here an electronic questionnaire is used, obtained and filled in by scanning a code, which is more convenient for subsequent statistical analysis than paper. The questionnaire covers satisfaction with each attribute of the experienced product, the attributes being coded G_1, G_2, ..., G_m. Satisfaction is divided into five grades (very unsatisfactory, unsatisfactory, general, satisfactory, and very satisfactory), with corresponding scores of 1, 2, 3, 4, and 5, as shown in Table 1:
TABLE 1 Satisfaction score table

Grade | Very unsatisfactory | Unsatisfactory | General | Satisfactory | Very satisfactory
---|---|---|---|---|---
Score | 1 | 2 | 3 | 4 | 5
The satisfaction of the experiencer numbered i with the attribute numbered j is a_ij, and the satisfaction score of the experiencer numbered i is S_i, as shown in Table 2:
TABLE 2 Experiencer attribute satisfaction (the table is given as an image in the original)
When obtaining user satisfaction, the scores of each user for each attribute are integrated through the URWA algorithm to obtain the user's final satisfaction score for the product. The URWA aggregation is a weighted average of the attribute scores (as follows from the variable definitions; the formula itself appears only as an image in the original):
S_i = Σ_{j=1}^{m} ω_ij · a_ij
where S_i represents the satisfaction score of the experiencer numbered i; a_ij represents the evaluation score of the experiencer numbered i for the attribute numbered j; ω_i = (ω_i1, ω_i2, ..., ω_im) is the set of weights over that experiencer's attribute experiences, with every ω_ij ∈ [0, 1] and Σ_j ω_ij = 1; m is the total number of attributes. The formula for calculating ω_ij is likewise given as an image in the original.
By this method, uncertainty in the scoring is removed, individual data that differ greatly from the general data are prevented from affecting the final modeling result, noise such as person-to-person and environment-to-environment variation is filtered out, and the user experience of the product is obtained directly.
S2, analyzing and processing the experience video to obtain the user's expression in each frame image. The analysis and processing comprise parsing the experience video into complete video-frame image data and identifying the expression in each video frame.
Specifically, when the expression of each video frame is identified, statistical features are extracted from the frame by machine-vision recognition against a predefined expression library to obtain the expression of that frame. Video frames contain very many pixels and each second of video contains 20 frames; performing expression recognition by eigenvalue-extraction processing or expression-vector extraction analysis would therefore be very inefficient and computationally heavy, so the expressions are obtained by statistical feature extraction via machine vision. In this embodiment, the expressions in the library are neutral (no emotion), disgust, fear, happiness, sadness, surprise, anger, pout, and grimace; the expressions and their codes are shown in Table 3:
TABLE 3 Expression and code correspondence table

Expression | Neutral | Disgust | Fear | Happiness | Sadness | Surprise | Anger | Pout | Grimace
---|---|---|---|---|---|---|---|---|---
Code | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9
S3, counting the probability distribution of various expressions in each experience video;
specifically, as shown in table 4, the probability of occurrence of 9 expressions in each experience video, i.e. no emotion E, is counted1Aversion to E2Fear E3Happy E4Heart injury E5Surprised E6Anger E7Puckered tip E8And ghost face E9Probability of occurrence, respectively And recording the user experience satisfaction degree scores corresponding to the videos correspondingly.
TABLE 4 Expression probability and user experience satisfaction score record table (given as an image in the original)
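A sketch of the counting in S3, assuming labels is the per-frame expression list produced by S2 and that the order of EXPRESSIONS fixes the codes E1 through E9 used in the text.

```python
from collections import Counter

EXPRESSIONS = ["neutral", "disgust", "fear", "happiness", "sadness",
               "surprise", "anger", "pout", "grimace"]  # E1..E9

def expression_distribution(labels):
    """Return the occurrence probability of each expression in one video."""
    counted = Counter(lab for lab in labels if lab is not None)
    total = sum(counted.values())
    if total == 0:
        raise ValueError("no expression-bearing frames; reacquire video (S1)")
    return [counted[e] / total for e in EXPRESSIONS]
```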
In S4, a non-linear mapping relationship between the probability distribution of the expression of the user and the corresponding satisfaction score is established, and a user experience evaluation model in which the probability distribution of the expression is used as input and the corresponding satisfaction score is used as output is obtained. Specifically, S4 includes:
s41, constructing a neural network by taking the probability distribution of the expression of the user as input and the corresponding satisfaction score as output; in this embodiment, the constructed neural network is a BP neural network.
Specifically, the input of the neural network is an input matrix X composed of the 9-dimensional expression probability distribution data, one row per experience video, so that X has L rows and 9 columns (the matrix itself appears as an image in the original); the output is an output matrix T formed from the user experience satisfaction scores of the facial-expression videos:
T = [S_1, S_2, ..., S_L]^T.
In other embodiments, if user experience satisfaction correlates strongly with gender or age, the user's gender and age may also be added as input parameters. In that case, S3 counts the probability distribution of the expressions in each experience video together with the gender and age of the person in each video.
Specifically, as shown in Table 5, the probabilities of occurrence of the 9 expressions in each test video, that is, the occurrence probabilities of neutral E1, disgust E2, fear E3, happiness E4, sadness E5, surprise E6, anger E7, pout E8, and grimace E9, are counted together with the subjects' genders g_1, g_2, ..., g_L and ages a_1, a_2, ..., a_L, and the user experience satisfaction score corresponding to each video is recorded alongside.
TABLE 5 Expression probability and user experience satisfaction score record table II (given as an image in the original)
And S4, establishing a nonlinear mapping relation between the probability distribution of the expression of the user and the corresponding satisfaction degree score, and obtaining a user experience evaluation model taking the probability distribution of the expression as input and the corresponding satisfaction degree score as output. Specifically, S4 includes:
s41, constructing a neural network by taking the probability distribution of the expression of the user as input and the corresponding satisfaction score as output; in this embodiment, the constructed neural network is a BP neural network.
Specifically, the input of the neural network is an input matrix X composed of the 9 expression probability distribution data together with the subjects' gender and age data, one row per experience video (the matrix itself appears as an image in the original); the output is an output matrix T formed from the user experience satisfaction scores of the facial-expression videos:
T = [S_1, S_2, ..., S_L]^T.
s42, training the neural network, and updating the weight and the threshold of the neural network;
s43, optimizing the neural network through L2 norm regularization: an L2 norm penalty term is added to the network's preset loss function to suppress overfitting, yielding the objective function minimized during training, and the nonlinear mapping relation between the probability distribution of the user's expressions and the corresponding satisfaction scores is established. The expressions involved in L2 norm regularization are standard in the art and are not repeated here. In this embodiment, the neural network is a radial basis function neural network.
And S44, obtaining a user experience evaluation model which takes the probability distribution of the expression as input and the corresponding satisfaction degree score as output.
Through the neural network's nonlinear adaptive information-processing capability, an accurate underlying mapping between the probability distribution of the user's expressions and the corresponding satisfaction score is obtained, and adding the L2 norm penalty term to the network's preset loss function suppresses overfitting; this guards against the network overfitting, ensures the accuracy of the obtained nonlinear mapping relation, and thus ensures the validity and accuracy of the model.
As shown in fig. 2, based on the expression recognition-oriented user experience evaluation modeling method, the present application further provides an expression recognition-oriented user experience evaluation device developed around the user experience evaluation model obtained by that method. The device comprises an acquisition module, an analysis module, and a storage module. The acquisition module is a camera; the analysis module and storage module are integrated at a back end, which in this embodiment is a cloud server.
The acquisition module is used to acquire a continuous X-second experience video of the user. The user experience evaluation model is prestored in the analysis module; the analysis module analyzes and processes the user's experience video to obtain the expression in each frame image and then counts the probability distribution of the expressions, and it also obtains the user's satisfaction score from the user evaluation model combined with that probability distribution. The storage module is used to store users' satisfaction scores.
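A minimal sketch of how the three modules might be wired together, reusing the S2/S3 helpers sketched earlier; the class and its in-memory score list are assumptions standing in for the camera feed and the cloud-side storage module.

```python
class ExperienceEvaluator:
    """Analysis module with the evaluation model prestored, plus an
    in-memory stand-in for the storage module."""

    def __init__(self, model):
        self.model = model    # prestored user experience evaluation model
        self.scores = []      # storage module stand-in

    def process(self, video_path):
        labels = expressions_per_frame(video_path)    # S2 (sketched above)
        dist = expression_distribution(labels)        # S3 (sketched above)
        score = float(self.model.predict([dist])[0])  # satisfaction score
        self.scores.append(score)                     # store for the merchant
        return score
```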
The specific implementation process is as follows:
facial expressions are an important means of communicating between people. A person's mood and mood is often manifested by facial expressions. Facial expression recognition is the basis of human emotion recognition, and the result of user experience testing is closely related to expression feedback of the user when participating in experience.
By using the method, the experience videos and the satisfaction degree scores of part of users can be obtained in an internal test mode, so that a user experience evaluation model is established. Specifically, in the internal test stage, when a user experiences a product, an experience video with one end lasting for X seconds is shot, and the user fills in a satisfaction score and then acquires corresponding data. And analyzing and processing the experience video to obtain the expression in each frame of image.
Due to the characters and other reasons, when the emotion appears, the degree of the facial expression and the reaction time of a person are slightly different.
Compared with a conventional expression library, the puckered lips and the ghosted faces are creatively added into the expression library of the application by the applicant, because the characters of some users are more vivid and active, the puckered lips or the ghosted faces can be subconsciously made during experience, but the two expressions do not necessarily represent positive experience or negative experience, and the puckered lips or the ghosted faces and the positive or negative degree can be obtained by combining with other expressions for analysis. When users of the type are met, the situation that the analysis degree is inaccurate easily occurs according to the analysis of the conventional expression library, for example, the real experience of the users is only good, but the analysis of the existing expression library is very good. By using the expression library, the situation can be avoided, the accuracy of the user experience evaluation model is guaranteed, the accuracy of subsequent satisfaction scoring obtained through the user experience video is guaranteed, more accurate feedback data are provided for merchants, and the merchants can know the user experience situation more accurately.
Next, the probability distribution of the expressions appearing in each experience video, covering emotion types such as happiness, disgust, and surprise, is counted. A nonlinear mapping relation between the probability distribution of user expressions and the corresponding satisfaction scores is then established, from which the correspondence between an expression probability distribution and a satisfaction score can be obtained. Based on the evaluation model, a corresponding user evaluation system/device can be developed: once a user's experience video is acquired, the corresponding satisfaction score is obtained directly by analyzing the probability distribution of that user's expressions.
When a merchant wants to obtain a user's satisfaction score, only an experience video showing the user's expressions is needed. Specifically, the acquisition module collects the experience video; the analysis module automatically performs the subsequent image processing, counts the probability distribution of the expressions, computes the satisfaction score, and stores it in the storage module. The merchant only needs to consult the scores in the storage module.
Compared with the prior-art approach of acquiring users' experience information by collecting their physiological sign data, acquiring users' experience information this way is simple, samples are easy to obtain, and sample size is guaranteed. Moreover, because satisfaction is associated with the probability distribution of expressions, which is definite and intuitive statistical data, both accuracy and intuitiveness are achieved.
To sum up, the method and the device can accurately and conveniently acquire the experience information when the user uses the product.
Example two
Differing from the first embodiment, the expression recognition-oriented user experience evaluation modeling method of this embodiment is used for food products, and the method further comprises S21: analyzing the proportion of expression-bearing video frames among all video frames; if the proportion is smaller than a preset value, returning to S1; if it is not smaller than the preset value, dividing the video frames into video frames of a plurality of time periods according to their time sequence and analyzing, period by period in time sequence, the number of expression-bearing video frames in each period; if the number in every period is greater than a preset standard value, proceeding to S3, and otherwise returning to S1. The proportion threshold and the preset standard value can be set by those skilled in the art according to the device used to shoot the experience video and the specific type of product experienced.
The core of the user experience evaluation model obtained by the invention is the statistical analysis of expressions during the user experience. If the proportion of expression-bearing video frames among all frames of the experience video is smaller than the preset value, the user's expressions were not captured at many points in time, the resulting expression probability distribution cannot accurately reflect the real situation, and the method returns to S1 to reacquire the experience video.
In addition, even if the overall proportion is greater than the preset value, the obtained expression probability distribution may still fail to reflect the real situation if the number of expression-bearing frames in some period is not greater than the preset standard value. For example, suppose the product is a food that tastes bitter first and sweet afterwards. While eating it, the user's expression normally transitions from negative (such as disgust) to positive (such as happiness), and the share of each expression serves as a reference for how quickly the taste transitions and for the taste level the user experiences, both of which influence the satisfaction score. If the number of expression-bearing frames in some period falls short of the preset value, the expressions of that period are undercounted, the shares of the expressions are distorted, and the probability distribution of that video becomes interference data when the nonlinear mapping relation is established, harming the accuracy of the subsequently obtained user evaluation model. The method returns to S1 to reacquire the experience video.
The overall proportion is checked first because, when the second condition (the number of expression-bearing frames in some period being no greater than the preset standard value) fails, for instance owing to a poor shooting angle, the first condition (the overall proportion of expression-bearing frames falling below the preset value) usually fails as well; since analyzing the whole video is faster than analyzing period by period, screening in this order keeps the overall efficiency of experience-video screening high.
In conclusion, S21 ensures both the accuracy of the subsequently obtained user evaluation model and the overall efficiency of experience-video screening.
The foregoing is merely an embodiment of the present invention. Common general knowledge, such as well-known specific structures and characteristics, is not described here in detail; a person skilled in the art knows the ordinary technical knowledge of the field as of the filing date or the priority date, has access to the prior art of the field, and has the ability to apply routine experimentation, and could therefore, under the teachings of this application, complete and implement the present invention, with certain typical known structures or methods posing no obstacle to its implementation. It should be noted that those skilled in the art may make several changes and modifications without departing from the structure of the present invention; these should also be regarded as falling within the protection scope of the present invention and do not affect the effect of implementing the invention or the practicability of the patent. The scope of protection of this application is determined by the content of the claims, and the description of the embodiments in the specification may be used to interpret the content of the claims.
Claims (10)
1. An expression recognition-oriented user experience evaluation modeling method, characterized by comprising the following steps:
s1, acquiring X seconds of experience video when the user experiences the product and corresponding satisfaction scores;
s2, analyzing and processing the experience video to obtain the expression of the user in each frame of image;
s3, counting the probability distribution of various expressions in each experience video;
and S4, establishing a nonlinear mapping relation between the probability distribution of the expression of the user and the corresponding satisfaction degree score, and obtaining a user experience evaluation model taking the probability distribution of the expression as input and the corresponding satisfaction degree score as output.
2. The expression recognition-oriented user experience evaluation modeling method according to claim 1, wherein S4 includes:
s41, constructing a neural network by taking the probability distribution of the expression of the user as input and the corresponding satisfaction score as output;
s42, training the neural network, and updating the weight and the threshold of the neural network;
s43, optimizing the neural network through L2 norm regularization: adding an L2 norm penalty term to the network's preset loss function to suppress overfitting, obtaining the objective function minimized during training, and establishing the nonlinear mapping relation between the probability distribution of the user's expressions and the corresponding satisfaction scores;
and S44, obtaining a user experience evaluation model which takes the probability distribution of the expression as input and the corresponding satisfaction degree score as output.
3. The expression recognition-oriented user experience evaluation modeling method according to claim 1, characterized in that: in S1, the duration of the experience video is 5 seconds or longer.
4. The expression recognition-oriented user experience evaluation modeling method according to claim 3, characterized in that: in S1, the duration of the experience video is 10 seconds or less.
5. The expression recognition-oriented user experience evaluation modeling method according to claim 1, characterized in that: in S2, the analyzing and processing the experience video includes: and analyzing the experience video to obtain complete video frame image data, and identifying the expression in each video frame.
6. The expression recognition-oriented user experience evaluation modeling method according to claim 5, characterized in that: in S2, when the expression of each video frame is recognized, the expression of each video frame is obtained after extracting statistical features from the video frame by machine vision recognition according to a predefined expression library.
7. The expression recognition-oriented user experience evaluation modeling method according to claim 6, characterized in that: in S2, the expressions in the expression library include neutral (no emotion), disgust, fear, happiness, sadness, surprise, anger, pout, and grimace.
8. The expression recognition-oriented user experience evaluation modeling method according to claim 1, characterized in that: in S1, after the users' satisfaction scores for each attribute are obtained in questionnaire form, each user's per-attribute scores are integrated through the URWA algorithm to obtain that user's satisfaction score; the URWA aggregation is S_i = Σ_{j=1}^{m} ω_ij · a_ij, where S_i represents the satisfaction score of the experiencer numbered i; a_ij represents the evaluation score of the experiencer numbered i for the attribute numbered j; ω_i = (ω_i1, ω_i2, ..., ω_im) is the set of weights over that experiencer's attribute experiences, with every ω_ij ∈ [0, 1] and Σ_j ω_ij = 1; m is the total number of attributes; and the formula for calculating ω_ij is given as an image in the original.
9. The expression recognition-oriented user experience evaluation modeling method according to claim 8, characterized in that: the method further comprises S21: analyzing the proportion of expression-bearing video frames among all video frames; if the proportion is smaller than a preset value, returning to S1; if it is not smaller than the preset value, dividing the video frames into video frames of a plurality of time periods according to their time sequence and analyzing, period by period in time sequence, the number of expression-bearing video frames in each period; if the number in every period is greater than a preset standard value, proceeding to S3, and otherwise returning to S1.
10. An expression recognition-oriented user experience evaluation device, characterized in that: it uses the expression recognition-oriented user experience evaluation modeling method of any of claims 1-9;
it comprises an acquisition module, an analysis module, and a storage module; the acquisition module is used to acquire a continuous X-second experience video of the user; the user experience evaluation model is prestored in the analysis module; the analysis module is used to analyze and process the user's experience video to obtain the expression in each frame image and then count the probability distribution of the expressions; the analysis module is also used to obtain the user's satisfaction score from the user evaluation model combined with the probability distribution of the user's expressions; and the storage module is used to store users' satisfaction scores.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111265966.4A CN113822229A (en) | 2021-10-28 | 2021-10-28 | Expression recognition-oriented user experience evaluation modeling method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111265966.4A CN113822229A (en) | 2021-10-28 | 2021-10-28 | Expression recognition-oriented user experience evaluation modeling method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113822229A (en) | 2021-12-21
Family
ID=78919109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111265966.4A Pending CN113822229A (en) | 2021-10-28 | 2021-10-28 | Expression recognition-oriented user experience evaluation modeling method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113822229A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109492529A (en) * | 2018-10-08 | 2019-03-19 | 中国矿业大学 | A kind of Multi resolution feature extraction and the facial expression recognizing method of global characteristics fusion |
CN109858405A (en) * | 2019-01-17 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Satisfaction evaluation method, apparatus, equipment and storage medium based on micro- expression |
CN109919102A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of self-closing disease based on Expression Recognition embraces body and tests evaluation method and system |
CN109919099A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of user experience evaluation method and system based on Expression Recognition |
WO2019184125A1 (en) * | 2018-03-30 | 2019-10-03 | 平安科技(深圳)有限公司 | Micro-expression-based risk identification method and device, equipment and medium |
- 2021-10-28: application CN202111265966.4A filed in China; published as CN113822229A, status Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019184125A1 (en) * | 2018-03-30 | 2019-10-03 | 平安科技(深圳)有限公司 | Micro-expression-based risk identification method and device, equipment and medium |
CN109492529A (en) * | 2018-10-08 | 2019-03-19 | 中国矿业大学 | A kind of Multi resolution feature extraction and the facial expression recognizing method of global characteristics fusion |
CN109858405A (en) * | 2019-01-17 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Satisfaction evaluation method, apparatus, equipment and storage medium based on micro- expression |
CN109919102A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of self-closing disease based on Expression Recognition embraces body and tests evaluation method and system |
CN109919099A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of user experience evaluation method and system based on Expression Recognition |
Non-Patent Citations (1)
Title |
---|
孔国强 (KONG Guoqiang), ed.: "Technology Economics" (《技术经济学》), 31 August 1997, pages 71-77
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |