CN113111690B - Facial expression analysis method and system and satisfaction analysis method and system - Google Patents

Facial expression analysis method and system and satisfaction analysis method and system

Info

Publication number
CN113111690B
CN113111690B (application CN202010033040.1A)
Authority
CN
China
Prior art keywords
facial expression
determining
face
expression
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010033040.1A
Other languages
Chinese (zh)
Other versions
CN113111690A (en)
Inventor
郭明坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd filed Critical Beijing Lynxi Technology Co Ltd
Priority to CN202010033040.1A priority Critical patent/CN113111690B/en
Priority to PCT/CN2021/071233 priority patent/WO2021143667A1/en
Publication of CN113111690A publication Critical patent/CN113111690A/en
Application granted granted Critical
Publication of CN113111690B publication Critical patent/CN113111690B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a facial expression analysis method and system, comprising the following steps: acquiring a facial expression video clip to be analyzed, and obtaining a picture stream in the video clip; analyzing the facial expression index of each frame of picture in the picture stream, and determining a facial expression spectrogram corresponding to the picture stream; determining, according to the facial expression spectrogram, a reference line corresponding to the face in a natural state, and determining a natural emotion area of the face in the natural state based on the reference line; and dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion area as a reference. The invention also provides a facial expression satisfaction analysis method and system. The invention utilizes the complete video information of the user's expression, fully considers the fluctuation of the expression, can determine the true emotion of the user, and can accurately determine the satisfaction degree of the user.

Description

Facial expression analysis method and system and satisfaction analysis method and system
Technical Field
The invention relates to the technical field of data analysis, in particular to a facial expression analysis method and system and a satisfaction analysis method and system.
Background
In the related art, facial expression analysis is mostly performed by training a neural network model on training data and corresponding labeling information, and inputting the object to be predicted into the trained neural network model to obtain a facial expression analysis result. However, a facial expression fluctuates over time: a face does not stay happy, calm or angry in every second, so the facial expression analysis result obtained in this way is inaccurate.
Disclosure of Invention
In order to solve the problems, the invention aims to provide a facial expression analysis method and system and a satisfaction analysis method and system, which utilize complete video information of a user expression, fully consider expression fluctuation, determine the true emotion of the user and accurately determine the satisfaction of the user.
The invention provides a facial expression analysis method, which comprises the following steps:
s1, acquiring a facial expression video clip to be analyzed, and acquiring a picture stream in the video clip;
s2, analyzing facial expression indexes of each frame of picture in the picture stream, and determining a facial expression spectrogram corresponding to the picture stream;
s3, determining a reference line corresponding to the face in a natural state according to the facial expression spectrogram, and determining a natural emotion area of the face in the natural state based on the reference line;
And S4, dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion areas as references.
As a further improvement of the present invention, in S1, the picture stream in the video clip is obtained by frame-by-frame extraction, frame extraction at fixed intervals, or key frame extraction.
As a further improvement of the present invention, obtaining a picture stream in a video clip includes: dividing the video segment to obtain a plurality of video sub-segments, and randomly or fixedly extracting at least one frame in each video sub-segment;
and determining the picture flow in the video clip according to the extracted multiple picture frames.
As a further improvement of the present invention, the method further comprises: and carrying out face detection on each frame of picture in the picture stream to obtain face images in each frame of picture.
As a further improvement of the present invention, S2 includes:
s21, dividing the face image into a plurality of areas, wherein each area comprises a plurality of key feature points for determining the facial expression index;
S22, for each frame of picture, determining the key feature points contained in each area, determining the expression score corresponding to each area, and determining the facial expression index of the frame of picture according to the expression scores corresponding to the areas;
S23, obtaining a facial expression spectrogram corresponding to the picture stream according to the facial expression indexes of all the pictures.
As a further improvement of the present invention, S21 includes:
carrying out face feature point recognition on the face image to obtain a plurality of feature points of the face image;
identifying a plurality of key feature points for determining the facial expression index from the plurality of feature points;
and dividing the face into a plurality of areas according to the plurality of key feature points, wherein each area comprises a plurality of key feature points used for determining the facial expression index.
As a further improvement of the present invention, S22 includes:
determining at least one included angle between key feature point connecting lines in each region, and determining expression scores corresponding to each region according to the at least one included angle;
determining the weight corresponding to each region;
and determining the facial expression index of each frame of picture according to the expression scores corresponding to the areas and the weights corresponding to the areas.
As a further improvement of the present invention, S3 includes:
s31, determining a first interval with highest occurrence frequency of the facial expression index in the facial expression spectrogram;
s32, determining a datum line corresponding to the face in a natural state according to the first interval;
And S33, taking the datum line as a center, and taking a second area with a certain width range above and below the datum line as a natural emotion area of the face in a natural state.
As a further improvement of the present invention, S32 includes:
determining a horizontal centerline of the first section;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold value and smaller than a second threshold value, determining the horizontal center line as a datum line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold value, determining the horizontal line corresponding to the first threshold value as a datum line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is greater than or equal to a second threshold value, determining the horizontal line corresponding to the second threshold value as a datum line corresponding to the face in a natural state.
As a further improvement of the present invention, in S4, in the facial expression spectrogram, an area above the natural emotion area is determined as a positive emotion area, and an area below the natural emotion area is determined as a negative emotion area.
The invention also provides a facial expression satisfaction analysis method, which comprises the following steps: s5, in each time period in the facial expression video clips to be analyzed, analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram, and determining the satisfaction degree of the user.
As a further improvement of the present invention, S5 includes:
s51, dividing a facial expression video segment to be analyzed into a plurality of time periods, and respectively calculating the proportion of different expressions in each time period according to the facial expression spectrogram;
s52, determining the weight corresponding to each time period;
and S53, determining a satisfaction result according to the proportion of different expressions in each time period and the weight corresponding to each time period.
The invention also provides a facial expression analysis system, which adopts the facial expression analysis method, comprising the following steps:
the picture acquisition module is used for acquiring a facial expression video clip to be analyzed and acquiring a picture stream in the video clip;
the expression spectrum module is used for analyzing the facial expression index of each frame of picture in the picture stream and determining a facial expression spectrogram corresponding to the picture stream;
the expression reference module is used for determining a reference line corresponding to the face in a natural state according to the facial expression spectrogram and determining a natural emotion area of the face in the natural state based on the reference line;
and the expression partitioning module is used for partitioning the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion areas as references.
As a further improvement of the invention, in the picture acquisition module, the picture stream in the video segment is acquired by frame-by-frame or fixed-interval frame extraction or key frame extraction.
As a further improvement of the invention, when the picture acquisition module acquires the picture stream in the facial expression video clip, the acquisition includes:
dividing the video segment to obtain a plurality of video sub-segments, and randomly or fixedly extracting at least one frame in each video sub-segment;
and determining the picture flow in the video clip according to the extracted multiple picture frames.
As a further improvement of the invention, the facial expression analysis system further includes: and the face detection module is used for carrying out face detection on each frame of picture in the picture stream and obtaining face images in each frame of picture.
As a further improvement of the invention, the expression spectrum module includes:
the facial region dividing module is used for dividing a facial image into a plurality of regions, and each region comprises a plurality of key feature points used for determining facial expression indexes;
the expression index determining module is used for determining key feature points contained in each region for each frame of picture respectively, determining expression scores corresponding to each region, and determining facial expression indexes of each frame of picture according to the expression scores corresponding to each region;
The expression spectrogram determining module is used for obtaining the facial expression spectrogram corresponding to the picture stream according to the facial expression indexes of all the pictures.
As a further improvement of the invention, the face region dividing module includes:
carrying out face feature point recognition on the face image to obtain a plurality of feature points of the face image;
identifying a plurality of key feature points for determining the facial expression index from the plurality of feature points;
and dividing the face into a plurality of areas according to the plurality of key feature points, wherein each area comprises a plurality of key feature points used for determining the facial expression index.
As a further improvement of the invention, the expression index determination module includes:
determining at least one included angle between key feature point connecting lines in each region, and determining expression scores corresponding to each region according to the at least one included angle;
determining the weight corresponding to each region;
and determining the facial expression index of each frame of picture according to the expression scores corresponding to the areas and the weights corresponding to the areas.
As a further improvement of the invention, the expression reference module includes:
the frequency interval determining module is used for determining a first interval with highest occurrence frequency of the facial expression index in the facial expression spectrogram;
The reference line determining module is used for determining a reference line corresponding to the face in a natural state according to the first interval;
and the natural emotion region determining module is used for taking the reference line as a center and taking a second region in a certain width range above and below the reference line as a natural emotion region of the face in a natural state.
As a further improvement of the invention, the reference line determination module includes:
determining a horizontal centerline of the first section;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold value and smaller than a second threshold value, determining the horizontal center line as a datum line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold value, determining the horizontal line corresponding to the first threshold value as a datum line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is greater than or equal to a second threshold value, determining the horizontal line corresponding to the second threshold value as a datum line corresponding to the face in a natural state.
As a further improvement of the invention, in the expression partitioning module, in the facial expression spectrogram, an area above the natural emotion area is determined as a positive emotion area, and an area below the natural emotion area is determined as a negative emotion area.
The invention also provides a facial expression satisfaction analysis system, which adopts the facial expression analysis system and further comprises:
and the satisfaction calculating module is used for analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram in each time period in the facial expression video clips to be analyzed, and determining the satisfaction of the user.
As a further improvement of the invention, the satisfaction calculating module includes:
the time period expression occupation ratio calculation module is used for dividing a facial expression video segment to be analyzed into a plurality of time periods and respectively calculating the proportion of different expressions in each time period according to the facial expression spectrogram;
the time period weight determining module is used for determining the weight corresponding to each time period;
and the satisfaction result calculation module is used for determining a satisfaction result according to the proportion of different expressions in each time period and the weight corresponding to each time period.
The invention also provides electronic equipment, which comprises a memory and a processor, and is characterized in that the memory is used for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to realize the facial expression analysis method and the facial expression satisfaction analysis method.
The present invention also provides a computer-readable storage medium having stored thereon a computer program that is executed by a processor to implement the facial expression analysis method and the facial expression satisfaction analysis method.
The beneficial effects of the invention are as follows:
the method has the advantages that the complete video information of the expression of the user is utilized, fluctuation of the expression is fully considered, the difference of the natural states of the individual user is fully considered, the reference line corresponding to the natural states is obtained based on frequency analysis, and the true emotion of the user can be determined.
And setting a reference interval corresponding to the natural state according to the reference line of the user, so as to avoid the situation that the whole process is positive expression or negative expression.
The emotion layering of the user is realized through the reference area, the weight is divided for the time period of the video clip of the user, the emotion type and the time weight of the user are comprehensively considered, and the satisfaction degree of the user can be more accurately determined.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the prior art, the drawings that are used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without inventive faculty.
Fig. 1 is a schematic flow chart of a facial expression analysis method according to an embodiment of the disclosure;
FIG. 2 is a flow chart of a facial expression satisfaction analysis method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a facial expression spectrogram corresponding to a picture stream according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a reference interval for determining a reference line according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a plurality of emotional regions according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of the frequency and percentage of three emotions, positive, natural, and negative, per time period according to an embodiment of the present disclosure.
Detailed Description
The following description of the technical solutions in the embodiments of the present disclosure will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear … …) are included in the embodiments of the present disclosure, the directional indications are merely used to explain the relative positional relationship, movement conditions, etc. between the components in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indications are correspondingly changed.
In addition, in the description of the present disclosure, the terminology used is for the purpose of illustration only and is not intended to limit the scope of the present disclosure. The terms "comprises" and/or "comprising" are used to specify the presence of stated elements, steps, operations, and/or components, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or components. The terms "first," "second," and the like may be used for describing various elements, do not represent a sequence, and are not intended to limit the elements. Furthermore, in the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two and more. These terms are only used to distinguish one element from another element. These and/or other aspects will become apparent to those of ordinary skill in the art from a review of the following drawings, and a description of the embodiments of the present disclosure will be more readily understood. The drawings are intended to depict the embodiments of the disclosure for purposes of illustration only. Those skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated in the present disclosure may be employed without departing from the principles of the present disclosure.
The facial expression analysis method according to the embodiment of the present disclosure, as shown in fig. 1, includes:
S1, acquiring a facial expression video clip to be analyzed, and acquiring a picture stream in the video clip.
In an alternative embodiment, the picture stream in the video clip may be acquired frame by frame (i.e., every frame is extracted), by frame extraction at fixed intervals (e.g., one frame per second), or by key frame extraction (i.e., I-frames are extracted according to changes of the picture). In this embodiment, when the picture stream in the video clip is acquired, the video clip may be divided into a plurality of video sub-segments, at least one frame is extracted from each video sub-segment either randomly or at a fixed position, and the picture stream in the video clip is determined from the extracted picture frames. In this way, complete video information containing the user's expression is collected, the fluctuation of the facial expression is fully considered, and the one-sidedness and inaccuracy of determining the user's emotion from a single frame of image are avoided. A minimal Python sketch of these extraction options is given below.
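The following is an illustrative sketch, not part of the original disclosure, of how the picture stream might be obtained, assuming Python with OpenCV; the function names, the one-second interval and the sub-segment count are arbitrary choices made here for illustration.

```python
# Sketch only: fixed-interval frame extraction, and sub-segment sampling,
# as two of the ways of obtaining the picture stream described above.
import cv2


def extract_frames_fixed_interval(video_path, interval_sec=1.0):
    """Extract one frame every `interval_sec` seconds, with its timestamp."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(round(fps * interval_sec)), 1)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append((idx / fps, frame))  # (timestamp in seconds, image)
        idx += 1
    cap.release()
    return frames


def extract_frames_per_subsegment(video_path, num_subsegments=10):
    """Split the clip into equal sub-segments and take the middle frame of each."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frames = []
    for k in range(num_subsegments):
        mid = int((k + 0.5) * total / num_subsegments)
        cap.set(cv2.CAP_PROP_POS_FRAMES, mid)
        ok, frame = cap.read()
        if ok:
            frames.append((mid / fps, frame))
    cap.release()
    return frames
```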
S2, analyzing the facial expression index of each frame of picture in the picture flow, and determining the facial expression spectrogram corresponding to the picture flow.
For example, the facial expression index ranges from 0 to 100. A higher index indicates a more positive expression, i.e., a tendency toward happiness, surprise, and the like; a lower index indicates a more negative expression, i.e., a tendency toward anger, fear, and the like; an index near the middle indicates that the expression is in a natural state. For example, according to the timestamp information of each frame in the picture stream, the facial expression spectrogram is generated from the facial expression index of each frame and the corresponding time information. The generated facial expression spectrogram intuitively shows the fluctuation of the user's expression over time.
In an alternative embodiment, before the facial expression index of each frame of picture in the picture stream is analyzed, the method further includes: performing face detection on each frame of picture in the picture stream to obtain the face image in each frame of picture; the facial expression index of each frame of picture is then analyzed to obtain the facial expression spectrogram.
For face detection, MTCNN, SSD, YOLOv3 or the like can be selected as the face detection algorithm; the algorithm is not limited to the ones mentioned above and can be selected according to the requirements. A facial expression index is then output by analyzing each frame of picture.
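As a minimal illustration of the per-frame face cropping step, the sketch below uses OpenCV's bundled Haar cascade detector. This is a stand-in chosen here for simplicity; it is not one of the detectors named above, and the patent does not prescribe any particular implementation.

```python
# Sketch only: crop the largest detected face from a frame of the picture stream.
# Assumes the opencv-python package, which ships the Haar cascade files.
import cv2

_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def detect_face(frame):
    """Return the largest detected face region of a frame, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])  # keep the largest face
    return frame[y:y + h, x:x + w]
```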
In an alternative embodiment, S2 includes:
s21, dividing the face image into a plurality of areas, wherein each area comprises a plurality of key feature points for determining the facial expression index.
Optionally, face feature point recognition is performed on the face image to obtain a plurality of feature points of the face image, and a plurality of key feature points used for determining the facial expression index are identified from these feature points; the face is then divided into a plurality of areas according to the key feature points, and each area contains several key feature points used for determining the facial expression index. If all feature points were used directly to determine the facial expression index, the amount of calculation would increase; by identifying key feature points, the accuracy of the facial expression index is ensured while the amount of calculation is reduced. For example, there are 106 feature points on the face image, and from these 106 feature points, key feature points for determining the facial expression index are identified, such as mouth key feature points, eye key feature points, eyebrow key feature points, and the like. The key feature points may be identified based on a trained neural network model. Of course, the method is not limited to the above and may be adaptively selected and adjusted.
In an alternative embodiment, key feature point recognition may be further performed on the facial image to obtain a plurality of key feature points for determining the facial expression index. For example, the key feature points are mouth key feature points, eye key feature points and eyebrow key feature points, and the face image can be processed by using the trained neural network model, for example, the face image is input into the trained neural network model to be processed, so as to obtain the mouth key feature points, eye key feature points and eyebrow key feature points in the face image of each frame.
Optionally, the face image may be divided according to the regions of the reference facial organ to obtain a plurality of regions of the face image, and the face image of each region may be extracted with key feature points to obtain a plurality of key feature points included in each region. For example, the reference facial organ includes a mouth, an eye, and an eyebrow, three regions of a face image may be obtained, and images of the three regions may be input into a mouth key feature point detection model, an eye key feature point detection model, and an eyebrow key feature point detection model, respectively, to obtain a plurality of key feature points included in each region.
S22, key feature points contained in each region are respectively determined for each frame of picture, expression scores corresponding to each region are determined, and facial expression indexes of each frame of picture are determined according to the expression scores corresponding to each region.
Optionally, at least one included angle between lines connecting key feature points in each region is determined, and the expression score corresponding to each region is determined according to the at least one included angle; the weight corresponding to each region is determined; and the facial expression index of each frame of picture is determined according to the expression scores corresponding to the regions and the weights corresponding to the regions. For example, each region corresponds to a weight, the weights of the regions may differ, and the sum of the weights of all regions is 1; each region contains at least one included angle, each included angle corresponds to an expression score (for example, on a 100-point scale), and the region scores are weighted to obtain the facial expression index. Because the face has many feature points, calculating the included angle between the connecting lines of every pair of feature points would increase the amount of calculation; after the key feature points used for the weighted calculation of the facial expression index are screened out, the included angles of the connecting lines can be calculated directly for the key feature points, which reduces the amount of calculation. The key feature points of one region can be connected by several lines, and target lines can be selected for calculating an included angle, such as the included angle between the lines of adjacent key feature points, or the included angle between the lines from the key feature points at the two ends to the key feature point in the middle. In this way, on the basis of ensuring the accuracy of the facial expression index, the amount of calculation can be reduced and the processing efficiency improved.
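The following sketch illustrates this kind of weighted, angle-based scoring. The region weights, the angle-to-score mapping and the neutral-angle constant are illustrative assumptions made here; the patent specifies only that region scores are derived from included angles and combined with weights summing to 1.

```python
# Sketch only: per-region expression scores from included angles between
# key-feature-point connecting lines, combined into one facial expression index.
import math

# Hypothetical region weights; they must sum to 1.
REGION_WEIGHTS = {"mouth": 0.5, "eyes": 0.3, "eyebrows": 0.2}


def angle_at(p_mid, p_a, p_b):
    """Angle in degrees at p_mid formed by the lines p_mid->p_a and p_mid->p_b."""
    v1 = (p_a[0] - p_mid[0], p_a[1] - p_mid[1])
    v2 = (p_b[0] - p_mid[0], p_b[1] - p_mid[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cosang = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))


def region_score(angles, neutral_angle=160.0, span=40.0):
    """Map the mean included angle of a region to a 0-100 score (assumed mapping)."""
    mean_angle = sum(angles) / len(angles)
    score = 50.0 + 50.0 * (neutral_angle - mean_angle) / span
    return max(0.0, min(100.0, score))


def facial_expression_index(region_angles):
    """region_angles: dict mapping region name -> list of included angles for that region."""
    return sum(
        REGION_WEIGHTS[name] * region_score(angles)
        for name, angles in region_angles.items()
    )
```

The facial expression spectrogram is then simply the sequence of (timestamp, facial_expression_index) values over the picture stream.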
In an alternative embodiment, the outline information of the facial organ included in each region may be determined according to the key feature points included in each region, and the expression scores corresponding to each region may be determined according to the outline information of the facial organ included in each region.
S23, obtaining a facial expression spectrogram corresponding to the image stream according to the facial expression indexes of all the images.
The facial expression index corresponding to each frame of picture is obtained after the weighted calculation, and the facial expression spectrogram corresponding to the picture stream is then obtained, as shown in fig. 3.
S3, determining a reference line corresponding to the face in a natural state according to the facial expression spectrogram, and determining a natural emotion area of the face in the natural state based on the reference line.
Because each person's expression in the natural state is different (some people naturally have a sour face, while others naturally have a smiling face), the reference line of the natural state differs from person to person. By finding this reference line, the true emotion of each person can be accurately determined relative to the user's own reference line, which effectively improves the accuracy of facial expression recognition.
In an alternative embodiment, S3 includes:
S31, in the facial expression spectrogram, determining a first interval in which the facial expression index appears most frequently. A person is in a natural state for most of the time while receiving a service, so the horizontal line through the most frequently occurring index value could serve as the reference line. However, the index values of individual points are not exactly the same, so a reference interval (i.e., the first interval), namely the interval with the highest frequency of occurrence, needs to be found instead. For example, an interval width may be preset, and according to this interval width, the first interval in which the facial expression index appears most frequently in the facial expression spectrogram is determined.
S32, determining a reference line corresponding to the face in a natural state according to the first section.
The first interval with the highest occurrence frequency of the facial expression index in the facial expression spectrogram can truly reflect the expression state of the current user in the natural state, and the obtained datum line can accurately reflect the expression of the current user in the natural state.
Optionally, S32 includes:
determining a horizontal centerline of the first section;
if the facial expression index corresponding to the horizontal central line is larger than the first threshold value and smaller than the second threshold value, determining the horizontal central line as a datum line corresponding to the face in a natural state;
If the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold value, determining the horizontal line corresponding to the first threshold value as a datum line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is greater than or equal to a second threshold value, determining the horizontal line corresponding to the second threshold value as a datum line corresponding to the face in a natural state.
As shown in fig. 4, the interval width may be set to 20, for example. In the process of determining the reference line, the facial expression spectrogram is scanned from bottom to top with this interval, and the interval with the highest frequency is found; the reference line is the horizontal center line of that interval. A schematic diagram of the determined reference line is shown in fig. 3.
As shown in fig. 3, to avoid the special case in which the entire video is classified as positive emotion or negative emotion, the reference line may, for example, be constrained to lie between 30 and 60: if the actually measured reference line is higher than 60, it is set to 60, and if it is lower than 30, it is set to 30. Of course, the values of the first threshold and the second threshold for the reference line may be adaptively adjusted and are not limited to the above values.
S33, taking the datum line as the center, and taking a second area in a certain width range above and below the datum line as a natural emotion area of the face in a natural state.
As shown in fig. 4, after the reference line is acquired, a region extending, for example, 15 above and 15 below the reference line (a total width of 30) is taken as the natural emotion area of the face in the natural state. Of course, the width of the natural emotion area may be adaptively adjusted and is not limited to the above value.
And S4, dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion areas as references.
As shown in fig. 5, in the facial expression spectrogram, an area above a natural emotion area may be determined as a positive emotion area, and an area below the natural emotion area may be determined as a negative emotion area. Through the reference area, namely the natural emotion area, emotion layering of the user can be realized, and the whole process is avoided to be positive or negative emotion.
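Below is a hedged sketch of S31 to S4: scanning the spectrogram with a sliding interval to find the most frequent index range, deriving the reference line (clamped between the two thresholds), and labelling each frame against the natural emotion area. The interval width of 20, thresholds of 30 and 60, and half-width of 15 follow the examples above; the step size of 1 used in the scan is an assumption made here.

```python
# Sketch only: reference-line search and emotion labelling on the spectrogram.
def find_reference_line(indices, window=20.0, low=30.0, high=60.0):
    """indices: facial expression index per frame, on the 0-100 scale."""
    best_center, best_count = 50.0, -1
    bottom = 0.0
    while bottom + window <= 100.0:          # scan the spectrogram from bottom to top
        count = sum(1 for v in indices if bottom <= v < bottom + window)
        if count > best_count:
            best_count, best_center = count, bottom + window / 2.0
        bottom += 1.0
    return min(max(best_center, low), high)  # clamp between first and second threshold


def label_frames(indices, half_width=15.0, window=20.0, low=30.0, high=60.0):
    """Return a per-frame label: 'positive', 'natural' or 'negative'."""
    ref = find_reference_line(indices, window, low, high)
    labels = []
    for v in indices:
        if v > ref + half_width:
            labels.append("positive")
        elif v < ref - half_width:
            labels.append("negative")
        else:
            labels.append("natural")
    return labels
```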
As shown in fig. 2, after the foregoing facial expression analysis method is adopted, the facial expression satisfaction analysis method according to the embodiment of the disclosure further includes: s5, in each time period in the facial expression video clips to be analyzed, analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram, and determining the satisfaction degree of the user.
In an alternative embodiment, S5 includes:
s51, dividing a facial expression video segment to be analyzed into a plurality of time periods, and respectively calculating the proportion of different expressions in each time period according to a facial expression spectrogram;
s52, determining the weight corresponding to each time period;
and S53, determining a satisfaction result according to the proportion of different expressions in each time period and the weight corresponding to each time period.
Since a smile near the end of the service indicates higher satisfaction than the same smile at the beginning, the same emotion needs to be weighted differently in time. For example, the weights may be set so that the emotions in the first 20% of the time account for a 10% weight, the emotions in the last 10% of the time account for a 60% weight, and the middle part accounts for the remaining 30% weight. Of course, the time weights may be adjusted appropriately. After the plurality of emotion areas are obtained according to step S4, the proportion of each emotion area in the facial expression spectrogram is counted separately for the different time periods. In each time period, the weight corresponding to each emotion area is determined according to its proportion in the facial expression spectrogram, and the weight of each time period and the weights of the emotion areas within it are combined in a weighted calculation to obtain the user satisfaction coefficient. The user satisfaction coefficient may be obtained by subtracting the weighted value of the negative emotions from the weighted value of the positive emotions. Assigning weights to the time periods of the user's video clip and combining them with the proportion of each emotion area in every time period takes both the user's emotion type and the time weight into account, so the user's satisfaction can be determined more accurately.
As shown in fig. 6, in this embodiment it can be seen that the user's positive emotion gradually increases and the negative emotion gradually decreases. According to the example shown in fig. 6, the user satisfaction coefficient = (31% x 10% + 28% x 30% + 37% x 60%) - (31% x 10% + 22% x 30% + 11% x 60%) = 0.174.
Alternatively, the satisfaction result may be determined from the satisfaction coefficient. For example, when the satisfaction coefficient is greater than or equal to the satisfaction threshold, the satisfaction result is "satisfied"; when the satisfaction coefficient is less than or equal to the dissatisfaction threshold, the satisfaction result is "dissatisfied"; and when the satisfaction coefficient is greater than the dissatisfaction threshold and less than the satisfaction threshold, the satisfaction result is "general". For example, a coefficient of 0.1 or more may be set as satisfied, -0.1 or less as dissatisfied, and a value between -0.1 and 0.1 as general. Both the satisfaction threshold and the dissatisfaction threshold can be adjusted adaptively.
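The sketch below reproduces the worked example above: the clip is split into the first 20%, the middle 70% and the last 10% of time, the periods are weighted 10%, 30% and 60%, and weighted-positive minus weighted-negative gives the satisfaction coefficient, which is then thresholded at plus or minus 0.1. The function names are illustrative; with the proportions of fig. 6 this yields 0.174.

```python
# Sketch only: time-weighted satisfaction coefficient and its classification.
def satisfaction_coefficient(labels):
    """labels: per-frame emotion labels in time order ('positive'/'natural'/'negative')."""
    n = len(labels)
    # (start fraction, end fraction, period weight) as in the example above
    periods = [(0.0, 0.2, 0.1), (0.2, 0.9, 0.3), (0.9, 1.0, 0.6)]
    coeff = 0.0
    for start, end, weight in periods:
        chunk = labels[int(start * n):int(end * n)] or ["natural"]
        pos = chunk.count("positive") / len(chunk)
        neg = chunk.count("negative") / len(chunk)
        coeff += weight * (pos - neg)
    return coeff


def satisfaction_result(coeff, satisfied_th=0.1, dissatisfied_th=-0.1):
    if coeff >= satisfied_th:
        return "satisfied"
    if coeff <= dissatisfied_th:
        return "dissatisfied"
    return "general"
```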
The embodiment of the disclosure provides a facial expression analysis system, which comprises: the system comprises a picture acquisition module, an expression frequency spectrum module, an expression reference module and an expression partition module.
The picture acquisition module is configured to acquire a facial expression video clip to be analyzed and acquire a picture stream in the video clip.
In an alternative embodiment, the picture stream in the video clip may be acquired frame by frame (i.e., every frame is extracted), by frame extraction at fixed intervals (e.g., one frame per second), or by key frame extraction (i.e., I-frames are extracted according to changes of the picture). In this embodiment, when the picture stream in the video clip is acquired, the video clip may be divided into a plurality of video sub-segments, at least one frame is extracted from each video sub-segment either randomly or at a fixed position, and the picture stream in the video clip is determined from the extracted picture frames. In this way, complete video information containing the user's expression is collected, the fluctuation of the facial expression is fully considered, and the one-sidedness and inaccuracy of determining the user's emotion from a single frame of image are avoided.
The expression spectrum module is configured to analyze the facial expression index of each frame of picture in the picture stream and determine a facial expression spectrum corresponding to the picture stream.
For example, the facial expression index ranges from 0 to 100. A higher index indicates a more positive expression, i.e., a tendency toward happiness, surprise, and the like; a lower index indicates a more negative expression, i.e., a tendency toward anger, fear, and the like; an index near the middle indicates that the expression is in a natural state. For example, according to the timestamp information of each frame in the picture stream, the facial expression spectrogram is generated from the facial expression index of each frame and the corresponding time information. The generated facial expression spectrogram intuitively shows the fluctuation of the user's expression over time.
In an alternative embodiment, the system further comprises a face detection module configured to perform face detection on each frame of picture in the picture stream, before the facial expression index of each frame is analyzed, to obtain the face image in each frame of picture; the facial expression index of each frame of picture is then analyzed to obtain the facial expression spectrogram.
For face detection, MTCNN, SSD, YOLOv3 or the like can be selected as the face detection algorithm; the algorithm is not limited to the ones mentioned above and can be selected according to the requirements. A facial expression index is then output by analyzing each frame of picture.
In an alternative embodiment, the expression spectrum module may include a face region dividing module, an expression index determining module, and an expression spectrum determining module.
The facial region dividing module is configured to divide a facial image into a plurality of regions, and each region contains a plurality of key feature points used for determining facial expression indexes.
In an alternative embodiment, the face region dividing module performs face feature point recognition on the face image to obtain a plurality of feature points of the face image, and identifies from these feature points a plurality of key feature points used for determining the facial expression index; the face is then divided into a plurality of areas according to the key feature points, and each area contains several key feature points used for determining the facial expression index. If all feature points were used directly to determine the facial expression index, the amount of calculation would increase; by identifying key feature points, the accuracy of the facial expression index is ensured while the amount of calculation is reduced. For example, there are 106 feature points on the face image, and from these 106 feature points, key feature points for determining the facial expression index are identified, such as mouth key feature points, eye key feature points, eyebrow key feature points, and the like. The key feature points may be identified based on a trained neural network model. Of course, the method is not limited to the above and may be adaptively selected and adjusted.
In an optional implementation manner, the facial region dividing module may further identify key feature points of the facial image to obtain a plurality of key feature points for determining the facial expression index. For example, the key feature points are mouth key feature points, eye key feature points and eyebrow key feature points, and the face image can be processed by using the trained neural network model, for example, the face image is input into the trained neural network model to be processed, so as to obtain the mouth key feature points, eye key feature points and eyebrow key feature points in the face image of each frame.
Optionally, the face region dividing module may further divide the face image according to the regions of the reference facial organ to obtain a plurality of regions of the face image, and extract key feature points of the face image of each region to obtain a plurality of key feature points included in each region. For example, the reference facial organ includes a mouth, an eye, and an eyebrow, three regions of a face image may be obtained, and images of the three regions may be input into a mouth key feature point detection model, an eye key feature point detection model, and an eyebrow key feature point detection model, respectively, to obtain a plurality of key feature points included in each region.
The expression index determining module is configured to determine key feature points contained in each region for each frame of picture, determine expression scores corresponding to each region, and determine facial expression indexes of each frame of picture according to the expression scores corresponding to each region.
In an alternative embodiment, the expression index determining module determines at least one included angle between lines connecting key feature points in each region, and determines the expression score corresponding to each region according to the at least one included angle; determines the weight corresponding to each region; and determines the facial expression index of each frame of picture according to the expression scores corresponding to the regions and the weights corresponding to the regions. For example, each region corresponds to a weight, the weights of the regions may differ, and the sum of the weights of all regions is 1; each region contains at least one included angle, each included angle corresponds to an expression score (for example, on a 100-point scale), and the region scores are weighted to obtain the facial expression index. Because the face has many feature points, calculating the included angle between the connecting lines of every pair of feature points would increase the amount of calculation; after the key feature points used for the weighted calculation of the facial expression index are screened out, the included angles of the connecting lines can be calculated directly for the key feature points, which reduces the amount of calculation. The key feature points of one region can be connected by several lines, and target lines can be selected for calculating an included angle, such as the included angle between the lines of adjacent key feature points, or the included angle between the lines from the key feature points at the two ends to the key feature point in the middle. In this way, on the basis of ensuring the accuracy of the facial expression index, the amount of calculation can be reduced and the processing efficiency improved.
In an optional implementation manner, the expression index determining module may further determine contour information of facial organs included in each region according to key feature points included in each region, and determine expression scores corresponding to each region according to the contour information of the facial organs included in each region.
The expression spectrogram determining module is configured to obtain the facial expression spectrogram corresponding to the picture stream according to the facial expression indexes of all the pictures. After the weighted calculation, the facial expression index corresponding to each frame of picture is obtained, and the facial expression spectrogram corresponding to the picture stream is then obtained, as shown in fig. 3.
The expression reference module is configured to determine a reference line corresponding to the face in a natural state according to the facial expression spectrogram, and determine a natural emotion area of the face in the natural state based on the reference line.
Because each person's expression in the natural state is different (some people naturally have a sour face, while others naturally have a smiling face), the reference line of the natural state differs from person to person. By finding this reference line, the true emotion of each person can be accurately determined relative to the user's own reference line, which effectively improves the accuracy of facial expression recognition.
In an alternative embodiment, the expression reference module includes a frequency interval determining module, a reference line determining module, and a natural emotion region determining module.
The frequency interval determining module is configured to determine a first interval with highest occurrence frequency of the facial expression index in the facial expression spectrogram.
A person is in a natural state for most of the time while receiving a service, so the horizontal line through the most frequently occurring index value could serve as the reference line. However, the index values of individual points are not exactly the same, so a reference interval (i.e., the first interval), namely the interval with the highest frequency of occurrence, needs to be found instead. For example, an interval width may be preset, and according to this interval width, the first interval in which the facial expression index appears most frequently in the facial expression spectrogram is determined.
The reference line determining module is configured to determine a reference line corresponding to the face in a natural state according to the first section.
The first interval with the highest occurrence frequency of the facial expression index in the facial expression spectrogram can truly reflect the expression state of the current user in the natural state, and the obtained datum line can accurately reflect the expression of the current user in the natural state.
In an alternative embodiment, the reference line determination module includes:
determining a horizontal centerline of the first section;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold value and smaller than a second threshold value, determining the horizontal center line as a datum line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold value, determining the horizontal line corresponding to the first threshold value as a datum line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is greater than or equal to a second threshold value, determining the horizontal line corresponding to the second threshold value as a datum line corresponding to the face in a natural state.
As shown in fig. 4, the interval width may be set to 20, for example. In the process of determining the reference line, the facial expression spectrogram is scanned from bottom to top with this interval, and the interval with the highest frequency is found; the reference line is the horizontal center line of that interval. A schematic diagram of the reference line is shown in fig. 3.
As shown in fig. 3, to avoid the special case in which the entire video is classified as positive emotion or negative emotion, the reference line may, for example, be constrained to lie between 30 and 60: if the actually measured reference line is higher than 60, it is set to 60, and if it is lower than 30, it is set to 30. Of course, the values of the first threshold and the second threshold for the reference line may be adaptively adjusted and are not limited to the above values.
And the natural emotion region determination module is configured to take the reference line as a center and take a second region in a certain width range above and below the reference line as a natural emotion region of the face in a natural state.
As shown in fig. 4, after the reference line is acquired, a region extending, for example, 15 above and 15 below the reference line (a total width of 30) is taken as the natural emotion area of the face in the natural state. Of course, the width of the natural emotion area may be adaptively adjusted and is not limited to the above value.
The expression partitioning module is configured to divide the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions based on the natural emotion area. As shown in fig. 5, in the facial expression spectrogram, the area above the natural emotion area may be determined as a positive emotion area, and the area below the natural emotion area may be determined as a negative emotion area. Through the reference area, i.e., the natural emotion area, emotion layering of the user can be realized, and the situation in which the entire video is classified as positive or negative emotion is avoided.
The facial expression satisfaction analysis system according to the embodiment of the disclosure adopts the facial expression analysis system described above, and is different in that the facial expression satisfaction analysis system further comprises a satisfaction calculation module. The satisfaction calculating module is configured to analyze and calculate each emotion area corresponding to different expressions in the facial expression spectrogram in each time period in the facial expression video clips to be analyzed, and determine the satisfaction of the user.
In an alternative embodiment, the satisfaction calculation module includes:
the time period expression duty ratio calculation module is configured to divide a facial expression video segment to be analyzed into a plurality of time periods, and respectively calculate the proportion of different expressions in each time period according to the facial expression spectrogram.
And the time period weight determining module is configured to determine the weight corresponding to each time period.
The satisfaction result calculation module is configured to determine a satisfaction result according to the proportion of different expressions in each time period and the weight corresponding to each time period.
Since a smile near the end of the service indicates higher satisfaction than the same smile at the beginning, the same emotion needs to be weighted differently in time. For example, the weights may be set so that the emotions in the first 20% of the time account for a 10% weight, the emotions in the last 10% of the time account for a 60% weight, and the middle part accounts for the remaining 30% weight. Of course, the time weights may be adjusted appropriately. After the emotion areas are acquired by the expression partitioning module, the proportion of each emotion area in the facial expression spectrogram is counted separately for the different time periods. In each time period, the weight corresponding to each emotion area is determined according to its proportion in the facial expression spectrogram, and the weight of each time period and the weights of the emotion areas within it are combined in a weighted calculation to obtain the user's satisfaction. The user satisfaction may be obtained by subtracting the weighted value of the negative emotions from the weighted value of the positive emotions. Assigning weights to the time periods of the user's video clip and combining them with the proportion of each emotion area in every time period takes both the user's emotion type and the time weight into account, so the user's satisfaction can be determined more accurately.
As shown in fig. 6, in this embodiment the user's positive emotion gradually increases and the negative emotion gradually decreases. According to the example shown in fig. 6, the user satisfaction coefficient can be calculated as (31% × 10% + 28% × 30% + 37% × 60%) − (31% × 10% + 22% × 30% + 11% × 60%) = 0.337 − 0.163 = 0.174.
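The calculation above can be reproduced with the following sketch; the time weights (10%/30%/60% for the first 20%, middle 70%, and last 10% of the clip) and the per-period emotion proportions are taken from the example of fig. 6, and the function name is illustrative.

```python
def satisfaction_coefficient(positive_share, negative_share, time_weights):
    """Weighted positive proportion minus weighted negative proportion."""
    positive = sum(p * w for p, w in zip(positive_share, time_weights))
    negative = sum(n * w for n, w in zip(negative_share, time_weights))
    return positive - negative

# Per-period values from the example of fig. 6.
time_weights   = [0.10, 0.30, 0.60]   # first 20%, middle 70%, last 10% of the clip
positive_share = [0.31, 0.28, 0.37]   # proportion of positive emotion per period
negative_share = [0.31, 0.22, 0.11]   # proportion of negative emotion per period

print(round(satisfaction_coefficient(positive_share, negative_share, time_weights), 3))  # 0.174
```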
Optionally, the satisfaction result may be determined from the satisfaction coefficient. For example, when the satisfaction coefficient is greater than or equal to the satisfaction threshold, the satisfaction result is "satisfied"; when the satisfaction coefficient is less than or equal to the dissatisfaction threshold, the satisfaction result is "dissatisfied"; and when the satisfaction coefficient is greater than the dissatisfaction threshold and less than the satisfaction threshold, the satisfaction result is "average". For example, a coefficient higher than 0.1 may be regarded as satisfied, a coefficient lower than −0.1 as dissatisfied, and a coefficient between −0.1 and 0.1 as average. Both the satisfaction threshold and the dissatisfaction threshold can be adjusted as needed.
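A small sketch of the threshold mapping described above; the ±0.1 thresholds and the handling of values exactly on a threshold follow the example in the text and can be adjusted as needed.

```python
def satisfaction_result(coefficient, satisfied_threshold=0.1, dissatisfied_threshold=-0.1):
    """Map a satisfaction coefficient to a categorical satisfaction result."""
    if coefficient >= satisfied_threshold:
        return "satisfied"
    if coefficient <= dissatisfied_threshold:
        return "dissatisfied"
    return "average"

print(satisfaction_result(0.174))  # satisfied
```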
According to the facial expression analysis method and system and the satisfaction analysis method and system described above, the complete video information of the user's expression is used, the fluctuation of the facial expression and the differences between individual users' natural states are fully considered, and a reference line corresponding to the natural state is obtained by frequency analysis, so that the user's true emotion can be determined. A reference interval corresponding to the natural state is then set around the user's reference line, which avoids classifying the whole clip as positive or negative expression throughout. The reference area stratifies the user's emotions, weights are assigned to the time periods of the user's video clip, and the emotion type and the time weight are considered together, so that the user's satisfaction can be determined more accurately.
The disclosure also relates to an electronic device, including a server, a terminal, and the like. The electronic device includes: at least one processor; a memory communicatively coupled to the at least one processor; and a communication component that communicates with the storage medium and that receives and transmits data under control of the processor. The memory stores instructions executable by the at least one processor to implement the facial expression analysis method and the facial expression satisfaction analysis method of the above embodiments.
In an alternative embodiment, the memory is implemented as a non-volatile computer-readable storage medium and is used to store non-volatile software programs, non-volatile computer-executable programs, and modules. By running the non-volatile software programs, instructions, and modules stored in the memory, the processor performs the various functional applications and data processing of the device, that is, implements the facial expression analysis method and the facial expression satisfaction analysis method.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store a list of options, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor; such remote memory may be connected to the external device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory that, when executed by the one or more processors, perform the facial expression analysis method and the facial expression satisfaction analysis method of any of the method embodiments described above.
The above product can perform the facial expression analysis method and the facial expression satisfaction analysis method provided by the embodiments of the application, and has the functional modules and beneficial effects corresponding to performing these methods. Technical details not described in detail in this embodiment can be found in the facial expression analysis method and the facial expression satisfaction analysis method provided by the embodiments of the application.
The present disclosure also relates to a computer-readable storage medium storing a computer-readable program for causing a computer to execute some or all of the facial expression analysis method and facial expression satisfaction analysis method embodiments described above.
That is, it will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program stored in a storage medium, the program including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described herein. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Furthermore, those of ordinary skill in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the present disclosure and to form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It will be understood by those skilled in the art that while the present disclosure has been described with reference to exemplary embodiments, various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the disclosure not be limited to the particular embodiments disclosed, but that the disclosure will include all embodiments falling within the scope of the appended claims.

Claims (16)

1. A facial expression analysis method, comprising:
S1, acquiring a facial expression video clip to be analyzed, and acquiring a picture stream in the video clip;
S2, identifying, for each frame of picture in the picture stream, a plurality of key feature points in the face image to determine a facial expression index, and determining a facial expression spectrogram corresponding to the picture stream according to the facial expression indexes of all the pictures;
S3, determining a reference line corresponding to the face in a natural state according to the facial expression index with the highest occurrence frequency in the facial expression spectrogram, and determining a natural emotion area of the face in the natural state based on the reference line;
and S4, dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion area as a reference.
2. A facial expression analysis method as in claim 1, wherein in S1, the picture stream in the video segment is obtained by extracting frames frame by frame, extracting frames at fixed intervals, or extracting key frames.
3. The method of claim 2, wherein the step of obtaining the picture stream in the facial expression video segment comprises: dividing the video segment to obtain a plurality of video sub-segments, and randomly or fixedly extracting at least one frame from each video sub-segment;
and determining the picture stream in the video segment according to the extracted plurality of picture frames.
4. A method of facial expression analysis as recited in claim 1, said method further comprising: performing face detection on each frame of picture in the picture stream to obtain a face image in each frame of picture.
5. The facial expression analysis method as recited in claim 4, wherein S2 comprises:
S21, dividing the face image into a plurality of areas, wherein each area comprises a plurality of key feature points for determining the facial expression index;
S22, determining, for each frame of picture, the key feature points contained in each region, determining the expression score corresponding to each region, and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions;
S23, obtaining a facial expression spectrogram corresponding to the picture stream according to the facial expression indexes of all the pictures.
6. The facial expression analysis method as recited in claim 5, wherein S21 comprises:
carrying out face feature point recognition on the face image to obtain a plurality of feature points of the face image;
identifying a plurality of key feature points for determining the facial expression index from the plurality of feature points;
And dividing the face into a plurality of areas according to the plurality of key feature points, wherein each area comprises a plurality of key feature points used for determining the facial expression index.
7. The facial expression analysis method as recited in claim 5, wherein S22 comprises:
determining at least one included angle between key feature point connecting lines in each region, and determining expression scores corresponding to each region according to the at least one included angle;
determining the weight corresponding to each region;
and determining the facial expression index of each frame of picture according to the expression scores corresponding to the areas and the weights corresponding to the areas.
8. The facial expression analysis method as recited in claim 1, wherein S3 comprises:
S31, determining a first interval in which the facial expression index has the highest occurrence frequency in the facial expression spectrogram;
S32, determining a reference line corresponding to the face in a natural state according to the first interval;
and S33, taking the reference line as a center, and taking a second area within a certain width range above and below the reference line as a natural emotion area of the face in a natural state.
9. The facial expression analysis method as recited in claim 8, wherein S32 comprises:
determining a horizontal center line of the first interval;
if the facial expression index corresponding to the horizontal center line is greater than a first threshold and less than a second threshold, determining the horizontal center line as the reference line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is less than or equal to the first threshold, determining the horizontal line corresponding to the first threshold as the reference line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is greater than or equal to the second threshold, determining the horizontal line corresponding to the second threshold as the reference line corresponding to the face in a natural state.
10. The facial expression analysis method as recited in claim 1, wherein in S4, in the facial expression spectrogram, a region above the natural emotion region is determined as a positive emotion region, and a region below the natural emotion region is determined as a negative emotion region.
11. A facial expression satisfaction analysis method according to any one of claims 1 to 10, further comprising: s5, in each time period in the facial expression video clips to be analyzed, analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram, and determining the satisfaction degree of the user.
12. The facial expression satisfaction analysis method of claim 11, wherein S5 comprises:
S51, dividing the facial expression video segment to be analyzed into a plurality of time periods, and calculating, according to the facial expression spectrogram, the proportion of different expressions in each time period;
S52, determining the weight corresponding to each time period;
and S53, determining a satisfaction result according to the proportion of different expressions in each time period and the weight corresponding to each time period.
13. A facial expression analysis system employing a facial expression analysis method as claimed in any one of claims 1 to 10, comprising:
the image acquisition module is used for acquiring a facial expression video clip to be analyzed and acquiring a picture stream in the video clip;
the expression spectrogram module is used for identifying, for each frame of picture in the picture stream, a plurality of key feature points in the face image to determine a facial expression index, and for determining a facial expression spectrogram corresponding to the picture stream according to the facial expression indexes of all the pictures;
the expression reference module is used for determining a reference line corresponding to the face in a natural state according to the facial expression index with the highest occurrence frequency in the facial expression spectrogram, and for determining a natural emotion area of the face in the natural state based on the reference line;
and the expression partitioning module is used for dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion area as a reference.
14. A facial expression satisfaction analysis system, employing a facial expression analysis system as recited in claim 13, further comprising:
and the satisfaction calculating module is used for analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram in each time period in the facial expression video clips to be analyzed, and determining the satisfaction of the user.
15. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any of claims 1-12.
16. A computer readable storage medium having stored thereon a computer program, wherein the computer program is executed by a processor to implement the method of any of claims 1-12.
CN202010033040.1A 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system Active CN113111690B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010033040.1A CN113111690B (en) 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system
PCT/CN2021/071233 WO2021143667A1 (en) 2020-01-13 2021-01-12 Facial expression analysis method and system, and facial expression-based satisfaction analysis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010033040.1A CN113111690B (en) 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system

Publications (2)

Publication Number Publication Date
CN113111690A CN113111690A (en) 2021-07-13
CN113111690B true CN113111690B (en) 2024-01-30

Family

ID=76708830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010033040.1A Active CN113111690B (en) 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system

Country Status (2)

Country Link
CN (1) CN113111690B (en)
WO (1) WO2021143667A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850247B (en) * 2021-12-01 2022-02-08 环球数科集团有限公司 Tourism video emotion analysis system fused with text information
CN114743252B (en) * 2022-06-10 2022-09-16 中汽研汽车检验中心(天津)有限公司 Feature point screening method, device and storage medium for head model
CN117131099B (en) * 2022-12-14 2024-08-02 广州数化智甄科技有限公司 Emotion data analysis method and device in product evaluation and product evaluation method
CN117122320B (en) * 2022-12-14 2024-07-05 广州数化智甄科技有限公司 Emotion data benchmarking method and device and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447001A (en) * 2018-10-31 2019-03-08 深圳市安视宝科技有限公司 A kind of dynamic Emotion identification method
CN109886110A (en) * 2019-01-17 2019-06-14 深圳壹账通智能科技有限公司 Micro- expression methods of marking, device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875633B (en) * 2018-06-19 2022-02-08 北京旷视科技有限公司 Expression detection and expression driving method, device and system and storage medium


Also Published As

Publication number Publication date
CN113111690A (en) 2021-07-13
WO2021143667A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
CN113111690B (en) Facial expression analysis method and system and satisfaction analysis method and system
CN110267119B (en) Video precision and chroma evaluation method and related equipment
CN110347872B (en) Video cover image extraction method and device, storage medium and electronic equipment
CN105160318A (en) Facial expression based lie detection method and system
CN106056064A (en) Face recognition method and face recognition device
CN110309799B (en) Camera-based speaking judgment method
CN113111968B (en) Image recognition model training method, device, electronic equipment and readable storage medium
CN110418204B (en) Video recommendation method, device, equipment and storage medium based on micro expression
CN103810490A (en) Method and device for confirming attribute of face image
CN110232331B (en) Online face clustering method and system
CN113065474A (en) Behavior recognition method and device and computer equipment
CN112770061A (en) Video editing method, system, electronic device and storage medium
CN110149531A (en) The method and apparatus of video scene in a kind of identification video data
JP2022540101A (en) POSITIONING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM
CN109876416A (en) A kind of rope skipping method of counting based on image information
CN110287912A (en) Method, apparatus and medium are determined based on the target object affective state of deep learning
CN106548114A (en) Image processing method and device
KR101783183B1 (en) Method and apparatus for emotion classification of smart device user
CN105224957B (en) A kind of method and system of the image recognition based on single sample
CN110490064A (en) Processing method, device, computer equipment and the computer storage medium of sports video data
CN113326829B (en) Method and device for recognizing gesture in video, readable storage medium and electronic equipment
CN113536947A (en) Face attribute analysis method and device
CN114463816A (en) Satisfaction determining method and device, processor and electronic equipment
CN106599765B (en) Method and system for judging living body based on video-audio frequency of object continuous pronunciation
CN114612934A (en) Gait sequence evaluation method and system based on quality dimension

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant