CN113111690A - Facial expression analysis method and system and satisfaction analysis method and system - Google Patents


Info

Publication number
CN113111690A
CN113111690A (application CN202010033040.1A)
Authority
CN
China
Prior art keywords
facial expression
determining
expression
facial
emotion
Prior art date
Legal status
Granted
Application number
CN202010033040.1A
Other languages
Chinese (zh)
Other versions
CN113111690B (en)
Inventor
郭明坤
Current Assignee
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd
Priority to CN202010033040.1A
Priority to PCT/CN2021/071233 (WO2021143667A1)
Publication of CN113111690A
Application granted
Publication of CN113111690B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a facial expression analysis method and system, comprising the following steps: acquiring a facial expression video clip to be analyzed and acquiring a picture stream in the video clip; analyzing the facial expression index of each frame of picture in the picture stream, and determining a facial expression spectrogram corresponding to the picture stream; determining a reference line corresponding to the face in a natural state according to the facial expression spectrogram, and determining a natural emotion area of the face in the natural state based on the reference line; and dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion area as a reference. The invention also provides a facial expression satisfaction analysis method and system. The invention utilizes the complete video information of the user's expression and fully considers the fluctuation of the expression, so that the real emotion of the user can be determined and the satisfaction of the user can be determined accurately.

Description

Facial expression analysis method and system and satisfaction analysis method and system
Technical Field
The invention relates to the technical field of data analysis, in particular to a facial expression analysis method and system and a satisfaction analysis method and system.
Background
In the related art, facial expression analysis is mostly performed by training a neural network model only on training data and the corresponding labeling information, and then feeding the object to be predicted into the trained model to obtain a facial expression analysis result. Facial expressions fluctuate, however: a face does not remain continuously happy, calm or angry from one second to the next, which makes the facial expression analysis result obtained in this way inaccurate.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method and a system for analyzing facial expressions and a method and a system for analyzing satisfaction, which utilize the complete video information of user expressions, fully consider the fluctuation of the expressions, determine the real emotion of the user, and accurately determine the satisfaction of the user.
The invention provides a facial expression analysis method, which comprises the following steps:
s1, acquiring a facial expression video clip to be analyzed and acquiring a picture stream in the video clip;
s2, analyzing the facial expression index of each frame in the picture stream, and determining a facial expression spectrogram corresponding to the picture stream;
s3, determining a reference line corresponding to the human face in a natural state according to the facial expression spectrogram, and determining a natural emotion area of the human face in the natural state based on the reference line;
and S4, dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion areas as a reference.
As a further improvement of the present invention, in S1, the picture stream in the video clip is obtained by frame-by-frame extraction, fixed-interval frame extraction, or key-frame extraction.
As a further improvement of the present invention, acquiring a picture stream in a video clip includes: segmenting the video clip to obtain a plurality of video sub-segments, and extracting at least one frame from each video sub-segment, either randomly or at a fixed position;
and determining a picture stream in the video clip according to the extracted plurality of picture frames.
As a further improvement of the present invention, the method further comprises: and carrying out face detection on each frame of picture in the picture stream to obtain a face image in each frame of picture.
As a further improvement of the present invention, S2 includes:
s21, dividing the face image into a plurality of areas, wherein each area comprises a plurality of key feature points for determining the expression index of the face;
s22, determining key feature points contained in each frame of picture, determining expression scores corresponding to the regions, and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions;
and S23, acquiring facial expression frequency spectrum diagrams corresponding to the picture streams according to the facial expression indexes of all the pictures.
As a further improvement of the present invention, S21 includes:
carrying out human face characteristic point identification on the human face image to obtain a plurality of characteristic points of the human face image;
identifying a plurality of key feature points for determining a facial expression index from the plurality of feature points;
and dividing the face into a plurality of areas according to the plurality of key feature points, wherein each area comprises a plurality of key feature points for determining the expression index of the face.
As a further improvement of the present invention, S22 includes:
determining at least one included angle between key feature point connecting lines in each region, and determining expression scores corresponding to each region according to the at least one included angle;
determining the weight corresponding to each region;
and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions and the weights corresponding to the regions.
As a further improvement of the present invention, S3 includes:
s31, determining a first interval with the highest frequency of appearance of the facial expression index in the facial expression spectrogram;
s32, determining a reference line corresponding to the face in a natural state according to the first interval;
and S33, taking the reference line as a center, and taking a second interval with a certain width range above and below the reference line as a natural emotion area of the human face in a natural state.
As a further improvement of the present invention, S32 includes:
determining a horizontal centerline of the first interval;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold and smaller than a second threshold, determining the horizontal center line as a reference line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold, determining a horizontal line corresponding to the first threshold as a reference line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is larger than or equal to a second threshold, determining a horizontal line corresponding to the second threshold as a reference line corresponding to the face in a natural state.
As a further improvement of the present invention, in S4, in the facial expression spectrogram, the region above the natural emotion region is determined as a positive emotion region, and the region below the natural emotion region is determined as a negative emotion region.
The invention also provides a facial expression satisfaction analysis method, which adopts the above facial expression analysis method and further comprises the following step: S5, analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram in each time period of the facial expression video clip to be analyzed, and determining the satisfaction of the user.
As a further improvement of the present invention, S5 includes:
s51, dividing the facial expression video clip to be analyzed into a plurality of time periods, and respectively calculating the proportion of different expressions in each time period according to the facial expression spectrogram;
s52, determining the weight corresponding to each time period;
and S53, determining a satisfaction result according to the proportion of different expressions in each time period and the corresponding weight of each time period.
The invention also provides a facial expression analysis system, which adopts the above facial expression analysis method and comprises:
the image acquisition module is used for acquiring a video clip of the facial expression to be analyzed and acquiring an image stream in the video clip;
the expression frequency spectrum module is used for analyzing the facial expression index of each frame of picture in the picture stream and determining a facial expression frequency spectrum graph corresponding to the picture stream;
the expression reference module is used for determining a reference line corresponding to the human face in a natural state according to the facial expression spectrogram and determining a natural emotion area of the human face in the natural state based on the reference line;
and the expression partitioning module is used for dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion area as a reference.
As a further improvement of the invention, in the picture acquisition module, a picture stream in the video clip is acquired by frame-by-frame or fixed interval frame extraction or key frame extraction.
As a further improvement of the invention, when acquiring the picture stream in the facial expression video clip, the picture acquisition module is configured to:
segmenting the video segments to obtain a plurality of video sub-segments, and randomly or fixedly extracting at least one frame in each video sub-segment;
and determining a picture stream in the video clip according to the extracted plurality of picture frames.
As a further improvement of the invention, the facial expression analysis system further comprises: and the face detection module is used for carrying out face detection on each frame of picture in the picture stream to obtain a face image in each frame of picture.
As a further improvement of the invention, the expression spectrum module comprises:
the face region dividing module is used for dividing the face image into a plurality of regions, and each region comprises a plurality of key feature points for determining a face expression index;
the expression index determining module is used for respectively determining key feature points contained in each region aiming at each frame of picture, determining expression scores corresponding to the regions, and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions;
and the facial expression spectrogram determining module is used for acquiring the facial expression spectrogram corresponding to the picture stream according to the facial expression indexes of all the pictures.
As a further improvement of the invention, the face region dividing module comprises:
carrying out human face characteristic point identification on the human face image to obtain a plurality of characteristic points of the human face image;
identifying a plurality of key feature points for determining a facial expression index from the plurality of feature points;
and dividing the face into a plurality of areas according to the plurality of key feature points, wherein each area comprises a plurality of key feature points for determining the expression index of the face.
As a further improvement of the invention, the expression index determining module comprises:
determining at least one included angle between key feature point connecting lines in each region, and determining expression scores corresponding to each region according to the at least one included angle;
determining the weight corresponding to each region;
and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions and the weights corresponding to the regions.
As a further improvement of the invention, the expression reference module comprises:
the frequency interval determining module is used for determining a first interval with the highest frequency of appearance of the facial expression index in the facial expression spectrogram;
the datum line determining module is used for determining a datum line corresponding to the human face in a natural state according to the first interval;
and the natural emotion area determining module is used for taking the datum line as a center and taking a second interval in a certain width range above and below the datum line as a natural emotion area of the human face in a natural state.
As a further improvement of the invention, the datum line determining module comprises:
determining a horizontal centerline of the first interval;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold and smaller than a second threshold, determining the horizontal center line as a reference line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold, determining a horizontal line corresponding to the first threshold as a reference line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is larger than or equal to a second threshold, determining a horizontal line corresponding to the second threshold as a reference line corresponding to the face in a natural state.
As a further improvement of the invention, in the expression partitioning module, in the facial expression spectrogram, a region above the natural emotion region is determined as a positive emotion region, and a region below the natural emotion region is determined as a negative emotion region.
The invention also provides a system for analyzing satisfaction degree of facial expression, which adopts the system for analyzing facial expression and further comprises:
and the satisfaction calculation module is used for analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram in each time period in the facial expression video clip to be analyzed, and determining the satisfaction of the user.
As a further improvement of the invention, the satisfaction calculation module comprises:
the time period expression ratio calculation module is used for dividing the facial expression video clip to be analyzed into a plurality of time periods and respectively calculating the ratio of different expressions in each time period according to the facial expression spectrogram;
the time period weight determining module is used for determining the weight corresponding to each time period;
and the satisfaction result calculating module is used for determining a satisfaction result according to the proportion of different expressions in each time period and the weight corresponding to each time period.
The invention also provides an electronic device comprising a memory and a processor, wherein the memory is used for storing one or more computer instructions, and the one or more computer instructions are executed by the processor to realize the facial expression analysis method and the facial expression satisfaction analysis method.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which is executed by a processor to implement the facial expression analysis method and the facial expression satisfaction analysis method.
The invention has the beneficial effects that:
the complete video information of the expression of the user is utilized, the fluctuation of the expression is fully considered, the individual natural state difference of the user is fully considered, the reference line corresponding to the natural state is obtained based on frequency analysis, and the real emotion of the user can be determined.
A reference interval corresponding to the natural state is set according to the reference line of the user, thereby avoiding the situation in which the whole clip is classified as positive or negative expression.
The emotion layering of the user is realized through the reference area, the time period of the user video clip is weighted, the emotion type and the time weight of the user are comprehensively considered, and the satisfaction degree of the user can be determined more accurately.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without undue inventive faculty.
Fig. 1 is a schematic flow chart of a method for analyzing facial expressions according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a method for analyzing satisfaction of facial expressions according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a facial expression spectrogram corresponding to a picture stream according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a reference interval for determining a reference line according to an embodiment of the disclosure;
FIG. 5 is a schematic illustration of a plurality of emotional areas according to an embodiment of the disclosure;
fig. 6 is a graph illustrating the frequency and percentage of the three emotion types (positive, natural, and negative) in each time period according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) are involved in the disclosed embodiments, the directional indications are only used to explain the relative positional relationship between components, the motion situation, and the like in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In addition, in the description of the present disclosure, the terms used are for illustrative purposes only and are not intended to limit the scope of the present disclosure. The terms "comprises" and/or "comprising" are used to specify the presence of stated elements, steps, operations, and/or components, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or components. The terms "first," "second," and the like may be used to describe various elements, not necessarily order, and not necessarily limit the elements. In addition, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified. These terms are only used to distinguish one element from another. These and/or other aspects will become apparent to those of ordinary skill in the art in view of the following drawings, and the description of the embodiments of the disclosure will be more readily understood by those of ordinary skill in the art. The drawings are only for purposes of illustrating the described embodiments of the disclosure. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated in the present disclosure may be employed without departing from the principles described in the present disclosure.
As shown in fig. 1, a method for analyzing facial expressions according to an embodiment of the present disclosure includes:
s1, acquiring the facial expression video clip to be analyzed and acquiring the picture stream in the video clip.
In an alternative embodiment, the picture stream in the video clip may be acquired by frame-by-frame extraction (i.e., extracting every frame), fixed-interval frame extraction (e.g., extracting one frame per second), or key-frame extraction (i.e., extracting frames according to changes in the picture). In this embodiment, when obtaining the picture stream in the video clip, the video clip may be segmented to obtain a plurality of video sub-segments, at least one frame is extracted from each video sub-segment, either randomly or at a fixed position, and the picture stream in the video clip is determined from the extracted picture frames. In this way, complete video information containing the user's expression is collected and the fluctuation of the facial expression is fully considered, avoiding the one-sidedness and inaccuracy of determining the user's emotion from a single frame of image.
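As an illustration only, the following Python sketch shows one possible implementation of the sub-segment sampling described above; the function name, the default number of sub-segments, and the use of OpenCV are assumptions rather than part of the disclosure.

```python
import random
import cv2  # OpenCV, assumed here purely for illustration

def sample_picture_stream(video_path, num_subsegments=10,
                          frames_per_subsegment=1, randomize=True):
    """Split a video clip into sub-segments and extract frame(s) from each."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    boundaries = [int(i * total / num_subsegments) for i in range(num_subsegments + 1)]

    picture_stream = []  # list of (timestamp_in_seconds, frame) pairs
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        if end <= start:
            continue
        for _ in range(frames_per_subsegment):
            # Random extraction, or a fixed position (the sub-segment midpoint)
            idx = random.randrange(start, end) if randomize else (start + end) // 2
            cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
            ok, frame = cap.read()
            if ok:
                picture_stream.append((idx / fps, frame))
    cap.release()
    return picture_stream
```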
S2, analyzing the facial expression index of each frame in the picture stream, and determining the facial expression spectrogram corresponding to the picture stream.
For example, the facial expression index ranges from 0 to 100. A higher index indicates a more positive expression, i.e., one closer to emotions such as joy or pleasant surprise; a lower index indicates a more negative expression, i.e., one closer to emotions such as anger or fear; an index near the middle indicates that the expression is in a natural state. For example, a facial expression spectrogram is generated from the facial expression index of each frame of picture and the corresponding time information, according to the timestamp information of each frame of picture in the picture stream. The generated facial expression spectrogram visually displays how the user's expression fluctuates.
In an optional embodiment, before analyzing the facial expression index of each frame in the picture stream, the method further includes: and carrying out face detection on each frame of picture in the picture stream to obtain a face image in each frame of picture, and analyzing the face expression index of each frame of picture to obtain a face expression spectrogram.
For face detection, algorithms such as MTCNN, SSD or YOLOv3 may be used; the face detection algorithm is not limited to these and may be selected as required.
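A minimal sketch of per-frame face detection follows; OpenCV's bundled Haar cascade is used here only as a stand-in for the MTCNN/SSD/YOLOv3 detectors mentioned above, and the helper name is an assumption.

```python
import cv2

# Haar cascade used here only as a placeholder for MTCNN/SSD/YOLOv3
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the largest detected face image in a frame, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # keep the largest box
    return frame[y:y + h, x:x + w]
```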
And outputting a facial expression index by analyzing each frame of picture.
In an alternative embodiment, S2 includes:
and S21, dividing the face image into a plurality of areas, wherein each area comprises a plurality of key feature points for determining the expression index of the face.
Optionally, the method includes performing facial feature point recognition on the face image to obtain a plurality of feature points of the face image, and identifying, from the plurality of feature points, a plurality of key feature points used for determining the facial expression index; the face is then divided into a plurality of regions according to the key feature points, each region containing a plurality of key feature points used for determining the facial expression index. Using all feature points directly to determine the facial expression index would increase the amount of computation; identifying key feature points reduces the computation while preserving the accuracy of the index. For example, a face image may have 106 feature points, from which key feature points for determining the facial expression index are identified, such as key feature points of the mouth, eyes and eyebrows. The feature points may be identified with a trained neural network model. Of course, the method is not limited to the above, and may be selected and adjusted adaptively.
In an optional implementation manner, key feature point recognition may be further performed on the face image to obtain a plurality of key feature points for determining the facial expression index. For example, the key feature points are key feature points of the mouth, key feature points of the eyes, and key feature points of the eyebrows, and the trained neural network model may be used to process the face image, for example, the face image is input into the trained neural network model to be processed, so as to obtain the key feature points of the mouth, the key feature points of the eyes, and the key feature points of the eyebrows in each frame of the face image.
Optionally, the face image may be divided according to the regions of reference facial organs to obtain a plurality of regions of the face image, and key feature points are extracted from each region to obtain the plurality of key feature points contained in each region. For example, with the mouth, eyes and eyebrows as the reference facial organs, three regions of the face image are obtained, and the images of the three regions are input into a mouth key feature point detection model, an eye key feature point detection model and an eyebrow key feature point detection model, respectively, to obtain the key feature points contained in each region.
S22, determining key feature points contained in each region respectively for each frame of picture, determining expression scores corresponding to the regions, and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions.
Optionally, at least one included angle between connecting lines of key feature points in each region is determined, and the expression score corresponding to each region is determined according to the at least one included angle; the weight corresponding to each region is determined; and the facial expression index of each frame of picture is determined according to the expression scores and the weights of the regions. For example, each region corresponds to a weight, the weights of different regions may differ, and the weights of all regions sum to 1; each region includes at least one included angle, each included angle corresponds to an expression score (for example, on a 100-point scale), and the facial expression index is obtained by weighting the regional scores. Because the face has many feature points, computing the included angle of the connecting line between every pair of feature points would increase the amount of computation; once the key features used for weighting the facial expression index have been screened out, the connecting-line included angles can be computed directly on the key feature points, reducing the computation. The key feature points in one region may form several connecting lines, and target connecting lines may be selected for the angle calculation, such as the angle between the connecting lines of adjacent key feature points, or the angle between the lines connecting the key feature points at the two ends and the middle key feature point. In this way, the amount of computation is reduced and the processing efficiency is improved while the accuracy of the facial expression index is ensured.
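The sketch below illustrates the weighted angle-based scoring just described; the region weights, the choice of angle, and the angle-to-score mapping are hypothetical placeholders, not values from the disclosure.

```python
import math

def angle_between(p0, p1, p2):
    """Angle in degrees at vertex p1 formed by the connecting lines p1-p0 and p1-p2."""
    v1 = (p0[0] - p1[0], p0[1] - p1[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        return 0.0
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical region weights; as stated above, they sum to 1.
REGION_WEIGHTS = {"mouth": 0.5, "eyes": 0.3, "eyebrows": 0.2}

def facial_expression_index(region_keypoints, angle_to_score):
    """Weighted sum of per-region expression scores, each on a 0-100 scale.

    region_keypoints: {region_name: [(x, y), ...]} key feature points per region.
    angle_to_score: callable mapping (region_name, angle_degrees) to a 0-100 score.
    """
    index = 0.0
    for region, points in region_keypoints.items():
        # Example target angle: at the middle key point, between the lines to the
        # key points at the two ends of the region.
        ang = angle_between(points[0], points[len(points) // 2], points[-1])
        index += REGION_WEIGHTS[region] * angle_to_score(region, ang)
    return index
```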
In an optional implementation manner, the contour information of the facial organ included in each region may be further determined according to the key feature points included in each region, and the expression score corresponding to each region is respectively determined according to the contour information of the facial organ included in each region.
And S23, acquiring facial expression frequency spectrum diagrams corresponding to the picture streams according to the facial expression indexes of all the pictures.
After the above weighting calculation, the facial expression index corresponding to each frame of picture is obtained, and then the facial expression spectrogram corresponding to the picture stream is obtained, as shown in fig. 3.
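The facial expression spectrogram of fig. 3 is, in effect, the time series of per-frame indices. A minimal sketch of assembling it, reusing the hypothetical helpers from the sketches above:

```python
def build_expression_spectrogram(picture_stream, index_fn):
    """Return the spectrogram as a list of (timestamp, facial_expression_index) pairs."""
    spectrogram = []
    for timestamp, frame in picture_stream:
        face = detect_face(frame)  # from the face-detection sketch above
        if face is not None:
            spectrogram.append((timestamp, index_fn(face)))
    return sorted(spectrogram)
```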
And S3, determining a reference line corresponding to the face in the natural state according to the facial expression spectrogram, and determining a natural emotion area of the face in the natural state based on the reference line.
Because everyone's expression in the natural state is different (some people naturally have a stern face, while others naturally look as if they are smiling), the reference line of the natural state differs from person to person. By finding this reference line, the user's real emotion can be determined more accurately against his or her own baseline, effectively improving the accuracy of facial expression recognition.
In an alternative embodiment, S3 includes:
S31, determining a first interval in which the facial expression index appears with the highest frequency in the facial expression spectrogram. A person is in a natural state most of the time while receiving a service, so the horizontal line through the most frequently occurring index value could serve as the reference line. However, individual index values are rarely exactly identical, so a reference interval (i.e., the first interval), namely the interval in which the index appears most frequently, is found instead. For example, an interval width may be preset, and the first interval with the highest frequency of facial expression indices in the facial expression spectrogram is determined according to that width.
And S32, determining a reference line corresponding to the human face in the natural state according to the first interval.
The first interval with the highest frequency of appearance of the facial expression indexes in the facial expression spectrogram can reflect the expression state of the current user in the natural state more truly, and accordingly the obtained datum line can accurately reflect the expression of the current user in the natural state.
Optionally, S32 includes:
determining a horizontal centerline of the first interval;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold and smaller than a second threshold, determining the horizontal center line as a reference line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold, determining a horizontal line corresponding to the first threshold as a reference line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is larger than or equal to a second threshold, determining a horizontal line corresponding to the second threshold as a reference line corresponding to the face in a natural state.
As shown in fig. 4, the interval width may, for example, be set to 20. When determining the reference line, the facial expression spectrogram is scanned from bottom to top with a window of this width, and the interval with the highest frequency is found; the reference line is the horizontal center line of that interval. A schematic of the determined reference line is shown in fig. 3.
As shown in fig. 3, in order to avoid the situation where the whole clip reads as positive or negative, the reference line may, for example, be constrained to lie between 30 and 60: if the actually measured reference line is higher than 60 it is set to 60, and if it is lower than 30 it is set to 30. Of course, the values of the first and second thresholds of the reference line may be adjusted adaptively and are not limited to these values.
And S33, taking the reference line as a center, and taking a second interval in a certain width range above and below the reference line as a natural emotion area of the human face in a natural state.
As shown in fig. 4, after the reference line is acquired, the region extending, for example, 15 above and 15 below the reference line (a total width of 30) may be taken as the natural emotion area, i.e., the area representing the face in its natural state. Of course, the width of the natural emotion area may be adjusted adaptively and is not limited to these values.
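A sketch combining steps S31 to S33: the densest interval of preset width, the reference line clamped between the first and second thresholds, and the natural emotion area around it. The default values follow the examples above (interval width 20, thresholds 30 and 60, half-width 15); the scan step size and function name are assumptions.

```python
def natural_emotion_band(spectrogram, interval_width=20, step=1,
                         first_threshold=30, second_threshold=60,
                         band_half_width=15):
    """Return (reference_line, (band_low, band_high)) for the natural state."""
    indices = [idx for _, idx in spectrogram]

    # S31: scan the 0-100 index axis from bottom to top and keep the interval
    # (the "first interval") that contains the most samples.
    best_low, best_count = 0, -1
    low = 0
    while low + interval_width <= 100:
        count = sum(low <= v <= low + interval_width for v in indices)
        if count > best_count:
            best_low, best_count = low, count
        low += step

    # S32: the reference line is the horizontal center of the first interval,
    # clamped to [first_threshold, second_threshold].
    reference = best_low + interval_width / 2
    reference = min(max(reference, first_threshold), second_threshold)

    # S33: the natural emotion area extends band_half_width above and below.
    return reference, (reference - band_half_width, reference + band_half_width)
```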
And S4, dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion areas as a reference.
As shown in fig. 5, in the facial expression spectrogram, the region above the natural emotion area may be determined as the positive emotion area, and the region below it as the negative emotion area. Using the reference region, i.e., the natural emotion area, the user's emotions can be layered, avoiding classifying the emotion as positive or negative throughout the whole clip.
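Given the natural emotion area, each sample of the spectrogram can then be assigned to one of the three emotion areas; a minimal sketch, with the labels chosen here for illustration:

```python
def classify_emotions(spectrogram, band):
    """Label each (timestamp, index) sample as 'positive', 'natural' or 'negative'."""
    low, high = band
    labels = []
    for timestamp, idx in spectrogram:
        if idx > high:
            labels.append((timestamp, "positive"))
        elif idx < low:
            labels.append((timestamp, "negative"))
        else:
            labels.append((timestamp, "natural"))
    return labels
```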
As shown in fig. 2, the facial expression satisfaction analysis method according to the embodiment of the present disclosure adopts the facial expression analysis method described above and further includes: S5, analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram in each time period of the facial expression video clip to be analyzed, and determining the satisfaction of the user.
In an alternative embodiment, S5 includes:
s51, dividing the facial expression video clip to be analyzed into a plurality of time periods, and respectively calculating the proportion of different expressions in each time period according to the facial expression spectrogram;
s52, determining the weight corresponding to each time period;
and S53, determining a satisfaction result according to the proportion of different expressions in each time period and the corresponding weight of each time period.
Since the satisfaction of a user who "comes in crying and leaves smiling" is clearly higher than that of one who "comes in smiling and leaves crying", the same emotion needs to be weighted differently depending on when it occurs. For example, the weights may be set as follows: the emotion in the first 20% of the time is given a weight of 10%, the emotion in the last 10% of the time is given a weight of 60%, and the middle part is given the remaining 30%. Of course, the time weights may be adjusted as appropriate. After the plurality of emotion areas are obtained according to step S4, the proportion of each emotion area in the facial expression spectrogram is counted separately for the different time periods. In each time period, the weight corresponding to each emotion area is determined according to its proportion in the facial expression spectrogram, and the satisfaction coefficient of the user is obtained by weighting the per-period weights with the per-period emotion-area proportions. The satisfaction coefficient may be obtained by subtracting the weighted value of the negative emotions from the weighted value of the positive emotions. By weighting the time periods of the user's video clip and combining them with the proportion of each emotion area in each time period, the emotion type and the time weight are considered together, and the user's satisfaction can be determined more accurately.
As shown in fig. 6, in the present embodiment, it can be seen that the positive emotions of the user gradually increase and the negative emotions gradually decrease. In the example shown in fig. 6, the satisfaction coefficient of the user is calculated as (31% × 10% + 28% × 30% + 37% × 60%) - (31% × 10% + 22% × 30% + 11% × 60%) = 0.174.
Alternatively, the satisfaction result may be determined from the satisfaction coefficient. For example, when the satisfaction coefficient is greater than or equal to a satisfaction threshold, the result is satisfied; when it is less than or equal to a dissatisfaction threshold, the result is dissatisfied; and when it lies between the dissatisfaction threshold and the satisfaction threshold, the result is average. For example, a value of 0.1 or more may be taken as satisfied, a value of -0.1 or less as dissatisfied, and a value between -0.1 and 0.1 as average. The satisfaction threshold and the dissatisfaction threshold may be adjusted adaptively.
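A sketch of the satisfaction calculation described above: the clip is split into time periods, each period is weighted, and the weighted negative share is subtracted from the weighted positive share. The 20%/70%/10% time split with weights 10%/30%/60% and the 0.1/-0.1 thresholds follow the examples above; the function names and everything else are assumptions.

```python
import bisect

def satisfaction_coefficient(labels,
                             time_splits=(0.2, 0.7, 0.1),
                             time_weights=(0.1, 0.3, 0.6)):
    """Weighted positive share minus weighted negative share over the time periods."""
    if not labels:
        return 0.0
    labels = sorted(labels)
    t0, t1 = labels[0][0], labels[-1][0]
    duration = max(t1 - t0, 1e-9)

    # Cumulative period boundaries as fractions of the clip duration.
    cumulative, s = [], 0.0
    for split in time_splits:
        s += split
        cumulative.append(s)

    counts = [{"positive": 0, "negative": 0, "total": 0} for _ in time_splits]
    for t, label in labels:
        frac = (t - t0) / duration
        k = min(bisect.bisect_left(cumulative, frac), len(time_splits) - 1)
        counts[k]["total"] += 1
        if label in ("positive", "negative"):
            counts[k][label] += 1

    coefficient = 0.0
    for c, weight in zip(counts, time_weights):
        if c["total"]:
            coefficient += weight * (c["positive"] - c["negative"]) / c["total"]
    return coefficient

def satisfaction_result(coefficient, satisfied=0.1, dissatisfied=-0.1):
    """Map the coefficient to satisfied / dissatisfied / average."""
    if coefficient >= satisfied:
        return "satisfied"
    if coefficient <= dissatisfied:
        return "dissatisfied"
    return "average"
```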
The facial expression analysis system of the embodiment of the present disclosure includes: the system comprises a picture acquisition module, an expression spectrum module, an expression reference module and an expression partition module.
The image acquisition module is configured to acquire a facial expression video clip to be analyzed and acquire an image stream in the video clip.
In an alternative embodiment, the picture stream in the video clip may be acquired by frame-by-frame extraction (i.e., extracting every frame), fixed-interval frame extraction (e.g., extracting one frame per second), or key-frame extraction (i.e., extracting frames according to changes in the picture). In this embodiment, when obtaining the picture stream in the video clip, the video clip may be segmented to obtain a plurality of video sub-segments, at least one frame is extracted from each video sub-segment, either randomly or at a fixed position, and the picture stream in the video clip is determined from the extracted picture frames. In this way, complete video information containing the user's expression is collected and the fluctuation of the facial expression is fully considered, avoiding the one-sidedness and inaccuracy of determining the user's emotion from a single frame of image.
The expression spectrum module is configured to analyze the facial expression index of each frame of picture in the picture stream and determine a facial expression spectrum corresponding to the picture stream.
For example, the facial expression index ranges from 0 to 100. A higher index indicates a more positive expression, i.e., one closer to emotions such as joy or pleasant surprise; a lower index indicates a more negative expression, i.e., one closer to emotions such as anger or fear; an index near the middle indicates that the expression is in a natural state. For example, a facial expression spectrogram is generated from the facial expression index of each frame of picture and the corresponding time information, according to the timestamp information of each frame of picture in the picture stream. The generated facial expression spectrogram visually displays how the user's expression fluctuates.
In an optional implementation manner, before analyzing the facial expression index of each frame of picture in the picture stream, a face detection module is further included, and is configured to perform face detection on each frame of picture in the picture stream, acquire a face image in each frame of picture, and analyze the facial expression index of each frame of picture to acquire a facial expression spectrogram.
For face detection, algorithms such as MTCNN, SSD or YOLOv3 may be used; the face detection algorithm is not limited to these and may be selected as required. A facial expression index is output by analyzing each frame of picture.
In an optional implementation manner, the expression spectrum module may include a face region dividing module, an expression index determining module, and an expression spectrum map determining module.
The face region dividing module is configured to divide the face image into a plurality of regions, and each region comprises a plurality of key feature points for determining the expression index of the face.
In an optional implementation, the face region dividing module is configured to: perform facial feature point recognition on the face image to obtain a plurality of feature points of the face image, and identify, from the plurality of feature points, a plurality of key feature points used for determining the facial expression index; and divide the face into a plurality of regions according to the key feature points, each region containing a plurality of key feature points used for determining the facial expression index. Using all feature points directly to determine the facial expression index would increase the amount of computation; identifying key feature points reduces the computation while preserving the accuracy of the index. For example, a face image may have 106 feature points, from which key feature points for determining the facial expression index are identified, such as key feature points of the mouth, eyes and eyebrows. The key feature points may be identified with a trained neural network model. Of course, the method is not limited to the above, and may be selected and adjusted adaptively.
In an optional implementation manner, the face region division module may further perform key feature point recognition on the face image to obtain a plurality of key feature points for determining the expression index of the face. For example, the key feature points are key feature points of the mouth, key feature points of the eyes, and key feature points of the eyebrows, and the trained neural network model may be used to process the face image, for example, the face image is input into the trained neural network model to be processed, so as to obtain the key feature points of the mouth, the key feature points of the eyes, and the key feature points of the eyebrows in each frame of the face image.
Optionally, the face region division module may further divide the face image according to a region of a reference facial organ to obtain a plurality of regions of the face image, and extract key feature points from the face image of each region respectively to obtain a plurality of key feature points included in each region. For example, three regions of a face image can be obtained by referring to a facial organ including a mouth, eyes, and eyebrows, and images of the three regions can be input into the mouth key feature point detection model, the eye key feature point detection model, and the eyebrow key feature point detection model, respectively, to obtain a plurality of key feature points included in each region, respectively.
The expression index determining module is configured to respectively determine key feature points contained in each region aiming at each frame of picture, determine expression scores corresponding to the regions, and determine the facial expression index of each frame of picture according to the expression scores corresponding to the regions.
In an optional embodiment, the expression index determination module is configured to: determine at least one included angle between connecting lines of key feature points in each region, and determine the expression score corresponding to each region according to the at least one included angle; determine the weight corresponding to each region; and determine the facial expression index of each frame of picture according to the expression scores and the weights of the regions. For example, each region corresponds to a weight, the weights of different regions may differ, and the weights of all regions sum to 1; each region includes at least one included angle, each included angle corresponds to an expression score (for example, on a 100-point scale), and the facial expression index is obtained by weighting the regional scores. Because the face has many feature points, computing the included angle of the connecting line between every pair of feature points would increase the amount of computation; once the key features used for weighting the facial expression index have been screened out, the connecting-line included angles can be computed directly on the key feature points, reducing the computation. The key feature points in one region may form several connecting lines, and target connecting lines may be selected for the angle calculation, such as the angle between the connecting lines of adjacent key feature points, or the angle between the lines connecting the key feature points at the two ends and the middle key feature point. In this way, the amount of computation is reduced and the processing efficiency is improved while the accuracy of the facial expression index is ensured.
In an optional implementation manner, the expression index determining module may further determine contour information of facial organs included in each region according to the key feature points included in each region, and determine expression scores corresponding to each region according to the contour information of the facial organs included in each region.
And the expression frequency spectrum graph determining module is configured to obtain the facial expression frequency spectrum graphs corresponding to the picture streams according to the facial expression indexes of all the pictures. The facial expression spectrogram determining module obtains the facial expression index corresponding to each frame of picture after the weighted calculation of the facial expression index, and further obtains the facial expression spectrogram corresponding to the picture stream, as shown in fig. 3.
And the expression reference module is configured to determine a reference line corresponding to the face in a natural state according to the facial expression spectrogram, and determine a natural emotion area of the face in the natural state based on the reference line.
Because everyone's expression in the natural state is different (some people naturally have a stern face, while others naturally look as if they are smiling), the reference line of the natural state differs from person to person. By finding this reference line, the user's real emotion can be determined more accurately against his or her own baseline, effectively improving the accuracy of facial expression recognition.
In an optional implementation manner, the expression reference module comprises a frequency interval determination module, a reference line determination module and a natural emotion area determination module.
And the frequency interval determining module is configured to determine a first interval with the highest frequency of appearance of the facial expression indexes in the facial expression spectrogram.
A person is in a natural state most of the time while receiving a service, so the horizontal line through the most frequently occurring index value could serve as the reference line. However, individual index values are rarely exactly identical, so a reference interval (i.e., the first interval), namely the interval in which the index appears most frequently, is found instead. For example, an interval width may be preset, and the first interval with the highest frequency of facial expression indices in the facial expression spectrogram is determined according to that width.
The datum line determining module is configured to determine a datum line corresponding to the human face in a natural state according to the first section.
The first interval with the highest frequency of appearance of the facial expression indexes in the facial expression spectrogram can reflect the expression state of the current user in the natural state more truly, and accordingly the obtained datum line can accurately reflect the expression of the current user in the natural state.
In an alternative embodiment, the baseline determination module includes:
determining a horizontal centerline of the first interval;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold and smaller than a second threshold, determining the horizontal center line as a reference line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold, determining a horizontal line corresponding to the first threshold as a reference line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is larger than or equal to a second threshold, determining a horizontal line corresponding to the second threshold as a reference line corresponding to the face in a natural state.
As shown in fig. 4, the interval width may, for example, be set to 20. When determining the reference line, the facial expression spectrogram is scanned from bottom to top with a window of this width, and the interval with the highest frequency is found; the reference line is the horizontal center line of that interval. A schematic of the reference line is shown in fig. 3.
As shown in fig. 3, in order to avoid the situation where the whole clip reads as positive or negative, the reference line may, for example, be constrained to lie between 30 and 60: if the actually measured reference line is higher than 60 it is set to 60, and if it is lower than 30 it is set to 30. Of course, the values of the first and second thresholds of the reference line may be adjusted adaptively and are not limited to these values.
And the natural emotion area determination module is configured to take the reference line as a center, and take a second interval in a certain width range above and below the reference line as a natural emotion area of the human face in a natural state.
As shown in fig. 4, after the reference line is acquired, the region extending, for example, 15 above and 15 below the reference line (a total width of 30) may be taken as the natural emotion area, i.e., the area representing the face in its natural state. Of course, the width of the natural emotion area may be adjusted adaptively and is not limited to these values.
The expression partitioning module is configured to divide the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions, taking the natural emotion area as the reference. As shown in fig. 5, in the facial expression spectrogram, the region above the natural emotion area may be determined as the positive emotion area, and the region below it as the negative emotion area. Using the reference region, i.e., the natural emotion area, the user's emotions can be layered, avoiding classifying the emotion as positive or negative throughout the whole clip.
The facial expression satisfaction analysis system in the embodiment of the disclosure adopts the facial expression analysis system, and is characterized by further comprising a satisfaction calculation module. And the satisfaction calculation module is configured to analyze and calculate each emotion area corresponding to different expressions in the facial expression spectrogram in each time period in the facial expression video clip to be analyzed, and determine the satisfaction of the user.
In an alternative embodiment, the satisfaction calculation module comprises:
and the time period expression ratio calculation module is configured to divide the facial expression video clip to be analyzed into a plurality of time periods, and respectively calculate the ratio of different expressions in each time period according to the facial expression spectrogram.
And the time period weight determining module is configured to determine the weight corresponding to each time period.
And the satisfaction result calculating module is configured to determine a satisfaction result according to the proportion of different expressions in each time period and the weight corresponding to each time period.
Since the satisfaction of a user who arrives in tears but leaves with a smile is clearly higher than that of a user who arrives smiling but leaves in tears, emotions occurring at different times need to be weighted differently. For example, the weights may be set as follows: the emotion in the first 20% of the time is weighted 10%, the emotion in the last 10% of the time is weighted 60%, and the emotion in the middle portion is weighted the remaining 30%. Of course, the time weights may be appropriately adjusted. After the emotion areas are obtained by the expression partitioning module, the proportion of each emotion area in the facial expression spectrogram is counted separately for the different time periods. In each time period, the weight corresponding to each emotion area is determined according to the proportion of that emotion area in the facial expression spectrogram, and the weight of each time period is then combined with the weights of the emotion areas within that period in a weighted calculation to obtain the satisfaction of the user. The user satisfaction may be obtained by subtracting the weighted value of the negative emotions from the weighted value of the positive emotions. Because the time periods of the user's video clip are weighted and the proportions of the emotion areas in each time period are taken into account, both the user's emotion type and the time weight are considered, so the satisfaction of the user can be determined more accurately.
As shown in fig. 6, in the present embodiment it can be seen that the user's positive emotions gradually increase while the negative emotions gradually decrease. According to the example shown in fig. 6, the satisfaction coefficient of the user is calculated as (31% × 10% + 28% × 30% + 37% × 60%) − (31% × 10% + 22% × 30% + 11% × 60%) = 0.174.
Alternatively, the satisfaction result may be determined according to the satisfaction coefficient. For example, when the satisfaction coefficient is greater than or equal to a satisfaction threshold, the satisfaction result is satisfied; when the satisfaction coefficient is less than or equal to a dissatisfaction threshold, the satisfaction result is dissatisfied; and when the satisfaction coefficient is greater than the dissatisfaction threshold and less than the satisfaction threshold, the satisfaction result is neutral. For example, a coefficient above 0.1 may be regarded as satisfied, a coefficient below −0.1 as dissatisfied, and a coefficient between −0.1 and 0.1 as neutral. The satisfaction threshold and the dissatisfaction threshold may be adaptively adjusted.
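The weighted calculation and the threshold mapping can be illustrated with the following sketch, which reproduces the example of fig. 6. The function names satisfaction_coefficient and satisfaction_result are assumptions made for this illustration, while the proportions, time weights, and thresholds are the example values given above.

```python
def satisfaction_coefficient(period_proportions, period_weights):
    """Weighted positive share minus weighted negative share over all periods.
    period_proportions holds one (positive, negative) pair per time period."""
    positive = sum(p * w for (p, _), w in zip(period_proportions, period_weights))
    negative = sum(n * w for (_, n), w in zip(period_proportions, period_weights))
    return positive - negative

def satisfaction_result(coefficient, satisfied_threshold=0.1, dissatisfied_threshold=-0.1):
    """Map the satisfaction coefficient to a result using the two thresholds."""
    if coefficient >= satisfied_threshold:
        return "satisfied"
    if coefficient <= dissatisfied_threshold:
        return "dissatisfied"
    return "neutral"

# Worked example of fig. 6: (positive, negative) proportions per time period,
# with time weights of 10%, 30% and 60%.
coefficient = satisfaction_coefficient(
    [(0.31, 0.31), (0.28, 0.22), (0.37, 0.11)],
    [0.10, 0.30, 0.60],
)
print(round(coefficient, 3), satisfaction_result(coefficient))  # 0.174 satisfied
```

Running the sketch reproduces the satisfaction coefficient of 0.174 from the example above and maps it to a satisfied result.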
According to the facial expression analysis method and system and the satisfaction analysis method and system of the present disclosure, the complete video information of the user's expression is utilized, the fluctuation of the facial expression and the differences between individual users' natural states are fully considered, and the reference line corresponding to the natural state is obtained through frequency analysis, so that the user's real emotion can be determined. A reference interval corresponding to the natural state is set according to the user's reference line, which avoids results in which the expression appears positive or negative throughout the entire video. Emotion stratification of the user is realized through the reference area, the time periods of the user's video clip are weighted, and the user's emotion type and the time weight are considered together, so that the satisfaction of the user can be determined more accurately.
The present disclosure also relates to an electronic device, such as a server or a terminal. The electronic device includes: at least one processor; a memory communicatively coupled to the at least one processor; and a communication component communicatively coupled to the storage medium, the communication component receiving and transmitting data under control of the processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to implement the facial expression analysis method and the facial expression satisfaction analysis method of the above embodiments.
In an alternative embodiment, the memory, as a non-volatile computer-readable storage medium, is used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor executes the various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory, thereby implementing the above-described facial expression analysis method and facial expression satisfaction analysis method.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store a list of options and the like. Further, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be connected to the external device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory and, when executed by the one or more processors, perform the facial expression analysis method and the facial expression satisfaction analysis method of any of the method embodiments described above.
The above product can execute the facial expression analysis method and the facial expression satisfaction analysis method provided by the embodiments of the present application, and has the corresponding functional modules and beneficial effects for executing these methods. For technical details not described in detail in this embodiment, reference may be made to the facial expression analysis method and the facial expression satisfaction analysis method provided by the embodiments of the present application.
The present disclosure also relates to a computer-readable storage medium storing a computer-readable program for causing a computer to execute some or all of the above-described embodiments of the facial expression analysis method and the facial expression satisfaction analysis method.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing the relevant hardware, where the program is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Furthermore, those of ordinary skill in the art will appreciate that, while some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features from different embodiments are meant to fall within the scope of the disclosure and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It will be understood by those skilled in the art that while the present disclosure has been described with reference to exemplary embodiments, various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the disclosure not be limited to the particular embodiment disclosed, but that the disclosure will include all embodiments falling within the scope of the appended claims.

Claims (16)

1. A facial expression analysis method is characterized by comprising the following steps:
s1, acquiring a facial expression video clip to be analyzed and acquiring a picture stream in the video clip;
s2, analyzing the facial expression index of each frame in the picture stream, and determining a facial expression spectrogram corresponding to the picture stream;
s3, determining a reference line corresponding to the human face in a natural state according to the facial expression spectrogram, and determining a natural emotion area of the human face in the natural state based on the reference line;
and S4, dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion areas as a reference.
2. The method for analyzing facial expressions according to claim 1, wherein in S1, the picture stream in the video clip is obtained by frame-by-frame or fixed-interval frame extraction or key frame extraction.
3. The method of claim 2, wherein the obtaining of the picture stream in the facial expression video clip comprises: segmenting the video segments to obtain a plurality of video sub-segments, and randomly or fixedly extracting at least one frame in each video sub-segment;
and determining a picture stream in the video clip according to the extracted plurality of picture frames.
4. The method of analyzing facial expressions of claim 1, further comprising: and carrying out face detection on each frame of picture in the picture stream to obtain a face image in each frame of picture.
5. The method of analyzing facial expressions according to claim 4, wherein the step S2 includes:
s21, dividing the face image into a plurality of areas, wherein each area comprises a plurality of key feature points for determining the expression index of the face;
s22, respectively determining key feature points contained in each region aiming at each frame of picture, determining expression scores corresponding to the regions, and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions;
and S23, acquiring facial expression frequency spectrum diagrams corresponding to the picture streams according to the facial expression indexes of all the pictures.
6. The method for analyzing facial expressions according to claim 5, wherein the S21 includes:
carrying out human face characteristic point identification on the human face image to obtain a plurality of characteristic points of the human face image;
identifying a plurality of key feature points for determining a facial expression index from the plurality of feature points;
and dividing the face into a plurality of areas according to the plurality of key feature points, wherein each area comprises a plurality of key feature points for determining the expression index of the face.
7. The method for analyzing facial expressions according to claim 5, wherein the S22 includes:
determining at least one included angle between key feature point connecting lines in each region, and determining expression scores corresponding to each region according to the at least one included angle;
determining the weight corresponding to each region;
and determining the facial expression index of each frame of picture according to the expression scores corresponding to the regions and the weights corresponding to the regions.
8. The method for analyzing facial expressions according to claim 1, wherein S3 includes:
s31, determining a first interval with the highest frequency of appearance of the facial expression index in the facial expression spectrogram;
s32, determining a reference line corresponding to the face in a natural state according to the first interval;
and S33, taking the reference line as a center, and taking a second interval in a certain width range above and below the reference line as a natural emotion area of the human face in a natural state.
9. The method for analyzing facial expressions according to claim 8, wherein S32 includes:
determining a horizontal centerline of the first interval;
if the facial expression index corresponding to the horizontal center line is larger than a first threshold and smaller than a second threshold, determining the horizontal center line as a reference line corresponding to the face in a natural state;
if the facial expression index corresponding to the horizontal center line is smaller than or equal to a first threshold, determining a horizontal line corresponding to the first threshold as a reference line corresponding to the face in a natural state;
and if the facial expression index corresponding to the horizontal center line is larger than or equal to a second threshold, determining a horizontal line corresponding to the second threshold as a reference line corresponding to the face in a natural state.
10. The method for analyzing facial expressions according to claim 1, wherein in S4, in the facial expression spectrogram, the region above the natural emotion region is determined as a positive emotion region, and the region below the natural emotion region is determined as a negative emotion region.
11. A method for analyzing satisfaction of facial expressions, comprising, after the method for analyzing facial expressions according to any one of claims 1 to 10: and S5, analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram in each time period in the facial expression video clip to be analyzed, and determining the satisfaction degree of the user.
12. The method for analyzing satisfaction of facial expression according to claim 11, wherein S5 comprises:
s51, dividing the facial expression video clip to be analyzed into a plurality of time periods, and respectively calculating the proportion of different expressions in each time period according to the facial expression spectrogram;
s52, determining the weight corresponding to each time period;
and S53, determining a satisfaction result according to the proportion of different expressions in each time period and the corresponding weight of each time period.
13. A facial expression analysis system, characterized in that a facial expression analysis method according to any one of claims 1 to 10 is adopted, comprising:
the image acquisition module is used for acquiring a video clip of the facial expression to be analyzed and acquiring an image stream in the video clip;
the expression frequency spectrum module is used for analyzing the facial expression index of each frame of picture in the picture stream and determining a facial expression frequency spectrum graph corresponding to the picture stream;
the expression reference module is used for determining a reference line corresponding to the human face in a natural state according to the facial expression spectrogram and determining a natural emotion area of the human face in the natural state based on the reference line;
and the expression partitioning module is used for dividing the facial expression spectrogram into a plurality of emotion areas corresponding to different expressions by taking the natural emotion area as a reference.
14. A facial expression satisfaction analyzing system according to claim 13, wherein the facial expression satisfaction analyzing system further comprises:
and the satisfaction calculation module is used for analyzing and calculating each emotion area corresponding to different expressions in the facial expression spectrogram in each time period in the facial expression video clip to be analyzed, and determining the satisfaction of the user.
15. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any of claims 1-12.
16. A computer-readable storage medium, on which a computer program is stored, the computer program being executable by a processor for implementing the method according to any one of claims 1-12.
CN202010033040.1A 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system Active CN113111690B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010033040.1A CN113111690B (en) 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system
PCT/CN2021/071233 WO2021143667A1 (en) 2020-01-13 2021-01-12 Facial expression analysis method and system, and facial expression-based satisfaction analysis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010033040.1A CN113111690B (en) 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system

Publications (2)

Publication Number Publication Date
CN113111690A (en) 2021-07-13
CN113111690B CN113111690B (en) 2024-01-30

Family

ID=76708830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010033040.1A Active CN113111690B (en) 2020-01-13 2020-01-13 Facial expression analysis method and system and satisfaction analysis method and system

Country Status (2)

Country Link
CN (1) CN113111690B (en)
WO (1) WO2021143667A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850247B (en) * 2021-12-01 2022-02-08 环球数科集团有限公司 Tourism video emotion analysis system fused with text information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190384967A1 (en) * 2018-06-19 2019-12-19 Beijing Kuangshi Technology Co., Ltd. Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
CN109447001A (en) * 2018-10-31 2019-03-08 深圳市安视宝科技有限公司 A kind of dynamic Emotion identification method
CN109886110A (en) * 2019-01-17 2019-06-14 深圳壹账通智能科技有限公司 Micro- expression methods of marking, device, computer equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743252A (en) * 2022-06-10 2022-07-12 中汽研汽车检验中心(天津)有限公司 Feature point screening method, device and storage medium for head model
CN114743252B (en) * 2022-06-10 2022-09-16 中汽研汽车检验中心(天津)有限公司 Feature point screening method, device and storage medium for head model
CN117122320A (en) * 2022-12-14 2023-11-28 广州数化智甄科技有限公司 Emotion data benchmarking method and device and computer readable storage medium
CN117131099A (en) * 2022-12-14 2023-11-28 广州数化智甄科技有限公司 Emotion data analysis method and device in product evaluation and product evaluation method
CN117122320B (en) * 2022-12-14 2024-07-05 广州数化智甄科技有限公司 Emotion data benchmarking method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN113111690B (en) 2024-01-30
WO2021143667A1 (en) 2021-07-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant