CN116597497A - Data acquisition and analysis method for AI (artificial intelligence) recognition of facial expressions - Google Patents

Data acquisition and analysis method for AI (artificial intelligence) recognition of facial expressions

Info

Publication number
CN116597497A
Authority
CN
China
Prior art keywords
expression
data
face
time
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310720238.0A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaoxing Maimang Intelligent Technology Co ltd
Original Assignee
Shaoxing Maimang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaoxing Maimang Intelligent Technology Co ltd filed Critical Shaoxing Maimang Intelligent Technology Co ltd
Priority to CN202310720238.0A
Publication of CN116597497A
Legal status: Pending

Classifications

    • G06V 40/161 Human faces: Detection; Localisation; Normalisation
    • G06N 3/0464 Neural networks: Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Neural networks: Learning methods
    • G06V 10/764 Image or video recognition using pattern recognition or machine learning: using classification, e.g. of video objects
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Image or video recognition using pattern recognition or machine learning: using neural networks
    • G06V 40/168 Human faces: Feature extraction; Face representation
    • G06V 40/172 Human faces: Classification, e.g. identification
    • G06V 40/174 Human faces: Facial expression recognition
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a data acquisition and analysis method, and in particular to a data acquisition and analysis method for AI (artificial intelligence) recognition of facial expressions. The method comprises the following steps: S1: reading a face image acquired by a camera; S2: performing facial expression recognition on the face image by means of artificial intelligence, and classifying and counting the recognized expressions; S3: time-stamping each recognized expression, and recording the expression type and the expression intensity value corresponding to each timestamp; S4: analyzing the acquired data, i.e. ordering all acquired data in time sequence using the two parameters of timestamp and expression intensity value, and performing classified statistical analysis by expression type. The method opens new application directions for fields such as medical health, is highly practical and feasible, and is expected to be further developed and applied on the basis of existing, well-established techniques.

Description

Data acquisition and analysis method for AI (artificial intelligence) recognition of facial expressions
Technical Field
The invention relates to a data acquisition and analysis method, and in particular to a data acquisition and analysis method for AI (artificial intelligence) recognition of facial expressions.
Background
With the continuous development of artificial intelligence, facial expression recognition has been widely applied in film, gaming, social software, security, medical health and other fields. Its potential applications within artificial intelligence, such as human-computer interaction, affective computing and emotion retrieval, have likewise attracted attention. At present, facial expression recognition is mainly accomplished with deep learning and machine learning techniques, among which deep learning methods such as convolutional neural networks (CNN) and recurrent neural networks (RNN) are the most widely used. For determining expression intensity, the methods currently in use are manual annotation, rule-based prediction and prediction with specific models. Although existing methods can recognize facial expressions accurately, they remain strongly limited in determining expression intensity: the prediction models currently adopted are unstable, show clear weaknesses in time-related intensity prediction, and lack research into time series, so the trend and regularity of expression intensity changes cannot be captured well.
Therefore, aiming at these problems of current facial expression recognition technology, the invention provides a method that predicts expression intensity using time series analysis. The method not only alleviates the limitations of traditional methods with respect to time series, but also, through analysis of expression intensity trends, provides an important reference for the future development of facial expression recognition technology and services, and has broad market prospects and application value.
Disclosure of Invention
Relying on artificial intelligence technology, the invention can recognize finer facial expressions and can accurately classify and timestamp them. Different expression types can also be analyzed to obtain more detailed information about the user's emotional state. The method can be applied to various scenarios that require facial expression recognition, such as automated driving and customer service robots.
The invention solves the technical problem as follows: a data acquisition and analysis method for AI recognition of facial expressions comprises the following steps:
S1: reading a face image acquired by a camera;
S2: performing facial expression recognition on the face image by means of artificial intelligence, and classifying and counting the recognized expressions;
S3: time-stamping each recognized expression, and recording the expression type and the expression intensity value corresponding to each timestamp;
S4: analyzing the acquired data, i.e. ordering all acquired data in time sequence using the two parameters of timestamp and expression intensity value, and performing classified statistical analysis by expression type.
Further, the process of reading the face image acquired by the camera is divided into:
S1: preparing the face recognition equipment; before face recognition, a device must be prepared for acquiring the face image, and the device should be able to read the face image directly and transmit the data for subsequent processing;
S2: starting the device to capture faces; once the device is ready, it is started to acquire faces; before starting, the device is configured appropriately to ensure that the quality of the acquired face images is as high as possible;
S3: reading the acquired face image; after the device has captured the face image, the image data must be transmitted to the data processing equipment; in this step the image is preprocessed to facilitate subsequent face recognition;
S4: performing face recognition; after the acquired face images have been transmitted to the data processing equipment, face recognition is carried out using artificial intelligence; in this step a suitable face recognition algorithm is required to recognize the image and extract facial features;
S5: recognizing the facial expression; facial expression recognition is performed alongside face recognition, and the facial expressions are classified and counted with a suitable algorithm.
Further, the face recognition data correction method is as follows: when the face is partially occluded or the camera cannot capture a frontal view, the data processing method is adjusted in three ways. Data preprocessing: when a face image is input, preprocessing it improves the accuracy of the algorithm under occlusion or non-frontal conditions. Introducing preconditions: preconditions set for the specific application scenario avoid, to a certain extent, data noise caused by abnormal shooting and improve the accuracy of the algorithm. Enhancing algorithm robustness: training the algorithm improves its performance under different image quality, angle and illumination conditions.
Further, facial expression recognition of the face image is realized through the following steps:
S1: face detection: face detection is first performed on the input image; a Haar, HOG or CNN algorithm is adopted to locate the face region;
S2: keypoint localization: for the detected face region, key points are then located; regression-based, ensemble-based or visual attention methods are adopted to locate important facial positions such as the eyes, nose tip and mouth; this keypoint location information is used to calculate changes in facial expression;
S3: feature extraction: after keypoint localization, facial features are extracted with a modern deep learning algorithm such as a convolutional neural network; a high-quality representation of the facial feature image is obtained using a convolutional neural network or another deep learning network; for facial expression recognition, a CNN model is trained and used for pattern recognition and expression classification;
S4: expression classification: the facial features and a classifier are used to classify the different expressions; the classifier may be a feed-forward neural network, a support vector machine, a decision tree or a probabilistic method, and finally the classification and recognition of the expression are achieved.
The main steps of facial expression recognition of the face image therefore comprise face detection, keypoint localization, feature extraction and expression classification; with the continuous development and optimization of deep learning and artificial intelligence technology, more accurate and reliable facial expression recognition is achieved.
Further, the expressions are classified and counted as follows:
First, expression classification is performed; facial expression recognition is carried out on the face image by means of artificial intelligence, and once the face image has been recognized, the algorithm classifies the recognized expression. Expression statistics are then gathered: each recognized expression is time-stamped, and the corresponding expression type and expression intensity value are recorded; in this process all expression data are summarized and counted.
Data visualization follows immediately afterwards; the analysis results are presented visually as statistical charts, heat maps and line charts so that users can better understand and use them.
Further, the method of time-stamping the expressions mainly time-stamps the expression data collected each time, to facilitate subsequent statistics and analysis; the specific description is as follows:
S1, collecting expression data: face images are collected in real time or offline and facial expression recognition is performed; each time an expression image is recognized, the expression data are recorded;
S2, recording the timestamp: when the expression data are recorded, the timestamp is recorded as an important data item; the timestamp is recorded either as the current system time or in a relative form consisting of a starting time point and a time offset;
S3, storing the expression data: the recorded expression data are stored in a database or file system for subsequent analysis and application; when storing the expression data, an expression history table is created with the timestamp as the primary key, and each collected expression record is written into the table keyed by its timestamp;
S4, analyzing the expression data: a series of timestamps and the corresponding expression data are obtained by analyzing the historical expression records; on this basis the expression data are further counted and analyzed, and time series analysis and correlation analysis are also performed according to the timestamps.
Further, control of the expression intensity value corresponding to each timestamp is realized as follows:
S1, data calibration: when facial expression recognition is performed, the expression intensity values must be calibrated; this requires manual work or professional tools to score and label the expression images so that the algorithm can learn and be trained; the intensities of the different expression types are labeled with a standardized graded scoring system; the calibrated data serve as the basis for algorithm training and testing;
S2, input parameter setting: the expression intensity corresponding to each timestamp is controlled by adjusting the model's input parameters;
S3, output control: for each timestamp, a range is set for the output expression intensity value, for subsequent data analysis and processing.
Further, the analysis of the collected data is performed according to the following steps:
S1, data processing: the collected data are processed, mainly by cleaning, filtering and de-duplication operations; these operations ensure the accuracy and integrity of the data;
S2, data classification: the acquired data are classified according to different requirements, for example by time period, by person or by expression type;
S3, data statistics: after classification, the data are analyzed by computing statistical indicators; the main indicators are the mean, median, variance, minimum and maximum; in the field of facial expression recognition, the distribution of expression intensity within each time period, the frequency of each expression and the trend of expressions across time periods are counted;
S4, data visualization: after the statistical indicators have been computed, the data are presented visually;
S5, data analysis: after visualization, further data analysis is performed using machine learning or statistical methods.
The classification of faces of different ages in step S2 adopts the following method:
S1, dividing age groups: the data are first divided by age group;
S2, age prediction: samples without an annotated age are predicted with a face age prediction model; the model is built with a deep learning method and yields a predicted age bracket from the face image;
S3, customizing classifiers: face recognition classifiers are customized per age group for the divided data sets;
S4, data set division: for the data sets of the different age groups, a training set and a test set are used; the exact split ratio is adjusted according to the amount of data and the requirements of the algorithm.
Further, the timestamp and the expression intensity value are analyzed through the following steps:
S1, time series analysis: because the timestamp records the acquisition time of each expression intensity value, the acquired data are arranged in time order and time series analysis is performed; this analysis helps to compute the trend, periodicity and tendency of the expression intensity values over time;
S2, expression intensity distribution: from the distribution of expression intensity values, the frequency and intensity of the different expressions are computed, as well as the distribution of intensity values within a given time span;
S3, correlation analysis: the timestamp and the expression intensity value are used as variables for correlation analysis; by computing the correlation between variables, the law governing how expression intensity changes over time is obtained;
S4, audience analysis: analyzing the timestamp and the expression intensity value together as two parameters makes it possible to better understand the emotions and attitudes of the audience.
Further, the identity recognition applied to the acquired face images uses deep learning and convolutional neural network techniques; the different expression types include, but are not limited to, happiness, anger, sadness, joy and surprise; the expression intensity values corresponding to each timestamp cover expressions including, but not limited to, smiling and frowning; reading the face images acquired by the camera allows facial expressions to be recognized and tracked in real time, so that human emotional states are captured accurately, and the recorded expression data can be used for data mining to provide useful suggestions and references for optimizing the user experience; the application scenarios of the data acquisition and analysis method for AI recognition of facial expressions include, but are not limited to, mobile phone apps, smart homes, automated driving and customer service robots.
The invention provides a method for predicting expression intensity using time series analysis, which has the following beneficial effects:
1. Improved accuracy of expression intensity prediction: traditional expression intensity prediction methods are limited with respect to time series and cannot fully mine the underlying rules and trends in the intensity data. Predicting expression intensity with time series analysis makes it possible to predict the trend of intensity changes more accurately and thus improves prediction accuracy.
2. Enhanced reliability of facial expression recognition: applications of facial expression recognition require high-precision intensity prediction; because the method improves the accuracy of the intensity values, the reliability of the recognition technology is strengthened.
3. Better suited to practical application scenarios: the method can provide high-precision intensity prediction in scenarios with demanding real-time requirements, and is suitable for a wide range of real-time emotional interaction and analysis.
4. Technical support for affective computing, emotion retrieval and related fields: by grasping the rules governing changes in emotional intensity, the user's emotional state can be analyzed more thoroughly, providing further support for technologies in affective computing, emotion retrieval and similar fields.
5. New application directions for medical health and related fields: for example, in medicine the technique may be used to monitor a patient's emotional state in real time and to optimize the treatment plan by formulating a corresponding emotion recognition scheme.
In conclusion, the method provided by the invention is highly practical and feasible, and is expected to be further developed and applied on the basis of existing, well-established techniques.
Drawings
Fig. 1 is a flowchart of the main steps of the data acquisition and analysis method for AI recognition of facial expressions.
Fig. 2 is a flowchart of the sub-steps of reading the face image acquired by the camera.
Fig. 3 is a flowchart of the sub-steps of facial expression recognition of the face image.
Fig. 4 is a flowchart of the sub-steps of the method of time-stamping expressions.
Fig. 5 is a table of data from the recognition process of various expressions of a 50-year-old man.
Detailed Description
Embodiment: a data acquisition and analysis method for AI recognition of facial expressions comprises the following steps:
S1: reading a face image acquired by a camera;
S2: performing facial expression recognition on the face image by means of artificial intelligence, and classifying and counting the recognized expressions;
S3: time-stamping each recognized expression, and recording the expression type and the expression intensity value corresponding to each timestamp;
S4: analyzing the acquired data, i.e. ordering all acquired data in time sequence using the two parameters of timestamp and expression intensity value, and performing classified statistical analysis by expression type.
The process of reading the face image acquired by the camera is divided into the following steps:
S1: preparing the face recognition equipment; before face recognition, a device must be prepared to collect face images, such as an ordinary digital camera, a handheld smart device or a surveillance camera; the device should be able to read the face image directly and transmit the data for further processing;
S2: starting the device to capture faces; once the device is ready, it is started to acquire faces; before starting, the device is configured appropriately, for example by setting a suitable shooting distance, angle and lighting, to ensure that the quality of the acquired face images is as high as possible;
S3: reading the acquired face image;
after the device has captured the face image, the image data must be transmitted to the data processing equipment; in this step the image is preprocessed, for example by cropping, brightness adjustment and color processing, to facilitate subsequent face recognition;
S4: performing face recognition; after the acquired face images have been transmitted to the data processing equipment, face recognition is carried out using artificial intelligence; in this step a suitable face recognition algorithm, such as a convolutional neural network, is required to recognize the image and extract facial features;
S5: recognizing the facial expression; facial expression recognition is performed alongside face recognition, and the facial expressions are classified and counted with a suitable algorithm; in this step a deep-learning-based facial expression recognition method is employed, such as a convolutional neural network combined with a recurrent neural network.
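As a purely illustrative sketch of steps S1 to S3, the capture-and-preprocess stage could look as follows; OpenCV, the device index and the 224x224 target size are assumptions of this example, not requirements of the disclosure.

```python
# Minimal sketch of S1-S3: open a camera, grab one frame, and preprocess it for
# downstream face recognition. OpenCV is used purely as an example; the patent
# does not mandate a specific library. Device index and target size are assumed.
import cv2

def capture_and_preprocess(device_index: int = 0, target_size=(224, 224)):
    cap = cv2.VideoCapture(device_index)            # S2: start the capture device
    if not cap.isOpened():
        raise RuntimeError("camera could not be opened")
    ok, frame = cap.read()                          # S3: read one acquired frame
    cap.release()
    if not ok:
        raise RuntimeError("no frame was acquired")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # color processing
    gray = cv2.equalizeHist(gray)                   # brightness/contrast adjustment
    resized = cv2.resize(gray, target_size)         # cropping/scaling to a fixed size
    return resized

if __name__ == "__main__":
    image = capture_and_preprocess()
    print("preprocessed face image shape:", image.shape)
```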
The face recognition data correction method is as follows: when the face is partially occluded or the camera cannot capture a frontal view, the data processing method is adjusted in three ways:
data preprocessing: when a face image is input, preprocessing it improves the accuracy of the algorithm under occlusion or non-frontal conditions; for example, face recognition and pose estimation techniques are applied to further process the image, so that the position and orientation of the face in space are recognized and corrected;
introducing preconditions: preconditions are set for the specific application scenario, such as only recognizing frontal faces, or requiring that the acquired images be clear and unobstructed; this avoids, to a certain extent, data noise caused by abnormal shooting and improves the accuracy of the algorithm;
enhancing algorithm robustness: training improves the performance of the algorithm under different image quality, angle and illumination conditions; for example, the algorithm is trained on an expanded data set with greater diversity and scale of samples, or with added noise, so that it adapts better to different input conditions;
therefore, when the face is partially occluded or the camera cannot capture a frontal view, adjustments through data preprocessing, introduction of preconditions and reinforcement of algorithm robustness improve the accuracy and reliability of the algorithm.
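One common way to realize the robustness-enhancement step is training-time augmentation that simulates occlusion, pose and lighting changes; the torchvision transforms below are an assumed implementation choice, not part of the original disclosure.

```python
# Sketch of "enhance algorithm robustness": augment training images with rotation,
# lighting changes and random erasing so the recognizer tolerates partially hidden
# or non-frontal faces. torchvision is an assumed choice, not mandated by the text.
from torchvision import transforms

robustness_augmentation = transforms.Compose([
    transforms.RandomRotation(degrees=20),                  # simulate non-frontal poses
    transforms.ColorJitter(brightness=0.4, contrast=0.4),   # simulate lighting variation
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2)),     # simulate partial occlusion
])
# Applying this pipeline to each training image before feeding the CNN is one way to
# "train the algorithm under different image quality, angle and illumination conditions".
```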
Facial expression recognition of the face image is realized through the following steps:
Step 1, face detection: face detection is first performed on the input image; generally, a Haar, HOG or CNN algorithm is adopted to locate the face region;
Step 2, keypoint localization: for the detected face region, key points are then located; regression-based, ensemble-based, visual-attention or similar methods are adopted to locate important facial positions such as the eyes, nose tip and mouth; this keypoint location information is used to calculate changes in facial expression;
Step 3, feature extraction: after keypoint localization, facial features are extracted with a modern deep learning algorithm such as a convolutional neural network; a high-quality representation of the facial feature image is obtained using a convolutional neural network or another deep learning network; for facial expression recognition, a CNN model is generally trained and used for pattern recognition and expression classification;
Step 4, expression classification: the facial features and a classifier are used to classify the different expressions; the classifier may be a feed-forward neural network, a support vector machine, a decision tree, a probabilistic method or similar, and finally the classification and recognition of the expression are achieved;
the main steps of facial expression recognition of the face image therefore comprise face detection, keypoint localization, feature extraction and expression classification; with the continuous development and optimization of deep learning and artificial intelligence technology, more accurate and reliable facial expression recognition is achieved.
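The following sketch illustrates the four-stage pipeline in simplified form, using a Haar cascade for detection and a small PyTorch CNN for feature extraction and classification; keypoint localization is omitted for brevity, and the network architecture and label set are assumptions, not part of the disclosure.

```python
# Simplified sketch of detection -> (keypoints) -> CNN features -> classification.
# The Haar cascade and the tiny CNN are illustrative; the text only requires
# "a Haar, HOG or CNN algorithm" for detection and a CNN for feature extraction.
import cv2
import torch
import torch.nn as nn

EXPRESSIONS = ["happy", "angry", "sad", "surprised", "disgusted"]  # assumed label set

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def classify_expression(gray_image, model: ExpressionCNN):
    faces = detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)  # step 1
    results = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray_image[y:y + h, x:x + w], (224, 224))   # crop (keypoints omitted)
        tensor = torch.from_numpy(crop).float().unsqueeze(0).unsqueeze(0) / 255.0
        logits = model(tensor)                                        # step 3: CNN features
        probs = torch.softmax(logits, dim=1)[0]                       # step 4: classification
        results.append((EXPRESSIONS[int(probs.argmax())], float(probs.max())))
    return results
```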
The expressions are classified and counted as follows:
First, expression classification is performed; facial expression recognition is carried out on the face image by means of artificial intelligence; once the face image has been recognized, the algorithm classifies the recognized expression into several basic expressions such as happiness, anger, sadness, joy and surprise; deep learning and convolutional neural network algorithms are used for this recognition;
expression statistics are then gathered; each recognized expression is time-stamped, and the corresponding expression type and expression intensity value are recorded; in this process all expression data are summarized and counted; the data are analyzed with statistical measures such as the mean, variance and standard deviation; all acquired data are ordered in time sequence and classified statistical analysis is carried out by expression type;
data visualization follows immediately afterwards; to make the data easier to understand and use, the analysis results are presented visually as statistical charts, heat maps, line charts and the like, so that users can better understand and use them;
in summary, in this facial expression recognition, tracking and data acquisition and analysis method based on artificial intelligence, the classification and counting of expressions comprises expression classification, expression statistics and data visualization; this process expresses the distribution of and changes in facial expressions intuitively, provides more comprehensive emotional state data, and also provides a basis for subsequent data analysis.
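The statistics and visualization steps could, for example, be realized with pandas and matplotlib as sketched below; the sample records are illustrative only, not data from the disclosure.

```python
# Sketch of the statistics-and-visualization step: summarize recognized expressions by
# type and plot the counts. pandas/matplotlib are assumed tooling; the three records
# below are toy inputs used only to make the example runnable.
import pandas as pd
import matplotlib.pyplot as plt

records = pd.DataFrame([
    {"timestamp": "2023-06-01 09:00:01", "expression": "happy", "intensity": 0.82},
    {"timestamp": "2023-06-01 09:00:02", "expression": "happy", "intensity": 0.64},
    {"timestamp": "2023-06-01 09:00:03", "expression": "sad",   "intensity": 0.41},
])
records["timestamp"] = pd.to_datetime(records["timestamp"])

summary = records.groupby("expression")["intensity"].agg(["count", "mean", "std"])
print(summary)

summary["count"].plot(kind="bar", title="Expression frequency")  # simple statistical chart
plt.show()
```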
The method of time-stamping the expressions mainly time-stamps the expression data collected each time, to facilitate subsequent statistics and analysis; the specific description is as follows:
S1, collecting expression data: face images are collected in real time or offline and facial expression recognition is performed; each time an expression image is recognized, the algorithm records the expression data;
S2, recording the timestamp: when the expression data are recorded, the timestamp is recorded as an important data item; the timestamp is recorded either as the current system time or in a relative form consisting of a starting time point and a time offset;
S3, storing the expression data: the recorded expression data are stored in a database or file system for subsequent analysis and application; when storing the expression data, an expression history table is generally created with the timestamp as the primary key, and each collected expression record is written into the table keyed by its timestamp;
S4, analyzing the expression data: a series of timestamps and the corresponding expression data are obtained by analyzing the historical expression records; on this basis further statistics and analysis are carried out, such as the proportion of each expression type and the change in expression intensity over different time periods; time series analysis, correlation analysis and the like are also performed according to the timestamps;
the expression time-stamping method is therefore realized by associating each collected expression record with its acquisition timestamp and storing the record in a database or file system; analysis of the historical expression records yields more comprehensive and accurate expression statistics and distribution data, providing a foundation for subsequent analysis and application.
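A minimal sketch of S2 and S3 is given below, assuming SQLite as the storage backend (the disclosure only requires "a database or a file system"); the table and function names are illustrative.

```python
# Sketch of the timestamping/storage step: every recognized expression is written to an
# expression-history table keyed by its acquisition timestamp. sqlite3 is an assumed
# backend; the schema and names are illustrative.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("expression_history.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS expression_history (
           timestamp  TEXT PRIMARY KEY,  -- S3: the timestamp is the primary key
           expression TEXT NOT NULL,     -- recorded expression type
           intensity  REAL NOT NULL      -- recorded expression intensity value
       )"""
)

def record_expression(expression: str, intensity: float) -> None:
    """S1-S3: store one recognized expression together with its timestamp."""
    stamp = datetime.now(timezone.utc).isoformat()  # S2: absolute system-time variant
    conn.execute(
        "INSERT OR REPLACE INTO expression_history VALUES (?, ?, ?)",
        (stamp, expression, intensity),
    )
    conn.commit()

record_expression("happy", 0.82)
```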
Control of the expression intensity value corresponding to each timestamp is realized as follows:
S1, data calibration: when facial expression recognition is performed, the expression intensity values must be calibrated; this requires manual work or professional tools to score and label the expression images so that the algorithm can learn and be trained; generally, a standardized graded scoring system (for example a 1-5 or 1-10 scale) is adopted to score and label the intensities of the different expression types; the calibrated data serve as the basis for algorithm training and testing;
S2, input parameter setting: the expression intensity corresponding to each timestamp is controlled by adjusting the model's input parameters; for example, a threshold is set for the recognition model, and when the expression intensity value is below the threshold the recognition result is ignored; when the expression intensity value is greater than or equal to the threshold, the intensity value for that timestamp is assigned the predicted value;
S3, output control: for each timestamp, a range is set for the output expression intensity value; for example, the intensity values are kept within a fixed interval (e.g. 0-1) for subsequent data analysis and processing.
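One possible realization of the threshold and range control described in S2 and S3 is sketched below; the 0.2 threshold and the function name are illustrative assumptions, and the 0-1 interval follows the example in the text.

```python
# Sketch of the intensity-control step: discard predictions below a threshold and clamp
# the reported value to a fixed interval. Threshold value and function name are assumed.
def control_intensity(raw_intensity: float, threshold: float = 0.2):
    """Return a clamped intensity in [0, 1], or None if below the recognition threshold."""
    if raw_intensity < threshold:               # S2: ignore weak, unreliable recognitions
        return None
    return max(0.0, min(1.0, raw_intensity))    # S3: keep the output within 0-1
```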
Control of the expression intensity value for each timestamp is thus achieved mainly through data calibration, model input parameter setting and output control, which constrain the range and precision of the intensity values; in this way the intensity value corresponding to each timestamp is controlled more accurately and the quality and accuracy of the data are ensured.
The analysis of the collected data is generally performed according to the following steps:
S1, data processing: the collected data are processed, mainly by cleaning, filtering and de-duplication operations; these operations ensure the accuracy and integrity of the data;
S2, data classification: the acquired data are classified according to different requirements, for example by time period, by person or by expression type;
S3, data statistics: after classification, the algorithm analyzes the data by computing statistical indicators; the main indicators are the mean, median, variance, minimum and maximum; in the field of facial expression recognition, the algorithm generally counts the distribution of expression intensity within each time period, the frequency of each expression and the trend of expressions across time periods;
S4, data visualization: after the statistical indicators have been computed, the data are presented visually, for example as histograms, line charts or scatter plots, to show the data distribution and patterns of change; in general, visualization helps users understand the data more intuitively;
S5, data analysis: after visualization, further data analysis is performed using machine learning or statistical methods such as clustering, regression and association rule mining; these methods yield more accurate and useful analysis results;
in summary, the analysis of the collected data mainly comprises data processing, data classification, data statistics, data visualization and data analysis; executing these steps yields a more comprehensive and accurate facial expression analysis result and provides a foundation for subsequent data applications.
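One way steps S1 to S3 could be realized with pandas is sketched below; the hourly grouping frequency is an assumption, and `records` is assumed to be a DataFrame with datetime `timestamp`, `expression` and `intensity` columns, as built during collection.

```python
# Sketch of S1-S3 of the analysis stage: clean and de-duplicate the collected records,
# then compute per-time-period intensity statistics for each expression type.
import pandas as pd

def analyse(records: pd.DataFrame) -> pd.DataFrame:
    cleaned = records.dropna().drop_duplicates()        # S1: cleaning, filtering, de-duplication
    cleaned = cleaned.set_index("timestamp").sort_index()
    # S2/S3: classify by expression type and hourly period, then compute the
    # statistical indicators named above (mean, median, variance, minimum, maximum).
    return (cleaned
            .groupby(["expression", pd.Grouper(freq="1h")])["intensity"]
            .agg(["mean", "median", "var", "min", "max"]))
```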
The classification of faces of different ages in step S2 adopts the following method:
S1, dividing age groups: the data are first divided by age group; the brackets generally used are: infant (0-2 years), child (3-12 years), teenager (13-18 years), adult (19-40 years), middle-aged (40-60 years) and elderly (over 60 years);
S2, age prediction: samples without an annotated age are predicted with a face age prediction model; the model is generally built with a deep learning method and yields a predicted age bracket from the face image;
S3, customizing classifiers: face recognition classifiers are customized per age group for the divided data sets; because the face changes with age, the facial features of different age groups differ; to improve recognition accuracy, customized classifiers are used for the data of each age group;
S4, data set division: for the data sets of the different age groups, a training set and a test set are used; the exact split ratio is adjusted according to the amount of data and the requirements of the algorithm;
in summary, classifying faces of different age groups means dividing the data by age bracket, assigning the brackets with an age prediction model or manual annotation, customizing face recognition classifiers for each bracket, and dividing the data sets, thereby improving the accuracy and precision of the classifiers.
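A sketch of how the age-group handling could be implemented with pandas and scikit-learn follows; the helper `predict_age` stands for an assumed face-age model, and the bin edges and 80/20 split ratio are illustrative approximations of the brackets listed above.

```python
# Sketch of S1, S2 and S4: bin samples by (predicted) age and split each bin
# into train/test sets. Bin edges, labels and split ratio are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

AGE_BINS = [0, 3, 13, 19, 41, 61, 200]
AGE_LABELS = ["infant", "child", "teenager", "adult", "middle-aged", "elderly"]

def split_by_age(samples: pd.DataFrame, predict_age):
    # S2: predict the age for samples that were not manually annotated
    missing = samples["age"].isna()
    samples.loc[missing, "age"] = samples.loc[missing, "image"].map(predict_age)
    # S1: divide the data into the age groups listed above
    samples["age_group"] = pd.cut(samples["age"], bins=AGE_BINS,
                                  labels=AGE_LABELS, right=False)
    # S4: per-group train/test split (80/20 is an assumed ratio)
    return {
        group: train_test_split(frame, test_size=0.2, random_state=0)
        for group, frame in samples.groupby("age_group", observed=True)
    }
```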
The timestamp and the expression intensity value are analyzed through the following steps:
S1, time series analysis: because the timestamp records the acquisition time of each expression intensity value, the acquired data are arranged in time order and time series analysis is performed; this analysis helps to study the trend, periodicity and tendency of the expression intensity values over time, such as seasonal changes in expression;
S2, expression intensity distribution: from the distribution of expression intensity values, the frequency and intensity of the different expressions are studied, for example by counting the probability of occurrence of each expression and the distribution of intensity values within a given time span;
S3, correlation analysis: the timestamp and the expression intensity value are used as variables for correlation analysis; by studying the correlation between variables, the law governing how expression intensity changes over time is examined; for example, correlation analysis reveals the influence of seasonal cycles, and of factors such as gender, age and environment, on changes in emotional intensity;
S4, audience research: analyzing the timestamp and the expression intensity value together as two parameters helps the algorithm better understand the emotions and attitudes of the audience; for example, combining user profile analysis with the trends of timestamps and intensity values gives a more accurate picture of user needs, engagement and the effect of marketing strategies;
in general, deeper analysis of the expression data using the two parameters of timestamp and expression intensity value reveals the trend of expression changes, the interaction emotions and their relation to the environment and the user group, enabling more intelligently targeted services.
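An illustrative pandas sketch of S1 to S3 is given below; the hourly resampling frequency and the hour-of-day variable used for the correlation are assumptions of this example.

```python
# Sketch of the timestamp/intensity analysis: order records by time, resample to see the
# trend, summarize the intensity distribution, and correlate intensity with time of day.
import pandas as pd

def timeline_analysis(records: pd.DataFrame):
    ts = records.set_index("timestamp").sort_index()        # S1: order by acquisition time
    trend = ts["intensity"].resample("1h").mean()           # S1: hourly trend of intensity
    distribution = ts.groupby("expression")["intensity"].describe()   # S2: distribution
    hour_of_day = pd.Series(ts.index.hour, index=ts.index)  # assumed proxy time variable
    corr = ts["intensity"].corr(hour_of_day)                # S3: time-vs-intensity correlation
    return trend, distribution, corr
```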
The identity recognition applied to the acquired face images uses deep learning and convolutional neural network techniques; the different expression types include, but are not limited to, happiness, anger, sadness, joy and surprise; the expression intensity values corresponding to each timestamp cover expressions including, but not limited to, smiling and frowning; reading the face images acquired by the camera allows facial expressions to be recognized and tracked in real time, so that human emotional states are captured accurately, and the recorded expression data can be used for data mining to provide useful suggestions and references for optimizing the user experience; the application scenarios of the data acquisition and analysis method for AI recognition of facial expressions include, but are not limited to, mobile phone apps, smart homes, automated driving and customer service robots.
Example 1: the following describes the process of recognizing various expressions of a 50-year-old man with the method provided by the invention.
Assume the algorithm has a set of intensity data for five different expressions of the man, namely Happy, Sad, Surprised, Disgusted and Angry; the specific data can be seen in the table of Fig. 5.
The recognition process in the invention is as follows:
1. Feature extraction: the time series data are processed to extract the temporal information in each expression intensity sequence, the intensity change patterns of the various expressions, and so on.
2. Time series analysis: a time series analysis method is adopted to extract the rules of expression intensity change. For example, the algorithm can compute the trend of the expression intensity in combination with the Emotion Time Series (ETS) in the data set, and thereby derive the trend of the man's expressions along the time dimension over the years covered (2019 to date).
3. Building a prediction model: the expression intensity is predicted with a time series analysis method and a prediction model is built. The prediction model may be trained and validated by regression analysis, ARIMA (autoregressive integrated moving average) or LSTM (long short-term memory) models.
4. Expression intensity prediction: the prediction model is used to predict expression intensity. For example, the algorithm can predict each expression intensity of the man for the coming day, week and month from the data, and thus obtain a prediction result.
Through this recognition process, the rules of intensity change of the man's various expressions at different ages and in different situations are faithfully reflected.
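As an illustrative sketch of steps 2 to 4, assuming a daily intensity series is available for one expression, an ARIMA model from statsmodels (one of the model families named above) could be fitted and used to forecast; the model order and forecast horizon are assumptions.

```python
# Sketch of steps 2-4: fit a time-series model to one expression's intensity sequence
# and forecast the coming days. The (1, 1, 1) order and the 7-day horizon are assumed.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def forecast_intensity(daily_intensity: pd.Series, horizon_days: int = 7) -> pd.Series:
    model = ARIMA(daily_intensity, order=(1, 1, 1))   # step 3: build the prediction model
    fitted = model.fit()
    return fitted.forecast(steps=horizon_days)        # step 4: predict future intensity
```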
Example 2: the following describes the process of recognizing and analyzing various expressions of a 30-year-old woman wearing a mask with the method provided by the invention.
Because the woman is wearing a mask, traditional facial expression recognition has difficulty recognizing her expression accurately. The invention therefore predicts the trend of expression intensity using time series analysis, and makes inferences and predictions by observing the behavior of the target person when no facial expression is directly visible.
The recognition and analysis process in the invention is as follows:
S1, feature extraction: other movements of the woman wearing the mask, such as movements of the eyes, eyebrows and head, are observed; these features are extracted as substitutes for the expression, and the intensity change trend is recorded over time.
When the mask covers the face, the traditional criteria for judging facial expression cannot be applied effectively, and the expression must be predicted from the features of other parts of the face and body. When extracting features that represent a particular expression, the following methods may be employed:
S1.1, features around the eyes and eye sockets: changes in the eyes are often an important component of expression, such as the narrowing of the eyes in a smile or tears in sadness. When a mask is worn, the algorithm can therefore pay attention to the shape of the woman's eyes, the height of her eyebrows, the size of her pupils, and so on. These features can be combined with cues from the key area covered by the lower half of the mask to make a comprehensive inference.
S1.2, changes in head pose: expression is often related to a particular head posture, such as lowering the head when troubled or raising it in surprise. Observing changes in head pose therefore helps in judging the expression.
S1.3, other body language features: the woman may also express emotion through body language such as gestures and limb movements. For example, her arms may jump or swing when she feels happy, and she may bend her body or tremble gently when she feels hurt.
It should be noted that during prediction, substitute features related to the target expression intensity must be selected according to the specific situation of the data and the actual requirements, and an effective algorithm and model must be adopted to build the prediction model.
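The following sketch illustrates one way the substitute features of S1.1 and S1.2 could be computed; the 68-point landmark indices and the assumed landmark detector (e.g. dlib) are illustrative choices not specified in the disclosure, and the feature names are hypothetical.

```python
# Sketch of proxy features for a masked face: eye openness and eyebrow height derived
# from upper-face landmarks, plus head pitch as an auxiliary cue. `landmarks` is assumed
# to be a (68, 2) array from an external landmark detector (not part of the disclosure).
import numpy as np

def eye_aspect_ratio(eye_points: np.ndarray) -> float:
    """Ratio of eye height to width from six landmark points; eyes narrow when smiling."""
    vertical = (np.linalg.norm(eye_points[1] - eye_points[5])
                + np.linalg.norm(eye_points[2] - eye_points[4]))
    horizontal = np.linalg.norm(eye_points[0] - eye_points[3])
    return vertical / (2.0 * horizontal)

def masked_face_features(landmarks: np.ndarray, head_pitch_deg: float) -> dict:
    return {
        "left_eye_openness": eye_aspect_ratio(landmarks[36:42]),    # S1.1: eye shape
        "right_eye_openness": eye_aspect_ratio(landmarks[42:48]),
        # S1.1: eyebrow height relative to the eyes (image y grows downward, so the
        # value becomes more negative as the eyebrows are raised)
        "eyebrow_height": float(np.mean(landmarks[17:27, 1]) - np.mean(landmarks[36:48, 1])),
        "head_pitch_deg": head_pitch_deg,                           # S1.2: head pose cue
    }
```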
S2, time series analysis: a time series analysis method is adopted to extract the temporal information between the features and the trend of expression intensity change, so as to characterize the different expressions accurately.
The temporal information and intensity change trends between the extracted features can be obtained by the following methods:
S2.1, building a time series model: a time series model is built that describes, with time on the horizontal axis and expression intensity on the vertical axis, how the expression features change over time. For example, ARIMA is a commonly used time series model that can forecast the future trend of expression intensity from historical data.
S2.2, using a sliding time window: the trend of expression intensity is observed with a sliding time window; a moving window is applied to the time series data to observe increases or decreases in expression intensity over different time ranges. When setting up the sliding window, the window length and step can be chosen according to the actual situation to capture the intensity trend.
S2.3, using differencing and smoothing techniques: differencing and smoothing reduce the noise and uncertainty in the intensity data. Differencing removes random variation and smoothing removes noise, so the trend of expression intensity can be observed more accurately and a prediction model can be built.
Combining these methods yields more accurate and stable time series data, reveals the trend of expression intensity changes, and facilitates expression prediction and judgment.
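An illustrative pandas sketch of S2.2 and S2.3 follows; the window length of 10 samples is an assumption and, as noted above, should be tuned to the actual data.

```python
# Sketch of S2.2-S2.3: observe the intensity trend with a sliding window and reduce
# noise with differencing and smoothing. The window length is an assumed parameter.
import pandas as pd

def windowed_trend(intensity: pd.Series, window: int = 10) -> pd.DataFrame:
    smoothed = intensity.rolling(window=window).mean()     # S2.3: smoothing removes noise
    return pd.DataFrame({
        "smoothed": smoothed,
        "differenced": intensity.diff(),                   # S2.3: differencing removes drift
        "windowed_change": smoothed.diff(window),          # S2.2: change over the window
    })
```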
S3, building the prediction model: on the basis of the extracted feature data, a corresponding expression intensity prediction model is built, such as a regression, ARIMA or LSTM model.
The process of building the expression intensity prediction model mainly comprises the following steps:
S3.1, selecting an appropriate data set: data sets relevant to the specific intensity prediction task are acquired from the specific scenario and use case.
S3.2, data preprocessing: the acquired data are cleaned, denoised, normalized, smoothed and otherwise processed with suitable methods to ensure their quality and accuracy.
S3.3, feature engineering: features relevant to the target intensity prediction are selected, such as the time series data, eye features from the unmasked area and body posture features; feature extraction and feature selection yield a feature vector with predictive power.
S3.4, model selection: candidate prediction models suited to the current application scenario are screened and selected according to the properties of the feature vector, the prediction method, the data scale and other factors.
S3.5, model training: the selected model is trained on the preprocessed data set to improve its generalization ability and prediction accuracy.
S3.6, model evaluation and parameter tuning: the model is evaluated by cross-validation, error analysis and experimental comparison, and its parameters are adjusted and optimized according to the results.
S3.7, testing and prediction: the trained intensity prediction model is applied to the actual scenario, tested on and used to predict unknown data, and its prediction performance is evaluated.
It should be noted that building the prediction model is an iterative process that must be continuously improved and optimized to adapt to specific application requirements and changing scenarios.
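A minimal sketch of S3.4 to S3.6 is given below, assuming the engineered feature vectors and target intensities are already available as arrays; the gradient-boosting regressor and 5-fold cross-validation are assumed choices, and any of the model families named above could be substituted.

```python
# Sketch of S3.4-S3.6: choose a model, evaluate it with cross-validation, then train it
# on the engineered feature vectors. scikit-learn is an assumed implementation choice.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def train_intensity_model(features, targets):
    model = GradientBoostingRegressor(random_state=0)            # S3.4: model selection
    scores = cross_val_score(model, features, targets, cv=5,
                             scoring="neg_mean_absolute_error")  # S3.6: cross-validation
    model.fit(features, targets)                                 # S3.5: final training
    return model, -scores.mean()
```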
S4, expression intensity prediction: the built prediction model is used to predict the expression intensity of the woman wearing the mask. For example, the algorithm can predict each of her expression intensities over a future period (for example a day, a week or a month) from the data and obtain a prediction result.
Through this recognition and analysis process, even though the woman is wearing a mask, the trend of her expression intensity can be predicted accurately to reflect her emotional state, thereby achieving the effect of facial expression recognition and providing a practical reference for the future development of facial expression recognition technology.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions above merely illustrate the principles of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (10)

1. A data acquisition and analysis method for AI recognition of facial expressions, comprising the following steps:
S1: reading a face image acquired by a camera;
S2: performing facial expression recognition on the face image by means of artificial intelligence, and classifying and counting the recognized expressions;
S3: time-stamping each recognized expression, and recording the expression type and the expression intensity value corresponding to each timestamp;
S4: analyzing the acquired data; ordering all acquired data in time sequence using the two parameters of timestamp and expression intensity value, and performing classified statistical analysis by expression type.
2. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the process of reading the face image acquired by the camera is divided into:
S1: preparing the face recognition equipment; before face recognition, a device must be prepared for acquiring the face image, and the device should be able to read the face image directly and transmit the data for subsequent processing;
S2: starting the device to capture faces; once the device is ready, it is started to acquire faces; before starting, the device is configured appropriately to ensure that the quality of the acquired face images is high;
S3: reading the acquired face image; after the device has captured the face image, the image data must be transmitted to the data processing equipment; in this step the image is preprocessed to facilitate subsequent face recognition;
S4: performing face recognition; after the acquired face images have been transmitted to the data processing equipment, face recognition is carried out using artificial intelligence; in this step a suitable face recognition algorithm is required to recognize the image and extract facial features;
S5: recognizing the facial expression; facial expression recognition is performed alongside face recognition, and the facial expressions are classified and counted with a suitable algorithm.
3. The data acquisition and analysis method for AI recognition of facial expressions according to claim 2, wherein face recognition data correction is performed as follows: when the face is partially occluded or the camera cannot capture a frontal view, the data processing adopts three approaches for adjustment:
data preprocessing: when a face image is input, it is preprocessed to improve the accuracy of the algorithm under occlusion or non-frontal conditions;
introducing preconditions: preconditions are set for the specific application scenario, which avoids, to a certain extent, data noise caused by abnormal shooting and improves the accuracy of the algorithm;
enhancing algorithm robustness: the performance of the algorithm under different image quality, angle and illumination conditions is improved through training.
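The claim does not prescribe particular preprocessing operations; purely as an assumed example of what such a pass might contain, the sketch below crops the detected face, equalizes its histogram and normalizes its size. The chosen operations and the 224x224 target size are illustrative assumptions.

```python
# Illustrative preprocessing pass for partially occluded or off-angle faces:
# grayscale conversion, histogram equalization and a fixed-size resize.
import cv2

def preprocess_face(image_bgr, box):
    x, y, w, h = box                           # face box from the detector
    face = image_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)              # compensate uneven lighting
    return cv2.resize(gray, (224, 224))        # normalize scale for the model
```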
4. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the facial expression recognition is performed by:
S1: face detection: face detection is first performed on the input image; a Haar, HOG or CNN algorithm is used to locate the face region;
S2: key point localization: key point localization is then performed on the detected face region; important facial positions such as the eyes, nose tip and mouth are located with regression-based, ensemble-based or visual-attention methods; the key point location information is used to compute changes in facial expression;
S3: feature extraction: after key point localization, facial features are extracted with a modern deep learning algorithm; a high-quality representation of the facial feature image is obtained with a convolutional neural network or another deep learning network; a CNN model is trained for facial expression recognition and used for pattern recognition and expression classification;
S4: expression classification: the different expressions are classified from the facial features with a classifier; the classifier uses a forward neural network, a support vector machine, a decision tree or a probabilistic method, finally achieving expression classification and recognition;
thus, the main steps of facial expression recognition on the face image comprise face detection, key point localization, feature extraction and expression classification; through the continued development and optimization of deep learning and artificial intelligence, more accurate and reliable facial expression recognition is achieved.
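As a minimal sketch of the feature-extraction and classification stages only, a deliberately small CNN is shown below; the architecture, input size and seven-class output are assumptions made for illustration and are not the patented network.

```python
# A small CNN illustrating steps S3-S4 of claim 4: convolutional feature
# extraction followed by an expression classifier. Architecture is assumed.
import torch
import torch.nn as nn

class TinyExpressionNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(          # S3: convolutional feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # S4: expression classifier

    def forward(self, x):                       # x: (batch, 1, 224, 224) grayscale faces
        feats = self.features(x)
        return self.classifier(feats.flatten(1))

logits = TinyExpressionNet()(torch.randn(1, 1, 224, 224))
probabilities = torch.softmax(logits, dim=1)    # class scores over the expression types
```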
5. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the classification statistics of the expressions proceed as follows:
first, expression classification: facial expression recognition is performed on the face image by artificial intelligence; after the face image is recognized, the algorithm classifies the recognized expression;
then, expression statistics: each recognized expression is time-stamped, and the corresponding expression type and expression intensity value are recorded; in this process all expression data are summarized and counted;
finally, data visualization: the analysis results are presented by data visualization, intuitively in the form of statistical charts, heat maps and line charts.
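As a hedged illustration of the statistics and visualization step, the sketch below counts expressions and plots frequency and intensity with pandas and matplotlib; the column names and chart choices are assumptions for the example only.

```python
# Sketch of the claim-5 statistics and visualization step; data and column
# names are invented example values.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-06-16 09:00", "2023-06-16 09:05",
                                 "2023-06-16 09:10", "2023-06-16 09:15"]),
    "expression": ["happy", "happy", "neutral", "sad"],
    "intensity": [0.8, 0.6, 0.2, 0.4],
})

counts = df["expression"].value_counts()        # summary statistics per expression

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
counts.plot.bar(ax=ax1, title="Expression frequency")          # statistical chart
df.plot.line(x="timestamp", y="intensity", ax=ax2,
             title="Intensity over time")                      # line chart
plt.tight_layout()
plt.show()
```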
6. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the method for time-stamping expressions time-stamps the expression data acquired each time, specifically as follows:
S1, expression data acquisition: face images are acquired in real time or offline and facial expression recognition is performed; each time an expression image is recognized, the expression data is recorded;
S2, time stamp recording: when the expression data is recorded, the time stamp is recorded as an important data item; the time stamp is recorded using the current system time, or using a relative representation of a starting time point plus a time offset;
S3, expression data storage: the recorded expression data is stored in a database or file system for subsequent analysis and application; when storing, an expression history table is created with the time stamp as the primary key, and the expression data acquired each time is recorded in the table according to its time stamp;
S4, expression data analysis: a series of time stamps and the corresponding expression data are obtained by analyzing the historical expression records; on this basis the expression data is further counted and analyzed, and time series analysis and correlation analysis are also performed according to the time stamps.
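A minimal sketch of the storage step in S3, assuming SQLite and invented table and column names, could create a history table keyed by the time stamp and insert one row per recognized expression:

```python
# Minimal sketch of claim-6 storage: an expression history table keyed by
# timestamp in SQLite. Table and column names are illustrative assumptions.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("expression_history.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS expression_history (
        ts         TEXT PRIMARY KEY,   -- time stamp as primary key
        expression TEXT NOT NULL,
        intensity  REAL NOT NULL
    )
""")

def record_expression(expression: str, intensity: float) -> None:
    ts = datetime.now(timezone.utc).isoformat()     # S2: current system time
    conn.execute("INSERT OR REPLACE INTO expression_history VALUES (?, ?, ?)",
                 (ts, expression, intensity))
    conn.commit()

record_expression("happy", 0.73)
```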
7. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the expression intensity value corresponding to each time stamp is controlled as follows:
S1, data calibration: when facial expression recognition is performed, the expression intensity value must be calibrated; this requires manual annotators or professional tools to score and label the expression images so that the algorithm can learn and train; the labeling uses a standardized step scoring system according to the intensity of the different expression types; the calibrated data serves as the basis for algorithm training and testing;
S2, input parameter setting: the expression intensity corresponding to each time stamp is controlled by adjusting the model's input parameters;
S3, output control: for each time stamp, a range of output expression intensity values is set for subsequent data analysis and processing.
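As a hedged example of the output-control step S3 only, a raw model score could be clamped to an allowed range and snapped to a standardized step scale; the 0-5 range and the half-point steps below are assumptions, since the claim does not fix the scale.

```python
# Sketch of claim-7 output control: scores are clamped to an allowed range and
# snapped to fixed steps. The 0-5 range and 0.5-point steps are assumptions.
def to_intensity_level(raw_score: float,
                       lower: float = 0.0,
                       upper: float = 5.0) -> float:
    """Clamp a model score to [lower, upper] and snap it to 0.5-point steps."""
    clamped = max(lower, min(upper, raw_score))
    return round(clamped * 2) / 2               # standardized step scoring

print(to_intensity_level(3.27))   # -> 3.5
print(to_intensity_level(7.80))   # -> 5.0 (out-of-range outputs are capped)
```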
8. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the analysis of the acquired data proceeds in the following steps:
S1, data processing: the acquired data is processed, mainly by cleaning, filtering and de-duplication; these operations ensure the accuracy and integrity of the data;
S2, data classification: the acquired data is classified according to different requirements; for example, the data is classified by time period, by person or by expression type;
S3, data statistics: after classification, the data is analyzed by computing statistical indicators; the main indicators include the mean, median, variance, minimum and maximum; in the field of facial expression recognition, the distribution of expression intensity in each time period, the occurrence frequency of each expression and the trend of expressions across different time periods are counted;
S4, data visualization: after the statistical indicators are computed, the data is presented by data visualization;
S5, data analysis: after visualization, further data analysis is performed using machine learning or statistical methods;
the classification of faces of different ages in step S2 adopts the following method:
S1, age group division: the data is first divided into different age groups;
S2, age prediction: samples without annotated ages are predicted with a face age prediction model; the model is built with deep learning and yields a predicted age group by analyzing the face image;
S3, classifier customization: face recognition classifiers are customized for the data sets of the divided age groups;
S4, data set division: the data sets of the different age groups are each divided into a training set and a test set; the specific split ratio is adjusted according to the data volume and the algorithm requirements.
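As a non-authoritative sketch of the age handling above, samples could be bucketed into age groups and each group given its own train/test split; the group boundaries, the 80/20 ratio and the use of scikit-learn are assumptions introduced for illustration.

```python
# Illustrative sketch of claim-8 age handling: bucket samples by age group and
# split each group into training and test sets. Boundaries and ratio assumed.
from sklearn.model_selection import train_test_split

def age_group(age: int) -> str:
    if age < 18:   return "minor"
    if age < 40:   return "young_adult"
    if age < 65:   return "middle_aged"
    return "senior"

samples = [{"image": f"face_{i}.jpg", "age": a}
           for i, a in enumerate([12, 25, 31, 44, 52, 67, 70, 29, 38, 61])]

groups = {}
for s in samples:                               # S1/S2: divide data by age group
    groups.setdefault(age_group(s["age"]), []).append(s)

splits = {g: train_test_split(rows, test_size=0.2, random_state=0)   # S4: per-group split
          for g, rows in groups.items() if len(rows) >= 2}
for g, (train, test) in splits.items():
    print(g, "train:", len(train), "test:", len(test))
```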
9. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the time stamps and the expression intensity values are analyzed as follows:
S1, time series analysis: because the time stamp records the acquisition time of each expression intensity value, the acquired data is arranged in chronological order and then subjected to time series analysis; this analysis helps to determine the trend, periodicity and variation of the expression intensity values over time;
S2, expression intensity distribution: the frequency and intensity of the different expressions are computed from the distribution of the expression intensity values, and the distribution of expression intensity values within a given period is computed;
S3, correlation analysis: the time stamp and the expression intensity value are taken as variables for correlation analysis; by computing the correlation between the variables, the law by which the expression intensity changes over time is obtained;
S4, audience analysis: analyzing the time stamp and the expression intensity value as two parameters gives better insight into the emotion and attitude of the audience.
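A minimal sketch of steps S1-S3, assuming hourly aggregation and a Pearson correlation measure (the claim names the analyses but not these specifics), is given below.

```python
# Hedged sketch of claim-9 S1-S3: chronological ordering, hourly mean intensity,
# and a Pearson correlation between elapsed time and intensity. Example data.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-06-16 09:05", "2023-06-16 09:40",
                                 "2023-06-16 10:10", "2023-06-16 11:20"]),
    "intensity": [0.30, 0.45, 0.55, 0.70],
}).sort_values("timestamp")                     # S1: time-ordered data

hourly_mean = df.set_index("timestamp")["intensity"].resample("1H").mean()  # trend

elapsed = (df["timestamp"] - df["timestamp"].iloc[0]).dt.total_seconds()
corr = np.corrcoef(elapsed, df["intensity"])[0, 1]    # S3: time vs. intensity
print(hourly_mean)
print("correlation with elapsed time:", round(corr, 3))
```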
10. The data acquisition and analysis method for AI recognition of facial expressions according to claim 1, wherein the acquired face images are recognized with deep learning and convolutional neural network techniques; the different expression types include, but are not limited to, happiness, anger, sadness, joy and surprise; the expressions whose intensity value is recorded for each time stamp include, but are not limited to, smiling and frowning; reading the face image acquired by the camera allows facial expressions to be recognized and tracked in real time, so that human emotional states are captured accurately, and the recorded expression data can be used for data mining, providing useful suggestions and references for optimizing user experience; the application scenarios of the data acquisition and analysis method for AI recognition of facial expressions include, but are not limited to, mobile phone apps, smart homes, autonomous driving and customer service robots.
CN202310720238.0A 2023-06-16 2023-06-16 Data acquisition and analysis method for AI (advanced technology attachment) recognition of facial expressions Pending CN116597497A (en)

Publications (1)

Publication Number Publication Date
CN116597497A (en) 2023-08-15

Family

ID=87590114

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination